some terminology 2: descriptive statistics and inferential statistics

– descriptive statistics – summarize your current dataset with charts and tables, but do not attempt to draw conclusions about the population from which the sample was taken

– inferential statistics – draw conclusions about a population that extends beyond your dataset, by testing hypotheses based on your sample with ANOVA, t-tests, chi-squared tests, confidence intervals, regression, etc.
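A minimal sketch of the distinction (the two samples, their sizes, and the effect size below are all invented for illustration): the first half only summarizes the data at hand, while the second half uses a two-sample t-test and a rough 95% confidence interval to reason about the populations the samples came from.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# two hypothetical samples, e.g. response times under treatments A and B
a = rng.normal(loc=10.0, scale=2.0, size=50)
b = rng.normal(loc=11.0, scale=2.0, size=50)

# descriptive statistics: summarize the data at hand, no claims beyond it
print("A: mean=%.2f sd=%.2f" % (a.mean(), a.std(ddof=1)))
print("B: mean=%.2f sd=%.2f" % (b.mean(), b.std(ddof=1)))

# inferential statistics: use the samples to draw a conclusion about the
# populations they were drawn from (two-sample t-test)
t, p = stats.ttest_ind(a, b)
print("t=%.2f, p=%.4f" % (t, p))

# rough 95% confidence interval for the difference in population means
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
print("95%% CI for mean difference: (%.2f, %.2f)" % (diff - 1.96 * se, diff + 1.96 * se))
```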

analytical vs. numerical approaches to problems

– analytical solutions can be obtained exactly with pencil and paper, while
– numerical solutions cannot be obtained exactly in finite time and typically cannot be worked out with pencil and paper.
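A toy contrast (the equation and the tolerance are chosen arbitrarily): x² − 2 = 0 has the exact pencil-and-paper solution x = √2, while Newton's method only ever delivers an approximation to within a chosen tolerance.

```python
import math

# analytical: the root of x**2 - 2 = 0 follows exactly from algebra: x = sqrt(2)
analytical = math.sqrt(2.0)

# numerical: Newton's method approximates the same root by iterating
# x_{n+1} = x_n - f(x_n)/f'(x_n); it only reaches a tolerance, never exactness
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

numerical = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)

print(analytical, numerical, abs(analytical - numerical))
```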

This distinction, however, can blur. There are increasingly many theorems and equations that can only be solved using a computer; the computer does not make any approximations, it simply carries out more steps than any human could ever hope to do without error. This is the realm of “symbolic computation” and its cousin “automatic theorem proving.” There is substantial debate as to the validity of these solutions: checking them is difficult, and one cannot always be sure the source code is error-free. Some argue that computer-assisted proofs should not be accepted.

some terminology: prediction vs. estimation

“estimation” and “prediction” are indeed sometimes used interchangeably in non-technical writing, and they seem to function similarly, but there is a sharp distinction between them in the standard model of a statistical problem:

– an estimator uses the data to guess at a parameter (experimental), while

– a predictor uses the data to guess at some random value (observational) that is not part of the dataset.
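A small sketch in the regression setting (the data-generating model, its coefficients, and the new input x_new are all hypothetical): the least-squares fit estimates the fixed but unknown intercept and slope, while plugging in a new input predicts a random response that is not in the dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic data from a hypothetical linear model y = 2 + 3x + noise
x = rng.uniform(0, 10, size=100)
y = 2.0 + 3.0 * x + rng.normal(scale=1.5, size=100)

# estimation: use the data to guess at the fixed, unknown parameters
# (intercept and slope) via ordinary least squares
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated intercept, slope:", beta_hat)

# prediction: use the fitted model to guess at a random quantity that is
# not part of the dataset -- the response at a new input x_new
x_new = 4.2
y_new_pred = beta_hat[0] + beta_hat[1] * x_new
print("predicted y at x=%.1f: %.2f" % (x_new, y_new_pred))
```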

Data Reduction Methods – Statistics: MDS vs. PCA

Multi-dimensional scaling (MDS) is a well-known statistical method for mapping pairwise relationships to coordinates. The coordinates that MDS generates are an optimal linear fit to the given dissimilarities between points, in a least-squares sense, assuming the distance used is metric.
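A compact sketch of classical (metric) MDS in this least-squares sense (the toy points and the target dimensionality are invented): square the distances, double-center them, and keep the top eigenvectors scaled by the square roots of their eigenvalues.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) MDS: embed points so Euclidean distances
    approximate the given dissimilarity matrix D in a least-squares sense."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]   # largest k eigenvalues
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * scale        # n x k coordinates

# toy check: positions of four points on a line are recovered (up to
# translation and reflection) from their pairwise distance matrix
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(pts - pts.T)
print(classical_mds(D, k=1))
```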

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components – based on the eigenvectors of the covariance matrix.
– Morrison, A., Ross, G., Chalmers, M. “Fast Multidimensional Scaling through Sampling, Springs, and Interpolation,” Information Visualization 2(1), pp. 68-77, March 2003.
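And a minimal PCA sketch along the lines of the description above (the synthetic data and its covariance are chosen purely for illustration): center the data, take the eigenvectors of the covariance matrix, and project onto them, ordered by explained variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic, correlated 3-D data (parameters chosen only for illustration)
X = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[3, 2, 0], [2, 2, 0], [0, 0, 0.5]],
                            size=200)

# PCA: eigenvectors of the covariance matrix give an orthogonal basis of
# linearly uncorrelated directions, ordered by explained variance
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]   # principal axes (columns)
scores = Xc @ components         # data expressed in the new basis

print("explained variance:", eigvals[order])
print("first two principal components:\n", components[:, :2])
```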

What is #orms in the analytics age? What to say about “pure math vs. applied math”, “statistics vs. computer science”? Do we need optimization?

– Mathematics (queueing theory)
– Statistics (measure theory) – replaceable
– Optimization (unconstrained – process control vs. constrained – goal programming) – fuzziness
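To make the unconstrained-versus-constrained split concrete, a hedged sketch using SciPy (the objective function, the profit goal of 12, and the resource limit of 10 are all made-up numbers); the constrained case is written as a tiny goal-programming LP that minimizes under-achievement of the goal.

```python
from scipy.optimize import minimize, linprog

# unconstrained optimization: just drive the objective downhill
res_u = minimize(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2, x0=[0.0, 0.0])
print("unconstrained minimizer:", res_u.x)

# constrained optimization as a small goal-programming LP:
# variables [x1, x2, d_minus, d_plus]; try to reach a profit goal of 12
# (2*x1 + 3*x2 + d_minus - d_plus = 12) subject to x1 + x2 <= 10,
# minimizing the under-achievement d_minus
c = [0, 0, 1, 0]
A_eq, b_eq = [[2, 3, 1, -1]], [12]
A_ub, b_ub = [[1, 1, 0, 0]], [10]
res_c = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("goal-programming solution (x1, x2, d-, d+):", res_c.x)
```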

Readings for ORMS/Analytics:

Ackoff, R.L. (1979). The future of operational research is past. The Journal of the Operational Research Society, 30, pp. 93–104.

Meisel, S., Mattfeld, D.C. (2007). Synergies of Data Mining and Operations Research. Proceedings of the 40th Hawaii International Conference on System Sciences.

Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired Magazine, 16.07.

Köksalan, M., Wallenius, J., Zionts, S. (2011). Multiple criteria decision making: From early history to the 21st century. World Scientific, Singapore.