A problem is said to be insensitive, or well-conditioned, if a given relative change in the
input data causes a reasonably commensurate relative change in the solution.
An algorithm is stable if the result it produces is relatively insensitive to perturbations resulting
from approximations made during the computation.
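A minimal sketch of the stability side of this distinction: evaluating f(x) = sqrt(x+1) - sqrt(x) for large x is a well-conditioned problem, but the obvious algorithm is unstable because it subtracts two nearly equal numbers. The function names and the sample value x = 1e15 are chosen for illustration.

```python
import math

def f_naive(x):
    # unstable: catastrophic cancellation for large x
    return math.sqrt(x + 1) - math.sqrt(x)

def f_stable(x):
    # algebraically equivalent form that avoids the subtraction
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(f_naive(x))   # loses most of its significant digits
print(f_stable(x))  # close to the true value, roughly 1.58e-8
```

Both formulas are exact in real arithmetic; only the second remains accurate in floating point, which is precisely what stability means here.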
“software development” includes aspects such as requirements engineering, development processes, software design, and documentation.
“programming” is the act of creating an implementation, as well as testing it.
– descriptive statistics – summarize your current dataset with summary charts and tables, but do not attempt to draw conclusions about the population from which the sample was taken
– inferential statistics – draw conclusions that reach beyond your dataset to the broader population from which the sample was taken, by testing hypotheses with tools such as ANOVA, t-tests, chi-squared tests, confidence intervals, regression, etc.
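The contrast can be sketched in a few lines of standard-library Python: the summary numbers describe only the sample, while the confidence interval makes a claim about the population. The data values are made up for illustration, and the interval uses the normal approximation (z = 1.96) as a simplifying assumption.

```python
import math
import statistics

# Hypothetical sample of 20 measurements
sample = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.7, 5.3, 5.1, 4.9,
          5.0, 5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7, 5.0, 5.1]

# Descriptive: summarize *this* dataset only
mean = statistics.mean(sample)
sd = statistics.stdev(sample)
print(f"sample mean = {mean:.3f}, sample sd = {sd:.3f}")

# Inferential: a 95% confidence interval for the *population* mean
half_width = 1.96 * sd / math.sqrt(len(sample))
print(f"95% CI: ({mean - half_width:.3f}, {mean + half_width:.3f})")
```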
– analytical solutions can be obtained exactly with pencil and paper,
– numerical solutions cannot be obtained exactly in finite time and typically cannot be found with pencil and paper.
These distinctions, however, can blur. There are increasingly many theorems and equations that can only be solved using a computer; the computer does no approximation, however — it simply carries out more steps than any human could ever hope to complete without error. This is the realm of “symbolic computation” and its cousin “automatic theorem proving.” There is substantial debate about the validity of these solutions: checking them is difficult, and one cannot always be sure the source code is error-free, so some argue that computer-assisted proofs should not be accepted.
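As a concrete instance of the analytical/numerical split: the equation x = cos(x) has no closed-form solution, but a numerical method such as bisection (used here as one illustrative choice) approximates its root to any desired accuracy.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid   # root lies in [lo, mid]
        else:
            lo = mid   # root lies in [mid, hi]
    return 0.5 * (lo + hi)

# Root of cos(x) - x = 0, i.e. the fixed point of cos
root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)  # ≈ 0.7390851332
```

No finite sequence of pencil-and-paper algebra yields this number exactly, yet the algorithm pins it down to a dozen digits in about forty iterations.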
“estimation” and “prediction” are indeed sometimes used interchangeably in non-technical writing, and they do function similarly, but there is a sharp distinction between them in the standard model of a statistical problem.
– an estimator uses data to guess at a parameter (experimental), while,
– a predictor uses the data to guess at some random value (observational) that is not part of the dataset.
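A small simulation can make the distinction concrete: with data drawn from Normal(mu, sigma), the sample mean serves as an estimator of the fixed parameter mu, while the same number used as a guess for the next, unobserved draw is a predictor of a random quantity. The parameters and sample sizes below are arbitrary choices for illustration; the point is that prediction error also absorbs the variance of the new draw itself.

```python
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 100, 1000

est_err, pred_err = [], []
for _ in range(trials):
    data = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(data)           # estimator of the parameter mu
    est_err.append((xbar - mu) ** 2)
    x_next = random.gauss(mu, sigma)       # new draw, not in the dataset
    pred_err.append((xbar - x_next) ** 2)  # xbar reused as a predictor

print(statistics.mean(est_err))   # ~ sigma^2 / n
print(statistics.mean(pred_err))  # ~ sigma^2 * (1 + 1/n), much larger
```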
Multi-dimensional scaling (MDS) is a well-known statistical method for mapping pairwise relationships to coordinates. The coordinates that MDS generates are an optimal linear fit to the given dissimilarities between points, in a least squares
sense, assuming the distance used is metric.
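A sketch of classical (metric) MDS, assuming Euclidean input distances: double-center the squared distance matrix and take the top eigenvectors. The four-point unit-square example is hypothetical; for truly Euclidean distances the configuration is recovered exactly, up to rotation and reflection.

```python
import numpy as np

def classical_mds(D, k=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)  # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:k]   # keep the k largest
    scale = np.sqrt(np.maximum(eigvals[idx], 0))
    return eigvecs[:, idx] * scale        # one row of coordinates per point

# Hypothetical example: four corners of a unit square
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D)
# Pairwise distances of the recovered configuration match D
print(np.allclose(np.linalg.norm(Y[:, None] - Y[None, :], axis=-1), D))
```

When the dissimilarities are not exactly Euclidean, the same procedure gives the least-squares-optimal linear fit described above.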
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components – based on the eigenvectors of the covariance matrix.
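The eigenvector recipe for PCA can be sketched directly with NumPy; the correlated synthetic data and the mixing matrix below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical correlated 2-D data
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.2, 0.5]])

Xc = X - X.mean(axis=0)               # center the observations
C = np.cov(Xc, rowvar=False)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]        # principal components, as columns
scores = Xc @ components              # projections onto the components

# The scores are (numerically) uncorrelated, with decreasing variance
print(np.cov(scores, rowvar=False).round(6))
```

The orthogonal transformation is exactly the change of basis into the covariance matrix's eigenvectors, which is why the transformed variables come out uncorrelated.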
– Morrison, A., Ross, G., Chalmers, M. “Fast Multidimensional Scaling through Sampling, Springs, and Interpolation,” Information Visualization 2(1), pp. 68-77, March 2003.
Mathematics (queueing theory)
Statistics (measure theory) – replaceable
Optimization (unconstrained – process control vs. constrained – goal programming) – fuzziness
Readings for ORMS/Analytics:
Ackoff, R.L. (1979). The future of operational research is past. The Journal of the Operational Research Society, 30, pp. 93–104.
Meisel S., Mattfeld D.C. (2007) Synergies of Data Mining and Operations Research. Proceedings of the 40th Hawaii International Conference on System Sciences.
Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. In Wired magazine, 16th July 2008. <http://www.wired.com/science/discoveries/magazine/16-07/pb_theory>
M. Köksalan, J. Wallenius, S. Zionts (2011). Multiple criteria decision making: From early history to the 21st century. World Scientific, Singapore.