Introduction to Computing, Modeling, and Visualization: first lectures of Stat 221
I am excited to be teaching Statistics 221 this semester, not only because it connects all aspects of a rigorous applied data analysis process, but also because it “runs along the edge of current statistics”. Together with the students, we explore both the questions to which Statistics as a science has well-established answers and the now-common shades of grey, the areas of current statistical research. Here are the first several lectures of the class:
- Lecture 1, Course Introduction
- Lecture 2, Introduction to Visualization, Modeling, and Computing (VMC)
- Lecture 3, Intro VMC – Modeling and Computing
- Lecture 4, Guest Lecture by Rachel Schutt, Introduction to Data Science
- Lecture 5, A More Rigorous Look at Visualization
- Lecture 6, Statistical Models and Likelihood
- Lecture 7, Likelihood Principle, MLE Foundations, Odyssey
Today, in the lecture on the likelihood principle and MLE theory, we briefly covered likelihood-based inference and highlighted its connection to modern data-intensive, custom-model-heavy analysis problems. We saw how, one by one, MLE principles fall victim to overparametrization, non-identifiability, and data interdependence. We also saw how beautifully simple and rich the likelihood principle is, and started thinking about how to use it more fully and improve on it.
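As a small illustration of likelihood-based inference (a hypothetical minimal sketch, not drawn from the lecture materials), the MLE of a Bernoulli success probability can be found by numerically maximizing the log-likelihood over a grid and compared against its closed-form value, the sample proportion:

```python
import math

def bernoulli_log_likelihood(p, data):
    """Log-likelihood of i.i.d. Bernoulli(p) observations (0/1 values)."""
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

# Hypothetical sample of 0/1 observations (7 successes out of 10).
data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

# Crude numerical maximization over a grid of candidate p values,
# avoiding the endpoints 0 and 1 where the log-likelihood diverges.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: bernoulli_log_likelihood(p, data))

# The closed-form MLE for this model is the sample proportion.
print(p_hat)                  # grid maximizer
print(sum(data) / len(data))  # sample proportion
```

In well-behaved low-dimensional models like this one, the numerical maximizer agrees with the analytical answer; the lecture's point is that this tidy picture breaks down once models become overparametrized or non-identifiable.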
In my opinion, it is extremely useful to rigorously explore and question the current methods of Statistical Inference, Decision Analysis, and other areas of Statistics. Only by knowing what caveats the models and computing methods can create can a data analyst discover the real meaning and structure behind the masses of Big Data.