9 Comments

In 1962, my father moved from UCLA to the University of Oregon. They had Sputnik money to expand their Statistics and Sociology departments, particularly the graduate and postgraduate programs. He was there for more than a decade. When he returned from sabbatical around 1973, he was told he could no longer give any grade lower than a C in his Statistical Methods course (which washed out more sociology students than any other single class). The Sputnik money had apparently run out.

He quit in protest, of course.

I tell a small group of people that being raised by statisticians was like being raised by very numerate wolves.

I had no idea what my father had really done for a living (aside from teaching) until I took a statistics refresher in grad school. It was my first classroom with wifi, and, being bored, I searched for something the professor mentioned in passing and found “Ecological Correlations and the Behavior of Individuals” by W. S. Robinson — my father. He demonstrated the ecological fallacy, although there were inconsequential math errors in the paper, and he never used the term “ecological fallacy”.


Another excellent resource, which focuses primarily on medical and biomedical research, is https://discourse.datamethods.org/


Agreed. It's an excellent resource I should make more time for.


I love 3Blue1Brown. Whenever I try to teach a statistical concept to students, I think to myself, "You're definitely no Grant Sanderson."


nice points


Fantastic.

I have found that learning statistics over the last few years has produced as many realisations of ignorance as it has moments of clarity. And I think a lot of that ignorance follows from those 'why' questions, which are often so difficult to answer. What is the precise question we wish to answer? Why is it important? How do we present the results so that they can be understood and applied by our audience? Those questions are often so much more subtle and challenging than reflexively identifying the 'appropriate' model or test for a particular problem (abstracted from any practical context).

It's actually one of the more difficult aspects of the work. People come to you with questions like 'how do I fit and interpret this model?' and it would be easy to say 'use this software, put your predictors here and your outcome here'. But seriously engaging with those 'why' questions takes real thought and time. I feel bad when someone comes to me with what they feel is a simple question and they leave the meeting with a whole bunch of new questions and confusions.


Thank you. This was a very encouraging read and a good collection of resources for a junior researcher such as myself!

(btw I noticed the link to "improving your statistical inferences" also goes to "understanding psychology as a science")


Thank you! And I'm glad you found it useful.


I have a degree in Statistics Applied to Biology. In my biology publications, I hardly ever used the knowledge from that degree, and I did the same thing as my colleagues (when in Rome…).

My impression is that when your goal is to find the truth, you don't need fancy statistical methods, just a good understanding of probability and Popperian logic.

If, on the contrary, you want to promote your career, you will have to use every available statistical artifice, including p-hacking or the elimination of inconvenient data.

Research institutions want simple numbers like the h-index. They don't want to know what those numbers are actually worth.
