A Rhythm in Notion
Small(er) Steps Toward a Much Better World

A Small Andrew Gelman Blog Library

The best thing about reading Andrew Gelman’s blog is Andrew Gelman.

Like Tyler Cowen or Noahpinion, he's genuinely open to changing his mind - indeed, even eager to do so! I've seen him do it in the comments to his own posts. While he frequently criticizes folks who use trashy statistics, he also thoughtfully considers the value their work might have, despite its lack of factual basis.

He comes across as an open, decent guy with a sense of humor about himself, who has also written what looks like some of the best work in the field (Bayesian Data Analysis) and co-created the Stan probabilistic programming language.

Statistics Done Wrong

Gelman links to this fun read on the most common statistical errors in science, since expanded into a book.

Differences between biology and statistics

Why have some biologists tried to start a backlash against the replication movement? Maybe it’s something to do with the incentive structure.

Gelman Quotes

A collection of often-amusing quotations his students wrote down from his classes.

Handy Statistic Definitions

A lot of good concepts to know, with links to helpful columns for every one!

The fault in our stars

A significant result is often marked with an asterisk or star, hence the title. The post is on the abuse of statistics.

What Hypothesis Testing is All About

Until reading this column, I hadn't realized that most scientific statistics use the p-value to pull a judo move with Popperian falsification. Most scientists play a game: make up a null hypothesis - whatever you want to disprove. Then, by finding data that would be unlikely if the null hypothesis were true, you falsify it, and by disproving it, claim support for your own hypothesis.

Neat, huh? Gelman explains why rejecting the null hypothesis doesn't actually help you, but failing to reject it does.
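To make the game concrete, here's a minimal Python sketch of the ritual (the experiment, effect size, and sample sizes are invented for illustration, not taken from the post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical experiment: does a "treatment" shift some outcome?
# The null hypothesis H0 is the straw man we set up to knock down:
# "treatment and control have the same mean."
control = rng.normal(loc=0.0, scale=1.0, size=30)
treatment = rng.normal(loc=0.5, scale=1.0, size=30)  # assumed true effect: 0.5

# The p-value answers: "if H0 were true, how surprising would data at
# least this extreme be?" It says nothing directly about our own
# hypothesis - that's the judo move.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Reject H0 -> claim support for the researcher's hypothesis")
else:
    print("Fail to reject H0 -> the data are too noisy to say much")
```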

How do data and experiments fit into a scientific research program?

What if someone builds a career out of a line of research that uses small sample sizes to make “progress”? They’ll keep finding spurious correlations while earnestly believing they’re doing good science. Does this mean their work is useless? And what role does the data play?
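As a toy illustration of how that career could go, here's a minimal Python sketch (the study count and sample sizes are my own assumptions): simulate many small studies of an effect that is actually zero and see what the significance filter produces.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical research program: 1,000 small-sample studies of an
# effect that is, in truth, exactly zero.
n_studies, n_per_group = 1000, 10
significant_effects = []

for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)  # no real difference
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant_effects.append(abs(a.mean() - b.mean()))

# Roughly 5% of the studies "find" an effect, and the ones that do
# report large estimates: noise dressed up as progress.
print(f"significant: {len(significant_effects)} / {n_studies}")
print(f"mean |effect| among significant: {np.mean(significant_effects):.2f}")
```

Run long enough, a program like this yields a steady stream of publishable-looking results from pure noise.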