If you’re not a stats maven, this may sound esoteric, but let’s see if I can express it well.

One of the things that’s hard to learn in orthodox statistics is the whole machinery of statistical tests. You can *train* yourself (or a monkey) to do it right, but it seems to be a morass of weird rules and formulas. Remember to divide by (*n*–1) when you compute the standard deviation. You have to have an expected count of at least five in every cell to use chi-squared. You can use *z* instead of *t* if *df* > 30. And then there’s remembering what tests to use in which situation. You wind up with a big flowchart in your head about whether the data are paired, whether the variables are categorical, etc., etc., etc. And as a learner, you lose sight of the big picture: what a test is really saying.

George Cobb wrote a terrific article explaining why this is all unnecessary. The short version goes like this: you can unify a lot of inferential statistics if, instead of the tests we now use (*z*, *t*, chi-squared, ANOVA…), we used randomization tests.

Here’s the basic idea, which we will often refer to as the “Aunt Belinda” problem. Your Aunt Belinda claims to have supernatural powers. She says she can make tossed nickels come up heads. You don’t believe her, so you get a dollar’s worth of nickels (20 of them); she speaks an incantation over them; you toss them all at once; and sixteen come up heads.

Does she have supernatural powers?
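The randomization-test way to answer this is to simulate the boring explanation: toss 20 fair nickels over and over, and see how often chance alone produces 16 or more heads. Here’s a minimal sketch in plain Python (the function name `randomization_pvalue` and the trial count are my choices, not anything from Cobb’s article):

```python
import random

def randomization_pvalue(observed=16, n_coins=20, n_trials=100_000, seed=1):
    """Estimate the chance that n_coins fair coins show at least
    `observed` heads, by simulating the tosses many times."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_trials):
        # One "trial": toss 20 fair nickels and count the heads.
        heads = sum(rng.random() < 0.5 for _ in range(n_coins))
        if heads >= observed:
            extreme += 1
    # The fraction of trials at least as extreme as what we saw.
    return extreme / n_trials

p = randomization_pvalue()
print(f"Estimated P(at least 16 heads in 20 fair tosses): {p:.4f}")
```

The exact binomial answer is about 0.006, so the simulation should land near that: fair coins almost never do what Aunt Belinda’s nickels did. That fraction *is* the p-value, and you got it without a formula, a table, or a flowchart.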
