So. Long time no post, too many words to write elsewhere, mea culpa, mea culpa. But let me get to it:
Since forever (well, last year) I have been trying to run a (regular) statistics course using a randomization approach à la George Cobb. And let’s be clear, this is a radical randomization course.
What do I mean by that? Two things, I think:
- Randomization is not just a means of making traditional stats more approachable, but rather a substitute for traditional stats.
- If you go this way, students need to know how to construct the technological tools they use.
This is a noble goal, but it sure isn't working universally. Some students get it; then again, some always do, despite our best efforts.
But others, well, it could be senioritis, but after a whole year with some of these students, I'm beginning to think that it's something developmental. No traction whatsoever. It could be me; I sure spend a lot of time thinking that I suck as a teacher, and of course I make bonehead instructional choices. But if it's not me, it may be impossible. Which grieves me, because I so want this to work.
But let me take some space here to describe what I’m thinking.
If any reader doesn't know what the heck I'm talking about, check out this old post. If you're saying, “yeah, randomization, cool,” know that we use Fathom, that everybody has it at home (so I can, and do, assign homework), and that we use randomization procedures exclusively. So if you want to do a hypothesis test, you invent a measure for the effect you're seeing, figure out what the null hypothesis is, simulate it, and compare the test statistic to the resulting sampling distribution of the measure.
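For readers without Fathom at hand, the whole workflow fits in a short Python sketch. Everything here is invented for illustration (the data, and a difference-of-medians measure); the point in class is that students build the analogous machinery themselves.

```python
# A rough sketch of the workflow above, in Python rather than Fathom.
# The data and the "measure" (difference of medians) are invented
# for illustration; in class, students choose their own measure.
import random

group_a = [12, 15, 14, 10, 18, 16, 11]   # made-up treatment scores
group_b = [9, 11, 8, 13, 10, 7, 12]      # made-up control scores

def measure(a, b):
    """The invented statistic: difference of group medians."""
    med = lambda xs: sorted(xs)[len(xs) // 2]
    return med(a) - med(b)

observed = measure(group_a, group_b)

# Null hypothesis: the group labels don't matter. Simulate it by
# scrambling the pooled data and re-splitting, many times.
pooled = group_a + group_b
n_a = len(group_a)
sampling_dist = []
for _ in range(5000):
    random.shuffle(pooled)
    sampling_dist.append(measure(pooled[:n_a], pooled[n_a:]))

# Compare the test statistic to the sampling distribution of the measure.
p_hat = sum(m >= observed for m in sampling_dist) / len(sampling_dist)
print(f"observed measure: {observed}, estimated one-sided p-value: {p_hat:.3f}")
```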
The beauty of this approach is supposed to be (and I still believe it) that students reallio-trulio have to face the real meaning of the stats, that whole subjunctive thing, over and over again.
The problem (referring to the two bullets above) is twofold:
- The business of constructing a null hypothesis and keeping track of the measure and the sampling distribution is just plain hard.
- Making the simulation is also hard.
And the combination is just brutal.
A less-radical approach might be:
Do a lot of problems where the measure is always about the mean or standard deviation. Develop the whole thing as if you’re about to use the Normal distribution. But stop short of that. Give the students template documents (or write applets) so they can see the distributions they get but don’t have to actually build the machinery themselves.
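To make that concrete, here is a purely hypothetical sketch of the “template” idea, in Python rather than Fathom: the machinery is pre-built, the measure is fixed to the difference of means, and the only thing a student supplies is data.

```python
# Hypothetical "template": the randomization machinery is pre-built,
# the measure is always the difference of means, and students only
# plug in their two groups of data.
import random

def mean(xs):
    return sum(xs) / len(xs)

def shuffle_test(group_a, group_b, trials=2000):
    """Pre-built randomization test using difference of means only."""
    observed = mean(group_a) - mean(group_b)
    pooled = group_a + group_b
    n_a = len(group_a)
    dist = []
    for _ in range(trials):
        random.shuffle(pooled)
        dist.append(mean(pooled[:n_a]) - mean(pooled[n_a:]))
    # Crude text histogram so students can *see* the sampling
    # distribution without building anything themselves.
    # (Assumes the shuffled differences aren't all identical.)
    lo, hi = min(dist), max(dist)
    bins = [0] * 10
    for d in dist:
        bins[min(int((d - lo) / (hi - lo) * 10), 9)] += 1
    for i, count in enumerate(bins):
        left = lo + i * (hi - lo) / 10
        print(f"{left:6.2f} | {'#' * (count // 25)}")
    print(f"observed difference of means: {observed:.2f}")

# Example use, with made-up data:
shuffle_test([12, 15, 14, 10, 18, 16, 11], [9, 11, 8, 13, 10, 7, 12])
```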
Here’s a rebuttal:
Why do we want to do randomization in the first place? Because traditional frequentist Normal-distribution-based stats are limiting and create incorrect hooks in kids’ heads. They get bits of formulas stuck in there and don’t understand what’s really going on. They don’t distinguish (for example) between the distribution of the data and the sampling distribution of the mean. And this, in turn, leads to a slavish devotion to turning some crank and finding p < 0.05 and nothing else.
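That last distinction is easy to demonstrate with a quick simulation. A minimal sketch, with invented, skewed data: the data have one spread, and the means of repeated samples have a much narrower one.

```python
# The distribution of the data vs. the sampling distribution of the mean.
import random

random.seed(1)
population = [random.expovariate(1.0) for _ in range(10000)]  # skewed data

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Sampling distribution of the mean for samples of size 25.
means = [sum(random.sample(population, 25)) / 25 for _ in range(2000)]

print(f"SD of the data:         {sd(population):.3f}")  # about 1
print(f"SD of the sample means: {sd(means):.3f}")       # about 1/sqrt(25) = 0.2
```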
And with technology like Fathom, they can actually do what the big dogs do: create statistics (measures) that accomplish whatever they want, and use those to describe what's going on. This leads to what I think should be one of the main goals of high-school math education: showing students that they can use symbolic mathematics as a tool to accomplish what they want. If you just use means, you fall prey to the tyranny of the center (I still need to write that article!) and lead students into complacent orthodoxy.
Which leads to the re-rebuttal:
But in fact students cannot all get it. The measures mechanism, as splendid as it is, is too hard to use. Most people don't use it; there are a few fluent, advanced Fathom users, but no more. Why else did Fifty Fathoms (a collection of pre-built Fathom demonstrations) sell so well? So you can't expect the most math-troubled, insecure students to be able to use the mechanism, no matter how sweetly you cajole them.
And of course it’s not just Fathom—it’s the field. Statistical thinking is hard. Lower your expectations. A few people will get it now. A few more will in college. That will have to be enough.
In any case, this year’s experience has ground me down. I won’t be coming back next year. Maybe after some time out of the classroom I’ll be able to face it again. Any suggestions are welcome.
This is a bold and courageous approach. Were there some successes along the way? Were there elements of the process that students understood and appreciated?
Having been a long-time AP Statistics teacher, and a current “experimenter” with a simulation-based approach to inference for a class called “Topics in Statistics and Calculus,” I can say that there are some elements of the simulation-based approach which, I think, were understood by most students in the class. These students were not the most accomplished learners, but I was convinced that some “got it” when they were able to design simulations based on measures that were not like the ones I had presented in class.
I tended to use manipulatives less complex than Fathom (10-sided dice, coins, and random number generators) first.
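That kind of manual simulation translates almost line for line into code; a minimal sketch, with an invented question (how often do fair coins give 8 or more heads in 10 flips?):

```python
# A coin-flip simulation of the sort dice and coins support by hand.
import random

trials = 10000
hits = 0
for _ in range(trials):
    heads = sum(random.randint(0, 1) for _ in range(10))  # 10 coin flips
    if heads >= 8:
        hits += 1

print(f"P(8+ heads in 10 flips) ~ {hits / trials:.3f}")  # about 0.055
```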
Thanks! Wow: “Topics in Stats and Calc” is a really interesting course title. Where can we learn more??
Meanwhile, about starting with “manual” simulation using manipulatives, dice, spinners, etc.: I could have done more, and even followed the schemes set forth in the old Art and Techniques of Simulation in the Quantitative Literacy series (I think). If I'd started the year with that, setting the stage for later work, it might have helped. My skeptic replies that no, (a) that level of abstraction is still too hard for some students, and (b) if they did manage it, the course would become one in modeling and simulation…which would probably be a good thing!