For the last two years, I've started my second semester (of this NON-AP course) with probability. Now, most of the probability in the class is destined to be empirical because of my overall approach—for example, you get a P-value by simulating a sampling distribution and counting the cases in the tail—but I had a notion that this was the place to learn some of the nuts and bolts of solving probability problems “by hand”—using area models and tree diagrams and learning when you add and when you multiply, and why. I even flipped the classroom a bit, making a series of videos describing the tools and techniques I wanted students to use.
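(If you want to see what that simulate-and-count idea looks like outside of Fathom, here is a minimal Python sketch. The coin-flipping scenario and all the numbers are invented for illustration; they're not from the class.)

```python
import random

# Hypothetical scenario: we flipped a coin 50 times and saw 34 heads.
# Is that surprising if the coin is actually fair?
observed_heads = 34
n_flips = 50
n_trials = 10000

# Simulate the sampling distribution of "number of heads" under the
# null hypothesis (a fair coin), one simulated sample per trial.
simulated = [sum(random.random() < 0.5 for _ in range(n_flips))
             for _ in range(n_trials)]

# The empirical P-value is the fraction of simulated samples at least
# as extreme as the observed one (one-tailed, to keep it simple).
p_value = sum(count >= observed_heads for count in simulated) / n_trials
print(f"Empirical P-value: {p_value:.3f}")
```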
Now I’m not sure that by-hand unit was such a good idea. It took a lot of time, it confused some of the students, and frankly we didn’t use it much. The only place we used the concepts was in addressing some issues in conditional probability—and a lot of that might be approached elsewhere (e.g., with the Markov data game). And in a traditional class, the main place you use probability is probably to get a value from the cumulative Normal distribution, which certainly doesn’t require an understanding of tree diagrams or vocabulary like “mutually exclusive.”
Instead, if I were to do this again (though that looks less and less likely, and I find myself regretting that more and more), I’m thinking I might start out that second semester with random walks.
Why? Because they’re all about variation; you can use modeling to get the root-N dependence (the spread of where you end up after N steps grows like the square root of N), and they’re a good setup for learning about the effects of sample size, which have gotten short shrift this year and which I think are really important. (See this video, by the way, for an example why.)
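Here is a rough Python sketch of the kind of modeling I mean (my own illustration, not a Fathom document from the class): simulate lots of walks of different lengths and watch how the spread of the endpoints grows.

```python
import random
import statistics

# A symmetric random walk: N steps, each +1 or -1 with equal chance.
def walk_endpoint(n_steps):
    return sum(random.choice((-1, 1)) for _ in range(n_steps))

# For several walk lengths, simulate many walks and measure the spread
# (standard deviation) of where they end up. It grows roughly like sqrt(N).
for n_steps in (25, 100, 400, 1600):
    endpoints = [walk_endpoint(n_steps) for _ in range(2000)]
    spread = statistics.stdev(endpoints)
    print(f"N = {n_steps:4d}   spread of endpoints = {spread:6.1f}   sqrt(N) = {n_steps ** 0.5:5.1f}")
```

Quadrupling the number of steps only doubles the spread, which is exactly the sample-size lesson I want students to run into.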
Variations on the traditional random walk can also help you see conceptually what bias is all about.
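For instance (again a sketch of my own, with an arbitrary step probability of 0.55): give each step a slight preference for +1 and the walk picks up a systematic drift that grows with N, while the chance spread still grows only like root-N.

```python
import random
import statistics

# A biased walk: each step is +1 with probability p, otherwise -1.
def biased_endpoint(n_steps, p=0.55):
    return sum(1 if random.random() < p else -1 for _ in range(n_steps))

# The systematic drift of the mean grows like N * (2p - 1), while the
# chance spread still grows only like sqrt(N): for long walks the bias
# dominates the noise, which is the conceptual point.
for n_steps in (100, 400, 1600):
    endpoints = [biased_endpoint(n_steps) for _ in range(2000)]
    print(f"N = {n_steps:4d}   mean drift = {statistics.mean(endpoints):7.1f}   "
          f"spread = {statistics.stdev(endpoints):5.1f}")
```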
Besides, they’re fun to simulate in Fathom.
(That link, by the way, is to the new, nearly unread “sister blog” to this one in which we focus on Fathom techniques.)