Another vid: Fathom simulations with sampling

The classic randomization procedure in Fathom has three collections (there is a rough code sketch of the idea just after this list):

  • a “source” collection, from which you sample to make
  • a “sample” collection, in which you define a statistic (a measure in Fathom-ese), which you collect repeatedly, creating
  • a “measures” collection, which now contains the sampling distribution (okay, an approximate sampling distribution) of the statistic you collected.
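For readers without Fathom at hand, here is a minimal sketch of the same recipe in ordinary Python. The data, the group names, the number of trials, and the choice of statistic (a difference of two means) are all made up for illustration, and scrambling group labels stands in for Fathom's sampling machinery:

    import random

    # Made-up "source" data: scores for two hypothetical groups.
    group_a = [12.1, 9.8, 11.4, 13.0, 10.2, 12.7]
    group_b = [9.5, 10.1, 8.8, 11.0, 9.9, 10.4]

    def diff_of_means(a, b):
        # The statistic we define on each sample: a 'measure' in Fathom-ese.
        return sum(a) / len(a) - sum(b) / len(b)

    observed = diff_of_means(group_a, group_b)

    # Simulate the world in which the effect does not exist: scramble the
    # group labels and recompute the measure, over and over.
    pooled = group_a + group_b
    n_a = len(group_a)
    measures = []                      # plays the role of the "measures" collection
    for _ in range(5000):
        random.shuffle(pooled)         # one re-randomization (the "sample" collection)
        measures.append(diff_of_means(pooled[:n_a], pooled[n_a:]))

    # measures now holds an approximate sampling distribution of the statistic;
    # comparing the observed value to it gives an empirical probability.
    p_empirical = sum(abs(m) >= abs(observed) for m in measures) / len(measures)
    print(f"observed difference = {observed:.2f}, empirical P ≈ {p_empirical:.3f}")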

This is conceptually really difficult; but if you can do this (and understand that the thing you’re making is really the simulation of what it would be like if the effect you’re studying did not exist—the deeply subjunctive philosophy of the null hypothesis, coupled with tollendo tollens…much more on this later), then you can do all of basic statistical inference without ever mentioning the Normal distribution or the t statistic. Not that they’re bad, but they sow confusion, and many students cope by trying to remember recipes and acronyms.

My claim is that if you learn inference through simulation and randomization, you will wind up understanding it better because (a) it’s more immediate and (b) it unifies many statistical procedures into one: simulate the null hypothesis; create the sampling distribution; and compare your situation to that.

Ha. We’ll see. In class, we have just begun to look at these “three-collection” simulations. I made a video demonstrating the mechanics, following the one on one- and two-collection sims described in an earlier post. They are all collected on YouTube, but here is the new one.

Comments welcome.

Author: Tim Erickson

Math-science ed freelancer and sometime math and science teacher. Currently working on various projects.

6 thoughts on “Another vid: Fathom simulations with sampling”

  1. Tim, I am currently teaching a course along the lines of simulation and randomization. The course is based on the book by Tabor and Franklin, “Statistical Reasoning in Sports”. There is also another book on the same principles coming out of Hope College in Michigan. Granted that, as you, George Cobb, and others put it, inference through simulation and randomization is more intuitive and therefore provides a pedagogically more approachable way into a first stats course. In my mind, the question still remains as to what the students will be left with if this is the only stats course they take. If they face a problem that calls for statistical analysis, should they approach it through a computer simulation? Shouldn’t they know about the t-test, for example: what it does, when it can be applied, etc.? Will students be able to critically consume research that involves statistical analysis if they are only exposed to the simulation/randomization method?

    1. Great questions. I wonder too. Here’s my first guess:

      • (snark alert) They do not need to know what a t-test is really, nor what the requirements are, because many people who use them don’t know either.
      • More seriously, we’re starting to compute empirical probabilities based on sampling distributions. I intend to move gradually into identifying those probabilities as “P-values”; from there, I hope it will not be too hard to have these students (a) look at published studies and interpret the P-values (correctly, since they learned all this through having made their own distributions using more intuitive statistics such as the difference between two means), and (b) use a t-test in Fathom without even recognizing it, that is, by choosing (say) “difference of means” from the menu in the Test object. I would present this as a short-cut that packages up the whole simulation mechanism in one easy step, while noting that this is the quick and easy path, which has its dangers. And if they have any qualms at all, they should check out their results using simulation. (A rough sketch of that short-cut appears after this reply.)
      • Then, of course, we do some examples of where it works and where it does not.
      • Will we get to all of this? I’m not sure. Will they really learn it? Not sure there either, but I’ll at least have some student work come June.

        And finally, I did administer the CAOS pre-test in the Fall. If I’m really gutsy, I’ll administer the post in the Spring…
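        A rough sketch of that short-cut idea, in Python rather than Fathom: the packaged t-test and the re-randomization answer the same question and give roughly the same P-value. The data are the same made-up values as in the earlier sketch, and the use of scipy here is just one convenient stand-in for a canned test, not anything from the post:

            import random
            from scipy import stats

            # The same made-up groups as in the earlier sketch.
            group_a = [12.1, 9.8, 11.4, 13.0, 10.2, 12.7]
            group_b = [9.5, 10.1, 8.8, 11.0, 9.9, 10.4]

            # The quick and easy path: a canned two-sample t-test.
            t_stat, p_t = stats.ttest_ind(group_a, group_b, equal_var=False)

            # The long way: scramble the labels and build the distribution ourselves.
            observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
            pooled = group_a + group_b
            n_a, trials, hits = len(group_a), 5000, 0
            for _ in range(trials):
                random.shuffle(pooled)
                diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
                if abs(diff) >= abs(observed):
                    hits += 1

            print(f"t-test P = {p_t:.3f}; simulation P ≈ {hits / trials:.3f}")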

  2. Tim,

    I think that if we are to reform the teaching of the first course in statistics along the lines of Cobb et al., we need to look beyond that limited horizon. We need to ask what our students will do after that first course: when and if they will need statistical analysis, and if so, whether it will be work they need to do on their own or whether they will be (mostly) consumers of statistics-based results.

    I agree with what you said that the students should leave this first course with an understanding of p-values and the ability to pull down the tests menu from Fathom, but as much as I like (LOVE) Fathom, do we expect the students to depend on it after the first course?

    1. Good question; my first thought is, after a high-school AP course, do we expect them to depend on a TI-84 and tables from the back of a book? My (as usual data-free) assertion is that understanding the underlying principle is the main thing; whenever these students actually need to do stats in their future, they will need to learn whatever technology is appropriate at the time. And if they only need to understand some published stats, well then, what I have suggested above (knowing what P means, recognizing t as an alternative statistic, etc.) will be at least as good as what the average educated citizen remembers from a traditional course.

      The other thing is, these are “regular” stats students, many of them second-semester seniors. Many of them are on a trajectory to avoid taking any math course again for the rest of their lives. When I look into their eyes, I cannot see them caring whether they should (for example) divide by n or n – 1 when they compute a sample SD. But some basic modeling, some EDA, computing probabilities and expected values empirically, the fundamentally subjunctive way of looking at a null-hypothesis situation—I think those will serve well, and might just convince some of them to stay in the game.

      That’s the hope, anyway. I am making this up as I go along….

  3. Tim –
    I believe you are absolutely correct in (a) your assessment of the kids and (b) what we can and should do in this kind of course. I consider my greatest success in this class that 3 of the students have decided to sign up for AP Stats. This is the first math course where they were successful and they want to push ahead. The obverse of the coin is that I started with 35 and now I have 15.

    Two more information points. There seems to be a small, but growing community of people exploring the replication/randomization approach to a first stats course. Two sources that I would recommend to contact/stay in touch with are:

    Josh Tabor who wrote the textbook (pre-print) we are now using: Statistical Reasoning in Sports. Josh is one of the initiators of the AP Stats curriculum and he now teaches in Arizona at Canyon del Oro HS.

    Nathan Tintle at Hope College in Michigan. He and his colleagues are planning a book on Statistical Inference along the lines we discussed and their chapter 1 is online. I believe they are also working with George Cobb as an adviser.

    Finally, I think using Fathom is just the right thing for such an introductory course. I believe both Josh and Nathan are using it, but I am not sure they are introducing it to the students.

    We’ll see what comes out of all this – for my part I am looking forward to moving some of this stuff into my AP Stats class if I have the time.

    All the best
    Dean

  4. I’m a little late getting in on this discussion, but will need to go back now and read more of Tim’s posts. As an up-front disclaimer, I’m working on a textbook project that is based on using bootstrap and randomization methods to introduce students to the core ideas of statistical inference. I’m a long-time fan of Fathom, which I find ideal for this sort of approach.

    On the specific question of where traditional methods, such as the t-test, fit with a more computer-intensive, simulation-based approach, our current thinking is to still show students such methods, AFTER they (hopefully) have a good grasp of the core ideas. I did this with my college-level intro class last fall. We had fairly early material on using bootstraps to construct confidence intervals and randomization procedures for doing significance testing – all before they’d ever seen a normal distribution. After doing quite a bit with the computer-intensive methods, I point out that we keep seeing the same general shape over and over, and that we can, in many cases, predict that shape in advance. This gives a nice motivation for the normal distribution (and later t), and we then consider the traditional tests as shortcuts to save us the effort needed to construct the bootstrap/randomization distribution in those situations.
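    A minimal sketch of the bootstrap-interval idea described above, again in Python rather than Fathom; the data, the number of resamples, and the percentile method are assumptions for illustration:

        import random

        # A hypothetical sample; we want an interval estimate for its mean.
        data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4, 6.2, 5.0]

        boot_means = []
        for _ in range(5000):
            # Resample with replacement, same size as the original sample.
            resample = [random.choice(data) for _ in data]
            boot_means.append(sum(resample) / len(resample))

        boot_means.sort()
        # Percentile method: take the middle 95% of the bootstrap distribution.
        lo = boot_means[int(0.025 * len(boot_means))]
        hi = boot_means[int(0.975 * len(boot_means))]
        print(f"approximate 95% bootstrap interval for the mean: ({lo:.2f}, {hi:.2f})")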

    Thus, by the end of the course, students have seen the same standard tests/intervals as before – but (we hope) now with better understanding of what’s really going on. It allows us to separate the mechanics of the traditional tests/intervals (which now just involve finding an appropriate estimate for a standard error and checking conditions for a distribution) from the more fundamental logic of inference. I found that we can go through those standard procedures much more quickly than usual, since we aren’t trying to establish ideas like choosing hypotheses, understanding a p-value, or interpreting a confidence interval at the same time.
