A First “Claim” Investigation

A slide showing another version of the instructions

These are something like the entire instructions for a mini-investigation that has taken up much of our second and third class meetings:

Mess around with U.S. Census data in Fathom until you notice some pattern or relationship. Then make a claim: a statement that must be either true or false. Then create a visualization (in this case, a graph) that speaks to your claim. Then make one or two sentences of commentary. These go onto one or two PowerPoint slides.

The purpose is severalfold:

  • You get a chance to play with the data
  • You learn more Fathom features, largely by induction or osmosis or something; in any case, you learn them when you need them
  • You get to direct your own investigation
  • You get practice communicating in writing—or at least slideSpeak
  • I get to see how you do on all these things
  • We all get to try out the online assignment drop-box

In fact, it has gone pretty well. We started on Wednesday (the second class) with my demonstrating how to get anything other than the default variables. I modeled the make-a-claim and make-a-graph part by showing how to compare incomes between men and women.
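
The class does all of this in Fathom rather than in code, but the same make-a-claim, make-a-graph move translates into a few lines of Python with pandas. Here is a minimal sketch; the file name and the "sex" and "income" column names are assumptions, not the actual Census extract we used:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical census extract; the file and column names are placeholders.
    census = pd.read_csv("census_sample.csv")  # e.g., columns: sex, income, age

    # Claim: the median income for men is higher than the median income for women.
    print(census.groupby("sex")["income"].median())

    # A graph that speaks to the claim: side-by-side boxplots of income by sex.
    census.boxplot(column="income", by="sex")
    plt.suptitle("")  # remove the automatic "Boxplot grouped by sex" super-title
    plt.title("Income by sex (hypothetical extract)")
    plt.ylabel("income")
    plt.show()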

Continue reading A First “Claim” Investigation

Survived Day One

The Census Age/Marital-Status Dance. Sample of what we did during the Fathom part of the block.

Whew.

I have been agonizing (all my friends and family will corroborate) about what to say and do on the first day. A lot of the worry has been about how to set the right tone. If we’re going to try to play the whole game (Perkins; Buell), I want the kids to know right away the game we will be playing. Which meant I had to decide what that was: what do I think is most important for them to learn?

Well. I survived the first class. Some things went better than others. But I want to acknowledge here two good decisions I made.

Background: the whole school is going through a re-writing of the mission statement. There is even a mission statement task force, on which I thankfully do not serve. But despite how horrendously dreary and time-wasting mission statements can be, I was surprised to find that I actually like the new mission statement a lot. It accomplishes its purpose well. With that in mind, could I capture what’s important in a couple of sentences? Make a stat class mission statement?

Here’s what I came up with after thinking about a lot of different (and probably equivalent) ways to carve up the territory:

In this class, you will

  • Learn to make effective and valid arguments using data
  • Become a critical consumer of data

Continue reading Survived Day One

Core Standards: Mathematical Practices

Core Standards logo

The core standards, increasingly adopted around the country (though sometimes with modification), are not bad, although not nearly as gutsy as the Project 2061 Benchmarks and Standards for science. Besides the lists of skills and examples in the content standards, they include a separate list of “mathematical practices”:

1. Make sense of problems and persevere in solving them.

2. Reason abstractly and quantitatively.

3. Construct viable arguments and critique the reasoning of others.

4. Model with mathematics.

5. Use appropriate tools strategically.

6. Attend to precision.

7. Look for and make use of structure.

8. Look for and express regularity in repeated reasoning.

I like these. It’s a good list. And the core-standards document gives them prominence by listing them first—before the content—on pages 6–8, with a paragraph for each one. Of course, the document is almost 100 pages long, and most of it contains lists of expectations for each grade level and, at high school, for each major topic. So it would be lamentably easy, given the sheer weight of pages, to ignore these and teach to the longer lists.

Continue reading Core Standards: Mathematical Practices

My troubled relationship with textbooks

I’ve recently been reading a new edition of a major AP stats book. Let me stipulate that the book is good: it covers the material thoughtfully. It’s well-written. It has good problems. And when I read it, I get seduced: I start to think that my course ought to be just like it. For example, I start to think that when I introduce distributions of continuous data, students ought to attend to shape, center, spread, and outliers, and that they can remember this by thinking, “don’t forget your SOCS.”
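
In code terms, that book-style checklist amounts to a few one-liners. Here is a rough Python sketch over made-up numbers; the 1.5 × IQR cutoff for flagging outliers is the usual convention, not anything taken from the book:

    import numpy as np

    values = np.array([12, 15, 15, 16, 18, 19, 21, 22, 24, 61])  # made-up data

    # Center
    print("mean:", values.mean(), " median:", np.median(values))

    # Spread
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    print("std dev:", values.std(ddof=1), " IQR:", iqr)

    # Outliers, by the common 1.5 * IQR rule of thumb
    print("flagged:", values[(values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)])

    # Shape is best judged from a picture (a histogram or dot plot), not a single number.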

But the truth is, I don’t even want to “introduce distributions.” I want to introduce situations and then notice when distributions show up. But as I read the book, my resolve starts to soften. Should I be so relentlessly new-age-constructivist-progressive-touchy-feely in my attitude towards curriculum, or should I just get real and teach distributions? This spinelessness gets worse as we get into parts of the course I haven’t thought through as carefully: gee, if I’m not sure how to teach this, maybe I should go with the book. And more insidiously, I’m not sure whether to teach this; but it’s in the book, so maybe it’s important.

Continue reading My troubled relationship with textbooks

Measurement: invented, inexact, and indirect

This is another topic I want to write about. I did speak about it a few years ago at Asilomar, but it’s still kinda half-baked and is worth revisiting here because of how it fits with what I want to do in class this year. It will be interesting to look back in May and see what its role was. So, here we go:

Like many math educators, I used to dismiss the “measurement” strand. I thought of it as the weak sister of the NCTM content standards, nowhere near the importance of geometry or algebra, or even the late lamented discrete math. But I have seen the light, and now it’s one of my favorites. Not for how NCTM represents it, but for the juicy stuff that got left out.

I like the “rule of three” slogan of the title: Invented, inexact, and indirect. Oddly, I have trouble remembering it even though I created it. This suggests that something is wrong—but for now, let’s proceed as if it were perfect.

Continue reading Measurement: invented, inexact, and indirect

SBG: The Search for Standards Continues

Yesterday I came across a great resource from missCalcul8: an SBG wiki for noobs. (Thanks to yet another blog, The Space Between the Numbers, by Breedeen Murray, for the pointer.) It includes how-tos from some of the luminaries in this field, plus, joy of joys, actual lists of standards so that we can imagine what they’re really talking about.  (She has also just posted a number of frightening skills lists on her own blog.)

For me, well, none of them are in statistics yet, but maybe that’s a place where I can contribute when I make that list.

So I tried to get started. One place to look for statistics standards is in the GAISE materials. That’s Guidelines for Assessment and Instruction in Statistics Education, put out by the American Statistical Association (ASA) and designed to elaborate on the NCTM Standards. These guidelines come in two downloadable pdf books, one for pre-college (that’s us!) and another for postsecondary. In our book, they define three levels, named A, B, and C. These do not correspond to elementary, middle, and secondary; many high-school students (not to mention adults, not to mention me) have not fully mastered the ideas in levels A and B.

Continue reading SBG: The Search for Standards Continues

Tyranny of the Center

Tyranny of the Center: a favorite phrase of mine that I keep threatening to write about. Here is a first and brief stab, inspired by my having recently used it in a comment on ThinkThankThunk.

In elementary statistics, you learn about measures of center, especially mean, median, and mode. These are important values; they stand in for the whole set of data and make it easier to deal with, especially when we make comparisons. Are we heavier now than we were 30 years ago? You bet: the average (i.e., mean) weight has gone up. Would you rather live in Shady Glen than Vulture Gulch? Sure, but the median home price is a lot higher.

We often forget, however, that the mean or median, although useful in many ways, does not necessarily reflect individual cases. You could very well find a cheap home in Shady Glen or a skinny person in 2010. Nevertheless, it is true that on average we’re fatter now—so when we picture the situation, we tend to think that everyone is fatter.
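
As a toy illustration with made-up prices, here are two versions of those towns whose mean home price is identical even though the individual homes tell very different stories:

    import numpy as np

    # Made-up home prices, in thousands of dollars; both towns have a mean of 500.
    shady_glen    = np.array([300, 450, 500, 550, 700])
    vulture_gulch = np.array([100, 150, 200, 250, 1800])

    for name, prices in [("Shady Glen", shady_glen), ("Vulture Gulch", vulture_gulch)]:
        print(name, "mean:", prices.mean(), "median:", np.median(prices))

    # Same mean, but the median (and the typical home) is far higher in Shady Glen.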

One of my goals is to immunize my students against this tendency to assume that all the individuals in a data set are just like some center value; I think it is a good habit of mind to try to look at the whole distribution whenever possible. Let’s look at a couple of situations so you can see why I care so much.

Continue reading Tyranny of the Center

Reality interferes with my planning

I’ve been away for three weeks, sans computer—more on that anon—spending much vacation-and-conference time mulling over what I want to do in this class and fantasizing about The First Day.

Yesterday we returned. I logged in. The list was available; I could see which kids would be in my perfect little class. There are 18 cherubs, juniors and seniors, a few of whom I had in class a couple of years ago, most of whom I don’t really know at all. And I know that they will be great, that they are smart, that I can communicate my love of the subject and infect them.

But now, instead of an idealized, pristine version of a progressive, student-centered SBG stats class, my vision had actual students in it. And somehow my ideals started sliding out from under me. I could imagine giving fun, engaging lectures instead of designing explorations; awarding points for showing up and doing homework instead of for mastery of standards; dealing with deadlines and extensions; and generally succumbing to the quick and easy path, sliding off the razor’s edge in the direction of being a stand-and-deliver math teacher.

It’s not the kids. They’re great. But they’re real, and reality somehow sucks me towards what’s comfortable.

Fortunately, I have help. One good source of backbone is the repeated rants at ThinkThankThunk. I once thought that hammering on the same anvil over and over was bad form, an indication of being enslaved to one good idea. But I was wrong. I appreciate Shawn’s willingness to remind us newbies why we decided to think about doing things differently. And I confess that one of the main reasons I put that link in this post is so that I can find it again when I find myself going over to the dark side.

Another place to find clarity, or at least reality, is at f(t), where in the post of that link, Kate Nowak reminds us how messy it all is. There is no razor’s edge, no clean, perfect educational slam-dunk; we deal with human beings every single time, and that is both a burden and a privilege.

Still. I like being in Fantasyland, where standards-based grading works beautifully from day one; where the students who have been badly treated by math in their past realize that they really can look at the world quantitatively; where they connect the math they thought was meaningless to the real world; and where these students design their own projects and critique one another’s work fairly but kindly, building classwide self-esteem while insisting on an appropriate, deepening level of rigor. Ha.

I guess I know that despite this being, after all, a best-case scenario, it won’t be perfect. I won’t be. The kids won’t be. But we’ll get parts of all that, and a lot more that I can’t predict, because of the particular alchemy of these 18 kids.

I am so scared.

The Subjunctive Thing

In yesterday’s post—part of my before-reality-sets-in idealistic lunacy—I briefly mentioned the subjunctive mood while talking about inferential statistics. That deserves a little elaboration. (I elaborated on it quite a bit in a paper (here), so you can read that if you wish. This is shorter.)

The subjunctive mood is the bane of many language students. One of the reasons is that in English, the subjunctive is becoming invisible. It still exists in a few places (“If I were to give you an A for that work, I would be doing you a disservice” is correct, if pompous), but even that construction is vanishing (“If she was teaching summer school, she couldn’t go to Hawaii” sounds increasingly OK).

One of the reasons to use the subjunctive is to express something contrary to fact. That is, I’m not giving you an A. She is not teaching summer school. It also expresses something you might do in the future, when you’re not sure of the outcome: If I were to give you a puppy, would you love me forever?

Aside. Note that we could also say, “If I gave you a puppy, would you love me forever?” In that sentence, gave is subjunctive, but it looks just like the past tense even though it refers to the future. That’s one reason it’s hard to identify the subjunctive in English. Note how the indicative “If I give you a puppy, will you love me forever?” seems different: it’s more an offer than a hypothetical.

Statistical inference is fundamentally subjunctive: we’re saying, if Belinda had no power and if she were to flip 20 coins over and over, how often would she get 16 heads? It’s a hypothetical question. In an orthodox stats class, you would hardly ever flip actual coins; but using George-Cobb-inspired randomization tests, that’s exactly what we do (in simulation at least) all the time. We take the contrary-to-fact subjunctive and make it real.
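
That subjunctive question takes only a few lines to simulate. Here is a rough Python sketch of the powerless-Belinda world; the 10,000 repetitions are an arbitrary choice:

    import random

    TRIALS = 10_000
    hits = 0
    for _ in range(TRIALS):
        # One imaginary world in which Belinda has no power: 20 fair coin flips.
        heads = sum(random.random() < 0.5 for _ in range(20))
        if heads >= 16:
            hits += 1

    print("share of powerless-Belinda worlds with 16 or more heads:", hits / TRIALS)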

I claim that one of the things that makes inferential statistics hard is that the machinery is based on a strange, hypothetical, subjunctive, contrary-to-fact set of assumptions and procedures that none of us are well-equipped to understand for more than about 30 seconds at a stretch. So to the extent that we can alleviate some of the unreality, students will have a better chance of understanding what it’s all about.

Do I have evidence for this claim? I do not. I will at least get some insight into it next year. With a real class, I wonder if I will see any evidence one way or the other…

Randomization

If you’re not a stats maven, this may sound esoteric, but let’s see if I can express it well.

One of the things that’s hard to learn in orthodox statistics is the whole machinery of statistical tests. You can train yourself (or a monkey) to do it right, but it seems to be a morass of weird rules and formulas. Remember to divide by (n–1) when you compute the standard deviation. You have to have an expected count of at least five in every cell to use chi-squared. You can use z instead of t if df > 30. And then there’s remembering what tests to use in which situation. You wind up with a big flowchart in your head about whether the data are paired, whether the variables are categorical, etc., etc., etc. And as a learner, you lose sight of the big picture: what a test is really saying.
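
To spell out just the first of those rules as a formula, the sample standard deviation is

    s = sqrt( Σ (xᵢ − x̄)² / (n − 1) ),

with n − 1 rather than n in the denominator.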

George Cobb wrote a terrific article explaining why this is all unnecessary. The short version goes like this: you could unify a lot of inferential statistics if, instead of the tests we use now (z, t, chi-squared, ANOVA…), we used randomization tests.

Here’s the basic idea, which we will often refer to as the “Aunt Belinda” problem. Your Aunt Belinda claims to have supernatural powers. She says she can make tossed nickels come up heads. You don’t believe her, so you get a dollar’s worth of nickels (20 of them); she speaks an incantation over them; you toss them all at once; and sixteen come up heads.

Does she have supernatural powers?
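
For a sense of scale, the exact chance under a fair-coin, no-powers model can be worked out directly:

    P(16 or more heads in 20) = [C(20,16) + C(20,17) + C(20,18) + C(20,19) + C(20,20)] / 2^20
                              = (4845 + 1140 + 190 + 20 + 1) / 1,048,576
                              = 6196 / 1,048,576 ≈ 0.006,

so a powerless Belinda would pull this off only about six times in a thousand tries.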

Continue reading Randomization