## An Empirical Approach to Dice Probability

We’re starting to learn about probability. Surely one of the quintessential settings is rolling two dice and adding. I’ll try to walk that back another time and rationalize why I include it, but for now, I want students to be able to explain why seven is more likely than ten. I want them to have that archetypal diagram in their heads.
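The claim that seven beats ten comes straight from counting the 36 equally likely ordered pairs. A minimal Python enumeration (my own illustration, not anything from class) makes the count explicit:

```python
from collections import Counter

# Enumerate all 36 equally likely (red, other) outcomes and tally the sums.
sums = Counter(r + o for r in range(1, 7) for o in range(1, 7))

print(sums[7])   # 6 ways to roll a seven: (1,6), (2,5), ..., (6,1)
print(sums[10])  # 3 ways to roll a ten: (4,6), (5,5), (6,4)
```

So seven is twice as likely as ten (6/36 vs. 3/36), which is exactly what the archetypal diagram shows.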

But starting with the theoretical approach won’t go very well. Furthermore, with my commitment to data and using randomization for inference, an empirical approach seems to make more sense and be more coherent. So that’s what I’m trying.

The key lesson for me from this report—related to “trust the data”—is that actual data, with the right technology, can illuminate the important concepts, such as independence. This makes me ask how much theoretical probability we need, if any.


### What Happened in Class

To do the Dice Sonata (previous post), I had given each student two dice: a red one and another one. They rolled them 50 times, recording each result twice: once on paper, so they could make the graph of actual results by hand for the sonata, and once on the computer in a Fathom Survey so we could easily assemble the results for the entire class.

If you haven’t used Fathom Surveys, you can think of it as a Google form that you can later drag directly into Fathom. The key thing here is that they recorded the red die and the other die separately. When we were done, we had 838 pairs.

This was Thursday, the second class of the semester. After students discussed the homework, and saw that their sets of 50 rolls didn’t produce graphs with their predicted shapes, we went to the computers to see if things looked any different with more data. To make the relevant graph, students had to make a new attribute (= variable = column) to add the two values—which they already knew how to do. Here is the bottom of the table and the graph:
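In Fathom the sum is a formula-defined attribute. Outside Fathom, a rough Python stand-in for the class dataset (the 17-student count and all names here are my own, invented for illustration) might look like:

```python
import random
from collections import Counter

random.seed(1)  # hypothetical seed, just so the sketch is reproducible

# Each of 17 hypothetical students records 50 (red, other) pairs,
# as in the Fathom Survey.
pairs = [(random.randint(1, 6), random.randint(1, 6))
         for _ in range(17 * 50)]

# The new attribute: the sum of the two dice for each recorded pair.
totals = [red + other for red, other in pairs]

# A frequency table standing in for the class graph of sums.
for value, count in sorted(Counter(totals).items()):
    print(value, count)
```

With hundreds of pairs, the tally is already much closer to the triangular shape than any single student's 50 rolls.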

One could stop here. But Fathom lets us look more deeply using its “synchronous selection” feature (stolen lovingly from ActivStats): what you select in any view is selected in all views.

## The Dice Sonata

Intrepid readers will remember the form of a Sonata for Data and Brain: you have three parts, prediction, data (or analysis or measurement), and comparison. As a first probability activity, we were to roll two dice and sum, and then repeat the process 50 times. The prediction question was, if you graphed these 50 sums, what would it look like? Then, of course, they were to do it with actual physical dice (more on the data from that next post) and then compare their graphs of real data with their predictions.

Note that we’re starting this entirely empirically. We might expect these juniors and seniors to “know the answer” because they probably did it theoretically and with real dice back in seventh grade. We would be wrong.

A key point: in the post referenced above, I bemoaned the problem of getting the kids to make explicit predictions. What’s great about doing this reflective writing (especially in this community) is that it prompts head-smacking realizations about practice and how to improve it, to wit: have the students turn in the prediction before they do the activity. In this case, I had them predict at the end of the first day (Tuesday) and turn it in; I copied them and taped the originals to their lockers before lunch; and today (Thursday), the next class, they turned in the Sonatas as homework. (I have not looked at them yet.)

### Sample Predictions

Remember, I asked for a graph, and that’s what I got. We discussed the phenomenon briefly to establish basic understanding, e.g., that the possible numbers are 2 through 12. But for your viewing pleasure, a few actual graphs appear at right.

The two outliers appear below. Even so, notice the variety. What does it say about student understandings (or preconceptions) about distributions?

In any case, my hope here was that when they plotted real data, they would be appalled by how not-following-the-pattern a collection of fifty rolls would be.

### What We Did

As you may recall, the “mission statement” for the class is that each student will:

• Learn to make effective and valid arguments using data
• Become a critical consumer of data

Along about the beginning of November, we had been doing some problem sets from The Model Shop, and other activities from Data in Depth, and I had been getting laconic answers like “four” and “0.87%” and really wanted a little more meat. That is, getting the right answer is not the same as making an effective argument, or even telling a decent story. In a flash of d’oh! inspiration I realized that if I wanted it, I should assess it.

But there is a problem with that: I have been constructing my standards (I’m calling them learning goals) as I go along, and had not figured out how to deal with the larger, mushier issues that are part of making valid and effective arguments using data.

This post is all about an at-least-partially-successful resolution.

### Constructing Learning Goals for Larger Pieces of Work

I love the kids in this class partly because they let me get away with, “this is a quiz and I want you to do your best but I really don’t know how to express the learning goals (a.k.a. standards) that go with it. So we’ll take the quiz first, OK? And figure out how to grade it later.” I explained to them what I was after in hand-wavey terms and off they went.

So they took the quiz (described later). Using their responses (and the pathologies therein), I was able to construct learning goals for this kind of writing, in particular, for the final semester project I alluded to in the last couple of posts. And here they are, quoted, as if they were official or something (we start with Learning Goal 17):

## Sonatas for Data and Brain

In the semester that just ended, we started and ended with US Census data. The first assignment was to make a “claim” about some data you explored. The final project was to write a medium-sized paper using Census data to illustrate a phenomenon in US History. (More on that later.)

In between, we did a number of things that, if I wanted to look really smart and together, I’d say were designed to take the interest that the first activity kindled, and help students develop the skills they were able to apply in the final project.

The reality is not so organized and purposeful, but I did intentionally use Sonatas for Data and Brain as preparation for the final project. Since you’ve never heard of them, I will explain.

In the book Data in Depth, now out of print (by me, Key Curriculum Press, 2001; it was replaced by some other books, but I don’t know if the sonatas are in them), I invented this genre of assignment to give students small-scale investigations. If you’re a music type, you know that a piece in “sonata form” has three distinct sections: the exposition, in which themes are introduced; the development, in which they get all mixed up and transformed; and the recapitulation, in which the themes re-appear in close to their original forms.

Taking off from this, a sonata for data and brain has three sections: prediction, measurement (or analysis), and comparison. These sections are fairly self-explanatory except that in prediction, the idea is to describe as accurately as possible what you think the data will look like when the investigation is done. Then you do the data thing (measurement, analysis) and finally compare what you got to your prediction.

## Timers and Variability II

So what happened in class? First, you want to see the data, right?

The basic story so far is that maybe a week ago, I let the students take the measurements, uploading the data—so we could all get everyone’s measurements—using Fathom Surveys. That worked great, but there was of course not enough time to do the analysis, so that got postponed.

And we still haven’t quite gotten through it—though they have had a couple dollops of homework to make progress—at least partly because I’m not sure of the best path to take. Next class—Thursday—I finally have enough time allocated to do more, and get to the bottom of something about variation; the next step in this thread is to do The Case of the Steady Hand.

So what actually happened and why am I a little at sea when the data are so interesting?

## Timers and Variability I

Time for a curriculum discovery! This may be old hat to others, but hey, it was new to me and I was very pleased with the idea. I’ll explain it, tell what is happening with it in the classroom, and muse briefly on the philosophical consequences. Onward!

You know those sand timers that you get in games? The ones that go one minute, or two, or three and you use them to time your turn? Our assistant head got a box of them, and has been offering them to us faculty for a while in case we wanted to use them in team-building meetings to help regulate turn-taking, i.e., keep us from running off at the mouth about our own precious problems.

She still had a lot left, and I realized they could be a great solution to a problem that has been in the back of my mind: how to teach about variability.

The nub: there are many sources of variability. Here are two:

• If you measure the same thing repeatedly, you get different results.
• If you measure different things, even if they’re supposed to be the same, you get different results.

The first has to do with the process of measurement, and the inevitable inaccuracies that result. The second has to do with genuine variation. It would be great to have a touchstone activity we could refer back to throughout the course when we want to make that distinction.
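A quick simulation can make that distinction concrete. This sketch is my own, with invented numbers (a one-minute sand timer, a half-second measurement error, a two-second timer-to-timer spread); it is not data from the class:

```python
import random

random.seed(0)  # just for reproducibility of the sketch

TRUE_TIME = 60.0  # hypothetical true run time of one "one-minute" timer

# Source 1: measuring the SAME timer repeatedly. All the spread comes
# from measurement error (reaction time, reading the clock, ...).
same_timer = [TRUE_TIME + random.gauss(0, 0.5) for _ in range(20)]

# Source 2: measuring DIFFERENT timers, each with its own true time.
# The spread now includes genuine timer-to-timer variation on top of
# the same measurement error.
different_timers = [random.gauss(60.0, 2.0) + random.gauss(0, 0.5)
                    for _ in range(20)]

def spread(xs):
    """Range of a list of measurements: a crude measure of variability."""
    return max(xs) - min(xs)

print(spread(same_timer))        # typically small
print(spread(different_timers))  # typically noticeably larger
```

The point of the touchstone activity is exactly this comparison: the second spread contains the first, plus something real.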

The trick is to find something that has the right characteristics. I had tried to have them measure the distance from our class to the wood shop under trying and variation-inducing conditions, but those data, as interesting as they were, didn’t quite fit the bill. Enter the sand timers, with the added bonus that we often measure distance—so measuring time is a treat.

## Trust the Data: A good idea?

When we last left our hero, he was wringing his hands about teaching stats and being behind; we saw a combination of atavistic coverage-worship and stick-it-to-the-man, can-do support for authenticity in math education. The gaping hole in the story was what was actually happening in the classroom. The plan in this post is to describe an arc of lessons we’ve been doing, tell what I like about it, and tell what I’m still worried about. Along the way we’ll talk about trusting the data. Ready? Good.

You know how students are exposed to proportional reasoning in Grade 5 or earlier, and they spend most of their middle-school years cementing this essential understanding? And how, despite all this, a lot of high-school students—and college students, and adults—seem not to have exactly mastered proportional reasoning?

I figured this was likely to be the case in my class, so when someone showed me the Kaiser State Health Facts site, I jumped right in, and pulled the class in with me. In it, you find all kinds of stats, for example, this snip from a page about New Mexico:

When you see something like this, you can’t make sense out of it until you know more, for example, what does the 96 mean? You have to look more carefully at the page to discover that it’s “per 100,000 population.” And nowhere do you see that it’s also “per year.”

But once you decode it, you can answer some questions. An obvious one is, “how many teenagers died in New Mexico that year?” Before we jump into proportions, though, let’s point out that this is probably not a very interesting question unless you live in New Mexico, and maybe not even then.
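Decoding “per 100,000 population per year” is just proportional reasoning. The 96 is from the page; the population figure below is a made-up placeholder (the actual New Mexico figure isn't in the snip), so the answer is illustrative only:

```python
RATE_PER_100K = 96       # from the page: deaths per 100,000 population per year
POPULATION = 200_000     # hypothetical base population, for illustration only

# rate is "per 100,000", so scale it to the actual population
deaths_per_year = RATE_PER_100K * POPULATION / 100_000
print(deaths_per_year)  # -> 192.0
```

The same one-liner answers any of the site's rates once you know the base population and the time period.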

So I just did one quick example in front of the kids, and then the assignment was to spend at least 15 minutes on the site, finding some rate of any interest at all, decoding it, and reporting one calculation you can make. We started in class. Kids found things that interested or horrified them. Abortion, pregnancy, and STD rates figured prominently.


## Reaction Shot: Learning the Rules to a Game

Riley Lark just posted the first “Flunecy” episode; the last paragraph reminds me of game rules. He asks:

> how do you decide where to draw the line? When do you say “this is fundamental and we need to understand it before we move on,” and when do you say, “you can sort of see how this works from this picture; now let’s move on?”

I wonder how parallel this is to learning to play a board game or a card game?

Usually somebody is there who knows the rules, and you hear the basic idea and a few tips, and everybody agrees that we’ll all start playing and learn as we go. When is that sufficient, and when do you actually have to read the rules?

I think that reading and internalizing rules is an interesting skill. Does that skill help with mathematics—or is it just something (like number theory, Riley might agree) that gives you formal underpinnings but is not really essential to becoming mathematically powerful? Don’t know.

I made some curriculum having to do with this. Like many math teachers, I like NIM games, but I’ve gotten tired of explaining the rules. So I made NIM problems where groups also have to learn the rules without prior explanation.

This is one of those cooperative-learning deals where each group gets 4–6 cards that they deal out; each member can look only at their own cards; they can share the information; the group has a problem to solve. You’re probably familiar with the format.

In this case, the problem is to learn to play the game and figure out how to win. The image at right links to a pdf. Print it out, cut it up, and pass out the cards. Seems to work pretty well.

This is from United We Solve; I learned this NIM variant in Winning Ways.