Flipping the Classroom: Exposition at Home?

Thank you, blogosphere. Here’s an example of an idea from somebody else that made a positive difference in my classroom (and gave me a topic to post about)…

A while back, @TeachingStatistics wrote about flipping the classroom: all the exposition takes place in videos that kids watch at home, leaving more class time for groupwork and actual one-on-one interaction with students. The guy she referenced—Aaron Sams, in Woodland Park, CO—has this inspiring vlurb:

This idea has lots of pluses, two of which are the time savings and the fact that the kids are watching videos all the time anyway, so why not for learning?

And one big minus: you have to make the videos. “I love Camtasia Studio,” he wrote. Well. I may not know Camtasia Studio, but I have made short instructional videos (for Fathom) using iMovie and the like, and I knew firsthand how frigging much time it takes to get it all right.

But then several things happened. One is that I did that ignite talk, for which I made my slides in Keynote, and another is that someone, maybe my chair, reminded me not to let the perfect be the enemy of the good, or something like that. Besides, I usually regret spending exposition time in class. When I get started, I usually talk too long. And it’s just not a great way to transfer information.

So I tried it, using Keynote (Apple’s PowerPoint; I assume PowerPoint has the same features) and as little time as possible in production. Keynote makes the technology part easy; I made the slides and then just recorded myself presenting them. I reviewed the slides, thought about what I wanted to say, and usually got the voiceover right in one take. Well, not “right,” but good enough. We were just starting the new semester, and a new topic, probability. So over a couple of weeks, I made an eight-part series, each one about 5 minutes long. It covers some fundamental ideas in probability, area models, and tree diagrams. The videos set us up for conditional probability without actually opening that door. Think of it as a one-short-period lecture on basic probability, broken up into chunks.

Anyhow, they were a hit. Students actually did the homework (“watch these videos”) and knew, at the beginning of class, some of the things I “covered” in the vids. And we could start with what they didn’t know. Furthermore, kids who did understand some technique from the video could help others in the discussions about the homework. And one even said (without any prompting) that the videos helped because you could stop them and go back. Hooray!

It still takes me quite a while to make each one, but all of that time goes into getting the slides to do what I want. I don’t know PowerPoint, but Keynote’s graphics capabilities, though simple, are enough for useful animations.

The biggest possible improvement: have the kids make the videos. Stay tuned.

If you look at them, I know I have to redo #6. If it looks OK, that means I fixed it… 🙂 Suggestions welcome.

(Here is a link to the 8-part probability series.)


December’s Ignite Talk

I had the distinct honor and terror of giving an “ignite” talk at the Asilomar math conference in December 2010. I was much more frightened than usual. This was partly due to the company of luminaries including Phil Daro, Scott Farrand, Steve Leinwand, Gretchen Mueller, and many others.

The extra anxiety also comes from the format: You submit 20 slides beforehand and they auto-advance every 15 seconds. So that’s five minutes, and there is no way to take a little extra time on a point if you need it, no chance to slip up. My voice is about a major 6th higher than usual. Still, it went well, and I’m pleased enough to want to share.

Here is the link for the event.

And here is my talk. I don’t understand why it’s longer than the allotted 5 minutes, but they set my slides and I’m not gonna check up on them…

An Empirical Approach to Dice Probability

Why seven is more likely than ten: the diagram I want them to have in their heads

We’re starting to learn about probability. Surely one of the quintessential settings is rolling two dice and adding. I’ll try to walk that back another time and rationalize why I include it, but for now, I want students to be able to explain why seven is more likely than ten. I want them to have that archetypal diagram in their heads.
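The content of that diagram (none of this code is from the class; it’s just a sketch to check the claim) amounts to enumerating the 36 equally likely outcomes of two dice and counting the ways each sum can occur:

```python
from collections import Counter

# All 36 equally likely (red, other) outcomes of two fair dice,
# tallied by their sum
ways = Counter(red + other
               for red in range(1, 7)
               for other in range(1, 7))

print(ways[7], ways[10])  # → 6 3
```

Six ways to make seven, only three to make ten, out of the same 36 cells: exactly what the diagram shows.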

But starting with the theoretical approach won’t go very well. Furthermore, given my commitment to data and to using randomization for inference, an empirical approach seems more coherent. So that’s what I’m trying.

The key lesson of this report, for me—related to “trust the data”—is that actual data, with the right technology, can illuminate the important concepts, such as independence. This makes me ask how much theoretical probability we need, if any.

What Happened in Class

To do the Dice Sonata (previous post), I had given each student two dice: a red one and another one. They rolled them 50 times, recording each result twice: once by hand, for the sonata, so they could make the graph of actual results themselves, and once on the computer in a Fathom Survey so we could easily assemble the results for the entire class.

If you haven’t used Fathom Surveys, you can think of it as a Google form that you can later drag directly into Fathom. The key thing here is that they recorded the red die and the other die separately. When we were done, we had 838 pairs.

This was Thursday, the second class of the semester. After students discussed the homework, and saw that their sets of 50 rolls didn’t produce graphs with their predicted shapes, we went to the computers to see if things looked any different with more data. To make the relevant graph, students had to make a new attribute (= variable = column) to add the two values—which they already knew how to do. Here is the bottom of the table and the graph:

The data table and the graph of the sum. BTW: notice the “13”? Someone had entered 5 and 8 for the two dice, resulting in hilarity, accusations, and a good lesson about cleaning your data.
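The sum-attribute step has an easy analog outside Fathom. Here is a sketch (not the class workflow; the 838 pairs here are simulated, not our actual survey data) of building the same table and tally:

```python
import random
from collections import Counter

random.seed(1)

# Simulate 838 (red, other) pairs, like the assembled class survey
rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(838)]

# The "new attribute": a sum computed from the two dice columns
sums = [red + other for red, other in rolls]

# Tally the sums, the way the class graph does
print(sorted(Counter(sums).items()))
```

A bogus entry like 5 and 8 would show up immediately as a sum of 13, which is how the class caught theirs.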

One could stop here. But Fathom lets us look more deeply using its “synchronous selection” feature (stolen lovingly from ActivStats): what you select in any view is selected in all views.

Continue reading An Empirical Approach to Dice Probability

A Road to Writing

As you may recall, the “mission statement” for the class is that each student will:

  • Learn to make effective and valid arguments using data
  • Become a critical consumer of data

Along about the beginning of November, we had been doing some problem sets from The Model Shop, and other activities from Data in Depth, and I had been getting laconic answers like “four” and “0.87%” and really wanted a little more meat. That is, getting the right answer is not the same as making an effective argument, or even telling a decent story. In a flash of d’oh! inspiration I realized that if I wanted it, I should assess it.

But there is a problem with that: I have been constructing my standards (I’m calling them learning goals) as I go along, and had not figured out how to deal with the larger, mushier issues that are part of making valid and effective arguments using data.

This post is all about an at-least-partially-successful resolution.

Constructing Learning Goals for Larger Pieces of Work

I love the kids in this class partly because they let me get away with, “this is a quiz and I want you to do your best but I really don’t know how to express the learning goals (a.k.a. standards) that go with it. So we’ll take the quiz first, OK? And figure out how to grade it later.” I explained to them what I was after in hand-wavey terms and off they went.

So they took the quiz (described later). Using their responses (and the pathologies therein), I was able to construct learning goals for this kind of writing, in particular, for the final semester project I alluded to in the last couple of posts. And here they are, quoted, as if they were official or something (we start with Learning Goal 17): Continue reading A Road to Writing

Timers and Variability II

So what happened in class? First, you want to see the data, right?

Timer data. Stripes indicate groups.

The basic story so far is that maybe a week ago, I let the students take the measurements, uploading the data—so we could all get everyone’s measurements—using Fathom Surveys. That worked great, but there was of course not enough time to do the analysis, so that got postponed.

And we still haven’t quite gotten through it—though they have had a couple dollops of homework to make progress—at least partly because I’m not sure of the best path to take. Next class—Thursday—I finally have enough time allocated to do more, and get to the bottom of something about variation; the next step in this thread is to do The Case of the Steady Hand.

So what actually happened and why am I a little at sea when the data are so interesting?

Continue reading Timers and Variability II

Simple Sampling Distribution Simulation in Fathom

What we're looking for. Result from 500 runs of the simulation.

Yesterday’s APstat listserve had a question about Fathom:

How do I create a simulation to run over and over to pick 10 employees.  2/3 of the employees are male

Since my reply to the listserve had to be in plain old text, I thought I’d reproduce it here with a couple of illustrations…

There are at least two basic strategies. I’ll address just one; this is the quick one and uses random number-ish functions. The others mostly use sampling. If you use TPS, I think it’s Chapter 8 of the Fathom guide that explains them in excruciating detail 🙂

Okay: we’re going to pick 10 employees over and over from a (large) population in which 2/3 of the employees are male.

(Why large? To avoid “without-replacement” issues. If you were simulating layoffs of 10 employees from an 18-employee company, 12 of whom were male, you would need to use sampling and make sure you were sampling without replacement.)
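That small-company scenario (a hypothetical sketch of the parenthetical above, not part of the listserve answer) is exactly what `random.sample` handles, since it draws without replacement:

```python
import random

# 18-employee company, 12 male and 6 female
employees = ["male"] * 12 + ["female"] * 6

# Lay off 10 of them: sampling WITHOUT replacement,
# so no employee can be picked twice
laid_off = random.sample(employees, 10)
print(laid_off.count("male"), "of the 10 laid off were male")
```

With only 18 employees, each pick changes the makeup of who’s left, which is why the independent-picks shortcut below only works for a large population.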

(1) Make a collection with one attribute, sex, and 10 cases

(2) Give the sex attribute this formula:

randomPick("male", "male", "female")
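If Fathom isn’t handy, the same trick translates directly to Python (a sketch, not Fathom’s internals): each case independently picks from a list in which “male” appears twice, giving probability 2/3, and we rerun the 10-employee pick over and over.

```python
import random

random.seed(0)

def pick_ten():
    # Analog of Fathom's randomPick("male", "male", "female"):
    # each of the 10 employees is male with probability 2/3
    return [random.choice(["male", "male", "female"]) for _ in range(10)]

# "Over and over": 500 runs, recording the number of males each time
counts = [pick_ten().count("male") for _ in range(500)]
```

Graphing `counts` gives the sampling distribution, like the 500-run result pictured at the top of this post.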

Continue reading Simple Sampling Distribution Simulation in Fathom

A First “Claim” Investigation

A slide showing another version of the instructions

These are something like the entire instructions for a mini-investigation that has taken much of the second and third of our class meetings:

Mess around with U.S. Census data in Fathom until you notice some pattern or relationship. Then make a claim: a statement that must be either true or false. Then create a visualization (in this case, a graph) that speaks to your claim. Then make one or two sentences of commentary. These go onto one or two PowerPoint slides.

The purpose is severalfold:

  • You get a chance to play with the data
  • You learn more Fathom features, largely by induction or osmosis or something; in any case, you learn them when you need them
  • You get to direct your own investigation
  • You get practice communicating in writing—or at least slideSpeak
  • I get to see how you do on all these things
  • We all get to try out the online assignment drop-box

In fact, it has gone pretty well. We started on Wednesday (the second class) with my demonstrating how to get anything other than the default variables. I modeled the make-a-claim and make-a-graph part by showing how to compare incomes between men and women.

Continue reading A First “Claim” Investigation