Wave Slicing and Remainders: a cool way to find the period of a periodic function

Last month, in Falmouth High School in Maine, some Honors Physics students were estimating the period of a mass hanging from a spring. They used InquirySpace/Data Games software and Vernier motion sensors, and got data that looks like this (Reading is in meters; Time in seconds):

To do their investigations, they needed the period of this wave.

• Some students found the peak of one wave, and subtracted its time from the peak of the next wave. This is the most straightforward and obvious approach. But if you do that, your period will always be a multiple of the time between points, in this case, 0.05 seconds. (This is part of what must have happened in the previous post.)
• Some students—sometimes with prodding—would take the time difference across several waves, and divide by the number of periods. It’s not obvious to students that this technique gives a more precise measurement for the period. It’s interesting to think about how we know that this is so; for example, if you use five periods, it’s now possible to get any multiple of 0.01 seconds; but does that mean it’s actually more precise? (Yes.) This technique also gives students a chance to be off by one: do you count the peaks? No. You have to count the spaces between the peaks. (Getting students to explain why is illuminating.)
• We could imagine trying to fit a sinusoid (and some students would, but it’s hard) or using a Fourier Transform (which is a black box for most students).
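For concreteness, the first two techniques can be sketched on simulated data (the sine wave, sampling interval, and period below are made up, not the students’ actual readings):

```python
# Simulated data (made-up numbers, not the students' readings): a sine wave
# sampled every 0.05 s, like the motion-sensor data in the post.
import math

DT = 0.05           # sampling interval, seconds
TRUE_PERIOD = 0.83  # hypothetical true period, seconds

times = [i * DT for i in range(200)]
readings = [math.sin(2 * math.pi * t / TRUE_PERIOD) for t in times]

# Indices of local maxima ("peaks")
peaks = [i for i in range(1, len(readings) - 1)
         if readings[i - 1] < readings[i] > readings[i + 1]]

# Technique 1: one peak to the next. Always a multiple of DT.
single = times[peaks[1]] - times[peaks[0]]

# Technique 2: span many waves and divide by the number of *gaps*
# between peaks (not the number of peaks).
multi = (times[peaks[-1]] - times[peaks[0]]) / (len(peaks) - 1)
```

With these numbers, the single-gap estimate can only come out 0.80 or 0.85 seconds, while the multi-gap estimate lands much closer to the true 0.83.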

But this post is about an alternative to all of these techniques—one that uses all the data and gives a much more precise result than the first two.

Strom’s Credibility Criterion

Long ago, way back when Dark Matter had not yet been inferred, I attended UC Berkeley. One day, a fellow astronomy grad student mentioned Strom’s Credibility Criterion in a (possibly beer-fueled) conversation—attributed to astronomer Stephen Strom, who was at U Mass Amherst at the time.

It went something like this:

Don’t believe any set of data unless at least one data point deviates dramatically from the model.

The principle has stuck with me over the years, bubbling in the background. It rose to the surface in a recent trip to the mysterious east (Maine) to visit a physics classroom field-testing materials from a project I advise called InquirySpace.

Background

There is a great deal to say about this trip, including some really great experiences using vague questions and prediction in the classroom, but this incident is about precision, data, and habits of mind. To get there, you need some background.

Students were investigating the motion of hanging springs with weights attached. (Vague question: how fast will the spring go up and down? Answer: it depends. Follow-up: depends on what? Answers: many, including weight and ‘how far you pull it,’ i.e., amplitude.)

So we make better definitions and better questions, get the equipment, and measure. In one phase of this multi-day investigation, students studied how the amplitude affected the period of this vertical spring thing.

If you remember your high-school physics, you may recall that amplitude has no (first-order) effect (just as weight has no effect in a pendulum). So it was interesting to have students make a pre-measurement prediction (often, that the relationship would be linear and increasing) and then turn them loose to discover that there is no effect and to try to explain why.

Enter Strom, after a fashion

Let us leave the issue of how the students measured period for another post. But one very capable and conscientious group found the following periods, in seconds, for four different amplitudes:

0.8, 0.8, 0.8, 0.8

Many of my colleagues in the project were happy with this result. The students found out—and commented—that their prediction had been wrong. So the main point of the lesson was achieved. But as a data guy, I heard the echo of Stephen Strom.

Questions to Encourage Actually Using Functions

Dan Meyer read yesterday’s post!! And commented!! Thoughtful as always, his comment deserves not just a reply but a post of its own.

The first question in a lot of the activities is “Predict what the relationship will look like.” What advantages and disadvantages does that question have over a question like, “Predict how tall a stack of 500 cups will be,” or another question that requires the relationship but which involves a more concrete objective?

Dan makes a good point. That sort of task promotes using the function. I think it’s especially powerful when it’s late in a sequence of questions. A prototype might be the handshake problem, where early questions help the kids understand what we’re asking with small numbers. The small-number, intuitive, draw-able, act-out-able cases help students get the notion of a sequence and start to organize their data. Then, as he points out, the question about a large number makes using symbols practical and desirable. Furthermore, the concrete example presumably helps bridge that abstraction gap.
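The handshake problem’s arc from small cases to symbols can be sketched in a few lines (the 500 below is mine, echoing Dan’s cup stack):

```python
# The handshake problem: small, act-out-able cases first, then the symbolic
# shortcut that makes a big number (here 500, echoing the cup stack) practical.
def handshakes(n):
    """Handshakes when each of n people shakes every other hand exactly once."""
    return n * (n - 1) // 2

small_cases = [handshakes(n) for n in range(2, 6)]  # [1, 3, 6, 10]
big_case = handshakes(500)                          # 124750
```

The small cases match what students get by acting it out; the formula is what makes the big case feel worth having.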

In this case, a good question of that sort might be, How many triangles do you have to make before the “spokes” are one meter long? That would be a good addition, but I’m not sure that the beginning is the right place to add it.
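Under one assumed construction (each new right triangle uses the previous spoke as a leg and has a fixed angle, so spokes grow geometrically; both the angle and the starting length below are my guesses, not the handout’s), the one-meter question is a quick computation:

```python
# Hypothetical model of the spiral (an assumption, not the handout's actual
# construction): each right triangle uses the previous spoke as a leg and has
# a fixed angle THETA, so each new spoke is the last divided by cos(THETA).
import math

THETA = math.radians(15)  # assumed construction angle
SPOKE_0 = 0.05            # assumed first spoke length, meters

def spoke_length(n):
    """Length of spoke n, growing geometrically from spoke 0."""
    return SPOKE_0 * (1 / math.cos(THETA)) ** n

# How many triangles before a spoke reaches one meter?
n = 0
while spoke_length(n) < 1.0:
    n += 1
```

The point of the question survives the guessed numbers: because the growth is exponential, the answer is surprisingly large, and changing the angle changes it dramatically.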

Let me explain: In this book manuscript, that opening question,

How will [the sequence number and the side lengths] be related?
Predict: What do you think the relationships will look like?

asks the kids to make a prediction before they measure anything. I think it makes a difference because it’s about data, and because the relationships are not intuitive.

This prediction business is tricky. I don’t pretend to know exactly what I’m talking about, but it’s informed by what I’ve learned using Sonatas for Data and Brain, and from some wonderful work in physics education by Eugenia Etkina and her colleagues.

Modeling a Spiral, and enjoying Desmos

At a recent meeting, I got to tell people about an old, unfinished book, EGADs (Enriching Geometry and Algebra through Data). The idea of the book is that there are geometrical constructions that have relationships under them—usually a relationship about length—that you can model using a symbolic formula.

Like that spiral. How does the length of the “spokes” of this spiral depend on the spoke number?

This post has two purposes:

1. To get you to try the spiral example.
2. To show how you can use the Desmos graphing calculator to do the graphing and calculation.

The draft of the book (link above) is free for now, but it occurred to me that you could do at least one activity (integrates trig, geometry, data, exponential functions) easily using Desmos’s cool new technology. Read on!

The activity: Spiral 20

Here’s what you do:

1. Construct the spiral of right triangles as described on the handout.
2. Measure the long legs of the triangles as suggested on the handout.
3. Go to the Desmos graphing calculator.
4. Enter the data.
• To do that you need a table. Click the “<” button under the little panel, upper left, where you will eventually enter a function.
• Click “table.” A table appears.
• Enter the data. You can change the variable names if you want.
5. Enter a function in that top panel. Try to match the data!
• If you leave parameters in the formula, it will ask if you want sliders. I love sliders. You will too.
• You can type values in, even if you have sliders.
• You can change the range of sliders by clicking the limits at the slider ends.
6. Play with the sliders and all the other features of this site.

Here is a graph of mine from Desmos, with only a little of the data:

Relationships. What Else Matters?

I’m working on a curriculum project associated with the Core Standards. In the high-school section on “Interpreting Categorical and Quantitative Data,” it occurred to me—as it had before when I was designing physics materials—that we really care about relationships instead of simple answers.

In physics, it was all about functions. You’re rolling a ball down a ramp. How long does it take? It depends. On what? Well, the steepness of the ramp, how far it has to roll, the moment of inertia of the ball, and so forth. We would rather have students construct the function—how the time depends on all these quantities—than simply to answer the question (12.2 seconds) for some specific setup.

When I was first working on materials for Fathom, I studied what stats education looked like. As you can imagine, very early on, we look at one variable (everybody repeat, “shape, center, spread!”) and learn about box plots and means.

When a Center Does Not Hold

Many books and stats “modules” for school misuse measures of center. At least that’s my conclusion right now; I’d be interested in comments.

I was struck by a lesson that presented an oft-used data set: the mean heart rates of a bunch of animals. The lesson presented the data, had students calculate the mean and median, and eventually make a box plot. An example dot plot appears at right.

The intention here is a good one: use real data about interesting things. But we have to stop and ask, what does the mean (or median) of this data set actually mean? The answer: not much. Finding meaning requires an absurd reach: If you randomly choose an animal from a sample that has one of each species (or two if you have an ark), the mean—in this case, 143 beats per minute—is the expected value of that animal’s heart rate. But we’re not going to do that. So why calculate the mean? (Savvy stats teachers will recognize that the unit of investigation—or whatever you want to call it—is a species here rather than an animal.)

[Note: the dot plot above is actually useful if you label the points; then you can see graphically where each species stands.]

An Unexpected Expected-Value Problem, and What Was Wrong With It

It’s such a joy when my daughter asks for help with math. It used to happen all the time; it’s rare now. She just started medical school, and had come home for the weekend to get a quiet space for concentrated study.

“Dad, I have a statistics question.” Be still, my heart!

“It’s asking, if you have a random mRNA sequence with 2000 base pairs, how many times do you expect the stop codon AUG to appear? How do you figure that out?”

I got her to explain enough about messenger RNA so that I could picture this random sequence of 2000 characters, each one A, U, G, or C, and remembered from somewhere that a codon was a chunk of three of these.

“I think it’s more of a probability or combinatorics question than stats…” I said. (I was wrong about that; interval estimates come up later. Read on.)
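Whatever you call it, the expected-value part can be sketched quickly. This is my own back-of-envelope version, not the course’s solution, and it surfaces a subtlety the question hides:

```python
# A back-of-envelope sketch (mine, not the course's solution) of the expected
# number of "AUG" triples in a random 2000-base sequence. Hidden subtlety:
# does the codon sit in a fixed reading frame (non-overlapping triples), or
# can it start at any position? The two answers differ by a factor of three.
import random

N = 2000
P_TRIPLE = (1 / 4) ** 3  # chance any given triple is exactly A-U-G

expected_in_frame = (N // 3) * P_TRIPLE  # 666 codons: about 10.4
expected_any_start = (N - 2) * P_TRIPLE  # 1998 starting positions: about 31.2

# Monte Carlo check of the fixed-frame expectation
random.seed(1)

def in_frame_count():
    seq = "".join(random.choice("AUGC") for _ in range(N))
    return sum(seq[i:i + 3] == "AUG" for i in range(0, N - 2, 3))

avg = sum(in_frame_count() for _ in range(2000)) / 2000
```

The simulation is also a nice hook for the interval-estimate discussion later: individual trials scatter well away from 10.4.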

Four blocks and their shadow. I set them on graph paper just to make this shot.

Dan Meyer’s post today is lovely as usual, and mentions the tree/shadow problem (we math teachers make right triangles to help us figure things out because the “tree-ness and shadow-ness don’t matter”).

And that reminded me of a problem I gave teachers long ago in SEQUALS-land that (a) worked really well to get at what I was after and (b) could turn into a great modeling activity that could fit in to that first-year course my fellow revolutionaries and I are gradually getting serious about.

Here’s the idea: we want to be able to predict the length of the shadow of a pile of blocks. So we’re going to make piles of blocks and measure the shadows, which will lead us to make a graph, find a function, etc. etc.

The sneaky part is that we’re doing this in a classroom, so to make good shadows we bring in a floor lamp and turn the class lights off.

I will let you noble readers figure out why this messes things up in a really delicious way. Two delicious ways, actually. I’ll give away the second:

The end of the shadow, closer up. Really: how long is it?

Of course we have all done height/shadow problems. But have you tried to measure a shadow lately? You have to make a lot of interesting decisions to measure a shadow; and a shadow from a pile of blocks made from a floor lamp exaggerates the problems, such as where do you measure from—the middle of the stack? The base on the shadow side? Where? And where do you measure to—where the fuzzy part of the shadow begins? Where it ends? And why is it fuzzy anyway?

This is why I love measurement as a strand so much. We always think of it as the weakling among content areas at the secondary level; it doesn’t have the intellectual heft of algebra or functions. But if you look closely (and go beyond the words in the standards) it’s a thing of beauty and (since we’re referencing Dan Meyer) perplexity. I did a chapel talk at Asilomar many Sundays ago in which I said that measurement was invented, inexact, and indirect. I still think that’s true, although as alliterative slogans go it’s hard to remember.

So: try this at home. Use Fathom if you have it. Come up with a function that models the shadow lengths. But don’t just figure it out like a math teacher—get the lamp, stack the blocks, and measure.
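If you want a theoretical function to compare against your measurements, here is one hedged model; the point-source treatment is my assumption, and the lamp height, distance, and block size are made-up numbers:

```python
# A hedged model (my assumption, not the post's answer): treat the lamp as a
# point source at height H, a horizontal distance D from the stack. Similar
# triangles give shadow length s = D * h / (H - h) for a stack of height h,
# which is decidedly not linear, unlike sun-cast shadows.
H = 1.5       # assumed lamp height, meters
D = 2.0       # assumed lamp-to-stack distance, meters
BLOCK = 0.04  # assumed height of one block, meters

def shadow_length(n_blocks):
    """Shadow length (meters) cast by a stack of n_blocks blocks."""
    h = n_blocks * BLOCK
    return D * h / (H - h)
```

Doubling the stack more than doubles the shadow; that curvature is measurable with a handful of blocks, fuzzy edges and all.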

KOLD Curriculum: Killing the Darlings in Math

Kill your darlings, kill your darlings, even when it breaks your egocentric little scribbler’s heart, kill your darlings.

—Stephen King, On Writing

Fiction writers have heard this advice: “Kill your [little] darlings.” I realized (at a wonderful lunch yesterday with colleagues who are planning a revolution) how this might apply to reinventing math curriculum.

The problem is that it’s really hard to imagine a math course without some of our favorite parts of math. We all have things we despise (one of mine is factoring trinomials), but every one of these things is some other teacher’s fave, the thing where they suddenly got how cool math was. And if we have that meeting where we decide what to throw out in order to put modeling in, we’ll keep everything. It would be like writing science standards in the 90s.

It’s a values/positions thing. We need to figure out carefully what each darling (ours and the others’) really means and see where the meat of it fits. Maybe we really can get at it with a modeling approach. Maybe it needs to remain the way it is. Or maybe it’s just later in the sequence.

And it’s not a zero-sum game (thanks, Mariel!). Ideally, kids get everything, using the best tools, always appropriately, in the most efficient imaginable sequence. But we will, occasionally, have to Kill Our Little Darlings. KOLD, but necessary.

Just for fun, I list a few of mine. Okay, some are not darlings, but I want to kill them off. So my lesson in collaboration may be letting them live…

• Factoring trinomials (but recognize squares!)
• Obscure trig identities (but keep Pythagoras, double angle, etc.)
• Remainder Theorem
• Integration by partial fractions

What are yours? And what can’t you give up?

Reflection on Modeling

Capybara. The world’s largest rodent.

I’m writing a paper for a book, and just finished a section whose draft is worth posting. For what it’s worth, I claim here that the book publisher (Springer) will own the copyright and I’m posting this here as fair use and besides, it will get edited.

Here we go:

Modeling activities exist along a continuum of abstraction. This is important because we can choose a level of abstraction appropriate to the students we’re targeting; presumably, a sequence of activities can bring students along that continuum towards abstraction if that is our goal.

As an example, consider this problem:

What are the dimensions of the Queen’s two pet pens?
The Queen wants you to use a total of 100 meters of fence to build a Circular pen for her pet Capybara and a Square pen for her pet Sloth. Because she prizes her pets, she wants the pet pens paved in platinum. Because she is a prudent queen, she wants you to minimize the total area.

Let’s look at approaches to this problem at several stops along this continuum:

a. Each pair of students gets 100 centimeters of string. They cut the string in an arbitrary place, form one piece into a circle and the other into a square, measure the dimensions of the figures, and calculate the areas. They glue or tape these to pieces of paper. The class makes a display of these shapes and their areas, organizes them (perhaps by the sizes of the squares), and draws a conclusion about the approximate dimensions of the minimum-area enclosures.

b. Same as above, but we plot them on a graph. A sketch of the curve through the points helps us figure out the dimensions and the minimum area.

Using Fathom to analyze area data. Sliders control (and display) parameter values. I have suppressed the residual plot, which is essential for getting a good fit.

c. This time we enter the data into dynamic data software, guess that the points fit a parabola, and enter a quadratic in vertex form, adjusting its parameters to fit the data. We see that two of these parameters are the side of the square and the minimum area.

d. Instead of making the shapes with string, we draw them on paper. Any of the three previous schemes apply here; and an individual or a small group can more easily make several different sets of enclosures. Here, however, the students need to ensure that the total perimeter is constant—the string no longer enforces the constraint. Note that we are still using specific dimensions.

e. We use dynamic geometry software to enforce the constraint; we drag a point along a segment to indicate where to divide the fence. We instruct the software to draw the enclosures and calculate the area.

f. We make a diagram, but use a variable for the length of a side. Using that, we write expressions for the areas of the figures and plot their sum as a function of the side length. We read the minimum off the graph.

g. As above, but we use algebraic techniques (including completing the square) to convert the expression to vertex form, from which we read the exact solutions. In this version, we might not even have plotted the function.

h. As above, but we avoid some messy algebra by using calculus.
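Versions (f) and (g) are easy to check in code; here is a minimal sketch (the variable names and the coarse scan are mine):

```python
# A sketch of versions (f) and (g): total platinum-paved area as a function of
# the square's side s, with 100 m of fence split between the square (4s) and
# the circle (the rest).
import math

FENCE = 100.0  # total fence, meters

def total_area(s):
    """Square of side s plus a circle made from the remaining fence."""
    r = (FENCE - 4 * s) / (2 * math.pi)
    return s * s + math.pi * r * r

# Version (f): read the minimum off a coarse numerical scan of feasible s
best_s = min((k / 1000 for k in range(0, 25001)), key=total_area)

# Version (g): completing the square gives the exact minimizer
exact_s = FENCE / (math.pi + 4)  # about 14.0 m; total area about 350.1 m^2
```

The scan and the algebra agree to the scan’s granularity, which is itself a nice point to make with students moving along the continuum.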

Now let’s comment on these different versions.