What’s the purpose of mathematical modeling? The easy answer is something like, to understand the real world. When I look more deeply, however, I see distinct reasons to model—and to model in the classroom. I hope that trying to define these will help me clarify my thinking and shed light on some of the worries I have about how modeling might be portrayed.

Let’s look at a few purposes and try to distinguish them. To save the casual reader time, I’ll talk about prediction, finding parameter values, and finding insight. I think the last is the most subtle and the one most likely to be missed or misused by future developers.

Maybe I’ll post more about each of these in detail later, but for now I’ll move quickly and not give extended examples.

I’m still trying to clarify my own thinking about modeling. Maybe trying to write it down will help.

Yesterday, we explored some definitions. Now let’s describe different types of mathematical modeling. I’ll call them “genres” here, but that may be too fussy. The point is that people mean different things when they use the term “modeling.” Sometimes it’s a difference in definition (is the key thing real-world application, or simplification, or does that matter?) and sometimes it’s using a different set of tools.

Here, then, are several different kinds of activities that all seem to me to be more-or-less clearly modeling.

Functions modeling data

This is the clearest to me, and the most obviously useful. And fun. It’s one of the things I like best about experimental science. It’s really cool to find the function that characterizes a set of data, and see how it fits (or doesn’t fit) the theory that gave rise to it. (You can also use the data and function to get insight into the phenomenon—use the data to figure out the theory—but that’s for another post.)

This type of modeling fits with our current math curriculum as well, the one that hurtles students towards calculus, because it’s all about functions. Students who fit curves to data find a use for all those functions they’ve been learning about. This is the impetus behind (my books!) EGADs and The Model Shop in math and A Den of Inquiry in physics.

In such an activity, you have data you can represent on a scatter plot (this means the data are continuous as opposed to categorical), and you try to find a function that “fits” the data as well as possible.

This function is imperfect. It’s a simplification of reality, but possibly a useful one. You can use this function to make predictions (more on this another time; I think it’s an important purpose of modeling), but, because the data don’t lie exactly on the function, there is some uncertainty in any prediction. The function’s precise shape and location are governed by parameters (e.g., slope and intercept for a line). Modeling reveals the values (or range of plausible values) for these parameters; often these values have a meaning in the original context (e.g., speed for the slope on a distance-time graph). That is, with modeling, you can learn things about the situation that gave rise to the data.
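To make that last point concrete, here is a minimal sketch in Python, with invented distance-time data: fit a line by least squares, and the slope parameter recovers a speed.

```python
# A minimal sketch of fitting a line to distance-time data and reading
# meaning from the parameters. The data values are invented; the slope
# plays the role of speed, the intercept a starting position.

def fit_line(xs, ys):
    """Least-squares slope and intercept for paired data."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Time (s) and distance (m), roughly d = 2t plus noise
times = [0, 1, 2, 3, 4]
dists = [0.1, 2.2, 3.9, 6.1, 7.9]
speed, start = fit_line(times, dists)  # speed comes out near 2 m/s
```

Because the points don’t lie exactly on the line, the fitted speed is close to, but not exactly, the “true” value; that gap is the uncertainty mentioned above.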

Modeling is at the center of what I love about math and math education, so I’m thrilled that the Core Standards highlight modeling and that it figures in our latest drafty California framework.

But I’m worried about definition creep. I’m worried that, in two years, when they’re trying to come up with modeling curriculum, people in schools doing the hard day-to-day work will be tempted to say that practically anything is modeling and come up with plausible rationalizations. That, in turn, will dilute the importance of including modeling in policy documents, and result in students who can’t model.

To forestall this, it’s important to know what modeling is and isn’t. So it’s with some embarrassment that I, modeling maven and aficionado, admit I have trouble drawing the lines. So consider this a first step in clarifying these questions for myself:

What is modeling?

What isn’t modeling?

How much do we care whether we can come up with a definition?

Let’s start with the Framework (Modeling appendix, April 2013 review draft, lines 13–14):

Put simply, mathematical modeling is the process of using mathematical tools and methods to ask and answer questions about real world situations (Abrams, 2012).

Of course they go on at length, but the key is a connection to the real world. Here is another definition that I have used recently:

A model is an abstract, simplified, and idealized representation of a real object, a system of relations, or an evolutionary process, within a description of reality. (Henry, 2001, p. 151; quoted in Chaput et al., 2008)

Here, the key ingredients are abstraction and simplification.

Another distinction worth noting is that the first is a definition of modeling—a process—and the second defines a model—a representation. I have no clue whether that matters much.

What do the Core Standards themselves say? First of all, the document identifies modeling as one of eight Mathematical Practices, a great list I have mentioned before. Here is the one called Model with Mathematics, and it’s worth quoting in its entirety:

Mathematically proficient students can apply the mathematics they know to solve problems arising in everyday life, society, and the workplace. In early grades, this might be as simple as writing an addition equation to describe a situation. In middle grades, a student might apply proportional reasoning to plan a school event or analyze a problem in the community. By high school, a student might use geometry to solve a design problem or use a function to describe how one quantity of interest depends on another. Mathematically proficient students who can apply what they know are comfortable making assumptions and approximations to simplify a complicated situation, realizing that these may need revision later. They are able to identify important quantities in a practical situation and map their relationships using such tools as diagrams, two-way tables, graphs, flowcharts and formulas. They can analyze those relationships mathematically to draw conclusions. They routinely interpret their mathematical results in the context of the situation and reflect on whether the results make sense, possibly improving the model if it has not served its purpose.

This is great, but it’s easy to imagine this lofty overarching idea getting lost when you’re designing a curriculum—or an assessment—and you have a chart of content to fill in. Fortunately, the Core Standards promote Modeling to the level of a content standard at high school. What do they say? Here’s a quote I find chilling:

Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol.

That is, you should find it everywhere, so we won’t list very many actual skills and goals. One could see this as a good thing: we’re celebrating the ubiquity of modeling. But I’m less sanguine about our ability to keep “overarching ideas” in mind, especially as we design assessments.

Last month, at Falmouth High School in Maine, some Honors Physics students were estimating the period of a mass hanging from a spring. They used InquirySpace/Data Games software and Vernier motion sensors, and got data that look like this (Reading is in meters; Time in seconds):

To do their investigations, they needed the period of this wave.

Some students found the peak of one wave, and subtracted its time from the peak of the next wave. This is the most straightforward and obvious approach. But if you do that, your period will always be a multiple of the time between points, in this case, 0.05 seconds. (This is part of what must have happened in the previous post.)

Some students—sometimes with prodding—would take the time difference across several waves, and divide by the number of periods. It’s not obvious to students that this technique gives a more precise measurement for the period. It’s interesting to think about how we know that this is so; for example, if you use five periods, it’s now possible to get any multiple of 0.01 seconds; but does that mean it’s actually more precise? (Yes.) This technique also gives students a chance to be off by one: do you count the peaks? No. You have to count the spaces between the peaks. (Getting students to explain why is illuminating.)
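The arithmetic of that second technique fits in a few lines. The peak times below are invented, but sit on the same 0.05-second grid as the sensor data:

```python
# The "many waves" technique: divide the total time span by the number of
# gaps between peaks (not the number of peaks!) to get an average period.

def period_from_peaks(peak_times):
    """Average period from a list of successive peak times."""
    gaps = len(peak_times) - 1  # count the spaces between peaks, not the peaks
    return (peak_times[-1] - peak_times[0]) / gaps

peaks = [0.40, 1.20, 2.00, 2.75, 3.55]  # five peaks, so four full periods
avg_period = period_from_peaks(peaks)   # 3.15 s / 4 = 0.7875 s
```

Notice that the result, 0.7875 s, is not a multiple of 0.05 s; spreading the measurement over four periods also spreads the timing error over four periods.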

We could imagine trying to fit a sinusoid (and some students would, but it’s hard) or using a Fourier Transform (which is a black box for most students).

But this post is about an alternative to all of these techniques—one that uses all the data and gives a much more precise result than the first two.

Long ago, way back when Dark Matter had not yet been inferred, I attended UC Berkeley. One day, a fellow astronomy grad student mentioned Strom’s Credibility Criterion in a (possibly beer-fueled) conversation—attributed to astronomer Stephen Strom, who was at U Mass Amherst at the time.

It went something like this:

Don’t believe any set of data unless at least one data point deviates dramatically from the model.

The principle has stuck with me over the years, bubbling in the background. It rose to the surface during a recent trip to the mysterious east (Maine) to visit a physics classroom field-testing materials from a project I advise called InquirySpace.

Background

There is a great deal to say about this trip, including some really great experiences using vague questions and prediction in the classroom, but this incident is about precision, data, and habits of mind. To get there, you need some background.

Students were investigating the motion of hanging springs with weights attached. (Vague question: how fast will the spring go up and down? Answer: it depends. Follow-up: depends on what? Answers: many, including weight and ‘how far you pull it,’ i.e., amplitude.)

So we make better definitions and better questions, get the equipment, and measure. In one phase of this multi-day investigation, students studied how the amplitude affected the period of this vertical spring thing.

If you remember your high-school physics, you may recall that amplitude has no (first-order) effect (just as weight has no effect in a pendulum). So it was interesting to have students make a pre-measurement prediction (often, that the relationship would be linear and increasing) and then turn them loose to discover that there is no effect and to try to explain why.
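For reference, the first-order result the students rediscover is that the period depends only on mass and spring constant. A tiny sketch (the mass and spring constant here are invented):

```python
import math

# First-order model for a mass on a spring: T = 2*pi*sqrt(m/k).
# Amplitude appears nowhere in the formula.

def spring_period(mass_kg, k_n_per_m):
    """Period of a mass-spring oscillator, to first order."""
    return 2 * math.pi * math.sqrt(mass_kg / k_n_per_m)

t1 = spring_period(0.10, 6.0)  # 100 g on a 6 N/m spring
t2 = spring_period(0.20, 6.0)  # doubling the mass, not the amplitude
# t2/t1 is sqrt(2): period scales with the square root of the mass.
```

Pulling the mass down farther changes nothing in this model, which is exactly what the data show.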

Enter Strom, after a fashion

Let us leave the issue of how the students measured period for another post. But one very capable and conscientious group found the following periods, in seconds, for four different amplitudes:

0.8, 0.8, 0.8, 0.8

Many of my colleagues in the project were happy with this result. The students found out—and commented—that their prediction had been wrong. So the main point of the lesson was achieved. But as a data guy, I heard the echo of Stephen Strom.

At a recent meeting, I got to tell people about an old, unfinished book, EGADs (Enriching Geometry and Algebra through Data). The idea of the book is that there are geometrical constructions that have relationships under them—usually a relationship about length—that you can model using a symbolic formula.

Like that spiral. How does the length of the “spokes” of this spiral depend on the spoke number?

This post has two purposes:

To get you to try the spiral example.

To show how you can use the Desmos graphing calculator to do the graphing and calculation.

The draft of the book (link above) is free for now, but it occurred to me that you could do at least one activity (integrates trig, geometry, data, exponential functions) easily using Desmos’s cool new technology. Read on!
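One plausible construction with the right flavor (an assumption on my part, not necessarily the book’s): each spoke is the hypotenuse of a right triangle whose adjacent leg is the previous spoke, with a fixed central angle. Then each spoke equals the previous one divided by the cosine of that angle, and spoke length grows exponentially with spoke number.

```python
import math

# Hypothetical spiral construction (assumed for illustration): each spoke
# is the previous spoke divided by cos(theta), for a fixed central angle
# theta. Lengths therefore grow geometrically with the spoke number.

def spoke_lengths(first_length, theta_radians, n_spokes):
    lengths = [first_length]
    while len(lengths) < n_spokes:
        lengths.append(lengths[-1] / math.cos(theta_radians))
    return lengths

spokes = spoke_lengths(1.0, math.radians(30), 5)
# Plotting log(length) against spoke number gives a straight line, which
# is one way such an activity ties trig and geometry to exponentials.
```

Collect the spoke lengths as data, fit the exponential, and the fitted growth factor should match 1/cos(theta) from the geometry.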

It’s such a joy when my daughter asks for help with math. It used to happen all the time; it’s rare now. She just started medical school, and had come home for the weekend to get a quiet space for concentrated study.

“Dad, I have a statistics question.” Be still, my heart!

“It’s asking, if you have a random mRNA sequence with 2000 base pairs, how many times do you expect the start codon AUG to appear? How do you figure that out?”

I got her to explain enough about messenger RNA so that I could picture this random sequence of 2000 characters, each one A, U, G, or C, and remembered from somewhere that a codon was a chunk of three of these.

“I think it’s more of a probability or combinatorics question than stats…” I said. (I was wrong about that; interval estimates come up later. Read on.)
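A back-of-envelope sketch, assuming each base is independent and uniform over A, U, G, C. Notice that the answer depends on whether you count every overlapping 3-base window or only the codons in one reading frame:

```python
# Expected number of AUG codons in a random 2000-base sequence,
# assuming independent, uniformly distributed bases.

p_codon = (1 / 4) ** 3  # probability a given 3-base window reads AUG: 1/64

# (1) Any position: a 2000-base sequence has 1998 overlapping 3-base windows.
expected_any_frame = 1998 * p_codon        # about 31.2

# (2) One fixed reading frame: 2000 // 3 = 666 non-overlapping codons.
expected_in_frame = (2000 // 3) * p_codon  # about 10.4
```

Either way, linearity of expectation does the work: the windows are not independent (they overlap), but expected counts still add.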

Dan Meyer’s post today is lovely as usual, and mentions the tree/shadow problem (we math teachers make right triangles to help us figure things out because the “tree-ness and shadow-ness don’t matter”).

And that reminded me of a problem I gave teachers long ago in SEQUALS-land that (a) worked really well to get at what I was after and (b) could turn into a great modeling activity that could fit into that first-year course my fellow revolutionaries and I are gradually getting serious about.

Here’s the idea: we want to be able to predict the length of the shadow of a pile of blocks. So we’re going to make piles of blocks and measure the shadows, which will lead us to make a graph, find a function, etc. etc.

The sneaky part is that we’re doing this in a classroom, so to make good shadows we bring in a floor lamp and turn the class lights off.

I will let you noble readers figure out why this messes things up in a really delicious way. Two delicious ways, actually. I’ll give away the second:

Of course we have all done height/shadow problems. But have you tried to measure a shadow lately? You have to make a lot of interesting decisions to measure a shadow; and a shadow from a pile of blocks made from a floor lamp exaggerates the problems, such as where do you measure from—the middle of the stack? The base on the shadow side? Where? And where do you measure to—where the fuzzy part of the shadow begins? Where it ends? And why is it fuzzy anyway?

This is why I love measurement as a strand so much. We always think of it as the weakling among content areas at the secondary level; it doesn’t have the intellectual heft of algebra or functions. But if you look closely (and go beyond the words in the standards) it’s a thing of beauty and (since we’re referencing Dan Meyer) perplexity. I did a chapel talk at Asilomar many Sundays ago in which I said that measurement was invented, inexact, and indirect. I still think that’s true, although as alliterative slogans go it’s hard to remember.

So: try this at home. Use Fathom if you have it. Come up with a function that models the shadow lengths. But don’t just figure it out like a math teacher—get the lamp, stack the blocks, and measure.
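If you want a head start on the first delicious way, here is a sketch (the lamp height and distance are invented): with the sun, rays are effectively parallel and shadow length is proportional to height; with a nearby floor lamp, similar triangles give something far less tame.

```python
# Shadow of a stack of height h, lit by a point source (the floor lamp)
# at height H, a horizontal distance d from the stack. Similar triangles
# give shadow = h*d/(H - h), which blows up as h approaches H.
# All the numbers below are invented for illustration.

def lamp_shadow(h, H, d):
    """Shadow length of a stack of height h under a point source."""
    return h * d / (H - h)

# A 1.5 m lamp, 2 m from the stack:
short = lamp_shadow(0.30, 1.5, 2.0)   # 0.5 m
taller = lamp_shadow(0.60, 1.5, 2.0)  # doubling the height more than doubles the shadow
```

So the graph the class builds is not the tidy proportional line from the tree/shadow problem; it curves upward, and the lamp’s height and position become parameters of the model.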

“Kill your darlings, kill your darlings, even when it breaks your egocentric little scribbler’s heart, kill your darlings”

—Stephen King, On Writing

Fiction writers have heard this advice: “Kill your [little] darlings.” I realized (at a wonderful lunch yesterday with colleagues who are planning a revolution) how this might apply to reinventing math curriculum.

The problem is that it’s really hard to imagine a math course without some of our favorite parts of math. We all have things we despise (one of mine is factoring trinomials) but every one of these things is some other teacher’s fave, the thing where they suddenly got how cool math was. And if we have that meeting where we decide what to throw out in order to put modeling in, we’ll keep everything. It would be like writing science standards in the 90s.

It’s a values/positions thing. We need to figure out carefully what each darling (ours and the others’) really means and see where the meat of it fits. Maybe we really can get at it with a modeling approach. Maybe it needs to remain the way it is. Or maybe it’s just later in the sequence.

And it’s not a zero-sum game (thanks, Mariel!). Ideally, kids get everything, using the best tools, always appropriately, in the most efficient imaginable sequence. But we will, occasionally, have to Kill Our Little Darlings. KOLD, but necessary.

Just for fun, I list a few of mine. Okay, some are not darlings, but I want to kill them off. So my lesson in collaboration may be letting them live…

I’m writing a paper for a book, and just finished a section whose draft is worth posting. For what it’s worth, I claim here that the book publisher (Springer) will own the copyright and I’m posting this here as fair use and besides, it will get edited.

Here we go:

Modeling activities exist along a continuum of abstraction. This is important because we can choose a level of abstraction appropriate to the students we’re targeting; presumably, a sequence of activities can bring students along that continuum towards abstraction if that is our goal.

As an example, consider this problem:

What are the dimensions of the Queen’s two pet pens?
The Queen wants you to use a total of 100 meters of fence to build a Circular pen for her pet Capybara and a Square pen for her pet Sloth. Because she prizes her pets, she wants the pet pens paved in platinum. Because she is a prudent queen, she wants you to minimize the total area.

Let’s look at approaches to this problem at several stops along this continuum:

a. Each pair of students gets 100 centimeters of string. They cut the string in an arbitrary place, form one piece into a circle and the other into a square, measure the dimensions of the figures, and calculate the areas. Glue or tape these to pieces of paper. The class makes a display of these shapes and their areas, organizes them—perhaps by the sizes of the squares—and draws a conclusion about the approximate dimensions of the minimum-area enclosures.

b. Same as above, but we plot them on a graph. A sketch of the curve through the points helps us figure out the dimensions and the minimum area.

c. This time we enter the data into dynamic data software, guess that the points fit a parabola, and enter a quadratic in vertex form, adjusting its parameters to fit the data. We see that two of these parameters are the side of the square and the minimum area.

d. Instead of making the shapes with string, we draw them on paper. Any of the three previous schemes apply here; and an individual or a small group can more easily make several different sets of enclosures. Here, however, the students need to ensure that the total perimeter is constant—the string no longer enforces the constraint. Note that we are still using specific dimensions.

e. We use dynamic geometry software to enforce the constraint; we drag a point along a segment to indicate where to divide the fence. We instruct the software to draw the enclosures and calculate the area. (In 2014, Dan Meyer did a number on a related problem and made two terrific dynamic geometry widgets, Act One and Act Two.)

f. We make a diagram, but use a variable for the length of a side. Using that, we write expressions for the areas of the figures and plot their sum as a function of the side length. We read the minimum off the graph.

g. As above, but we use algebraic techniques (including completing the square) to convert the expression to vertex form, from which we read the exact solutions. In this version, we might not even have plotted the function.

h. As above, but we avoid some messy algebra by using calculus.
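Approaches (f) through (h) can be sketched in a few lines of Python, using s for the side of the square. The numeric search stands in for reading the minimum off the graph; the exact values in the comments come from the calculus of (h).

```python
import math

# Approach (f): let s be the side of the square. The square uses 4s of the
# 100 m of fence; the circle gets the remaining 100 - 4s as circumference.

def total_area(s):
    square = s ** 2
    circumference = 100 - 4 * s
    circle = circumference ** 2 / (4 * math.pi)  # pi*r^2 with r = C/(2*pi)
    return square + circle

# Read the minimum off a (numeric) graph: scan feasible side lengths 0-25 m.
best_s = min((s / 1000 for s in range(0, 25001)), key=total_area)

# Approach (h), calculus, gives the exact answer: s = 100/(pi + 4), about
# 14.0 m, with minimum total area 2500/(pi + 4), about 350 m^2.
```

It is a nice check that the coarse search, the vertex-form algebra of (g), and the derivative of (h) all land on the same pen dimensions.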