I’m still trying to clarify my own thinking about modeling. Maybe trying to write it down will help.
Yesterday, we explored some definitions. Now let’s describe different types of mathematical modeling. I’ll call them “genres” here, but that may be too fussy. The point is that people mean different things when they use the term “modeling.” Sometimes it’s a difference in definition (is the key thing real-world application, or simplification, or does that matter?) and sometimes it’s using a different set of tools.
Here, then, are several different kinds of activities that all seem to me to be more-or-less clearly modeling.
Functions modeling data
This is the clearest to me, and the most obviously useful. And fun. It’s one of the things I like best about experimental science. It’s really cool to find the function that characterizes a set of data, and see how it fits (or doesn’t fit) the theory that gave rise to it. (You can also use the data and function to get insight into the phenomenon—use the data to figure out the theory—but that’s for another post.)
This type of modeling fits with our current math curriculum as well, the one that hurtles students towards calculus, because it’s all about functions. Students who fit curves to data find a use for all those functions they’ve been learning about. This is the impetus behind (my books!) EGADs and The Model Shop in math, and A Den of Inquiry in physics.
The “paragraphs” activities are good prototypes for this kind of thing.
In such an activity, you have data you can represent on a scatter plot (meaning the data are quantitative as opposed to categorical), and you try to find a function that “fits” the data as well as possible.
This function is imperfect. It’s a simplification of reality, but possibly a useful one. You can use this function to make predictions (more on this another time; I think it’s an important purpose of modeling), but, because the data don’t lie exactly on the function, there is some uncertainty in any prediction. The function’s precise shape and location are governed by parameters (e.g., slope and intercept for a line). Modeling reveals the values (or range of plausible values) for these parameters; often these values have a meaning in the original context (e.g., speed for the slope on a distance-time graph). That is, with modeling, you can learn things about the situation that gave rise to the data.
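To make this concrete, here is a minimal sketch in Python (assuming the numpy library); the distance-time data are invented for illustration, the kind you might collect from a toy train.

```python
# Fit a line to made-up distance-time data and inspect the parameters.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # time (s)
d = np.array([0.1, 2.2, 3.9, 6.1, 8.0])   # distance (m), with some noise

slope, intercept = np.polyfit(t, d, 1)     # least-squares line: d = slope*t + intercept
residuals = d - (slope * t + intercept)    # how far each point misses the model

print(f"speed ≈ {slope:.2f} m/s")          # the slope has a meaning in context
print("residuals:", np.round(residuals, 2))
```

The slope that comes out is the train’s speed, and the residuals are a reminder that the model is a simplification.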
Probability models
These function-models are not the only story, however. When you study probability, the phrase “probability model” comes up, as in this definition from the Yale stats web site:
A probability model is a mathematical representation of a random phenomenon. It is defined by its sample space, events within the sample space, and probabilities associated with each event.
They go on to explain that, if the sample space contains N equally likely events, and k of them represent a particular outcome A, then we write P(A) = k/N. The usual probability rules follow.
To me, this smells completely different from fitting a curve to data points. How is this anything like the kind of model we were talking about before?
We could say that it’s creating a mathematical structure, and using the rules that follow from it, in order to understand a real-world phenomenon, just as we put a line on a distance-time graph to find the speed of a train. But this seems like a stretch to me. Remember I’m worried about definition creep; if I were satisfied with that, I could start to justify everything as modeling.
I think the key is simplification. To me (at least as I’ve been thinking for the last couple of months) the key to understanding this as modeling is that the rules are not exactly true, but they’re coherent and they let us make predictions. Consider a six-sided die. What’s the probability that you roll a one?
1/6? Wrong. No way it’s exactly 1/6. There are imperfections in the die. Asymmetries. But it’s close to 1/6. It makes sense that it’s 1/6. And any predictions we make based on that assumption will be pretty much borne out. So we accept the fiction, the simplification, because it works.
What’s the probability of rolling snake-eyes? Not 1/36. Not only is P(1) ≠ 1/6, but our model assumes that the rolls of the two dice are independent of one another. They’re not, not completely. But the interaction is not large, and any prediction we make using that simplification will be good enough for almost any application. I mean, if Vegas is satisfied (for most customers) who are we to complain?
So our model is that things have these simple probabilities. One consequence of the model is that we can manipulate these probabilities using the familiar rules, such as P(not 1) = 1 − 1/6 = 5/6.
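A tiny sketch of those rules in action, in Python, with the fractions module keeping the arithmetic exact; the 1/6 and the independence are, of course, the model’s fictions:

```python
# The dice model's rules in action.
from fractions import Fraction

p_one = Fraction(1, 6)          # model assumption: P(1) = 1/6
p_not_one = 1 - p_one           # complement rule: 5/6
p_snake_eyes = p_one * p_one    # product rule, assuming independence: 1/36

print(p_not_one, p_snake_eyes)  # 5/6 1/36
```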
Still, it doesn’t quite smell right to me. It doesn’t have the same modeling “feel.” See simulation (below) for something related that does.
Geometrical models
The California Math Framework’s modeling appendix gives an example that includes:
“How can you eat a peanut butter cup candy in more than one bite and ensure that each bite has the same ratio of chocolate to peanut butter?” After simplifying the peanut butter cup to two cylinders, a peanut butter cylinder embedded in a chocolate cylinder, and simplifying a bite into an arc of a circle intersecting these two cylinders, students went to work trying to discover a formula for the volumes of both chocolate and peanut butter in each bite. (April 2013 review draft, circa line 300)
I love this one. It’s not like the functions thing, it’s not about data, but it really feels like modeling to me. Again, it feels just like what we have to do in science (“consider a spherical cow”), and again, it involves simplification: the peanut butter cup is not a cylinder. But we can make progress by solving the problem as if it were.
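Here, for what it’s worth, is a hedged sketch of one way the volume calculation could go, simplifying even further than the Framework does: treat the bite as a straight vertical cut at distance d from the axis, so that each bite removes a circular segment (times the height) from each cylinder. All the dimensions below are invented.

```python
import math

def segment_area(r, d):
    # Area of the circular segment cut off by a chord at distance d
    # from the center of a circle of radius r (requires 0 <= d < r).
    return r * r * math.acos(d / r) - d * math.sqrt(r * r - d * d)

def bite_volumes(r_choc, r_pb, h, d):
    # Volumes of chocolate and peanut butter removed by one straight bite.
    total = segment_area(r_choc, d) * h                  # segment of the whole cup
    pb = segment_area(r_pb, d) * h if d < r_pb else 0.0  # segment of the inner cylinder
    return total - pb, pb

choc, pb = bite_volumes(r_choc=2.0, r_pb=1.5, h=1.0, d=1.0)
print(f"chocolate ≈ {choc:.3f}, peanut butter ≈ {pb:.3f}, ratio ≈ {choc / pb:.2f}")
```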
I’ve labeled this section “geometrical models” because they’re common and easy to justify: city blocks might as well be rectangles; the volume of an ice-cream cone might be a cone plus a hemisphere; and so forth. But there are surely other examples I can’t think of in which we represent a real-world situation with some mathematical simplification.
The skills students need for this are different from the ones they need for modeling data with functions. This kind of abstraction and simplification has nothing to do with residuals, say, or the effect of parameters. Instead, it’s about making reasonable approximations and representing those real-world relationships in mathematical language.
Numerical models of change
If you’re studying something quantitative that changes, you can make a numerical model to study that change. For example, suppose you drop a ball at time t = 0. It accelerates with the acceleration of gravity, g. You may have learned that the displacement s as a function of time is
s = (1/2)gt².
But suppose you don’t know that. You could model the motion by looking at small time intervals Δt. During each interval, you
- Update the velocity v by adding gΔt.
- Update the position s by adding vΔt.
As you make Δt smaller, the results get as close as you want to the official, one-formula result. (You can also improve the quality of the results by using other techniques such as alternating velocity and position recalculations, doing one on even “ticks” of the clock and the other on the odd ticks.)
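Here is a minimal sketch of that loop in Python, compared with the closed-form s = (1/2)gt²; the time step and duration are arbitrary.

```python
# Simple step-by-step model of a dropped ball, following the update
# rules above: velocity first, then position with the new velocity.
g = 9.8                      # acceleration of gravity, m/s^2
t_end = 2.0                  # how long the ball falls, s
n_steps = 2000               # more steps = smaller dt = better answer
dt = t_end / n_steps

v, s = 0.0, 0.0
for _ in range(n_steps):
    v += g * dt              # update the velocity by g*dt
    s += v * dt              # update the position by v*dt

print(f"numerical: s ≈ {s:.4f} m")
print(f"analytic:  s = {0.5 * g * t_end ** 2:.4f} m")  # (1/2) g t^2 = 19.6 m
```

Increase n_steps and the two numbers converge, just as the paragraph above says.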
We generally don’t teach students numerical modeling, but the big dogs use it all the time. Many relationships (e.g., the n-body problem) have no analytic solution, so the only way to figure them out (for example, the position of Mars when the next probe arrives) is to do so numerically.
Although you have to make many calculations to get a result, each calculation is simpler than the “real” one, since everything is linear. And of course, a computer does the repetitive work. The model, in this case, is that the object (whether a ball or a planet) is moving at constant speed, but only for a short period of time. Then we use a similar, related model for the changing speed: the acceleration updates the speed, and we apply the new speed at the next time step.
To do this kind of modeling, you have to be able to express how the interrelated variables are changing. It’s the kind of thinking that you do in order to set up differential equations. But in this case, you don’t need to solve them. Instead, you’re solving difference equations, and doing those only for short time steps. It’s not exact—after all, it’s a model, a simplification—but again, you can make it as precise as you like simply by making smaller time steps.
Systems models
We can take these difference equations to a higher level.
Suppose you’re studying a system—the kind of thing you might represent as a network of boxes and arrows, sources and sinks. Systems models include dynamic ecological systems, economic models, and weather models. A simple example might be the populations of foxes and rabbits on an island; you might model these quantities as numbers (the number of foxes, the number of rabbits, the total amount of grass that’s growing) that are related to one another, and to time, through formulas.
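To make that concrete, here is a hedged sketch of foxes and rabbits as coupled difference equations (Lotka-Volterra style); every rate constant below is invented, and choosing those constants and relationships is itself the heart of the modeling.

```python
# Foxes and rabbits as coupled difference equations.
# All rate constants are made up for illustration.
rabbits, foxes = 100.0, 20.0
dt = 0.01                                   # time step

for _ in range(2000):                       # 20 time units
    births = 0.5 * rabbits                  # rabbits breed
    predation = 0.02 * rabbits * foxes      # foxes eat rabbits
    fox_growth = 0.002 * rabbits * foxes    # eating supports new foxes
    fox_deaths = 0.4 * foxes                # foxes die off

    rabbits += (births - predation) * dt
    foxes += (fox_growth - fox_deaths) * dt

print(f"rabbits ≈ {rabbits:.1f}, foxes ≈ {foxes:.1f}")
```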
Here, the model is not so much the calculation as the decision of what should be in the model and the choice of what the relationships are. Like the dynamical model, these typically get solved numerically. If you compare the results of your model to reality, you’re doing science.
This may be the same thing as the numerical modeling, above. I’m not sure. The difference may be that when we do planets, we’re convinced we know the short-term behavior of the system: what influences what, and how much. When we “run” the model on simple cases, we see whether we did the calculation correctly.
When we do economics or the weather, on the other hand, we’re making educated guesses about the underlying influences. When we “run” the model, we see the consequences of our guesses about how the system is structured. There may also be more randomness in the model, so that to make a prediction—about the weather or about the economy—we might run it a thousand times and report the distribution of results. As in, a 30% chance of rain.
Here’s another distinction: when we model planets, we use numerical techniques to simplify the calculation: it’s now linear instead of intractable. When we model the economy, that’s still true, but we’re also simplifying the system by limiting it to the variables we’ve put in and the relationships we’ve specified.
Simulation
Numerical modeling is one form of simulation. The word itself implies some kind of simplification. It’s not the real thing. There is no way to make a simulation as complicated as real life, so it’s bound to be simpler. When we make a model of a dynamical system, we decide on the relationships. To see that model’s consequences (which is presumably part of modeling), we implement it somehow and run it. The thing we run is a simulation of reality.
This is also what we do when we do statistics (using resampling techniques) or study probability, and explicitly investigate variability. We use a probability model to help design a simulation, and run it to see its consequences.
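A minimal sketch of that loop in Python: use the dice model from above to drive a simulation, and compare the simulated snake-eyes frequency with the model’s 1/36.

```python
# Roll two idealized dice many times; compare frequency with the model.
import random

trials = 100_000
snake_eyes = 0
for _ in range(trials):
    die1 = random.randint(1, 6)
    die2 = random.randint(1, 6)
    if die1 == 1 and die2 == 1:
        snake_eyes += 1

print(f"simulated P(snake eyes) ≈ {snake_eyes / trials:.4f}")
print(f"model     P(snake eyes) = {1 / 36:.4f}")
```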
When I was back East recently consulting for The Concord Consortium on their InquirySpace project, they had online simulations of physics phenomena that students could use to study the effects of different variables and to plan their hands-on investigations. They called these simulations models. This led me to some confusion: when you take the output of one of these simulations and try to fit a function to it, which one is the modeling? The answer, I think, is that the student is doing modeling only when they fit the function. The software developers were modeling when they made the simulation. If the student gets to make the simulation, they are doing two different kinds of modeling.
I agree with your final comment: modeling is the process of building a model, whether by fitting parameters, creating a system of equations, or writing a simulation. Watching a simulation is not modeling, but I see people rave about the PhET simulations as if they were an adequate substitute for real-world measurements.
And they are very cool. It’s true that simulations let us mess around with stuff that would be impractical, expensive, or dangerous. I think they can be a great learning tool.
It just seems to me that it’s not modeling. Using a model, yes. Making a model, no.
(Now, some simulations actually do let you wrestle with the data and make a model. Or, I guess you may develop some mental model in order to make sense of what you see.)
You also bring up another really interesting topic: the importance (or lack thereof; you and I agree on this, I think) of making real-world measurements.