This is another topic I want to write about. I did speak about it a few years ago at Asilomar, but it’s still kinda half-baked and is worth revisiting here because of how it fits with what I want to do in class this year. It will be interesting to look back in May and see what its role was. So, here we go:
Like many math educators, I used to dismiss the “measurement” strand. I thought of it as the weak sister of the NCTM content standards, nowhere near the importance of geometry or algebra, or even the late lamented discrete math. But I have seen the light, and now it’s one of my favorites. Not for how NCTM represents it, but for the juicy stuff that got left out.
I like the “rule of three” slogan of the title: Invented, inexact, and indirect. Oddly, I have trouble remembering it, and I created it. This suggests that something is wrong—but for now, let’s proceed as if it were perfect.
No matter where you stand on the great philosophical debate of whether mathematics is invented or discovered, it’s clear that measurement is a human creation. This makes it particularly ripe for us constructivists; it is part of the essence of measurement that people make it to serve their needs. Enough for now, let’s move on:
Inexactness is another essence of measurement, one we think of more often. Repeated measurements give different values; this leads to statistical ideas such as averaging a bunch of measurements, and to pedagogical ones such as insisting in some situations that answers not be precise numbers but rather ranges of values. Learners of all ages seem to have trouble with inexactness in mathematics; it seems to go against the nature of math. Probability has some of the same “OMG this isn’t like math is supposed to be” patina.
As an aside, I like to distinguish here between errors and mistakes. A sloppy measurement, or a misuse of a measuring tool, is a mistake. You can avoid mistakes. Error, on the other hand, is inevitable, part of the nature of measurement.
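The averaging-and-ranges idea above can be sketched in a few lines. This is a toy simulation, not real data: I'm assuming measurement error behaves like Gaussian noise, and the "true" length and its spread are invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

TRUE_LENGTH = 12.30  # cm; the "real" value, unknowable in practice

# Each measurement = true value + random error. The error is inevitable;
# here it's modeled as Gaussian noise with a 0.05 cm standard deviation.
measurements = [TRUE_LENGTH + random.gauss(0, 0.05) for _ in range(10)]

mean = sum(measurements) / len(measurements)
low, high = min(measurements), max(measurements)

# Report both a single summary value and a range of values.
print(f"mean: {mean:.2f} cm")
print(f"range: {low:.2f} to {high:.2f} cm")
```

A mistake, in the sense above, would be something like dropping a digit when recording one of these values; the noise term models error, which no amount of care removes.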
For today’s purposes, indirectness is the most important of the three. With the possible exception of the most elementary measurements, all measurements are indirect. Let’s stipulate that using a ruler or a stopwatch (say) is direct when measuring distance or time. To go beyond that usually requires some additional help.
Consider area. If you measure the area of a rectangle by seeing how many little squares cover it, that’s direct. But more often, we measure the height and the width and multiply. We take direct measurements and combine them using some sort of rule to get this derived, indirect measurement. Usually this combination connects to other aspects of mathematics: we express it symbolically as area = height × width and can do algebra to figure other stuff out. More examples:
- Inaccessible distances are more clearly “indirect”: we can use trig or some other similar-triangle technique to find the height of a flagpole we don’t want to climb or the distance across a river.
- Inaccessible a different way: we measure the size of oil molecules by dripping a known volume onto water and measuring the area of the slick. We assume a monomolecular layer; since volume = area × thickness, thickness = volume ÷ area.
- We seldom measure speed directly. Instead, we measure distance and time, and divide.
- When we measure something like the acceleration of gravity, we get even more indirect. We have a model with lots of assumptions, and measure quantities—such as the period of a pendulum and its length—to help us find what we want as a parameter in the model. Notice that you could think of speed (or even area) this way, even though we usually don’t.
- When you step on a scale to measure your weight, there are all sorts of indirectness going on: a toolmaker had to understand physics (and maybe electronics) well enough to make a reliable instrument, and calibrate it, so that the position of a needle or the active segments of an LCD indicates the right value.
- Now let’s get all statsy: when we try to determine a country’s per capita income, I claim that this is a measurement situation, and our indirectness comes from a different source: we might take a sample, and the sample is not the same as the population. But just as in the determination of gravity, we make assumptions, adopt a model, and find a parameter in that model that fits the data. Here, though, the model is based on probability instead of on some function; that’s what makes this a task in inferential statistics instead of physics.
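The pendulum bullet above can be made concrete. Assuming the simple-pendulum model T = 2π√(L/g), we measure the length and the period directly, then solve for g, the parameter we actually want. The numbers below are invented for illustration, not real lab data.

```python
import math

# Direct measurements (illustrative numbers, not real data):
length = 0.995   # m, measured with a tape
period = 2.004   # s, an average of several timed swings

# Simple-pendulum model: T = 2*pi*sqrt(L/g).
# Solving for the parameter we want: g = 4*pi^2 * L / T^2
g = 4 * math.pi**2 * length / period**2

print(f"g is about {g:.2f} m/s^2")
```

Note how the model carries the assumptions (small swings, massless string, point bob); the direct measurements only feed it.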
That’s just about enough for now. But we have at least two threads to follow from here that I want to foreshadow so I remember them:
- Notice how interesting it is to think of per capita income as a parameter, in parallel with the gravity problem. We’re not after the average of the sample (which is a measure) but rather the unknown average across the population. And a probability model helps us decide the width of the (interval) answer. I’m intrigued by the way this seems to be connecting the (function-oriented) modeling part of stats to the inferential side, at least where we make interval estimates.
- We are heading for one of my goals for kids: that they be able to develop plausible measures for quantities they are interested in, and to see how this connects to their ability to build useful things using symbolic mathematics. (And imagine how hard that will be to assess…)
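The interval-estimate thread can be sketched too. Here is a minimal version of the per-capita-income situation, assuming a made-up sample and the usual normal approximation for the 95% interval; a real survey would need a defensible sampling design and probably a t-based interval.

```python
import math
import statistics

# A made-up sample of incomes (in thousands); in practice this
# would come from a survey of the population.
sample = [28, 35, 41, 22, 57, 33, 46, 30, 38, 52, 25, 44]

n = len(sample)
mean = statistics.mean(sample)                  # a measure of the sample
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean

# 95% interval under a normal approximation: the answer is a range,
# an estimate of the unknown population parameter, not a single number.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"estimated per capita income: {low:.1f} to {high:.1f} thousand")
```

The parallel with the pendulum is the point: in both cases the direct measurements feed a model, and what we report is the model's parameter.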