Last time, we discussed random and not-so-random star fields, and saw how we could use the mean of the minimum distances between stars as a measure of clumpiness. The smaller the mean minimum distance, the more clumpy.
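That mean-of-minimum-distances measure is easy to compute directly. Here is a minimal Python sketch of the idea (the function name and the example point sets are mine, not from the original post); it finds each point's nearest neighbor and averages those distances:

```python
import math
import random

def mean_min_distance(points):
    """Mean, over all points, of the distance to each point's nearest neighbor."""
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        nearest = min(
            math.hypot(x1 - x2, y1 - y2)
            for j, (x2, y2) in enumerate(points)
            if i != j
        )
        total += nearest
    return total / len(points)

random.seed(1)
scattered = [(random.random(), random.random()) for _ in range(200)]
# A clumpy field: the same 200 points squeezed into one corner.
clumped = [(x / 4, y / 4) for (x, y) in scattered]
print(mean_min_distance(scattered) > mean_min_distance(clumped))  # prints True
```

Squeezing the points together shrinks every nearest-neighbor distance, so the clumped field gets the smaller mean, just as the measure promises.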
What other measures could we use?
It turns out that the Professionals have some. I bet there are a lot of them, but the one I dimly remembered from my undergraduate days was the “index of clumpiness,” made popular—at least among astronomy students—by Neyman (that Neyman), Scott, and Shane in the mid-50s. They were studying Shane and Wirtanen’s catalog of galaxies and analyzing the galaxies’ clustering. We are simply asking, is there clustering? They went much further, and asked, how much clustering is there, and what are its characteristics?
They are the Big Dogs in this park, so we will take lessons from them. They began with a lovely idea: instead of looking at the galaxies (or stars) as individuals, divide up the sky into smaller regions, and count how many fall in each region.
There really is such a thing. Some background: The illustration shows a random collection of 1000 dots. Each coordinate (x and y) is a (pseudo-)random number in the range [0, 1) — multiplied by 300 to get a reasonable number of pixels.
The point is that we can all see patterns in it. Me, I see curves and channels and little clumps. If they were stars, I’d think the clumps were star clusters, gravitationally bound to each other.
But they’re not. They’re random. The patterns we see are self-deception. This is related to an activity many stats teachers have used, in which the students are to secretly record a set of 100 coin flips, in order, and also make up a set of 100 random coin flips. The teacher returns to the room and can instantly tell which is the real one and which is the fake. It’s a nice trick, but easy: students usually make the coin flips too uniform. There aren’t enough streaks. Real randomness tends to have things that look non-random.
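To play with the count-the-regions idea, here is a hedged Python sketch: it scatters 1000 random dots in a 300 × 300 square, as in the illustration, divides the square into a grid of cells, and computes the variance-to-mean ratio of the cell counts. For a truly random scatter that ratio should be near 1, and clumping pushes it above 1. (This is the standard quadrat "index of dispersion"; the exact index Neyman, Scott, and Shane used may differ in its details.)

```python
import random

random.seed(2)
SIZE, N, CELLS = 300, 1000, 10  # square side, number of dots, 10 x 10 grid

points = [(random.random() * SIZE, random.random() * SIZE) for _ in range(N)]

# Count how many dots land in each cell of the grid.
counts = [0] * (CELLS * CELLS)
for x, y in points:
    col = min(int(x / SIZE * CELLS), CELLS - 1)
    row = min(int(y / SIZE * CELLS), CELLS - 1)
    counts[row * CELLS + col] += 1

mean = sum(counts) / len(counts)  # 1000 dots / 100 cells = 10
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(var / mean, 2))  # near 1 for a random field; bigger means clumpier
```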
Actually teaching every day again has seriously cut into my already-sporadic posting. So let me be brief, and hope I can get back soon with the many insights that are rattling around and beg to be written down so I don’t lose them.
Here’s what I just posted on the apstat listserv; refer to the illustration above:
I’ve been trying to understand Bayesian inference, and have been blogging about my early attempts both to understand the basics and to assess how teachable it might be. In the course of that (extremely sporadic) work, I just got beyond simple discrete situations, gritted my teeth, and decided to tackle how you take a prior distribution of a parameter (e.g., a probability) and update it with data to get a posterior distribution. I was thinking I’d do it in Python, but decided to try it in Fathom first.
It worked really well. I made a Fathom doc in which you repeatedly flip a coin of unknown fairness, that is, P( heads ) is somewhere between 0 and 1. You can choose between two priors (or make your own) and see how the posterior changes as you increase the number of flips or change the number of heads.
Since it’s Fathom, it updates dynamically…
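The Fathom doc itself isn't reproduced here, but the underlying computation is tiny. Here is a Python sketch of the same idea as a grid approximation; the uniform prior and the flip counts (20 flips, 14 heads) are example choices of mine, not values from the Fathom document:

```python
# Grid-approximate Bayesian update for a coin of unknown fairness.
n_grid = 101
ps = [i / (n_grid - 1) for i in range(n_grid)]  # candidate values of P(heads)
prior = [1.0 / n_grid] * n_grid                 # uniform prior over those values

flips, heads = 20, 14                           # example data
likelihood = [p**heads * (1 - p)**(flips - heads) for p in ps]

# Posterior is proportional to prior times likelihood; normalize to sum to 1.
unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

best = ps[posterior.index(max(posterior))]
print(best)  # posterior mode: 0.7, i.e., 14/20
```

Rerunning with more flips (or a different prior) shows the posterior tightening around the true P(heads), which is exactly what the dynamic Fathom version lets students watch happen.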
Not an AP topic. But should it be?
Here’s a link to the post, from which you can get the file. I hope you can get access without being a member. Let me know if you can’t and I’ll just email it to you.
Last time, we saw how the length of a hanging slinky is quadratic in the number of links, namely,

$$L = \frac{\epsilon M g\, n}{2}$$

where M is the mass of the hanging part of the slinky, g is the acceleration of gravity, and $\epsilon$ is the “stretchiness” of the material (related to the spring constant k—but see the previous post for details). Since M is itself proportional to the number of links n, the length is quadratic in n.
And this almost perfectly fit the data, except when we looked closely and found that the fit was better if we slid the parabola to the right a little bit. Here are the two graphs, with residual plots:
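One way to convince yourself that the link-by-link physics really does give a quadratic is to build the slinky in code. In this sketch, each link stretches in proportion to the weight hanging below it; the stretchiness and per-link mass are made-up values, not measurements:

```python
EPSILON, M_LINK, G = 0.001, 0.002, 9.8  # made-up stretchiness, kg per link, m/s^2

def hanging_length(n):
    """Total length: each link's stretch is proportional to the weight below it."""
    return sum(EPSILON * (n - i) * M_LINK * G for i in range(n))

# A quadratic sampled at equal spacing has constant second differences:
lengths = [hanging_length(n) for n in range(0, 101, 10)]
second_diffs = [lengths[k] - 2 * lengths[k + 1] + lengths[k + 2]
                for k in range(len(lengths) - 2)]
print(all(abs(d - second_diffs[0]) < 1e-9 for d in second_diffs))  # prints True
```

The constant second differences are the computational fingerprint of a parabola, the same check you might have students do by hand on their measured data.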
Last time, we (re-)introduced the Hanging Slinky problem, designed a few years back as a physics lab but suitable for a math class, say Algebra II or beyond. We looked at the length of the hanging slinky as a function of the number of slinks that hang down, and it looked seriously quadratic.
I claim that knowing that the real-world data is quadratic will help you explain why the data has that shape. That is, “answer analysis” will guide your calculations.
I beg you to work this out for yourself as much as you can before reading this. I made many many many wrong turns in what is supposed to be an easy analysis, and do not want to deprive you of that—and the learning that comes with it.
Slinkies are great. You can demonstrate waves. You can make them go down stairs. They are super-dynamic physics toys. They make a great sound.
But they are also pretty great when static. Consider, for example, a hanging slinky. How far down does it hang?
Well. It depends.
For this post, I’ll skip the question-posing part of this and go directly to what it mostly depends on: the number of coils (slinks) that are hanging down.
Let’s skip all the way to the data. Here is a graph of the length (in cm) of a hanging slinky as a function of the number of slinks. You should, of course, record your own data, if for no other reason than to experience the glorious difficulty of measuring the distance.
We can pause here and make sure the graph makes sense. What do you see in the slinky itself? How would you describe the spacing of the coils in the hanging slinky? How does that pattern get reflected in the data and in the graph?
Here’s something I’m puzzled about in trying to push this picture of math education, the one where we collect data and try to model it with functions: when I come up with ideas for suitable activities, they often require “harder” functions than students may be used to seeing. Let me give an example based on a super-traditional problem such as
A 5-meter ladder is leaning against a wall. The bottom of the ladder is 2.5 meters from the foot of the wall. How high is the top of the ladder?
We know what the kid is supposed to do, traditionally. Draw the picture, recognize the right triangle, notice that the hypotenuse is 5, and write something like

$$h^2 + 2.5^2 = 5^2$$

and solve the equation to find that $h = \sqrt{18.75} \approx 4.33$ meters.
But in the approach I’m trying to push, we de-emphasize the specific answer and focus on the relationship. We take a 5-meter ladder (or its more practical equivalent, a chair) and set it next to a wall, at different distances (let’s call that distance base), and see how high the ladder reaches (height). We plot height against base, and try to figure out a function that is a good model for the data.
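For the ladder, the model function behind the data is easy to write down; this little sketch (the variable names are mine) tabulates it for a few base distances:

```python
import math

LADDER = 5.0  # ladder length in meters

def height(base):
    """Height reached by a LADDER-meter ladder whose foot is `base` meters from the wall."""
    return math.sqrt(LADDER**2 - base**2)

for b in [0.5, 1.5, 2.5, 3.5, 4.5]:
    print(f"base = {b}  height = {height(b):.2f}")
```

The point of the activity is that this relationship, a square root of a difference of squares, is a "harder" function than students usually get to fit to data, even though the underlying geometry is the most traditional problem imaginable.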
You should already have read or skimmed part 1 and part 2 of this saga. In part 2, we showed the picture of the globe (reproduced here) that purported to show how the aspect ratio of a 10° by 10° box at 60° was 2:1.
Of course, it’s only approximately 2:1, I told the students, because (small pause to get attention) the scale on the globe is changing with every degree. In fact, it’s changing continuously. At this point, the students muttered, “mmm, calculus.” So at least they smelt it coming.
I’d like to be able to say that at this point I turned it over to them: “Indeed! Calculus! We’re trying to find $y(\lambda)$, the function that gives us the total y-distance on the map as a function of the latitude $\lambda$. Work in your groups to figure out just what calculus you need to do to find that function, and be ready to present in about five minutes.”
But I can’t. Being realistic, it was good that they sensed calculus, but in this unusual context—the sphere and everything—it would have been excruciating. So I scaffolded like my life depended on it. I reproduce the chart we developed last time that shows the dimensions of the 1° boxes.
“Indeed! Calculus! After all, we want $y(\lambda)$, the y-distance as a function of latitude. So to find the total distance, to, say, 60°, I’m going to start at 0° and use this number we have in the table, sec(0°), times 1° because that’s the height per 1°, right? Then for the next degree, I have to add sec(1°) times 1°, then for the next one, sec(2°) times 1°, all the way up to 60.”
I write the sum on the board (still ignoring the constant factor, by the way, but that was OK):

$$y(60°) \approx \sec(0°)\cdot 1° + \sec(1°)\cdot 1° + \cdots + \sec(59°)\cdot 1° = \sum_{k=0}^{59} \sec(k°)\cdot 1°$$
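That degree-by-degree sum takes three lines of Python. In this sketch I express the 1° width in radians, so the sum approximates the actual integral rather than being off by the constant factor the classroom discussion was ignoring:

```python
import math

DEG = math.pi / 180  # one degree, expressed in radians

# Left-hand sum: sec(0)*1deg + sec(1deg)*1deg + ... + sec(59deg)*1deg
total = sum((1 / math.cos(k * DEG)) * DEG for k in range(60))
print(round(total, 4))
```

Shrinking the step from 1° to, say, 0.01° is a nice way to let students watch the sum settle toward the value of the integral.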
“And of course, within each of these 1° sections, it’s changing continuously too. So what should we do?”
Now, almost in unison, “integrate secant,” and a student bravely writes,

$$y(\lambda) = \int_0^{\lambda} \sec\theta\, d\theta$$
Doing the Integral—the way cool way
“And of course, you all know the integral of the secant, right?”
They squirm, but I take them off the hook. “Of course you don’t. Nobody should remember that. But what’s the most practical way to find this integral?”
“A substitution?” they ask. “Do we, like, do it by parts somehow?”
“All good suggestions, but it’s really not obvious what to substitute. And for parts, usually there’s more stuff, right? So you can have a v and a du? What I’m really asking is, what’s a really practical way to find this integral?”
“Look it up?”
“That will work, but I want to show you another way.” I pull out my iPhone and hook it up to the projector. “Siri, integrate secant of x.”
“You’re kidding me,” says one kid. Class chatter rises.
“Hmm, let me think…,” says Siri. She shows me an approximation and a link to Wolfram Alpha, which I tap. And this appears:
At this point, they returned to Desmos and their data to see if this function actually worked. And it did, to enormous satisfaction throughout the class. That this obscure function (a log of a combination of trig functions) fit these data was a bit of a miracle, a triumph of actually using calculus.
Actually Finding the Integral
We did not find the integral in class, but in case you care, here is a derivation, which shows just how arcane these can get. This is why you look these up. And why it’s so great that Wolfram Alpha exists, because they give you the answer and the derivation.
It turns out that there is a great substitution! First you multiply top and bottom by $(\sec x + \tan x)$:

$$\int \sec x\, dx = \int \frac{\sec x\,(\sec x + \tan x)}{\sec x + \tan x}\, dx$$

Now we do a u-substitution, choosing

$$u = \sec x + \tan x$$

When you take its derivative, you discover that by some miracle,

$$du = (\sec x \tan x + \sec^2 x)\, dx = \sec x\,(\tan x + \sec x)\, dx = u\,\sec x\, dx$$

Which means that

$$\int \sec x\, dx = \int \frac{du}{u} = \ln|u| + C = \ln\left|\sec x + \tan x\right| + C$$
Which turns out (after even more mind-numbing algebra than it took to find du above) to be equivalent to the horrible formula with the half-angles that Wolfram gave us at first.
Perhaps there is a way to anticipate that that particular substitution would work out so well, but I sure don’t have that kind of insight.
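If you want a quick sanity check that the two answers agree without grinding through that algebra, a few lines of Python will do it. This is a numeric spot check, not a proof; I'm writing the half-angle form as $\ln\tan(x/2 + \pi/4)$, which is one common way it appears:

```python
import math

def log_sec_tan(x):
    """The antiderivative from the u-substitution: ln(sec x + tan x)."""
    return math.log(1 / math.cos(x) + math.tan(x))

def log_half_angle(x):
    """One common half-angle form of the same antiderivative."""
    return math.log(math.tan(x / 2 + math.pi / 4))

# Spot-check agreement at several points in (0, pi/2):
for x in [0.1, 0.5, 1.0, 1.4]:
    assert math.isclose(log_sec_tan(x), log_half_angle(x), rel_tol=1e-9), x
print("the two forms agree")
```

Behind the scenes this is the identity $\tan(\pi/4 + x/2) = \sec x + \tan x$, which you can verify with the tangent addition formula.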
You really don’t want to read all the details. But among the crucial steps in getting the lesson outlined last time to work right, the crucialest might be the place where the students figure out that the y-scale goes as the secant.
How do we help them figure that out? (And what do we mean by that?) That’s what we’ll talk about today.
First of all: that y-scale is the number of centimeters per degree in the y-direction, and that depends on latitude. And when you look at a Mercator-projected Earth (like the one in the figure) you can see that this scale increases with latitude. How do we know? Because the lines of latitude get farther apart. So more centimeters per degree.
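You can see that stretching numerically. Using the standard Mercator y-function $y(\varphi) = \ln\tan(\pi/4 + \varphi/2)$ (with latitude in radians, and ignoring the map's overall scale, which is my simplification here), the vertical space each degree of latitude occupies grows like the secant:

```python
import math

def mercator_y(lat_deg):
    """Mercator y-coordinate for a latitude given in degrees (unit-sphere scale)."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

# Vertical space taken by one degree of latitude, near the equator and near 60°:
near_equator = mercator_y(0.5) - mercator_y(-0.5)
near_sixty = mercator_y(60.5) - mercator_y(59.5)
print(round(near_sixty / near_equator, 3))  # close to sec(60°) = 2
```

That ratio of 2 is exactly the 2:1 aspect ratio of the 10° box at 60° that started this whole story.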
Last week, Zoya let me into her calculus class to do a data-rich activity of my choosing. Ideally it would involve calculus, appropriate for these kids who had already taken the AB exam. Most of my activities that use elementary functions to model data we get from the world (The Model Shop) or measure ourselves (EGADs) don’t involve calculus, although I think they suit a wide range of students.
For some weeks I thought about freeways, and the optimization problem of figuring out at what speed freeways have the greatest flow of traffic. It’s yummy because of the optimization, of course (and that gives us calculus, or at least smells that way) and also because you have to wrap your mind around what you mean by flow. I also found public data from CalTrans—but that’s all a story for another time.
Wisely, I think, I backed off that idea and instead went with a problem I faced long ago when I tried to write a program to draw a Mercator-projection map of the world. Namely: what’s the function in the y-direction that relates distance on the map to latitude?