Data Moves with CO2

The concentration of CO2 in the atmosphere is rising, and we have good data on that from, among other sources, atmospheric measurements that have been taken near the summit of Mauna Loa, in Hawaii, for decades.

Here is a link to monthly data through September 2018, as a CODAP document. There’s a clear upward trend.

CO2 concentration (mole fraction, parts per million) as a function of time, here represented as a “decimal year.”

Each of the 726 dots in the graph represents the average value for one month of data.

What do we have to do—what data moves can we make—to make better sense of the data? One thing that any beginning stats person might do is to fit a line to the data. I won’t do that here, but you can imagine what happens: the data curve upward, so the line is a poor model, but the positive slope of the line (about 1.5 ppm per year) is a useful average rate of increase over the interval we’re looking at. You could consider fitting a curve, or a sequence of line segments, but we won’t do that either.
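(If you do want to fit that line yourself, here is a minimal sketch in Python, assuming you have exported the monthly data from CODAP to a CSV; the file name and the column names are my guesses, not anything official.)

```python
# A minimal sketch of the line-fitting move, outside CODAP.
# The file name and column names are assumptions about how the export looks.
import numpy as np
import pandas as pd

co2 = pd.read_csv("maunaloa_monthly.csv")   # hypothetical export of the CODAP table
slope, intercept = np.polyfit(co2["decimalYear"], co2["CO2"], deg=1)
print(f"average rate of increase: {slope:.2f} ppm per year")   # roughly 1.5
```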

Instead, let’s point out that the swath of points is wide. There are lots of overlapping points. We should zoom in and see if there is a pattern.


Data Moves: the cards metaphor

In the Data Science Games project, we started talking, early, about what we called data moves. We weren’t quite sure what they were exactly, but we recognized some when we did them.

In CODAP, for example (as in Fathom), there is a move we learn to make: select points in one graph. When you select data in CODAP, the selection applies everywhere, so the same points are selected in all other graphs, and you can see patterns that were otherwise hidden.

You can use that same selection idea to hide the selected or unselected points, thereby filtering the data that you’re seeing. Anyway, that felt like a data move, a tool in our data toolbox. We could imagine pointing them out to students as a frequently-useful action to take.
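(In code, that select-then-filter move is just a condition and a subset. Here is a rough Python sketch with pandas; the little data frame and its columns are invented for illustration, not any particular CODAP document.)

```python
# A rough analogue of the select-then-filter data move, using pandas.
# The data frame and its columns are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "month": [1, 2, 3, 4, 5, 6],
    "value": [10.2, 11.5, 9.8, 14.1, 13.7, 12.0],
})

selected = df["value"] > 12     # "select" the cases that meet a condition
print(df[selected])             # keep only the selected cases...
print(df[~selected])            # ...or hide them and keep the rest
```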

I’ve mentioned the idea in a couple of posts because it seemed to me that data moves were characteristic of data science, or at least the proto-data-science that we have been trying to do: we use data moves to make sense of rich data where things can get confusing; we use data moves to help when we are awash in data. In traditional intro stats, you don’t need data moves because you generally are given exactly the data you need.


Trees. And. Diagnosis. (Live!)

I’ve been invited to give a webinar about our work on trees; it will include material from the previous two posts.

Here’s the Eventbrite link. Get your free ticket!

Here’s the blurb:

Data, Decisions, and Trees

We often say that we want to make decisions “based on data.” What does that really mean? We’ll look at a simple approach to data-based decision making using a representation we might not use every day: the tree. In this webinar, you’ll use data to make trees, and then use the trees to diagnose diseases.

On the surface, trees are very simple. But for some reason — perhaps because we’re less familiar with using trees — people (and by that we mean us) have more trouble than we expect. Anticipate having a couple of “wait a second, let me think about this!” moments.

Trees. And. Diagnosis. (Part two)

Last time we introduced decision trees and a tool we’ve made to explore them. With that tool, embedded in a simple game (Arbor), you can generate data from alien creatures with a simulated malady, figure out its predictors, and make a decision tree that will let you automate its diagnosis. (Here is the link to that not-quite-game.)

Your job was to get through the diseases ague and botulosis. Today I want to reflect on those two scenarios.

Ague

Ague is ridiculously simple, and with that ridiculous simplicity, the user is supposed to be able to learn the basics of the game, that is, how to “drive” the tools. One way to figure out the disease is to sort the table by health and see what matches health. Here is what the sorted table looks like:

[Figure: the ague table, sorted by health]

Just scanning the various columns, you can see that health is associated with hair color.  Pink means sick, blue means well. With that insight, you can go on to diagnose individual creatures and then make a simple tree, which looks like this:

[Figure: the completed tree for ague]

Although there is a lot of information in the tree, users can generally figure it out. If they (or you) have trouble, they can get additional information by hovering over the boxes or the links.
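(If you want to try the sorted-table move outside Arbor, here is a minimal Python sketch with pandas; the creature data and the column names, health and hairColor, are stand-ins for whatever the game generates.)

```python
# Sort by health, then look for a column that lines up with it.
# The creatures here are made up; in Arbor you would use the real table.
import pandas as pd

creatures = pd.DataFrame({
    "health":    ["sick", "well", "sick", "well", "well", "sick"],
    "hairColor": ["pink", "blue", "pink", "blue", "blue", "pink"],
    "eyeCount":  [2, 3, 2, 2, 3, 3],
})

print(creatures.sort_values("health"))                            # the sorted-table move
print(pd.crosstab(creatures["hairColor"], creatures["health"]))   # pink goes with sick, blue with well
```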


Trees. And. Diagnosis. (Part one)

(This is part one. Link to part two.)

In the Data Science Games project, we have recently been exploring decision trees. It’s been great fun, and it’s time to post about it so you dear readers (all three or so of you) can play as well. There is even a working online not-quite-game you can play, and its URL will probably endure even as the software gets upgraded, so in a year it might even still work.

Here’s the genesis of all this: my German colleague Laura Martignon has been doing research on trees and learning, related to work by Gerd Gigerenzer at the Harding Center for Risk Literacy. A typical context is that of a doctor making a diagnosis. The doctor asks a series of questions; each question gets a binary, yes-no answer, which leads either to a diagnosis or a further question. The diagnosis could be either positive (the doc thinks you have the disease) or negative (the doc thinks you don’t).

The risk comes in because the doctor might be wrong. The diagnosis could be a false positive or a false negative. Furthermore, these two forms of failure are generally not equivalent.

Anyway, you can represent the sequence of questions as a decision tree, a kind of flowchart to follow as you diagnose a patient. And it’s a special kind of tree: all branchings are binary—there are always two choices—and all of the ends—the leaves, the “terminal nodes”—are one of two types: positive or negative.
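(If it helps to see that flowchart shape concretely, here is a tiny tree written by hand as nested yes-no questions in Python; the questions and the cutoff are invented, purely to show the structure.)

```python
# A toy diagnostic tree, written by hand as nested binary questions.
# Every branch point asks a yes/no question; every leaf is "positive" or "negative".
# The questions and the 39.0 cutoff are invented for illustration.
def diagnose(patient: dict) -> str:
    if patient["fever"]:                       # first question
        if patient["temperature"] > 39.0:      # second question, "yes" branch
            return "positive"
        return "negative"
    else:
        if patient["rash"]:                    # second question, "no" branch
            return "positive"
        return "negative"

print(diagnose({"fever": True, "temperature": 39.5, "rash": False}))   # -> positive
```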

The task is to design the tree. There are fancy ways (such as CART and Random Forest) to do this automatically using machine learning techniques. These techniques use a “training set”—a collection of cases where you know the correct diagnosis—to produce the tree according to some optimization criteria (such as how bad false positives and false negatives are relative to one another).  So it’s a data science thing.
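(For comparison, here is roughly what the automated route looks like with scikit-learn; the toy training set and the class_weight numbers are placeholders, just to show where the how-bad-is-each-kind-of-error criterion enters.)

```python
# Fitting a small decision tree automatically from a training set.
# The data is a toy stand-in; class_weight encodes how much worse we
# consider one kind of error than the other (numbers chosen arbitrarily).
from sklearn.tree import DecisionTreeClassifier, export_text

X_train = [[1, 39.5], [0, 36.8], [1, 37.2], [0, 38.9], [1, 40.1], [0, 36.5]]
y_train = ["positive", "negative", "negative", "positive", "positive", "negative"]

tree = DecisionTreeClassifier(
    max_depth=2,
    class_weight={"positive": 5, "negative": 1},   # false negatives cost more here
)
tree.fit(X_train, y_train)
print(export_text(tree, feature_names=["fever", "temperature"]))
```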

But in data science education, a question arises: what if you don’t really understand what a tree is? How can you learn?

That’s where our game comes in. It lets you build trees by hand, starting with simple situations. Your trees will not in general be optimal, but that’s not the point. You get to mess around with the tree and see how well it works on the training set, using whatever criteria you like to judge the tree. Then, in the game, you can let the tree diagnose a fresh set of cases and see how it does.
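(Judging the tree, with whatever criteria you like, usually comes down to counting the two kinds of mistakes. Here is a minimal sketch of that tally, using a one-question version of the toy tree above; the fresh cases and their true diagnoses are made up.)

```python
# Tally false positives and false negatives for a hand-built tree
# on a fresh set of cases. The tree, cases, and truth labels are all made up.
def diagnose(c):
    return "positive" if c["fever"] and c["temperature"] > 39.0 else "negative"

fresh_cases = [
    ({"fever": True,  "temperature": 39.8}, "positive"),
    ({"fever": False, "temperature": 36.9}, "negative"),
    ({"fever": True,  "temperature": 37.1}, "positive"),
    ({"fever": False, "temperature": 36.6}, "negative"),
]

false_pos = sum(diagnose(c) == "positive" and truth == "negative"
                for c, truth in fresh_cases)
false_neg = sum(diagnose(c) == "negative" and truth == "positive"
                for c, truth in fresh_cases)
print(f"false positives: {false_pos}, false negatives: {false_neg}")
```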

That’s enough for now. Your job is to play around with the tool. It will look like this to start:

[Figure: the starting Arbor screen]

The first few scenarios are designed so that it’s possible to make perfect diagnoses. No false positives, no false negatives. So it’s all about logic, and not about risk or statistics. But even that much is really interesting. As you mess around, think about the representation, and how amazingly hard it can be to think about what’s going on.

There are instructions on the left in the tan-colored “tile” labeled ArborWorkshop. Start with those. There is also a help panel in the tree tile on the right. It may not be up to date. All of the software is under development.

Here is the link:
http://codap.concord.org/releases/latest/static/dg/en/cert/index.html#shared=31771.

The first disease scenario, ague, is very simple. The next one, botulosis, is almost as simple, and worth reflecting on. That will happen soon, I hope after you have tried it.

(Part two. About Ague and Botulosis.)

Note: if you are unfamiliar with this platform, CODAP, go to the link, then to the “hamburger” menu. Upper left. Choose New. Then Open Document or Browse Examples. Then Getting Started with CODAP. That should be enough for now.

A Calculus Rant (with stats at the end)

Let’s look at a simple optimization problem. Bear with me, because the point is not the problem itself, but in what we have to know and do in order to solve it. Here we go:

Suppose you have a string 12 cm long. You form it into the shape of a rectangle. What shape gives you the maximum area?

Traditionally, how do we expect students to solve this in a calculus class? Here is one of several approaches, in excruciating detail:
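In brief, a compact sketch of the usual route, calling the rectangle’s width w and height h, in centimeters:

```latex
2(w + h) = 12 \;\Longrightarrow\; h = 6 - w
\qquad
A(w) = w(6 - w) = 6w - w^{2}, \quad 0 < w < 6
\qquad
A'(w) = 6 - 2w = 0 \;\Longrightarrow\; w = 3, \qquad A''(3) = -2 < 0
```

So the maximum is the square: w = h = 3 cm, with an area of 9 square centimeters. Notice how much you have to know and do before the derivative ever appears.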

Dare I Even Suggest…?

[Figure: April 15, 2015: BART ridership between Rockridge and Embarcadero stations, by hour.]

If you’ve been following along, and reading my mind over the past six months while I have been mostly not posting, you know I’m thinking a lot about data science education (as opposed to data science). In particular, I wonder what sorts of things we could do at K–12, especially at high school, to help students think like data scientists.

To that end, the good people at The Concord Consortium are hosting a webinar series. And I’m hosting the third of these sessions on Tuesday, July 25, at 9 AM Pacific time.

Click this link to go to EventBrite to tell us you’re coming.

The main thing I’d like to do is to present some of our ideas about “data moves”—things students can learn to do with data that tend not to be taught in a statistics class, or anywhere, but might be characteristic of the sorts of things that underpin data science ideas—and let you, the participants, actually do them. Then we can discuss what happened and see whether you think these really do “smell like” data science, or not.

You could also think of this as trying to decide whether some of these data skills, such as filtering a data set or reorganizing its hierarchy, might also be examples of computational thinking.
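(If “reorganizing its hierarchy” sounds abstract: in code it corresponds roughly to grouping and summarizing. Here is a sketch in Python with pandas; the BART-style columns, station, hour, and riders, are invented for illustration, not the real export.)

```python
# A rough code analogue of the reorganize-the-hierarchy move:
# group flat ridership records by hour and summarize within each group.
# The columns and numbers are invented, not the real BART data.
import pandas as pd

rides = pd.DataFrame({
    "station": ["Rockridge", "Rockridge", "Embarcadero", "Embarcadero"],
    "hour":    [8, 9, 8, 9],
    "riders":  [420, 310, 980, 760],
})

by_hour = rides.groupby("hour")["riders"].sum()   # the grouped (hierarchical) view
print(by_hour)
```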

The webinar (my first ever, crikey) is free, of course, and we will use CODAP, the Common Online Data Analysis Platform, which is web-based and also free and brought to you by Concord and by you, the taxpayer. Thanks, NSF!

We’ll explore data from NHANES, a national health survey, and from BART, the Bay Area Rapid Transit District. And whatever else I shoehorn in as I plan over the next day.