Asking the Wrong Questions of the Wrong Data

By Alex Medler

I get a kick out of my kids’ musical performances. What do we parents discuss during intermissions? Our universal pride in our young performers, how much we liked the music, how much the kids have improved at their instruments, and how well the directors have gotten them all to play together as a group. Never have we chatted about the impact that our kids’ time in the band might have on their math scores. If only more researchers were willing to listen to the music.

I believe in the power of data and the importance of asking the right questions about policies and programs. Too often, people trying to understand how well something works use the wrong data to answer the wrong question. We have great data on student performance. But student performance isn't always the outcome that matters most for a program.

Which outcomes we choose to measure should reflect the goals of our programs. We care about many things in addition to math and reading test scores. As a society, and as researchers and wonks, we should ask first about the impact these programs have on their primary objectives. Part of the challenge is building support for, and understanding of, outcomes besides state tests of literacy and numeracy. I think we can.

Don't get me wrong. I want more high-quality schools, and I support efforts to improve academic results, as well as the state tests that measure student performance. But when programs have other primary objectives, we should look for data that measure progress toward those goals instead. School climate, nutrition, and music instruction illustrate this tension.

A recent study of California schools found positive relationships between improvements in school climate and school performance. That's a good thing. But what if improving the climate of our schools were already a good enough thing on its own, simply because we want our children to experience better school climates? Like the proverbial tree falling in an empty forest: if we improve a school's climate but don't look at its subsequent test scores, has anything been accomplished? I'd argue it has. We ought to care about the climate our children sit in for six hours a day, primarily because they must endure it.

Or consider a recent study of the link between school lunch and test scores. This one, also of California schools, explored the connections between various approaches to nutrition programs and test scores. If the primary goal of a program is to improve student health, we should make our decisions about it mostly based on data about student health. In this case, the study found that schools that contracted with a healthy lunch vendor saw little change in obesity but a small, positive impact on learning. What should one do with such data? Imagine that a program didn't decrease obesity and kids didn't enjoy the food, but test scores went up: what would anybody do with that information? The test score finding answers the wrong question and distracts from efforts to understand the program.

Finally, consider the perennial arguments of music instructors about the link between music, brains, and math scores. I don't have to link to any studies; every parent of a school-aged musician gets an earful from choir, orchestra, or band directors about how "studies show students who participate in music raise their math scores." That's great for building support for music programs, which I favor. Unfortunately, in practice it tends to pit music programs (and budgets) against art, theater, or recess, which for some reason haven't been as quick to promote their own versions of the "studies show" talking point.

We should evaluate our music programs based on the music, as well as the kids' mastery and love of it, just as we should evaluate the theater program by the quality of the play and the poise of the brave young people putting it on. When we decide how much time and money to put into things like band and orchestra, we should also consider how well other programs, like art and theater, advance their own art forms.

I hope the urgency of improving student performance is not lost or diminished in the future. Plenty of people argue against testing data, and that argument seems to imply we have no work to do to improve student learning. I reject it. My point is simpler: we should find ways to add much more data, and when evaluating the latest programs, we ought to pick the right data to reflect the specific outcomes we value. That is a much better recipe for achieving all the outcomes we care about.