Bringing the Scientific Method to the Classroom

Access and Inclusion · September 11, 2018

Context

Hundreds of millions of children are in school but are not learning much. There are plenty of plausible reasons for that: failed policies, resource shortages, and substandard management, among others. But what about what actually happens inside the classroom – the actual 40-minute experience an 8th-grade child has in her Science class every day? The good news is that this question is being rigorously investigated. The not-so-good news is that we’re not integrating very much of what we learn, particularly in the contexts that need it most (with some notable and praiseworthy exceptions). Generally, the gap between the quality of standard pedagogy in these contexts and what we now know about the science of teaching and learning appears only to be widening.

How do we integrate findings from promising research into our schools, all of which are in low- or middle-income countries? How do we make sure that the evidence is top-notch, and that it is reliably used to improve a child’s daily experience in school?

Rachel Glennerster, on a useful framework:

“Innovate, test, then scale. The sequence seems obvious—but is in fact a radical departure. Too often the … process looks more like ‘have a hunch, find an anecdote, then claim success.’”

Seems like a straightforward enough formula; let’s break it down.

First: where do the ideas for innovations within the classroom come from? Let’s look at two places:

1. What the research says works in the specific contexts in question (i.e. low and middle-income countries)

2. What the research suggests is promising (i.e. the explosion of research into learning science) but hasn’t yet been tested in contexts similar to our own

At Bridge, we operate and/or support almost a thousand schools across Africa and India. We are (extremely) lucky to be able to formally and empirically test these ideas, at scale, using randomized controlled trials. We’re also very fortunate to have partnered with some excellent researchers, economists, and PhDs to help us with that work.

Looking for Ideas to Test in Learning Science

What do the experts on the science of learning point to, and where are the good summaries of what’s going on? A review of the literature reveals several recurring themes – chief among them, that some techniques are much more efficient than others at generating learning.

It’s wise for practitioners to make use of this work. Indeed, the majority of the most promising practices are free, and in many cases require a slight re-sequencing of the conventional 40-minute lesson. For example, do you want to get kids to revise material more efficiently? Maybe use self-testing or quizzing; it’s probably more effective than commonly used strategies like re-reading. Interested in how to get kids to regulate and slow down their thinking? Take time to point out common misconceptions and discuss them as a class (via “predict, observe, explain”). Want to make sure kids are seeing AND engaging with great models? Worked examples can be effective, particularly when combined with self-explanation, which forces children to clarify their thinking.

There is a range of intriguing ideas that, in theory, cost almost nothing to implement. But our experience at Bridge is that the devil is in the details – we suspect that getting these methods as close to exactly right as possible is crucial. In that spirit, my colleague Utpal Sinha is looking at “interleaving.” With interleaved practice, you study or practice several skills at a time instead of just one. A problem set in maths, for example, might include skills A, B, C, and D…instead of just A.
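To make the contrast concrete, here is a minimal sketch in Python (the skill labels and practice items are hypothetical stand-ins, not our actual lesson materials) of how a blocked problem set differs from an interleaved one:

    # Hypothetical practice items for four maths skills; the labels are
    # stand-ins, not actual Bridge materials.
    skills = {
        "A": ["A1", "A2", "A3"],
        "B": ["B1", "B2", "B3"],
        "C": ["C1", "C2", "C3"],
        "D": ["D1", "D2", "D3"],
    }

    # Blocked practice: work through every item for skill A, then B, then C, then D.
    blocked = [item for items in skills.values() for item in items]

    # Interleaved practice: rotate across the skills one item at a time
    # (A1, B1, C1, D1, A2, B2, ...).
    interleaved = [item for group in zip(*skills.values()) for item in group]

    print("Blocked:    ", blocked)
    print("Interleaved:", interleaved)

The content is identical in both cases; only the sequencing changes, which is part of why interleaving is so cheap to try.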

Interleaving is fascinating because children reliably perform worse in the short term but better in the medium and long term. Willingham explains:

If a homework assignment asks the student to do 15 problems, all of them variants on the same mathematical algorithm, the student will have an easier time than if the 15 problems call for any of five different algorithms.

So, Utpal and his team will test interleaved vs. blocked practice in several dozen classrooms this Fall. With the help of Michael Kremer and Ronak Jain, we’re aiming to look at a range of outcomes, including short- and long-term test scores. Depending on the results, we’ll decide whether and how to scale.

Coming back to Rachel Glennerster:

“Innovate, test, then scale. The sequence seems obvious—but is in fact a radical departure.”

I was struck by the phrase “radical departure.” What we need, I thought, is something similar to the What Works Clearinghouse: a top-notch evidence base that drives decision-making on matters large and small in the classroom. And I still believe that!

But I’ve come to realize that approaching problems in this way is more challenging, and less cognitively pleasant, than following a hunch. You have to slow down a lot – consider the moving parts of an intervention that you haven’t quite contemplated before, and that you may be incentivized to ignore when you’ve got a strong hunch. Innovate, Test, Scale is a simple framework, but it requires a lot of discipline, and it takes time.

When an evidence-based approach is implemented, the results can be a stand-out success, as we have seen recently with many pupils, especially girls, who are now excelling in STEM and other subjects.

It’s also essential, and invigorating. We should push hard to build an evidence base about learning that makes use of all of the promising research from different fields in a way that improves the daily experience of kids in classrooms all over the world.