Pulling the wool over your eyes
Posted February 21, 2017
Two recent reads articulated what I sometimes struggle to put into words: What seems to work in schools is sometimes an illusion.
I elaborate on the first today: an Edsurge article that explained how much “education” research is based on flawed designs.
One example was how interventions are compared to lectures or didactic teaching. With the baseline for comparison so low, it was (and still is) easy to show how anything else could work better.
Then there is the false dichotomy of H0 (the null hypothesis) and H1 (the alternative hypothesis). The conventional wisdom is that if you can show that H0 is false, then H1 must be true. This is not the case, because you might be ignoring other contributing or causal agents.
Finally, if there is no significant difference (NSD) between a control and the new intervention, then the intervention is judged to be just as good. Why is it not just as bad?
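To see why NSD is not a clean bill of health, here is a minimal simulation sketch (my own illustration, not from the article; the score distributions and sample sizes are assumptions): an intervention that is genuinely five points worse than the control still produces “no significant difference” in most small studies, simply because those studies are underpowered.

```python
# Sketch: "no significant difference" in an underpowered study does not
# mean "just as good" -- a truly worse intervention often escapes detection.
import random
from statistics import mean, stdev

random.seed(1)

def t_statistic(a, b):
    # Welch's t statistic for two independent samples.
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

trials, nsd_count = 1000, 0
for _ in range(trials):
    # Hypothetical test scores: control ~ N(70, 10); the intervention is
    # genuinely 5 points worse, but each group has only 8 students.
    control = [random.gauss(70, 10) for _ in range(8)]
    intervention = [random.gauss(65, 10) for _ in range(8)]
    # ~2.145 is the two-sided 5% critical t value for about 14 degrees
    # of freedom; below it, the study reports "no significant difference".
    if abs(t_statistic(control, intervention)) < 2.145:
        nsd_count += 1

print(f"NSD in {nsd_count} of {trials} small studies despite a real 5-point gap")
```

With groups this small, well over half of the simulated studies find “no significant difference” even though the intervention is worse by design.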
This makes it easy for unscrupulous edtech vendors to sell their wares by fooling administrators and decision-makers with a numbers game.
There was something else that the article skimmed over that was just as important.
This graph was the hook of the article. If the data are correct, then the number of movies that Nicolas Cage appeared in from 1999 to 2009 eerily correlates with the number of swimming pool drownings during the same period.
No one in their right mind would say that Cage being in movies caused those drownings (or vice versa). Such a causal link is ridiculous. What we have is a correlation of unrelated phenomena.
However, just about anything can be correlated if you have many sources and large corpora of data. So someone can find a correlation between a product’s use and better grades. But doing this ignores other possible causes, like changes in the mindsets, expectations, or behaviours of stakeholders.
So what are educators, decision-makers, and concerned researchers to do? The article recommends a three-pronged approach:
- Recognise that null hypothesis significance testing does not provide all the information that you need.
- Instead of NSD comparisons, seek work that explains the practical impacts of strategies and tools.
- Instead of relying on studies that obscure by “averaging”, seek those that describe how the intervention works across different students and/or contexts.
This is good advice because it saves money, invests in informed decision-making, and prevents implementation heartache.
I have seen far too many edtech ventures fail or lose steam in schools, and not just because the old ways accommodate and neutralise the new ones. They stutter from the start because flawed decisions are made on the basis of flawed studies. Pulling the wool away from our eyes is long overdue.