Independent EdTech Research – Untethered Mad Science

No one would deny that education research is difficult to do well. Humans don’t behave like frictionless carts in perfectly elastic collisions. The gold standard of a double-blind study with randomly assigned subjects is impossible to accomplish in a school setting.

Education institutions would like to use data-rooted studies to effect positive change, but the students in first period algebra are not clones of the students in fourth period algebra, and Mrs. Farnsworth always seems a little more perky during third period after she’s had that second cup of coffee. Such variation confounds attempts at simple comparison studies.

My school is undergoing an accreditation cycle this year. Part of the process is demonstrating how our decision-making is data-driven. This includes macro-scale decisions like scheduling, but also micro-scale decisions like how to teach mathematical factoring.

To demonstrate that our decision-making is data-driven, our faculty has taken time to examine test scores (primarily) and to make some inferences about practices that might effect positive change. These decisions usually happen in small groups using the blunt instrument of scores on bubble tests.

This is probably a bit further than most schools will go, because we have the resources to contract a data aggregator that packages our scores for us. Yet our inferences, and the changes we promise to make from them, are the beginning and the end of that cycle. There will be no follow-up to see whether the inference-driven change process was effective. The entire process is motivated by the requirement to show data-driven decision-making, not by showing how the process is linked to results.

Edtech firms are well acquainted with schools' need to demonstrate that their decision-making is analytical. Some of the more established edtech firms have used their R&D resources to contract independent research firms that agree to examine the impact that using their product has on student outcomes.

This is a dark art.

Even the most respected education researchers in the world are reluctant to make claims about what does and does not ‘work’ in the education setting. The most confident associations I have ever read from highly regarded education researchers are couched in terms like “strong correlation” or “highly suggestive.” Never is there an assumption of causality.

It was peculiar to me, then, when I read two studies carried out by the independent research firm SEG Measurement that summarize their findings with the following statements:

These effects indicate that the use of Atomic Learning for professional development has a substantial impact on student Reading Comprehension and Mathematics skills growth.
-SEG Measurement, A Study of the Impact of the Atomic Learning Professional Development Solution On Student Achievement

The findings of this study demonstrate that students using BrainPOP can make significant gains in Reading, Language, Vocabulary and Science during one school year’s time and make significantly greater gains than students who do not use BrainPOP.
-SEG Measurement, Improving Student Science and English Language Skills: A Study of the Effectiveness of Brainpop

The second claim is logically no different from saying, “Women wearing pink can attract significantly more men than women who do not wear pink” (they also might not!). The word ‘can’ leaves SEG free from the blame of having made an easily controvertible statement. The claim about the impact of Atomic Learning is more assertive, and therefore perhaps more exceptional.

The first study compared the test scores of students of teachers who used Atomic Learning professional development for an hour or two each week during the study period to the test scores of students of teachers who did not use Atomic Learning at all. The teacher pool that would choose to take on an additional two hours of professional development each week is quite likely to be a more proactive, and perhaps a better-qualified, group of teachers to begin with. Without controlling for that fact, this study becomes more promotional pamphlet than research.

I called SEG to ask whether they had made any attempt to control for teacher experience or quality. Their representative clarified that teachers volunteered for the study; in fact, she stated that at least some of the teachers had been using the product for several years. No one was randomly assigned to the treatment group in either of these studies.

The chances are good that a self-selecting treatment group has collectively more experience than the control group, though no such data were gathered by the study. The most significant factor that could confound the results of both of these studies was left uncontrolled. Here is a study that suggests that, contrary to public perception, it is the seasoned teacher who is more likely to employ new technology in her classroom, not the new, young ‘digital native.’
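To make the confounding concrete, here is a minimal sketch in Python, with entirely made-up numbers (nothing below comes from either study), showing how self-selection by experienced teachers can manufacture an apparent product effect even when the product, by construction, does nothing at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 1000

# Teacher experience in years: the confounder left uncontrolled.
experience = rng.uniform(0, 25, n_teachers)

# Self-selection: more experienced teachers are more likely to
# volunteer for the "treatment" (using the product).
p_volunteer = 0.2 + 0.6 * (experience / 25)
treated = rng.random(n_teachers) < p_volunteer

# Student score gains depend ONLY on teacher experience here;
# the product has zero true effect by construction.
gains = 2.0 * (experience / 25) + rng.normal(0.0, 0.5, n_teachers)

# The naive comparison such a study reports:
print(f"treatment group mean gain: {gains[treated].mean():.2f}")
print(f"control group mean gain:   {gains[~treated].mean():.2f}")
```

Run it and the treatment group outscores the control group by a visible margin, purely as a selection artifact. Random assignment, or at the very least controlling for experience, is what separates a finding from a flattering comparison.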

Here is yet another study, carried out by Linda Darling-Hammond, suggesting that it is the effectiveness of the teacher that has the most significant impact on student achievement. Ironically, a similar study was cited by SEG in the Atomic Learning paper as further evidence that Atomic Learning was the factor responsible for the success of the treatment group. As a veteran teacher and teacher trainer, it seems obvious to me that regular use of online technology training is likely associated with the comfort that comes with years in the profession, and that any effect the Atomic Learning training has on teacher effectiveness is likely dwarfed by the effect of additional years of experience and perhaps additional certifications.

In most fields it is considered a clear conflict of interest when a research group is paid by the same for-profit institution that is the subject of the study. The first page of the SEG study of BrainPOP prints the following at the bottom:

This research was supported by a grant from BrainPOP.

I could not find any such admission in the Atomic Learning study, but Atomic Learning identifies SEG as a partner on its website. And, given the promotional tone of the Atomic Learning study, I feel confident in my inference that a similar ‘grant’ must have been given to SEG by Atomic Learning.

Teachers are not trained researchers, but they are frequently asked to make inferences from limited data, so they do: perhaps poor ones that may or may not have any real impact on their practice. This is unfortunate, but excusable. The primary job of the teacher is to carry out the duties assigned to her with the training she has been given. Like other service professionals (lawyers, doctors, and plumbers), teachers are mechanics. Reflection on practice is important, and innovation is a valuable part of any professional practice, but bubble-test data is one-dimensional. Human beings are not.

Less excusable is when a research institution works with an edtech firm to carry out a quasi-experimental study on the effectiveness of a product in producing bubble-test gains, and then makes bold, assertive claims about a causal link between the edtech intervention and student test-score gains. Independent research firms must know that school leaders will never take the time to read a forty-page research paper (as I will) to examine the effectiveness of control conditions.

I suspect the last decade has seen an increase in school purchasing decisions that present a veneer of reliance on data. Likely, someone at a school district office who has already made up his mind to use outsourced, web-based professional development (for example) simply needs a quote from a research report to present his case before the decision-making board. SEG is filling a niche.

I have used both BrainPOP and Atomic Learning in my practice. Both are useful tools. It was not my intention in this post to prove otherwise. Our over-reliance, as a country, on information provided by unidimensional tests is forcing us into a decision-making process akin to the witch trial in Monty Python and the Holy Grail. In the wild west of edtech and the unregulated world of education research, everyone is party to the logical fallacy.

About Jack West

Teacher, team member, father, neighbor.

2 Responses

  1. Hi Jack,
    Thank you for taking the time to open a discussion about the decisions being made about technology used in the classroom. I think it is unfortunate that you did not preface your article with its last two paragraphs.

    After working with several education systems around the world, I have found that it is difficult to justify additional spending on educational technology when traditional test scores have not moved much. As you mention, this is a failure of modern assessments, not of the tools supporting new skills that are not currently being assessed.

    Just like the current standardized assessments, investment decisions are made based on resources that do not support current skills. This forces companies like Atomic Learning and BrainPOP to spend lots of money on grants to tick the white-paper box like a technical requirement.

    I would really like to hear your thoughts on how decisions on ICT investment in schools should be made.

    Peter Schneider, Managing Director of ict Maven Group
