Government Study of Online Learning Revisited

    • “The difference between student outcomes for online and face-to-face classes—measured as the difference between treatment and control means, divided by the pooled standard deviation—was larger in those studies contrasting conditions that blended elements of online and face-to-face instruction with conditions taught entirely face-to-face.”
    • Blended environments outperformed both entirely face-to-face and entirely online environments.
    • “Analysts noted that these blended conditions often included additional learning time and instructional elements not received by students in control conditions. This finding suggests that the positive effects associated with blended learning should not be attributed to the media, per se.”
    • This was true of the Gamedesk study of MotionMath as well. Perhaps the blended environment encourages greater participation outside of class?
    • “An unexpected finding was the small number of rigorous published studies contrasting online and face-to-face learning conditions for K–12 students. In light of this small corpus, caution is required in generalizing to the K–12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).”
    • Very few K–12 studies were used in this meta-analysis. How many? Clearly, professionals have high motivation and can better appreciate the flexibility of the online learning environment.
    • “Thus, analytic findings with implications for K–12 learning are reported here, but caution is required in generalizing to the K–12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).”
    • Caution is needed in extending these findings to K–12.
    • “Four of the nine studies involving K–12 learners were excluded from the meta-analysis: Two were quasi-experiments without statistical control for preexisting group differences; the other two failed to provide sufficient information to support computation of an effect size.”
    • Only 5 K–12 studies were included in this meta-analysis of online learning. Still, it was 5. I wonder if they break out the results from these 5. Guess we will see.
    • “The 50 estimated effect sizes included seven contrasts from five studies conducted with K–12 learners—two from eighth-grade students in social studies classes, one for eighth- and ninth-grade students taking Algebra I, two from a study of middle school students taking Spanish, one for fifth-grade students in science classes in Taiwan, and one from elementary-age students in special education classes.”
    • Description of the K–12 study populations. Only one involved high school students, at 9th grade.
    • “Key Findings: The main finding from the literature review was that few rigorous research studies of the effectiveness of online learning for K–12 students have been published.”
    • Interesting that this is identified as a key finding.
    • “Interpretations of this result, however, should take into consideration the fact that online and face-to-face conditions generally differed on multiple dimensions, including the amount of time that learners spent on task. The advantages observed for online learning conditions therefore may be the product of aspects of those treatment conditions other than the instructional delivery medium per se.”
    • The benefits of online learning may not be because of the online medium.
    • “Though positive, the mean effect size is not significant for the seven contrasts involving K–12 students, but the number of K–12 studies is too small to warrant much confidence in the mean effect estimate for this learner group. Three of the K–12 studies had significant effects favoring a blended learning condition, one had a significant negative effect favoring face-to-face instruction, and three contrasts did not attain statistical significance. The test for learner type as a moderator variable was nonsignificant. No significant differences in effectiveness were found that related to the subject of instruction.”
    • Summary of the k12 results of the study.
    • “Despite what appears to be strong support for blended learning applications, the studies in this meta-analysis do not demonstrate that online learning is superior as a medium. In many of the studies showing an advantage for blended learning, the online and classroom conditions differed in terms of time spent, curriculum, and pedagogy.”
    • Overall, the study could not isolate online learning or blended learning as a medium that alone impacts student learning.
    • Online learning does not trump face-to-face in K–12, but the sample size was too small to have much confidence in this assertion.
    • “Finally, the great majority of estimated effect sizes in the meta-analysis are for undergraduate and older students, not elementary or secondary learners. Although this meta-analysis did not find a significant effect by learner type, when learners’ age groups are considered separately, the mean effect size is significantly positive for undergraduate and other older learners but not for K–12 students.”
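The effect-size metric quoted above—the difference between treatment and control means divided by the pooled standard deviation—is the standardized mean difference (Cohen's d). A minimal sketch of that computation is below; the example scores are hypothetical, not taken from the report:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def effect_size(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardized mean difference: (treatment mean - control mean) / pooled SD."""
    return (mean_treat - mean_ctrl) / pooled_sd(sd_treat, n_treat, sd_ctrl, n_ctrl)

# Hypothetical example: an online section vs. a face-to-face section
d = effect_size(78.0, 10.0, 30, 75.0, 12.0, 30)
print(round(d, 3))  # prints 0.272
```

A d of roughly 0.2 is conventionally read as a small effect, which is the scale on which most of the contrasts in the meta-analysis fall.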

Posted from Diigo. The rest of my favorite links are here.

About Jack West

Teacher, team member, father, neighbor.

1 Response

  1. Rob

    The results of this report are flawed methodologically and biased to meet the ideology of the researchers. The results are not supported by other meta-analyses comparing online to classroom instruction, which tend to find no difference in student performance. The fact that the study did not look at methodology beyond delivery environments is fatal because of the long-established assertion of Clark that instructional delivery medium doesn’t matter, or at least matters much less than instructional method. The authors of the study try to make the case that “active learning” (meaning higher student control) is somehow associated with each delivery environment. Higher learner control is assumed to be a benefit based primarily on constructivist ideology, which is not supported by strong research evidence. They completely ignore the synchronous versus asynchronous aspect of instructional method as an alternative to learner control.