In medicine, a placebo is an inert intervention (e.g., a sugar pill) given to participants so that they believe they are being treated when in reality they are not. In some cases, those receiving the placebo report that their condition has improved and express complete satisfaction with the marvelous pill they have been given. Researchers call this response the placebo effect. The term surfaces most often in clinical trials, but the phenomenon has clear analogues in education. Each year, schools implement any number of strategies and interventions hoping to improve outcomes.

“If we do A,” they say, “B will get better.” Sometimes they are right, at least partially. B may in fact get better. The problem is that, in many cases, B gets better without A having anything to do with it. Because we said A was going to work, though, we call it a success. Those who study logic label this fallacy post hoc ergo propter hoc: “after this, therefore because of this.” It is an alarmingly common error among educational practitioners and researchers, and it can have consequences. Misidentifying the connection between actions and outcomes sometimes carries only a small risk. For example, the $500 spent on a science kit to help students learn about the water cycle may not actually be the reason students’ scores improved on the end-of-chapter and district-wide benchmark science tests, despite a particular teacher’s insistence. In the end, however, it was only $500 out of an institution’s entire budget, right? What if, however, that teacher sits on a curriculum committee in a large school district and convinces others that this resource is essential for all 70 elementary schools in the district? Now the misdiagnosis has the potential to be a $35,000 mistake. This, of course, is to say nothing of the class time that could have been used for other purposes.[1]
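To make the arithmetic concrete, here is a minimal sketch using the figures from the example above. The numbers come from the scenario itself and are purely illustrative:

```python
# Hypothetical cost of scaling an unvalidated intervention district-wide.
# Figures mirror the example above; they are illustrative, not data.
cost_per_school = 500   # science kit for one school ($)
num_schools = 70        # elementary schools in the district

district_cost = cost_per_school * num_schools
print(f"Potential cost of the misdiagnosis: ${district_cost:,}")  # $35,000
```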

Institutions engaging in school turnaround are at even greater risk from these types of problems, as they typically undertake more drastic and costly interventions than the average school. The stakes also tend to be much higher in these schools, where staff and administrators frequently lose their jobs or are reassigned to other institutions. In fact, under the School Improvement Grant (SIG) criteria from the U.S. federal government, a true turnaround model requires that the administration and at least half of the staff be replaced (Hurlburt, Therriault, & Le Floch, 2012). Given that the average cost of teacher turnover in the U.S. has been estimated to range from around $4,000 per teacher in small rural districts to $18,000 or more in larger urban districts (Barnes, Crowe, & Schaefer, 2007), decision makers should have some degree of certainty that changes in staffing will truly lead to improvement before acting.
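A rough back-of-the-envelope sketch shows how quickly these staffing costs compound. The per-teacher cost range comes from the estimates cited above; the size of the teaching staff is a hypothetical assumption:

```python
# Rough cost range for replacing half the staff under a turnaround model.
# Per-teacher turnover costs are the ranges cited above (Barnes, Crowe, &
# Schaefer, 2007); the staff count is a hypothetical example.
staff_count = 30                      # hypothetical teaching staff
teachers_replaced = staff_count // 2  # turnaround model: at least half

low_cost_per_teacher = 4_000    # small rural district estimate ($)
high_cost_per_teacher = 18_000  # larger urban district estimate ($)

low_total = teachers_replaced * low_cost_per_teacher
high_total = teachers_replaced * high_cost_per_teacher
print(f"Estimated turnover cost: ${low_total:,} to ${high_total:,}")
# Estimated turnover cost: $60,000 to $270,000
```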

With so much on the line, including costs to taxpayers and parents, individuals’ livelihoods, and student learning, it is absolutely necessary that we all look for evidence of causality, not mere coincidence, in reform efforts.

The idea of causality will certainly not sound novel. Organizations in a number of fields have been conducting root cause analyses for decades. These processes and frameworks focus on identifying the source of a problem, though, rather than on evaluating the solutions. As such, these inquiries do not generally help us discern whether interventions are working. To accomplish that, governmental entities and even some larger school systems focus a great deal on evaluating the success of various programs, using methods far more sophisticated than a root cause analysis and relying on individuals with backgrounds and terminal degrees in statistics, economics, and/or research and evaluation to perform the work. Employing these individuals (or departments of them) is no doubt expensive.

However, governmental education agencies often make decisions about programs and interventions on a scale that can trigger costs in the millions of dollars, so investing in researchers who can reliably evaluate those efforts is usually worthwhile.

Based on the SIG criteria mentioned above, for example, schools classified as “persistently low-achieving” in the U.S. are eligible for up to $6 million ($2 million per year over a three-year period; U.S. Department of Education, 2010) to use for making improvements. To investigate whether this policy is working, a number of studies have emerged using sophisticated approaches (what economists refer to as identification strategies[2]) in an attempt to establish causal links between SIG-funded interventions and improved student outcomes in various locations across the U.S. (Abdulkadiroğlu, Angrist, Hull, & Pathak, 2016; Dee, 2012; Schueler, Goodman, & Deming, 2016).
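To give a flavor of what an identification strategy does, here is a toy difference-in-differences calculation, one common approach (not necessarily the one used in the studies cited above). Every number in it is invented for illustration:

```python
# Toy difference-in-differences: compare the change in treated schools'
# scores to the change in comparison schools' scores over the same period.
# All values are invented for illustration only.
treated_before, treated_after = 62.0, 70.0  # SIG-funded schools (mean score)
control_before, control_after = 63.0, 66.0  # similar non-funded schools

treated_change = treated_after - treated_before  # 8.0
control_change = control_after - control_before  # 3.0

# The comparison group's change estimates what would have happened anyway;
# the difference of the two differences is the estimated program effect.
estimated_effect = treated_change - control_change
print(f"Estimated effect of funding: {estimated_effect:+.1f} points")  # +5.0
```

The point of the design is the second subtraction: scores that would have risen anyway are not credited to the program.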

Studies such as these can be very helpful, but they cannot be the only evaluation solution. First, individual schools typically lack the resources or in-house expertise to conduct these types of statistical evaluations of their efforts. Second, such empirical studies generally cannot distinguish the impact of smaller components within larger strategic approaches to school improvement (Schueler et al., 2016). Fortunately, using the following three steps, school leaders can make headway in evaluating the effectiveness of their turnaround efforts, even if they cannot do so statistically.

Determine whether the results of the intervention indicated success

At some point, the leadership team should have determined how the success of a turnaround effort would be judged. What were the measures of that success? How have the results changed? If they are positive, this is the first sign that the strategy worked. To keep things simple, let’s return to the above example of the water cycle curriculum materials. The teacher who initially bought the materials believed that incorporating them would lead to improved scores on the end-of-chapter and district-wide benchmark tests. Did scores actually improve over last year, when those materials were not available? If so, we can begin to believe the curriculum was effective.
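In sketch form, this first check is nothing more than a before-and-after comparison on the agreed-upon measure. The scores below are hypothetical:

```python
# Step 1 sketch: did the chosen success measure actually improve?
# Hypothetical mean scores on the end-of-chapter test, by classroom.
scores_last_year = [71, 68, 74, 66, 70]  # before the new materials
scores_this_year = [78, 72, 75, 74, 76]  # after the new materials

mean_before = sum(scores_last_year) / len(scores_last_year)  # 69.8
mean_after = sum(scores_this_year) / len(scores_this_year)   # 75.0

if mean_after > mean_before:
    print(f"Scores rose {mean_after - mean_before:.1f} points: a first "
          "(but only a first) sign the materials may have worked.")
```

Improvement alone, of course, only earns the intervention a pass to the next two checks.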

Verify the intervention was implemented with fidelity

In order to say a treatment worked, it is important to make sure the treatment was actually applied correctly. In our example, even if average end-of-chapter test scores increased after the district purchased the materials, we cannot conclude the materials were effective if few or none of the teachers actually used them or followed the guidance for instruction.
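A simple fidelity check might look like the following sketch. The usage log and the 80% threshold are hypothetical assumptions; what counts as “faithful enough” is a local judgment, not a fixed standard:

```python
# Step 2 sketch: was the intervention actually delivered as intended?
# Hypothetical implementation log: which teachers used the materials
# and followed the instructional guidance.
usage_log = {
    "Teacher A": True,
    "Teacher B": True,
    "Teacher C": False,  # never opened the kit
    "Teacher D": True,
    "Teacher E": False,  # used the kit but skipped the guidance
}

fidelity_rate = sum(usage_log.values()) / len(usage_log)
print(f"Implementation fidelity: {fidelity_rate:.0%}")  # 60%

if fidelity_rate < 0.8:  # hypothetical local threshold
    print("Too few classrooms used the materials as designed "
          "to credit them for any score gains.")
```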

Consider competing explanations for success (or failure)

This final step poses the most serious challenges. In fact, the lion’s share of the efforts researchers and statisticians put into their work often relates to this part of the process. What if the example district both (1) noticed improved test scores and (2) concluded that the water cycle curriculum was incorporated consistently and according to instructions? Officials still have to consider a wide array of other factors before congratulating themselves. For instance, are this year’s students noticeably more proficient in science than last year’s based on other metrics? Was any other new curriculum introduced this year that also addressed the same principles and concepts? Did the assessments change at all? Are the teachers the same as last year? Any or all of these factors (and likely many more) could have made a significant impact on the results of this particular intervention. Some interventions have more factors to consider than this; others may have fewer.
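One way to keep this step honest is to treat the rival explanations as an explicit checklist that must be cleared before claiming credit. The entries below are hypothetical placeholders for a real local review:

```python
# Step 3 sketch: rule out rival explanations before claiming credit.
# Each entry pairs a competing explanation with whether the local review
# has ruled it out; the answers here are placeholders.
rival_explanations = {
    "Stronger incoming cohort (per other metrics)": False,  # not yet checked
    "Another new curriculum covering the same concepts": True,
    "Changes to the assessments themselves": True,
    "Different teachers than last year": False,  # two positions turned over
}

unresolved = [claim for claim, ruled_out in rival_explanations.items()
              if not ruled_out]

if unresolved:
    print("Cannot yet attribute gains to the intervention. Unresolved:")
    for claim in unresolved:
        print(f"  - {claim}")
else:
    print("No obvious rival explanation survives; attribution is stronger.")
```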

This step is admittedly quite complicated. However, it is absolutely necessary for school leaders to undertake, even more so for leaders of schools pursuing turnaround strategies. When stakes are high, there is an equally high need for strong and convincing evaluation.

We should make every effort to ensure we never remove teachers or administrators for problems they did not cause. Similarly, we should work hard to ensure we do not replicate and bring to scale any strategies, especially expensive ones, that in reality made no difference.

We may not all be economists, statisticians, or trained education researchers, but we do serve a profession that impacts the lives of millions of individuals worldwide. When making decisions that carry important consequences or those having immense scope and scale, we should do all we can to ensure those decisions are based on rigorous and thoughtful reasoning and evaluation.

References

Abdulkadiroğlu, A., Angrist, J., Hull, P., & Pathak, P. (2016). Charters without lotteries: Testing takeovers in New Orleans and Boston. American Economic Review, 106(7), 1878-1920.

Barnes, G., Crowe, E., & Schaefer, B. (2007). The cost of teacher turnover in five school districts: A pilot study. National Commission on Teaching and America’s Future. Retrieved from https://files.eric.ed.gov/fulltext/ED497176.pdf

Dee, T. (2012). School turnarounds: Evidence from the 2009 stimulus (No. w17990). National Bureau of Economic Research.

Hurlburt, S., Therriault, S., & Le Floch, K. (2012). School improvement grants: Analyses of state applications and eligible and awarded schools (NCEE 2012-4060). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Levin, H., & McEwan, P. (2001). Cost-effectiveness analysis: Methods and applications (2nd ed.). Thousand Oaks, CA: Sage.

Murnane, R., & Willett, J. (2011). Methods matter: Improving causal inference in educational and social science research. New York: Oxford University Press.

Schueler, B., Goodman, J., & Deming, D. (2016). Can states take over and turn around school districts? Evidence from Lawrence, Massachusetts (No. w21895). National Bureau of Economic Research.

U.S. Department of Education. (2010). Guidance on school improvement grants under section 1003(g) of the Elementary and Secondary Education Act of 1965. Washington, DC: Office of Elementary and Secondary Education. Retrieved from https://www2.ed.gov/programs/sif/legislation.html

Endnotes

1. Though it is beyond the scope of this brief essay, Levin and McEwan (2001) have created an excellent text addressing the concept of cost-effectiveness (as compared to cost-benefit) analysis that is worth consulting when comparing competing alternatives for allocating resources.
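For reference, the core quantity in such an analysis can be sketched as a simple ratio (this condensed form is ours, not Levin and McEwan’s notation): the cost of each alternative divided by its measured effect, with the lowest cost per unit of effect preferred, all else equal.

```latex
\[
  \mathrm{CE}_i = \frac{C_i}{E_i}
\]
% C_i: total cost of alternative i (e.g., dollars spent on materials)
% E_i: measured effect of alternative i (e.g., points of score gain)
```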

2. See Murnane and Willett (2011) for a review of these types of research approaches and how they are applied in education settings.