I’m talking about both state accountability testing and local testing, i.e., testing at the district, school, and classroom levels. I’m addressing both because I’ve seen the significant impact the former can have on the latter. In their own testing, local educators often mimic the kinds of tests that produce the results for which they are held accountable. Various factors in the late 1980s and ’90s led many states to “experiment” with non-traditional testing approaches, such as performance tasks and portfolios. Local educators across the country learned about scoring rubrics, became hungry for performance tasks, and wanted training in developing and using them. Because the testing industry lacked some of the tools we have today to facilitate such approaches on a large statewide scale, and to assure their measurement quality, critics came out of the woodwork to attack “authentic” assessments. Then, with its enactment in 2002, the No Child Left Behind Act (NCLB) put the nails in the authentic-assessment coffin by requiring much more accountability testing and quick turnaround of results, leading to an emphasis on more efficient approaches to testing.
Less than a decade later, the Race to the Top program increased the stakes of state testing results for educators, added requirements on states regarding curriculum standards and assessments, and stimulated increased interim testing by schools and districts. The major Race to the Top state assessment consortia, Smarter Balanced and the Partnership for Assessment of Readiness for College and Careers (PARCC), had originally planned to include innovative performance assessment components in their programs but ultimately backed off those plans. Though touted as “next generation” assessments, the only thing “next generation” about them was that they were administered online. Otherwise, while of high technical quality, they were much like several of the previous state assessments: efficient instruments relying heavily on machine scoring, with a few short on-demand performance tasks resembling the extended constructed-response questions and writing prompts many states had been using for years.
The Every Student Succeeds Act (ESSA), the most recent reauthorization of the Elementary and Secondary Education Act following NCLB, lifted some of the demands of Race to the Top, giving states control over how they use the results of required testing for accountability purposes. The new law also offered some flexibility regarding innovative assessment approaches. As more and more states have dropped out of the major assessment consortia, however, they have not taken advantage of ESSA’s flexibility; instead, they have generally continued with efficient accountability testing and thus continued to shortchange deeper learning.
Where We Are Today
Online testing has taken root. Since federal requirements can be satisfied with time- and cost-efficient testing, that is the approach states have generally adopted. This means that state tests do little to stimulate more innovative local testing approaches focused on deeper learning (a New Hampshire pilot program is a notable exception). Of course, many local educators are appropriately enamored of the immediate turnaround of results associated with online testing; for purposes of classroom instruction, such results are more useful. Thus, there is high demand among school and district personnel for efficient, computer-based interim testing of core knowledge and skills that aligns with state tests and provides teachers with immediately actionable results.
Interestingly, a countervailing force also emanates from the grassroots level. Educators in many schools, districts, and district consortia are concerned about over-testing and are not satisfied with state tests that don’t cover what they value in terms of student learning.
Consequently, these educators have taken it upon themselves to develop assessments that tap higher-order thinking, embracing performance-based and even project-based learning. In many cases, their hope is that their state programs will adopt similar approaches. Unfortunately, some of these efforts have been undertaken by individuals with low opinions of state testing, so they have given little thought to the federal requirements pertaining to the technical quality and comparability of results across schools, or to how states might transition to more innovative testing. At the same time, I have heard rumblings in one state from educators implementing programs stressing deeper learning (arguably the “right” approach) who are nonetheless concerned that their good efforts will not be reflected in the results of the state’s traditional, efficient tests.
The Future: A Problem and a Solution
So one might ask, “In the coming years, which will educators and education policy makers choose for their testing: efficiency or deeper learning?” It is unfortunate that testing has gone the way of other aspects of education: unnecessary dichotomies have been created that pit one practice or focus against another as if they cannot coexist. Past experience gives us many examples of such controversies: whole language versus phonics, writing process versus drill-and-kill grammar instruction, content versus process, and basic skills versus deeper learning. As history has shown, testing, like so many other educational activities, has exhibited pendulum behavior, with practices swinging back and forth between polar opposites to satisfy the ideological differences of those who come and go in positions of influence and control.
The pessimist in me says this situation will not change. Right now, efficiency seems to be ruling the day in testing. However, there are forces at work pushing the pendulum toward greater attention to deeper learning, and the history of this movement has not been encouraging. “Authentic assessment” enthusiasts of the 1990s were addressing the concern that higher-order skills (deeper learning) were being neglected. Core knowledge advocates have been continually concerned that important knowledge and basic skills are neglected by advocates of 21st-century skills, past and present. Critics of performance and portfolio assessments have likewise been concerned about measurement quality. Unfortunately, all these concerns have been legitimate.
The optimist in me hopes that someday education policy makers and local educators will all recognize and work toward the obvious solution to the testing dilemma, and indeed to the larger educational one: at all levels we need to address both foundational (basic) knowledge and skills and deeper learning. State testing programs can be designed to address both through two components: an abbreviated, efficient end-of-year test and a curriculum-embedded performance component. Local educators have many tools available to them. They can have students work online on basic knowledge and skills, monitoring and intervening as necessary through effective formative assessment practices, and they can guide students and student teams in tackling engaging tasks or projects that tap higher-order skills.
Educators often talk about a “balanced assessment system” in terms of its components: state summative assessments, interim or benchmark assessments, and classroom assessments (both summative and formative). However, balance in terms of what assessment instruments cover is also important.
Basic knowledge and skills have been assessed relatively well at all levels. Assessing higher-order thinking, on the other hand, has often been challenging, but we know how to do it.
In recent years, various groups have taken a closer look at the domain of assessment literacy, defining what various players in the education arena need to understand about assessment topics. The key to assessment literacy is communication and professional development. With assessment-literate policy makers and educators, a better future for testing would be possible: a future that is less susceptible to political and ideological whims and that better serves student learning.
© Cognia Inc.
This article may be republished or reproduced in accordance with The Source Copyright Policy.
The information in this article is given to the reader with the understanding that neither the author nor Cognia is engaged in rendering any legal or business advice to the user or general public. The views, thoughts, and opinions expressed in this article belong solely to the author(s) and do not necessarily reflect the official policy or position of Cognia, the author’s employer, organization, or other group or individual.