A Rube Goldberg, according to Webster's Dictionary, is "a comically involved, complicated invention, laboriously contrived to perform a simple operation." We can see these contraptions in operation in school administration in the fetishizing of strategic planning, which has become something of an end in itself, and particularly in how we determine the effectiveness of our schools and their impact on student success. Today's accountability systems fail to identify the root causes of underperformance and therefore fail to enable actionable strategies for improvement. We have made improvement and accountability of our schools a complicated process with limited impact on student success.
A wealth of empirical evidence confirms what common sense suggests: Students do better in school when teachers make frequent efforts to check on students' progress and, if students are floundering, to help them get back on course right away. Simply put, the approach that educators call "classroom formative assessment" works. It can play a powerful part in teachers' efforts to improve student learning. Less widely recognized, though, is the role that this strategy can play in the improvement of entire schools.
In classrooms, ongoing evaluation means that educators regularly collect assessment-elicited evidence from students regarding how well they are learning. Depending on what the evidence indicates, both teachers and students can decide whether they need to adjust what they are doing.
This approach works well because it exemplifies the kind of ends-means thinking that has served the human race for eons. In the classroom, curricular targets (the ends) are chosen, and then instructional procedures (the means) are selected to promote students' mastery of those curricular targets. As instruction proceeds, teachers periodically collect assessment evidence of students' progress so that, if the initially chosen instructional procedures are not working satisfactorily, changes can be made in those instructional means. It is this sort of en-route evidence gathering to see whether the originally selected means are producing the desired ends that makes this approach so successful and makes it seem like little more than common sense.
In recent years, our public schools have usually been subjected to once-a-year evaluations based on a handful of limited indicators — typically students' scores on annually administered accountability tests. Sometimes accompanied by other less salient indicators such as graduation rates or attendance levels, students' performances on state-administered achievement tests dominate the way we appraise the educational efforts of schools, districts, and states. Chiefly intended to identify poor performance and spur improvements in subsequent years, the limited data from these annual tests have sparked a groundswell of complaints about the over-testing of students.
Moreover, the preoccupation with students' test scores has sometimes masked — rather than illuminated — the real problems confronting our schools. Too often, educators have become fixated on declining test scores or a rise in the dropout rate, which are often symptoms of much deeper, systemic problems that have gone unidentified. And yet, these problems could be identified and resolved by applying a comprehensive formative-assessment strategy to the chief aspects of a school's operation.
Almost all proponents of regularly adjusting instruction agree that this approach is a planned process whereby the teacher decides in advance when to collect evidence of students' progress toward the key building blocks of a learning progression. For example, an elementary teacher working to improve students' composition skills might identify the skills students need to master to be successful, such as selecting and organizing content and employing conventions of writing such as spelling and punctuation. This requires thoughtful planning to identify when and how to assess and then to determine what levels of insufficient progress warrant an adjustment in instruction.
Making this profound shift in our approach to the annual evaluations of schools would require pressing the "reset button" on our school-evaluation thinking. Instead of appraising a school's success once a year, primarily on the basis of students' performances on state-level accountability tests, we would need to replace that evaluation strategy with an improvement strategy similar to what teachers do regularly in classrooms to make evidence-based adjustments in their work. For schools, we would need to move beyond what we do today and require that any overall appraisal of a school be rooted in the degree of actual improvement it demonstrates.
This approach — now being tested across entire states such as Kentucky and Michigan — will benefit young people by helping educators make needed changes across all aspects of what happens in school that affect learning. These states, for example, are tracking changes in student performance, school climate, student engagement, and other indicators. At the local level, school districts and principals can use a series of pre-planned actions to get the evidence they need to adjust and improve.
First, a school's staff would need to work together to identify what matters most in their school's operation — that is, what contributes to the overall quality of education experienced by students. For example, these dimensions might include teacher morale, community engagement, instructional materials, students' attitudes toward learning, and instructional quality. School decision-makers must isolate a manageable and relevant number of such contributory dimensions — not too many — because for each such dimension one or more measurable indicators would need to be identified. Such dimensions must clearly define the conditions, processes, and practices that affect student learning.
These measurable indicators provide a school's staff with ongoing evidence about which adjustments, if any, should be made.
As in teaching, introducing ongoing diagnostic assessment will provide timely and actionable information for those managing change. This approach will help school leaders eliminate their blind spots and jolt complacent schools, those with little initiative to improve, into action. Too many schools assume that all is well, even though a closer inspection reveals areas that need improvement in student learning, school climate, and leadership, among others. And by implementing this type of ongoing evaluation for improvement, schools where test scores are low and state sanctions are looming may find encouraging clues about how to provide quality instruction and build on their hidden strengths.
The way we examine how well schools are doing must be systemic and address, on an ongoing basis, all aspects of schooling, using more comprehensive assessment data to identify needed changes. By making the complexities of schooling more visible we can make continuous school improvement possible.
© Cognia Inc.
This article may be republished or reproduced in accordance with The Source Copyright Policy.