Some SEAs may be tempted to do the bare minimum to meet the new requirements, continuing the same testing regime and adding a survey or two here and there to gauge what’s happening in schools. Some states will add a few extra measures but still not go beyond a compliance checklist with some additional accountability. These SEAs may (or may not) earn federal approval, but they are unlikely to improve results.

Instead of modifying or improving on existing systems, states should take the opportunity under the Every Student Succeeds Act to acknowledge lessons learned, wipe the slate clean, and envision a forward-thinking education system.

Defining Purpose and Direction

To map out the parameters of a new system, the visioning process must begin with a clear purpose and direction. This clarity gives SEAs a reference point for innovation and decision-making as the demands and complexities of the system evolve.

While education systems can serve multiple purposes, the most critical is to continuously drive and create improvement so that every learner succeeds. The primary purpose of the education system should not be simply to improve student achievement scores; that is merely one of many system outcomes that should be monitored. Monitoring outcomes alone will never explain observed trends or create an environment in which the emphasis shifts from monitoring to improving the factors and conditions that produce those outcomes. The purpose and direction defined at each level of a state system need not be identical. However, the purpose and direction at all levels must be aligned to accomplish the larger goal of college and/or career readiness for all of our nation’s learners.

Effectively measuring, monitoring, and improving the complex conditions of any system (whether education, healthcare and hospitals, the economy, or the criminal justice system) requires information about a multitude of internal and external factors and how they change over time. This collection of indicators is benchmarked to determine progress, and/or compared with industry standards or other similar entities as a means of making judgments.

To identify what they need to track to capture the desired qualities of schools and the state education system, SEAs should consider the following steps:

  1. Establish a shared vision of learner success. Describe what a “successful student” looks like upon exiting the education system in 2030. 
  2. Identify the key actors of the education system who play a role in achieving this vision.
  3. Identify the environments/systems in which these actors operate.
  4. Determine the core set of factors within these environments/systems that influence the actors and their ability to achieve the desired vision. 
  5. Define the desired level of quality expected for each of these factors and the criteria by which the level of quality can be determined.
  6. Establish a process for continuous improvement through which the quality of these factors and the related environments/systems are assessed and improved.
  7. Identify the indicators and outcome measures that best determine the overall effectiveness of this improvement process.
  8. Develop a system of accountability that leverages these indicator data over multiple years to highlight successes, identify institutions in need of improvement, and target interventions and support services based on reliable diagnostic data.
  9. Provide transparency and encourage shared ownership of student success through public reporting of action and outcome measures, as well as qualitative information that accurately characterizes the education system and its effectiveness.

Strategic Data Gathering

To deepen understanding of what is happening in schools, SEAs must be sure to capture crucial information about schools and districts across four areas:

  • School culture, which helps ascertain the school’s underlying expectations, norms, and values; the depth of student engagement and efficacy; teacher and stakeholder engagement; school leadership; the quality of the learning environment; and school safety and student well-being.
  • Talent, which helps determine the school’s ability to recruit, induct, support, and retain staff and provide opportunities for professional learning and growth. These opportunities are often among the first things districts cut.
  • Execution, which identifies how well schools are implementing plans and strategies; the fidelity to a course of action; what school leaders, superintendents, and SEAs do to ensure plans and strategies are producing results, including monitoring and adapting strategies to consistently improve; and how effectively districts and schools allocate resources to meet identified needs and to achieve desired results.
  • Knowledge, which determines what schools and districts do with the wealth of data that they have and whether they actually are using it to improve results; whether the information is used broadly and shared; and the extent to which it is useful for improvement. Schools also need to determine when information can be made available, as continuous formative information can ensure that educators can improve their practice, tailor learning for individual students, and address gaps in student skills and knowledge in real time.

SEAs can ensure that school and district understanding of success in each of these areas is based on a rich store of local data—the kind that does not show up on test scores—by ensuring that schools and districts have simple tools, surveys, observation protocols, and information gathering systems to measure school health in these categories.

Consider, for example, the way in which a school or district must support the development of its students in ways not identified by test scores but that directly impact student learning and development. Several districts have identified the vital role that local school or district bus drivers play in the success of children facing the challenges of poverty. If the bus drivers are not showing up for work on time or have absences that cause the buses to be delayed, students may not arrive in time to get their school-provided breakfast. The lack of proper nutrition has been shown to materially impact student academic performance.

Similarly, schools may think that teachers are improving at engaging students in classroom debate or dialogue thanks to a new approach to reading instruction being used across the curriculum. But observations might reveal that students are rarely encouraged to write reports on topics they choose, or have little opportunity to show personal efficacy by taking responsibility for directing their own learning. These observations would allow the school to adapt its approach rather than wait for data to be collected, reviewed, and disseminated after testing.

Identifying Appropriate Measures

To address the broader terms of ESSA, SEAs must choose the types of additional measures they will use. These indicators should include the right metrics to measure what matters most in improving performance. In considering school quality, so-called “non-academic” factors materially impact academic success, and we should be looking at how well educational institutions identify and address them. These non-academic factors include areas such as culture, effectiveness of teaching and learning, quality of leadership, student engagement, and resource allocation. They are just as important as test proficiency and graduation rates.

Variations of External Review

Several states have gone beyond this level of information gathering and planning to help schools better understand their situations and make more informed decisions about improvement. While accreditation in the United States and inspectorate models in other nations have been criticized for focusing too much on compliance, external review builds on the best of these approaches. As Marc Tucker describes it:

The state would take responsibility for using the data generated by this system to identify schools whose students appeared to be in danger of falling significantly behind the expected progressions through the state curriculum and schools in which vulnerable groups of children were falling significantly behind. Schools thus identified would be scheduled by teams of experts trained and assembled for this purpose by the state. The expert teams would be charged with identifying the problems in the school and with producing recommendations for improving school performance through actions carried out by school faculty, the school district, the community, and state assistance teams.[9] 

Most states do not currently have the capacity to implement this type of system with any fidelity, but we have seen modified versions work well in a number of states. For example, Kentucky voluntarily underwent an external diagnostic review of its lowest-performing schools. The review process takes into consideration all aspects of school performance—from instructional quality to curriculum design, leadership capacity, teacher morale, student advising and community engagement—that influence learning.

Kentucky’s approach to accountability is not to render a single verdict on the school’s performance overall but to provide information about its strengths and weaknesses, so that teachers and administrators can identify priorities for improvement. Kentucky’s diagnostic reviews account for 20 percent of a school’s score in the state’s rating system. Every low-performing school undergoes external evaluation with teams of experts who visit the school and monitor performance and develop improvement plans based on actual needs identified in the evaluation. The approach has provided comprehensive and reliable data to meet federal and state requirements, make informed decisions, and guide and validate its ongoing work in achieving college and career readiness, according to state officials.

Michigan introduced a more streamlined approach to continuous improvement and progress monitoring as part of its accountability system. Its approach requires schools to undergo school-based audits that diagnose and identify schools’ greatest challenges with a degree of specificity that allows for planning and implementation of targeted improvement strategies, supported by a comprehensive web-based monitoring system.

These types of approaches have the potential to be scaled up in many other states, with outside support from educational improvement organizations and institutions that have the capacity and analytical tools that most states lack.

Additional Support and Interventions

As part of the improvement process, SEAs might deploy specialized turnaround intervention teams to support low-performing schools and their students. Arizona, for example, uses such teams as part of a structured, comprehensive support system for low-performing schools in the state. The teams—overseen by regional directors—provide technical assistance, professional development, progress monitoring, and compliance monitoring.[10] 

Importance of Formative Reviews and Analysis

SEAs also should include formative assessments in their plans to allow for corrective student- and school-level actions over time. A single test score from a single day in May that results in a pass or fail for a student, teacher, or institution is not helpful: the result may not accurately reflect performance, and it arrives too late to drive meaningful changes over the course of the year. SEAs should treat testing not as a one-time event but as a means to examine student achievement over a multi-year period. Under a continuous improvement framework, interim assessments offer real-time data that can drive supports and ultimately impact results and growth.