Strengthening Performance-Based Assessments in the Classroom

For several decades now, the K-12 education world has promoted the use of performance-based assessments (PBAs) for measuring student learning as a strong alternative to traditional assessments based on rote memorization and recitation.* The appeal of incorporating PBAs as part of a teacher’s assessment strategy remains strong to this day.

I have seen this myself over the course of the last three years, having observed nearly 200 classrooms and interviewed teachers across 10 different U.S. states, as well as in Mexico and Egypt. In that time, I have seen and discussed approximately 100 PBAs covering a wide range of topics. While I encountered a great deal of variety among these assessments, each of these PBAs was created, or at least should have been created, with some common principles in mind.

First, the performance-based component of a PBA indicates that students are expected to demonstrate their learning. This demonstration should involve the creation of a product, the performance of a skill, or the completion of a process that makes students’ thinking visible (McTighe & Ferrara, 1998). Second, the assessment component indicates that the task is measuring something and assigning a value of quality based on that measurement. Ideally, whatever is being measured is important and relates to the standards and learning objectives students are responsible for mastering.

Although the above two principles seem straightforward, educators can struggle to adhere to them when developing their PBAs. Even when teachers master these principles, they must still be able to take the data they have gathered from their PBAs, interpret what those data signal about student learning, and then adapt subsequent instruction accordingly.

A teacher’s ability to balance all of these aspects is what scholars refer to as assessment literacy (Fullan, 2001, 2002; Park, 2017). And while assessment literacy extends far beyond PBAs, as a teacher’s assessment literacy increases, so should the quality of the PBAs they incorporate in their classroom. With this idea in mind, here are a few guiding questions that might help classroom teachers strengthen their PBAs and, in turn, their evaluations of student learning:

What important content or skills is the PBA actually assessing?

If you are calling a task an assessment, then by definition it must be measuring something. And if you are choosing to spend valuable classroom (and planning) time on the activity, the skills or content you are assessing should be important and relevant to your learning standards. What is more, students should know with certainty which knowledge or skills they are expected to demonstrate mastery of. In a number of PBAs I have observed, however, it has been unclear to me, and even to many students, what exactly the activity was assessing.

Consider a real-life example. A popular PBA in STEM schools calls upon students to form teams, design and construct a bridge from various materials (often balsa wood and glue), and then test the strength of the design by adding weights to the bridge until the structure collapses. The team whose bridge holds the most weight is usually declared the “winner.” However, it is often the case that no member of the winning team can explain how the content they were studying in class informed their design in a way that helped them win—or learn. Why, then, did the students build a bridge at all?

Whenever I see the bridge-building activity, I try to ask students, “Why did your bridge hold more weight than the other bridges?” Usually, the answer is some version of “We used more sticks and glue” or “Our design was better.” While those responses may contain some elements of truth, to what degree do they reflect the learning goals of the related content? Was the lesson about tensile strength or performing load calculations? Was the task meant to discern whether students would align support beams at the optimal angles? These questions inevitably vary based on the maturity of the students, the content of the class, and the specific goals of the lesson. However, educators must never lose sight of the fact that the content or skills being assessed should be apparent throughout the duration of the task (McTighe & Ferrara, 1998).

Of course, there are times when educators leverage these types of activities to assess skill development (McTighe & Ferrara, 1998; National Research Council, 2001). In general, the same principles hold. Whatever skills are being assessed should be made known from the beginning, and performance expectations should be stated clearly (McTighe & Ferrara, 1998). It is not good enough to say, “We are working on collaboration.” That is surely a noble pursuit, but educators ought to tell students which specific elements of collaboration they are assessing. Is the focus on providing constructive criticism or feedback? On the use of clarifying questions instead of negative or dismissive statements? These same questions apply to any type of skill you might want to assess through a given activity.

How does the performance or creation aspect of the task demonstrate students’ acquisition of the skills and content you are assessing?

In choosing to use a performance-based assessment, you have forgone a more traditional, recitation-style exam. That decision likely reflects a belief that you can learn more about students’ knowledge by observing them actually apply that learning through some related task. Recall the bridge-building activity. A geometry teacher might be teaching a unit about angles. She relates the content to engineering, explaining that the angles of support beams can influence the strength of a structure. Knowing this, the teacher expects that students will build bridges containing angles that maximize the strength of the bridge. Once the bridge has been completed (and before it is destroyed), the teacher may decide to measure the angles with a protractor to see if they align with design guidelines discussed in class. Again, the idea is that the product students created should signal what learning has occurred (Dixson & Worrell, 2016).
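To make that idea concrete, here is a rough, textbook-style illustration of my own, not drawn from any lesson I observed: if a weight W presses down on the apex of two symmetric support beams, each inclined at an angle θ above the horizontal, then balancing the vertical forces gives each beam a load of F = W / (2 sin θ). At θ = 30°, each beam carries the full weight W; at θ = 60°, each carries only about 0.58W. A student who can connect a calculation like this to the angles in their own design is demonstrating precisely the learning the PBA was built to surface.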

After students have presented, do you still need to give a traditional assessment over the same content to find out what they know?

Remember, a PBA is an assessment. If you design one correctly, you should be able to determine what your students have learned related to the skills and content taught (McTighe & Ferrara, 1998; National Research Council, 2001). If you schedule time for a performance-based assessment and then find that you almost immediately give a traditional quiz or test over the exact same material, this might be an indication that your PBA isn’t really much of an assessment at all. While you certainly may wish to use multiple measures to triangulate student learning, be careful not to fall into the trap of developing elaborate “participation grade” assignments rather than true performance-based assessments. Your time, and that of your students, is far too valuable to waste. Considering the time required for creating, implementing, and responding to data from PBAs, teachers must make sure the investment yields more than a change of pace from the typical class day.

* See, for example, Dixson & Worrell, 2016; McTighe & Ferrara, 1998; National Research Council, 2001; Park, 2017.

References

Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the classroom. Theory Into Practice, 55(2), 153-159.

Fullan, M. (2001). The new meaning of educational change (3rd ed.). New York: Teachers College Press.

Fullan, M. (2002). The role of leadership in the promotion of knowledge management in schools. Teachers and Teaching: Theory and Practice, 8(3), 409-419.

McTighe, J., & Ferrara, S. (1998). Assessing learning in the classroom. Burlingame, CA: National Education Association.

National Research Council. (2001). Classroom assessment and the National Science Education Standards. Washington, DC: National Academies Press. Retrieved from http://www.nap.edu/catalog/9847/classroom-assessment-and-the-national-science-education-standards

Park, Y. (2017). Examining South Korea’s elementary physical education performance assessment using assessment literacy perspectives. International Electronic Journal of Elementary Education, 10(2), 207-213.

Jeffrey Harding, Ph.D.
Jeffrey Harding, Ph.D., is the director for development of AdvancED. His primary responsibilities focus on the research and development of products and processes that can be applied in educational and classroom settings to improve outcomes for a variety of stakeholders. Dr. Harding began his career as a middle school language arts and social studies teacher before moving on to work as a graduate research assistant at the University of Georgia’s Institute of Higher Education, where he studied policy topics that spanned the K-20 spectrum as well as quantitative and qualitative methods. He has presented at national and international conferences on a range of topics, and his work was recently published in the peer-reviewed journal Teachers College Record. His most recent scholarly endeavors have focused on issues related to the transition of students from high school into postsecondary education.