Technology devices, apps, and artificial intelligence (AI) capabilities are constantly evolving. It’s exciting and a bit of a head-spinner. Their role in statewide and districtwide assessments—for example, automated scoring of student written responses and online testing—will continue to expand. Automated item generation, automated scoring, and intelligent score report generation will reduce costs to states and districts and enable more immediate delivery of scores and guidance on student learning needs.

Development of even better technology-enhanced items—items that enable highlighting text as evidence to support an answer or graphing mathematical functions—happens every year. Soon, science assessments will contain interactive, manipulable environments as well as animated demonstrations of science phenomena, like the effects of friction and starting heights on the motion and speed of objects. All of these developments will enable assessment of deeper conceptual understanding and skills that traditional items cannot reach. That’s better assessment.

Technology and AI will play an even larger and perhaps stunning new role in teaching, learning, and classroom assessment. How?

Through embedded assessment. Assessment already is embedded in the teaching and learning process. Effective teachers make moment-to-moment assessments of their teaching and of student learning. They use those assessment results to make decisions about, for example, re-explaining a concept, posing “learning questions” to students, and so forth. And that’s in addition to the homework, quizzes, and unit tests they use for summative purposes like grading. This kind of classroom assessment requires skill and insight that many teachers develop over their first few years in the classroom.

Assessment of student learning that is built into online learning environments is a new form of embedded assessment. It’s becoming quite common in undergraduate education. So far, it’s much like the test items that accompany textbooks and other curriculum materials—as if those questions were interspersed with the learning content.

Right now, many online, AI-driven learning systems include diagnostic assessments that indicate what students currently know and can do and what they’re ready to learn next. For example, the assessment may determine that a second grader can add three-digit numbers that don’t require carrying, using either the standard algorithm or decomposing one number into hundreds, tens, and ones. The intelligent system then might decide that this student is ready to learn addition with carrying. After tutorial instruction in the system, or instruction by the classroom teacher, the learner would be reassessed to determine whether they have mastered carrying or need more instruction and practice. This sort of online assess-teach-assess approach is not unlike what teachers do when they work with students and groups each day. The advantage of the online system is that it can manage student learning and learning needs, deliver instruction, and track the progress of individual students and groups for teachers, automatically. It also can be used as a core curriculum or as a supplement.

Feedback that supports learning in these systems tends to be messages about errors and hints on how to respond correctly. Research shows that this kind of feedback is moderately effective in enhancing learning, but not as effective as elaborated feedback, which includes hints and also provides additional information, extra study material, and an explanation of the correct answer, all of which are especially effective for mastering higher-order learning outcomes (Shute, 2008).
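
A minimal sketch of that assess-teach-assess cycle, in Python, might look like the following. The function names, the practice items, and the 80 percent mastery cutoff are illustrative assumptions, not a description of any particular product.

    # Assess a practice set, then decide: advance to the next skill or reteach.
    MASTERY_THRESHOLD = 0.8  # assumed cutoff, not an industry standard

    def assess(responses):
        # Each response pairs the correct answer with the student's answer.
        correct = sum(1 for answer, given in responses if given == answer)
        return correct / len(responses)

    def next_step(score):
        if score >= MASTERY_THRESHOLD:
            return "advance to addition with carrying"
        return "reteach, practice, then reassess"

    # A second grader answers five three-digit addition items (no carrying).
    practice_set = [(379, 379), (568, 568), (897, 897), (645, 654), (486, 486)]
    score = assess(practice_set)          # 4 of 5 correct -> 0.8
    print(score, "->", next_step(score))  # 0.8 -> advance to addition with carrying

A real system would select items adaptively and log every attempt; the point here is only the shape of the loop: assess, decide, teach, reassess.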

Embedded assessment will soon get much smarter, thanks to AI and machine learning. What will that look like?

Some online learning systems capitalize on features of online games and children’s TV shows to engage learners, keep them motivated, and, yes, help them learn academic content. Imagine you want students in your classroom to learn how to decompose three-digit numbers to aid in addition, as well as to follow the standard algorithm, using this example: 389 + 142.

You know the standard algorithm:

  1. 9 + 2 = 11, write down the 1, carry the 1
  2. 8 + 4 + 1 = 13, write down the 3, carry the 1
  3. 3 + 1 + 1 = 5
  4. Answer: 531

Nicely done. You’ve followed the algorithm, but there is no direct connection to the meanings of the values and sums in the ones, tens, and hundreds columns. One approach to the decomposing process goes like this:

  1. 389 = 3 hundreds, 8 tens, and 9 ones
  2. 142 = 1 hundred, 4 tens, and 2 ones
  3. That’s 4 hundreds; 12 tens, or 120; and 11 ones, or 1 ten and 1 one
  4. That’s 520 + 11 or 531

Much easier—and it helps learners connect to a fundamental arithmetic concept: place value, or what the digits in the hundreds, tens, and ones columns of a three-digit number mean.
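
If you like to see procedures as code, here is a brief Python sketch of both approaches. The function names are invented for illustration; the steps mirror the worked examples above.

    def standard_algorithm(a, b):
        # Add column by column from the right, writing one digit, carrying the rest.
        digits, carry = [], 0
        while a or b or carry:
            column = a % 10 + b % 10 + carry   # ones column: 9 + 2 = 11
            digits.append(column % 10)         # write down the 1
            carry = column // 10               # carry the 1
            a, b = a // 10, b // 10
        return int("".join(str(d) for d in reversed(digits)))

    def decompose_and_add(a, b):
        # Add by place value: hundreds, tens, and ones separately, then combine.
        hundreds = (a // 100 + b // 100) * 100     # 3 + 1 hundreds -> 400
        tens = (a // 10 % 10 + b // 10 % 10) * 10  # 8 + 4 tens -> 120
        ones = a % 10 + b % 10                     # 9 + 2 ones -> 11
        return hundreds + tens + ones              # 400 + 120 + 11 = 531

    print(standard_algorithm(389, 142))  # 531
    print(decompose_and_add(389, 142))   # 531

Both functions return 531; the difference is which steps a learner must hold in mind along the way.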

An intelligent system could demonstrate both approaches as part of a story played out by recurring animated characters, one personifying the standard algorithm and one personifying decomposition. Such a system could track the student’s progress through solving problems like these using both approaches, identify errors, provide elaborated feedback, and continue to provide practice until the student demonstrates mastery of addition with carrying.
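
One hedged sketch of how that error identification might work: the system simulates a common mistake, dropping the carries, and matches the student’s answer against it. The bug rule and the feedback strings below are invented for illustration, not drawn from any real product.

    def add_dropping_carries(a, b):
        # Simulate a classic error: add each column but never carry.
        digits = []
        while a or b:
            digits.append((a % 10 + b % 10) % 10)
            a, b = a // 10, b // 10
        return int("".join(str(d) for d in reversed(digits)))

    def diagnose(a, b, student_answer):
        if student_answer == a + b:
            return "Correct! You carried in both the ones and the tens columns."
        if student_answer == add_dropping_carries(a, b):
            # Elaborated feedback: name the error, explain the concept behind it.
            return ("Your columns are right, but the carries were dropped: "
                    "9 + 2 = 11 is 1 ten and 1 one, so write the 1 and carry the ten.")
        return "Not quite. Start with the ones column: what is 9 + 2?"

    print(diagnose(389, 142, 421))  # flags the dropped carries, with an explanation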

In even more sophisticated, simulated learning environments, learners interact with objects to explore, observe, experiment—and learn specified concepts and skills. One such example is EcoMUVE, an ecology curriculum delivered online in a multi-user virtual environment. The learner navigates this immersive 3-D virtual world, observing problems and attempting to solve them. Those interactions are then used to target learning outcomes, determine student learning, and assess subsequent needs. This approach to embedded assessment has been called stealth assessment (Shute, 2011) and ongoing, ubiquitous, unobtrusive assessment (DiCerbo & Behrens, 2014). A more general term is invisible assessment.
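
A toy Python sketch of the idea (the event names, evidence points, and skill label are invented; this is not how EcoMUVE actually scores): the system quietly turns logged interactions into evidence about a skill, and no test ever interrupts the exploration.

    from collections import defaultdict

    # Invented scoring rules: which logged actions count as evidence, and how much.
    EVIDENCE_POINTS = {
        "measured_dissolved_oxygen": 2,    # gathered relevant data
        "compared_readings_over_time": 3,  # reasoned across observations
        "wandered_off_task": -1,
    }

    def update_estimates(event_log):
        # Accumulate evidence per skill without ever stopping to test.
        estimates = defaultdict(int)
        for skill, action in event_log:
            estimates[skill] += EVIDENCE_POINTS.get(action, 0)
        return dict(estimates)

    log = [
        ("ecosystem_causation", "measured_dissolved_oxygen"),
        ("ecosystem_causation", "compared_readings_over_time"),
        ("ecosystem_causation", "wandered_off_task"),
    ]
    print(update_estimates(log))  # {'ecosystem_causation': 4}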

What are the benefits of embedded assessment versus stop-and-test assessment? That’s a topic for another day.

References

DiCerbo, K. E., & Behrens, J. T. (2014). Impacts of the digital ocean. London: Pearson. Retrieved from https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/about-pearson/innovation/open-ideas/DigitalOcean.pdf

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

Shute, V. J. (2011). Stealth assessment in computer-based games to support learning. In S. Tobias & J. D. Fletcher (Eds.), Computer games and instruction. Charlotte, NC: Information Age Publishing.


Steve Ferrara, Ph.D.
Steve Ferrara, Ph.D., formerly of Cognia, was a Head Start teacher and high school special education teacher. His career includes teaching and advocating for special education, conducting award-winning research and innovation in assessment, language assessment, and psychometrics, and serving as Maryland’s state director of student assessment during the days of the Maryland School Performance Assessment Program (“Mizpap”). Steve designs assessment programs, conducts measurement research, and has published on a variety of assessment topics, including a recent book he coauthored with Jay McTighe, Assessing Student Learning by Design: Principles and Practices for Teachers and School Leaders. Steve has three grandchildren so far.