
Thoughts on Assessment

  • Colleen Farris
  • Nov 6
  • 3 min read

Updated: Nov 15

I have been exploring assessment for almost three months. As a lateral entry teacher, I did not have the benefit of a broad teacher preparation program, but my recent exposure to the history of grading, assessment, and educational technology in the United States opened my eyes to the manipulation, for political purposes, of our nation's narrative concerning assessment (Au, 2008; Corrigan & Beazley, 2020; Lampland & Star, 2009). Prior to this, I understood that public education was vulnerable to political whims. I knew that legislators rather than educators made funding decisions in my state, often without accountability for the impacts of those decisions. I did not realize how intertwined the narratives of failing schools, high-stakes testing, and technology were with efforts to retain power for white, monied Americans (Au, 2008).


I discovered another insidious problem with educational technology assessment tools...

This new understanding leaves me with a highly critical view of assessment, but my exploration of educational technology has surfaced another, secondary level of concern. Many worthless, and even morally reprehensible, educational technology tools are marketed to appeal to the people who purchase them rather than the people who have to use them: teachers and students (Corrigan & Beazley, 2020; Learning and Teaching: Teach HQ, n.d.). Even tools that effectively meet teachers' needs, such as autograders like ZipGrade, are only useful if teachers rely on multiple-choice assessments, which are of limited use in genuinely measuring what students know and can do. In many ways the educational technology tools I have been exploring work against what we know are best practices for teaching and learning, but there is more. In my recent exploration of a variety of assessment techniques and technologies, I discovered another insidious problem with educational technology assessment tools: they waste teachers' time.


...the more I focus on creating authentic measurements of learning, the less helpful digital assessments become.

Let me explain using an example. I recently used QuickKey to create a group assessment activity with open-ended prompts, evaluated using a 4-column rubric (see image), for my second-year Culinary Arts program students. The learning objective was for students to understand and explain the duties and responsibilities of Food Protection Managers (FPMs) with regard to the requirements of facilities management in foodservice establishments that serve the public. The assessment tasked student groups with visualizing the safety concerns related to each area of the facility and explaining FPM duties in their own words. It required them to draw on their classroom learning and hands-on kitchen lab experiences. What I discovered is that the more I focus on creating authentic measurements of learning, the less helpful digital assessments become (Learning and Teaching: Teach HQ, n.d.). The assessment I created using QuickKey would have taken me less than 30 minutes to create and print on paper. Even using a simple platform like Google Classroom would have taken less time.


4-column rubric for Culinary Arts group assessment using QuickKey.

The inaptly named QuickKey application, on the other hand, involves a multi-step input process to build assessments one question at a time, which demands a huge time commitment. Questions can be edited and reused for later assessments, but that still requires question-by-question review. A built-in rubric with custom feedback scales that lets teachers depart from traditional grading seems attractive, but it is not worth the time investment required to build and maintain assessments in the platform. Pair that with Google's sunsetting of support for Chrome apps such as QuickKey, and the vulnerability of investing time in building assessments on a single digital platform becomes apparent. Teachers do not have that time to spare.


References


Au, W. (2008). Unequal by design: High-stakes testing and the standardization of inequality. Routledge.


Corrigan, A., & Beazley, G. (Executive Producers). (2020, October 30). Failure to Disrupt book club with Chris Gilliard [Audio podcast]. TeachLab.


Lampland, M., & Star, S. L. (2009). Reckoning with standards. In M. Lampland & S. L. Star (Eds.), Standards and their stories: How quantifying, classifying, and formalizing practices shape everyday life (pp. 3–24). Cornell University Press.


Learning and Teaching: Teach HQ. (n.d.). Generative AI and assessment. Monash University. https://www.monash.edu/learning-teaching/teachhq/Teaching-practices/artificial-intelligence/generative-ai-and-assessment
