As we discussed in the first part of this blog, the practices of assessment in higher education have been problematic for decades. From a student experience perspective, assessment is the root cause of significant institutional administrative burden and stress. It is also at the centre of student angst in the form of appeals, academic integrity investigations and ongoing fears of a surveillance culture built on mistrust and punishment. Conducting assessment within the quality frameworks dictated by regulatory bodies has led to the use of technologies, whether to ensure integrity (EFF, 2020) or to enhance the quality of assessment submissions (Honi Soit, 2023), that have provoked responses ranging from anger and outrage to student action against the platforms themselves. The resulting pressure has seen many institutions reverting to more traditional, decidedly inauthentic forms of assessment, such as pen and paper exams, as an uncontentious default (at least from the political and media perspective).
This reversion means that students, in satisfaction surveys at both local and national level, consistently and universally report their dissatisfaction with assessment and feedback. Extrapolating the impact and informing solutions through data is made more complex when student satisfaction surveys conflate satisfaction with overall happiness (Dean and Gibbs, 2015) and use simple metrics to measure complex processes and outcomes (Winstone et al, 2022). In this context, assessment has become a zero-sum game in defining the student experience: student decisions about engagement, participation, attendance and transformation are taken through the prism of the time and risk associated with achieving the desired grade, even when not participating works against that very aim.
Yet each time we move away from the high-stakes summative assessment regime defined by the invigilated exam, or from the over-assessing we do in part to manage engagement and participation challenges (often just the attendance aspect of participation; no one likes empty classrooms), we are challenged by the next existential crisis. We face warnings that generative AI will end the university as we know it. We are buffeted by changes to government policy on employability, driven by employers' demands for better, more prepared graduates. Each time we get hyped up about the impact of these challenges on our practice, we snap back to the modes of assessment we feel familiar with. Even during the pandemic, the pervasiveness of proctoring was a mechanism for recalling traditional practice in whatever replicated form we could muster at short notice.
In his thinking about the familiar, Hegel argues that it separates us from the tensions and uncertainties that make us uncomfortable and returns us to a safe space, with Aldouri (2021) observing that:
The familiar operates in Hegel’s description as an image of a comfortable world in which we no longer experience ourselves (somatically and cognitively) as the site of bifurcations, divisions and tensions (that is, as modern) but are, rather, in a state of quietude, fully reconciled with ourselves in our given socio-historical context.
In the first part of this blog, I quoted Hegel and his argument that the familiar is not cognitively understood. In the Phenomenology of Spirit Hegel elaborates on this assertion by saying:
Quite generally, the familiar, just because it is familiar, is not cognitively understood. The commonest way in which we deceive either ourselves or others about understanding is by assuming something as familiar and accepting it on that account, with all its pros and cons.
The decisions taken by institutions and academics about assessment are not determined exclusively by the efficacy of assessment in supporting and facilitating learning. The same perverse logic applies to authentic assessment, with considerations of economies of scale, integrity and technological capability defining the nature of many assessments. The Hegelian familiar is the other gravitational force at play here. As a sector, we are deeply wedded to the exam as a mode of assessment, returning to it after the pandemic like a lost favourite sweater recently found. We hold an almost romantic attachment to invigilated pen and paper exams held in large rooms, with students cramming every word they can remember into tiny books filled with smudged pen scrawls. Implicitly, and sometimes explicitly, we assert that they did us no harm when we were students. As Rapanta et al (2022) note, ‘…for many teachers, virtual assessment in the Covid-19 pandemic context has been a nightmare because it has been difficult for them to think of another way to assess that is not based on face-to-face or traditional exams’. This is despite how much we know that this method of assessment is flawed (and in the main determinedly inauthentic) (see Dillon et al, 2018). We have returned to exams, rebuilt all the infrastructure and systems, and acted as if they never went away, even after a crisis. We criticise poor handwriting and marking loads, we hate setting multiple exams, and we know exams create significant, sometimes overwhelming, cognitive overload in students at peak times. Yet we return to them in part because the exam is a known quantity, an experience that shaped our own education and one we feel familiar with.
We have deeply inculcated a small suite of assessment modes and practices into our LMS, our technology suite, our integrity detection and our measures of quality, achievement, and performance. This rusts these practices onto our curriculum and quality assurance and enhancement processes. This is what makes it familiar and reduces the tensions created by change, in both staff and students.
We have been discussing and debating the design and practice of authentic assessment for well over a decade now, with little evidence of a widely accepted definition, frameworks or new types of assessment modes, question types, contexts and feedback. As universities have marketised and increased cohort sizes and the breadth of program offerings, the driving imperative for the design and practice of assessment has become coping with scale. Methods that enable the administration of assessment at scale, such as auto-marking, online exams, AI-generated feedback and other efficiency interventions, are fed by more simplistic, dichotomous, memory-based or standardised questions. This culture of scale is informed and catalysed by the administrative culture, which has led to the prevalence of assessment of learning: retrospective and compliant, but neither developmental nor in itself learning.
The critical opportunity for academics is to design authentic assessment for learning. If an assessment task can be completed simply by students demonstrating what they have already learnt (or memorised), then that task fails to be authentic. If feedback is produced only after the point at which students can use it to learn (or feed forward), or exists only to justify the allocation of a numeric grade, then it is not authentic. Design for learning means that students learn through the act of doing assessment. That doesn’t mean that prior learning is irrelevant, but it exists within frameworks of uncertainty, transition and shifting degrees of confidence. Traditional assessment is driven by the opposing forces of fear (of failing or not achieving a desired outcome) and confidence (I have memorised or learnt everything I need to pass or achieve the same outcome). The balance of those forces influences motivation, performance and wellbeing to varying and interesting degrees. Design for learning also increases the weighting placed on critical thinking, reflective practice and creativity, all of which are out of sync with memory-based testing, especially when the highest stakes are placed on practices like rote learning, memorisation and recall.
In part 3 of this blog, I will outline a new definition for authentic assessment and how we can build authenticity into the fabric of the assessment task. It will explore how students and staff engage in the design and practices of authentic assessment.
Photo: Unsplash / Siora Photography
About the author
Associate Dean Education and Co-Director of the Co-Design Research Group