This three-part blog post will look at the challenges of assessment in modern universities, both in terms of defining it and in terms of designing and enhancing it to ensure that it can deliver both the benefits of assessment of learning (which institutions require) and assessment for learning (or perhaps shifting that common conceptual framing to learning through assessment).
There have been millions of words written about the practices of assessment in higher education. It has been deeply studied, critiqued, reimagined and redesigned to the point where defining what assessment is has become an essentially meaningless exercise. Assessment has become the measure by which success is determined, both from the perspective of the student and their learning journey and in how it forms the pathway to graduate employability through the heavy lifting of assuring learning and graduate attributes (see Oliver, 2015).
Assessment is also one of the most derided and criticised parts of the higher education experience, featuring as it does in the lower reaches of many student satisfaction surveys (although not always directly correlated with the indicators of poor teaching, see Burgess et al, 2018 and Langan & Harris, 2019). It seems nobody really likes assessment. Students hate doing assessments; academics hate marking them. As one student noted in a 2007 study by Crossman, ‘Yes, if I see the value, I just swallow my dislike and just get into it’. She added, ‘I see the value in them; I just absolutely hate doing them’. Assessment is also the most systematised, policy-driven and fear-informed process in higher education, often worthy of its own separate policy ecosystem, with controlled affordances for its stress and overall life impact, and organised like the rules of the road, with all the fear of punishment that comes from infractions and the reward for successful performance.
The ‘doing’ of assessment has moved further and further away from notions of learning through its conduct to become defined by compliance, accreditation and fears of integrity and cheating. Zajda and Rust (2020) argue that managerialism and the neo-liberal, marketised university have made academic staff increasingly accountable for the resources allocated to assessment, especially as student numbers increase. Assessment practices are often front and centre in media coverage of higher education, from how technology is being used to disrupt or take over assessment functions (see this article on the ‘dangers’ of exam proctoring, amongst hundreds of others), to how a university education is not helping failing students (see this article on why students fail), to how it has become weaponised within the industrial disputes that have disrupted universities since the pandemic (see this article on the marking bans in the UK). These issues are magnified and multiplied at scale, where assessment and feedback capacity is increasingly metricised, subject to continued pressures to be more productive and to identify opportunities for automation in increasingly time-compressed and number-expanded teaching periods. Graham Gibbs noted in 2006:
There is less academic time available per student and intense pressure on academics to increase research productivity. At the same time there is increased bureaucracy associated with meeting external requirements for quality assurance and requirements for accountability concerning use of funds. Even if student numbers had remained stable it would have been difficult to maintain the previous level of academic time invested in assessment.
I have explored the impacts of scale on teaching and learning in greater detail in these previous blog posts starting here.
Authentic assessment has been used as a conceptual marker differentiating better, more ‘authentic’ assessment practices from the derisively labelled ‘traditional’ forms such as exams and multiple-choice tests. Authentic assessment has been deployed as the panacea for the challenges created by the graduate employability agenda (Sokhanvar et al, 2021), the impacts of generative AI on integrity (Gonsalves, n.d.) and the unholy and unexplainable link between student satisfaction, university rankings and successful domestic and international student recruitment (Manville et al, 2021). Wiggins (1996) argues that authentic assessment is where we (the academy) ‘…directly examine student performance on worthy intellectual tasks’, with ‘worthy’ implying the apparent unworthiness of alternative assessment tasks. Another common frame for defining authentic assessment is context. Darling-Hammond and Snyder (2000) argue that authentic assessment develops the capacity of learners to adapt their learning to different contexts as their career progresses. The most common definitions of context conflate the notion of assessment in the context of the ‘real world’ (see Ashford-Rowe et al, 2017 and Karunanayaka & Naidu, 2021) with assessment tasks that apply skills to the ways of working students will use in their jobs, and potentially only in their first graduate job (see Villarroel et al, 2018 and Wiewiora & Kowalkiewicz, 2019).
Authenticity is another overused and poorly defined corollary to the theory and practices of assessment. Petraglia (1998) argues that the notion of authenticity, however vaguely defined in the assessment literature, is so familiar it does not need defining, though as Hegel warns in his 1807 philosophical treatise ‘The Phenomenology of Spirit’, ‘quite generally, the familiar, just because it is familiar, is not cognitively understood’. Heidegger argues that authenticity resides within our sense of being and our relationships with it. There is, in the Heideggerian world, no single authenticity, with learning emerging from where individuals feel unsettled or unhomely with their own being or identity. It is this angst at the ontological state of not knowing that can be educative within learning experiences and thereby positive (Withy, 2015).
At the core of any analysis of the definitional and modelling frameworks of authentic assessment is a decided (but, to be fair, not exclusive) absence of student agency and of the current and future ontological state of the student. Paul Gibbs in his 2001 article on the marketisation of higher education argues that:
The adoption of this model of the market for HEIs, and its accompanying discourse of marketing, is based on a manifestation of the concept of rights, particularly consumer rights, and can be seen in the move towards structured, consumable education through modularisation, semesterisation and self-directed learning. This leads to education being dealt with as a commodity. The sense of ends rather than means that this confers is most visible in outcome-driven education. Here process is incidental and the outcome sought is not an educated person in the classical sense, but an accredited person able to use their educational outcomes (or competencies) to further their economic desires. (p.87)
Gibbs argues that process and the journey of higher education, as made manifest through assessment, have been replaced by a problem-solution mentality more akin to consumer behaviour theory than to education. Many articles have attempted to navigate this tension through the application of a wider theoretical lens to authentic assessment, such as Freirean critical pedagogy (see Serrano et al, 2018) or work-based or real-world learning (Archer et al, 2021). Others have leant into the ends-versus-means dialogic and used outcomes as a starting point for the design and rubricisation of authentic assessment (see Gulikers, 2004 for example).
Academics design assessments, whether it be for context, for perceptions of relevance to employability or to comply with policy and accreditation. Students undertake assessment often with little or no agency over its design, its content or its impact. Assessment is familiar; we have all engaged in assessment since we were small children. We have been poked, prodded and tested for decades. But assessment is increasingly not designed for students. It is designed for teaching. Ramezanzadeh et al (2017, p.299) argue the counter logic, positing that authenticity exists in how the assessment is framed by the student and in the teacher's own framing of self, noting:
The results revealed that authenticity in teaching consisted of themes of being one’s own self, pedagogical relationships, contestation, and ultimate meaning which were enacted in the participants’ practices through their sense of responsibility, awareness of their possibilities, understanding of pedagogical relationships, self-reflection, critical reflection, and critical hope.
This leaves educational designers and academics with a challenging quandary. Authentic assessment gets wheeled out each time another disruptive force threatens to change higher education forever (see Bryant, 2023). It is used as a counter to the regulatory arguments that institutions need to do more to prepare students for the challenges of work. Authentic assessment design is removed from the purposes of learning to the point of becoming abstracted compliance in the form of graded tick-box exercises (the very thing that authentic assessment should be railing against). Yet in the wider context of being truly authentic, deep engagement with the student, their agency over how the assessment relates to their learning and the benefits of an assessment design that utilises critical reflection are central to discovering pathways to assessment practices that are for learning. To do this, we need to redefine authentic assessment within both the philosophical framing of authenticity and the institutional and quality assurance purposes that assessment will always hold in a modern university.
In parts 2 and 3 of this blog, I will explore a (re)defining of authentic assessment and posit a framing of different modalities of authentic assessment that straddle this pedagogical/institutional conflict.
About the author
Associate Dean Education and Co-Director of the Co-Design Research Group
Authentic assessment is not hard to understand: get the student to do what they will have to do when they graduate, and see how well they do it. The hard part is providing the opportunity for the student to do it, either in a real workplace or a reasonable simulation. As an example, I am sitting in the games control for the Australian Crisis Simulation Summit in the Moot Court of the Australian National University in Canberra. A dozen students are sending out fake news items and government statements to a hundred others playing the role of government teams dealing with cyber attacks, plague, and war, across Australia and the USA. This is a good way to learn to run a country, but how would you grade it?