Incorporating Generative AI in Authentic Assessments

The emerging use of generative AI has led to concerns about academic integrity (AAIN, 2023). However, rather than bemoaning its use, we argue that in business education, integrating generative AI can provide insightful and practical ways for our students to demonstrate their abilities and contributions to business in ways a machine cannot. Our recent experience integrating generative AI into a large postgraduate coursework program showed that students were largely overconfident in the output produced, accepting content at face value and applying little critical thinking to the information generated. Students also neglected to cite their use of generative AI, despite instructions that such use was acceptable as long as it was formally cited. This blog describes how we supported students to use generative AI in an ethical manner.

The challenge

The rapid rise of generative AI in higher education in 2023 has challenged us all to consider whether, and to what extent, these tools should be allowed as an aid to student learning and assessment. TEQSA has provided useful guidance from an integrity perspective (TEQSA, 2023).

Information is now not only readily available; so too are plausible, generic answers that students can submit and could earn marks for (Webb, 2022). However, this generated output lacks both ‘burstiness’ (Stokel-Walker & Van Noorden, 2023) and truthfulness. When a human writes, their work has burstiness: sentences vary in length and rhythm. Generative AI, on the other hand, produces responses of similar length and structure, often with a bland, overly objective stance. This lack of burstiness is often the greatest clue that something is amiss when reading machine-generated text.
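To make the idea concrete, here is a minimal sketch (our own illustration, not drawn from the cited papers) that measures sentence-length variation as a crude proxy for burstiness. Both sample texts are invented for demonstration.

```python
# A rough proxy for 'burstiness': the spread of sentence lengths in a text.
# Both sample texts below are invented for illustration only.
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split on full stops and count the words in each sentence."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [len(s.split()) for s in sentences]

human = ("I checked the figures twice. Nothing added up. So I went back "
         "to the original ledger, line by line, until the missing entry "
         "finally surfaced.")
machine = ("The figures were reviewed for accuracy. The ledger was "
           "examined in detail. The missing entry was identified "
           "through analysis.")

for label, text in [("human", human), ("machine", machine)]:
    lengths = sentence_lengths(text)
    print(f"{label}: lengths={lengths}, "
          f"stdev={statistics.stdev(lengths):.1f}")
```

A higher standard deviation indicates more uneven, ‘bursty’ sentences, the pattern typical of human writing.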

ChatGPT, or the ‘Chatty one’ as it is sometimes called, is designed to provide plausible yet creative answers that sound ‘truthy’. It is this plausibility that has led some students to be overconfident in what generative AI can produce. Generative AI is designed to be creative, not truthful. Part of the reason it lacks truthfulness lies in the parameters that set its creativity level. OpenAI’s default setting for ChatGPT aims to strike a balance between providing helpful, informative responses and introducing a moderate level of creativity and novelty. Therefore, while the plausibility factor means what it produces could be true, and sounds true, applying critical thinking to most AI answers will quickly highlight generative AI’s flaws.
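For readers curious about the mechanics, this creativity level corresponds to a sampling parameter commonly called temperature, which is exposed in OpenAI’s API (though not in the ChatGPT web interface). The sketch below is a minimal, hypothetical illustration using OpenAI’s Python client; it assumes the openai package is installed and an API key is set in the environment, and the model name and prompt are illustrative placeholders.

```python
# A minimal sketch of the 'creativity' (temperature) parameter in the
# OpenAI API. Assumes `pip install openai` and OPENAI_API_KEY is set;
# the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Summarise SDG5 in one sentence."}],
    temperature=0.2,  # low values: conservative, repeatable answers
    # values closer to 2.0 yield more varied, 'creative' (and less
    # reliable) output; the API default sits in between
)
print(response.choices[0].message.content)
```

The same prompt run at a low temperature tends to return near-identical answers, while a high temperature produces more novel but less trustworthy text.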

The free version of ChatGPT has an additional flaw: it has no live internet access, and its training data ends in 2021. As a result, much of the information it draws on is out of date (Webb, 2022). Generative AI can also get stuck in its own recursive loop: given a certain prompt, it will keep producing essentially the same answer even when asked to reword it. As a result, an incorrect answer remains incorrect, discriminatory, or out of date.

How are students using generative AI?

Our experience to date shows that highly engaged students viewed the output of generative AI as ‘yet another source’: they both critiqued the source and cited its use. On the other hand, students who engaged with the task only at a surface level, or who relied on generative AI to produce ‘the answer’, were unlikely to give critical, scholarly responses and failed to cite the AI tool as a source. These students produced incorrect, discriminatory and/or bland attempts that lacked human burstiness.

We therefore need to do more work to educate our students on the acceptable use of generative AI. We need to be clear on what ‘the work’ is. While generative AI can do some groundwork, much as students may previously have used Wikipedia or Googled answers, its plausible output may lead students to accept its responses without question. This is particularly evident where students have stayed at a surface level of learning and, as such, are unable to explain the information in their own words.

The approach

Our cohort consists largely of international students majoring in finance and accounting. We must consider this when reflecting on our experience, as it may have shaped the actions students took. As an ethics and sustainability course, our remit was not to teach AI use but to teach students to use AI ethically and in context (COPE, 2023). We therefore encouraged the use of AI as a potential source, and required acknowledgement of its use.

The basis of our approach to supporting students’ ethical use of AI drew on the fundamentals of assessment design. The first principle is to determine the purpose of the assessment. Next, we needed to determine what was being measured, or what ‘the work’ was (Soledad Ibarra-Sáiz et al., 2021). For this assignment, students were to write a memo to their ‘manager’ exploring a solution to a problem and demonstrating their written influencing skills. In the process, students demonstrate their ability to apply concepts encountered in the course to a real business problem.

Therefore, ‘the work’ was the ability to link course knowledge to a complex problem addressing gender equality (SDG5) and reduced inequalities (SDG10) while fulfilling SDG4, quality education. Equally critical to ‘the work’, given this largely international cohort, was the ability to write in their own words. At the same time, the course reinforces scholarly traditions such as supporting assertions through citations.

Assessing ‘the work’

Having determined ‘the work’, we also needed to ensure students knew how it would be graded. Again, the cohort is important, as recognising students’ backgrounds, expectations and concerns can help academics support them in their transformative learning experiences. Clarifying and lifting expectations of outcomes meant a change in grading standards, which were now focused very clearly on measuring the application of course knowledge and its linkages. Applying course knowledge was now ‘the work’ being assessed; having the knowledge was a given in this instance. Providing guidelines on how to incorporate the use of AI tools, such as via an appendix or as a research source, was key to supporting students.

Findings

Our findings show that students had a general lack of awareness of what AI is, with no understanding of AI ‘truthiness’ and ‘fibbiness’. The differences between versions of generative AI are little understood by our students (this could also be an issue for faculty). Students may not even recognise they are using AI (for instance, in the case of the newly released Google Docs Help Me Write) and may not see the problem with using these tools in relation to their studies. The data also showed that those who experienced problems with the use of AI were more likely to be international students, who may lack confidence and experience in their language skills.

Discussions must therefore cover not only what ‘the work’ is but also what constitutes acceptable use, and where AI is lurking (Gendron et al., 2022). From autocorrect to grammar checking, these tools are embedded in our daily lives. However, translating a full assignment from one language to another may not be acceptable if a course learning outcome is linked to written communication. Further, using generative AI as a research assistant, or to reverse-engineer citations, will not meet the learning outcome of conducting traditional research as it was done just a year ago. As the ‘Chatty one’ is creative, it ‘invents’ citations too!

Supporting students

Education is key to building skills and capabilities for success in this new AI accessible world (Markauskaite et al., 2022). We are now engaging students in a discussion about what ‘the work’ is. This has been critical. Also critical is a discussion about the value of their degrees and what they signal to the world at large. Discussions in class have included:

  • Are you translating a word or a paragraph?
  • Do not generate your whole work: you must be able to demonstrate that you can communicate in English to a certain standard.
  • When is AI a co-author, and when is it an editor?

These are all the start of a change in how we educate and support our students to use generative AI ethically (Foltynek et al., 2023).

We brought the students back to demonstrating their learning in an authentic and applied manner. Generative AI was deliberately promoted as ‘another source’ to draw information from. The approach also highlighted several limitations of AI, meaning that students must refine, fact-find, sense-check and integrate various sources. This shifts students to higher-order thinking and mimics more closely the roles they may take in their future jobs. The future of work will see generative AI integrated into day-to-day operations. Our role in business schools is to ensure our students can use generative AI ethically. They must be able to do the ‘work’ their degree certifies them for.

***

Feature image created by Alison Casey with generative AI tool Adobe Firefly.

About the author

Associate Professor Lynn Gribble

Associate Professor Lynn Gribble is an Education Focused academic in the School of Management and Governance at The University of New South Wales Sydney. Awarded an AAUT citation for her leadership and impact as a digital innovator, she has taught management to large classes of Master of Business Administration and Master of Commerce students for 15+ years, pioneering the use of voice recordings, audience response platforms and learning analytics to personalise every interaction with her students and increase both their engagement and learning outcomes. Lynn co-leads Communities of Practice in Digital, Online Learning and Innovation, and the 4Cs (A Strategic Approach to Impact), and is a Senior Fellow of Advance HE (UK).

Associate Professor Janis Wardrop

Associate Professor Janis Wardrop is an award-winning academic leader, educational change agent and commentator on management education, business ethics and governance. With 15+ years’ experience in academia as lecturer, program leader and manager, her expertise lies in adopting a holistic approach to curriculum design and delivery. She is a leader, mentor and champion of education-focused colleagues both at UNSW and across the sector. Janis is also co-Chair of the NSW branch of the Higher Education Research and Development Society of Australia, and serves on the Oceania committee of the Management and Organizational Behaviour Teaching Society.
