“AI” and Academic Integrity
In another forum, I recently wrote, “Current fears about the impact of AI on academic integrity only highlight what we have known for a very long time: our habitual methods of assessing student learning, including at higher levels of research training, are fundamentally flawed.” Studies conducted long before the present AI media-melodrama demonstrated that more students cheat than we detect (see, for example, the devastatingly powerful work (n = 14,086!) of the late Tracey Bretag and her team). The integrity problem lies in how we teach and assess student learning. Even worse, there is a stratification by wealth: rich students can afford the finest quality of cheating, which remains largely undetectable. We have always known this as a Higher Education sector, and yet, for the most part, we carry on setting the same old kinds of assessment. This is a generous gift to the large multinational cheating companies that are our symbiotic shadow industry. The devil might even argue that ChatGPT has somewhat levelled the playing field, democratising aspects of this technology and making them more freely available to all students.
If we were not able to guarantee the integrity and equity of our assessment frameworks at unit level, our courses would be devalued. Under UNE policy, the responsibility for assessment design and the integrity of marking is part of the delegated authority held by unit coordinators—subject to appropriate governance accreditation. If, as a UC, I have reason to suspect that my assessment framework is vulnerable to any kind of cheating, it is my responsibility to change it. My responsibility is to ensure the integrity of the outcomes of my teaching and how student learning is assessed. Students are also responsible for the integrity of what they submit; however, academic integrity is an educational issue, and if we don’t teach students about this (in every unit and course offered by HASS), who will?
Since last year, we have faced a huge increase in reported integrity problems. This is partly because AI-assisted cheating is easier to detect: a large percentage of what LLM (large language model) systems churn out is obviously garbage (expensive garbage, too, as it arguably harms the planet in the process, burning up colossal amounts of energy and water). Rather than just waiting for the tsunami of cheating to ruin everything we value in university education, we can take action on one crucial front: we can rethink how we assess every learning outcome from the ground up.
How to do this? TEQSA, our benevolent industry regulator, has been publishing extensive guidance over the past year or so, and it expects that, as educators, we are thoroughly informing ourselves. Many useful resources are helpfully curated for us here: https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/artificial-intelligence
Of particular interest for practical application is this guide from UTS, recommended by TEQSA: https://lx.uts.edu.au/collections/artificial-intelligence-in-learning-and-teaching/resources/quick-guide-for-adapting-to-ai/
If ever there was a time to revise and reinvent assessment tasks, it is now. If you’ve been thinking, ‘One day, I’ll fix that assessment’, do it now, and I’ll see you on the other side!