Short Guide 9: Assessment in the Age of AI
This CIRTL Short Guide provides a brief overview of AI in the context of Higher Education, along with specific suggestions for designing assessments that are less vulnerable to AI misuse. It closes with examples of ways that staff at UCC have designed assessments which embrace and/or mitigate AI.
What is AI?
“AI” in the context of education usually refers to generative AI, such as chatbots (e.g. ChatGPT) which use machine learning to create text that could conceivably have been written by a human, to write code, or to solve equations. Users type a question or prompt into the chat and the AI generates a response, which can then be refined (in real time) through conversation with the user.
AI chatbots like ChatGPT are a type of Large Language Model, ‘trained’ on vast quantities of text from the internet. Responses are generated by recognising patterns and then predicting the most likely continuation – much as predictive text on a mobile device suggests the next word in a sentence.
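If it helps to make the predictive-text analogy concrete, the toy sketch below (in Python, purely illustrative, and nothing like a real LLM’s neural-network internals) ‘learns’ from a single sentence by counting which word most often follows each word, then ‘predicts’ continuations the same way.

```python
# A toy 'predict the next word' model: it counts which word most often
# follows each word in some training text, then predicts accordingly.
# Real LLMs use neural networks trained on billions of words, but the
# underlying idea (predict the most likely continuation) is the same.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"

# Build a table of word -> Counter of the words that follow it.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))  # -> 'on'
print(predict_next("cat"))  # -> 'sat' (based only on this tiny sample)
```

Because the toy model only knows its tiny training text, its ‘knowledge’ is exactly as limited and as biased as that text – a small-scale version of the limitations discussed in the next section.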
If you’d like to read more on generative AI in education, we recommend starting with either this Generative AI Primer from Jisc or this excellent AI in Education course from the University of Sydney.
Limits of AI
Watching coherent, grammatical prose on practically any topic appear on the screen as if by magic can make AI seem like an unstoppable force. But, like all technology, it has its limits. Most importantly, AI is only as good/accurate as its training materials (e.g. the free version of ChatGPT’s ‘knowledge’ stops in 2021 while paid versions have access to the internet and more up-to-date materials) and the prompts and clarifications entered by the user. It doesn’t ‘remember’ conversations as such, so entering the same prompt at different times can result in dramatically different responses.
Additionally, AI can be incredibly biased (depending on the material it has been trained on) and has been known to “hallucinate” and produce entirely erroneous material – including citations to texts that do not exist – so anything it produces should be taken with a grain of salt and, ideally, checked for accuracy and coherence – which, of course, requires a degree of expertise and knowledge of the topic.
Lastly, AI really struggles with less standard requests requiring creativity and forward thinking. As it relies on predictive modelling to construct its responses, anything with a very standardised format is much ‘easier’ for generative AI than something unique or unexpected.
AI and Assessment
Conveniently, we can use these limitations to help design assessments which are less vulnerable to AI-powered academic dishonesty. After all, as early efforts demonstrated, attempts to ban AI entirely will almost always fail. New tools are constantly appearing while old ones are evolving; it’s nearly impossible to accurately identify their outputs; and as generative AI is integrated into tools such as Microsoft Word or Google Docs, it’s becoming increasingly difficult to determine where the line between AI- and human-created content lies. Lastly, AI will almost certainly be part of students’ lives once they leave UCC, so it’s worth thinking seriously about how to provide them with the tools to best navigate the world they’ll find upon graduation.
So What Can We Do?
Jisc describe three broad approaches to AI in assessment:
- Avoid (e.g. vivas or in-person exams)
- Outrun (e.g. AI detection software)
- Embrace (and adapt) (e.g. redesign assessments)
In most contexts, the third approach is the most practical and sustainable, so some suggestions for (re)designing assessments are provided below.
Types of Assessments
One response to the increased availability and sophistication of AI tools is to create more assessments which explicitly demonstrate the student’s intellectual ownership of their work – whether through in-person assessments such as presentations, vivas, and invigilated exams, or through assessments which document process, such as the annotated drafts idea from Shane Crowley (Food Science) below. However, it is not always appropriate (or advisable) to assess student learning in these ways, so it’s worth considering alternatives.
When writing an early draft of this short guide, I asked ChatGPT to suggest some assessments which would “limit its usefulness” and found its response really interesting. Not only did it suggest many of the same assessments or approaches we’d recommend, it also included a brief explanation of why it would potentially struggle with each. Some of the most broadly applicable ones are below (you’ll notice that ChatGPT used US English for its responses); click here if you’d like to see the full list.
- Emphasize higher-order thinking: Focus on assessing higher-order thinking skills such as analysis, synthesis, evaluation, and application rather than simple recall or factual knowledge. Design questions that require students to demonstrate their ability to think critically, solve problems, and make connections between different concepts. These types of questions are less likely to have direct answers that can be easily generated by ChatGPT.
- Use scenario-based or case-based assessments: Present students with real-life scenarios or case studies related to the course material. Ask them to analyze the situation, identify relevant concepts, and apply their knowledge to propose solutions or make informed decisions. By using contextualized assessments, you challenge students to demonstrate their understanding in a practical context that may be less predictable for ChatGPT.
- Incorporate collaborative elements: Encourage collaboration and group work in your assessments. Assign tasks that require students to work together, discuss ideas, and synthesize information from multiple perspectives. This approach not only promotes active engagement but also reduces the reliance on ChatGPT as students are required to contribute their own insights and thoughts.
- Include authentic assessments: Authentic assessments mirror real-world applications of knowledge and skills. Assign projects, presentations, or research papers that require students to delve deep into the subject matter, apply critical thinking, and demonstrate their ability to synthesize information. Authentic assessments are inherently difficult for an AI model like ChatGPT to replicate as they often require creativity, originality, and context-specific understanding.
- Provide opportunities for reflection and self-assessment: Incorporate reflective elements in your assessments, such as asking students to evaluate their own work or explain their reasoning process. This encourages students to think metacognitively about their learning and helps them develop a deeper understanding of the subject matter beyond what ChatGPT can provide.
Beyond Assessment
While assessment is the focus of this short guide, it’s important to remember that assessment is just one part of Teaching and Learning. Therefore, it might be worth considering some of the ways that you could use AI in your teaching:
- quickly generate MCQs with answers and distractors (see the sketch after this list)
- develop a draft lesson plan
- produce a piece of writing for students to analyse
- play the role of an interviewee in role plays
- solve a mathematical problem
- write code
- provide an initial draft to be edited
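As an example of the first item above, generating MCQs can even be scripted. The sketch below is a minimal illustration, assuming the official `openai` Python package, an API key set in your environment, and an illustrative model name and prompt (all of which you would adapt); as with any AI output, the generated questions should be checked for accuracy before use.

```python
# A minimal sketch of scripting MCQ generation with the `openai` package.
# Assumes OPENAI_API_KEY is set in your environment; the model name and
# prompt wording here are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write three multiple-choice questions on photosynthesis for "
    "first-year undergraduates. For each question, give four options, "
    "mark the correct answer, and make the distractors plausible."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

# Print the draft questions for review; always check them before use.
print(response.choices[0].message.content)
```

The same pattern (a carefully worded prompt, then human review of the output) applies to the other items on the list, whether or not you ever script it.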
Of course, if you do use ChatGPT or similar tools in your teaching, make sure to model best practices for your students by explicitly citing your use of AI and discussing how/why you used it.
AI and Academic Integrity
Whatever assessment decisions you make, please seriously consider having at least one conversation with your students about your goals for the assessment (ideally linking it back to the module and programme learning outcomes as well as discipline-specific skills and knowledge) and about where and how AI fits. If you decide to forbid the use of AI, explain why; if you decide to allow limited use – or, indeed, to design an assignment actively requiring students to engage with AI – explain the reasoning behind those decisions, too. And, of course, give students guidance on how to document and cite their use of AI (with specific examples of how to do so, as the standards are still evolving), as UCC’s updated plagiarism policy explicitly includes AI/ChatGPT. If you use ChatGPT in a professional capacity, model this use for your students, explaining why and how you use and acknowledge it.
The more transparency there is around the rationale behind assessment decisions and the lines around acceptable or forbidden AI use, the better students will be able to meet these expectations. At the end of the day, AI is a tool and, as such, is neither inherently good nor inherently bad – what matters is how it is used. But to use it effectively – or to follow advice to avoid it completely – students need to understand what AI is and why it is or is not appropriate and, perhaps most importantly, to see the ways their own experience and expertise shape its use and effectiveness.
Examples from UCC
UCC staff have designed – or redesigned – their assessments in the wake of these developments with some embracing AI and others seeking to limit its applicability to their assignments. As noted above, there is no one-size-fits-all approach to designing assessment in the face of AI advances, but the examples below are a great reminder that it can be done – and done well!
Gillian Barrett (Management and Marketing) and Ciara Fitzgerald (Business Information Systems)
In our final year undergraduate entrepreneurship module, we conducted an experiential assessment with 130 students. Students were asked to choose a ‘local’ small and medium sized organisation (SME), interview a manager to assess and analyse current business model(s) and to propose a complementary business model for future growth opportunities.
Students were then required to prompt ChatGPT on the current and future business models of their chosen SME. The role of the student was threefold: first, to understand the importance of ‘asking the right question’ to improve their learning (Abdelghani et al., 2022); second, to evaluate and critically analyse the ChatGPT output (Mollick and Mollick, 2022); and finally, to practise ‘responsible use’ of ChatGPT (Cacciamani, Collins and Inderbir, 2023).
The students were initially surprised at the inclusion of ChatGPT (given its novelty and disruptive nature); however, this assessment element helped students to leverage their learning and, overall, to affirm confidence in their knowledge.
Joel Walmsley (Philosophy)
In spring 2023, I redesigned my essay assignments so that students were required to use ChatGPT – and to document their process – as part of the work. Doing so still meets the learning objectives of the module, and still requires that students engage with the texts and ideas that we discussed in class. But it also provides an introduction to the technology itself, by enabling students to learn, and demonstrate, best practice in using it.
In my module on Philosophy of AI, I usually assign one essay question on the Turing Test, and another on Descartes’s contention (from 1637!) that it is “inconceivable” that a machine could use language in the way that humans do. But this year, instead of the more familiar process of analysing the literature and examining the objections, students were required either to conduct a Turing Test with ChatGPT, or else to get it to answer in the style of Descartes and develop a dialogue accordingly. In both cases, students had to document their process with screenshots and commentary, drawing on both the texts and ideas we’d discussed and their own ideas about the philosophical content and the prompts they chose.
The resulting essays were a delight to read. Students produced creative, insightful, and highly original essays that demonstrated exactly the kind of engagement and understanding I was hoping for. Furthermore, one student told me that the thought of “cheating” hadn’t even crossed their mind, because the project was so much fun. I was pleasantly reminded of this recent tweet from Andrew Ng (former head of Google Brain).
Click here to read a longer description of this assignment.
Shane Crowley (Food Science)
The following is extracted from a longer post on assessments that use self-documentation and versioning as a way for students to show their work when submitting written assessments.
Students are asked to write because writing is considered useful. Muddled thoughts can be clarified once an attempt is made to write them down. Extracting key pieces of information on a topic and combining them into an overview is often far more effective than merely reading about the topic.
Although writing is a process, student work is often corrected as a static artefact. The final version is assessed and deemed a reflection of that underlying process. In an era of Large Language Models, there is an increasing probability that this final version is the only version – and that it was drafted by an undetectable AI.
Perhaps, then, the process of writing – what is often valued – needs to be re-emphasised. An analogue solution is simply to require students to perform their writing in person. However, this is not always a fair, accessible, or practical approach. Serious essays, literature reviews, and group projects often involve many hours of work, dead-ends, and revisions. It is not feasible to migrate such projects to the classroom, and doing so is also not aligned with an increasingly digitalised and distributed workplace.
To be implemented effectively, versioning has to be explained to students and to teachers. Students have to approach their work in a disciplined, considered manner. Teachers need to account for how the work was produced and not merely how it appears in its final form. The structure and transparency that this introduces may create clearer expectations for student writing and improve how it is assessed.
Additional Resources
- “A Generative AI Primer” (Jisc, 2023)
- Generative AI Guidelines for Educators (National Academic Integrity Network, NAIN)
- AI in Education (an excellent short Canvas course put together by staff and students at the University of Sydney which explains how generative AI works, the different tools, ways to use it in an educational setting, and how to reference/credit AI)
- Assessment Reform in the Age of Artificial Intelligence (Tertiary Education Quality and Standards Agency, Australia)
- 101 Creative Ideas to Use AI in Education (open-access eBook edited by Chrissi Nerantzi, Sandra Abegglen, Marianna Karatsiori, and Antonio Martínez-Arboleda)
- AI and ChatGPT resource (UCC Skills Centre)
- Teaching in the Age of AI (Center for Teaching, Vanderbilt University)
- (AI)²ed Project (UCC staff and students paired to evaluate assessment and experiment with ChatGPT - Toolkit for the Ethical Use of AI forthcoming in Sept./Oct. 2023)
- ChatGPT bibliography (a Zotero library curated by Lee Skallerup Bessette, Center for New Designs in Learning and Scholarship at Georgetown University)
- Practical Responses to ChatGPT and Generative AI (Montclair State University)
- UCC's Academic Integrity for Examination and Assessment Policy
Resources for UCC Staff
(note: you may need to be logged into your UCC account to access some of these resources)
- UCC Plagiarism Policy
- Fostering Academic Integrity in Learning and Teaching (UCC Digital Badge with information/resources on artificial intelligence and academic integrity)
- Derek Bridge, UCC (Computer Science) contextualising ChatGPT in February 2023 (Click here for the 10-minute summary or click here for the full, 25-minute version; must be logged into Panopto with your UCC credentials to view)
- CIRTL seminar series on assessment design (recordings and resources from Spring 2023 sessions)
Questions? Suggestions?
AI is a constantly evolving area, so we'll be updating this Short Guide often and will also create more AI Short Guides as our expertise expands. If you have suggestions for future AI Short Guides or resources, please let us know! You can also use the same form to ask any general AI questions you may have, and we'll integrate the answers into this (and future) Short Guides. (Please note that this is an anonymous survey, so we can only respond to you directly if you include your contact details with your response!)
Click here to submit questions or suggestions to CIRTL, or email Dr Sarah Thelen (author of this Short Guide) directly.