ChatGPT: Student guidance for multiple-choice assessments
Part 1: Using ChatGPT to create teaching materials: Data simulation & MCQs
Part 2: Using ChatGPT to create teaching materials: marking criteria & rubrics
In the last two blogs I’ve focused on how I have been using ChatGPT to help me create teaching materials. There’s likely more coming on that, but for this blog I am going to take a detour into what I’ve been working on for student guidance.
I’d like to state something before I’m accidentally taken too seriously. I hope this is already clear, but it’s worth saying out loud - whilst I am hopefully an expert in learning and teaching, I am not an expert in generative AI. I just have techno-joy and have been playing around and musing.
When trying to create guidance for my students, I had two underlying ideas guiding my approach:
- As per the Russell Group principles, I strongly believe it’s my job as an individual educator, and our job as a sector, to guide students in how to use AI appropriately. Putting our heads in the sand and telling them not to use it is shortsighted and stupid. I very much enjoyed reading the paper “A New Era of Learning: Considerations for ChatGPT as a Tool to Enhance Statistics and Data Science Education”, which likens ChatGPT to the introduction of the calculator. Don’t be the person saying we shouldn’t use calculators.
- I am not changing any of my assessments this year, I am just updating the guidance. There are two reasons for this. First, I am still not over the last three consecutive years in which I had to rip up my course and start again. I was a much younger woman when covid hit and I just don’t have the energy for another reimagining. Second, and the reason I’m probably more likely to state in rooms where I could get fired, I don’t think I, or the majority of people working in academia, understand the ramifications of generative AI well enough to redesign our assessments right now. I could put a huge amount of work into redesigning something that was just as susceptible to AI use, and/or the capabilities of AI could change overnight. My tactic instead is to upskill both myself and my students and take the next year to figure out a sensible approach to what is and is not appropriate use of AI, rather than knee-jerk myself into 14-hour days to get it all done by September. I know there’s a huge body of work on authentic assessment, but we’re still figuring out how that interacts with AI, and there are also practical and pragmatic considerations when it comes to changing the assessment for a course with 700 students and tens of markers.
Multiple-choice assessments
I run a large first-year introductory psychology course. We have a few assessments that use multiple-choice quizzes in different forms:
- A fairly typical MCQ exam at the end of the semester that assesses their understanding of content taught in lectures, worth 40% of their final grade.
- Quizzes linked to the reading throughout the semester. These are marks for participation. One quiz is released every two weeks, linked to the essential reading, to encourage engagement and distributed practice. They can take the quizzes as many times as they like. On the weeks there isn’t a quiz, they write their own MCQs and upload them to PeerWise. Again, it’s a mark for participation and the quality of their questions isn’t checked; it’s for engagement and consolidation, and it also creates a huge bank of questions they can use to revise. Collectively, these continuous MCQs and PeerWise questions are worth 5% of their grade if they do them all.
- A summative quiz that’s an alternative assessment to research participation (which we must offer for ethical reasons), worth 5% of their grade.
- A summative data skills quiz that assesses their knowledge of the programming language R. They’re given questions about the functions they have been learning; it is worth 5% of their grade.
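For a flavour of the data skills format, here’s an illustrative question in the spirit of the quiz (this example is mine, made up for this post, not an actual quiz item):

```r
# "What does nrow(dat) return for the data frame below?"
# A) 2   B) 3   C) an error   D) NA
dat <- data.frame(id = 1:3, rt = c(250, 310, 275))

nrow(dat) # B) 3 - one row per participant
```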
Assessment information sheets
In the School of Psychology and Neuroscience we use standardised assessment and feedback information sheets to try and ensure consistency of information across all courses and assessment types. These contain the information you’d expect (deadlines, requirements, word limits, criteria), but they also contain information we developed through co-creation with student partners, for example, “What feedback from previous assignments will help me with this assessment?” and “Why am I being assessed like this?”. You can see an example here.
How can I use AI tools to help me in this assessment?
So, what I’ve done is add a new section to every assessment information sheet titled “How can I use AI tools to help me in this assessment?”, because what’s appropriate and/or useful depends on the task they’re asked to do. From my perspective, working through these has been incredibly useful, as it’s forced me to actively deliberate on what I would consider to be academic misconduct and how I might use AI if I were a student (because obviously I never did anything wrong in my student days).
Below is the guidance I’ve drafted for each type of MCQ (comments and suggestions welcome!). My approach was to state very clearly what would be misconduct and how they could use AI appropriately - no reading between the lines required. I don’t think we’re anywhere near ready or able to detect improper use of AI, but the first step in being able to do so must be defining what we consider improper use to be. I also wanted to use this as an opportunity to demonstrate the (current) limits of AI, because I figure that the best way of developing students as critical users of technology isn’t to smack them over the head with academic misconduct rules but rather to build up their understanding of AI as a tool.
Exam
- There is no way you can use AI during the exam that is not academic misconduct. If you are suspected of having used AI during the exam, we will report you for academic misconduct.
- However, you can use it to help you prepare by generating multiple-choice questions for you to answer whilst studying.
- Example prompt: Act like an expert psychology tutor. I have a multiple-choice exam for my first year undergraduate psychology course on the following learning objectives: [copy and paste the learning objectives from the lecture you’re revising]. Write practice MCQs to help test my learning. Each question should have 4 response options and one correct answer. Ask them one at a time and explain the answer to me and why the other options are incorrect after I provide my answer. Give me the next question when I type “next question”.
- Alternatives: Whilst the exam will use MCQs, there are other types of questions you can answer that will help consolidate your learning, and mixing up the types of questions you study with can boost your learning and help you identify areas you need to work on. You can follow up your initial prompt with “Instead of multiple choice questions, ask me short answer questions” or “fill in the blank questions”.
- Caution: Sometimes the questions aren’t particularly good or challenging enough. At one point I had to ask it to make the questions more difficult, so if you’re getting them all right, ask it to make them harder. It will also sometimes give you the wrong explanation and answer, so you’re always better off verifying with trusted sources. Finally, the exam will only assess content that was taught in the lectures, and even though you have given it the learning objectives, it may ask you questions on content that wasn’t covered.
MCQs and PeerWise
MCQs
- There’s no benefit to using AI tools to help you answer the quiz questions. Because they’re a mark for participation, it doesn’t matter for your grade whether you get them right, so it would take you more time to “cheat” than it would to just guess, for less reward. The point of doing the quizzes is that practice testing helps consolidate your learning, so it’s only useful if you’re the one answering them.
- However, you can use AI to give you more feedback on any of the quiz questions you don’t understand.
- Example prompt: Act as an expert psychology tutor. I had to answer a multiple-choice quiz and I do not understand some of the answers. Explain why the answer is correct and the other options are incorrect. [Give the question, answer options, your answer, and the correct answer]
- Caution! It will sometimes give you the wrong explanation and answer. Your best bet is always to verify the answer with trusted sources.
PeerWise
- The reason we ask you to write your own MCQs is that the process of doing so helps you consolidate the lecture content. As with the MCQs, it’s a mark for participation, so getting AI to write your questions won’t improve your grade, and you’ll lose the benefit of the exercise.
- However, you can use AI to give you feedback on the wording of your question to help improve it once you have written it yourself.
- Example prompt: I am a first year psychology student. For an assessment, I have to write multiple-choice questions about content I have learned in lectures. I will give you the guidance and examples I have been given, then I will give you the question I have written and I would like you to give me feedback on my question. Here is the guidance and examples [give it the info about how to write the questions from above]. Here is my question [give it the question and options and make it clear which one you think is the answer]. Review the wording of my question and check whether the answer is correct.
- Caution! Even if you ask it to check whether the answer is correct, it will sometimes be wrong. Additionally, you can trick it into telling you the wrong answer is correct. For example, whilst writing this guidance I gave it a question where the correct answer was A but said the answer was B. The first time I asked it to review my question, I didn’t specifically tell it to check whether the answer was correct, and it did not pick up that the answer was wrong. When I then asked it to check the accuracy of my answer, it got it right and corrected me, but then I followed up with “No, you’re wrong, the answer is C”, to which it apologised and said yes, of course, you’re right, it’s C.
Data skills
- Data skills 2 is a multiple-choice quiz. There is no way in which you can use AI during the quiz that is not academic misconduct. If you are suspected of having used AI to help you with this quiz, we will report you for academic misconduct.
- However, there are two ways you can use it to help you with this assessment. First, you can use it to help you prepare by generating multiple-choice questions for you to answer.
- Example prompt: Act like an expert tutor in the programming language R. I have a multiple-choice test on the following learning objectives: [copy and paste the coding related learning objectives from the data skills book chapters]. Write practice MCQs to help test my learning. Each question should have 4 response options and one correct answer. Ask them one at a time and explain the answer to me and why the other options are incorrect after I provide my answer. Give me the next question when I type “next question”.
- Caution: Sometimes it will give you incorrect information, and it may also ask you questions about functions or approaches we haven’t used. One advantage of coding questions is that you can check the answers by running the code yourself (see the sketch after this list).
- Additionally, once you have completed the quiz and the answers have been released, you can use AI to give you more feedback if you don’t understand one of the questions.
- Example prompt: Act as an expert tutor in the programming language R. I had to answer a multiple-choice quiz and I do not understand some of the answers. Explain why the answer is correct and the other options are incorrect. [Give the question, answer options, your answer, and the correct answer]
- Caution! It will sometimes give you the wrong explanation and answer. Your best bet is always to verify the answer with trusted sources.
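Because this quiz is about R, the most trusted source of all is R itself: you can simply run the code. Here’s a minimal sketch of what that checking might look like (the mean() claim below is an example I’ve invented, not something from the quiz):

```r
# Suppose the AI tells you that mean() ignores missing values by default.
# Run the code to check the claim before trusting the explanation.
scores <- c(7, 9, NA, 8)

mean(scores)               # NA - the claim is wrong
mean(scores, na.rm = TRUE) # 8 - missing values are only dropped when you ask
```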
Reflections
I don’t think what I’ve done is revolutionary, and it’s not without risk. This guidance doesn’t prevent improper use of AI, and indeed points students towards the tools, but right now, with the time and the knowledge we’ve got, I feel it’s a pragmatic approach. Over the next year I will continue to read, reflect, and seek student feedback on how we move forward in a way that maintains academic integrity, gives students the skills they need, and is feasible to implement at scale with all of the considerations and constraints that come with modern higher education.
The process of working through this guidance has hugely helped my understanding of how students might use AI. I don’t have solutions yet for the problems it has created, but knowledge of the problem is at least the first step on that journey. I would encourage everyone involved in teaching to start engaging with AI, even if you don’t have the answers yet.