\n
Hello, I'm Sarah Howard, and I'm here to talk to you a little bit about AI and education. There's been a lot of excitement about this right now, particularly around one tool, ChatGPT, which has got everyone thinking: what does artificial intelligence mean for teaching, learning, and specifically assessment? The important thing to remember is that when it comes to technologies and education, we've been here before. We've had a number of technologies come into education, from film to radio to television. AI is another piece of that, and it's actually been here for a while; ChatGPT has just made us stand up and look. But as we have with other technologies, we will adapt to it, we will learn how to work with it, we'll integrate it into our practices, and we'll be ready for the next thing that comes along. And importantly, particularly around assessment, which in the higher education space has gotten everyone really worried: good assessment is still good assessment.
\n
So we're going to talk a little bit about how we think about assessment and how we design for the new AI tools that are available to us, specifically ChatGPT. To design with AI in mind, we need to develop our AI competencies. What do we know about artificial intelligence? What can artificial intelligence do? And what can humans do? The first thing is to know a little bit about tools like ChatGPT.
\n
ChatGPT, and again, this has been everywhere, I'm not telling you anything you wouldn't have read, is a type of AI called generative AI. It's highly sophisticated: it can capture text, images, and music and reconfigure them into new objects, into new music, but not necessarily anything genuinely new. These are summaries and reconstitutions; they are what we already know.
\n
This is a kind of narrow AI, meaning it does one specific thing, and it's driven by a question that a person poses to it. It's driven by what you ask it, and that's really important when we think about how we use it and what it's capable of. And ChatGPT is fallible. It's guessing when you ask it a question. If you ask it, say, what is the reason for learning the states and capitals of a particular country, it's going to give you its best possible answer, but it's guessing, because it doesn't know the answer. It's always guessing. I had a colleague ask ChatGPT, what is three plus three? It said six, which is the correct answer, obviously. He said, no, it's four. And ChatGPT corrected itself and said, oh, I'm sorry, I made a mistake, I will work that into my database.
\n
So what it can't do is judge; it can't judge the quality of an answer. It can give you, mathematically, the best possible answer. Humans, on the other hand, can critically engage with information, knowledge, and learning experiences. They can judge. And this is a really important distinction when we think about what we need to know as educators to design assessments, learning exercises, and activities that take advantage of artificial intelligence, but also make sure that our students are doing the work we want them to do and can demonstrate the knowledge we want them to have. Importantly, when we think about using AI, as I said, it's a narrow AI: you have to ask it questions. And to get a good answer, and there are lots of examples of this out in popular media and in the news, you have to actually ask it really good questions.
\n
You have to ask it probing questions. You have to ask over and over again. People spend hours querying this particular technology to get the answers they want. To do that, you have to know what you're asking. The same goes for students: for students to use that tool to address a good assessment question, one where you've asked them to critically engage with a topic or to draw on multiple sources, they have to really query that tool, which means they've got to really know what they're asking, which means that in the process they're exhibiting a depth of knowledge of the content you're asking them to engage with. So that doesn't sound too bad. In the end, when we think about the basics to consider when we design our assessments and when we think about the use of AI in teaching and learning, we want to build our own understanding as experts in our field, but also think about how we expect to draw that expertise back out of our students so we can see where they are.
\n
We need to build our understanding of the limitations of AI, of ChatGPT for example; I've just touched on a few key points, but we need to build our own understanding. This means playing with it, experimenting with it, and talking to our students about how we use it and how they might have used it to help them in their work. What's important is that with a well-designed assessment task, with well-designed work, students can do much, much more sophisticated work than what comes from artificial intelligence such as ChatGPT. But we have to design those tasks, those assessments, to bring that out of students. If we ask them something low-level, they're going to give us a low-level answer, something the AI can do. But if we ask them sophisticated, challenging questions, which is what we should be doing, that's quality assessment, then they will have to give us that. So that's what we need to think about when we think about our capacity to use AI, how we want our students to use it, and how we want to challenge our students within this changing technology landscape. Thank you.
\n