My name is Dr Armin Alimardani. I teach at the School of Law, University of Wollongong. I generally teach Law and Emerging Technologies and Criminal Law, and hopefully in the future, Law and Artificial Intelligence. One of the main objectives of Law and Emerging Technologies is to prepare students for the future, to kind of future-proof them. So I wanted them to learn some of the basics of these emerging technologies, and of course one of them is artificial intelligence.

So I started with teaching them how they can use it and, more importantly, what its limitations are and where they can use it appropriately, or not. If Generative AI is used responsibly, I think it can have many benefits for students.

One simple example: students have some text that is pretty lengthy and they have to shorten it, so they can ask Generative AI to make the text a little bit shorter. That's one way. One of the things I really like is when students write their assignment, feed it to Generative AI and ask it to generate ten killer titles for their assignment. And how about we use it to counter-argue our own arguments, to figure out what the holes in our arguments are?

Incorporating Generative AI into my third-year subject was pretty stressful, because I wasn't sure how it was going to look or what it was going to be. No one had ever done that before. It was just kind of new to everybody, and honestly I wasn't sure how I was going to pull the whole thing off. Everything came together bit by bit, but I knew for a fact I wanted to teach the students how they can use it and how to avoid potential problems, because I know that's one of the areas of concern at universities and in workplaces in general.

So around four times every week we practised some Gen AI: how to generate things, where they can rely on it, and how they should fact-check it. Then for their assessments, I looked at the assessment criteria, and one of the criteria was creativity. And I thought it would be a cool thing to use Generative AI and see whether students can use it for creativity purposes.

Student perspectives on gen AI - Zhifei

UOW Student perspectives

I have used generative AI in my study, such as ChatGPT, which is a very popular tool that a lot of students are using now. At the beginning I was just very curious about how it works, but I later realised it can be very helpful for someone who's not from an English-speaking background, because my assignments are mostly essay writing, and a lot of the time I struggle with learning how to write in English.

I did look for help from the university, like the writing assistant, and they do provide academic consultations for students on English writing. But with ChatGPT, it can give you feedback very quickly, and then you can continue to ask it questions without feeling like you're going to be judged, or that you're being stupid or anything. So in terms of that, I feel like I've had good experiences with it.

But also, because I'm writing about cultures and contemporary society, I do sometimes feel like the feedback it gives me is ideological, like it contains certain assumptions. That said, I'm glad that we are able to judge whether the feedback it's providing us is good or not.

So in my future career, say if I am going to become an academic scholar, I will still see it as a writing or language tool that helps me to develop my ideas, or that even works as an AI note-taker that I can tell my ideas to, and then it helps me map out what I can write. I do think that in terms of administrative aspects it will save me a lot of time in dealing with all those administration things.

I do have concerns about using it, because the way it was built can be biased, even though it says it has been trained to be a non-biased or, say, non-judgmental AI. Still, in my experience of engaging with it, I have spotted some cases where it was very judgmental and also biased in terms of, say, gender and racial aspects. Which is why I feel like school or university education cannot be replaced, because you need some guidance and also a society or community in which to practise the knowledge that you've learned.

So we need people to raise awareness of the problematic sides of AI, not just to escape from it because it's so problematic, but to learn more about it.

Developing guidelines for responsible GenAI use in assessments

Student Career Development | Learning, Teaching and Curriculum (LTC)

Dr Matalena Tofa

My name’s Matalena Tofa and I’m a senior lecturer in career development, learning and work integrated learning. And this semester and trimester, I’m teaching postgraduate subjects that are focused on career development learning. And so some of the things at a big picture level that I was hoping to achieve were that students could make informed choices about whether and how and when they use Generative AI both to support their learning and as a future professional.


And secondly, that students can use Generative AI tools with a sense of criticality, and have the confidence and knowledge to question and evaluate the output they generate.


And thirdly, that students would be able to use Generative AI in ways that are consistent with instructions and policies, which possibly sounds a bit boring.


But I think it’s important as a professional because if you’re using it in your future career, you need to use Generative AI in ways that are consistent with company policies and legislation and professional standards and so on.

Student ambassador perspectives: The impacts of genAI

UOW Student ambassador perspectives

Student 1: People started talking about it, and I'm like, oh my God, what is this? It's, like, good to use as a tool.

Student 2: ChatGPT is sort of an AI tool for us to generate things, or just to have a chat with.

Student 3: I honestly think artificial intelligence is a great way to learn, just because you can ask it any question that you have, like, oh, I'm revising this concept and I don't understand this particular part of it.

Student 2: If there's something that you're not clear on and your lecturer isn't aware of it, you can just ask these kinds of AI tools so that you'll get answers.

Student 1: I think it's a really good thing to use for learning, because if you can't understand something that your tutor is saying, you can ask it and have it explained to you in a different way. So there's definitely pros and cons to it.

Student 2: I study, like, mechanical components. Say I don't understand a specific part that I'm learning about or that I have to design. I could ask it, oh, I need help with this. Or if we are doing a coding thing, maybe help with that. Not letting it write it for you, but just, oh, I'm having trouble with this particular thing.

Student 1: There's also that plagiarism aspect of it that I guess people are concerned about.

Student 4: The idea of having something write my assignments for me, or write articles for me, I just, I don't trust it (laughs).

Student 3: I feel like, although we need to keep in mind that it can be used for cheating and things like that, it can have a lot more of an impact on learning as well.

Student 2: I just used it to generate a speech text I'm going to use tomorrow. I don't know if that's good, but I'll just see if it works.

Student 3: I've only used it once or twice, just for help with my resume and things like that.

Student 2: The impact of this kind of AI on my future career, I would say I can use it to develop my employability skills, for example. I can use it to review my resume, to review my interview reactions. I think it's a good tool for me to get practice.

Student 3: I feel like there's a lot more that we can do with applications like ChatGPT. I think there should be more of an emphasis on saying to students, oh, if you need help with an aspect and I am not available, you can go to AI. I'm not too sure where it's headed, but I'm excited to see where it goes.

Student perspectives: The impacts of genAI

UOW Student perspectives

Speaker 1:
AI's here now. So the more we use it, the more it'll learn about us, but the more we'll learn about it.

Speaker 2:
It should only be used as a helper tool to guide you or maybe narrow down your research field, right? Not to use it as a result or an end goal.

Speaker 3:
With my comms and media degree, I've learned a lot of classes actually want us to use it for some assignments, like to help us generate an understanding of a certain topic.

Speaker 4:
Not all the information that is provided is accurate. However, it can be used as a means of providing further clarity.

Speaker 5:
If you were more friendly towards AI, maybe incorporate it as part of the curriculum, you know, be more sensitive about it and eventually just come to terms with the fact that it's more of a tool.

Speaker 6:
I don't think it has that personal kind of connection that tutors and lecturers have.

Speaker 7:
Even if I ask a very stupid question, <laugh> it gives the best answer.

Speaker 8:
We're gonna use it for, like, main points and stuff, and then we'll look into all those main points and do further research about them instead of just relying on AI.

Speaker 9:
I think the line hasn't been drawn yet between what's real and what's sort of fantasy in this land of AI.

Speaker 10:
It's quite innovative, which I like. It will definitely play a much larger part in university life.

Speaker 11:
It's good to come up with ideas and to help as a starting point to build upon and then do your own research. So I've seen it as a pretty useful tool for me in my studies.

Speaker 1:
And then I think we can maybe use it in universities to make the experience of certain degrees, or certain things to learn, a bit more digestible for people.

Speaker 3:
ChatGPT will definitely shake things up.

A considered exploration of Artificial Intelligence in education

Responsible Innovation and Information Systems | UOWD

Dr Zeenath Reza Khan

Hi, I'm Zeenath Reza Khan, an advocate for ethics in education, cyberspace and the greater community, and my passion lies in being a facilitator of learning. I wanted to begin by talking about some of the sensational headlines we have been reading about ChatGPT, unfortunately. From banning such services to going back to pen and paper, some of what we have been seeing making the news are concerning to say the least. As educators, our topmost priority is imparting knowledge, helping our students become independent, lifelong learners, preparing them with future ready skills, and of course, all the while ensuring that they can distinguish and make responsible decisions for the greater good of our society. We are visionary leaders of our classrooms innovating, using cutting edge technology and transforming our students' learning experiences to be so much more than books, texts, and tests.


...But progress is funny. Some we embrace with open arms and some with disdain. So I'm now pondering why we are seeing so much threat when there's space for so much opportunity. Why are we jumping to the do or die mode? Because there is something new that's growing in popularity. But don't get me wrong, this conversation cannot be complete with eyes wide shut. We need to walk into it with all our senses intact to judge and determine the pros and cons, the premise where this is an opportunity and where it isn't. Here's why. While essay mills and answer-providing services by very nature of what they do and the service they provide are unethical violations of academic integrity values and go against the very foundation of academia, are content generators really in that same realm of concern? When we first realised how easy it was for students to copy and paste from the vast gulf of text readily available on the internet or the grammar checks and the paraphrasing tools out there, I don't know if we reacted this way too but then again, we weren't this connected for sure.


Information was not spreading this fast. So perhaps the ease of access to information bombarding our smartphones and the quick social media posts are all adding to the panic. Like most of my esteemed colleagues and friends from the realm of academic integrity practice and research, I think I'm taking this slow, with baby steps, watching how the revolution progresses with a pinch of salt so that I don't make hasty decisions that might very well be a disservice to my students. Last year, when the hype hadn't even begun, I had asked one of my classes to try using ChatGPT to write a system requirements report. Armed with highlighters, I then asked them to mark the report based on the grading rubric. I still remember one of my students looking up in utter annoyance, exclaiming, "this report is bleeding!" for all the highlighting she had done throughout the pages, for all the wrong, inaccurate or incomplete details the tool had generated.


Talking to students on campus and through the ENAI WG Centre for Academic Integrity in the UAE, we've quickly realised our students aren't that dependent on the tool. They may seem like they constantly have the tool open on their devices, but most students, whether in schools or university, either haven't heard of the tool yet or realise it is merely cognitive offloading, and that what it generates can mostly be used just to help start off an assessment.


Plus, AI hallucinates. This means that when the tool is unable to find what it needs in its training data to answer the question it has been asked, it simply makes up data to fill the gap. Let's not forget training data and algorithm bias either, which is all too often present in AI systems. Misinformation, fake news and over-reliance are all challenges. However, more students getting to use the tool under guidance means more students realising the weaknesses and the challenges. With younger students, I agree the challenge is greater because of the lack of maturity in understanding and recognising the inaccuracies. But including students in discussions around such tools can be surprising and refreshing for us, and empowering for them in the process of becoming independent learners.


I welcome all colleagues and peers, particularly those who haven't yet tried out the tools, to do so, so we can all understand them better. After all, we are our subjects' content specialists, and we know how best we can use the tools and where we shouldn't.


Using the tool can also help dispel myths and fears and give each of us the power to decide how best to approach this dilemma. Stemming from efforts to build a culture of integrity and an understanding of AI ethics, with clear guidelines on how and where students can use the tool and how they can acknowledge and attribute its use, I do believe we can harness the opportunities on the horizon and help our students become ready for a workplace that is already integrating such AI content generators.

Student perspectives on gen AI - Michael

UOW Student perspectives

So I think gen AI will greatly impact the future of higher education. I think it will be transformative just as the pandemic somewhat was as well. This use of AI technology, following from the use of a lot of digital technologies for that hybrid learning, will very much reshape how we do education as a whole, but in particular higher education. Because a lot of tasks that would normally take many, many hours, generative AI might have the power to do it for you in a matter of seconds.


There's a lot of work to be done in terms of monitoring where it's at, where the technology is as it's still rapidly evolving and trying to really sort through all the opportunities we can gain through it, whilst mitigating a lot of those risks to not just academic conduct, quality and security and integrity, but also ensuring that we can benefit from that technology. We really shouldn't fear generative AI. We should really be looking to embrace it and being prepared for that kind of change and looking to how we can really make use of it to better higher education. So I think it will have an enormous impact. It's just something that we need to really all be part of thinking through because it'll impact not just the quality of education, but even the very systems behind it throughout this institution and so many others and how they deliver the kind of quality we expect from our education, our research as well and research training, let alone our teaching and learning.


My research focuses on cancer treatment with radiotherapy and nanomedicine. One thing I can see coming out of the healthcare area where I'm at, that multidisciplinary interface between a lot of disciplines, is the move toward personalised medicine. And I can see AI in this kind of field doing everything from providing enhanced and automated treatment planning for patients that improves the accuracy of our treatments, to simply enhancing our productivity and workflow on the research side, and in how we bring forth a lot of those new and novel innovations that will vastly improve patient outcomes. So for my research and the very health-related field of medical physics that I'm moving into, I think there's great potential for a lot of that personalised medicine to piggyback off what generative AI can bring and what those technologies can provide, in terms of not just improving our workflow but improving the overall effectiveness or efficacy of how we treat patients. And the same goes for higher education as well.


My background as a PhD candidate is academic, so any work that I do at the university will be greatly improved by AI technologies and generative AI, in the quality of teaching and learning and in the ability to improve the overall experience for students coming through the University of Wollongong. Personally, I'm quite excited about the possibility of AI really making that difference in our lives. I can see things like that occupying quite a bit of my time, and just the ability to automate them and have that extra support, right through to when I'm teaching. And I'm very much up for the challenge of trying to extract the really valuable stuff out of it while also ensuring that we maintain academic security and protect our educational quality, because all of that underpins how well we guarantee a quality student experience here at UOW. As a student representative, I have a duty to represent all the students onshore in Australia and across our UOW family.


And when I talk to students across all the different schools and faculties, from all different backgrounds, it depends on what the student needs. Some students will have used it to assist them with their essays and are very much on top of the tech and use it all the time. Others haven't had a need to use it at all. So while I personally haven't had a need to, I've heard a lot of those stories about people who have, and I think a lot of it is about raising awareness of what it is, what it does, where it might be going, and how to prepare to make the best use of it while also being aware of the caveats of its use.


There are a lot of those little areas that would just improve workflow, improve our productivity and free up time for us to focus on other things; little things that could be run by some form of gen AI, and that could really accelerate the process of what we do. I think those areas are so important to look at. And I think probably the best way, honestly, would be to consult widely and talk with all students, staff and community members across the UOW family, look at how and where we're either using AI now or all the different things we would love to delegate to an AI, and really come together comprehensively and figure out where to start.


And so how can UOW really support the ethical and responsible use of gen AI? I think the first and most important step is transparency and accountability, and that is on everyone's part: the students, the staff, the university. Everyone has an obligation to be open and honest with each other about its use. Because if we're going to go forth not fearing it, then we also have to not fear that using it will have some sort of consequences we don't intend. If I want to use this great new technology, I shouldn't, as a student, have to feel that I can't say that I'm using it because I might get done for cheating. Same with a staff member who's using it just to improve their workflow and take a bit of pressure off whatever their workload is at that time. So that transparency and accountability, that openness and awareness-raising across the institution in our open discussions with each other, that's the first and most important start.


And if we play our cards right and we work together and we really do the right thing to get on top of it now, I think that, at a minimum, the quality of our education, of our resources, and of the experience of students and staff, academic or social, across this university will be greatly improved, regardless of how that ends up looking. I can see so much potential in it. We shouldn't fear AI. We shouldn't fear generative AI. We should be ready to meet the challenge together of embracing how we're going to make the best use of it in our lives, throughout our lives, and particularly in our places of study and work here at this university.

Generative AI and Academic Integrity

Associate Dean (Education) | BAL

Professor Ann Rogerson

Hi, I'm Professor Ann Rogerson from the University of Wollongong, and I'm here today to talk to you about artificial intelligence, AI, as well as academic integrity, which is also AI. These can be very confusing concepts, but there is a relationship between them and also some strengths and pitfalls. Generative AI is what it says on the box. It's generative. It's a predictive tool drawing on existing information that has been drawn from the internet. So it's not creating new knowledge, it's generating answers to questions that people ask. Academic integrity, on the other hand, is the acknowledgement of words, sources, creative materials, and it's ensuring that the person who generated the idea is actually acknowledged. So it begs a question, if I use a generative AI tool, should I acknowledge it? The answer is yes, you should. And also quantify how much of the generative tool's information you have actually used because it's not yours.


The better way to go about it is to write and create your own work. Artificial intelligence can add some benefits for your work in terms of checking things. However, it's not an original piece of work, it's not an original idea as it's generative.


What are some of the risks that come with using generative AI tools? As generative AI actually uses information that already exists, it's only responding to the question that you ask to the extent of the knowledge base that it actually has. This means if it doesn't know the answer, it makes it up. It can generate references that don't exist. It can generate responses that are not true. The problem with this is that if we don't use a critical lens and actually examine what's real and what's not, we can distort knowledge, and the distorted knowledge is fed back into the artificial intelligence tools.


So one of the critical things to understand with generative AI is that it needs to be critiqued. It needs to be fact-checked, and this is where academic integrity comes in. We're after integrity of thought and word and action. Critical thinking skills are not going to go away with generative AI. Someone needs to be a check and balance in the system to ensure what is generated is actually true. We know that generative AI at this point in time is good on some aspects, but is still learning in others. The important thing is that it learns from what is right rather than what is wrong.


Generative AI actually presents us with some challenges and also opportunities. It's important that we embrace it in both our education and workplaces to benefit society, but as part of that we also need to be careful. Where data is stored is a big question. Who is going to access the information you upload into a generative AI tool to assess? So, for example, a number of people are building AI tools based off the OpenAI software and also its creative counterparts. But as they're building these tools, they're also capturing data. So this is where we need to be careful from an academic and personal integrity point of view, and also from an ethical and privacy standpoint. If you are not clear where the information is going, you should be guarded in what you actually upload for assessment, because once it's uploaded to the system, it's difficult to retrieve it and to know how that information will be used in the future.


It is really important that you understand, from a disciplinary standpoint, how your particular industry association or discipline is using artificial intelligence in education, research and the workplace. This will give you a guide to the sorts of skills that will be useful into the future and also future-proof you from a career standpoint. By understanding how artificial intelligence works and how generative AI tools impact workplace actions, you'll be better prepared for the future.


Generative AI presents some exciting opportunities moving forward. As long as we remain aware of the challenges, the difficulties, and the background to the generative AI tools, we can make better integrity and ethical decisions on its use and how we apply it in everyday life.

Exploring GenAI with students

Digital Communication and Media | ASSH

Dr Christopher Moore

Hello. My name is Dr Christopher Moore, and I teach in the Bachelor of Communication and Media, both in the core of the degree and in the Digital and Social Media major. Just recently, I've been teaching two third-year subjects, one called Future Cultures and one called Game Experience Design.


We first began experimenting with image generation tools in early 2022 to prototype image assets for tabletop game products. The key to experience design is feedback cycles, and so we look for ways to reduce barriers for students in generating prototypes to get to the feedback stage and have a more rapid turnaround in getting that feedback. We started out with a neural network called VQGAN+Clip created by Katherine Crowson in 2021, but this year we were able to integrate the MidJourney image generation tool directly into our subject Discord channel, which has been a really effective way to improve the workflow and experiment with creative production with AI tools.


So now, students merely have to describe their vision for assets in their board game prototype and we can generate them instantly. That lets us get to the more important work of analysing feedback and developing innovation through iteration.


The Creative Industries is a dynamic discipline that will see a great deal of change over the next few years as more AI tools become standard in all forms of media production. Some of it will be brilliant, such as tools for assisting with some of the more onerous tasks like video editing. Imagine if you could take this clip and edit out all of the 'ums' and 'ahhs' and awkward pauses, and so on to save you hours of time. Some of the new AI tools and platforms will rewrite the media industries – particularly in games, film and television, podcasting, radio, print and social media. I see a potential time in the near future where you can create a game or an entire TV show based on a series of prompts.


Imagine you're a content creator and you have your AI voice performer, a tool that speaks in your voice, and you have your hyper-realistic avatar AI tool that looks like you. You can ask ChatGPT to create a script making predictions about the outcome of a sporting event on the weekend. You can tell it to prepare your Instagram, your TikTok, your YouTube, your podcast episode, and whatever social media app comes next. You can tell it when to schedule the publication of that content. And once you're done fact-checking and reviewing the script, you hit 'go' and you're done.


Some of it, however, is going to be more difficult, and we're already faced with a global problem of disinformation, so AI tools will only magnify the challenges that bots and trolls already present us with. This year we've begun experimenting with ChatGPT 3.5 and the new version 4, using it to instantly turn a student's description of a game into an immediate 'how-to-play' ruleset. Now, this normally takes weeks. Anyone who has created a procedure for people, or a set of instructions to follow, will know just how difficult this can be, and ChatGPT is really efficient at organising thoughts and descriptions of activities into a systematic guide very quickly. This lets us move through prototyping very quickly and focus on what matters. See, we don't assess the game, we don't assess the product. Rather, we focus our attention on the student's reflective account of their learning, integrating subject knowledge and responding to feedback from playtesters. Students were really reluctant at first. They thought this was too easy, they thought it was cheating. But we are just making prototypes, we're not making full products. We would take the products and then we would have to work with human artists and graphic designers to bring that product to the marketplace.


AI tools just mean we can get through that initial, sometimes frustrating, stage of ideation faster and more effectively.


I'm a big science fiction nerd. I'm an old science fiction nerd. So I remember an old Doctor Who episode about a machine in World War II designed to crack the Enigma code. It was called a 'thinking machine'. One of the character's lines has always stuck with me, and that was: "Yes, but whose thoughts will they be thinking?" This is why it is not enough to teach how to integrate AI into the workflow; we also have to think about the implications that accompany the use of these tools. For example, in my Future Cultures subject, we look at the history of the representation of AI in popular media to review the lessons we've already learned from imagining the consequences. Part of AI literacy is using AI tools in ethically responsible ways, and in that subject, students are encouraged to use ChatGPT to create tweets, blog posts, podcasts, online video and other content.


But built into that practice has to be ethical prompt generation, fact checking, drafting and critical review. The reality is the human is responsible for the oversight and we need to teach how that is done effectively. ChatGPT is an interesting tool, but it is trained on the Internet with all of its biases, problems, associations and so on. The AI is also heavily limited and influenced by its creators; OpenAI and Microsoft. So, when we use it, we need to consider whose thoughts is this thinking machine actually thinking and how is it thinking them?


What I like most about our current approach is the ethos of experimentation and investigation from the top down and the bottom up. I think we all realise that prohibition simply doesn't work and causes more problems, but we are also being responsible in challenging assumptions and falling back on core academic skills and principles in navigating the current and future changes.


The best thing we can do is experiment and share the results, both the failures and the successes. I'm sure there are just as many cases where we try using this tool in a particular way and it doesn't work very well, and that's fantastic. We need to hear about the failures just as much as the successes in order to navigate the right pathways for producing the best educational practice and student experience.

Student perspectives on gen AI - Keval

UOW Student perspectives

My name is Keval Patel. I'm studying Mechatronics Engineering and doing my honours year. I've been here for the past four years, and I'm currently also the student director for UOW Pulse, Deputy Chair of the Student Advisory Council, and Co-Chair of the Student Life Steering Committee. I've had good exposure to quite a few AI-based models, generative AI-based models, and I'm actually more confident with the AI being there; I believe it will go beyond any of our imaginations. I've used most of those generative AI tools for personal knowledge, just to explore the areas and places where my knowledge lacks. So, as I said, photography is not predominantly my forte, but editing those things was made very easy for me by using the generative AI in Photoshop, for instance.


And when it comes to the curriculum and things like that, if I do not understand a particular topic or subject, generative AI is always there for you, you know; it's always got your back, and you can throw any question at it and it will get back to you with a precise or, let's say, more understandable answer. More so because I'm an international student and English is not my first language, so sometimes articulating some of my thought processes, or understanding some of the questions that the lecturer might have posted, is a bit tricky. Using generative AI tools has made it really easy for me to understand and comprehend those questions and also to articulate my answers to them.


One instance that I can think of where a lecturer would have used generative AI would be my thesis supervisor. He encouraged me to use the generative AI model to develop a framework for my thesis. And he said that we could use gen AI openly to make a structure, a skeleton, and then put your thought process and words around it to build a concrete, what do you call it? The framework. The generative AI is going forward at a significantly fast rate. And if the university sector does not keep up with generative AI, or if it tries to battle against it, it's pretty evident it's going to lose the battle because the AI is obviously going to be there out in the industry. And if you are not teaching your students how to use that or incorporate it in your daily life, when the students go out in the industry, they'll feel obsolete. They'll feel helpless or, you know, feel that like they're in the deep end of the pool.


UOW can definitely support students and staff members in using generative AI by educating them about what this technology is and what it is capable of, and telling them that it does not necessarily give you correct answers all the time. So, educating lecturers and students about these things, and also educating lecturers that it's not always your enemy; it can sometimes also be a friend when it comes to marking or, you know, setting an exam paper or whatever that is. And students can also definitely benefit from understanding how to pitch a question to generative AI in a way that will give you a more precise and true answer. So the best-case scenario, according to me, looking into the future, is where AI is incorporated into the university in a way that strikes a fine balance between the human touch and the artificial intelligence.


 

Artificial Intelligence & Assessment

Digital Technologies | Education

Associate Professor Sarah K. Howard

So, hello, I'm Sarah Howard and I'm here to talk to you a little bit about AI and education. There's been a lot of excitement about this right now, particularly around one tool, ChatGPT, which has got everyone thinking about what AI means, what artificial intelligence means, when we talk about teaching, learning, and specifically assessment. Importantly, what we should always remember is that when it comes to technologies and education, we've been here before. We've had a number of technologies come into education, from film to radio to television. AI is another piece; it's actually been here for a while. ChatGPT has really made us stand up and look. But as we have with other technologies, we will adapt it, we will learn how to work with it, we'll integrate it into our practices, and we'll be ready for the next thing that's going to come. Importantly through all of this, particularly around assessment, which in the higher ed space has got everyone really worried, good assessment is still good assessment.


So we're gonna talk a little bit about how we think about assessment and how we design for the new AI tools that are available to us, specifically ChatGPT. So to be able to design for thinking about AI, we need to develop our AI competencies. What do we know about artificial intelligence? What can artificial intelligence do? And what can humans do? So when we think about this, the first thing is to know a little bit about tools like ChatGPT.


ChatGPT, and again this has been everywhere, I'm not telling you anything that you wouldn't have read, is a type of AI called generative AI. So it's highly sophisticated. It can capture text, images, music, and it can reconfigure them into new objects, into new music, but not necessarily anything new. These are summaries, reconstitutions; they are what we already know.


This is a kind of narrow AI, meaning it does one specific thing and it's driven by a question that a person poses to it. So it's driven by what you ask it. And this is really important when we think about how we use it and what it's capable of. So in this, when we think about ChatGPT it is fallible, okay? It's guessing when you ask it a question, when you ask it to say, you know, what is the reason for learning the states and capitals, um, of a particular country? It's going to give you its best possible answer, but it's guessing because it doesn't know the answer. So it's always guessing. I had a colleague ask ChatGPT, what is three plus three? And it said six, which is the correct answer, obviously. And he said, no, it's four. And ChatGPT corrected itself and said, oh, I'm sorry I made a mistake. I will work that into my database.


So what it can't do is judge, it can't judge the quality of an answer. It can give you mathematically the best possible answer. Humans on the other hand, can critically engage with information, knowledge, learning experiences. They can judge. And this is a really important distinction when we think about what do we need to know as educators to be able to design assessments, learning exercises and activities that really take advantage of artificial intelligence, but also make sure that our students are doing the work that we want them to do and are able to demonstrate the knowledge that we want them to have. So importantly, when we also think about using AI, you have to ask, as I'd said, it's a narrow AI, you have to ask it questions. So to get a good answer, and again, there's lots of examples of this out in popular media and in the news, what have you, you have to actually ask it really good questions.


You have to ask it probing questions. You have to ask over and over again. People spend hours querying this particular technology to get the answers that they want. To do that, you have to know what you're asking. So do students. For students to use that particular tool to address a good assessment question, one where you've asked them to critically engage with a topic or to draw on multiple sources, they have to really query that tool, which means they've got to really know what they're asking, which means that in the process they're exhibiting a depth of knowledge of the content that you're asking them to engage with. So that doesn't sound too bad. So, in the end, when we think about the basics to consider when we design our assessments and when we think about the use of AI in teaching and learning, we want to build our own understanding as experts in our field, but also think about how we expect to draw the expertise that we're giving to our students back out, so we can see where they are.


We need to build our understanding of the limitations of AI, for example ChatGPT, and I've just touched on a few key points, but we need to build our own understanding. This means playing with it, experimenting with it, talking to our students about it, about how we use it and how they might have used it to help them in their work. What's important is that with a well-designed assessment task and well-designed work, students can do much, much more sophisticated work than what we see coming from artificial intelligence such as ChatGPT. But we have to design those tasks, we have to design those assessments, to bring that out of students. If we ask them something low-level, they're going to give us a low-level answer, something the AI can do. But if we ask them sophisticated, challenging questions, which is what we should be doing, that's quality assessment, then they will have to give us that. So that's what we need to think about when we think about our capacity to use AI, how we want our students to use it, and how we want to challenge our students even within this changing technology landscape. Thank you.

Artificial Intelligence in teaching practice

Information Technology | UOW College

Keshav Jha

The first time I came across generative AI was when I went to my friend's son's 16th birthday party and they were all talking about this ChatGPT, and I had no clue. So I tried to learn more about it when I came back home, and I started to think about how, as an IT teacher, or as a teacher in general, I can use and capitalise on the unlimited potential of ChatGPT. As a teacher, I find there is a polarised kind of understanding among teachers. Some think that this is going to be the end of their career or that there will be fewer opportunities for teaching, while other teachers are quite enthusiastic about it. In the discussions I have, once I shared what I could do with ChatGPT, they were all really, really excited to use it. So we really need to shed light on what aspects of ChatGPT are going to be beneficial for them, what the ethics involved are, what level of accuracy or bias is found in AI-generated content, and how to deal with that.


Look, I'm an old guy. Students have got so much energy that they actually end up knowing more about ChatGPT than us. So as a teacher I have found that they have been raving about ChatGPT, from the students' perspective, but I have also let them know that there are AI detections, so be careful. The first thing is that you need to be responsible. You cannot just let ChatGPT do your work. Even if you have got the world's best machine, if you are not knowledgeable about it, then you won't be able to drive it and get the result; at the very least, the depth and quality of work that you can generate through ChatGPT needs you to be an expert to a certain level. My students have taken that as a positive note, and they have started to use ChatGPT for brainstorming; they will generate an essay, for example, on a particular topic, and then they will see what is real and what is not real by doing fact-checking. So those are the things that we have to keep in mind, and students need to know that sometimes the assessment they generate straight away may be biased or might contain misinformation, so they need to do their due diligence to check that it is accurate and free of any biases.


So, if I need to share something which can help students become more expert and better at using ChatGPT, and more responsible, then something called prompt engineering is what I need to teach them. Basically, asking intelligent questions. Knowing how to drill down and dive deeper into a certain area is still not easy with generative AI like ChatGPT. You can ask a straight question and it can give you a straight answer, but if you want to drive it to a point where it starts to become like a research analyst, then you need the knowledge of prompt engineering, so that ChatGPT can be given the groundwork of acting as a teacher, or acting as a mining engineer, or acting as a molecular biologist. Then you can ask the question, give it the context, and it can build upon it as a conversation, and these are things that are still hard to teach. Maybe it will become easier in future, but this is why it's a good start to teach them prompt engineering. These tools all work on algorithms, and algorithms are very logical and sequential. So if you want to go from here to Canberra, for example, then Google Maps needs to know which road to take first. This is exactly what prompt engineering is: which questions you ask first, which question you should ask next, and so on. You can create a fork and a sub-fork and keep asking questions, going into the depth, and get ChatGPT to give you what you are actually after, either quantity or quality.
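
To illustrate the kind of role-and-context prompting described above, here is a minimal sketch in Python. It assumes the OpenAI Python client with an API key already configured, and the model name is only a placeholder; the point is simply that the first message sets a role, and each follow-up question drills deeper while the earlier turns are kept as context.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    MODEL = "gpt-4o-mini"  # placeholder; substitute whichever model is available to you

    # Step 1: give the model a role and some context before asking anything.
    messages = [
        {"role": "system", "content": "Act as a molecular biologist explaining concepts to a first-year student."},
        {"role": "user", "content": "In plain terms, what does a guide RNA do in CRISPR gene editing?"},
    ]
    first = client.chat.completions.create(model=MODEL, messages=messages)
    print(first.choices[0].message.content)

    # Step 2: drill down on one point, keeping the earlier turns so the answer builds on them.
    messages.append({"role": "assistant", "content": first.choices[0].message.content})
    messages.append({"role": "user", "content": "Go one level deeper: how would that guide RNA be designed, step by step?"})
    follow_up = client.chat.completions.create(model=MODEL, messages=messages)
    print(follow_up.choices[0].message.content)

The same pattern of a broad, role-setting question followed by progressively narrower ones is the fork-and-sub-fork questioning described above.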


I would say it is an exciting time and we don't need to be afraid or fearful of anything. A great example would be IBM's Deep Blue, which won a chess match against one of the greatest players. But did it change the world? We human beings are still the leaders. These are technologies, not sentient beings. So at the end of the day, whatever we do with this technology is up to us. We can embrace it or be fearful of it. But if you embrace it, then you will be an early adopter and you could be more productive. So as a teacher, as a student, as an employee, you could end up saving a lot of time on those repetitive tasks.

\n
Is Artificial Intelligence transforming education?

Innovation and Digital Education | UOWD

Chris Tuffnell

So much has already been said about generative AI tools and how they're currently impacting higher education. But are we really experiencing transformative change? To explore this further, let's take a look at what's happening through the lens of RAT; no, not the rodent variety, but the technology adoption model. RAT stands for Replace, Amplify and Transform, and it will help us think about the degree of digital transformation that we're experiencing right now with generative AI. The replacement element of the RAT model, as you would expect, is when technology replaces something we used to do, either partially or wholly. So, for example, think about a physical book being replaced by a digital PDF document. Now, this is an example of digitization rather than digital transformation, because the act of replacement in itself doesn't lead to transformative change; digital transformation needs people to change their behaviors and approaches.


So to achieve the amplification stage of the RAT model, in this example, people would need to start identifying and utilising the advantages and affordances that the digitised asset or book offers, such as more effective access and greater distribution. So something has been replaced by technology, and in doing so, the technology has amplified that thing by making it more efficient. In this case, access would no longer be restricted to a physical copy in a single location like a library. Multiple people could access the digital copy at different times and locations, and, therefore, this new more democratised and flexible access can then lead to transformation of behavior.


In this example, maybe it's in our teaching approach, maybe a pedagogical shift to a model such as flipped learning, for example, could happen as learners would now have access to digital learning content outside of the formal structured teaching time. So our approach and behavior has been transformed.


Okay. So how does this relate to generative AI? Well, in terms of the RAT model, I believe we're currently at the replacement stage. When we look at the rapid development of these tools, it's caused a lot of reaction because it's disruptive to our existing approaches, and I'm sure we've all seen or read about how generative AI can impact our assessment approaches for example, such as the student essay. AI can seemingly replace the need for learners to research, critically evaluate, synthesize information, and construct a document to show understanding. But it's worth remembering that these AI tools haven't really replaced this cognitive process because the tools can't think critically. Yes, they can access vast amounts of information and do some information synthesis, but if our assessments are easily replaced by these tools, the challenge is not the tool but in our assessment design.


Critical thinking should be front and center of our summative assessments, which can't be replaced by AI. So this is an opportunity to improve the quality, relevance, and alignment to outcomes of our assessments. I know there's been a lot said about the impact on assessment from these tools, but in reality we've been struggling with academic integrity issues for decades. But again, if higher order critical thinking is at the center of our assessment design and we think about developing more creative, project-focused, authentic, contextualised assessments, then we can incorporate these AI tools to support student learning and move us to the amplification stage of the RAT model. Just like we did with the calculator, historically. Taking time to experiment and understand these tools and to incorporate them into our educator toolbox will ultimately lead to transformation of our learning and teaching approaches and improve our learner experience overall.

Conversations about Artificial Intelligence

This section captures conversations taking place at UOW on the topic of Artificial Intelligence (AI), particularly generative AI (GenAI), and Education. It features perspectives from UOW thought leaders, researchers, teachers and student communities. To be part of the conversation, email ltc-central@uow.edu.au.

Support & services

Overview of the ways to access self-paced learning & teaching support and specialist services.

Artificial Intelligence in Education

This page has been created to house the latest university-supported resources and information related to the impact of Artificial Intelligence on learning and teaching. It will be updated regularly to reflect relevant and timely information from across UOW and the sector to capture the evolving conversation and practices on the topic.

"We seek to balance gen AI’s transformative potential with high ethical standards and academic rigour. Our long-term aim, therefore, is to harness the opportunities afforded by this technology to transform how we teach, learn, assess, research and conduct our operations more generally."
- UOW gen AI action plan
"What I like most about our current approach is the ethos of experimentation and investigation from the top down and the bottom up. I think we all realise that prohibition simply doesn't work and causes more problems, but we are also being responsible in challenging assumptions and falling back on core academic skills and principles in navigating the current and future changes."
- Dr Christopher Moore | ASSH
