The role of quality assessment design in strengthening academic integrity
Recent conversations in the media about ChatGPT and various other artificial intelligence tools have prompted critical reflections on academic integrity and how these tools might impact the integrity and security of our assessment tasks. Artificial intelligence tools are not new, and they have significant potential to revolutionise the way we study and work. Academic integrity in higher education is a complex area with broad implications when breached (Tertiary Education Quality and Standards Agency, 2017). There are many reasons why students choose to engage in poor academic practice and/or activities that constitute academic misconduct (Blum, 2016; Park, 2003), but as Phil Dawson notes, “the vast majority of students do not e-cheat or even cheat” (2020, p.2).
If we consider the place of assessment in higher education, it serves multiple purposes (Carless, 2015, p.11):
- Supporting student learning;
- Allowing us to judge the quality of student achievement; and
- Satisfying the needs or demands of accountability.
Although we cannot “design out” cheating within our subjects entirely (Bretag & Harper, 2017), designing quality assessment goes hand-in-hand with developing our students’ digital literacy capabilities and facilitating their academic integrity education.
This article focuses on the role of quality assessment design in strengthening academic integrity. The design considerations below are not intended to be an exhaustive list, nor are they offered as guarantees of academic integrity; rather, they are a starting point for educators to reflect on their assessment design and to consider future changes.
"Assessment design trumps assessment security"
Dawson (2020, pp.133-135) points out four common assessment design mistakes that may undermine assessment security:
- Reusing the same assessment tasks year after year
- Unsupervised online tests
- Take-home ‘one right answer’ or lower-level tasks
- Poor examination practices
Please refer to Dawson (2020) for more information on these design decisions and why they are potentially problematic.
Some fundamental considerations for designing quality learning-oriented assessment
One of the many reasons students may turn to cheating is time pressure (Blum, 2016; Park, 2003), of which academic study workload is a critical component. To assist students with managing their study load, consider the timing of assessment tasks throughout the teaching session; where possible, consult with the other educators in your course team who are teaching subjects during the same session, to avoid students having major assessments due at the same time.
Another important consideration is assessment task weighting: strive for balance by neither ‘over-assessing’ with too many low-stakes tasks nor relying on large high-stakes tasks where these can be avoided.
In reflecting on your subject’s assessment, consider whether the assessment task types are appropriate to assess the subject learning outcome/s (SLOs) being addressed, as some SLOs will lend themselves to particular task types (e.g., written, practical, oral). Where possible, have a mix of individual and collaborative tasks, as this helps to build students’ professional skills in both independent study and working with others in teams. Also try to avoid monotony in assessment type (e.g., all quiz-based tasks): a variety of assessment types promotes student interest and gives students the chance to demonstrate their learning through different modalities that may appeal to their strengths and preferences. Although no assessment type is immune to academic integrity breaches such as contract cheating (Tertiary Education Quality and Standards Agency, 2017), reflections on practicums, vivas, individualised tasks that relate to students’ personal experiences, and supervised in-class tasks are four examples of assessment types less likely to be involved in student misconduct (Bretag et al., 2019).
Involving student choice where appropriate can be useful in promoting interest and engagement, as Blum (2016) identifies a lack of interest in learning content as a common motivation for poor academic practice. However, it is important to acknowledge the challenges here in ensuring fairness and equality and in managing the workload of both students (undertaking the task) and teachers (marking the task).
Multimodal assessment tasks that involve oral or video artefacts have many benefits, including preparing job-ready students with experience in communicating via a variety of media and improving students’ digital literacy capabilities; they are also particularly useful in strengthening academic integrity.
Quality assessment design also involves the development of students’ evaluative judgement, defined simply as “the capability to make decisions about the quality of work of self and others” (Tai et al., 2018, p.471). Two components are integral to developing evaluative judgement (Tai et al., 2016; 2018): first, students need to understand what constitutes quality; and second, students need opportunities to apply their understanding of quality through appraisals of their own work and the work of others.
The benefits of developing students’ capability for evaluative judgement include a reduction in student anxiety, the promotion of fairness, the engendering of trust, and the development of skills that promote student independence and contribute to job-ready graduates. Supporting students in becoming effective learners while also meeting the requirements of their future in the workforce is a key aspect of the concept of sustainable assessment (Boud, 2000).
There is evidence in the literature that this assessment design consideration can help mitigate academic integrity breaches; see, for example, findings from the large multi-institutional Australian study led by Tracey Bretag and Rowena Harper, which included clarifying assessment requirements through “task instructions, scaffolding, interactive discussion and rubrics” (Bretag et al., 2019, p.47) among its recommended cheating mitigation strategies.
Tai and colleagues (Tai et al., 2018, p.474) suggest five practices for developing evaluative judgement:
- Self-assessment
- Peer feedback/review
- Feedback
- Rubrics
- Exemplars
These are briefly unpacked as follows:
Self-assessment
Supporting students in effective self-assessment involves engaging them in identifying or considering criteria (students may need support in identifying and choosing criteria for judgement), using those criteria to review their own work qualitatively (rather than quantitatively, or by assigning a single grade), and then using that feedback to improve their work.
Peer feedback/review
Often implemented as part of group work assessment tasks, peer review asks students to evaluate and make judgements about the work of their peers and to construct a written feedback commentary; this actively engages them in “multiple acts of evaluative judgement, both about the work of peers, and, through a reflective process, about their own work” (Nicol et al., 2014, p.102). An important consideration here is that students need to be scaffolded in how to provide constructive, quality peer feedback.
Rubrics and exemplars
To support students in understanding what quality means in their discipline, we need to consider both the ‘task criteria’ (what needs to be done to accomplish the task) and the ‘quality criteria’ (what constitutes doing the task well) (Torrance, 2012, p.337). Resources such as rubrics and exemplars can communicate both task and quality criteria. However, a balance must be struck between transparency and providing so much detail that students are overloaded, and Carless (2015) makes an important point that “criteria in themselves, however, may be of limited use to students unless they engage with them purposefully or are stimulated to do so by relevant classroom activities” (p.138). A similar point is made by Bell et al. (2013) in their study of student use of grade descriptors, marking guides and annotated exemplars, suggesting that formal in-class activities that provide “opportunities to ask questions about the resources might have reduced the number of students requesting further examples and other clarifications” (p.780).
Facilitating dialogic interaction and feedback
Rather than the traditional understanding of feedback as unidirectional (from educator to student), the concept of feedback as “a dialogic process in which learners make sense of information from varied sources and use it to enhance the quality of their work or learning strategies” (Carless, 2015, p.192) shifts the understanding from feedback as information to feedback as an ongoing, multidirectional process (educator-to-student, student-to-student, student-to-educator).
Findings from Bretag et al.’s 2019 survey of Australian students and staff on contract cheating and assessment design indicated that providing “constructive, meaningful and timely feedback for each student”, as part of “fostering personalised teaching and learning relationships with students” (p.47), is a key consideration in potentially avoiding cheating behaviours.
Building purposeful dialogic interaction and feedback into the design of a series of connected assessment tasks, where students are required to act on feedback and submit multiple artefacts, potentially strengthens academic integrity (Bloxham & Boyd, 2007) and promotes higher levels of student engagement in assessment.
Related information
- UOW Assessment & Feedback Principles | L&T Hub Collection
- UOW Teaching and Assessment Policy Suite | UOW Policy directory
- Constructive alignment | L&T Hub article
- Authentic assessment | L&T Hub article
References
Bell, A., Mladenovic, R., & Price, M. (2013). Students’ perceptions of the usefulness of marking guides, grade descriptors and annotated exemplars. Assessment & Evaluation in Higher Education, 38(7), 769–788.
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university. McGraw-Hill Education.
Bloxham, S., & Boyd, P. (2007). Developing effective assessment in higher education: A practical guide. McGraw-Hill International (UK).
Blum, S. D. (2016). What it means to be a student today. In T. Bretag (Ed.), Handbook of Academic Integrity (pp. 383–406). Springer.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.
Bretag, T., & Harper, R. (2017, May 12). Assessment design won’t stop cheating, but our relationships with students might. The Conversation. https://theconversation.com/assessment-design-wont-stop-cheating-but-our-relationships-with-students-might-76394
Bretag, T., Harper, R., Ellis, C., van Haeringen, K., Newton, P., Rozenberg, P., & Saddiqui, S. (2019). Contract cheating and assessment design: Exploring the connection—Final report. Australian Government Department of Education and Training. https://ltr.edu.au/resources/SP16-5383_BretagandHarper_FinalReport_2019.pdf
Carless, D. (2015). Excellence in university assessment: Learning from award-winning practice. Taylor & Francis Group.
Dawson, P. (2020). Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (1st ed.). Routledge. https://doi.org/10.4324/9780429324178
Ellis, C., van Haeringen, K., Harper, R., Bretag, T., Zucker, I., McBride, S., Rozenberg, P., Newton, P., & Saddiqui, S. (2020). Does authentic assessment assure academic integrity? Evidence from contract cheating data. Higher Education Research & Development, 39(3), 454–469. https://doi.org/10.1080/07294360.2019.1680956
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122. https://doi.org/10.1080/02602938.2013.795518
Park, C. (2003). In Other (People’s) Words: Plagiarism by university students–literature and lessons. Assessment & Evaluation in Higher Education, 28(5), 471–488. https://doi.org/10.1080/02602930301677
Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76(3), 467–481. https://doi.org/10.1007/s10734-017-0220-3
Tai, J. H.-M., Canny, B. J., Haines, T. P., & Molloy, E. K. (2016). The role of peer-assisted learning in building evaluative judgement: Opportunities in clinical medical education. Advances in Health Sciences Education, 21(3), 659–676. https://doi.org/10.1007/s10459-015-9659-0
Tertiary Education Quality and Standards Agency. (2017). Good Practice Note: Addressing contract cheating to safeguard academic integrity. https://www.teqsa.gov.au/guides-resources/resources/good-practice-notes/good-practice-note-addressing-contract-cheating-safeguard-academic-integrity
Torrance, H. (2012). Formative assessment at the crossroads: Conformative, deformative and transformative assessment. Oxford Review of Education, 38(3), 323–342.