
Assessment randomisation for technical and multiple-choice questions using R


Associate Professor Andrew Zammit-Mangion | Engineering and Information Sciences (EIS)

Using R to randomise technical assessments to promote academic integrity, encourage learning and make assessment equitable.

"... randomisation can lead to benefits other than academic integrity facilitation in the ecosystem of assessment strategies, even when the primary aim of the assessment is formative."
- Andrew Zammit-Mangion


A side-effect of COVID-19 is that many summative assessments that were usually carried out under strict invigilation are now being completed by students in the comfort of their own homes, with access to the Internet and to channels of communication with fellow students. This shift naturally leads to concerns related to academic integrity and to the viability of the prevailing assessment model. 

To partially address this issue I have adopted assessment randomisation in the majority of formative and summative assessments I conduct. With assessment randomisation, every student receives assessment tasks that differ from those given to every other student in the cohort. Moreover, randomisation can lead to benefits other than academic integrity facilitation in the ecosystem of assessment strategies, even when the primary aim of the assessment is formative. I have seen a considerable reduction in academic integrity issues in assessments that employ randomisation.

How?

Several online learning management systems, such as Moodle, provide the option to randomly generate numbers or text, and to randomly select questions from question groups. However, these systems tend to be relatively limited in the functionality they provide. For example, when scaffolding technical assessments, one may want to ask the student to prove an intermediate result, in which case the result needs to be computed, numerically or otherwise, for each randomised task. This can be difficult, or impossible, to do in an online learning management system.

For this reason, I have turned to the programming language R for generating random assessments. R natively supports many operations carried out in Mathematics (e.g., linear algebraic operations), Statistics (e.g., hypothesis testing), and Engineering (e.g., Fourier transforms). R can be used to generate random data from randomised models, generate sophisticated, presentable plots, and much more. The R package 'exams' provides the link between R and randomised assessment tasks by allowing the user to specify the random components of assessment tasks, to provide scaffolding through intermediate results and task-specific guides and pointers, and to supply the corresponding solutions to those tasks. The package allows the user to generate randomised questions in a variety of ways, for example by generating multiple Portable Document Format (PDF) files (one per student), or by generating an eXtensible Markup Language (XML) file for importing into Moodle.
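As a sketch of this workflow (the exercise file name here is hypothetical), one might compile a single randomised exercise either into per-student PDFs or into a Moodle XML question bank:

```r
## Requires the 'exams' package from CRAN: install.packages("exams")
library(exams)

## "ttest.Rmd" is a hypothetical exercise file containing randomised
## components (see the R/exams documentation for the exercise format).

## Generate 30 randomised PDF exams, one per student:
exams2pdf("ttest.Rmd", n = 30, dir = "pdf_output")

## Alternatively, generate a Moodle XML file containing 30 random
## replications of the question, for import into a Moodle quiz:
exams2moodle("ttest.Rmd", n = 30, name = "ttest_quiz", dir = "moodle_output")
```

The same exercise file drives both outputs, so the choice of delivery format can be deferred until after the questions are written.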


There are broadly two ways in which one could use the package 'exams' in R for generating random assessments:
  1. Random numeric or textual entries
  2. Random task selection
First, random numeric or textual entries are the most straightforward way in which to randomise questions. Here, the question is the same for every student in the cohort, but selected numbers or words within the question are different for every student. For example, in Statistics, if the question is on hypothesis testing, the null hypothesis, or the data on which to base the test, could be different for each student. Each student could also be asked to prove a result (e.g., prove that the 95% confidence interval for some parameter is [0.1, 0.2]) that is different from that of other students. Text could also be randomised; for example, a student may be asked to write down the definition of an x-process, where x takes values in {“Gaussian”, “Poisson”, “Markov”, “auto-regressive”, …}.
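A minimal sketch of such an exercise file follows (the exercise name, parameter values and wording are illustrative only). A hidden R chunk draws fresh data for every student, and the question, solution and grading metadata reference those draws:

````markdown
```{r data generation, echo=FALSE, results="hide"}
## Draw a fresh random sample for every student
n   <- sample(20:40, 1)
mu0 <- sample(c(90, 95, 100), 1)                  # randomised null hypothesis
x   <- round(rnorm(n, mean = mu0 + 2, sd = 10), 1)
sol <- unname(t.test(x, mu = mu0)$statistic)      # student-specific answer
```

Question
========
The observations `r paste(x, collapse = ", ")` were collected.
Test the null hypothesis that the mean equals `r mu0`, and report
the t-statistic to two decimal places.

Solution
========
The t-statistic is `r round(sol, 2)`.

Meta-information
================
extype: num
exsolution: `r round(sol, 2)`
exname: ttest
extol: 0.01
```
````

Because the solution is computed in the same file, each student's randomised question is automatically paired with its correct answer.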

Second, in random task selection, tasks that assess similar learning outcomes with similar difficulty are put into groups. Then, each student is allocated a task from within this group. This task could be a simple multiple-choice question, a project, an essay, or one that requires a high degree of specialised scaffolding. For example, in Queuing Theory, one group of questions could assess the student’s capacity to derive the expected properties of queues, such as the expected queue length. Each question in this group would be placed in a different real-world setting but assess the same learning outcome.
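In 'exams', task groups can be expressed by passing a list of exercise files: when a vector of files appears within the list, one of them is drawn at random for each generated exam. A sketch with hypothetical file names:

```r
library(exams)

## Hypothetical exercise files: each vector forms a group of tasks that
## assess the same learning outcome in different real-world settings.
queue_group <- c("queue_supermarket.Rmd", "queue_callcentre.Rmd",
                 "queue_hospital.Rmd")

## For each of the 100 generated exams, one exercise is drawn at random
## from the group, alongside a fixed second exercise:
exams2moodle(list(queue_group, "expected_length.Rmd"),
             n = 100, name = "queues_quiz")
```

This combines naturally with the first approach: each file in a group can itself contain randomised numbers and text.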

Reflection

There is an initial time investment in creating randomised assessments, which only pays off if the assessment is repeated over several sessions. One will also need to become acquainted with R and the package 'exams'. A detailed discussion of the usage of the 'exams' package is beyond the scope of this showcase; instead, I provide a series of short videos within an online short course that gives a gentle introduction to using this package. An introductory video to this course can be found below:

Positive reviews of this video series indicate that R and the 'exams' package are being used successfully worldwide in a variety of settings. More resources and exercise templates are available on the R/Exams website.

