
Rubrics in Assessment

Assessment criteria are artefacts that “outline expectations for student work, especially characteristics that denote attainment of standards or describe aspirational quality” (Bearman & Ajjawi, 2019, p. 2). Rubrics are a commonly used form of assessment criteria and provide a mechanism for feedback to students. Assessment tasks completed by students are measured (marked/graded/scored) against a set of criteria, and a rubric is a way of formalising these criteria.

According to Popham’s (1997) seminal work, rubrics have three essential features:

  • Evaluative criteria – the essential attributes of the assessment that are to be ‘measured’ 
  • Quality definitions – the definitions of the evaluative criteria at different levels of achievement/quality
  • Scoring strategy – the strategy to produce a mark/grade/score; this may be holistic or analytic.
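
For readers who find a concrete representation helpful, the following minimal sketch models Popham's three features in code. It is illustrative only: the names (Criterion, analytic_score) and the example criteria and levels are hypothetical assumptions, not part of Popham's framework or any prescribed format.

```python
# A minimal sketch, assuming a Python representation is useful; the class
# name, criteria, levels, and scores below are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Criterion:
    """One evaluative criterion together with its quality definitions."""
    name: str
    quality_definitions: dict[int, str]  # level -> definition of quality


# Two illustrative evaluative criteria for an essay task.
rubric = [
    Criterion("Organisation", {
        2: "Coherent and unified",
        1: "Coherent but uneven",
        0: "Confused and fragmented",
    }),
    Criterion("Mechanics", {
        2: "Near-perfect spelling and punctuation",
        1: "Occasional errors",
        0: "Frequent errors",
    }),
]


def analytic_score(levels_awarded: dict[str, int]) -> int:
    """Analytic scoring strategy: score each criterion separately, then combine.

    A holistic strategy would instead assign one overall level to the work.
    """
    return sum(levels_awarded.values())


print(analytic_score({"Organisation": 2, "Mechanics": 1}))  # 3 (out of 4)
```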

Why?

Assessment criteria, including rubrics, should be an integral part of developing an assessment and should be provided to students alongside the general assessment task information and instructions. Transparency in assessment criteria promotes assessment validity and enhances student learning (Jonsson, 2014). 

There are multiple benefits to using rubrics for both teaching staff and students:

Benefits for Staff

  • Clarifying the components of an assessment task as evaluative criteria helps align the task with its aims and with subject and course learning objectives
  • Supports consistent marking across teams of markers and across student cohorts
  • Serves as a time-saving feedback tool in the assessment marking workflow
  • Helps staff provide students with timely, effective feedback that promotes student learning in a sustainable way.

Benefits for Students

  • Facilitates understanding of the expectations of a given assessment task, including acceptable standards of performance
  • Provides a mechanism for improving assessment outputs and subject learning through timely, detailed feedback
  • Promotes student assessment literacy by enabling students to self-monitor and assess their progress as they prepare an assessment task, and to act on the feedback they receive afterwards.

How

The following steps are presented as a potential guide to creating a rubric for an assessment task.

Decide on the type of rubric

The first decisions you are likely to make involve whether to create an analytic or holistic rubric, and whether it should be general or task-specific. The choice depends on your purpose: for example, is the rubric to be used as formal marking criteria for an assessment task, or is it for students to use in self-reflection when completing a particular type of in-class task?

Tables 1 and 2 below (content adapted from Brookhart & Nitko, 2008; Sadler, 2009) summarise analytic versus holistic rubrics, and general versus task-specific rubrics.
Table 1: Analytic and holistic rubrics

| Type | Approach to evaluative criteria | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Analytic | Each criterion is assessed separately | Good for formative assessment; provides useful diagnostic information; provides detailed, explicit feedback to students; easier to link instruction to the different criteria in the rubric | Labour-intensive for markers, depending on the number of evaluative criteria; requires more time to achieve inter-rater reliability than holistic rubrics |
| Holistic | All criteria are evaluated simultaneously | Marking is faster than with an analytic rubric; requires less time to achieve inter-rater reliability; good for summative assessment, or where the focus is on overall understanding or proficiency in a specific piece of content | Communicates less information to students about the areas in which they need to improve |


Table 2: General and task-specific rubrics

| Type | Approach to evaluative criteria | Advantages | Disadvantages |
| --- | --- | --- | --- |
| General | Descriptions of criteria refer to characteristics that apply to a broad category of tasks (e.g., writing, problem-solving) | The same rubric can be reused across several tasks or assignments; supports learning by helping students see ‘good work’ as bigger than any one task; supports student self-evaluation; students can help construct general rubrics with educators | Lower reliability at first than with task-specific rubrics; may require practice to apply well |
| Task-specific | Descriptions of criteria refer to the specific content of a particular task (e.g., gives an answer, specifies a conclusion) | The level of detail for each criterion can make measurement easier; requires less time to achieve inter-rater reliability | Descriptions must be detailed enough to be useful without giving away the exact answers required; a new rubric must be written for each task; for open-ended tasks, good answers not listed in the rubric may be evaluated poorly |

Develop the evaluative criteria

The evaluative criteria should reflect the assessment task information in the Subject Outline and clearly encapsulate the key aspects of the task that will be measured (marked/graded/scored). Related criteria may be combined; for example, spelling and grammar could form one criterion. However, unrelated criteria, such as spelling and accuracy of calculations, are difficult to group meaningfully.

You may choose to weight your evaluative criteria to reflect the needs of the subject, the course, or the students, or to help students achieve specific learning outcomes. For example, you might weight the structure of an essay heavily and its content less for a 100-level subject, but reverse this for a 200-level subject. Evaluative criteria and their weightings indicate to students the relative importance of different aspects of the assessment and will affect how they approach the task.
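
To make the weighting arithmetic concrete, here is a small illustrative sketch. The function name, the 0–4 scale, and the weights are hypothetical assumptions; actual weightings would come from your Subject Outline.

```python
# A minimal sketch of weighted evaluative criteria; weights, scale, and
# function name are hypothetical, not a prescribed marking scheme.
def weighted_mark(levels: dict[str, int], weights: dict[str, float],
                  top_level: int = 4) -> float:
    """Combine per-criterion levels (0..top_level) into a percentage mark."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100 * sum(weights[c] * levels[c] / top_level for c in levels)


scores = {"structure": 3, "content": 2}  # levels awarded on a 0-4 scale
# 100-level subject: structure weighted more heavily than content.
print(weighted_mark(scores, {"structure": 0.6, "content": 0.4}))  # 65.0
# 200-level subject: the reverse weighting shifts the emphasis to content.
print(weighted_mark(scores, {"structure": 0.4, "content": 0.6}))  # 60.0
```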

Popham (1997) suggests three to five evaluative criteria, but the key consideration is to keep things manageable for the students and markers who will need to reflect on the achievement of each criterion. 

In analytic rubrics, the evaluative criteria are usually listed in the leftmost column.

Determine the quality levels

Once you have created the criteria, you will need to determine the levels of performance you expect students to be able to demonstrate. Most rating scales have between three and five levels. These can consist of letter grades, numbers, or percentages, and can include a short descriptor, e.g. Satisfactory (3) or Excellent (4). Quality levels that reflect the university’s grade descriptors (high distinction, distinction, credit, pass, fail) are often used.

Dawson (2017, p. 354) suggests that Biggs’ Structure of Observed Learning Outcomes (SOLO) taxonomy can provide useful wording for indicating increasingly complex levels of quality.

In analytic rubrics, the quality levels are usually listed in the top row.

Write the quality definitions

Now you will need to write short statements of your expectations underneath each quality level for each evaluative criterion. The descriptions should be specific and measurable, the language should be parallel across levels to aid student comprehension, and each description should explain the degree to which the standard is met. In other words, the descriptors build on each other as the anticipated quality level increases.

For an analytic rubric, the “organisation” criterion in a rubric for an essay could look something like this example from Roell (2019):

| Criterion | Exceptional (3) | Satisfactory (2) | Developing (1) | Unsatisfactory (0) |
| --- | --- | --- | --- | --- |
| Organisation | Organisation is coherent, unified, and effective in support of the paper’s purpose and consistently demonstrates effective and appropriate transitions between ideas and paragraphs. | Organisation is coherent and unified in support of the paper’s purpose and usually demonstrates effective and appropriate transitions between ideas and paragraphs. | Organisation is coherent in support of the essay’s purpose, but is ineffective at times and may demonstrate abrupt or weak transitions between ideas or paragraphs. | Organisation is confused and fragmented. It does not support the essay’s purpose and demonstrates a lack of structure or coherence that negatively affects readability. |

A holistic rubric would not break down the evaluative criteria with such precision. The top two tiers of a holistic rubric for an essay would look more like this example from Roell (2019):

| Level | Descriptor |
| --- | --- |
| 6 | Essay demonstrates excellent composition skills including a clear and thought-provoking thesis, appropriate and effective organization, lively and convincing supporting materials, effective diction and sentence skills, and perfect or near perfect mechanics including spelling and punctuation. The writing perfectly accomplishes the objectives of the assignment. |
| 5 | Essay contains strong composition skills including a clear and thought-provoking thesis, but development, diction, and sentence style may suffer minor flaws. The essay shows careful and acceptable use of mechanics. The writing effectively accomplishes the goals of the assignment. |

Consider providing exemplars

Exemplars are a useful complementary resource to rubrics, and are described by Sadler (1987) as “key examples chosen so as to be typical of designated levels of quality or competence. The exemplars are not standards themselves but are indicative of them … they specify standards implicitly” (p. 200).

Tierney and Simon (2004) advocate for exemplars of student work to accompany rubrics, as they reduce interpretation variability for both markers and students and help to “operationalize the attributes and performance criteria” (p. 6). 

Many studies in the literature support the benefits of providing exemplars to students before they submit assessment tasks; see, for example, Handley and Williams (2011) and Orsmond et al. (2002).




Tips

The following tips are adapted from Cornell University (2022):

  • Start small; try creating one rubric for one assessment task in a semester
  • Ask colleagues if they have developed rubrics for similar assessment tasks, or adapt rubrics that are available online
  • Give a draft of the rubric to your colleagues for feedback
  • Plan how you will explicitly support students in using the rubric, and solicit their feedback; this will help you understand whether the rubric is clear to them and will identify any weaknesses
  • You may also consider co-creation of rubrics with students; the L&T hub article Students as Partners: assessment has further information on this.


References

Bearman, M., & Ajjawi, R. (2019). Can a rubric do more than be transparent? Invitation as a new metaphor for assessment criteria. Studies in Higher Education, 46(2), 359–368. https://doi.org/10.1080/03075079.2019.1637842

Brookhart, S. M., & Nitko, A. J. (2008). Assessment and grading in classrooms. Pearson Education.

Cornell University. (2022). Rubric development guidelines. https://teaching.cornell.edu/resource/rubric-development-guidelines

Dawson, P. (2017). Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360. https://doi.org/10.1080/02602938.2015.1111294

Handley, K., & Williams, L. (2011). From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36(1), 95–108. https://doi.org/10.1080/02602930903201669

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840–852.

Orsmond, P., Merry, S., & Reiling, K. (2002). The use of exemplars and formative feedback when using student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education, 27(4), 309–323. https://doi.org/10.1080/0260293022000001337

Popham, W. J. (1997). What’s wrong – and what’s right – with rubrics. Educational Leadership, 55(2), 72–75.

Roell, K. (2019). How to create a rubric in 6 steps. ThoughtCo. https://www.thoughtco.com/how-to-create-a-rubric-4061367

Sadler, D. R. (1987). Specifying and promulgating achievement standards. Oxford Review of Education, 13(2), 191–209.

Sadler, D. R. (2009). Transforming holistic assessment and grading into a vehicle for complex learning. In G. Joughin (Ed.), Assessment, learning and judgement in higher education. Springer. https://doi.org/10.1007/978-1-4020-8905-3_4

Tierney, R., & Simon, M. (2004). What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research, and Evaluation, 9(2). https://doi.org/10.7275/jtvt-wg68
