
Teaching Matters Newsletter October 2022: Five Inquiries Into Assessment and Feedback

Introduction

The University of Edinburgh’s new Assessment and Feedback Principles and Priorities include the following: “Our assessment and feedback practices will be reliable, robust and transparent”. But how might we best deliver on this promise – made to both our students and our teachers? How might we do so at a time when the reliability, robustness and transparency that students seek from teaching remain complicated by the fall-out of Covid-19, and consequent changes to our learning technologies?

Teaching Matters’ recent series, ‘Assessment and Feedback: Principles and Priorities’, offered the chance to reflect on what assessment could and should look like. The series constitutes a brief but rich glimpse into this future, being made feasible today in classrooms at Edinburgh and beyond. Perhaps the greatest takeaway from these blog posts will be their sense of consensus – specifically, on the urgent need for multi-laterality in assessment and feedback. As Tina Harrison notes in her introductory blog post, we must attempt to “move beyond assessment as something that is done to students to something that is done with them.” To that end, this series highlights ongoing experiments with assessment and feedback from all angles of our academic community: course designers, lecturers, and students alike.

In this newsletter, you will find five inquiries from the eight blog posts that make up the ‘Assessment and Feedback’ series. These are followed by our regular features: Collegiate Commentary, In Case You Missed It (ICYMI), and Coming Soon at Teaching Matters! If you'd like to keep up with Teaching Matters, make sure to sign up to our Monthly Newsletter Mailing List.

Five Inquiries into Assessment and Feedback

Image credit: Jo Szczepanska, unsplash, CC0

Inquiry 1: What is assessment?

Forms of assessment are part and parcel of nearly every level of education. As a result, our mechanisms for assessment – as well as our justifications for it – can sometimes feel a little too self-evident. Ironically, then, the sheer ubiquity of assessment risks making it difficult to grasp and innovate.

In their post, Sabine Rolle and Neil Lent strip the concept of assessment down to its very basics by asking, “what is assessment?” They find purchase on this slippery question by narrowing it down to the notion of ‘purpose’, though this too points at the multiple possibilities, routes, and roles of assessment within the classroom. Among many other ways, assessment might be used as a way “to set standards; to check what has been learned (to validate our programmes); to motivate students to engage with the work; to inform students about progress…” Such diversity means that it is important to remain open and critical about the question of assessment: “Thinking carefully about its purpose will help us do it better.”

In her post, outlining in detail the principles and priorities of assessment and feedback, Tina Harrison concurs:

“Assessment is an integral part of learning and teaching and has many purposes, including being for and of learning. We need to be clear about the purpose of assessment, and ensure that the assessment methods are fit for that purpose.”
via Tina Harrison

The inquiry we’ve made so far, ‘what is assessment?’, can also be reformulated as: ‘where is assessment?’ Dave Laurenson, in his post on the scope of assessment and feedback at the programme level, raises an important point about the location(s) of assessment beyond ‘just’ the individual course. Should we be looking at larger pedagogical forms, such as course clusters and programmes, to access a big-picture perspective of effective assessment? What might a programme-focused approach to curriculum transformation look like? As Dave explains:

Going forward, we could/should be even more ambitious and introduce assessment at programme level... This would allow us to evidence the acquisition of competencies – including professional competencies – that are not linked to a specific course but are developed across several courses and several years of study to meet overall programme learning outcomes.
Image credit: Gerd Altmann, Pixabay, CC0

Inquiry 2: What should feedback look like?

In their post on effective feedback, Neil Lent and Tina Harrison explain that “feedback can be a strong influencer of students’ learning (Hattie and Timperley, 2007) but, when done badly, can be worse than no feedback at all (Black and Wiliam, 1998)”. Their post underscores the importance of constructing feedback as dialogue; not as instructions relayed from educator to student but as a conversation that helps a student develop their capacity to self-assess and, ideally, their desire to try, try again.

While we tend to think about feedback as a one-on-one affair, with the instructor taking the lead, we find that effective feedback practices may also be student-led (as Catherine Bovill indicates in her post on co-creating with students), a collective effort (jumping off what Phil Marston calls the “shared language” of transparent marking), and even, perhaps, a project of Artificial Intelligence (as Sian Bayne and Tim Drysdale predict in their post on assessment’s digital futures)!

Inquiry 3: Do you really know a First when you see one? Or, the question of transparency

The truth is this: the use of standardised marking criteria is sometimes trumped by the power of that inexplicable pedagogical feeling best described as, “I know a First when I see one”. As Tina Harrison explains, this attitude has not gone unnoticed by exam assessors, as well as concerned students. She flags that, for students, “it is not always clear how marks have been awarded, giving rise to comments about ‘unclear’, ‘unfair’, and ‘subjective’ marking. It is important that marking criteria are clear to students and made available with the assessment task.”

In his post, Phil Marston takes up this issue of transparency in assessment, embodied, as he sees it, by “I know a First when I see one”, but also by the misuse of subjective adjectives (“excellent!”/ “inadequate”) and a lack of shared marking priorities across larger courses. Drawing on focus group research, Phil and his colleagues offer several practical suggestions to improve the assessment process from the side of the instructor – though, as he writes:

The single most helpful thing… is to have a conversation with the students about what they think the [marking] descriptors are describing. We had meetings dedicated to rewriting descriptors in a shared language… It seems to help the students understand, not just the role of assessment in a university setting, but understand the academic endeavour of working with knowledge that is always provisional.

Inquiry 4: What does it mean to centre students?

Image credit: Annie Hryshchenko, CC0

In her post on the co-creation of assessment practices – a project that involves creating transformative dialogue on assessment between staff and students – Catherine Bovill offers exciting glimpses into co-creation in action at a range of universities. Staff at Edinburgh, Glasgow, Cumbria, Vienna, Kansas, Auckland, and University College Dublin are actively producing ways to involve students in the re-imagining of their assessments and feedback. Student involvement in these acts of co-creation has been well-received, and includes marking sample essays and designing questions. At the Edinburgh Futures Institute, co-creation has been particularly creative, with students addressing learning outcomes by developing ‘assets’ ranging from board games to poetry to exhibition proposals. You can watch a recording of the course co-organiser, Andy Cross, presenting this work.

The upshots of co-creation cannot be overstated. Sam Maccallum (Vice President Education, Edinburgh University Students’ Association) reflects on how vital it is to give students the opportunity to make “informed decisions” in their assessments. In their post on creating inclusive, equitable, and fair assessment practices, they view co-creation as a much-needed step in that direction:

… Assessment methods should consider all students in their creation, ensuring that no students are left behind during policy decisions. Shifting towards inclusive assessment recognises the diversity of student needs and aims to bridge gaps in attainment and outcomes for all of our students.

Sam explores this idea through their perspective as a student with ADHD, and in light of a mental health crisis experienced during the height of the Covid-19 pandemic. Their lived experiences illustrate that traditional exam design and coursework (characterised by rigid, static formats) urgently require re-thinking alongside a diverse student community. This is crucial given that such forms of assessment may otherwise favour normatively abled students – and may, as Sam argues, deepen existing racial inequities in assessment outcomes.

Inquiry 5: What are the digital possibilities of (and for) assessment and feedback?

Sam Maccallum’s post points to an unexpectedly positive outcome of the pandemic’s overhaul of education systems. Its pressures have pushed us to innovate pedagogy through technology, in ways that have made assessments more accessible than ever to students. All of Sam’s examinations during lockdown were, “online, 24 hour, and open book. The freedom of taking an exam at home, without distractions of an exam hall, let me approach the exams on my own terms.”

Yet technology poses as many risks to learning and teaching as benefits. Sian Bayne and Tim Drysdale consider such academic misconduct issues as text-generating AI and online contract cheating services (‘essay mills’). 98% of universities in the UK now use plagiarism detection systems in response to these developments, but such technologies themselves pose serious risks. These include not just unreliable functionality and limited scope, but also their role in sowing further distrust within a teacher-student relationship already strained by increasing workloads.

Nonetheless, technology, when used critically, can offer teaching immense rewards. Sian and Tim outline digital innovations in assessment practice from programmes, departments, and Schools across The University of Edinburgh. For instance, certain tools allow teachers to create randomised questions for each student, increasing academic integrity. The use of collaborative digital resources like Miro, Padlet, and Jupyter notebooks encourages students to produce more creative forms of academic knowledge for assessment and feedback, drawing us well beyond the traditional essay.

Image credit: Fauxels, pexels, CC0

Collegiate Commentary

Donna Hurford (left) and Andrew Read (right)

with Donna Hurford, Academic Developer at the University of Southern Denmark, and Andrew Read, Independent Educational Consultant

While Teaching Matters primarily showcases University of Edinburgh teaching and learning practice, our core values of collegiality and support extend beyond our institution, inviting a wider, international community to engage in Teaching Matters. In this feature, we ask colleagues from other Universities to provide a short commentary on ‘Five things...’, and share their own learning and teaching resource or output, which we can learn from.

Donna and Andrew's commentary on 'Five inquiries into assessment and feedback'

Inquiry 1 raises the questions ‘what is assessment?’ and ‘where is assessment?’ but perhaps we should start with ‘why is assessment?’ When we attempt to untangle assessment from learning, assessment becomes a ‘thing’, subject to institutional mechanisms, and, perhaps inevitably, detached from its relationship with learning. Internal quality assurance procedures can emphasise this disconnection by linking the provision of assessment to administrative (or bureaucratic) requirements: ‘has a range of assessment types been employed across the programme?’ and ‘does the course specification take account of formative assessment?’, etc. Institution-wide targets will have a similar impact: ‘has the percentage of students graduating with a good degree increased in the last two years?’ etc. Assessment treated like this acquires functional fixedness (Hurford and Read, 2022): the ‘why’ is about box-ticking.

Programme-level assessment could disrupt this. Dilly Fung (2017) advocates practical approaches to connected programme design. We would suggest that the questions that Fung poses to departments and programme teams (2017, p. 60) could be usefully adopted by university quality assurance teams when considering course and programme validation. But it is interesting that, within Fung’s arguably radical way of approaching programme design, the ultimate ‘why’ of formative feedback is to serve summative grading.

Inquiry 3 voices concerns about transparency and our gut responses to students’ work. Without greater transparency we risk having gaps between teacher and student expectations. Such gaps are fertile ground for misunderstandings about course assessment and other learning activities. And with misunderstandings come biases such as expectation biases, “a weak presentation, just as I expected”, and the flat-packed IKEA bias, “they should have understood the assessment, I explained it in the course and it’s in the course handbook”. Benson’s (2016) ‘Cognitive bias cheat sheet’ and the ‘Cognitive Bias Codex’ provide useful insights into the range of cognitive biases which may affect our perceptions and judgements.

Phil Marston’s recommended conversations between teacher and students about course assessments and what a good one looks like can help reduce these expectation gaps. At the University of Southern Denmark (SDU), we offer a course for university teachers on helping students understand assessment which draws on Sadler’s (1989) legacy contribution to developing shared understandings of assessment. During the course, teachers are offered different approaches to co-developing assessment checklists or rubrics with the students, such as offering students a partially completed rubric and asking student groups to fill in the gaps. And if there isn’t the time or the will to co-create then the teacher, having developed the course assessment rubric, mixes up the criteria descriptors and invites the students to solve the rubric jigsaw.

Image credit: Donna Hurford and Andrew Read

By working collaboratively on re-organising the rubric’s contents, the students engage with the criteria descriptors’ syntax and query the meaning of ambiguous descriptors such as ‘reasonable’ or ‘solid’. To help negotiate ambiguous assessment language, students can next benefit from peer reviewing exemplars of course assessment using the course assessment rubric. By applying the rubric to authentic examples, students get insights into standards and quality: “oh, that’s what a good one can look like”. And it isn’t only the students who can benefit from this revelation. Whilst designing a rubric, the teacher reflects on and articulates their understanding of standards and quality. These processes all take time, but by actively discussing and working with assessment throughout a course, there’s a better chance of reducing the gap between the teacher’s and the students’ expectations of a course assessment.

As discussed in Inquiry 5, the lockdown required a sudden shift to online teaching and learning, which brought its own opportunities and challenges. Oral exams are a common form of assessment in the Danish education system: students often submit an individual or group written assignment, followed by individual oral exams. However, student visibility in oral exams can trigger examiners’ confirmation biases. The orchestra audition study reveals how non-anonymised recruitment led to gender stereotyping, and how blind auditioning resulted in criteria-informed assessments and fairer recruitment (Goldin and Rouse, 2000).

Even if the examiner doesn’t recognise the student, there is the risk of first impression bias or the anchor bias (Myers, 2022), where for example, the examinee’s first response significantly influences the examiner’s expectations of their overall performance in the exam. Strategies for managing fair oral exams and reducing student anxiety include oral exam role plays, giving the students the opportunity to experience and prepare for the exam format, and implementing checklists for bias-aware oral exams (Hurford, 2020). Shifting oral exams online during the lockdown didn’t reduce the risk of anchor bias, but it was noticeable at SDU how many more teachers sought advice about managing online oral exams fairly and allaying student anxieties.

So, when thinking about what assessment and feedback could look like, why not picture a model without grades, a degree without classified honours? The ‘class system’, after all, is uniquely British and has only existed in this form since 1918 (Alderman, 14.10.2003). Shouldn’t universities devise less crude mechanisms for recognising the attainment of knowledge, understanding, or whatever the particular institution values?

One of the key challenges of acting on any piece of blue-sky thinking in higher education, if we put to one side the obstacles raised by institutional mechanics, is getting student buy-in. In this context, ‘buy-in’ is inevitably bound up with ‘satisfaction’. Student-led feedback (Inquiry 2) and assessment co-creation (Inquiry 4) look great on paper – we would be 100% behind innovations such as these – but students’ expectations need to be carefully managed. How do you respond when a student tells you ‘I don’t want to design the assessment method – that’s what I’m paying the university to do’? This isn’t just a case of the student as consumer wanting their money’s worth. This is about the student having had an educational lifetime of being done to, not done with. ‘Buy-in’ in this context is also about authenticity. In order to equip students to provide effective feedback or to co-create assessment activities, how do we avoid simply training students to duplicate the models of feedback and assessment that we already have in place? Perhaps we could consider embedding a thread of assessment and feedback design within programmes, building critically and creatively, to support students to reach thoughtful, genuinely learner-centred conclusions about what they can do.

Resources for addressing bias in teaching, learning and assessment:

Benson, Buster (2016) Cognitive bias cheat sheet. Available at https://medium.com/thinking-is-hard/4-conundrums-of-intelligence-2ab78d90740f

Hurford, Donna and Read, Andrew (2022) Bias-aware Teaching, Learning and Assessment. St Albans, UK: Critical Publishing

Hurford, Donna and Read, Andrew (2022) Address bias in teaching, learning and assessment in five steps. Available at: https://www.timeshighereducation.com/campus/address-bias-teaching-learning-and-assessment-five-steps

SDU Centre for Teaching and Learning (no date) Unlimited Thinking and Teaching. Insights and resources on bias aware teaching, learning and assessment for academic and professional contexts, including addressing the SDGs. Available at: https://unlimited.sdu.dk/

Donna Hurford is an Academic Developer at the University of Southern Denmark where she leads on the Lecturer Training programme, teaches about collaborative learning, addressing bias, integrating sustainable development goals, assessment, and questioning. She has a background in school teaching and pre-service education at the University of Cumbria.

Andrew Read is an Independent Educational Consultant. He was head of the Education Division at London South Bank University and, before that, Head of Teacher Education at the University of East London. Before working in higher education, he was a primary school teacher.

In case you missed it (ICYMI)

Don't forget to read our recent extra posts:

Check out the call for Student Partnership Agreement proposals - funding of up to £1000 is available for projects that involve meaningful collaboration between University of Edinburgh staff and students. The deadline is Monday 17 October. You can read more about the funding and ideas for projects in a recent post.

Interested in learning more about online teaching? Sign up to this short 2-week online course, which will provide a light touch introduction to some things to consider when teaching online: An Introduction to Online Teaching (10th – 21st October 2022). The course will provide participants with the opportunity to experience what it’s like to be an online student, as well as providing a research-informed introduction to online teaching and digital assessment approaches. Book a place here: https://edin.ac/3prtlWb. Please note this event is booked via People & Money. If you have any issues with booking please contact IAD.Teach@ed.ac.uk.

Peer Observation of Digital Teaching (Pilot): This initiative offers participants an opportunity to observe and share digital teaching practices in a supported and structured way through peer observation. The course takes place over an eight-week period and the pilot will take place between 17th October - 9th December. Further details can be found here - https://edin.ac/3Pxodum. Expressions of interest can be registered by completing this short form - https://edin.ac/3pqGi2r.

Coming soon at Teaching Matters

Upcoming blog themes

We continue with the Sept-Oct Learning & Teaching Enhancement Theme 'Careers and Employability', and the Hot Topic 'Student Partnership Agreement 2022'.

The Nov-Dec Learning & Teaching Enhancement Theme will be on Reflective Learning.

Want to keep in touch?

Sign up for our email mailing list!

If you would like to contribute to Teaching Matters, we'd love to hear from you: teachingmatters@ed.ac.uk

Credits:

Created with images by Günter Albers - "blue sky with sun" • Bangkok Click Studio - "Group of six business people team standing together and holding colorful and different shapes of speech bubbles" With thanks to Melanie Grandidge for her icon artwork design.
