Friday, 11 July 2014

What does your assessment say about your pedagogy?


At a recent assessment event for our teaching advisory network, colleagues raised a number of thought-provoking questions. Although the session was intended to be about assessment in tertiary education contexts generally, the use of digital tools for assessment purposes was a recurring theme. The tools and practices discussed included:
  • eportfolios for assessment of artifacts representing learning
  • Google forms for formative feedback from students
  • electronic means of crafting generic feedback related to criteria, which can then be copied, pasted and customised for each student's work
  • Moodle tests
  • ensuring equivalence of assessment across teaching formats, such as on campus, online, and satellite campuses.
One question in particular got me thinking. The question was something along the lines of ‘If lecturers are copying and pasting generic feedback onto students’ assignments, and this feedback is the same from one year to the next, what does this say about our pedagogy?’

I decided I needed to respond to this question.
Firstly, feedback is customised for each student's work, but it starts from a common framework that enables the marker (or multiple markers, as is often the case) to work smarter. This makes good sense because:
  • all student work is assessed in relation to consistent and transparent criteria (criteria that are shared with students in paper outlines, and sometimes negotiated with students as part of assignment preparation)
  • generic feedback is informed by those criteria
  • specific feedback is also informed by those same criteria
  • with large classes, personalising feedback requires us to find ways of managing the workload; practicality and usability are important factors when choosing assessment methods
  • a copy, paste, customise approach is a sensible way to provide valid and reliable assessment
  • it is valid because the feedback references the criteria, and leaves scope for adding specific, targeted feedback to each student rather than being formulated at the whim of the marker
  • it is reliable because the feedback is consistent across markers and across days of marking. Often marking will take up to three weeks as it is interspersed with other tasks and demands. A template helps us to sustain consistency throughout this time.
The colleague who raised this question makes an excellent point – that is, we need to attend to the feedback we are giving our students in order to squeeze out formative information for ourselves, for course development and pedagogical purposes. The feedback we give is not just formative for students; it can also inform our teaching. For example, if we spot common misunderstandings when giving feedback on student work, such as when:

  • first year students need help with APA referencing 
  • any students have difficulty referencing chapter authors in edited texts
  • second years struggle to differentiate between a summary and a critique
  • third years struggle with argumentation
  • post grad students write literature reviews as an annotated bibliography

... I could go on and on. The point is that patterns of student need are readily apparent in our feedback and can be used proactively to direct teaching and to adjust the paper in subsequent iterations. Of course, this is the essence of formative assessment. Our feedback tells us (as well as the students) what the next learning steps are and makes it easier to target those needs. Importantly, it's a mistake to assume that generic feedback remains the same year after year. Our assessments change frequently over time, informed by student needs, curriculum development, wider influences, and constant innovation. When an assignment changes, the criteria change and the feedback changes.

Feedback may also change in response to student feedback about that feedback ;-). That is, when students appraise a paper and the teaching performance of the teaching team, whether via TDU or other means, smart teachers use that student feedback to improve the paper. Finally, to return to the question of what my (copy, paste, customise) approach to feedback says about my pedagogy: it is noteworthy that the feedback I plan to give students is invariably positive. I write the feedback on the assumption that the student's work has met the criteria. For example, in a recent 100-level eportfolio assessment, one criterion reads:

"Entries capture thoughtful consideration of professional issues".

My generic feedback is in the form of a stem, beginning “Thank you for sharing your eportfolio. Your entries capture thoughtful consideration of professional issues, including…”. I then customise the feedback by listing some of the professional issues raised by that student in their eportfolio. So, if a student referred to culturally responsive teaching in mathematics (a subject of recent New Zealand media coverage), and also demonstrated consideration of the class-size debate, I might complete the stem by writing:

“Thank you for sharing your eportfolio. Your entries capture thoughtful consideration of professional issues, including culturally responsive teaching and class sizes.” 

An eportfolio that did not include consideration of any notable professional issues would instead receive a comment like:

“It was expected that your entries would capture thoughtful consideration of professional issues”.
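For readers who like to see the mechanic spelled out, the stem-plus-customisation approach can be sketched as a simple template. This is a hypothetical illustration only: the stem and fallback wording are taken from this post, but the `feedback` helper and its list-of-issues input are assumptions, not any particular marking tool.

```python
# Sketch of the copy-paste-customise feedback stem (hypothetical helper).
# The stem wording comes from the post; the function is illustrative.

STEM = ("Thank you for sharing your eportfolio. Your entries capture "
        "thoughtful consideration of professional issues, including {issues}.")

FALLBACK = ("It was expected that your entries would capture thoughtful "
            "consideration of professional issues.")

def feedback(issues):
    """Complete the generic stem with this student's issues, or fall back."""
    if not issues:
        return FALLBACK
    return STEM.format(issues=" and ".join(issues))

print(feedback(["culturally responsive teaching", "class sizes"]))
print(feedback([]))
```

The generic stem keeps the comment anchored to the criterion, while the customised list of issues makes it specific to the individual student's work.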

How does this give useful feedback on student learning? It lets individual students know whether and how their work has met the criteria. Subsequent comments may elaborate further on quality and suggest how further improvement could be made.

  So, what do others think about assessment with digital tools? Please feel free to offer comments. 
