Thursday 4 September 2008

Developing Assessment to Support Student Learning

I've just watched Professor Graham Gibbs deliver a very powerful keynote speech at the Assessment, Learning and Teaching day of our annual staff development festival at Leeds Met. He presented empirical evidence that revealed the approaches to assessment that support student learning, and the approaches that don't.

The key points, as I interpreted them, were:

Lots of feedback is the route to quality learning. The majority of resources should be devoted to this.

The more summative assessment you have, the worse off everybody is.

The programme, award, course - whatever you want to call it - the thing with a discipline-specific title that lasts three years - is the most effective container for learning. Splitting it into 24 mini-courses makes things worse.

Students need to be welcomed into their course's community of practice, which is populated by co-learners from all three year groups, plus staff.

The more explicit you are about criteria, the more students will work for a mark and miss the point of learning.

Students often see marks as a judgement about them as a person, rather than a judgement of their learning. There is great value in learning that doesn't result in a mark.

Feedback needs to be received as soon as possible to have any real value. Quick and dirty feedback is better than accurate but delayed feedback.

Peer support and peer pressure help quality learning to take place.


All of these points reinforce the strong beliefs that I hold about effective assessment and learning, gained through my experience as an art & design educator. They also confirm my suspicions about other popular approaches.

I have a very clear idea of how this evidence relates to the three-year undergraduate programme that I lead, but how might it relate to learning in virtual worlds?
