Another dot in the blogosphere?

Posts Tagged ‘evaluations’

I object to end-of-course student evaluations, particularly if the course is, say, only two sessions deep. Heck, they can happen at the end of a half semester (after about six sessions) or a full semester (about double the number of sessions) and I would still object.

This is not because I got poor results when I was a teaching faculty member. Quite the opposite: I had flattering scores, often just shy of perfect tens, in the variety of courses I used to facilitate.

No, I object to such evaluations because they are rarely valid instruments. While they might seem to measure the effectiveness of a course, they do not. These evaluations are administrative and ranking tools for deciding which courses and faculty to keep.

Course evaluations are also not free from bias. Even if the questions are objective, the people answering them are not. One of the biggest problems with end-of-course evaluations is that they can be biased against women instructors [1] [2] [3].

I would rather focus on student learning processes and evidence of learning. Such insights are not clearly and completely observable from what are essentially perception surveys.

If administrators took a leaf from research methodology, they might also include classroom observations, interviews, discourse analysis (e.g., of interactions), and artefact analysis (e.g., of lesson plans, resources, assignments, and projects).

But these are too much trouble to conduct, so administrators settle for shortcuts. Make no mistake, such questionnaires can be reliable when repeated over time, but they are not valid for what they purport to measure.

Some might say that end-of-course evaluations are a necessary evil. If so, they could be improved to focus on the processes and products of learning. An article by Faculty Focus offers suggestions for questionnaires that do exactly that.

Are there any takers?

This is the third and final part of my reflection on post-session evaluations. [part 1] [part 2]

Very few people I meet question the assumptions behind the ubiquitous “smiley sheet” at the end of a professional development session.

One excuse for this is that session evaluations have “always been done this way”. My response to that is that doctors used to advertise cigarettes and we used to include lead in paint. Now we know better.

We should know better. One way to get there is to question the assumptions of Level 1 evaluation forms:

  • You are objective (you are not)
  • The evaluation format is objective (it is not, e.g., gender-biased)
  • Your impressions indicate what you have learnt (short of mind-reading, only externalisation by performance does)
  • Your feelings and impressions are somehow correlated to performance, impact, and return on investment (they are not)

In short, smiley sheets are not indicators or measures of learning. At best, they collect information about whether participants liked the session, the facilitator, or both. None of this liking guarantees learning.

Below I outline the approach to post-session evaluations that I took recently and contrast it with the conventional method.

Conventional method vs my method:

  • Fixed questions and numerical ratings vs open questions and free-form answers
  • Mandatory questions vs optional questions and activities
  • Focus on teaching and impressions vs focus on learning and reflection
  • Reliance on a single source and instance vs triangulation of exit tickets, backchannels, informal meetings, and other follow-ups
  • Emphasis on scores vs emphasis on narrative

My method is designed to complement the conventional method and to compensate for its shortcomings. It gives participants a choice of whether and how to answer. I find out what participants take away after critical reflection and what they intend to do with what they learnt. My method also does not rely on a single source of information, and it provides a narrative that numbers alone cannot.

It can also replace a conservative, number-oriented method of evaluation if depth and actual indicators of initial learning (or learner intent) are valued over perceptions and feelings. The narrative is particularly important because typical responses to a page of numbers include: What does this mean? What do we need to do now? The interpretations answer these questions, and while they are designed to persuade, they still leave the decision-making to organisers.

Such evaluations take more effort. I collect data before, during, and after my sessions. I meet, listen, and converse with people who are both critical and receptive. I distil all of this into a qualitatively designed report.

I know that anything worth doing takes hard and smart work. Simply recycling old forms and practices is lazy and provides little value if any.

This is the second part of my thoughts on flawed evaluation of instruction and professional development. This was yesterday’s prelude.

Most training and professional development outfits conduct a survey at the end of a session. This is typically a Kirkpatrick Level 1 form, otherwise known as the “smiley sheet”. These forms collect participants’ immediate, self-reported impressions of the experience and the provider.

Level 1 forms suffer several weaknesses, among them:

  • Unreliable self-reporting (inconsistency over time)
  • Invalid self-reporting (poorly phrased or misinterpreted questions)
  • Middling scores from uninterested or undecided participants
  • High scores from participants erring on the side of caution
  • Inconsistent design over time or between interventions (e.g., 4- to 7-point Likert-type scales)
  • No or low correlation to other levels of evaluation

Charlatans also know how to take advantage of the weaknesses of such forms. They create a show to wow participants and thus manipulate the Level 1 feedback. If unethical vendors or instructors are invited to design such forms, they can word the questions to favour positive responses.

Even if a form is outside their control, charlatans can focus on behaviours that are measured (e.g., content delivery or speaking ability) and ignore unlisted ones (e.g., risk-taking or promoting critical thinking).

Now this is not to say that behaviours like skilful content delivery and a velvety tongue are not important. However, it is easy to fool people into thinking they are getting a lot of content with persuasive rhetoric.

The larger question is whether the learning experience is meaningful and actionable. Level 1 forms are rarely designed to go beyond initial impression, what feels good, and what is easy to measure.

I do not conduct Level 1 evaluations of my workshops or seminars, partly because the organisers already do them and partly because I know they do not work in isolation.

When I was invited to conduct a long-running series of seminars and workshops for an organisation, I was also required to design my own evaluation reports. Rather than design a Level 1 form, I decided to do something quite different.

I will share the design principles of this evaluation strategy tomorrow.

Today I start the first part of two or three reflections on the evaluation of teaching.

I tweeted this recently and it got me thinking about how organisations evaluate vendors who conduct professional development.

We can fix a blocked sink. We can perform first aid. We can teach someone a thing or two. But there are times when you call a plumber, see a doctor, or rely on a pedagogue instead.

Most people seem to almost instinctively know when a situation is beyond their ability and it is time to rely on a more knowledgeable and skilled other. This happens in the case of the plumber and doctor, but not always for the pedagogue. Why? Could this be because everyone can teach?

Of course everyone can teach. A parent teaches a child, a sibling teaches her sibling, an owner teaches his dog. However, not everyone knows how to teach well.

That is how this rant is related to the tweet. There are many pretender pedagogues who know how to copy, brand, and sell. They know HOW to do, but they know not WHY. There is a word for these people: Charlatans.

It takes two hands to clap, so the charlatans are not the only ones to blame. Organisations that employ these people often have filtering processes. However, some organisations are more porous than others while some focus on the wrong things.

So some flies are invariably going to escape the spider’s web. What can organisations do then? Evaluate all vendors that are called to teach.

In the next part, I suggest how such evaluations are the weakest link and how they allow charlatans to put on show after show.
