Another dot in the blogosphere?

Posts Tagged ‘evaluating’

Video source

Talk about a double-whammy — incompetent people who think that they are amazing do not know they are incompetent nor do they have the mindset or aptitude to change.

This observation is based on psychological research and is called the Dunning-Kruger effect.

How do we avoid overestimating our own abilities as teachers and educators? I suggest each of us reflect critically and strategically. My practice is to do so at least daily, and this has become a habit.

How might we not overestimate our collective abilities as a system or country famed for its schooling and education system as measured by tests? I say we ignore PISA results and university rankings. These external validations count for little if we do not first critique ourselves and seek to continuously improve.

It would be an understatement if I said my last week was a tiring one. I balanced classes in the evening and evaluations of novice facilitators in the day.

I was glad that I had the flexibility to arrange the evening classes early in the week and negotiate evaluations later the same week.

When I was a young faculty member, I was treated like a number on a schedule. I recall having to leave home at 6am to get from one end of the island to the other to set up for early morning classes. Sometimes this was on the back of a class the evening before or I had a string of tutorials throughout the day. It was not that much better with seniority because the timetable was king.

Now I get to choose what to be involved in as a consultant and only because I relate to the causes of those I collaborate with. But this does not mean that the work is any less strenuous.

My evening classes are typically from 6.30-9.30pm in a central location. I leave home at 4.30pm to take into account time for travel, an early dinner, and setting up the classroom. After clearing up and chatting with people who stay behind, I might leave the venue at around 10pm and am lucky if I am home at 11pm.

This is a sacrifice that no amount of remuneration compensates for: This takes away from family time. This week was exceptionally painful because it coincided with a week-long school vacation that I could not enjoy with my wife and son.

I make sure that the sacrifice is worth it. I keep the sessions as lively as possible and refrain from lecturing. The entire three hours of each class is driven by learner-centred activities, technology-mediated strategies, and individual reflection.

Jigsaw method of peer learning and instruction.

The photo above might look static, but it is actually a snapshot of groups hard at work during a jigsaw of peer instruction. It is a joy to see energy levels high and questioning minds active even at the end of the session. Sometimes I feel bad that we cannot do more, or that I have to stop discussions in order to move on to other important activities and topics.

The evening classes are particularly draining because the body and mind want rest after a day of work. But my learners and I keep our energies up and I employ active learning strategies to help in this regard.

An equally draining activity is evaluating novice facilitators. I do this as part of a cumulative assignment that future faculty develop over approximately two months. They plan and implement a self-contained 10-minute lesson that showcases their ability to be learner-centred.

Evaluating microteaching at NTU.

I am always encouraged by those who make the effort to teach in ways that they were not taught when they were undergraduates.

The other facilitators and I have the unenviable task of changing or shifting mindsets over a very short period. The reception and abilities of our learners span the spectrum from the militantly resistant to the devoutly willing. Yet we have to help all of them manage their expectations and coax performances that meet the high standards we set for them.

All this makes for taxing, but fulfilling work. Even though I am technically paid to be with these learners three hours at a time, I do my usual early start and late end. The latter is often a result of staying back to discuss ideas, overcome stumbling blocks, or debate philosophical differences.

A while ago, a contact of mine asked me what I did. I described my teaching and facilitating work in less detail than I did above. However, he was sharp enough to label what I did “unbundling”. I understood what he meant immediately.

I had dropped the unnecessary meetings and the regular interruptions. I was able to offer specific services to my clients and collaborators that I was well-versed in as a professor and was also able to focus on these tasks exclusively instead of being torn in different directions.

I have always made time to read and write (I started this blog when I had less bandwidth than I have now) and the unbundling now affords me more. In hindsight, I wish I knew then what I know now about unbundling. It would have given me something to look forward to.

This is the third and final part of my reflection on post-session evaluations. [part 1] [part 2]

Very few people I meet question the assumptions behind the ubiquitous “smiley sheet” at the end of a professional development session.

One excuse for this is that session evaluations have “always been done this way”. My response to that is that doctors used to advertise cigarettes and we used to include lead in paint. Now we know better.

We should know better. One way to get there is to question the assumptions of Level 1 evaluation forms:

  • You are objective (you are not)
  • The evaluation format is objective (it is not, e.g., gender-biased)
  • Your impressions indicate what you have learnt (short of mind-reading, only externalisation by performance does)
  • Your feelings and impressions are somehow correlated to performance, impact, and return on investment (they are not)

In short, smiley sheets are not indicators or measures of learning. At best, they collect information from participants about whether they liked a session or facilitator or both. None of this liking guarantees learning.

Below I outline the approach to post-session evaluations that I used recently and contrast it with the conventional method.

| Conventional method | My method |
| --- | --- |
| Fixed questions, numerical ratings | Open questions and free-form answers |
| Mandatory questions | Optional questions and activities |
| Focuses on teaching and impression | Focuses on learning and reflection |
| Reliance on a single source and instance | Triangulation of exit tickets, backchannels, informal meetings, and other follow-ups |
| Emphasises scores | Emphasises narrative |

My method is designed in part to complement and compensate for the shortcomings of the conventional method. It gives participants a choice of whether to answer and how to answer. I find out what participants take away with them after critical reflection and what they intend to do with what they learnt. My method also does not rely on a single source of information and provides a narrative that numbers alone cannot provide.

It can also replace a conservative, number-oriented method of evaluation if depth and actual indicators of initial learning (or learner intent) are valued over perception and feelings. The narrative is particularly important because typical responses when looking at numbers include: What does this mean? What do we need to do now? The interpretations provide answers to these information gaps, and while designed to persuade, still leave the decision-making to organisers.

Such evaluations take more effort. I collect data before, during, and after my sessions. I meet, listen, and converse with people who are both critical and receptive. I distill all these into a qualitatively designed report.

I know that anything worth doing takes hard and smart work. Simply recycling old forms and practices is lazy and provides little value if any.

This is the second part of my thoughts on flawed evaluation of instruction and professional development. This was yesterday’s prelude.

Most training and professional development outfits conduct a survey at the end of a session. This is typically a Kirkpatrick Level 1 form otherwise known as the “smiley sheet”. These forms collect immediate self-reported impressions from the participants of the experience and the provider.

Level 1 forms suffer several weaknesses, among them:

  • Unreliable self-reporting (inconsistency over time)
  • Invalid self-reporting (poorly phrased or misinterpreted questions)
  • Middling scores from disinterested or undecided participants
  • High scores from participants erring on the side of caution
  • Inconsistent design over time or between interventions (e.g., 4 through 7-point Likert-like scales)
  • No or low correlation to other levels of evaluation

Charlatans also know how to take advantage of the weaknesses of such forms. They create a show to wow and thus manipulate the Level 1 feedback. If unethical vendors or instructors are invited to design such forms, then the questions can be worded to favour positive responses.

Even if a form is outside their control, charlatans can focus on behaviours that are measured (e.g., content delivery or speaking ability) and ignore unlisted ones (e.g., risk-taking or promoting critical thinking).

Now this is not to say that behaviours like skilful content delivery and a velvety tongue are not important. However, it is easy to fool people into thinking they are getting a lot of content with persuasive rhetoric.

The larger question is whether the learning experience is meaningful and actionable. Level 1 forms are rarely designed to go beyond initial impression, what feels good, and what is easy to measure.

I do not conduct Level 1 evaluations of my workshops or seminars partly because the organiser does them and also because I know they do not work in isolation.

When I was invited to conduct a long-running series of seminars and workshops for an organisation, I was also required to design my own evaluation reports. Rather than design a Level 1 form, I decided to do something quite different.

I will share the design principles of this evaluation strategy tomorrow.

Today I start the first part of two or three reflections on the evaluation of teaching.

I tweeted this recently and it got me thinking about how organisations evaluate vendors who conduct professional development.

We can fix a blocked sink. We can perform first aid. We can teach someone a thing or two. But there are times when you call a plumber, see a doctor, or rely on a pedagogue instead.

Most people seem to almost instinctively know when a situation is beyond their ability and it is time to rely on a more knowledgeable and skilled other. This happens in the case of the plumber and doctor, but not always for the pedagogue. Why? Could this be because everyone can teach?

Of course everyone can teach. A parent teaches a child, a sibling teaches her sibling, an owner teaches his dog. However, not everyone knows how to teach well.

That is how this rant is related to the tweet. There are many pretender pedagogues who know how to copy, brand, and sell. They know HOW to do, but they know not WHY. There is a word for these people: Charlatans.

It takes two hands to clap, so the charlatans are not the only ones to blame. Organisations that employ these people often have filtering processes. However, some organisations are more porous than others while some focus on the wrong things.

So some flies are invariably going to escape the spider’s web. What can organisations do then? Evaluate all vendors that are called to teach.

In the next part, I suggest how such evaluations are the ultimate weak link and how they could be what allows charlatans to put on show after show.

One of my newfound favourites on YouTube is Brett Domino. He and his partner form the Brett Domino Trio band (and yes, there are only two of them).

Video source

BD appeared on my radar thanks to a Gizmodo post a short while ago. He spoof-taught us how to create a hit pop song. The video went viral, but I do not think that his channel has got as many new subscribers as he deserves.

I think that he is a rare combination of musical and comedic talent. But not everyone agrees.

Video source

When the BD Trio appeared on Britain’s Got Talent five years ago, Simon Cowell did not appreciate his talent and was the first (and only) judge to buzz them out. He did not get what BD was trying to do. The audience seemed to get it. The other judges did and even had to explain it to Cowell.

There are many Cowells in the world today. They have narrow definitions of talent or worth. When they are the majority they drown out the views of the minority who think otherwise. Even if they are the minority, they have so much influence, possess veto powers, or claim to represent current norms that they get their way.

Video source

Take BD’s video response to Airbnb’s recently redesigned logo for example (warning: Not for the prude or sensitive). BD was not the first to point out that the logo looked like genitalia. However, I think he quickly responded with a funny and catchy song. But how many people are going to laugh along and appreciate his talent?

Here is another example. Someone I know on Twitter expressed her frustration at having to show her O and A-level certificates as she moved to another job in the civil service. Most statutory boards and the civil service here prize paper qualifications seemingly at the exclusion of everything else. Almost two decades of teaching experience was not good enough.

That person was facing a Cowell form of evaluation. But I think that it is far more important to know what you are worth by your own reckoning, and if you find it necessary, find other measures.

The BD Trio has its likes and comments on YouTube. Owners of other forms of digital portfolios can collect and curate comments, critiques, and bouquets, and showcase them alongside processes and products of learning. I think these will be far more important and effective in the near future.

I have found this to be true for myself. I am leaving NIE at the end of the month. But I have found suitors despite not actively looking for a more permanent job. People know me from what I have shared at talks or online. My worth is not measured by my doctorate but by what value I bring to the table. That value is not theoretical in the form of school certificates but a living portfolio in the form of this blog and other digital artefacts.

So instead of waiting for the world to change, I suggest we see and be the change. We all have talent, whether someone else values it or not.

