Another dot in the blogosphere?

Posts Tagged ‘feedback’

About twice a year, there is an almost clockwork response to the university practice of student evaluations/feedback on teaching (SETs/SFTs). Why twice a year? These coincide with the end of university semesters.

What is wrong with SETs/SFTs? Some universities mandate them by withholding student results until students complete the surveys. Worse still, judged against the research, such SETs/SFTs might themselves earn a failing grade.

Berend Van Der Kolk recently highlighted the futility and invalidity of such tools for measuring the effectiveness of university teaching. He cited a 26-year-old paper, How to Improve Your Teaching Evaluations without Improving Your Teaching, and a more recent one, Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related.

Among the highlights of the newer paper were: 

  • Re-analyses of previous meta-analyses of multi-section studies indicate that SET ratings explain at most 1% of variability in measures of student learning.
  • New meta-analyses of multi-section studies show that SET ratings are unrelated to student learning.
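To put that 1% figure in perspective (this is my arithmetic, not a claim from the paper): the proportion of variability explained is the square of the correlation, so

r = √(r²) = √0.01 = 0.1

That is, even on the most generous reading, SET ratings correlate with student learning at only about 0.1 — a very weak signal before we even ask what the ratings measure.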

And yet, most universities persist with SETs/SFTs because they are easy to implement and administratively convenient. As Van Der Kolk put it:

Why are such quantified ‘performance indicators’ still used if this is the case? Likely because they help to simplify the complex classroom reality. Quantification can reformulate something as complex and multidimensional as teaching into a one-dimensional score. And such a score gives the possessor a sense of control and understanding.

They might find ways to polish this, er, low-hanging fruit, but those who are honest with themselves and engage critically with the research know what lies at the core.


I borrow Van Der Kolk’s wisdom and end with his conclusion:

What we need to address is our understanding and use of ‘performance’ measures such as SETs. We should not be naïve about their limitations, ideally complement quantified measures with richer qualitative information and use them to initiate dialogues about what matters. Because not everything that matters can be measured, and not everything that can be measured matters.

I received an email notification last week from Niantic Wayfarer that a Pokémon stop that I had suggested was rejected.

Late and irrelevant feedback from Niantic Wayfarer.

My issue with the notification was that it was almost three years late. I had forgotten that I had even suggested that stop. I had to visit the Wayfarer portal to see the submission I made on 4 July 2019. 

The email also gave no reason why my stop was rejected; I had to dig up the details in the portal. Apparently it was a duplicate. Of course it was! The mall that the structure was in was new then, and someone probably had the same idea as me. Duplication is not a good reason to reject a suggestion. The fact that one or more others also suggested it means that it had appeal.

I visited a few other rejected suggestions and found the feedback to be irrelevant. For example, one stop was a foot reflexology spot near where I live. The reason for rejecting it was “generic business”. The feature was not a business but an amenity for the community.

Niantic relies on people to suggest and review Pokémon stops. So this reminded me of the importance of teaching people how to provide timely and relevant feedback. If I were conducting a workshop on technology-mediated feedback, I might start with this as a hook to activate the schema of my learners.

As an educator, my reaction to this tweet was: True that!

But I also thought about what makes for effective feedback, i.e., constructive critique that is skilfully delivered and acted upon positively. Of the myriad possible strategies, I boil things down to three:

  1. Timeliness
  2. Preparedness
  3. Meaningfulness

Being timely is partly about how soon the feedback is provided. If there is too much distance between the performance of a task and the feedback on it, the latter is ineffective because the performer cannot remember what they did or how they did it.

Timeliness is also about being sensitive to context. It can sometimes help to put some reflective distance between an event and having a conversation about it. 

This leads to the preparedness of the learner to receive such feedback and of the teacher to provide it.

A learner might be more open to feedback when they can recall what they wrote in an essay a day ago instead of a month ago. The same learner might be more ready to listen when they have had time to cool off and take other perspectives following a bad group project experience.

A teacher needs to be prepared with feedback strategies relevant to their academic areas. Such preparedness does not arrive by chance. Both students and teachers need to be taught how to listen actively and to ask critical questions.

But above all, feedback needs to be meaningful. This means that it is understood by and relevant to the learner. It is provided in context and concrete enough to be acted upon. One generic framework that might scaffold the design of feedback is my 5W1H1S: the Who, What, Where, When, Why, and How of feedback, plus the So What of what happens if you do or do not act.
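As an illustration only, here is a minimal sketch in Python of how the 5W1H1S prompts might become a checklist for drafting feedback. The prompt wordings and all names here are my own inventions, not part of any published tool.

    # Illustrative sketch only: turning the 5W1H1S prompts into a simple
    # checklist for drafting feedback. All names are hypothetical.
    PROMPTS = {
        "Who": "Who is the feedback for, and who is giving it?",
        "What": "What specific work or behaviour is being addressed?",
        "Where": "Where exactly in the work does the issue appear?",
        "When": "When should the learner act on the feedback?",
        "Why": "Why does this matter to the intended outcome?",
        "How": "How, concretely, can the learner improve?",
        "So What": "So what happens if the learner does or does not act?",
    }

    def draft_feedback(responses):
        """Assemble a feedback note, flagging any prompt left unanswered."""
        lines = []
        for key, prompt in PROMPTS.items():
            answer = responses.get(key, "").strip()
            lines.append(f"{key}: {answer if answer else '[unanswered] ' + prompt}")
        return "\n".join(lines)

    print(draft_feedback({
        "What": "The essay's second paragraph makes an unsupported claim.",
        "How": "Cite at least one primary source to back the claim.",
    }))

The point of the checklist is the flagging: any unanswered prompt is a cue that the feedback is not yet concrete enough to act upon.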

Seth Godin wrote about what he called the Oxford comma trap. He said that when providing feedback, focusing on trivial matters like typos and grammar was not as important as the larger issues.

I disagree. The small things matter, too, because they add up or can have a disproportionately large impact. Take how Richard Feynman worked out how O-ring failures led to the Challenger explosion [1] [2].


The approach to giving feedback does not have to be a choice between small details and large issues. We can do both.

This is what I did on a project that I am currently working on. The small issues built up to larger ones. I zoomed in and zoomed out to provide feedback that I thought was critical and constructive.

The small things matter because they reveal the degree of care, accuracy, and precision. The small things are not trivial when they add up.

The international project I am currently working on requires me to provide feedback on documents. 

One thing I asked the project lead was how blunt I could be. I asked this because a) I recognise the importance of tone during feedback, and b) I believe that it is better to be cruel in order to be kind.


But my overall concern was that we acknowledge that we all have our own blindspots. These are areas for improvement that are not obvious to us but are clear to others, who have the advantage of being on the outside.

Thankfully we were on the same page about being direct and clear about feedback. And this connects to two things that happened to me.


The first was a potential consulting opportunity I had last year. Long story short: I met the interested parties, we had a friendly chat over coffee, and I did not hear from them after that. There was no closure, e.g., a “we are not interested” or a “we might work together later”. I was professionally ghosted!

Some might consider this to be rude. But I attribute this to the fact that the people I chatted with do not operate in the educational arena. So they do not speak the same way or have the same expectations. 


I contrast that with an interaction I had with a teacher from the US who said I had misspelt a word in my verb wheel for Bloom’s Taxonomy. I pointed out that I was using UK English and we subsequently exchanged several email messages about the joys of being educators. I also created a US English version of the job aid.

The project I am currently working on involves educators or people who have strong associations with education. I think we understand each other more implicitly because we have shared values and language.


The second event was when I was stopped by a cabbie who needed directions. He did not use a map tool, so I had to resort to verbal directions and hand gestures to describe a route. He wanted to go back the way he came, but I offered him a shorter and better route. I watched him drive off in the right direction.

What that event and my project have in common is that I was expected to provide feedback via a form. I found this method limiting because I could only answer fixed questions and had to describe which parts of the document I was referring to (just like gesticulating to the cabbie).

I offered to create a copy of the document that I could mark up and comment on. This allowed me to highlight the exact phrases and to provide feedback in context. The cabbie equivalent might be us using a map to trace a route or me jumping into the taxi and providing directions.

This was about providing feedback on a better way to give feedback. Like the cabbie who followed my advice, my collaborators saw the value in receiving my feedback this way. Again, I attribute this to how there was no need to unpack this feedback strategy because we already had shared values and language. 

Rising above: We can avoid our blindspots if we are open to good sources of feedback. But our ability to give and receive feedback is shaped by our expectations and values.

SFT is short for student feedback on teaching. There is some variant of this initialism in practically every higher education course.

The intent of SFTs is always the same: they are supposed to let instructors know what they did well and what areas need improvement. However, they end up as administrative tools for ranking instructors and are often tied to annual appraisals.

The teaching staff might get the summary results so late, e.g., the following semester, that they cannot remediate. As a result, some teaching faculty game the process to raise their scores while doing the bare minimum to stay employed.

Using SFTs alone to gauge the quality of a course is like relying on just one witness to a traffic accident. It is not reliable. It might not even be valid if the questions are not aligned to the design and conduct of the course.

Instead, teaching quality needs to be triangulated with multiple methods, e.g., observations, artefact analysis, informal polling of students, critical reflection.

The tweet above provides examples of the latter two from my list. It also indicates why SFTs might not even be necessary — passionate educators are constantly sensing and changing in order to maximise learning.

The next tweet highlights a principle that administrators need to adopt when implementing multi-pronged methods. Trying to gauge good teaching is complicated because it is multi-faceted and layered.

You cannot rely only on SFTs which are essentially self-reporting exit surveys. This is like relying on one frame of a video. How do you know that the snapshot is a representative thumbnail of the whole video? At best, SFTs offer a shaky snapshot. Multiple methods are complicated, but they provide a more representative view of the video.


There is a saying that you catch more flies with honey than with vinegar. It means that it is easier to get what you want if you are nice instead of nasty.

The problem with using honey is that a) it is a waste of honey, b) you end up with a sticky mess, and c) you get more than just flies. 

How is this like providing feedback that only sounds sweet? Such feedback can be a waste of everyone’s time because the constructive message is neither sent nor received. If you sound only positive, nothing seems wrong, so there seems to be nothing to work on.

You end up with a larger mess than you started with because the feedback on a document or project does not get acted on. Worse still, it could breed indifference or overconfidence in the one receiving feedback.

All this is not to say that being absolutely nasty or brutal is a better method. The receiver is just as likely to shut down upon reading or hearing the first negative word.


So what might we do? I say we start with a preemptive discussion on blindspots. All of us have them, i.e., we all have our own perspectives and biases. These stop us from seeing things another way unless someone helps us with a different view.

When driving in a car, checking blindspots regularly and taking action quickly are important. In terms of feedback, dealing with blindspots needs to be clear and direct. If not, an accident could happen.

Of course there is a chance that an accident will not happen if you do not check your blindspots. Likewise, there is a chance that things will go swimmingly if you do not point out flaws in a plan. But are you willing to take that chance?

The number of likes this tweet received probably reflects the number of higher education faculty who can relate to it. 

By generalising the phenomenon we might conclude that we tend to focus on the negative. This is why newspapers and broadcasters tend to report bad news — it gets eyeballs and attention.

The underlying psychological cause is a survival instinct. We are primed to spot danger. Something negative is a possible threat, and we pay a disproportionate amount of attention to it.

But giving sensationalised news and one bad review too much attention is not good either. These might demoralise us and shift our energy away from what is important. 

What is important is making improvements. I do not place much weight on end-of-course evaluations because they are rarely valid or designed properly. 

Instead I focus on what happens at every lesson. I self-evaluate, I pick up cues as the lesson progresses, and I get feedback from my students. I do not wait for the end of a course because it is too late to do anything then. I prefer to prevent a ship from running aground.

 
I have had the privilege and misfortune of experiencing how student feedback on teaching (SFT) is done in different universities.

When I was a full-time professor, the institute I worked at specialised in teacher education and had experts in survey metrics. So no surprises — the SFTs were better designed and constantly improved upon.

One of the best improvements was the recognition that different instructors had different approaches. Each instructor had a set of fixed questions, but could also choose and suggest another set of questions.

As an adjunct instructor now and roving workshop facilitator, I have been subject to feedback processes that would not have passed the face validity test at my previous workplace.

One practice is administrators using only positive feedback to market their courses. Feedback, if validly measured, should be used to improve the next semester’s offering, not be a shiny star in a pamphlet.

Another bad practice is sampling a fraction of a class. If there is a sampling strategy, it must be clear and representative. Feedback is not valid if only some participants provide it.

Yet another SFT foible is not sharing the feedback with the facilitator or instructor. One institute that operated this way had multiple sections of a course taught by different instructors. However, the feedback form did not collect the name of each student’s primary instructor because classes were shared.

All the institutions I described attempted to conduct SFT. None did it perfectly. But some were better informed than others. Might they not share their practices with one another? If they do, will institutional pride or the status quo stand in the way?

 
I have never placed much weight on end-of-course feedback, even when the results were favourable. Why? My knowledge of the research on such feedback and my experience with the design of its questions hold me back.

In my Diigo library is a small sample of studies that highlight gender, racial, and other biases in end-of-course feedback tools. These biases make the data invalid: the feedback forms do not measure what they purport to measure, i.e., the effectiveness of instruction, because students are influenced by distractors.

Another way that feedback forms are not valid is in their design. They are typically created by administrators who have different concerns from instructors. The latter are rarely, if ever, consulted on the questions in the forms. As a result, students might be asked questions that are not relevant.

For example, take one such question I spotted recently: “The components of the module, such as class activities, assessments, and assignments, were consistent with the course objectives.” This seems like a reasonable question, and it is an important one to both administrator and instructor.

An administrator wants alignment, particularly if a course is to be audited externally or benchmarked against similar offerings elsewhere. An instructor needs to justify that the components are relevant to the course. However, there are at least three problems with such a question.

First, the objectives are not as important as outcomes. Objectives are theoretical and focus on planning and teaching, while outcomes are practical and emerge from implementation and learning. Improvement: Focus on outcomes.

The second problem is that it will only take one component — an activity, an assessment, or an assignment — to throw the question off. The student also has the choice to focus on one, two, or three components. Improvement: Each component needs to be its own question.

Third, not all components might apply. Getting personal, one of the modules I facilitate has no traditional or formal assessments or assignments. A student cannot gauge a non-existent component, so the question is not valid. Improvement: Customise end-of-course forms to suit the modules.

Another broad problem with feedback forms is that they are not reliable. The same questions can be asked of different batches of students, and assuming that nothing else changes, the average ratings can vary wildly. This is a function of the inability to control for learner expectations and a lack of reliability testing for each question.
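To see this in miniature, here is an illustrative sketch in Python. The class size and the pool of opinions are assumptions for demonstration, not data from any real course.

    # Illustrative sketch only: the same pool of student opinions,
    # sampled in batches, yields noticeably different "average ratings".
    import random

    random.seed(7)
    OPINION_POOL = [3, 3, 4, 4, 4, 5]  # the underlying opinions never change

    def batch_mean(class_size):
        """Mean rating from one batch drawn from the same opinion pool."""
        ratings = [random.choice(OPINION_POOL) for _ in range(class_size)]
        return sum(ratings) / class_size

    for semester in range(1, 6):
        print(f"Semester {semester}: mean rating = {batch_mean(20):.2f}")

Nothing about the “teaching” changes between these simulated semesters, yet the means drift. That drift is the reliability problem in a nutshell.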

End-of-course evaluations are convenient for the organisers of courses and modules, but they are pedagogically unsound and lazy. I would rely more on the critical reflection of instructors and facilitators, as well as their ability to collect formative feedback during a course and make changes.

