Another dot in the blogosphere?

Posts Tagged ‘feedback’

SFT is short for student feedback on teaching. There is some variant of this initialism in practically every higher education course.

The intent of SFTs is the same everywhere: to let instructors know what they did well and what needs improvement. However, they end up as administrative tools for ranking instructors and are often tied to annual appraisals.

The teaching staff might get the summary results so late, e.g., the following semester, that they cannot remediate. As a result, some teaching faculty game the process to raise their scores while doing the bare minimum to stay employed.

Using SFTs alone to gauge the quality of a course is like relying on just one witness to a traffic accident. It is not reliable. It might not even be valid if the questions are not aligned to the design and conduct of the course.

Instead, teaching quality needs to be triangulated with multiple methods, e.g., observations, artefact analysis, informal polling of students, critical reflection.

The tweet above provides examples of the latter two from my list. It also indicates why SFTs might not even be necessary — passionate educators are constantly sensing and changing in order to maximise learning.

The next tweet highlights a principle that administrators need to adopt when implementing multi-pronged methods. Trying to gauge good teaching is complicated because it is multi-faceted and layered.

You cannot rely only on SFTs, which are essentially self-reported exit surveys. This is like relying on one frame of a video. How do you know that the snapshot is a representative thumbnail of the whole video? At best, SFTs offer a shaky snapshot. Multiple methods are complicated, but they provide a more representative view of the video.


There is a saying that you catch more flies with honey than with vinegar. It means that it is easier to get what you want if you are nice instead of nasty.

The problem with using honey is that a) it is a waste of honey, b) you end up with a sticky mess, and c) you get more than just flies. 

How is this like providing feedback that only sounds sweet? It can sometimes be a waste of everyone’s time if the constructive message is neither sent nor received. If you sound only positive, nothing seems wrong, so there seems to be nothing to work on.

You end up with a larger mess than you started with because the feedback on a document or project does not get acted on. Worse still, it could breed indifference or overconfidence in the one receiving feedback.

All this is not to say that being absolutely nasty or brutal is a better method. The receiver is just as likely to shut down upon reading or hearing the first negative word.


So what might we do? I say we start with a preemptive discussion on blindspots. All of us have them, i.e., we have our own perspectives and biases. These make us unable to see any other way unless someone offers us a different view.

When driving in a car, checking blindspots regularly and taking action quickly are important. In terms of feedback, dealing with blindspots needs to be clear and direct. If not, an accident could happen.

Of course there is a chance that an accident will not happen if you do not check your blindspots. Likewise, there is a chance that things will go swimmingly if you do not point out flaws in a plan. But are you willing to take that chance?

The number of likes this tweet received probably reflects the number of higher education faculty who can relate to it. 

By generalising the phenomenon we might conclude that we tend to focus on the negative. This is why newspapers and broadcasters tend to report bad news — it gets eyeballs and attention.

The underlying psychological cause is a survival instinct. We are primed to spot danger. Something negative is a possible threat, so we pay a disproportionate amount of attention to it.

But giving sensationalised news and one bad review too much attention is not good either. These might demoralise us and shift our energy away from what is important. 

What is important is making improvements. I do not place much weight on end-of-course evaluations because they are rarely valid or designed properly. 

Instead I focus on what happens at every lesson. I self-evaluate, I pick up cues as the lesson progresses, and I get feedback from my students. I do not wait for the end of a course because it is too late to do anything then. I prefer to prevent a ship from running aground.

 
I have had the privilege and misfortune of experiencing how student feedback on teaching (SFT) is done in different universities.

When I was a full-time professor, the institute I worked at specialised in teacher education and had experts in survey metrics. So no surprises — the SFTs were better designed and constantly improved upon.

One of the best improvements was the recognition that different instructors had different approaches. Each instructor had a set of fixed questions, but could also choose and suggest another set of questions.

Now, as an adjunct instructor and roving workshop facilitator, I have been subject to feedback processes that would not have passed a face validity test at my previous workplace.

One poor practice is administrators using only positive feedback to market their courses. Feedback, if validly measured, should be used to improve the next semester’s offering, not serve as a shiny star in a pamphlet.

Another bad practice is sampling a fraction of a class. If there is a sampling strategy, it must be clear and representative. Feedback is not valid if only some participants provide it.

Yet another SFT foible is not sharing the feedback with the facilitator or instructor. One institute that operated this way had multiple sections of a course taught by different instructors. However, the feedback form did not record each student’s primary instructor because classes were shared.

All the examples I described attempted to conduct SFT. None did it perfectly, but some were better informed than others. Might they not share their practices with one another? And if they tried, would institutional pride or the status quo stand in the way?

 
I have never placed much weight on end-of-course feedback, even when the results were favourable. Why? My knowledge of the research on such feedback and my experience with the design of its questions hold me back.

In my Diigo library is a small sample of studies that highlight gender, racial, and other biases in end-of-course feedback tools. These biases make the data invalid: the feedback forms do not measure what they purport to measure, i.e., the effectiveness of instruction, because students are influenced by distractors.

Another way that feedback forms are not valid is in their design. They are typically created by administrators, whose concerns differ from those of instructors. The latter are rarely, if ever, consulted on the questions in the forms. As a result, students might be asked questions that are not relevant.

For example, take one such question I spotted recently: “The components of the module, such as class activities, assessments, and assignments, were consistent with the course objectives.” This seems like a reasonable question, and it is an important one to both administrator and instructor.

An administrator wants alignment, particularly if a course is to be audited externally or benchmarked against similar offerings elsewhere. An instructor needs to justify that the components are relevant to a course. However, there are at least three problems with such a question.

First, the objectives are not as important as outcomes. Objectives are theoretical and focus on planning and teaching, while outcomes are practical and emerge from implementation and learning. Improvement: Focus on outcomes.

The second problem is that it will only take one component — an activity, an assessment, or an assignment — to throw the question off. The student also has the choice to focus on one, two, or three components. Improvement: Each component needs to be its own question.

Third, not all the components might apply. Getting personal, one of the modules I facilitate has no traditional or formal assessments or assignments. The student cannot gauge a non-existent component, so the question is not valid. Improvement: Customise end-of-course forms to suit the modules.

Another broad problem with feedback forms is that they are not reliable. The same questions can be asked of different batches of students, and assuming that nothing else changes, the average ratings can vary wildly. This is a function of the inability to control for learner expectations and a lack of reliability testing for each question.

End-of-course evaluations are convenient to organisers of courses and modules, but they are pedagogically unsound and lazy. I would rely more on critical reflection by instructors and facilitators, as well as their ability to collect formative feedback during a course and make changes.

Sometimes I have to remind myself that what is obvious to me is not obvious to others. One such reminder came in the feedback I provided on an assignment I graded last week. My feedback was:

If technology use is to be effective, it must be accompanied by mindset and behavioural change. If not, we are simply changing the medium and not the method.

I provided this advice because most of my learners opted to design and implement edtech as presentations and demonstrations. This left the technology largely in their hands. But the impact would have been greater if they had got their learners (their peers) to also use the technology.

Despite my modelling of learner-centric use of technology in my studio-based design, I did not state the “change the medium and the method” principle until the end of the course. This is something I must remember to mention and repeat in the earlier sessions.

From my DIWA Keynote, Philippines, in 2016.

I am physically and mentally drained as I approach the end of academic semesters with my partner institutions.

I facilitated the last class online yesterday right after an intensive feedback and grading week. I normally have two weeks to grade a major assignment, but its deadline was extended by a week so I had to squeeze the same amount of work into less time.

Just how difficult was this to do? To do this particular assessment justice, I have developed a three-phase approach.

  • Phase 1: Get an overview by skimming all submitted papers without grading anything. I do this with classes of 10 or fewer students because the assessments are complex (Masters or Ph.D. level). I also do this so that I am not harsh with the first script and lenient with the last one.
  • Phase 2: Provide formative feedback and grades, both guided by a detailed rubric. The rubric helps me remain objective as I award marks. I do not believe in over-praising or relying on praise as feedback. I would rather be direct with my feedback on what my students need to do to improve. But I make it a point to acknowledge effort and provide encouragement where it is warranted.
  • Phase 3: I walk away from the graded scripts and return to them one more time to check on my feedback and score totals. I find that the time away helps me overcome blindspots and catch mistakes I might have made in Phase 2.

Phases 2 and 3 total up to four hours per script. That works out to about half a work day per script, so I schedule two papers a day. When things get intense, as they did last week with less allocated time, I work in three papers a day.

This is intense work that requires deep concentration and objectivity. So I try not to grade and provide feedback at home because there are too many distractions and comforts there. A side benefit of this habit is my knowledge of several libraries and cafes where I can work in relative peace.

Would I change anything? I wish I could make people in shared spaces speak in hushed tones, but I cannot do that. I try to change unhelpful mindsets and practices my students might have as a result of uncorrected habits. I build this into our sessions immediately after I return their scripts. But, no, I would not change what I think is a rigorous grading process.

I heard a few questions from new faculty at a recent pre-semester meeting. The questions revealed how much I take for granted and how much the new folk need to level up.

One person confused academic integrity with general integrity. Academic integrity is normally about how one writes essays and reports research. We want individuals who are models of overall integrity, of course. But when we focus on assignments and reports, we zoom in on specific aspects of academic integrity like citing, attributing, and not plagiarising.

Another person brought up how students might be confused as to why they had to cooperate in class activities (e.g., co-editing Google Docs) but could not do the same with most summative assignments. While such students bring up a valid argument, we should counter that with accountability. We focus on group accountability with shared documents, but we determine individual accountability with end-of-course essays.

I was glad to hear how a few faculty had started using mobile apps to quiz their students. However, I was dismayed that they focused on the bells and whistles instead of the praxis of feedback and assessment. Such praxis might mean using the quizzes to monitor learning and/or to provide formative feedback. It should not be about a timer counting down or background music adding tension.

All three examples bring up the importance of being an academic who is literate in pedagogical theory and research. Being a good instructor and facilitator is not just about knowing what works. It is also about knowing why it works.

Like most folks who teach, I can relate to the comic below. I respond the same way.

This might be a natural human response given how bad news and bad reviews travel further and stick longer. It is one way for newspapers to survive and for app developers to die of reputational embarrassment.

But focusing only on what grabs attention is detrimental. It might drag down morale or paralyse action. I will try to focus on what worked well while taking constructively offered feedback into account.

Example of positive feedback left by a graduate student in a one-minute paper.

My current sets of students are future university professors and researchers. It is rare for such faculty-to-be to offer positive and unsolicited feedback in an open area like a one-minute paper.

I appreciate the shared thought and am energised to keep performing at a high level. To do less would be a disservice to my learners.

Alfie Kohn is not afraid to say what is on his mind, and he says it well. The honey-over-vinegar saying makes little sense and is bad advice.

One of the things I remember most about studying in the USA and observing some students and teachers there is how positive they were. They preferred to encourage rather than point out mistakes.

This did not sit well with me then because I was used to direct critique and even unkind criticism. But I understood the rationale for staying on the light side — we do not want to discourage learners who are struggling with new content or experiences.

Educators who blog and tweet their thoughts share a common story. They tell of how they might receive 99 positive responses and just one negative comment. They dwell or even obsess over the one.

That is not a bad thing if that single comment is fair. That is not a bad thing if it points out a flaw and you try to improve as a result.

Not only does the honey fail to catch flies, it also disguises what is wrong underneath. Give only praise, and learners making mistakes might think there is nothing wrong and nothing to change.

The trick is to provide constructive comments. These invariably come across as tough or even negative. I find that it helps to set learner expectations (I am not going to sugar-coat) and to rationalise (this is why I gave this feedback).

