Another dot in the blogosphere?

Posts Tagged ‘teaching’

I have said it before and I will say it again: I do not deliver learning.

Far better and wiser people have said it too. Do not take my word for it, consider theirs.

This honest tweet reply to the original tweeted question reminded me of something.

I wonder if laypeople know how unprepared most university faculty are to teach. If they found out, what might they say? Given how much a university education costs, what might they demand?

This is one major reason why I choose to educate teachers (pre- and in-service) and university faculty. They are the toughest customers because they are adults with their own experiences, baggage, and opinions. But they all need to learn how to teach and educate.

What qualifies me to educate teachers? I have a postgraduate diploma in secondary education. I also have a Master’s degree and a doctorate in a field that combines educational psychology, pedagogy, instructional design, and technology. More details are near the bottom of this page.

This year also marks my 32nd year in training and education. If this time has taught me anything, it is that the more I think I know, the less I actually do. So I learn constantly.

I have learnt not to lecture and spoon-feed. Instead I shepherd. And I know where the best spots are to explore, eat, and expand.

The tweet and report above are fodder for anti-vaccination Facebook groups and taxi uncles alike. The headline is irresponsible because it implies causality. Worse, the tweeted article explored or considered no other possible factors for the death.

Contrast that lack of context and information with the tweet thread below.

If I had to fault the tweet, I would point out that it did not immediately provide sources for the numbers. However, a Guardian article in the second part of Dr Clarke’s thread reported:

The MHRA, which collects reports of side-effects on drugs and vaccines in the UK through its “yellow card” scheme, told the Guardian it had received more notifications up until 28 February of blood clots on the Pfizer/BioNTech than the Oxford/AstraZeneca vaccine – 38 versus 30 – although neither exceed the level expected in the population.

The MHRA is the Medicines and Healthcare products Regulatory Agency in the UK.

The actual number of blood clot cases will vary over time, but the fact remains that the incidence is so low that it falls below what chance alone would produce. What does that mean?

In any population, a certain number of people will naturally develop blood clots. Take this thought experiment: we inject the entire population with a saline solution that mimics blood plasma and contains no drugs or vaccines. The result: more people would develop blood clots after that saline jab than after the AstraZeneca (AZ) vaccine.
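
To see roughly why, consider a back-of-envelope sketch in Python. The background incidence, population size, and reporting window below are illustrative assumptions of mine, not figures from the articles; only the observed count of 30 comes from the MHRA quote above.

```python
# Back-of-envelope sketch of the base rate argument.
# The first three numbers are ILLUSTRATIVE ASSUMPTIONS, not figures
# from the MHRA or the Guardian article.

background_per_100k_per_year = 100   # assumed natural rate of blood clots
vaccinated = 10_000_000              # assumed number of AZ recipients
window_days = 30                     # assumed reporting window

# Clots we would expect in that window with no vaccine at all (saline only).
expected = (background_per_100k_per_year / 100_000) * vaccinated * (window_days / 365)

observed_reports = 30  # AZ notifications to the MHRA, per the quote above

print(f"Expected by chance alone: ~{expected:.0f}")  # ~822
print(f"Actually reported:         {observed_reports}")
```

With assumptions in this ballpark, the reported cases sit far below what the background rate alone would produce.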

Use of the AZ vaccine is new and the blood clot cases might rise. But for now the data indicate what Dr Clarke and others in the Guardian article have said: using it is safe; not using it is dangerous.

Thankfully some good sense has prevailed since I started drafting this reflection. The BBC news report below revealed that the EU had declared the vaccine safe for continued use.


Video source

I have two takeaways from reading both news reports. The first is the image quote below.

It's easy to lie with statistics, but it's hard to tell the truth without them. -- Andrejs Dunkels

My second is a parallel in teaching. Just as CNA was irresponsible for its misleading article, it is just as bad to teach content without context. While the use of vaccines has regulatory bodies to correct wayward action, everyday teaching does not.

The AZ vaccine might see a quick comeback with investigation and regulation. But teaching that focuses primarily on content and teaching to the test has a long-term detriment — it nurtures students who cannot think for themselves.


Video source

Jimmy Kimmel introduced an eight-year-old girl who scammed her way out of Zoom-based classes.

That girl was not the first, nor will she be the last, to find ways to skip class, be it in person or online.

However, she was among the few who got on television because a talkshow host and/or his team thought it would be funny.

In the past, some folks might have sought 15 minutes of fame by design. Today it might be their 15 seconds of TikTok notoriety by accident.

The difference is the speed and method. But the outcome is the same: The fame/notoriety is a footnote in history or replaced with the next attention grabber. This is what happens when you celebrate mediocrity.

This is a tangential reminder not to reach for the low-hanging fruit in teaching. Merely enhancing lessons with technology to grab attention is mediocre compared to the more difficult but also more effective work of enabling learning.

 
Today I ask some unsolicited questions on behalf of teachers and educators who have had to endure professional advice from their non-teacher/educator friends or relatives.

Would you claim to be a doctor after a few visits to your general practitioner?

Would you tell a software engineer what to do after you figured out how to change a WhatsApp setting?

Would you advise an architect on the next great design after you built a Lego masterpiece?

Would you tell an artist what to be inspired by after getting a shower thought?

Most probably not. But you have ideas that should be implemented by teachers and educators, don’t you?

Not many of you can claim to be doctors, engineers, architects, or artists. But practically all of you have attended lessons in classrooms, lecture halls, and laboratories. Many of you also gained some insight into the work of teachers and educators thanks to home-based learning and remote teaching during the COVID-19 lockdowns. But how exactly does that make you a teacher or educator?

This tweeted declaration and its elaboration in the news article seem obvious, do they not? That edtech should serve educational purposes must be as obvious as how we fall down because the earth sucks.

But the answer to the question on the purpose of edtech depends on who you ask.

  • If you ask a vendor of the technology, it might be to sell as much as possible for as long as possible.
  • If you ask a university administrator, it might be to fulfil a budget line item and to follow procurement procedures.
  • If you ask a member of the teaching staff, it might be to pivot as little as possible and simply recreate the face-to-face experience online.
  • If you ask a student, it might be to make the best of a bad situation — campus shutdown during the pandemic — and get as much out of the tuition fees as possible.

The president of the university quoted in the headline elaborated:

…we use technology to make the best of the situation, and we deliver the best experiences that we can until such time that we can pivot offline.

So if you take that out of context, it might be to salvage a bad experience and hope that normalcy returns.

For me, what is obvious is that learning outcomes are not always the concern or priority, no matter what anyone might claim. It is not what you say that matters, but what you do.

It should be obvious that all stakeholders need to learn from the shared experience, i.e., realise that some of the changes were for the better, and not return completely to normal, but instead keep what worked better. That should be obvious, should it not?

 
I have had the privilege and misfortune of experiencing how student feedback on teaching (SFT) is done in different universities.

When I was a full-time professor, the institute I worked at specialised in teacher education and had experts in survey metrics. So no surprises — the SFTs were better designed and constantly improved upon.

One of the best improvements was the recognition that different instructors had different approaches. Each instructor had a set of fixed questions, but could also choose and suggest another set of questions.

Now, as an adjunct instructor and roving workshop facilitator, I have been subject to feedback processes that would not have passed a face validity test at my previous workplace.

One bad practice is administrators using only positive feedback to market their courses. Feedback, if validly measured, should be used to improve the next semester’s offering, not serve as a shiny star in a pamphlet.

Another bad practice is sampling a fraction of a class. If there is a sampling strategy, it must be clear and representative. Feedback is not valid if only some participants provide it.

Yet another SFT foible is not sharing the feedback with the facilitator or instructor. One institute that operated this way had multiple sections of a course taught by different instructors. However, the feedback form did not record each student’s primary instructor because classes were shared, so the results could not be traced back to the people who actually taught them.

All the examples I described attempted to conduct SFT. None did it perfectly, but some were better informed than others. Might they not share their practices with one another? If they do, will institutional pride or the status quo stand in the way?

Today I offer another reason why one-size-fits-all end-of-course evaluations are not valid.
 

 
I have reflected on how I design and implement my classes and workshops to facilitate learning. I do not try to deliver content. The difference is like showing others how to prepare meals vs serving meals to them.

You would not evaluate a chef and a Grab delivery person the same way. Each has their role and worth, so each should be judged for that. Likewise student feedback on teaching (SFT) must cater to the design and implementation of a course.

 
I have never placed much weight on end-of-course feedback, even when the results were favourable. Why? My knowledge of the research on such feedback and my experience with the design of questions hold me back.

In my Diigo library is a small sample of studies that highlight gender, racial, and other biases in end-of-course feedback tools. These biases make the data invalid: the feedback forms do not measure what they purport to measure, i.e., the effectiveness of instruction, because students are influenced by distractors.

Another way that feedback forms are not valid is in their design. They are typically created by administrators, who have different concerns from instructors. The latter are rarely, if ever, consulted on the questions in the forms. As a result, students might be asked questions that are not relevant.

For example, take one such question I spotted recently: “The components of the module, such as class activities, assessments, and assignments, were consistent with the course objectives.” This seems like a reasonable question, and it is an important one to both administrator and instructor.

An administrator wants alignment, particularly if a course is to be audited externally or benchmarked against similar offerings elsewhere. An instructor needs to justify that the components are relevant to a course. However, there are at least three problems with such a question.

First, objectives are not as important as outcomes. Objectives are theoretical and focus on planning and teaching, while outcomes are practical and emerge from implementation and learning. Improvement: focus on outcomes.

The second problem is that it takes only one component — an activity, an assessment, or an assignment — to throw the question off. A student might also choose to focus on one, two, or all three components. Improvement: give each component its own question.

Third, not all the listed components might apply. Getting personal: one of the modules I facilitate has no traditional or formal assessments or assignments. Students cannot gauge a non-existent component, so the question is not valid. Improvement: customise end-of-course forms to suit each module, as the sketch below illustrates.
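
To make those three improvements concrete, here is a minimal sketch of a per-module question bank. The item wording and structure are mine, not from any actual instrument.

```python
# Hypothetical question bank illustrating the three improvements:
# items focus on outcomes, each component gets its own item, and a
# module is only asked about components it actually has.
QUESTION_BANK = {
    "class activities": "The class activities supported the learning outcomes I achieved.",
    "assessments": "The assessments supported the learning outcomes I achieved.",
    "assignments": "The assignments supported the learning outcomes I achieved.",
}

def questions_for(module_components):
    """Return only the items relevant to a given module's components."""
    return [item for component, item in QUESTION_BANK.items()
            if component in module_components]

# A module with no formal assessments or assignments gets one item, not three.
print(questions_for({"class activities"}))
```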

Another broad problem with feedback forms is that they are not reliable. The same questions can be asked of different batches of students and, assuming nothing else changes, the average ratings can still vary wildly. This is a function of the inability to control for learner expectations and the lack of reliability testing for each question.

End of course evaluations are convenient to organisers of courses and modules, but they are pedagogically unsound and lazy. I would rely more on critical reflection of instructors and facilitators, as well as their ability to collect formative feedback during a course to make changes.

Today I try to link the habits of app use to a change in teaching.

Like many Singaporeans, I have had months of practice using the location-aware app SafeEntry to check in and out of venues. We do this as part of a collective contact tracing effort during the current pandemic.

You cannot forget to check in because you need to show the confirmation screen to someone at the entrance. However, you can easily forget to check out* because, well, you might have mentally checked out or have other things on your mind.

Therein lies a flaw with the design and implementation of the app. Instead of making both processes manual, the app could be semi-automatic. It could have a required manual check in at entrances, but offer automated exits.

How so? The mobile app is location-aware. It has a rough idea of where you are and can suggest where to check in. This is why the manual check in is better — the human choice is more granular.

However, when people leave a venue, the app could automatically check them out if it detects that they have been gone for a period of, say, 10 minutes. I say give the user the option of a manual check out or an automated one. A minimal sketch of this logic follows.
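
Here is that semi-automatic logic sketched in Python, assuming a hypothetical location-aware client. SafeEntry exposes no such API; the class, the method names, and the 10-minute grace period are all my inventions.

```python
from datetime import datetime, timedelta

AUTO_CHECKOUT_AFTER = timedelta(minutes=10)  # assumed grace period away from a venue

class VisitSession:
    """One check-in at a venue in a hypothetical location-aware app."""

    def __init__(self, venue):
        self.venue = venue
        self.checked_in_at = datetime.now()  # manual: the user confirms the venue
        self.last_seen_at_venue = self.checked_in_at
        self.checked_out_at = None

    def on_location_update(self, at_venue):
        """Called periodically by the app's location service."""
        now = datetime.now()
        if at_venue:
            self.last_seen_at_venue = now    # still here, so reset the away timer
        elif now - self.last_seen_at_venue >= AUTO_CHECKOUT_AFTER:
            self.check_out(auto=True)        # away long enough, so check out

    def check_out(self, auto=False):
        """Manual tap by the user, or automatic after the grace period."""
        if self.checked_out_at is None:      # ignore repeat calls
            self.checked_out_at = datetime.now()
            print(f"Checked out of {self.venue} ({'auto' if auto else 'manual'}).")
```

The manual check in stays because the human choice is more granular; the automated check out is only a fallback for those who forget.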

*The video below reported that checking out is not compulsory. But not checking out creates errors in contact tracing, i.e., we do not know exactly where a person has been and for how long. This not only affects the usability of the data but also inculcates blind user habits.


Video source

For me, this is a lesson on rethinking teaching during the pandemic with awareness as a key design feature. It is easy to just recreate the classroom and maintain normal habits when going online or adopting some form of hybrid lessons.

But this does not take advantage of what being away from the classroom or being online offers. The key principle is being aware of the new issues, opportunities, and affordances, e.g., isolation, independence, customisation.

Making everyone check in and out with SafeEntry is an attempt to create a new habit with an old principle (the onus is all on you). This does not take advantage of what the mobile app is designed to do (be location-aware).

Likewise, subjecting learners to old expectations and habits (e.g., the need to be physically present and taking attendance) does not take advantage of the fact that learning need not be strictly bound by curricula and timetables.

The key to breaking out of both bad habits is learning to be aware of what app users and learners think and how they experience the reshaped world. Such design comes from a place of empathy, not a position of authority.
 

