Another dot in the blogosphere?

Posts Tagged ‘critique’

I have been consistent about my stance against end-of-course student feedback on teaching (SFTs). Today my reflection was prompted by this tweet.

I am confident that this professor and others like him, like me, do not get bad reviews. We are against a data collection method that is flawed.

I caution administrators against using SFTs as the only measure of faculty teaching because SFTs are:

  • not valid if they do not measure whether effective learning took place
  • used for purposes other than to improve instruction
  • summative in that they do not allow teaching faculty to make changes that semester
  • reliant on student self-reports as a single data source

The tweet highlighted how invalid SFTs can be. No matter the questions asked, students might bias their answers because of non-teaching or superficial traits of their instructor/facilitator. The questions in an SFT are also likely to focus on teaching-related aspects of a course (e.g., the LMS) instead of how much or how well they learnt.

SFTs designed to measure traditional and face-to-face teaching methods also might not align with online methods or facilitative approaches. For example, SFTs rarely (if ever) focus on the design of effective asynchronous learning resources or personalised online coaching.

Administrators use SFTs to rank faculty during promotion and retention exercises. This is clear to any full-time university faculty with a significant teaching load. I know of ex-colleagues who would game the system by currying favour with their students so that they would get good SFTs. 

These folk needed the most help improving their instruction, but since they got good enough SFTs, they did not reflect and improve on their practice. They just got better at gaming the system.

If SFTs are primarily for improving the quality of courses and instruction, they cannot be implemented at the end of a course. Good teachers collect feedback constantly so they can make adjustments on the run.

Insisting that data from the end of one course should inform the design and implementation for the next one misses the point — teaching is dynamic and complex. You can take the same instructor, design, and content, but different batches of students will react differently.

SFTs also rely on self-reports by students. These are equivalent to the Kirkpatrick Level 1 “smiley sheets” that seek opinion rather than fact. If students like you, they will rate you higher than you deserve. The opposite is also true.

So what else can we do in addition to or as alternatives to SFTs? In my reflection earlier this year, I suggested “multiple methods, e.g., observations, artefact analysis, informal polling of students, critical reflection”. 

Today I would add that faculty portfolios capture these methods. Remarks from casual observations by fellow faculty, marked up video recordings, key takeaways from brief but regular student polls, and faculty reflections can be collated on online platforms like a blog or Google Site.

Portfolios have another plus: They put the ownership of the design, implementation, and evaluation of courses in the hands of teaching faculty. If these instructors carefully maintain their portfolios outside university, they can take them wherever they go. 

That said, portfolios do not resolve the biggest problem with SFTs. They might still be about teaching. What matters is whether students learnt, what they learnt, how much and how well they learnt it, etc. 

That problem is not an easy one to solve. Students might view courses merely as stepping stones to paper qualifications. There is the long tail of learning, i.e., their ah-ha moments might occur outside the course and these are not captured. Their in-course learning might not be intentional but still desirable, e.g., they learnt how to manage their time, but these too are not measured.

The biggest problem is that both administrators and faculty might be content with measuring the low-hanging fruit. After all, it is easy to hide behind the rock called It Has Always Been Done This Way.

Today I link a pop-culture phenomenon and the importance of nuanced expertise.

Like many other Netflix subscribers, I enjoyed Squid Game. But I was surprised to learn that it was ten years in the making and almost did not happen.

I also appreciated the critique of the show’s English subtitles. Some references just got lost in translation. As a result, those of us who are not fluent in Korean lost social and emotional context.

Video source

The video above featured several examples by a Korean language professor.

For example, I loved the analysis of the use of “hyung” or a social elder brother. The subtitles simply indicated that the character of Ali called his friend’s name. However, the audio clearly indicated that he was also using this term of close kinship. Knowing the meaning of hyung made Ali’s betrayal and death even more impactful.

It took a language professor to explain this nuance. A subtitle cannot realistically capture such a cultural reference, and so much was lost in translation. But we have the benefit of an expert’s analysis if we seek it out.

I see a parallel in pedagogical design. I might use a strategy like cooperation within heterogeneous groups. An outside observer might simplistically “subtitle” this as a collaborative activity. They could not be more wrong.

My strategy does not go as far as collaboration; it is realistically levelled at brief and task-based cooperation. The student groupings comprise learners with intentionally different skills or abilities. There is more thought and skill in my design than meets the eye.

The designs of my lessons are nowhere near the complexity of Squid Game. But they might be just as subtle. You only have to ask, unpack, and learn.

Some education heroes critique and share on TikTok.

Dr Inna Kanevsky is my anti-LSM hero. LSM is my short form for learning styles myth.

In her very short video, she highlighted how teachers perpetuate the myth of learning styles despite what researchers have found.

In the Google Doc she provided, she shared the media and peer-reviewed research that has debunked this persistent but pointless myth.

If your attitude is to ask what the harm is in designing lessons for different “styles”, then you are part of the problem — distracting the efforts of teachers and promoting uncritical thinking and uninformed practice.

Video source

You recognise expertise when you see and hear it. This trauma surgeon picked apart surgical and other medical procedures as represented on TV or movies. 

Some might argue that TV shows or movies are for entertainment and should not be compared to reality. I argue that these sources of entertainment are often the reference points for laypeople (see tweet below and its thread for an example). This results in unrealistic expectations not just of medical procedures, but also natural disasters, space travel, dinosaur attacks, military strategy, etc.

Reality can be simultaneously more frightening and mundane, so we need experts to shift our focus. I also find such critiques entertaining in themselves. This is why the YouTube channels of Wired, Insider, and GQ have them.

The bits of reality that the trauma surgeon shared should, at the minimum, create some appreciation for what she does. Optimally, it might also generate new perspectives and empathy from non-surgeons.

I wonder if there will ever be a Wired expert critique of teachers and classroom practices as shown on the small and big screen. Probably not. Teachers are normally not wired to be stars; they are tired from slogging in the background. 

This would be a shame because many parents received a primer during lockdowns on what it is like to teach their children. Otherwise, they typically have only their memories as students and Hollywood representations of classrooms. Neither is a realistic perspective of teaching.

Get some perspective. Listen to some teachers today.


Barely a month (week?) goes by without headlines about the link between using mobile devices and some harm, e.g., poor mental health. We do not call those headlines a form of gaslighting because so many of us have bought into them.

Thankfully, this critique, Flawed data led to findings of a connection between time spent on devices and mental health problems, bucks the trend. That article summarised recent research and concluded: 

…simply taking tech away from (young people) may not fix the problem, and some researchers suggest it may actually do more harm than good.

Whether, how and for whom digital tech use is harmful is likely much more complicated than the picture often presented in popular media. However, the reality is likely to remain unclear until more reliable evidence comes in.

The thesis of the article: “The evidence for a link between time spent using technology and mental health is fatally flawed”.

The thrust of the article was that studies in the area of mobile device use and harm relied on self-reporting measures. It then argued how such measures were logically and methodologically flawed.

First, we do not pay attention to what we do habitually. Such activity is background noise, not foreground work. As a result, it is difficult to accurately remember how frequently we use mobile devices or apps.

Next, the author shared how he and his colleagues systematically reviewed actual and self-reported digital media use and discovered discrepancies between the two. He also outlined his own research using objective measures, like Apple’s screen time app, to track device use. He concluded:

…when I used these objective measures to track digital technology use among young adults over time, I found that increased use was not associated with increased depression, anxiety or suicidal thoughts. In fact, those who used their smartphones more frequently reported lower levels of depression and anxiety.

The author revealed that he used to be a believer of what the popular media peddled about the harm of mobile device use. But his research revealed that the popular media were simplifying complex findings: 

The scientific literature was a mess of contradiction: Some studies found harmful effects, others found beneficial effects and still others found no effects. The reasons for this inconsistency are many, but flawed measurement is at the top of the list.

We cannot simply read headlines, form conclusions, and craft far-reaching policies on mobile use, e.g., limiting kids of age X to Y minutes of iPad time. Why? The measurements for the evidence of harm are flawed and the results of studies are mixed.

We need to be critical readers, thinkers, and actors. We could start by reading beyond the headline, i.e., actually reading the whole article and not propagating it without first processing it carefully. This is more difficult than casually sharing a link, but it is a vital habit to inculcate if we are to be digitally wise. As with most habits, it gets easier with practice.

I am more often on the teacher end of a Zoom connection than on the student end. So I took in as much as I could as a participant at a recent research webinar.

First I took the time to examine the Zoom interface in webinar mode. It had a green tick on the top left to indicate the quality of the connection. By clicking on the tick, I was able to get connection details, e.g., the session went through the Singapore data centre. I found it reassuring to be on a fast local connection instead of being routed elsewhere.

Zoom webinar tools.
Image source

The webinar interface was simple with Chat, Raise Hand, and Q&A options only. There was no option for video or audio interaction nor indication of how many people were present. Only the moderator appeared on screen and was replaced by each speaker in turn.

Even if participants cannot interact with one another, just knowing how “crowded” the room is provides a sense of social space. Zoom can learn from so many other existing systems that have learnt to recreate social presence. One way to do this with low bandwidth overhead is to represent each participant as an icon or with an avatar. This is like the Anonymous Animals that appear at the top of shared Google Docs or the bubble avatars in group chat tools.

Google Docs anonymous animals.
Image source

A webinar makes it easy for a participant to not actually participate, i.e., to be a passive recipient. I had to listen actively, take notes and screenshots, and think of questions.

Zoom’s Q&A tool is not interactive, i.e., I could not ask a question and then follow up on the answer. If I wanted to follow up, I had to ask another question, but the text was not threaded, so this would have been visually messy. I found this tool rudimentary and very poorly conceived and implemented.

One simple way to overcome this issue and also simulate social presence could be to provide the option to use voice or video for participants. This would recreate a conference-like environment by providing immediacy.

The Q&A tool seemed to work on an embargo system of storing and queuing questions. The questions seemed to go to the moderator without appearing in the chat or the Q&A window immediately. How do I know? There was a lag between a question being asked and appearing on screen.

Participants do not see what questions the others have because of the same embargo system. Again, Zoom could learn from tools like Google Slides where everyone can see the questions and vote up the ones that matter to them.

Google Slides Q&A.
Image source

While I liked the fact that some questions were answered ‘live’ while others were answered in text, I wondered if the speakers could indicate their preference for the moderator to see. That way they would know where to focus their energy. More than once the moderator asked a speaker to answer a question and the speaker had already answered in text, was in the process of typing their answer, or had to repeat the answer verbally.

I conclude with a statement from the session. One speaker said the pandemic was an opportunity to push for changes in teaching, but what mattered more was the quality of the change. That sentiment could have been applied to the Zoom session given better quality tools and strategies that work both online and off.

Not a semester goes by when I meet preservice teachers, inservice teachers, or future faculty who swear by learning styles. Every semester, I try to correct such errant thinking.

Someone taught my latest batch of educators the learning styles myth and I felt duty-bound to say otherwise even though my modules were not about that. For me it was like knowing that a bridge ahead was destroyed and I had to warn the travellers blindly heading towards it.

I have a time-tested collection of resources that refute the learning styles myth better than I can. But I also offer my perspective.

Learning preferences are not learning styles. A student might prefer to watch a video instead of read a book, but that does not mean you give in to that preference if the learning outcomes are about reading.

Styles are impractical treatments. A teacher who has been taught to apply styles might prepare lessons based on visual, auditory, and kinaesthetic (VAK) “styles” because this supposedly optimises learning for three categories of students. Matching styles with strategies is called the meshing hypothesis. This is not only impractical over time, it is also insufficient and self-fulfilling.

Why insufficient? If practitioners are to take styles seriously, they need to cater to all learner differences. There are currently between 70 and 80 style inventories. Even if we take the lower end, there are 70! (70 factorial, or 70x69x68…x1) possibilities. Even if a teacher elects to focus only on VAK, such effort is not pragmatic over every lesson.
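Whatever one makes of the combinatorics, the sheer scale of 70! is easy to verify. A minimal sketch in Python, purely illustrative:

```python
import math

# The post cites roughly 70 published learning-style inventories and
# invokes 70! (70 factorial) to convey the scale of the combinations.
inventories = 70
possibilities = math.factorial(inventories)  # 70 x 69 x 68 ... x 1

# 70! is a 101-digit number, far beyond anything a teacher could cater to.
print(f"70! has {len(str(possibilities))} digits")
```

Even a tiny fraction of that number would be impossible to design lessons for, which is the point.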

Why is focusing on styles self-fulfilling? Imagine being identified or labelled as a visual learner. If that is supposed to be your style and it is catered to, there is no incentive to develop the other ways of learning. Such learning is not only incomplete and irresponsible, a learner also becomes what s/he is labelled, just as easily as s/he grows to accept being called the class clown or teacher’s pet.

Learning styles ignore context. If a task is necessarily psychomotor, e.g., swimming a particular stroke or riding a bike, are visual and auditory learners supposed to rely on imagery and sounds of the same? No, the task necessitates the strategy, not the supposed optimal style.

Now consider an argument from the special needs angle. A visually impaired person cannot help but rely on auditory and tactile learning. But this does not mean that the learner has a style. The circumstances necessitate the reliance on non-visual forms of learning, but no reasonable person would call those forms learning styles.

If the logic against learning styles is not enough, consider what research says about this stubborn myth. Drawing from some resources I have shared before:

The American Psychological Association has come out against learning styles. The APA went so far as to say that “many parents and educators may be wasting time and money on products, services and teaching methods that are geared toward learning styles.”

Video source

The TEDx video above was of Dr. Tesia Marshik, Assistant Professor of Psychology at the University of Wisconsin-La Crosse, who highlighted how learning styles:

  • had no research evidence that show that they improve learning
  • wasted the time and effort of teachers who tried to cater to different styles
  • labelled and limited people into believing they learn best in certain ways

Video source

In the SciShow video above, Hank Green highlighted how:

  • the only study that seemed to support learning styles had severe flaws in its design
  • students with perceptions that they had one style over others actually benefitted from visual information regardless of their preference

This SciShow video and educators Dylan Wiliam and Donald H Taylor cited the work of Pashler et al. (2008), who declared this:

… we found virtually no evidence for the interaction pattern mentioned above, which was judged to be a precondition for validating the educational applications of learning styles. Although the literature on learning styles is enormous, very few studies have even used an experimental methodology capable of testing the validity of learning styles applied to education. Moreover, of those that did use an appropriate method, several found results that flatly contradict the popular meshing hypothesis. We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice.

I share the thoughts of Willingham et al (2015) when they concluded: “Learning styles theories have not panned out, and it is our responsibility to ensure that students know that.”

Catering to a supposed inherent style does not necessarily optimise learning. Sadly, learning styles are a myth perpetuated by teacher educators and workplace trainers who do not keep up with critical research and reflective practice. They are easy to latch on to because the pseudoscience is low-hanging fruit that preys on our innate perception of individual differences.

On Wednesday I said I would try out Zoom’s latest feature, Breakout Rooms.

Unlike the randomised groups that an instructor can already create in Zoom, Breakout Rooms allows an instructor to create and name “rooms” or “spaces” that students enter on their own. The easiest way to think about this is stations in a classroom that students choose to visit.

I tried this tool out and here are my thoughts and critiques.

Issue 1
I had to be the host of the meeting to do this. The host has the administrative capacity in a Zoom classroom and is the only one who can see the Breakout Rooms function. That is, if an even higher authority, the systems administrator, enables Breakout Rooms in their dashboard.

I saw this function during my trial run because a systems administrator already made me host of a session. But I did not see this when I was co-host on the actual day I needed to use it. The systems administrator had to make me the host before I could see the Breakout Rooms function appear on my tool bar.

Why is this important? Depending on an institution’s setup, the instructor might not be a host. This might be an unusual circumstance, but it does happen, particularly with folks who are new to the game or less trusting of their users.

In any case, my purpose for using Breakout Rooms was to allow students to choose rooms to enter based on assigned topics. In other words, I was using homogeneous grouping as a strategy. If I had used the random assignment function, I would have created heterogeneous groups and students would not have been empowered to make a choice.

Issue 2
When I created Breakout Rooms for the first online activity, Zoom remembered these rooms even after I had closed them. This meant that I had to manually delete them one by one. This is not the case with randomised groups.

Later when I needed to assign students randomly to different groups, the requirement to delete the existing rooms first was a hassle that created a delay. Only after every room was gone was I able to activate the random assignment.

For me, this was an example of Zoom struggling to enable basic classroom strategies. It made something intuitive and seamless in class become clunky and undesirable online.

Issue 3
Here is another example of poor user interface and interaction design.

Students drop out of the online sessions all the time and attempt to come back in. They might drop out due to bad connections, frozen video, or a host of other reasons. Most system administrators require students to enter a waiting room first, so they are stuck in limbo until an instructor lets them into the online classroom manually.

Zoom provides alerts of waiting students as audio pings, but the notifications are not brought to the frontmost layer. As a result, students might wait a long time because the alerts get lost in the layers of windows open on a desktop. This is like a student knocking on a locked classroom door wishing to be let in, but there are all sorts of barriers like boards, bookshelves, desks, and people in the way.

When something unexpected happens, I do not panic and I tend to troubleshoot quickly. I am the type of person that people throw laptops and phones at when they do not work. I am also a student and teacher of user interfaces and experiences, so when I say that Zoom is a woeful classroom replacement if you want to do anything more than talk, take me seriously.

I enjoy Wired’s series on expert critiques of movie “reality”. The most recent video was about a fighter pilot’s perspective on how movies depicted aerial dogfights and dodging missiles.

Video source

I know that the entertainment industry provides escape from reality, but it should not define it. That should seem obvious (emphasis on should), but that is the reality nowadays. Given how some folk cannot distinguish entertainment from education, the experts’ comments provide a healthy dose of reality.

What is the parallel with academia and education?

News reports of research do not always capture the rigours of study and the limitations of research processes. Vendors who claim to have one-stop shops ignore the context and complexity of learning and classrooms.

One group of people seem to only want easy answers to complex issues. They are not patient with nuance and details offered by a second group — people who have invested time and effort in honing their craft and developing their theorems. The first group of people would rather be entertained than educated.

