Another dot in the blogosphere?

Posts Tagged ‘evidence’

Some education heroes critique and share on TikTok.

Dr Inna Kanevsky is my anti-LSM hero. LSM is my shorthand for the learning styles myth.

In her very short video, she highlighted how teachers perpetuate the myth of learning styles despite what researchers have found.

In the Google Doc she provided, she shared the media and peer-reviewed research that has debunked this persistent but pointless myth.

If your attitude is to ask what the harm is in designing lessons for different “styles”, then you are part of the problem — distracting the efforts of teachers and promoting uncritical thinking and uninformed practice.

I continue from yesterday’s reflection about “engagement” and why it does not guarantee learning.

One of the best things I discovered from my shallow dive into this rabbit hole was a 2017 resource from Paul Kirschner. He shared how the rhetoric and practice of engagement were shortcuts for teaching and learning. This aligned with my educator’s philosophy that anything worth doing is difficult.

Even better were the slides Kirschner highlighted from a 2015 presentation by Rob Coe of Durham University. These were shared in a blog entry by Carl Hendrick titled Engagement: Just because they’re busy, doesn’t mean they’re learning anything.

The highlighted slides are worth a few minutes’ read, and both blogs deserve click-through traffic.

I will just say this: a rabbit hole is an indicator; an actual rabbit is evidence. Students might look like they are learning. What matters is whether they actually do. We need to focus on strategies that matter.

What might these strategies be? A slide from Coe’s presentation offers some suggestions: Feedback, metacognition, peer teaching.

I went down a shallow rabbit hole, prompted by my RSS feed, that revealed how a teacher confused engagement with learning.

This was her plan:

I had planned what I thought was a brilliant lesson that would feed my love for scrapbooking and get students to connect their learning about the early civilizations. I set up each table as a different cultural component of a civilization: government, geography, religion, economics, and education. There were magazines, research materials, colored pencils, scrapbooking paper, and other materials on each table. Students had to complete an activity by sharing and questioning each other.

When debriefed on her lesson, she was challenged with the question: were they learning, or was it just “pretty”? When she looked at her students’ reflections, she realised that they could remember the activities but not the content. She concluded that while engaging activities might be vehicles of learning, they were not necessarily indicators of learning.

My blog will reveal how long I have been against the rhetoric of engagement, but I do have to question why:

  • recall of content was the only measure of learning
  • sharing and peer teaching were not also measures of learning
  • products of learning were prioritised over the processes of learning

That said, I agree that lessons that look “pretty” because they seem active may come across as “engaging” while not offering much by way of learning. But I would not use the vehicle/indicator references.

Instead, I consider the activities as possible indicators of learning, e.g., time spent reading, quality of peer teaching, level of reflection. But all these are not evidence of learning as measured by a specific tool.

The tool might be a paper test, performance, community project, etc. Only when externalised and applied meaningfully is there evidence of learning of new information, attitudes, or skills.

This was my reflection on the first room down the rabbit hole of engagement, learning, indicators, and evidence. More on the same tomorrow in Part 2.

About a week ago, I watched a news interview where a politician countered a question by saying that there was no evidence for a nefarious deed and therefore it did not happen.

That was not unusual because that is what a backpedalling politician might say. What might be unusual is how easily we might accept that argument.

The absence of evidence is not evidence of absence.

An often-stated axiom is that the absence of evidence is not evidence of absence. A lack of evidence of a crime does not mean that the crime did not take place. It could mean that proof has not yet been gathered.

The politician’s argument is a logical fallacy: an argument from ignorance. If you do not know that something exists or that some process happens, you might insist that it does not. The remedy is to learn so that you are no longer ignorant.

And then there is wilful ignorance. This is when you (should) know better, but decide to ignore the facts or advice. An example of this in schooling and training is atheoretical practice. This is perpetuating information and processes (the what and how) without knowing the reasons for them (the why).

Atheoretical practice is frighteningly common. I know of people who claim to be “learning designers” who have little to no theoretical foundation. They choose not to learn from edtech history or stay current with research.

Ignorance is difficult enough to overcome. But wilful ignorance is a beast ridden particularly by adults who think they know better. They do not.

This week’s Crash Course episode on navigating digital information focused on evaluating evidence offered by online creators.

Video source

Anyone who says anything online needs to back up any claim with evidence. But not just any evidence.

Some might offer claims as evidence. Host John Green highlighted a claim about a new and supposedly deadly spider that had already killed five people in the USA. That claim (in all caps, no less) was made without reference to any other resource.

Others might offer the wrong evidence after making a claim. Green provided the example of a US senator who brought a snowball onto the Senate floor and offered it as evidence that there was no global warming. This was evidence of winter and short-term weather, but said nothing about long-term climate change.

In Green’s own words, not all evidence is created equally. So what are we to do? Ask two questions:

  • Does the information make sense?
  • Does the information merely confirm my pre-existing worldview?

Answers to both questions require value judgements and this can be a subjective process. To make things more objective, we could evaluate evidence by finding out how valid and reliable it is.

Validity is about how relevant and credible the information is; reliability is a measure of how much or how often that same evidence shows up.

The saying “Pics, or it didn’t happen” is wiser than it appears.

The phrase is a quick way of saying show me evidence, specifically photos, because what you claim to be a truthful or factual account may not be valid or reliable.

Video source

Our memories are imperfect. Most of us do not have “photographic” memories, and those who do are exceptional talents. Even then, captures are not facts devoid of colouring, contrasting, or other manipulations.

Any teacher who still thinks that drill and rote memorisation are the best ways to teach and learn needs to reconsider or retire.

In the age of social media, what you capture today might not be relevant tomorrow. There is as much point in objecting to such circumstances as there is in blowing raspberries at a tornado.

Instead, “pics, or it didn’t happen” could be one principle to base change on. It could be the foundation for dealing with fake news. It could start the line of questions against learning styles, digital natives, “best” practices, and extrinsic gamification. It could shift the focus away from just learning-about (content) to learning-to-be (contextual thinking). It could spur the search for evidence-based practices, and personal and professional development.

Video source

This video had an interesting statement to make about the mindset of today.

Instead of “Pics, or it didn’t happen”, we now have “Pics, it must have happened”.

Observation: Some teachers are not comfortable with the first statement, so how will they respond to the second?

Today I would like to share three lessons on change management that one might draw from a utility bill.

It may sound strange, but there is one bill I almost look forward to receiving every month. This is my utility bill for electricity, water, and gas.

I do not actually look forward to paying money. I like seeing the comparison table that I get via an e-bill.


This is my August summary. I take some pride that despite having a large apartment, I use comparatively little by way of utilities. The asterisks refer to comparisons with other apartments in my building and the national average based on the size of my apartment.

My household keeps our electricity bill low by using LED bulbs, using energy-efficient appliances, rarely using the air-conditioner, and having plugs that switch standby devices off completely. I am also a tyrant about electricity discipline.

We keep water waste to a minimum by having low-flow taps and adjusting the WC flush to its most efficient setting. I am not sure what we do to save gas, except that it is sometimes more efficient to microwave small amounts of water than to heat it on a stove. It boils down to good personal habits.

I have invested the most time, money, and effort in saving electricity because that is what I have the greatest control over and there are a variety of devices and processes at home that use it.

I have not changed any of my major appliances since I bought them almost a decade ago, but I got the most energy-efficient ones I could at the time. I put the computing devices on power schedules so they do not run when we are not using them.

I initially had CFL lights (which were energy savers) but changed my most-used lights to LEDs (which use even less electricity) despite the higher initial cost. I also invested in two devices that prevent standby devices from drawing electricity (IntelliPlug by OneClick, exact model here).

I found out as much as I could about these devices, tried a few, monitored the results, and bought more when they seemed to be working.

The savings paid off almost immediately. Each month, I get reminded that what I started keeps working. When there are utility hikes, I do not see appreciable jumps.

So what are the lessons that might be scaled up and applied to change management?

First, it is important to invest in the long term. The short term might involve costs (money, time, effort, manpower, etc.) with no clear results for months or even years. But if you have a well-researched and/or proven strategy, you can be confident that it will pay off in the longer term.

Second, you must monitor the effects of change implementation. You must have a constant source of data that tells you whether what you are doing is working. Objectively collected and analysed data that yields good results is a morale booster and motivates change agents to keep pushing forward. Data that consistently points the other way is a clear sign to try something else.

Third, keep at it even when things are going well. The worst thing that can happen is to get complacent. Every process can be made more efficient or more effective, or something can come along to threaten a tried-and-tested technique. It is important to know when to stick to your guns even though things do not seem to be going well, and when to switch tactics even though they do.

If you are ahead of the curve, your biggest competitor is yourself. If you want to keep staying ahead, keep establishing new long term goals, monitor your progress, and embrace constant change.
