Another dot in the blogosphere?



I am slow blogging my thoughts on e-pedagogy for a possible workshop later this year.

I have already reflected on Martin Weller’s excellent offerings on group work [reflection] and asynchronicity [reflection]. I recorded some scattered thoughts on e-pedagogy in general and on anticipation as an intentional learning design element.

Today I jot down some notes on how to link some questions with answers from research on teaching and learning. For example, when designing intentional learning, one might consider: 

  • so wow: hook/activation of schema (Piaget, Ausubel)
  • so what/why/how: social negotiation of meaning (Vygotsky)
  • so what is this to me: resolution of cognitive dissonance (Festinger)

My plan is to link to what teachers (should) already know. This should serve as a hook or activate schemata. We would negotiate a few strategies or principles of e-pedagogy by leveraging on homogeneous and heterogeneous group work. Then we ensure takeaways by resolving current and future design practices.

I have even more thoughts in a growing document in my Notes app. My worry is that there is too much to uncover. This is like trying to distill a Masters programme into one or more workshops. It might be a case of too much too fast.

In the space of about a week, I read two reflections that challenged the narrative of “online = bad, face to face = good”.

The first was a tweet thread by Tim Fawns:

Fawns’ main points seemed to be:

  • It is not logical or possible to compare “face-to-face learning” and “online learning” because each is not just one method, environment, or resource.
  • Even if you compare just one method, e.g., online lectures and in-person lectures, or online group work vs face-to-face group work, how well each approach worked would depend on a host of other factors such as “how well each approach was done, how well it suited students, how well students engaged with it, relationships between teachers & student, infrastructure & support for each approach, surrounding circumstances, what else students did”.
  • The either/or argument is counterproductive. Students already do both, both teachers and students have strategies that work well for them, and much of teaching and learning is already blended in terms of environments and methods.
  • Labelling all online learning as bad probably stems from a bad experience, but this does not make all online learning bad. One rotten apple does not mean that all other apples are bad.

The second resource, “good online learning – group work”, was from Martin Weller. Citing previous research and practice, Weller stated that there are “well established model(s) to help you construct online interaction in a way that is different from face to face”.

Specifically, he suggested Diana Laurillard’s conversational framework (see condensed version of four categories of activities and five media forms here) and Gilly Salmon’s 5 stage e-moderating model.

Then he summed up what research has uncovered on online learning and group work. In plain speak, he suggested that we:

  • Clearly differentiate synchronous vs asynchronous activities
  • Include a lot more time for activities to be conducted and completed
  • Engage in careful design when planning, and provide detailed instructions and guidance when facilitating 
  • Take advantage of pre-existing learner expectations and behaviours about operating online
  • Establish social connections early in an online course: It is the glue that holds people together
  • Leverage on asynchronous work to not just provide flexibility of time for all but also reassurance for the socially awkward
  • Monitor and deal with behaviours that are counterproductive to online learning and cooperation
  • Not simply transfer face-to-face group work designs to online experiences without redesign or greater support

Rising above, these two gents provided precursors for a masterclass on the design of online learning experiences. I have a Masters and a Ph.D. in fields relevant to this and have taught online since 2001. But I am still learning how to do this. So I appreciate the pearls of wisdom they threw online. Oink!

This Instagram post succinctly stated why peer teaching is an excellent strategy to help students learn. According to a study: 

…students who prepared to teach outperformed their counterparts in both duration and depth of learning, scoring 9 percent higher on factual recall a week after the lessons concluded, and 24 percent higher on their ability to make inferences. The research suggests that asking students to prepare to teach something—or encouraging them to think “could I teach this to someone else?”—can significantly alter their learning trajectories.

But the post did not include a link to the study, which was published in the Journal of Educational Psychology. The abstract at the journal site also claims that learning was effective as measured by a test shortly after peer teaching and “even at a delay”.

I wanted to know how long that delay was, but could not find out because the manuscript will only be publicly available on 15 February 2022. That said, the study adds to the pool of knowledge about peer teaching.

One of my favourite sayings about peer teaching is this: 

To teach is to learn twice.

Whitman, N. A., & Fife, J. D. (1988). Peer Teaching: To Teach Is To Learn Twice. ASHE-ERIC Higher Education Report No. 4. http://files.eric.ed.gov/fulltext/ED305016.pdf

The saying was based on the title of a much older paper from 1988. Students might learn something the first time round when they read, watch, listen, etc. But they learn a second time when they prepare to teach it to their peers — they identify gaps, use their own examples, and relate the content in the language of their peers.

Both of the individuals who tweeted that social media is not inherently harmful expressed their righteous indignation.

But opinion is not fact. Facts are backed up with rigorous research and critical analysis of data on “screen time” and “addiction”. I curate a running list on those topics at Diigo.

For example, the latest two articles are summaries offered by The Conversation.

If we want to create conditions for change, we need not just righteous indignation, we also need research-based indignation. Most people will shout down the former because there is no firm ground on any side. Some people are going to ignore the latter, but at least we have a firm foundation to stand tall on.

My reflection today was prompted by watching US news segments in which anti-vaxxers claimed to do “research” on the SARS-CoV2 vaccines.


There is a fundamental difference between research conducted by experts and the “research” that followers of conspiracy theorists claim they do. 

The latter is an attempt to sound sophisticated and effortful. What they mean is that they heard what someone said or read a Facebook opinion. This is hearsay and unvalidated reporting; it is not research.

Research is methodical, whether it is from the sciences or the social sciences. It might be qualitative, quantitative, or mixed. Researchers from these fields know what study designs are and should be able to tell you the difference between methodology and methods.

Even the critical reading that might spark research, i.e., the literature review, should be careful and methodical. It surveys broad views within a domain of knowledge and seeks to identify gaps.

The type of “research” that anti-vaxxers do typically starts with a firm conclusion. These readers then seek opinions and articles that support this outcome. Again, this is not research.

I am a squeaky wheel for defining the words we use clearly and precisely. If we do not, we lack shared meaning and then work towards different goals. Worse still, we might allow poorly informed meanings to take over more critical meanings of words.


Some education heroes critique and share on TikTok.

Dr Inna Kanevsky is my anti-LSM hero. LSM is my short form for learning styles myth.

In her very short video, she highlighted how teachers perpetuate the myth of learning styles despite what researchers have found.

In the Google Doc she provided, she shared the media and peer-reviewed research that has debunked this persistent but pointless myth.

If your attitude is to ask what the harm is in designing lessons for different “styles”, then you are part of the problem — distracting the efforts of teachers and promoting uncritical thinking and uninformed practice.

Barely a month (week?) goes by without headlines about the link between using mobile devices and some harm, e.g., poor mental health. We do not call those headlines a form of gaslighting because so many of us have bought into them.

Thankfully, this critique, Flawed data led to findings of a connection between time spent on devices and mental health problems, bucks the trend. That article summarised recent research and concluded: 

…simply taking tech away from (young people) may not fix the problem, and some researchers suggest it may actually do more harm than good.

Whether, how and for whom digital tech use is harmful is likely much more complicated than the picture often presented in popular media. However, the reality is likely to remain unclear until more reliable evidence comes in.

The thesis of the article: “The evidence for a link between time spent using technology and mental health is fatally flawed”.

The thrust of the article was that studies in the area of mobile device use and harm relied on self-reporting measures. It then argued how such measures were logically and methodologically flawed.

First, we do not pay attention to what we do habitually. Such activity is background noise, not foreground work. As a result, it is difficult to accurately remember how frequently we use mobile devices or apps.

Next, the author shared how he and his colleagues systematically reviewed actual and self-reported digital media use and discovered discrepancies between the two. He also outlined his own research of using objective measures like Apple’s screen time app to track device use. He concluded:

…when I used these objective measures to track digital technology use among young adults over time, I found that increased use was not associated with increased depression, anxiety or suicidal thoughts. In fact, those who used their smartphones more frequently reported lower levels of depression and anxiety.

The author revealed that he used to be a believer in what the popular media peddled about the harm of mobile device use. But his research showed that the popular media were simplifying complex findings:

The scientific literature was a mess of contradiction: Some studies found harmful effects, others found beneficial effects and still others found no effects. The reasons for this inconsistency are many, but flawed measurement is at the top of the list.

We cannot simply read headlines, form conclusions, and craft far-reaching policies on mobile use, e.g., limit kids of age X to Y minutes of iPad time. Why? The measurements for the evidence of harm are flawed and the results of studies are mixed.

We need to be critical readers, thinkers, and actors. We could start by reading beyond the headline, i.e., actually reading the whole article and not propagating articles without first processing them carefully. This is more difficult to do than casually sharing a link, but it is a vital habit to inculcate if we are to be digitally wise. And as with most habits, doing this gets easier with practice.

The most recent episode of the Build For Tomorrow podcast is for anyone who has bought into the narrative of being “addicted” to technology. 

Podcast host Jason Feifer started with the premise that people who have no qualifications, expertise, or study in addiction tend to be the ones who make claims that we are helplessly “addicted” to technology.

Ask the experts and they might point out that such “addiction” is the pathologisation of normal behaviour. For an addiction to actually be one, it must interfere with social, familial, or occupational commitments.

Another problem with saying that we are “addicted” to technology is that addiction is normally defined chemically (e.g., to drugs, smoking, or alcohol) and not behaviourally (e.g., gaming, checking social media). Just because something looks like addiction does not mean it is addiction.

An expert interviewed in the podcast described how behavioural addiction had misappropriated chemical addiction in self-reporting surveys (listen from around the 28min 45sec mark). To illustrate how wrong this misappropriation was, he designed an “addicted to friends” study (description starts at the 32min mark).

  • Take the questions from studies about addictive social media use
  • Swap the content for friendship measures, e.g., from “How often do you think about social media a day?” to “How often do you think about spending time with friends during the day?”
  • Get a large and representative sample (807 respondents) and ask participants to self-report (just like other “addiction” studies)

Long story made short: This study found that 69% of participants were “pathologically addicted to wanting to spend time with other people”. Is this also not a health crisis?

If that sounds ridiculous, know that this followed the design of the alarming social media addiction studies but was more thorough. If we cannot accept the finding that people are addicted to spending time with one another, we should not accept similarly designed studies that claim people are “addicted” to social media.

Other notable notes from the podcast episode:

  • Non-expert addiction “experts” or the press like to cite numbers, e.g., check social media X times a day. This alone does not indicate addiction. After all, we breathe, eat, and go to the loo a certain number of times a day, but that does not mean we are addicted to those things.
  • The heavy use of, say, social media is not necessarily a cause of addiction. It might be a correlation laid bare, i.e., a person has an underlying condition and the behaviour manifests that way. The behaviour (checking social media) did not cause the addiction; it is the result of something deeper.
  • The increased use of social media and other technological tools is often an enabler of social, familial, or occupational commitments, not an indicator of addiction. Just think about how we have had to work and school from home during the current pandemic. Are we addicted to work or school?

One final and important takeaway. The podcast episode ended with how blindly blaming our behaviour on technology “addiction” is a form of learnt helplessness. It is easier for us to say: something or someone else is to blame, not me. We lose our agency that way. Instead, we should call our habit what it is — overuse, a wilful choice — not a pathological condition.

I enjoyed this podcast episode because it dealt with a common and ongoing message by self-proclaimed gurus and uninformed press. They focus on getting attention and leveraging on fear. Podcasts like Build For Tomorrow and the experts it taps focus on meaning and nuance.

Am I happy that there is a study and a meta study reporting that there is no statistically significant advantage of handwriting over typing notes?

Sort of. In a previous reflection, I explained that it is what students do with recorded notes that matters more than how they take them. Their preferences also matter.

I am also glad that there is ammunition for me to fire back at anyone who claims “research says…” and does not go deeper than that.

But here are a few more factors to consider about this debate.

First, a quiz was the measure of ability to recall. A quiz and recall — the most basic tool for the most fallible aspect of learning. Consider these: Learning is not just a measure of basic recall and our brains are designed more to forget than to remember.

Second, the students in the study were not allowed to review their notes before the quiz. On one hand, this is good experimental treatment design as it excludes one confounding variable. On the other, this is inauthentic practice — the point of taking good notes is to process them further.

Finally, this type of research has been repeated enough times for a meta study. It is an indication of technological determinism, i.e., we attribute disproportionate effects to the type of technology (writing vs typing instruments). In doing so, we foolishly discount methods of teaching and strategies for learning.

 
If you wonder why online courses are perceived to be inferior to in-person ones, this article has some answers.

The author cheekily (but accurately) suggested four “entrenched inequities” that keep the value of online courses and instruction down:

  • The second-class status of pedagogy research
  • The third-class status of online courses
  • The fourth-class status of online-oriented institutions
  • The fifth-class status of the majority of online instructors

The devil is in the details and the author is a demonic writer. Every word sizzled and the full article is worth the read just for its frank critique of the status quo.

Whither online efforts? They wither because they are denied resources.

