Another dot in the blogosphere?

Posts Tagged ‘research’

Both individuals who tweeted that social media is not inherently harmful expressed their righteous indignation.

But opinion is not fact. Facts are backed up by rigorous research and critical analysis of data on “screen time” and “addiction”. I curate a running list on those topics at Diigo.

For example, the latest two articles are summaries offered by The Conversation.

If we want to create conditions for change, we need not just righteous indignation but also research-based indignation. Most people will shout down the former because there is no firm ground on any side. Some will ignore the latter, but at least we have a firm foundation to stand tall on.

My reflection today was prompted by watching US news segments in which anti-vaxxers claimed to do “research” on the SARS-CoV-2 vaccines.


There is a fundamental difference between research conducted by experts and the “research” that followers of conspiracy theorists claim they do. 

The latter is an attempt to sound sophisticated and effortful. What they mean is that they heard what someone said or read an opinion on Facebook. This is hearsay and unvalidated reporting; it is not research.

Research is methodical, whether it is from the sciences or the social sciences. It might be qualitative, quantitative, or mixed. Researchers from these fields know what study designs are and should be able to tell you the difference between methodology and methods.

Even the critical reading that might spark research, i.e., the literature review, should be careful and methodical. It seeks broad views within a domain of knowledge and aims to identify gaps.

The type of “research” that anti-vaxxers do typically starts with a firm conclusion. These readers then seek opinions and articles that support this outcome. Again, this is not research.

I am a squeaky wheel for defining the words we use clearly and precisely. If we do not, we lack shared meaning and then work towards different goals. Worse still, we might allow poorly informed meanings to take over more critical meanings of words.


Some education heroes critique and share on TikTok.

Dr Inna Kanevsky is my anti-LSM hero. LSM is my short form for learning styles myth.

In her very short video, she highlighted how teachers perpetuate the myth of learning styles despite what researchers have found.

In the Google Doc she provided, she shared the media and peer-reviewed research that has debunked this persistent but pointless myth.

If your attitude is to ask what the harm is in designing lessons for different “styles”, then you are part of the problem — distracting the efforts of teachers and promoting uncritical thinking and uninformed practice.

Barely a month (week?) goes by without headlines about the link between mobile device use and some harm, e.g., poor mental health. We do not call those headlines a form of gaslighting because so many of us have bought into them.

Thankfully, this critique, Flawed data led to findings of a connection between time spent on devices and mental health problems, bucks the trend. That article summarised recent research and concluded: 

…simply taking tech away from (young people) may not fix the problem, and some researchers suggest it may actually do more harm than good.

Whether, how and for whom digital tech use is harmful is likely much more complicated than the picture often presented in popular media. However, the reality is likely to remain unclear until more reliable evidence comes in.

The thesis of the article: “The evidence for a link between time spent using technology and mental health is fatally flawed”.

The thrust of the article was that studies in the area of mobile device use and harm relied on self-report measures. It then argued that such measures were logically and methodologically flawed.

First, we do not pay attention to what we do habitually. Such activity is background noise, not foreground work. As a result, it is difficult to accurately remember how frequently we use mobile devices or apps.

Next, the author shared how he and his colleagues systematically reviewed actual and self-reported digital media use and discovered discrepancies between the two. He also outlined his own research using objective measures like Apple’s Screen Time app to track device use. He concluded:

…when I used these objective measures to track digital technology use among young adults over time, I found that increased use was not associated with increased depression, anxiety or suicidal thoughts. In fact, those who used their smartphones more frequently reported lower levels of depression and anxiety.
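To make that measurement gap concrete, here is a minimal, hypothetical sketch of how one might compare logged and self-reported screen time. The numbers are simulated purely for illustration; they are not from the author’s study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "true" daily screen time (minutes) for 100 people,
# as an objective log (e.g., a screen time tracker) might record it.
logged = rng.normal(loc=180, scale=60, size=100).clip(min=0)

# Simulated self-reports: systematic over-estimation plus noisy recall.
self_reported = (logged * 1.3 + rng.normal(loc=0, scale=50, size=100)).clip(min=0)

# How far apart are the two measures, and how closely do they track each other?
mean_gap = np.mean(self_reported - logged)
r = np.corrcoef(logged, self_reported)[0, 1]

print(f"Average over-report: {mean_gap:.0f} minutes per day")
print(f"Correlation between logged and self-reported use: r = {r:.2f}")
```

Even when the two measures correlate reasonably well, the individual errors can be large, which is why findings built solely on self-report should be read with caution.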

The author revealed that he used to believe what the popular media peddled about the harm of mobile device use. But his research revealed that the popular media were simplifying complex findings:

The scientific literature was a mess of contradiction: Some studies found harmful effects, others found beneficial effects and still others found no effects. The reasons for this inconsistency are many, but flawed measurement is at the top of the list.

We cannot simply read headlines, form conclusions, and craft far-reaching policies on mobile use, e.g., limiting kids of age X to Y minutes of iPad time. Why? The measurements behind the evidence of harm are flawed and the results of studies are mixed.

We need to be critical readers, thinkers, and actors. We could start by reading beyond the headline, i.e., actually reading the whole article and not propagating it without first processing it carefully. This is more difficult than casually sharing a link, but it is a vital habit to inculcate if we are to be digitally wise. As with most habits, it gets easier with practice.

The most recent episode of the Build For Tomorrow podcast is for anyone who has bought into the narrative of being “addicted” to technology. 

Podcast host Jason Feifer started with the premise that people who have no qualifications, expertise, or study in addiction tend to be the ones who claim that we are helplessly “addicted” to technology.

Ask the experts and they might point out that such “addiction” is the pathologisation of normal behaviour. For an addiction to actually be one, it must interfere with social, familial, or occupational commitments.

Another problem with saying that we are “addicted” to technology is that addiction is normally defined chemically (e.g., to drugs, smoking, or alcohol) and not behaviourally (e.g., gaming, checking social media). Just because something looks like addiction does not mean it is addiction.

An expert interviewed in the podcast described how behavioural addiction research had misappropriated the framing of chemical addiction in self-report surveys (listen from around the 28min 45sec mark). To illustrate how wrong this misappropriation was, he designed an “addicted to friends” study (the description starts at the 32min mark).

  • Take the questions from studies about addictive social media use
  • Swap the content for friendship measures, e.g., from “How often do you think about social media a day?” to “How often do you think about spending time with friends during the day?”
  • Get a large and representative sample (807 respondents) and ask participants to self-report (just like other “addiction” studies)

Long story made short: This study found that 69% of participants were “pathologically addicted to wanting to spend time with other people”. Is this also not a health crisis?

If that sounds ridiculous, know that this followed the design of the alarming social media addiction studies but was more thorough. If we cannot accept the finding that people are addicted to spending time with one another, we should not accept similarly designed studies that claim people are “addicted” to social media.
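To see how such surveys can manufacture alarming percentages, here is a minimal, hypothetical sketch of the scoring logic. The items, responses, and cutoff below are invented for illustration; only the sample size (807) comes from the podcast.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical survey items, reworded from "social media" to "friends"
# (wording invented here; the actual items are described in the study).
items = [
    "How often do you think about spending time with friends during the day?",
    "Do you feel restless when you cannot see your friends?",
    "Have you tried to cut down on time with friends without success?",
]

# Simulated 1-5 Likert responses from 807 respondents (sample size from
# the podcast); the response pattern itself is made up.
responses = rng.integers(1, 6, size=(807, len(items)))

# A common (and criticised) scoring approach: label anyone whose total
# score exceeds an arbitrary cutoff as "addicted".
totals = responses.sum(axis=1)
cutoff = 9
share_flagged = (totals > cutoff).mean()

print(f"Share flagged as 'addicted' to friends: {share_flagged:.0%}")
```

Nothing about the behaviour has to change; shift the arbitrary cutoff or the wording and the “addiction rate” shifts with it.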

Other notable notes from the podcast episode:

  • Non-expert addiction “experts” and the press like to cite numbers, e.g., that we check social media X times a day. This alone does not indicate addiction. After all, we breathe, eat, and go to the loo a certain number of times a day, but that does not mean we are addicted to those things.
  • Heavy use of, say, social media is not necessarily a cause of addiction. It might merely be a correlation, i.e., a person has an underlying condition and the behaviour manifests that way. The behaviour (checking social media) did not cause the addiction; it is the result of something deeper.
  • The increased use of social media and other technological tools is often an enabler of social, familial, and occupational commitments, not an indicator of addiction. Just think about how we have had to work and attend school from home during the current pandemic. Are we addicted to work or school?

One final and important takeaway: the podcast episode ended with how blindly blaming “addiction” on technology is a form of learnt helplessness. It is easier for us to say that something or someone else is to blame, not us. We lose our agency that way. Instead, we should call our habit what it is — overuse, a wilful choice — not a pathological condition.

I enjoyed this podcast episode because it dealt with a common and ongoing message from self-proclaimed gurus and an uninformed press. They focus on getting attention and leveraging fear. Podcasts like Build For Tomorrow and the experts it taps focus on meaning and nuance.

Am I happy that there is a study and meta-research reporting that there is no statistically significant advantage of handwriting over typing notes?

Sort of. In a previous reflection, I explained that it is what students do with recorded notes that matters more than how they take them. Their preferences also matter.

I am also glad that there is ammunition for me to fire back at anyone who claims “research says…” and does not go deeper than that.

But here are a few more factors to consider about this debate.

First, a quiz was the measure of the ability to recall. A quiz and recall — the most basic tool for the most fallible aspect of learning. Consider this: learning is not just a measure of basic recall, and our brains are designed more to forget than to remember.

Second, the students in the study were not allowed to review their notes before the quiz. On one hand, this is good experimental design as it excludes one confounding variable. On the other, it is inauthentic practice — the point of good note-taking is to process the notes further.

Finally, this type of research has been repeated enough times for a meta study. It is an indication of technological determinism, i.e., we attribute disproportionate effects to the type of technology (writing vs typing instruments). In doing so, we foolishly discount methods of teaching and strategies for learning.

 
If you wonder why online courses are perceived to be inferior to in-person ones, this article has some answers.

The author cheekily (but accurately) suggested four “entrenched inequities” that keep the value of online courses and instruction down:

  • The second-class status of pedagogy research
  • The third-class status of online courses
  • The fourth-class status of online-oriented institutions
  • The fifth-class status of the majority of online instructors

The devil is in the details and the author is a demonic writer. Every word sizzled and the full article is worth the read just for its frank critique of the status quo.

Whither online efforts? They wither because they are denied resources.

Call me biased, but I like featuring news and research that counters the fear-driven narratives of much of the press.


Video source

In the video above, parents learnt how to play video games to connect with their kids. This is not the only way parents connect, but it is an important one. The strategy not only creates opportunities for awareness and involvement, it also showcases the kids’ abilities to teach their parents.

Another resource certain to ruffle the feathers of proverbial ostriches with their heads in the sand is the NYT review of research revealing that fears about kids’ mobile phone and social media use are unwarranted.

Though not specifically labelled as such in the article, the reported research sounded like meta-analyses of prior studies on the effects of mobile phone and social media use on well-being.

The meta research revealed that the effect size was negligible. On the other hand, studies that spread fear and worry tended to be correlational, e.g., suicide rates in the USA rose alongside the widespread adoption of mobile phones.

But the NYT reminded us that correlation is not causation. Furthermore, there was no appreciable rise in suicide rates in Europe even though mobile phone use rose similarly there.
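As a toy illustration of that reminder, here is a minimal sketch (with invented numbers, not the NYT’s or the researchers’ data) of how two unrelated quantities that both drift upward over the same years can correlate strongly.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2008, 2020)

# Two unrelated quantities that both trend upward over the same period,
# each with its own year-to-year noise (all values are invented).
phone_adoption_index = 10 + 7 * (years - years[0]) + rng.normal(0, 3, len(years))
unrelated_metric = 50 + 2 * (years - years[0]) + rng.normal(0, 1, len(years))

# A strong correlation appears even though neither series drives the other;
# they simply share an upward trend over time.
r = np.corrcoef(phone_adoption_index, unrelated_metric)[0, 1]
print(f"Correlation: r = {r:.2f}")
```

The shared trend does all the work. Without a comparison group (like the European figures above) or a proper experimental design, such a correlation says nothing about cause.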

One reason the NYT has the reputation it does is that it resists the temptation to be reductionist or to simply regurgitate what the rest report. This is not about standing out. It is about being critical and responsible.

Praxis is research-informed practice or research that is translated into practice.


Video source

According to this video, there is surprisingly little praxis in the area of classroom management.

Just how little? According to one research group’s analysis, only 0.13% of published studies were replications. Replications are studies that test another researcher’s findings and claims. This means it is easy to make an initial claim and never have it challenged by questions or critique.

That finding affects educators who regularly read academic journals. If they do not read them, their practice is instead transmitted and challenged socially by their peers and supervisors.

There is nothing wrong with teachers observing one another and exchanging professional practice. In fact, this needs to happen more often than it already does. But casual or unstructured observations and communications are not research. They do not have the reach or rigour of reputable research journals.

So the next time you attend a workshop or conference with a guru up front making claims that their technique works, ask them what replicated research it is based on. If you do not, I have some snake oil you might like to purchase.

The tweet above is an example of how NOT to start research.

You do not start with a conclusion and look for data to prove it. Instead, you gather data based on one or more research questions; only then do conclusions possibly emerge.

So how might the tweeted issue be investigated? It might start with questions like: How does the new surge pricing scheme affect drivers? How does it affect passengers? How do the effects differ by company?

These questions allow for different data sources and types to shed light on a complex phenomenon. They may reveal that the surge pricing is “unfair” (however that is defined) or not. They do not exclude data that might reveal the contrary or uncover even more issues.

