Another dot in the blogosphere?

Posts Tagged ‘causation’

The most recent episode of the Build For Tomorrow podcast is for anyone who has bought into the narrative of being “addicted” to technology. 

Podcast host Jason Feifer started with the premise that people who have no qualifications, expertise, or study in addiction tend to be the ones who claim that we are helplessly “addicted” to technology.

Ask the experts and they might point out that such “addiction” is the pathologisation of normal behaviour. For something to actually be an addiction, it must interfere with social, familial, and occupational commitments.

Another problem with saying that we are “addicted” to technology is that addiction is normally defined chemically (e.g., to drugs, smoking, or alcohol) and not behaviourally (e.g., gaming, checking social media). Just because something looks like addiction does not mean it is addiction.

An expert interviewed in the podcast described how self-reporting surveys on behavioural addiction had misappropriated the measures of chemical addiction (listen from around the 28min 45sec mark). To illustrate how wrong this misappropriation was, he designed an “addicted to friends” study (description starts at the 32min mark).

  • Take the questions from studies about addictive social media use
  • Swap the content for friendship measures, e.g., from “How often do you think about social media a day?” to “How often do you think about spending time with friends during the day?”
  • Get a large and representative sample (807 respondents) and ask participants to self-report (just like other “addiction” studies)

Long story made short: This study found that 69% of participants were “pathologically addicted to wanting to spend time with other people”. Is this also not a health crisis?

If that sounds ridiculous, know that this followed the design of the alarming social media addiction studies but was more thorough. If we cannot accept the finding that people are addicted to spending time with one another, we should not accept similarly designed studies that claim people are “addicted” to social media.

Other notable notes from the podcast episode:

  • Non-expert addiction “experts” or the press like to cite numbers, e.g., check social media X times a day. This alone does not indicate addiction. After all, we breathe, eat, and go to the loo a certain number of times a day, but that does not mean we are addicted to those things.
  • The heavy use of, say, social media is not necessarily a cause of addiction. It might simply be a correlation laid bare, i.e., a person has an underlying condition and the behaviour manifests that way. The behaviour (checking social media) did not cause the addiction; it is the result of something deeper.
  • The increased use of social media and other technological tools often enables social, familial, and occupational commitments; it is not an indicator of addiction. Just think about how we have had to work and study from home during the current pandemic. Are we addicted to work or school?

One final and important takeaway: the podcast episode ended with how blindly blaming technology for our “addiction” is a form of learnt helplessness. It is easier for us to say that something or someone else is to blame, not me. We lose our agency that way. Instead, we should call our habit what it is (overuse, a wilful choice), not a pathological condition.

I enjoyed this podcast episode because it dealt with a common and ongoing message from self-proclaimed gurus and the uninformed press. They focus on grabbing attention and leveraging fear. Podcasts like Build For Tomorrow and the experts it taps focus on meaning and nuance.


Video source

This video is as much about misconceptions surrounding screen time as it is about:

  • Reading beyond headlines
  • Understanding how newspapers are not journals
  • Distinguishing engagement and accuracy; statistical significance and effect size; correlation and causation

It also illustrated how large sample sizes can make tiny effects statistically significant even though they have no practical significance.

For example, the video cited a study in Nature Human Behaviour with a sample size of 355,358 adolescents. The video (and this article in Vox) highlighted how the study found that “wearing eyeglasses and eating potatoes also had significant yet small negative effects on teens’ wellbeing”. And yet we do not vilify either.
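To see why sheer sample size does this, here is a minimal sketch in Python (my own illustration; the group means, spread, and sample size are made-up numbers, not figures from the study). With enough respondents, a difference of a few hundredths of a point on a wellbeing scale clears the p < 0.05 bar even though the effect size is negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical wellbeing scores on a 1-10 scale for two very large groups.
# The true difference between the groups is tiny: 0.03 points.
n = 175_000                                          # respondents per group
group_a = rng.normal(loc=7.00, scale=1.5, size=n)    # e.g., no eyeglasses
group_b = rng.normal(loc=6.97, scale=1.5, size=n)    # e.g., wears eyeglasses

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d: the mean difference relative to the pooled spread of scores.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"p-value:   {p_value:.2g}")    # far below 0.05, so "significant"
print(f"Cohen's d: {cohens_d:.3f}")   # about 0.02, practically negligible
```

The p-value only says the tiny difference is unlikely to be exactly zero; the effect size is what tells you whether it matters.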

Add to that the fact that researchers have to decide on the cut-offs that distinguish statistically significant effects from non-significant ones (e.g., a P value of 0.01 vs 0.05). The same researchers or the agencies they work for might also set cut-offs like recommended screen times of no more than one hour before age five, even if the evidence does not support strict limits for any age group.

TLDR? Newspapers oversimplify complex phenomena by providing easy answers. Real learning is not in taking these answers at face value. It happens when you explore nuance and depth instead.

One of my pet peeves is how some people confuse correlation with causation. Sometimes I cannot blame them because they were taught to think that way.


Video source

The SciShow video above highlights one common example. As a former biology student (and teacher), I was taught (and taught others) wrongly that aching muscles are due to lactic acid buildup.

Not only is the buildup actually of lactate (a base that accepts protons) rather than lactic acid, the aches are also only correlated with the buildup. The lactate might build up, but it does not seem to cause the aches; the actual cause is not yet known for sure.

This video is not just useful for highlighting how scientific facts change, but also how scientific thinking takes place. It is the latter that creates content and changes it. It is the thinking that needs to be modelled and taught, not just the content.

Anyone who needs to process scientific, medical, or social science research that involves correlations needs to watch the video below.


Video source

As the video highlights, the number of drownings can be correlated with the number of Nicolas Cage movies released each year, but this does not mean one causes the other.
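If you want to convince yourself how easily unrelated quantities line up, a quick simulation helps. This sketch uses purely hypothetical data (nothing to do with the actual movie or drowning figures): it draws many pairs of independent random walks and reports the strongest correlation it stumbles upon.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two quantities that have nothing to do with each other: each is an
# independent random walk over 11 "years". Repeat a couple of hundred
# times and note the strongest correlation that turns up by chance.
strongest = 0.0
for _ in range(200):
    drownings = np.cumsum(rng.normal(size=11))    # made-up series
    cage_films = np.cumsum(rng.normal(size=11))   # made-up series
    r = np.corrcoef(drownings, cage_films)[0, 1]
    strongest = max(strongest, abs(r))

print(f"Strongest |r| found by chance: {strongest:.2f}")  # often above 0.9
```

Trawl through enough unrelated series and some will correlate strongly by pure chance, which is exactly how charts like the Cage-and-drownings one come about.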

Journalists who like reporting whether certain foods are good or bad for you need this video.

People who read what these uninformed journalists write need to watch this video.

Anyone who might have heard someone else declare, “Correlation is not causation!” needs to watch this video.

Watch this video!

Say it, repeat it: Correlation is not causation. This is a tenet in critical thinking and research literacy.

Case in point:

As long as news agencies and papers continue to publish drivel without explaining the tenet, they are peddling misinformation. There are many other factors that contribute to susceptibility or resistance to disease.

There is nothing wrong with promoting healthy eating by feeding your body with fruit. There is everything wrong with perpetuating uncritical thinking by feeding your mind with misinformation.

As long as news agencies and papers practise such lazy and unethical publishing, teachers and parents have a duty to use these examples to model critical thought.

Rick and Morty is an animated series that is waiting for its third season.

It is not for the faint-hearted because it makes you laugh from openings you might not realise you have. It can be rude and crude, but oh-so intelligent.


Video source

So it should come as little surprise that it is possible to use Rick and Morty to illustrate human cognition, confirmation bias, and how correlation is not the same as causation.

This just goes to show how just about anything can be used to teach anything else. The key is an educator who can think both creatively and critically.

Our daily rags sometimes do us a disservice by publishing articles like this.

A headline that reads “Eating too much fish while pregnant raises child obesity risk” is not only inaccurate, it is also irresponsible. The researchers highlighted that there was no direct link and said that making such a hypothesis was “speculative”. The study did not prove causation; it only suggested correlation.

The headline is what grabs eyeballs. It is clickbait based on fear or worry.

If not scientifically or research literate, the layperson typically does not distinguish between correlation and causation. Perhaps we need a SkillsFuture course on this because it is a valuable lesson in lifelong learning.

If not, then we might ponder the observation of one of the readers: The Japanese consume a lot of fish, and presumably that includes pregnant women, but they have a relatively low obesity rate. So what gives?

Rising above irresponsible reporting, I wonder if literacy in schools includes the sort of critical thinking that 1) distinguishes between correlation and causation, and 2) encourages questions with counter examples and data.

Is such literacy relegated to “cyberwellness” programmes or is it integrated in the context of actual content?


Does anyone raise an eyebrow when they read headlines like “Happier youth ‘spend less time online'”? [article archive online] [article archive PDF]

I am not surprised, not because I agree with the statement, but because I have come to expect the Straits Times to publish such misleading information as truth.

I have said it before and I will say it again: Correlation is not causation.

There are so many things that can cause unhappiness. Taking a poll and reacting to poorly drawn conclusions can cause unhappiness.

You can sample a segment of a population and collect data on, say, the brand of toilet paper they prefer and the incidence of crimes like peeping toms.

The data might reveal a strong correlation between those who like Itchy Bum brand and the likelihood of being watched by a peeping tom. But you cannot conclude that one causes the other. You can only say there is some relationship between the two variables.

I have a second problem with the way the data was presented.

The article states that respondents who rated themselves “happy” spent 5.4 hours online each day. Those who rated themselves “unhappy” spent 5.8 hours online. Ignoring how happiness or unhappiness was determined, is 0.4 hours (24 minutes) statistically significant? Is it practically significant? Just because there is a numerical difference does not make it significant.
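Whether a 24-minute gap even clears the statistical bar depends on the poll's sample size and on how much online hours vary between people, neither of which the article reports. As a rough sketch (the two-hour standard deviation is my assumption, not a figure from the survey), a standard power calculation suggests how many respondents per group would be needed; even then, statistical significance would say nothing about whether 24 minutes matters in practice.

```python
from math import ceil
from scipy import stats

# Back-of-envelope: how many respondents per group would a survey need
# for a 0.4-hour difference to reach p < 0.05 with 80% power, assuming
# (hypothetically) that daily online time varies with an SD of 2 hours?
diff, sd, alpha, power = 0.4, 2.0, 0.05, 0.8
z_alpha = stats.norm.ppf(1 - alpha / 2)   # ~1.96
z_power = stats.norm.ppf(power)           # ~0.84
n_per_group = 2 * ((z_alpha + z_power) * sd / diff) ** 2

print(f"~{ceil(n_per_group)} respondents per group")  # roughly 400
```

Even a “significant” 24 minutes would remain a small effect, not a prescription.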

Worse still, a layperson who reacts strongly to this might take the results as prescriptive. They might take the article as advice on what to do instead of analysing it more critically.

Might a parent or teacher then insist on a 5.4-hour cap on online activity? This is not as ridiculous as it sounds when you consider how people react to articles about whether to drink wine or eat a particular type of fish for health reasons. These folks switch diets, brands, and behaviours at the drop of a hat.

And at the drop of a physical newspaper at their doorstep. This newspaper is the same one that would rather bash a competing medium (online bad, paper good) than reinvent itself to take full advantage of positive change.

The same newspaper had an opportunity here to educate but chose to misinform instead. Present the same information in an online forum or on Facebook and watch what happens: the misinformation gets discussed and might eventually be corrected.

