Another dot in the blogosphere?


Here are some of my notes on the second part of Crash Course’s series on media and digital literacies.

Video source

This episode focused on fact-checking. Presenter John Green outlined a Stanford University study on how a group of university professors and students evaluated information online.

The participants focused on superficial elements of source sites, e.g., how a site presented information, instead of looking more deeply at what information it shared.

On the other hand, professional fact checkers armed themselves with at least three questions to evaluate sources:

  1. Who is behind this information and why are they sharing it?
  2. What is the evidence for their claims?
  3. What do other sources say about the sharer and its claims?

Answering these questions is not as simple as ABC, but it does provide an easy-to-remember set of 1-2-3 to evaluate what we read, watch, or listen to.

Near the end of the video, Green highlighted the difference between being cynical and being skeptical. The former is being “generally distrustful of everyone else’s motives” while the latter is being “not easily convinced”.

All of us could use a healthy dose of skepticism every day. The problem is that our bias might raise this shield only when the information does not align with what we already know or believe. This is why asking the 1-2-3 regardless of the source or our own compass helps keep us in check.

John Green and co have just released part 1 of their Crash Course series on navigating digital information.

Video source

If I had to sum up the takeaway from the video, it would be this: Just because it looks like a news article does not make it one. Appearances like layout, graphics, and slickness matter, but these should not distract from the quality and accuracy of the content. To determine those latter qualities, we need to investigate the sources of the article.

Sounds simple, doesn’t it?

However, Green mentioned a study by the Stanford History Education Group which highlighted how historians and university students focused on the superficial instead of digging deep.

Speaking of digging deep, I could find the Stanford group online, but not the documentation about the study from the Crash Course video page. Might Crash Course consider providing a link to such evidence and not just its main sponsors/collaborators?

Time is a human construct. It is on that basis that we are in the year 2019.

If you take the lunar calendar into account, the new year only arrives on 5 Feb. The Year of the Dog makes way for the Year of the Pig.

One of the hottest items that people here will queue endlessly for is bak kwa (pork jerky).

Last week I wondered out loud to my wife if there was something wrong with selling, buying, or pigging out on bak kwa in the Year of the Pig. I should have looked online first because I found this image:

Yes, there is bak kwa in the shape of pig silhouettes. While bak kwa can be eaten all year, I have no doubt that some will take advantage of this once-every-twelve-years joke.

We live in an Information Age because it (information, not bak kwa) is so readily available. But it might just as well be an Ignorant Age if we do not bother to look, or worse, not know how to. In the worst case, we do not know if what we find is valid and reliable.

Piggy Year or not, it is a timeless mindset to be skeptical and a timeless skill set to problem seek and then problem solve. We could all use some timeless reminding of this timeless message:

The illiterate of the 21st century will not be those who cannot read or write, but those who cannot learn, unlearn, and relearn. — Alvin Toffler

Video source

The video above is a preview of a new Crash Course that will be coming soon.

I am looking forward to it as much as the next major blockbuster. While movies entertain, John Green and company have a way of educating that pulls learners in.

I am one of the 8+ million subscribers to their channel. You should be, too, if you have any role in developing information literacy.

There is much truth in the message represented by the graphic embedded in the tweet below.

If I had to split hairs, I would point out that what anyone shares is information, not knowledge. Information becomes knowledge only when it has been reprocessed and internalised. So it is information that is lost, and it never becomes actionable knowledge.

But the fact remains that much of the research conducted by academics does not reach its intended audience, nor does it have the effect it should. That might be one of the reasons why one of the thoughts shared in the article below was for researchers to disseminate their work more widely and clearly.

The truth is out there. Some of it is hidden in academic speak and journals — this used to be the dominant but tedious way of sharing. Now some of that information is shared online more openly, freely, and simply.

The problem now is that there is so much information, misinformation, and disinformation. The sad truth is that we are still struggling to teach students how to solve that problem.

The headlines highlighted in this tweet are why we need:

  • science and experts.
  • to be information and media literate.
  • to follow entities outside our bubbles.

Forbes and NASA have experts who are good at what they do. Both provided commentary on a shared observation. Only one was actually informative — NASA.

If we were information and media literate — collectively digitally literate — we would be skeptical of Forbes’ report and know how to investigate the issue. We would then find NASA’s version of the event and we would be able to evaluate what we find.

Operating outside our bubbles allows us to see what others see. Operate in the Forbes or entertainment bubble and we see only mystery or ignorance. Operate in the scientific bubble and we see more factual information.

That said, I follow You Had One Job on Twitter because it is funny. It is also provocative in that it helps me make critical connections. So while being digitally literate and sourcing expertise are important, it helps to first operate outside one’s bubble.

Yesterday I mentioned how the edtech vendor DRIP — data rich, information poor — approach was like torture. Today I elaborate on one aspect of data-richness and link that to an under-utilised aspect of game-based learning.

The data-richness that some edtech providers tout revolves around a form of data analytics — learning analytics. If they do their homework, they might address the different levels of learning analytics: descriptive, diagnostic, predictive, and prescriptive.

A few years of following trends in learning analytics allows me to distill some problems with vendor-touted data or learning analytics:

  • Having data is not the same as having timely and actionable information
  • While the data is used to improve the technological system, it does not guarantee meaningful learning (a smarter system does not necessarily lead to a smarter student)
  • Such data is collected without users’ knowledge or consent
  • Users have no choice but to participate, e.g., they need to access resources and submit assignments via the institutional LMS
  • The technological system sometimes ignores the existing human system, e.g., coaches and tutors

I define learning analytics and highlight a feature in Pokémon Go to illustrate how data needs to become information to be meaningful to the learner.

First, a seminal definition from Long and Siemens (2011):

… learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs

ERIC source

The processes of measurement, collection, analysis, and reporting are key to analytics. I use a recent but frustrating feature of Pokémon Go to illustrate each.

My PoGo EX Raid Pass.

The Pokémon Go feature is the “EX Raid Pass” invite system (I shorten this to ERP). Players need to be invited to periodic raids to battle, defeat, and catch the rare and legendary Mewtwo. The ERP system seemed as random as a lottery and rewarded about as few players as one.

Even though Niantic (Pokémon Go’s developer) provided vague tips on how to get ERPs, players all over the world became frustrated because they did not know why they were not selected despite playing by the rules and putting in much effort.

To make matters worse, a few players seemed to strike the lottery more than once. At the time of writing, I know of one player who claimed on Facebook that he has eight ERPs for the next invite on 9 Jan 2018.

Eight EX Raid Passes!

Players have swarmed Reddit, game forums, and Facebook groups to crack this nut. Some offered their own beliefs and tips. Much of this was hearsay and pseudoscience, but it was data nonetheless — unverifiable and misleading data.

A few Facebookers then decided to poll ERP recipients about where their EX Raids were. This was the start of measurement as they looked for discrete data points. As the data points grew, the Facebookers compiled lists (data collection).

Such data measurement and collection alone were not enough to help non-ERP players take action. The collected data was messy and no pattern was obvious.

I know of at least one local Pokémon Go player who organised the data as visualisations. He created a tool that placed pins on a map of Singapore to mark potential EX Raid venues. With this tool, it became obvious that locations were reused for EX Raids.

Potential EX Raid hotspots.

Pattern of reuse of venues for EX Raids.

However, such a visualisation was still not information. While the data pointed to specific spots where EX Raids were likely to happen, it still did not provide actionable information on what players could actually do to get an ERP.

To do this, Facebooker-players asked recipients when their ERPs were valid and when they had raided those spots previously. One pattern to emerge was normal raids of any level (1 to 5) at hotspot gyms a few days before EX Raids. So if an EX Raid was likely to happen on Saturday at Gym X, the advice was to hit that gym on Wednesday, Thursday, and Friday to increase the likelihood of receiving an ERP.

Collectively, these actions were a form of analysis because of the attempts to reduce, generalise, and ultimately suggest a pattern of results. This actionable information was reported and communicated online (social media networks) and in-person (auntie and uncle network).
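The measure-collect-analyse-report loop the players stumbled into can be sketched in a few lines. The gym names, dates, and threshold below are invented for illustration; the real lists were crowdsourced by players on Facebook:

```python
# A toy sketch of turning raw data into actionable information.
# Gym names and dates are made up; "appears more than once" is an
# assumed stand-in for the players' actual pattern-spotting.
from collections import Counter

# Measurement and collection: discrete reports of past EX Raid venues.
reports = [
    {"gym": "Gym A", "date": "2018-01-03"},
    {"gym": "Gym B", "date": "2018-01-03"},
    {"gym": "Gym A", "date": "2018-01-10"},
    {"gym": "Gym C", "date": "2018-01-10"},
    {"gym": "Gym A", "date": "2018-01-17"},
]

# Analysis: reduce the raw reports to a pattern — which venues recur?
venue_counts = Counter(r["gym"] for r in reports)
hotspots = [gym for gym, n in venue_counts.most_common() if n > 1]

# Reporting: information a player can act on (raid these gyms beforehand).
print("Likely EX Raid venues:", hotspots)
```

The point of the sketch is only that the same data points are useless as a pile of Facebook comments but useful once reduced to a short list of recurring venues.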

The advice to players seeking ERPs is a reduction of much data, effort, and distilled knowledge from a crowd. It illustrates how data becomes information. I have benefitted from the data-to-information meta process because I followed the advice and received an ERP (see image embedded earlier).

The advice does not constitute a guarantee. With more players using this strategy, more will enter the pool eligible for selection. There is still a lottery, but you increase your chances with the scientific approach. You do not just rely on lucky red underwear; you create your own “luck”.

Now back to edtech DRIP. Edtech solutions that claim to leverage analytics are only good if they not only help the technical system get better at analysis, but also help the teacher and learner take powerful and meaningful action. Edtech solutions that are data rich but information poor only help themselves. Edtech solutions that turn rich data into meaningful information help us.


QR code

Get a mobile QR code app to figure out what this means!
