Another dot in the blogosphere?

I tried to schedule a tweet yesterday with Buffer but could not.

TweetDeck would not load, and going directly to Twitter did not help either. So I searched for sites that monitor uptime.

One said everything was fine.

Another reported local access issues.

Obviously the latter was more accurate because it matched what I was experiencing.

Interestingly, the same could be said about cultural bubbles or individual mindsets. When something changes, we might look for information that reaffirms what we already believe or think we know. This is confirmation bias.

Broader and more critical thinking requires examining more sources, including contradictory ones, and evaluating their worth. This might be called skeptical bias.

Thankfully the Twitter outage did not last long. But it provided me with a timely reminder to check and double-check.

This TED-Ed video outlines how and why we might be so stubborn, or even stupid, about our political leanings.


Video source

It offers some advice when dealing with such a phenomenon:

  1. We need to recognise that we are more biased than we think
  2. We should make fact-checking part of a valued group process
  3. When trying to convince others, frame an argument in their language and values

There are parallels to dealing with novice learners who are exposed to new information or experiences:

  1. They have prior knowledge and experiences which may help or hinder learning
  2. We should make discovery-based investigation part of cooperative learning
  3. We might start with the language and examples that learners are familiar with to help them level up


Video source

This was another episode that focused on hands-on Python coding using Google Colaboratory. It was an application of concepts covered so far on dealing with biased algorithms.

The takeaway for programmers and lay folk alike might be that there is no programme free from undesirable bias. We need to iterate on designs to reduce such bias.
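
The episode’s notebook is best watched in full, but here is a minimal sketch of what one such iteration can look like. Everything in it is invented for illustration: a toy screening dataset in which group “B” scores about eight points lower for equally qualified people, a first pass with a single cut-off that looks neutral but is not, and a second pass that audits and adjusts.

```python
import random

random.seed(42)

# Hypothetical screening data: (score, group, truly_qualified).
# Group "B" scores run about 8 points lower for equally
# qualified people, standing in for bias baked into past data.
def record():
    qualified = random.random() < 0.5
    group = random.choice(["A", "B"])
    score = random.gauss(70 if qualified else 50, 10)
    return (score - 8 if group == "B" else score, group, qualified)

data = [record() for _ in range(10_000)]

def miss_rate(group, threshold):
    """Share of qualified people in `group` wrongly rejected."""
    scores = [s for s, g, q in data if g == group and q]
    return sum(s < threshold for s in scores) / len(scores)

# Iteration 0: one "neutral" cut-off for everyone.
# Qualified B candidates are rejected roughly 2 to 3 times as often.
print({g: round(miss_rate(g, 60), 2) for g in ("A", "B")})

# Iteration 1: audit, then adjust; here, a per-group threshold.
print({g: round(miss_rate(g, t), 2) for g, t in (("A", 60), ("B", 52))})
```

Per-group thresholds are only one contested remedy; the point is the loop itself: measure the bias, adjust the design, and measure again.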


Video source

This was an episode that anyone could and should watch. It focused on bias and fairness as applied in artificial intelligence (AI).

The narrator took care to first distinguish between being biased and being discriminatory. We all have bias (e.g., because of our upbringing), but we should prevent discrimination. Since AI adopts our bias, we need to be more aware of ourselves so as to prevent AI from discriminating harmfully by gender, race, religion, etc.

What are some examples of applied bias? Do a Google image search for “nurse” and you are likely to see photos of women; do the same for “programmer” and you are more likely to see men.

The narrator suggested five sources of bias. I paraphrase them as follows:

  1. Existing data are already biased (e.g., the photo example above)
  2. New training data are unbalanced (e.g., providing photos of faces largely from one race; see the sketch after this list)
  3. Data are reductionist and/or incomplete (e.g., creative writing is difficult to measure, so simpler proxies like vocabulary are used instead)
  4. Positive feedback loops (e.g., past actions are repeated as future ones regardless of context)
  5. Manipulation by harmful agents (e.g., users teaching Microsoft’s Tay to tweet violence and racism)
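
To make the second source concrete, here is a toy sketch in which a model is trained on data that is 95% group A. All numbers and distributions are invented; the point is that the model learns group A’s boundary and quietly fails group B.

```python
import random

random.seed(0)

# Invented set-up: the "true" decision boundary differs by group
# (think of how camera exposure varies across skin tones in
# face datasets).
def sample(group):
    x = random.gauss(0.0 if group == "A" else 1.5, 1.0)
    y = x > (0.0 if group == "A" else 1.5)  # group-specific ground truth
    return x, y

# Unbalanced training set: 95% group A, 5% group B.
train = [sample("A") for _ in range(9_500)] + [sample("B") for _ in range(500)]

# "Training": choose the single threshold with the fewest training errors.
threshold = min(
    (sum((x > t) != y for x, y in train), t)
    for t in [i / 10 for i in range(-30, 31)]
)[1]

for group in ("A", "B"):
    test = [sample(group) for _ in range(2_000)]
    accuracy = sum((x > threshold) == y for x, y in test) / len(test)
    print(group, round(accuracy, 3))
# Group A tests near-perfect; group B fares far worse, because
# the model only ever really saw group A.
```

The fix here is not cleverer code but more representative data.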


Video source

This video provides some insights into why we seem to have a negative bias when it comes to news.

We are wired to pay more attention to bad news. Our brains process such information more thoroughly than good news. This might explain why we focus on one criticism even when we also receive nine plaudits.

The surprising finding might be how social media counters our Debbie Downer tendency. The narrator highlighted studies that found we tend to share and spread more positive content. Why?

We consume news as outside observers, but we use social media as active participants.

So actively sharing positive content might be a coping and countering mechanism against how we are biologically wired.

But how we are wired keeps us vigilant. The point is not to shield ourselves or hide from bad news. That same news keeps us informed so that we can take action.

… it is biased.


Video source

According to the video above, we introduce these forms of bias: interaction (what users teach a system through how they use it), latent (skewed associations already present in the data), and selection (whom the training data represent).

Our technologies are not just tools. They are designed with intent, and even the best intentions are tinged with our biases.

You might know cognitive bias by a different name.


Video source

The cinephile might cite a young Forrest Gump: “Stupid is as stupid does”. The well-read might label this the Dunning-Kruger effect.


Video source

According to the video above (narrated by Stephen Fry, no less!), cognitive bias can take root due to the salience effect (what gets emphasised) and repetition (what gets repeated).

Some might point out that a little knowledge can be a dangerous thing, but — to quote a line from the video — complete ignorance also breeds confidence. This is the Dunning-Kruger effect.

The obvious salve seems to be to inject the stubborn and ignorant with timely information. But pride and bias make for thick skin. So Fry hinted at an alternative strategy: Tackle emotions first. Find ways to connect with those you seek to change.

Learning is a form of change. So if teachers take anything away, it might be this message: Reach to teach.

If you cannot reach them, you cannot teach them.

I watched three different YouTube videos recently and came to the same conclusion: they were all smart moves.


Video source
One of Samsung’s ideas for road safety was mounting cameras on the front of large trucks and projecting video of the road ahead onto the back of each truck for other road users to see. It was a smart example of using what you already have.
 

Video source
This was a “dance”, or choreographed, video with a difference. Most of the performers were not classically trained dancers. Instead, they combined dexterity, coordination, and sheer hard work to create a mesmerising performance. It was a smart case of finding your own niche.
 

Video source
This was a rather technical video. The central idea was that the programmer created a programme that taught itself how to play a video game. It was about artificial intelligence mimicking how we learn, but at a more rapid rate. It was a smart example of pushing the envelope.
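
The video does not spell out the technique, so treat the sketch below as a hypothetical stand-in: a minimal Q-learning loop, one common way a programme can teach itself through trial, error, and reward, played on a made-up one-dimensional game (start at 0, find the goal at 5).

```python
import random

random.seed(1)

# Made-up game: positions 0..5; reaching the goal at 5 pays a reward of 1.
# Actions: -1 = step left, +1 = step right.
GOAL, ACTIONS = 5, (-1, 1)
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def choose(s):
    """Epsilon-greedy: mostly exploit, sometimes explore; break ties randomly."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(500):  # 500 games of trial and error
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = max(0, min(GOAL, s + a))
        reward = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward
        # received plus the best the next position promises.
        q[(s, a)] += alpha * (
            reward + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)]
        )
        s = s2

# The learned policy: typically [1, 1, 1, 1, 1], i.e. always head right.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```

A real game-playing agent swaps the lookup table for a neural network, but the learn-by-reward loop is the same idea, run many times faster than a human could manage.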
 

 
Rising above the three videos, I would guess that most people would see the utility of the first video: It could prevent accidents and thus save lives.

The second video is a creative endeavour that is good to have, but it is not a must-have. People could take it or leave it.

The third video might create fear. I would wager that a few people might cite the fictional Skynet of The Terminator series of movies. They fear that machines will become smarter than us and sentient, and then elect to wipe us off the face of the planet.

Viewed objectively, we might use logic for the first example, choose personal preference for the second, and rely on fantasy for the third. This is despite the fact that creative and disciplined thinking gave rise to all three.

Stupid human bias holds us back. The same thing blocks empathy and prevents learning. We should not confuse uninformed bias with critical thinking. Learn to tell the difference.

