Another dot in the blogosphere?

Posts Tagged ‘intelligence’

Video source

The second episode of CrashCourse’s AI series focused on how AI learns: Reinforcement, unsupervised, and supervised.

  • Reinforcement learning: AI gets feedback on its behaviours.
  • Unsupervised learning: AI learns to recognise patterns by clustering or grouping objects.
  • Supervised learning: AI is presented objects with training labels and associates the two. This is the most common method of training AI and was the focus of the episode.

Examples of supervised learning by AI include recognising your face among others and distinguishing between relevant and spam email.

Understanding how supervised learning happens broadly is easy. Doing the same at the programmatic level is not. The AI brain does not consist of analogues of human neurones. While both seem to have just two outputs (fire or not fire; one or zero), an artificial neuron can be programmed to weight its inputs before deciding whether to fire.

The last paragraph might not be easy to picture. The video made this clearer by illustrating how an AI might distinguish between donuts and bagels. Both look alike, but an AI might be taught to tell the difference by considering the diameter and mass of each item, with learned weights deciding how much each measurement influences the final decision.
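The donut-versus-bagel idea can be sketched as a single artificial neuron: each measurement is multiplied by a weight, the products are summed, and the neuron fires only if the sum clears a threshold. The weights and numbers below are invented for illustration, not taken from the video; a real system would learn them from labelled examples.

```python
# A single artificial neuron that guesses "donut" or "bagel" from two
# measurements. The weights and bias here are made up for illustration;
# supervised learning would tune them from labelled training examples.

def classify(diameter_cm, mass_g):
    w_diameter = -0.5   # wider items push the sum towards "donut"
    w_mass = 0.1        # heavier, denser items push it towards "bagel"
    bias = -3.0
    weighted_sum = w_diameter * diameter_cm + w_mass * mass_g + bias
    # The neuron "fires" (one) for bagel, stays silent (zero) for donut.
    return "bagel" if weighted_sum > 0 else "donut"

print(classify(diameter_cm=9, mass_g=100))  # dense ring: "bagel"
print(classify(diameter_cm=10, mass_g=50))  # light and airy: "donut"
```

The point is the shape of the computation, not the particular numbers: training is the process of nudging those weights until the firing decisions match the labels.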

The video then went on to illustrate the difference between precision and recall in AI. This is important to AI programming, but not so much in the context of how I might use this video (AI for edtech planning and management).
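For anyone who does want the distinction: precision asks, of the items the AI flagged as positive, how many truly were; recall asks, of the truly positive items, how many the AI found. A minimal sketch using an invented set of spam-filter predictions:

```python
# Precision vs recall for a toy spam filter. The labels below are invented.
truth = ["spam", "spam", "spam", "ham", "ham", "ham"]   # what each email really is
preds = ["spam", "spam", "ham", "spam", "ham", "ham"]   # what the filter said

true_pos = sum(1 for t, p in zip(truth, preds) if t == "spam" and p == "spam")
false_pos = sum(1 for t, p in zip(truth, preds) if t == "ham" and p == "spam")
false_neg = sum(1 for t, p in zip(truth, preds) if t == "spam" and p == "ham")

precision = true_pos / (true_pos + false_pos)  # of flagged mail, how much was spam?
recall = true_pos / (true_pos + false_neg)     # of actual spam, how much was flagged?

print(round(precision, 2))  # 0.67
print(round(recall, 2))     # 0.67
```

A filter can raise recall by flagging everything (at the cost of precision), or raise precision by flagging almost nothing (at the cost of recall), which is why the two are reported together.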

This episode scratched the surface of how AI learns in the most basic of ways. I am itching for the next episode on neural networks and deep learning.

The first episode of CrashCourse’s series on artificial intelligence (AI) is as good as the other series created by the group.

Video source

The introductory episode started by making the point that AI is not the scary robot made popular by books or movies. We use everyday AI when we ask a voice assistant to play some music, sort photos using facial recognition, or vacuum clean our floor with a Roomba.

These are ordinary events that we do not question or fear, but they still rely on AI at a low level. Regardless, how did basic AI become commonplace?

AI has no sense organs, so it needs to be fed a lot of data. The availability of now ubiquitous data has enabled the rise of AI.

Then there are the constant improvements in computing power. What a current supercomputer might process in one second would have taken the IBM 7090 — the most advanced computer in 1956 — 4735 years to solve.

Finally, the commonness of AI is due to the information that we create and share, and how we transact on the Internet.

So the seemingly rapid rise of AI is due to three main things: Massive data, computing power, and the Internet as we know it.

AI is not terrifying in itself. What should scare us is ignorance of what it is and what it might become through irresponsible design.

Video source

Yay, here is a preview of a YouTube series on artificial intelligence and machine learning by CrashCourse.

Even more yay, it might be another resource to add to the Masters course I facilitate.

While reading and watching pieces on artificial intelligence (AI), I had a moment of… umm.

AI holds much promise. It might eventually help us communicate across languages, solve problems we have and do not yet have, and release us for more creative ventures. But we are not there yet.

Some weeks ago, I listed Wheeler’s levels of AI as a teaching resource: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence. We are still at the narrow level — chat bots, learning analytics, voice assistants, etc.

It is still possible to fool current AI with images. For example, it might have problems differentiating chihuahuas from muffins. This Twitter collection reveals other things that AI easily confuses.

This seems like a dumb differentiation exercise, but most humans do this intuitively. Perhaps what we need is not just artificial intelligence but also artificial idiocy. By this I mean the random, spontaneous, and creative aspects of human thinking and expression.

But I do not know how to make this happen. I am a real idiot.

Video source

Is “emotional intelligence” a skillset?

According to key research outlined in the video above, EI can be predicted by a person’s general intelligence, agreeableness, and sex. So those who tout the ability to increase EI might simply be relabelling existing traits.

But might improved cognitive reasoning and problem-solving also increase EI?

The research summarised in the video suggested that students who practised perspective-taking and reducing aggression or distress were better able to solve emotional problems.

So the question of EI as a skillset is moot. The more important question might be why EI seems to be valued later (in adult working life) rather than sooner (in schooling). This is not to say that EI is not important earlier.

The larger issue might be how academics are still valued and pursued over almost everything else. This sets the tone for what children should focus on and maladjusts them later in life.

The video below might run long for something on YouTube, but it is packed with good information for anyone with an interest in artificial intelligence (AI).

Video source

After watching the 40-minute video, I reflected on the responsibility of:

  • Reporting AI — focusing on facts instead of rumour, and on possibilities instead of fantasy
  • Researching and developing AI — being aware that human bias creeps in and shapes what we design
  • Teaching AI — focusing on what is current and countering popular but simplistic notions of what AI can do
  • Learning AI — having minds open enough to replace ignorance and bias, and brave enough to challenge myths

I reviewed my archive of notes on artificial intelligence (AI) in general and in education. This was for a Masters class I am facilitating. Here are a few sources and highlights.

Why Education Should Become More Like Artificial Intelligence

  • We are already bionic: “Our natural senses and functions are supplemented by computers and mobile phones (which relieve our brains of some of their data storage and processing burdens). AI is making us smarter. It helps humans get and process information in ways that humans on their own cannot.”

Robots That Act Like Humans Are a Waste of Time

  • Make AI-driven robots useful, not mimics of humans.

What Does It Mean to Prepare Students for a Future With Artificial Intelligence?

  • AI should complement, not replace humans
  • Stop mystifying or fear-mongering AI
  • AI is for/by everyone, not just computer scientists (we need emotions, psychology, philosophy, ethics, design thinking)
  • Update computer science curricula to be more current and relevant

Educators on Artificial Intelligence: Here’s the One Thing It Can’t Do Well

  • Building and fostering meaningful relationships with students (socio-emotional role — as highlighted by Maha Bali in next article)

Against the 3A’s of EdTech: AI, Analytics, and Adaptive Technologies in Education
Maha Bali’s five questions:

  1. Which educational problem are you trying to solve?
  2. What human solutions to this problem exist? Why aren’t you investing in those?

  3. What harm could come to stakeholders from using this tool?

  4. In what ways might this tool disproportionately harm less privileged learners and societies? In what ways might it reproduce inequality?

  5. How much have actual teachers and learners on the ground been involved in or consulted on the design of these tools?
