Another dot in the blogosphere?

Posts Tagged ‘ai’


Video source

The gist of this episode might read: Neural networks, anyone?

Neural networks are commonplace, but we might not be aware of them. They are used when Facebook suggests tags for photos, a diagnostic lab analyses cell samples for cancer, or a bank decides whether or not to offer a loan.

So knowing what neural networks are and how they work is important. However, this episode provided only a small taste of both with this schematic.

My marked up version of the PBS/CrashCourse graphic on a basic neural network schematic.

If the input layer is a query we might have and the output layer is an answer, the black box in between is where rules and algorithms break the input down and process it into that answer.

What happens in the black box is still a mystery. We might not care how exactly a social media system knows what tags to suggest for a photo, but we probably want to know why a financial system denies us a loan.
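Mechanically, though, each layer is just weighted sums passed through a squashing function. Here is a minimal sketch in Python; the two-input, two-hidden-neuron shape and every weight are invented for illustration, not taken from the episode.

    import math

    def sigmoid(x):
        # Squash any number into the range 0..1
        return 1 / (1 + math.exp(-x))

    def forward(inputs, hidden_weights, output_weights):
        # Each hidden neuron sums its weighted inputs, then "fires" via sigmoid
        hidden = [sigmoid(sum(i * w for i, w in zip(inputs, ws)))
                  for ws in hidden_weights]
        # The output neuron does the same with the hidden activations
        return sigmoid(sum(h * w for h, w in zip(hidden, output_weights)))

    # Two inputs, two hidden neurons, one output -- all weights invented
    print(forward([0.5, 0.8], [[0.4, -0.6], [0.9, 0.2]], [1.0, -1.3]))

The mystery is not in any single step like this; it is in what millions of such weights collectively encode once they have been trained.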

Perhaps the next episode might shed more light on the black box.


Video source

The second episode of CrashCourse’s AI series focused on how AI learns: Reinforcement, unsupervised, and supervised learning.

  • Reinforcement learning: AI gets feedback from its behaviours.
  • Unsupervised learning: AI learns to recognise patterns by clustering or grouping objects (a minimal sketch follows this list).
  • Supervised learning: AI is presented with objects and their training labels, and learns to associate the two. This is the most common method of training AI and was the focus of the episode.
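To picture the unsupervised case: given only unlabelled points, a clustering routine groups them by proximity without ever being told what the groups mean. A minimal sketch in Python, using the classic k-means idea with invented data:

    def kmeans(points, centres, rounds=10):
        # Repeatedly assign each point to its nearest centre,
        # then move each centre to the mean of its assigned points
        for _ in range(rounds):
            clusters = [[] for _ in centres]
            for x, y in points:
                dists = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centres]
                clusters[dists.index(min(dists))].append((x, y))
            centres = [(sum(x for x, _ in c) / len(c),
                        sum(y for _, y in c) / len(c)) for c in clusters if c]
        return centres

    # Two loose blobs of made-up points; the algorithm finds them unaided
    blobs = [(1, 1), (1.2, 0.8), (0.9, 1.1), (5, 5), (5.1, 4.9), (4.8, 5.2)]
    print(kmeans(blobs, centres=[(0, 0), (6, 6)]))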

Examples of supervised learning by AI include recognising your face among others and distinguishing between relevant and spam email.

Understanding how supervised learning happens broadly is easy. Doing the same at the programmatic level is not. The AI brain does not consist of exact analogues of human neurones. While both seem to have just two actions (fire or not fire; one or zero), an artificial neuron can be programmed to weight its inputs before deciding whether to fire.

The last paragraph might not be easy to picture. The video made this clearer by illustrating how an AI might distinguish between donuts and bagels. The two look alike, but an AI might be taught to tell them apart by considering the diameter and mass of each item, those measurements being the inputs that get weighted during processing.
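A minimal sketch of that idea in Python: a single artificial neuron multiplies each measurement by a weight and fires (“bagel”) only if the weighted sum clears a threshold. The weights and threshold here are invented; a trained network would learn them from labelled examples.

    def classify(diameter_cm, mass_g):
        # Invented weights and threshold -- training would tune these.
        # Bagels tend to be wider and heavier, so larger measurements
        # push the weighted sum up towards the "fire" decision.
        weighted_sum = 0.6 * diameter_cm + 0.05 * mass_g
        return "bagel" if weighted_sum > 10 else "donut"  # fire or not fire

    print(classify(diameter_cm=11, mass_g=100))  # bagel
    print(classify(diameter_cm=8, mass_g=50))    # donut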

The video then went on to illustrate the difference between precision and recall in AI. This is important to AI programming, but not so much in the context of how I might use this video (AI for edtech planning and management).
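For the record, the two measures are simple to state: precision asks how many of the items the AI labelled as donuts really were donuts, while recall asks how many of the actual donuts it managed to find. A quick sketch with made-up counts:

    def precision(true_pos, false_pos):
        # Of everything the AI called a donut, what fraction was right?
        return true_pos / (true_pos + false_pos)

    def recall(true_pos, false_neg):
        # Of all the real donuts, what fraction did the AI catch?
        return true_pos / (true_pos + false_neg)

    # Made-up counts: 8 donuts found, 2 bagels mislabelled, 4 donuts missed
    print(precision(8, 2))  # 0.8
    print(recall(8, 4))     # about 0.67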

This episode scratched the surface of how AI learns in the most basic of ways. I am itching for the next episode on neural networks and deep learning.

The first episode of CrashCourse’s series on artificial intelligence (AI) is as good as the other series created by the group.


Video source

The introductory episode started by making the point that AI is not the scary robot made popular by books or movies. We use everyday AI when we ask a voice assistant to play some music, sort photos using facial recognition, or vacuum clean our floor with a Roomba.

These are ordinary events that we do not question or fear, but they still rely on AI, albeit at a low level. So how did basic AI become commonplace?

AI has no sense organs, so it needs to be fed a lot of data. The availability of now ubiquitous data has enabled the rise of AI.

Then there are the constant improvements in computing power. What a current supercomputer might process in one second would take the IBM 7090, the most advanced computer in 1956, 4735 years to solve.
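Taken at face value, that is an enormous ratio. A back-of-the-envelope conversion (the 4735-year figure is from the video; only the seconds-per-year arithmetic is added here):

    SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million

    # One second of modern supercomputer work = 4735 years on the IBM 7090
    speedup = 4735 * SECONDS_PER_YEAR
    print(f"{speedup:.2e}")  # roughly 1.5e11: a ~150-billion-fold speed-up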

Finally, the prevalence of AI is due to the information that we create and share, and how we transact on the Internet.

So the seemingly rapid rise of AI is due to three main things: Massive data, computing power, and the Internet as we know it.

AI is not terrifying in itself. What should scare us is ignorance of what it is and what it might become through irresponsible design.


Video source

Yay, here is a preview of a YouTube series on artificial intelligence and machine learning by CrashCourse.

Even more yay, it might be another resource to add to the Masters course I facilitate.

While reading and watching pieces on artificial intelligence (AI), I had a moment of… umm.

AI holds much promise. It might eventually help us communicate across languages, solve problems we have and do not yet have, and release us for more creative ventures. But we are not there yet.

Some weeks ago, I listed Wheeler’s levels of AI as a teaching resource: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence. We are still at the narrow level: chatbots, learning analytics, voice assistants, etc.

It is still possible to fool current AI with images. For example, it might have problems differentiating chihuahuas from muffins. This Twitter collection reveals other things that AI easily confuses.

This seems like a dumb differentiation exercise because most humans do it intuitively, and yet it trips up AI. Perhaps what we need is not just artificial intelligence but also artificial idiocy. By this I mean the random, spontaneous, and creative aspects of human thinking and expression.

But I do not know how to make this happen. I am a real idiot.

The video below might run long for something on YouTube, but it is packed with good information for anyone with an interest in artificial intelligence (AI).


Video source

After watching the 40-minute video, I reflected on the responsibility of:

  • Reporting AI — focusing on facts instead of rumour, and on possibilities instead of fantasy
  • Researching and developing AI — being aware that human bias creeps in and shapes what we design
  • Teaching AI — focusing on what is current and countering the popular but simplistic notions of what AI can do
  • Learning AI — having minds open enough to replace ignorance and bias, and brave enough to challenge myths

Artificial Intelligence (AI) in education was one of the sub-topics of a course I facilitated about a month ago. The course has ended, but the learning does not.

Here are five resources that emerged after the course:


Video source


Video source


Video source

Our digital future 5: Artificial Intelligence by Steve Wheeler

AI won’t destroy us, it’ll make us smarter by TNW

The third video and Wheeler’s piece highlighted attempts to categorise AI. Wheeler had three levels and opined that current AI was only at the lowest level (artificial narrow intelligence). A researcher in the video used voice assistants as examples of “weak AI”.

Both of these AI levels were illustrated with examples of structured or guided machine intelligence. Neither is a full-fledged autonomous learning system. This was why I had my own simple categorisation: artificially intelligent and actually intelligent.

AI types



