Another dot in the blogosphere?

Posts Tagged ‘crashcourse’


Video source

This week’s episode on artificial intelligence (AI) focused on robots.

After providing examples of current robots, the host listed three robotic concepts: Localisation, planning, and manipulation.

For a robot to interact with its environment, it needs to know where it is (localisation) and how to move somewhere else (planning). To get there, a robot has to sense its path and take actions that enable its progress (manipulation).
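
Of the three, planning is the easiest to sketch in code. What follows is my own toy illustration, not something from the episode: breadth-first search finding a path through a small grid. Real robots plan over richer maps with costs and uncertainty, but the core idea of searching for a route is the same.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a grid: 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # remember how each cell was reached
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []  # reconstruct the route by walking backwards
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))  # routes around the obstacles
```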

This episode was easier to relate to since the concepts were closely linked to what humans already do innately well. The challenge is creating for robots artificial versions of the abilities that evolutionary time has given us.

Interestingly, games are one arena where robots can train and practise. AI playing games is the topic of the next episode.


Video source

This week’s episode focused on symbolic artificial intelligence (AI). This is where AI represents real-world objects as symbols.

What does symbolic AI look like? Modern video games depend on such AI.

Neural networks rely on huge amounts of data, best guesses, and probabilities. Symbolic AI is the opposite in that it does not require massive data, training, or guesswork. It uses logic and symbols instead.

A symbol can be any letter, number, word, or object. Symbols are linked by relations. If symbols are nouns, then relations are verbs and adjectives.

Symbols as nouns and relations as verbs and adjectives.

A voice assistant like Siri might take a question we ask and turn a noun into a symbol and a verb into a relation.

That is probably as much as I can explain as a lay learner because the next part involved the mathematics of truth tables. The host used these to introduce the logic of AND, OR, NOT, and IF/THEN.
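
As far as I can follow it, the table itself is simple enough to generate in code. This is my own toy illustration, not the host's: IF/THEN (material implication) is the only odd one out, being false only when the first part is true and the second is false.

```python
from itertools import product

# Truth tables for AND, OR, NOT, and IF/THEN (material implication).
print("p     q     AND   OR    NOT p IF/THEN")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:<5} {q!s:<5} {p and q!s:<5} {p or q!s:<5} "
          f"{(not p)!s:<5} {((not p) or q)!s:<5}")
```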

For me it was a cognitive leap to see how such logic statements help AI make inferences and then grow into expert systems.
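
To make that leap smaller, here is a minimal sketch of my own (a toy, not from the episode) of forward chaining: IF/THEN rules fire on known facts until nothing new can be concluded, which is roughly how an expert system derives its answers.

```python
# Toy forward chaining: apply IF/THEN rules until no new facts appear.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),    # IF both THEN conclude
    ({"is_bird", "cannot_fly"}, "maybe_penguin"),
]
facts = {"has_feathers", "lays_eggs", "cannot_fly"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # the rule fires and adds a new fact
            changed = True

print(facts)  # now includes "is_bird" and "maybe_penguin"
```

Unlike a neural network, every conclusion here can be traced back to the rules that produced it.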

But I took this important message away: A neural network remains a black box, but symbolic AI can show you the reasons for its decisions.


Video source

This week’s episode focused on one example of supervised learning — how AI recognises human handwriting. This is a problem that was tackled quite a while ago (during the rise of tablet PCs) and is a safe bet as an example.

The boiled-down basic AI ingredients are:

  1. Labelled datasets of handwriting
  2. Neural network programmed with initial rule sets
  3. Training, testing, and tweaking the rules

The oversimplified process might be: Convert handwritten letters to scanned pixels, allow the neural network to process the pixels, make the neural network learn by comparing its outputs with the labelled inputs, and iterate until it reaches acceptable accuracy.
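
As a hedged sketch of that loop, the code below uses scikit-learn's small built-in digits dataset as a stand-in for the video's handwriting data; the network size and settings are arbitrary choices of mine.

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. A labelled dataset: 8x8 pixel images of handwritten digits.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# 2. A small neural network (the architecture here is arbitrary).
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

# 3. Training: the network compares its outputs with the labels
#    and adjusts its internal rules (weights) accordingly.
net.fit(X_train, y_train)

# The real test: accuracy on examples the network has never seen.
print(accuracy_score(y_test, net.predict(X_test)))
```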

The real test is whether the neural network can read and interpret a previously unseen dataset. The narrator demonstrated how he imported and tweaked such data so that it was suitable for the neural network.

My takeaway was not the details because those are neither my area of expertise nor my focus. It is the observation that the choice of datasets and how they are processed are key.

If there is not enough data or if there is only partial representation of a larger set, then we cannot blame AI entirely for mistakes. We make the data choices and their labels, so the fault is ours.


Video source

This episode introduced terminology at the heart of neural networks.

  • Architecture: The structure of the network and the connections between its neurones.
  • Weights: Numbers that fine-tune the computations.
  • Optimisation: Improving the architecture and weights.
  • Loss function: A measure of the errors the AI makes in its predictions.
  • Backpropagation: Feeding errors back through the network to adjust the weights and improve the computing process.
  • Local optimal solution: The best fit within a limited set of conditions.
  • Global optimal solution: The best fit across all conditions.
  • Learning rate: How much the weights get adjusted during backpropagation.
  • Fitting to training data: Providing relevant information for meaningful output.
  • Overfitting: Allowing AI to find strong but meaningless correlations, e.g., between divorce rates and margarine consumption, or revenue from skiing and death by tangled bedsheets.

Correlation between divorce rates and margarine consumption.

Correlation between revenue from skiing and death by tangled bedsheets.
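
To tie several of the terms above together, here is a minimal sketch of my own (not from the episode) of a single neurone learning by gradient descent: the loss function measures the prediction error, backpropagation supplies the correction, and the learning rate scales how much each weight moves.

```python
import random

# Toy data: y = 2x + 1 with a little noise.
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(10)]

w, b = 0.0, 0.0       # the weights start untrained
learning_rate = 0.01  # how much the weights get adjusted per step

for epoch in range(200):
    for x, y in data:
        prediction = w * x + b
        error = prediction - y  # the loss here is the squared error
        # Backpropagation for this tiny model is just the chain rule:
        w -= learning_rate * 2 * error * x
        b -= learning_rate * 2 * error

print(w, b)  # should settle near 2 and 1, the global optimal solution
```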

My guess is that human bias enters mainly through the weights and the training data. This could explain why current facial recognition has problems identifying people with dark skin.


Video source

The gist of this episode might read: Neural networks, anyone?

Neural networks are commonplace, but we might not be aware of them. They are used when Facebook suggests tags for photos, a diagnostic lab analyses cell samples for cancer, or a bank decides whether or not to offer a loan.

So knowing what neural networks are and how they work is important. However, this episode provided only a small taste of both with this schematic.

My marked up version of the PBS/CrashCourse graphic on a basic neural network schematic.

If the input layer is a query we might have and the output layer is an answer, the black box in between is where rules and algorithms break the input down and process it.
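
As a rough sketch of that picture (the sizes and numbers below are my own inventions, not from the video), the black box is just one or more hidden layers doing weighted sums and squashing. Each step is easy to compute but hard to interpret.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer: a query encoded as four numbers (made-up values).
x = np.array([0.2, 0.9, 0.1, 0.5])

# The "black box": a hidden layer of three neurones with random weights.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(1, 3))

hidden = np.tanh(W1 @ x)  # weighted sums passed through a squashing function
output = 1 / (1 + np.exp(-(W2 @ hidden)))  # output layer: an answer from 0 to 1

print(output)  # computing this is easy; explaining it is the hard part
```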

What happens in the black box is still a mystery. We might not care how exactly a social media system knows what tags to suggest for a photo, but we probably want to know why a financial system denies us a loan.

Perhaps the next episode might shed more light on the black box.


Video source

The second episode of CrashCourse’s AI series focused on how AI learns: Reinforcement, unsupervised, and supervised.

  • Reinforcement learning: AI gets feedback on the consequences of its behaviours.
  • Unsupervised learning: AI learns to recognise patterns by clustering or grouping objects.
  • Supervised learning: AI is presented with objects and training labels, and learns to associate the two. This is the most common method of training AI and was the focus of the episode.

Examples of supervised learning by AI include recognising your face among others and distinguishing relevant email from spam.

Understanding broadly how supervised learning happens is easy. Doing the same at the programmatic level is not. The AI brain does not consist of exact analogues of human neurones. While both seem to have just two actions (fire or not fire; one or zero), an artificial neurone can be programmed to weight its inputs before firing.

The last paragraph might not be easy to picture. The video made this clearer by illustrating how an AI might distinguish between donuts and bagels. Both look alike, but an AI might be taught to tell the difference by considering the diameter and mass of each item; these measurements are the inputs that the neurone weights during its processing.
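
A minimal sketch of such a neurone is below. The weights and threshold are invented for illustration; they are not the video's numbers.

```python
def neurone(mass_g, diameter_cm):
    """A single artificial neurone: weighted sum, then fire or not fire."""
    # Invented weights: more mass pushes towards "bagel",
    # more diameter pushes towards "donut".
    weighted_sum = 0.05 * mass_g - 0.4 * diameter_cm
    return 1 if weighted_sum > 0 else 0  # 1 = bagel, 0 = donut

print(neurone(mass_g=100, diameter_cm=10))  # 5.0 - 4.0 = 1.0 -> bagel (1)
print(neurone(mass_g=50, diameter_cm=9))    # 2.5 - 3.6 < 0   -> donut (0)
```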

The video then went on to illustrate the difference between precision and recall in AI. This is important to AI programming, but not so much in the context of how I might use this video (AI for edtech planning and management).
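
For the record, the two measures reduce to simple ratios; a quick sketch with made-up counts:

```python
# Made-up counts from a hypothetical donut/bagel classifier.
true_positives = 8   # bagels correctly called bagels
false_positives = 2  # donuts wrongly called bagels
false_negatives = 4  # bagels wrongly called donuts

precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # about 0.67

print(f"precision={precision:.2f}, recall={recall:.2f}")
```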

This episode scratched the surface of how AI learns in the most basic of ways. I am itching for the next episode on neural networks and deep learning.

The first episode of CrashCourse’s series on artificial intelligence (AI) is as good as the other series created by the group.


Video source

The introductory episode started by making the point that AI is not the scary robot made popular by books and movies. We use AI every day when we ask a voice assistant to play some music, sort photos using facial recognition, or vacuum our floors with a Roomba.

These are ordinary events that we do not question or fear, but they still involve AI at a low level. So how did basic AI become commonplace?

AI has no sense organs, so it needs to be fed a lot of data. The availability of now ubiquitous data has enabled the rise of AI.

Then there are the constant improvements in computing power. What a current supercomputer might process in one second would take the IBM 7090, the most advanced computer in 1956, about 4735 years to solve.

Finally, the commonness of AI is due to the information that we create and share, and how we transact on the Internet.

So the seemingly rapid rise of AI is due to three main things: Massive data, computing power, and the Internet as we know it.

AI is not terrifying in itself. What should scare us is ignorance of what it is and what it might become through irresponsible design.


Video source

Yay, here is a preview of a YouTube series on artificial intelligence and machine learning by CrashCourse.

Even more yay, it might be another resource to add to the Masters course I facilitate.

