Another dot in the blogosphere?

Posts Tagged ‘learning’


Video source

This week’s episode on artificial intelligence (AI) focused on reinforcement learning. This reminded me of the very old school of behaviourism. In this form of learning, AI is “rewarded” for learning how to do something on its own.

The example in the video was learning how to walk. Instead of providing a robot with exact instructions on limb angles, speeds, forces, etc., it learns to walk by trial and error. If it stays up longer and moves further, it gets simple rewards equivalent to “good job” and “do that again”.

The episode introduced new concepts of agent, environment, state, value, policy, and actions.

If an AI such as a robot plays a game, the robot is the agent and the game space is its environment. The AI’s state might include its location and what it senses. Values are attached to the AI’s iterations of trial and error: higher values for good attempts, lower values for bad ones.

A policy seems like an overall strategy that the AI uses to get a reward efficiently. It might rely on different actions to do this. It might exploit an existing successful strategy or it might explore a new one.
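
To make those terms concrete, here is a toy sketch in Python. It is my invention, not the video’s, and the actions, rewards, and numbers are purely illustrative.

```python
import random

# Toy explore-vs-exploit loop (invented example, not from the video).
# The agent tries actions, the environment returns a reward, and the
# agent updates the value it attaches to each action.

ACTIONS = ["small_step", "big_step"]   # hypothetical actions
values = {a: 0.0 for a in ACTIONS}     # value attached to each action
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                          # fraction of time spent exploring

def reward(action):
    # Pretend environment: big steps travel further but sometimes
    # topple the robot (no reward at all).
    if action == "big_step":
        return 0.0 if random.random() < 0.3 else 2.0
    return 1.0

for trial in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)       # explore something new
    else:
        action = max(values, key=values.get)  # exploit the best so far
    r = reward(action)
    counts[action] += 1
    # Nudge the stored value toward the rewards actually observed
    values[action] += (r - values[action]) / counts[action]

print(values)  # the better action should end up with the higher value
```

The EPSILON setting is a policy in miniature: mostly exploit what already works, occasionally explore something new.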


Video source

When AI does unsupervised learning, it does so without training labels or known answers. People do this all the time, e.g., observing and mimicking the behaviour of others.

A key strategy for AI is creating categories and patterns for new or unknown entities. This is called unsupervised clustering. To create categories, AI must know what to measure and how to represent it.

The video helps make this overall process clearer with examples of image recognition, i.e., grouping similar-looking flowers into their own species groups and differentiating unlabelled images.
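
As a rough sketch of the same idea (assuming scikit-learn is available; this is not the video’s code), k-means can cluster the classic iris flower measurements without ever seeing a species label:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# Each iris flower is represented by four measurements: this is the
# "what to measure and how to represent it" part.
iris = load_iris()

# KMeans groups the flowers into 3 clusters without ever seeing labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(iris.data)

# The dataset does carry species labels, but we only peek at them
# afterwards to see how well the unlabelled clusters line up.
for c in range(3):
    species, counts = np.unique(iris.target[clusters == c], return_counts=True)
    print(f"cluster {c}:", dict(zip(species.tolist(), counts.tolist())))
```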

While this video focused on the basics of image recognition with AI, the next promises to focus on natural language processing.

The title of my reflection today might read like an oxymoron, but you would be surprised how many novice teachers and new professors do not distinguish the two.

What sparked this reminder? A tweet from Alec Couros.

Teaching (however it is conducted) does not guarantee learning (however it is measured). This does not discount the importance of good teaching; it emphasises how other factors influence learning.

Bearing this in mind, we might realise that teaching should not be used as a shield against change.

The danger of lectures...

If students are to learn, they must be actively and meaningfully involved.

Learning is not a spectator sport.

One active learning strategy is to get learners to peer teach.

To teach is to learn twice.

As teachers provide these learning opportunities to their students, they need to recognise that an expert’s knowledge and experience allow them to see how separate pieces fit together. Novices to the game do not.

Teaching is neat. Learning is messy.

Teaching is not learning and does not guarantee that learning happens. The first thing teachers forget is what it is like to struggle with learning. It takes empathy, humility, and an open mindset to unlearn that.


Video source

This week’s episode focused on one example of supervised learning — how AI recognises human handwriting. This is a problem that was tackled quite a while ago (during the rise of tablet PCs) and is a safe bet as an example.

The boiled-down basic AI ingredients are:

  1. Labelled datasets of handwriting
  2. Neural network programmed with initial rule sets
  3. Training, testing, and tweaking the rules

The oversimplified process might be: convert handwritten letters to scanned pixels, let the neural network process the pixels, have the network learn by comparing its outputs with the labelled inputs, and reiterate until it reaches acceptable accuracy.

The real test is whether the neural network can read and interpret a previously unseen dataset. The narrator demonstrated how he imported and tweaked such data so that it was suitable for the neural network.
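
A minimal sketch of those three ingredients, plus the unseen-data test, using scikit-learn’s small digits dataset rather than whatever the narrator used; the network size and settings here are illustrative only:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Labelled dataset: 8x8 pixel scans of handwritten digits, each
#    paired with the digit it actually shows.
digits = load_digits()

# Hold back a quarter of the images so the final test uses data the
# network has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# 2. A small neural network with an initial (random) rule set.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

# 3. Training: the network compares its outputs with the labels and
#    tweaks its internal weights, iterating until the fit is acceptable.
net.fit(X_train, y_train)

print("accuracy on unseen digits:", net.score(X_test, y_test))
```

The held-back quarter of the images plays the role of the previously unseen dataset.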

My takeaway was not the details because that is neither my area of expertise nor my focus. It is the observation that the choice of datasets and how they are processed is key.

If there is not enough data or if there is only partial representation of a larger set, then we cannot blame AI entirely for mistakes. We make the data choices and their labels, so the fault is ours.

Recently I had the opportunity to provide my perspective on a policy and administrative design (PAD) of university courses. I call it a PAD because the pedagogy and learning design seemed secondary.

Consider two institutes of higher learning (IHLs). IHL A is more typical in that it offers 24 hours of class contact time over 12 weeks (i.e., 2-hour classes); IHL B offers 24 hours of class time over 6 weeks (i.e., a truncated semester with 4-hour classes).

Despite the stress placed on learners in IHL B, its leaders rationalise that the number of teaching or contact hours from the truncated semester is effectively the same as in a typical semester.

How? If you factor in public holidays, semester breaks, exam weeks, and other calendar interruptions, you might get similar amounts of contact time. If you design a curriculum on a spreadsheet, you might buy into that argument.

Now consider some nuance by focusing on learning. Learning is like baking cookies. You might need to leave them in the oven for 15 minutes at 180°C. If you play the numbers game, you argue that you can bake the cookies in 7.5 minutes at 360°C.

However, you cannot bake cookies faster by simply increasing the temperature. You will burn them because rushing the physics affects the chemistry of baking.

Learning also takes time. Teaching might enable learning, as does the time allocated for it. Teaching ability and the time during and between classes are within the control of course designers. If we rely more on PAD than on what research and practice tell us, we risk burning out our learners.

Students will still learn, but they will feel the heat of being rushed and overloaded. Assignments and assessments become even more dreaded deadlines (emphasis on dead). Left unchecked, the learning that happens, if any at all, becomes strategic and superficial instead of reflective and deep.

You need only take a few minutes to observe how kids and some young adults use their bags and other property to reserve tables in food courts or fast food joints. So I wonder if parents or schools have not taught kids to value their belongings.

I resumed my teaching semester at a local university recently. During lunch, an undergrad sharing my table left his iPhone X in his place. If he had done this anywhere else in the world, he would be an ex-iPhone owner.

Maybe I am just getting old and judgemental. But maybe I am right and should offer what I once thought were unnecessary life lessons.

Then again, maybe I do not need to teach anything for someone else to learn. Experiencing loss is a harsh but effective lesson.


Video source

The second episode of CrashCourse’s AI series focused on how AI learns: reinforcement, unsupervised, and supervised learning.

  • Reinforcement learning: AI gets feedback from its behaviours.
  • Unsupervised learning: AI learns to recognise patterns by clustering or grouping objects.
  • Supervised learning: AI is presented with objects and training labels, and associates the two. This is the most common method of training AI and was the focus of the episode.

Examples of supervised learning by AI include recognising your face among others and distinguishing between relevant and spam email.

Understanding how supervised learning happens broadly is easy. Doing the same at the programmatic level is not. The AI brain does not consist of analogues of human neurones. While both seem to have just two actions (fire or not fire; one or zero), an AI neuron can be programmed to weight its inputs before deciding whether to fire.

The last paragraph might not be easy to picture. The video made this clearer by illustrating how an AI might distinguish between donuts and bagels. Both look alike, but an AI might be taught to tell the difference by considering the diameter and mass of each item, with each measurement weighted to influence the processing.
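
A toy neuron with made-up weights might make that concrete. The measurements are the inputs, and the weights decide how much each input counts:

```python
# Toy artificial neuron (the weights and threshold are made up).
# The measurements are the inputs; the weights decide how much each
# input matters before the neuron "fires" (one) or does not (zero).

def fires(diameter_cm, mass_g, w_diameter=-0.5, w_mass=0.06, bias=1.0):
    signal = w_diameter * diameter_cm + w_mass * mass_g + bias
    return signal > 0   # say True means "bagel", False means "donut"

# Bagels tend to be denser than donuts of the same diameter.
print(fires(diameter_cm=10, mass_g=100))  # heavier ring -> bagel
print(fires(diameter_cm=10, mass_g=50))   # lighter ring -> donut
```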

The video then went on to illustrate the difference between precision and recall in AI. This is important to AI programming, but not so much in the context of how I might use this video (AI for edtech planning and management).
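
For the record, the two measures are simple ratios. A quick illustration with invented spam-filter counts:

```python
# Precision vs recall with invented spam-filter counts:
true_positives = 90    # spam correctly flagged
false_positives = 10   # good mail wrongly flagged as spam
false_negatives = 30   # spam that slipped through

precision = true_positives / (true_positives + false_positives)  # 0.90
recall = true_positives / (true_positives + false_negatives)     # 0.75

print(f"precision: {precision:.2f} (how often a 'spam' flag is correct)")
print(f"recall:    {recall:.2f} (how much of the spam gets caught)")
```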

This episode scratched the surface of how AI learns in the most basic of ways. I am itching for the next episode on neural networks and deep learning.

