Another dot in the blogosphere?

Posts Tagged ‘learning’

As good as Mashable might be for creating awareness of different aspects of life, I question the wisdom of relying on it for “apps to help you learn something new”.

I am not saying that the ten apps it suggested are not effective. Apps for learning are a dime a dozen, so it might help to get a recommendation on which one to use. However, I question the specificity of this ask.

The apps need to have a very specific focus, e.g., a particular language, playing the guitar, financial investments. There is nothing wrong with that as long as the learner has a clear desire and need to use it.

Such apps take a long time to develop and need constant updating. When educators first learn about mobile apps, they tend to approach them from content fields: Is there an app for ABC? How might we develop an app for XYZ? They do not realise the amount of time, effort, talent, and money it takes to get one such app off the ground, much less maintain it.

For educators who do not code or have their own startups, I suggest switching from this narrow and content-specific view to a broader one. Look at what generic apps like YouTube, Instagram, or Twitter might offer. These involve content curation and creation by educators and/or their students. They also require blended approaches of using these apps with existing environments, methods, and resources. This lowers the entry barrier to app-enabled or app-assisted learning.


Video source

It is no secret that I admire the Green brothers. They are responsible for some of the best YouTube videos that are intentionally and accidentally educational.

In this video, John Green explained why he keeps learning despite long being out of school. If you want the product without the process, John concluded: “New learning can reshape old learning, and because learning is a way of seeing connection.”

But his process of storytelling is the learning process itself and worth the four minutes of initial consumption. Just as John had to search for more information by following facts down a rabbit hole, personal learning requires more time, e.g., to reflect, to create, to share, to critique.

If there is just one thing I wish all teachers and educators internalised, it is this finding by John Hattie:

… the evidence is that the biggest effects on student learning occur when teachers become learners of their own teaching, and when students become their own teachers.

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

This book was not available in a local library. I could only find a non-borrowable reference in a restricted library near me, but Google Books has this snippet.

Hattie quote source.

If we internalise this, it creates a shift in mindset and practice that separates schooling from education, and teachers from educators.


Video source

This week’s episode on artificial intelligence (AI) focused on reinforcement learning. This reminded me of the very old school of behaviourism. In this form of learning, AI is “rewarded” for learning how to do something on its own.

The example in the video was learning how to walk. Instead of being given exact instructions on limb angles, speeds, forces, etc., the robot learns to walk by trial and error. If it stays up longer and moves further, it gets simple rewards equivalent to “good job” and “do that again”.

The episode introduced new concepts of agent, environment, state, value, policy, and actions.

If an AI such as a robot plays a game, the robot is the agent and the game space is its environment. The AI’s state might include its location and what it senses. Values are attached to the AI’s iterations of trial and error: higher values for good attempts, lower values for bad ones.

A policy seems like an overall strategy that the AI uses to get a reward efficiently. It might rely on different actions to do this. It might exploit an existing successful strategy or it might explore a new one.
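
If I had to sketch those pieces in code, it might look like this tiny tabular Q-learning loop in Python. The one-dimensional “walk to the goal” environment, the reward values, and the learning settings are my own illustrative assumptions, not anything taken from the video.

```python
# A toy sketch (not the video's code): tabular Q-learning on a one-dimensional walk.
# The agent starts at position 0 and is rewarded for reaching position 4.
import random
from collections import defaultdict

ACTIONS = [+1, -1]                       # the actions: step right or step left
START, GOAL = 0, 4                       # the environment: positions 0..4
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # assumed learning rate, discount, exploration rate

Q = defaultdict(float)                   # the value of each (state, action) pair

def policy(state):
    """Explore a random action some of the time, otherwise exploit the best known one."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):               # many iterations of trial and error
    state = START
    while state != GOAL:
        action = policy(state)
        next_state = max(0, min(GOAL, state + action))   # stay inside the environment
        reward = 1.0 if next_state == GOAL else 0.0      # the "good job, do that again" signal
        # Nudge the value of this state-action pair towards the reward it just earned
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: the best action from each state (here, +1, towards the goal)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```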


Video source

When AI does unsupervised learning, it does so without training labels or known answers. People do this all the time, e.g., observing and mimicking the behaviour of others.

A key strategy for AI is creating categories and patterns for new or unknown entities. This is called unsupervised clustering. To create categories, AI must know what to measure and how to represent it.

The video helps make this overall process clearer with examples of image recognition, i.e., grouping similar-looking flowers into their own species groups and telling unlabelled images apart.
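
A rough sketch of that idea in Python, assuming scikit-learn and its bundled iris flower measurements (my choice of library and dataset, not necessarily what the video used):

```python
# Illustrative sketch: clustering flowers without using their labels.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

flowers = load_iris()        # we use only the measurements and ignore the species labels
features = flowers.data      # what to measure: sepal and petal lengths and widths

# Ask for three clusters and let the algorithm group similar-looking flowers together
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

print(kmeans.labels_[:10])   # cluster assignments discovered without any known answers
```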

While this video focused on the basics of imaging with AI, the next promises to focus on natural language processing.

The title of my reflection today might read like an oxymoron, but you would be surprised how many novice teachers and new professors do not distinguish the two.

What sparked this reminder? A tweet from Alec Couros.

Teaching (however it is conducted) does not guarantee learning (however it is measured). This does not discount the importance of good teaching; it emphasises how other factors influence learning.

Bearing this in mind, we might realise that we should not use teaching as a shield against change.

The danger of lectures...

If students are to learn, they must be actively and meaningfully involved.

Learning is not a spectator sport.

One active learning strategy is to get learners to peer teach.

To teach is to learn twice.

As teachers provide these learning opportunities to their students, they need to recognise that an expert’s knowledge and experience allows them to see how separate pieces fit together. Novices to the game do not.

Teaching is neat. Learning is messy.

Teaching is not learning and does not guarantee that learning happens. The first thing teachers forget is what it is like to struggle with learning. It takes empathy, humility, and an open mindset to unlearn that.


Video source

This week’s episode focused on one example of supervised learning — how AI recognises human handwriting. This is a problem that was tackled quite a while ago (during the rise of tablet PCs) and is a safe bet as an example.

The boiled-down basic AI ingredients are:

  1. Labelled datasets of handwriting
  2. Neural network programmed with initial rule sets
  3. Training, testing, and tweaking the rules

The oversimplified process might be: convert handwritten letters to scanned pixels, let the neural network process the pixels, have it learn by comparing its outputs with the labelled inputs, and iterate until it reaches acceptable accuracy.

The real test is whether the neural network can read and interpret a previously unseen dataset. The narrator demonstrated how he imported and tweaked such data so that it was suitable for the neural network.
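
A rough sketch of that train-then-test process in Python, assuming scikit-learn’s bundled handwritten digits and a small neural network (my choices for illustration, not the narrator’s):

```python
# Illustrative sketch: supervised learning on labelled handwriting.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()       # a labelled dataset of handwritten digits, already scanned to pixels

# Hold back a quarter of the data so we can test on handwriting the network has never seen
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network; training compares its outputs with the labels and tweaks the weights
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# The real test: accuracy on the previously unseen digits
print("accuracy on unseen digits:", net.score(X_test, y_test))
```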

My takeaway was not the details because those are neither my area of expertise nor my focus. It was the observation that the choice of datasets and how they are processed is key.

If there is not enough data, or if the data only partially represents a larger set, then we cannot blame the AI entirely for its mistakes. We choose the data and its labels, so the fault is ours.

