Another dot in the blogosphere?

Posts Tagged ‘ai’

Somehow this, AI Is Making It Extremely Easy for Students to Cheat, is still news. That is because some of us have been schooled to think a certain way.

The AI in question is Wolfram|Alpha, a tool that can help students solve mathematics problems, among many other tasks. Traditionalists might consider this cheating because students let the tool do the heavy lifting.

My response is this: What is a tool for if not to make a task more efficient and/or effective?

Some of us might be old enough to recall when math teachers did not want students to use calculators because they feared students would become lazy. Two hundred years ago, some people worried about students preferring to write on paper (see tweet below). Travel further back in time and being able to read on your own was cheating because you did not need a lecturer to read to you.

Technology has been feared and reviled through the ages because it represents change (click the image below for full size). The changes are in mindsets, attitudes, and behaviours. People like the idea of change, but not the process of change itself.

What schools call cheating the rest of the world calls cooperating or collaborating. For example, you cannot consult your neighbour during a typical written exam. However, you can — and might be expected to — consult colleagues around you at work.

The creator of Wolfram|Alpha reportedly designed the tool to pull users through such a shift in thinking:

“Mechanical math,” Wolfram argues, “is a very low level of precise thinking.” Instead, Wolfram believes that we should be emphasizing computational thinking—something he describes as “trying to formulate your thoughts so that you can explain them to a sufficiently smart computer.” This has also been called computer-based math. Essentially, knowing algebra in today’s technology-saturated world won’t get you very far, but knowing how to ask a computer to do your algebra will. If students are making this shift, in his mind, they’re just ahead of the curve.
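Wolfram’s distinction can be made concrete in code. Rather than grinding through the mechanical steps by hand, you formulate the problem precisely and let the machine compute. A minimal sketch in Python using only the standard library (the function and the example equation are mine, not from the article):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0, smallest first."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    return sorted(((-b - root) / (2 * a), (-b + root) / (2 * a)))

# x^2 - 5x + 6 = 0  ->  x = 2 or x = 3
print(solve_quadratic(1, -5, 6))  # [2.0, 3.0]
```

The thinking happens in formulating the coefficients and interpreting the roots; the mechanical arithmetic is delegated to the machine.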

Wolfram|Alpha could be a more significant development than Google or YouTube. All three help us when we have a problem that we can boil down to questions. Google and YouTube are broad-spectrum in that they give us many results. Wolfram|Alpha is razor-focused and cuts to the chase by providing specific answers.

Wolfram|Alpha does not take away our need to think. It helps us focus on better ways to think, e.g., strategic decision-making, instead of mundane mechanics. Wolfram|Alpha does what technology does best and allows us to think at our best.

Once upon a time, being educated meant learning from a limited pool of experts and a relatively shallow body of knowledge. We had to recall and recite to prove that the information had been transmitted and was embedded as replicated knowledge.

I reflected on this a few days ago: From 5000 BCE to 2007, the estimated amount of information stored by the human race was 300 exabytes; by 2013, that figure had grown to 1,200 exabytes. In just six years (2007-2013), the human race quadrupled its store of information, creating three times as much as in the several thousand years before. That “once upon a time” now reads like a fairy tale because the amount of information we have collectively generated makes this old school thinking impractical.

We need to learn how to leverage the technology of today and tomorrow. This starts with unschooling ourselves of the notion that technology is always bad and makes us lazy or stupid. That is the lazy and stupid thinking that perpetuates fear and ignorance.


I read this article, Sesame Workshop and IBM team up to test a new A.I.-powered teaching method, with critical optimism.

After reading the article, I still wondered if the AI was actually adapting to how kids learn or if it was learning how to teach as an adult would. The former focuses on learning while the latter is about teaching.

Teaching and learning are not synonymous. Ideally and intentionally, effective teaching should lead to meaningful learning. However, teaching does not guarantee learning. Let me illustrate.

The article claimed that:

kindergarteners learned words like “arachnid,” “amplify,” “camouflage,” and “applause,” which are typically considered above their grade level.

Kids were taught these words, but did they really learn to use these words in contexts that were meaningful to them? Will they retain and use the words appropriately in future?

My son learnt “chela” and “carapace” in kindergarten. I only learnt these as a Biology major in university. Today he cannot recall those terms or even learning them. However, those terms are etched in my memory even though I have not taught Biology in over 20 years.

I argue that my son was taught those terms, but only I learnt them. It is one thing to teach for short-term gain and retention. It is entirely another to design for long-term and meaningful learning.

If we teach AI the wrong way, then artificial intelligence will have another meaning. It will be about “learning” that is meaningless, superficial, and fleeting.

Video source

That’s a phrase that was never actually uttered by Sherlock Holmes. Watson is also the name of an IBM supercomputer that is competing against champions on the game show Jeopardy!

I found the video at Gizmodo and a commenter there provided this higher quality version.

Some might say, “Be afraid, be very afraid!” But only if you like overreacting or live in a movie world.

Sure, machines will get more intelligent. Why do you think we call that device in our pocket or bag a smartphone? But all Watson is doing now is brute-force factual recall. Its reactions will be faster, it will learn more quickly, and it will not fatigue.

What is fantastic is Watson’s ability to recognize and process language. The day of being able to talk to computers like we talk to people is closer.

Dig a little deeper and you will find more on IBM’s development of Watson.

Video source

