Another dot in the blogosphere?

Posts Tagged ‘artificial intelligence’

Why do people freak out at driverless cars? One reason is that they think humans are better drivers than robots powered by artificial intelligence (AI).

As Veritasium host Derek Muller pointed out in the video below, the data and statistics do not support this perception. People are more likely than AI to cause car-related accidents.

Video source

The fear of technology combined with the over-confidence in human ability is also not new. Muller related a story of how people used to freak out when elevator (lift) operators were phased out. Some did not want to get into a box not controlled by a fellow human. This was also mentioned in an old Pessimists Archive podcast.

We think nothing of operating a lift ourselves now. In fact, it would be very strange to have someone else do this for us as their job.

If we get over our hangups, we might just see driverless vehicles as the norm. If we are not convinced, we might watch the part of the video where Muller described how planes can practically fly themselves.

From the high level of AI required for driverless vehicles to basic edtech, the common barrier to effective and widespread use is us. Our role should not be to fear-monger based on unfounded information. It should be to contribute care, ethics, and nuance: all things we are still better at than AI.

I heard someone say this in a YouTube video: Artificial intelligence (AI) is no match for natural ignorance (NI).

Artificial intelligence is no match for natural ignorance.

The context for this quote was how Facebook claimed that it had AI that could raise “conflict alerts” of “contentious or unhealthy conversations” to administrators.

Such AI probably uses natural language processing. However, it is no match for nuance, context, and natural human ignorance. The example highlighted in the video was people arguing the merits of various sauces. Case closed.
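As a rough illustration of why such flagging falls short, here is a hypothetical sketch (not Facebook’s actual system) of a keyword-based conflict detector. A playful argument about sauces still trips the alert because keyword matching cannot read tone or context.

```python
# Hypothetical sketch of a naive keyword-based "conflict alert".
# This is NOT Facebook's system; it only illustrates the limitation.
CONFLICT_WORDS = {"wrong", "ridiculous", "terrible", "worst", "fight"}

def flag_conflict(comment: str) -> bool:
    """Flag a comment if it contains any 'heated' keyword."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & CONFLICT_WORDS)

# A friendly argument about sauces trips the alert anyway.
print(flag_conflict("Mayo on fries? That is the worst idea, you are so wrong!"))  # True
print(flag_conflict("I will fight anyone who says sriracha is overrated."))       # True, but playful
```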

We do not have to wait for robot overlords to destroy humanity. We are capable of this on our own.


Video source

When I was curating resources last year on educational uses of artificial intelligence (AI), I discovered how some forms were used to generate writing.
 

Video source

YouTuber Tom Scott employed a writing AI (OpenAI’s GPT-3) to suggest new video ideas by offering topics and even writing scripts. The suggestions ranged from the odd and impossible to the plausible and surprisingly on point.

This was an example of AI augmenting human creativity, but it was still very much in the realm of artificial narrow intelligence. The AI did not have the general intelligence to mimic human understanding of nuance and context.

I liked the generalisation about technology that Scott drew from how the AI worked (and failed) for him. He described a technology’s evolution as a sigmoid curve. After a slow initial start, the technology might seem to be suddenly and widely adopted and improved upon. It then hits a steady state.

Tom Scott: Technology evolution as a sigmoid curve. Source: https://youtu.be/TfVYxnhuEdU?t=431
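To make the shape concrete, here is a minimal sketch of that S-curve using the standard logistic function. The parameters are illustrative and not taken from Scott’s video.

```python
# Minimal sketch of the sigmoid ("S-curve") shape: slow start, rapid middle, plateau.
# f(t) = 1 / (1 + e^(-k(t - t0))); k and t0 here are arbitrary, illustrative values.
import math

def adoption(t: float, k: float = 1.0, t0: float = 0.0) -> float:
    """Logistic curve rising from near 0 to near 1."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

for t in range(-6, 7, 2):
    bar = "#" * int(adoption(t) * 40)
    print(f"t={t:+d}  {adoption(t):.2f}  {bar}")
```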

Scott wondered if AI was at the steady state. This might seem to be the case if we only consider the boxed-in approach to which the AI was subjected. If it had been given more data to check its own suggestions, it might have offered creative ideas that were on point.

So, no, the AI was not at the terminal steady state. It was at the slow start. It has the potential to explode. It is our responsibility to ensure that the explosions are controlled ones (like demolishing a building) instead of unhappy accidents that result from neglect (like the warehouse explosion in Beirut).

This Reddit thread was one response to the Boston Dynamics robot dog making its rounds in Bishan-Ang Mo Kio park. It was there to monitor social distancing and to remind park users to do the same.

The title of the thread — Dystopian robot terrifies park goers in Bishan Park — reveals a state of mind that I call dy-stupid-ian.

I have said this in edtech classes I facilitate and I will say it again: If your only reference for artificial intelligence (AI) and robotics is the Terminator franchise, then your perspective is neither informed nor nuanced.

The entertainment industry likes to paint a dystopian picture of what AI and robots will do. There is even a Black Mirror episode (Metalhead) that featured similar looking dogs. Somehow fear and worry embedded in fantasy are entertaining.

An education about AI and robotics is more mundane and requires hard work. But most of us need not be programmers and engineers to gain some basic literacy in those fields. For that, I recommend two excellent sources.


Video playlist


Video playlist

At the very least, the videos are a good way to spend time during a COVID-19 lockdown.


Video source

That’s a phrase that was never actually uttered by Sherlock Holmes. Watson is also a supercomputer that is competing against champions on the game show Jeopardy!

I found the video at Gizmodo and a commenter there provided this higher quality version.

Some might say, “Be afraid, be very afraid!” But only if you like overreacting or live in a movie world.

Sure, machines will get more intelligent. Why do you think we call that device in our pocket or bag a smartphone? But all Watson is doing now is brute force factual recall. Its reactions will be faster, it will learn more quickly, and it won’t fatigue.
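For a sense of what brute force factual recall can look like, here is a toy sketch that scores stored facts by keyword overlap with a clue and returns the best match. Watson’s actual DeepQA pipeline was far more sophisticated; this only shows the flavour of matching text against stored answers.

```python
# Toy illustration of brute-force factual recall: score every stored fact by
# keyword overlap with the clue and return the best-matching entry.
# Watson's real pipeline (DeepQA) was far more sophisticated than this.
FACTS = {
    "Sherlock Holmes": "fictional detective created by Arthur Conan Doyle",
    "IBM Watson": "question-answering computer that competed on Jeopardy!",
    "Jeopardy!": "quiz show where answers are phrased as questions",
}

def best_match(clue: str) -> str:
    clue_words = set(clue.lower().split())
    def overlap(item):
        name, desc = item
        return len(clue_words & set((name + " " + desc).lower().split()))
    return max(FACTS.items(), key=overlap)[0]

print(best_match("This computer competed against human champions on Jeopardy!"))  # IBM Watson
```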

What is fantastic is Watson’s ability to recognize and process language. The day of being able to talk to computers like we talk to people is closer.

Dig a little deeper and you will find IBM’s development of Watson.


Video source

Brilliant!

