Another dot in the blogosphere?

Posts Tagged ‘ai’

Note: The URL of the BBC video changed after I published my reflection, so I have updated it.

Video source

This BBC video asked the question, “God and robots: Will AI transform religion?”

It was clickbait because it could pull in viewers with different opinions. A superficial answer is no, a deeper thought-experiment-based answer is possibly yes, and the current answer is that we still do not know.

While such videos highlight possibilities, they also tend to focus on OR instead of AND. The OR thinking is likely the mindset of most of the people they interviewed, e.g., a human OR a robot leading you in prayer. The reality now is that AI can assist human tasks — this is an AND perspective. The use of AI does not exclude the value of the human. 

The current reality is that the broad question asked in the video is premature. As an expert pointed out towards the end of the video, AI does not yet have superagency, i.e., it does not yet make “beneficial decisions on our behalf intentionally because it wants to”.

Rising above, I recall a framework that Steve Wheeler shared. Our development of AI is still at the level of artificial narrow intelligence. It is nowhere near where it needs to be for it to replace priests, rabbis, or other religious figures.  

Why do people freak out at driverless cars? One reason is that they think humans are better drivers than robots powered by artificial intelligence (AI).

As Veritasium host Derek Muller pointed out in the video below, the data and statistics do not support this perception. Humans are more likely than AI-driven cars to cause road accidents.

Video source

The fear of technology combined with overconfidence in human ability is also not new. Muller related a story of how people used to freak out when elevator (lift) operators were phased out. Some did not want to get into a box that was not controlled by a fellow human. This was also mentioned in an old Pessimists Archive podcast.

We think nothing of operating a lift ourselves now. In fact, it would be very strange to have someone else do this for you as their job.

If we get over our hangups, we might just see driverless vehicles as the norm. If we are not convinced, we might watch the part of the video where Muller described how planes can practically fly themselves.

From the high level of AI required for driverless vehicles to basic edtech, the common barrier to effective and widespread use is us. Our role should not be to fear-monger based on unfounded information. It should be to contribute care, ethics, and nuance — all things we are still better at than AI.

I heard someone say this in a YouTube video: Artificial intelligence (AI) is no match for natural ignorance (NI).

Artificial intelligence is no match for natural ignorance.

The context for this quote was how Facebook claimed that it had AI that could raise “conflict alerts” about “contentious or unhealthy conversations” to group administrators.

Such AI probably uses natural language processing. However, it is no match for nuance, context, and natural human ignorance. The example highlighted in the video was people arguing the merits of various sauces. Case closed.
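As a thought experiment, here is a minimal sketch of why such detection struggles with nuance. It is written in Python with a made-up trigger list; Facebook’s actual system is not public, so treat this as an illustration of the idea, not the real thing.

import re

# Made-up trigger list for illustration only.
TRIGGER_WORDS = {"wrong", "worst", "hate", "fight"}

def conflict_alert(comment: str) -> bool:
    """Flag a comment if it contains any trigger word, regardless of context."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(TRIGGER_WORDS & words)

# A playful sauce debate trips the alert...
print(conflict_alert("You are so wrong, ketchup is the worst sauce"))  # True
# ...while a veiled threat sails through.
print(conflict_alert("Meet me outside and bring your friends"))        # False

The point is not the code but the gap: without context, the same words score the same whether they are banter or battle.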

We do not have to wait for robot overlords to destroy humanity. We are capable of this on our own.

Video source

A person with ALS needed to have his voice box removed. But before that happened, he recorded his voice so that computing devices could help him speak.

He recorded 3000 stock phrases and many of his own favourites so that he could artificially create new speech and call up original recordings. One of his choice phrases (at the 12min 57sec mark) was:

A little knowledge may be a dangerous thing, but it’s not half as bad as a lot of ignorance. 

I agree, and there is more than one way to interpret that statement.

The common way is to cite an example like nuclear fission. When that was discovered, it unlocked a massive potential that was as useful for energy production as it was for weapons of mass destruction. That knowledge was indeed dangerous.

Another way of interpreting the sentence starts with focusing on “little knowledge”. It could mean not enough, e.g., little knowledge of how the SARS-CoV-2 vaccines were developed and how they work. Such a lack of knowledge can become the basis of conspiracy theories and pseudoscience, e.g., microchips in vaccines and learning styles, respectively.

We do not have to be experts at everything. We simply cannot. But there is such a state as having too little knowledge. In this state, we fill the void with our own experiences, biases, and cultural cues. For example, much understanding of AI seems to come from movies made for entertainment, in which AI wants to dominate or destroy human life.

With enough knowledge from credible and reliable sources, we might understand the opposite. For example, the person whose voice is partly powered by AI is roboticist Dr Peter B Scott-Morgan. In his 1984 publication, he declared (17min 25sec mark):

If the path of enhanced human is followed, then it will be possible for mankind and robot to remain on the same evolutionary branch rather than humanity watch the robots split away. In this way, mankind will one day be able to replace its all too vulnerable bodies with more permanent mechanisms and use the supercomputers as intelligence amplifiers.

This philosophy of AI as partner instead of rival flies in the face of popular culture. It stems from deep knowledge and critical practice in the field of AI and robotics. It is nowhere near as glamorous or attention-grabbing as dystopian Hollywood fare.

Dr Scott-Morgan’s bit of deep knowledge is worth more than money-spinning loads of ignorance. It offers a hopeful and productive way forward.


Video source

When I was curating resources last year on educational uses of artificial intelligence (AI), I discovered how some forms of AI were used to generate writing.
 

Video source

YouTuber Tom Scott employed a writing AI (OpenAI’s GPT-3) to suggest new video ideas by offering topics and even writing scripts. The suggestions ranged from the odd and impossible to the plausible and surprisingly on point.

This was an example of AI augmenting human creativity, but it was still very much in the realm of artificial narrow intelligence. The AI did not have the general intelligence to mimic human understanding of nuance and context.

I liked the generalisation about technology that Scott drew from how the AI worked (and failed) for him. He described a technology’s evolution as a sigmoid curve. After a slow initial start, the technology might seem to be suddenly widely adopted and improved upon. It then hits a steady state.

Tom Scott: Technology evolution as a sigmoid curve. Source: https://youtu.be/TfVYxnhuEdU?t=431
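For reference, the sigmoid Scott sketched is the logistic curve. Here is a minimal illustration in Python; the parameter names (ceiling, steepness, midpoint) are the usual textbook ones, not Scott’s.

import math

def adoption(t: float, ceiling: float = 1.0, steepness: float = 1.0, midpoint: float = 0.0) -> float:
    """Logistic (sigmoid) curve: slow start, rapid take-up, then a plateau."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

print(round(adoption(-6), 3))  # 0.002 -- the slow initial start
print(round(adoption(0), 3))   # 0.5   -- sudden, widespread adoption
print(round(adoption(6), 3))   # 0.998 -- the steady state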

Scott wondered if AI was at the steady state. This might seem to be the case if we only consider the boxed-in approach that the AI was subject to. If it had been given more data to check its own suggestions, it might have offered creative ideas that were on point.

So, no, the AI was not at the terminal steady state. It was at the slow start. It has the potential to explode. It is our responsibility to ensure that the explosions are controlled ones (like demolishing a building) instead of unhappy accidents that result from neglect (like the warehouse in Beirut).


Video source

As I watched this video, I could hear the fearmongers and armchair experts talking about AI taking over, referencing the fictional Skynet, or claiming that the video foretells robots dancing on our graves.

All this is projection without fact. Movies are not the same as critical research or reflective practice. Conjecture should not be placed at the same level as scientific advancement or nuanced policies of use.

Fiction and fantasy have their purposes, e.g., entertainment and making critical statements. But they make for the easy and attention-grabbing headlines that should not be confused with the hard and mundane work of scientific endeavour.

Any projection, whether informed or not, is subject to how myopic we are. We might look back with rose-tinted glasses, but we can barely look forward beyond our noses.

For some perspective, I offer this tweet from Pessimists Archive. Technology is not all gloom and doom. It enabled many of us to continue schooling, work, and life despite the pandemic. It is already doing much good, but that does not sell the news.

Technology has the potential to do harm even when it is designed to do good. But that is not because of technology; it is because of the short-sighted and imperfect human user. If we take that perspective, we might be more mindful about how we invent and use technologies.

This is my reflection about how a boy gamed an assessment system that was driven by artificial intelligence (AI). It is not about how AI drives games.
 

 
If you read the entirety of this Verge article, you will learn that a boy was disappointed with the automatic and near-instant grading that an assessment tool provided. He got quick but poor grades because his text-based answers were assessed by a vendor’s AI.

The boy soon got over his disappointment when he found out that he could add keywords to the end of his answers. These keywords were seemingly disjointed or disconnected words that represented the key ideas of a paragraph or article. When he included these keywords, he found that he could get full marks.

My conclusion: Maybe the boy learnt some content, but he definitely learnt how to game the system.

A traditionalist (or a magazine writer in this case) might say that the boy cheated. A progressive might point out that this is how every student responds to any testing regime, i.e., they figure out the rules and how best to take advantage of them. This is why test-taking tends to reliably measure just one thing — the ability to take the test.

If the boy had really wanted to apply what he learnt, he would have persisted with answering questions the normal way. But if he had done that, he would have been penalised for doing the right thing. I give him props for gaming a system that was flawed from the start.

This is not an attack on AI. It is a critique of human decision-making. What was poor about the decisions? For one thing, it seemed like the vendor assumed that the use of keywords indicated understanding or application. If a student did not use the exact keywords, the system would not detect them and would not award marks.

It sounds like the AI was a relatively low-level matching system, not a more nuanced semantic one. If it had been the latter, it would have been more like a teacher, able to give each student credit when the same meanings were expressed in different words.
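To make the critique concrete, here is a minimal sketch of such a low-level matching system in Python. The rubric keywords are hypothetical; the vendor’s actual lists and logic are not public.

# Hypothetical rubric keywords for one question.
RUBRIC = {"photosynthesis", "chlorophyll", "glucose"}

def grade(answer: str) -> float:
    """Award marks for exact keyword hits only, with no sense of meaning."""
    words = set(answer.lower().split())
    return len(RUBRIC & words) / len(RUBRIC)

# A thoughtful paraphrase scores zero...
print(grade("plants use sunlight to turn water and air into food"))  # 0.0
# ...while a disjointed keyword dump scores full marks.
print(grade("pizza photosynthesis chlorophyll glucose"))             # 1.0

A semantic system would instead compare the meaning of an answer against a model response, which is exactly what this one could not do.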

The article did not dive into the vendor’s reasons for using that AI. I do not think the company would want to share that in any case. For me, this exhibited all the signs of a quick fix for quick returns. This is not what education stands for, so that vendor gets an F for implementation.

It is a long time before I need to facilitate a course on future edtech again, but I am already curating resources.


Video source

As peripheral as the video above might seem, it is relevant to the topic of algorithms and artificial intelligence (AI).

The Jolly duo discovered how YouTube algorithms were biased against comments written in Korean even though that was the language of their primary audience. Why? YouTube wanted to see if it could artificially drive English speakers there instead of allowing what was already happening organically.

Algorithms and AI drive edtech, and both are designed by people. Imperfect and biased people. Similar biases exist in schooling and education. One need only recall the algorithms that caused chaos for major exams in 2020: the International Baccalaureate (IB) in July and the General Certificate exams in the UK in August. Students received lower-than-expected results, and this disproportionately affected already disadvantaged students.

Students taking my course do not have to design algorithms or AI since that is just one topic of many that we explore. The topic evolves so rapidly that it is pointless to go in depth. However, an evergreen aspect is human design and co-evolution of such technology in education.

We shape our tools and then our tools shape us. — Marshall McLuhan

Marshall McLuhan’s principle applies in this case. We cannot blindly accept that technology is by itself disruptive or transformative. We create these technologies, the demand for them, and the expectations of their use.

A small and select group has the know-how to create the technology. They create the demand by convincing administrators and policymakers who do not necessarily know any better. Since those gatekeepers are not alert, we need new expectations — we must know, know better, and do better. All this starts with knowing what algorithmic bias looks like and what it can do.

One of the simplest forms of digital curation is teaching YouTube algorithms what videos to suggest.

Curating by informing YouTube algorithms.

I do this by marking videos that I have no wish to watch with “not interested” (see screenshot above). I also remove some videos from my watched history listing.

Sometimes I watch videos based on a suggestion or a whim, but I find them irrelevant. If I do not remove them from my watch history, I will get suggestions that are related to those videos the next time I refresh my YouTube feed.

These simple steps are an example of cooperating with relatively simple AI so that algorithms work with and for me. This is human-AI synergy.

This Reddit thread was one response to the Boston Dynamics robot dog making its rounds in Bishan-Ang Mo Kio Park. It was there to monitor social distancing and to remind park users to observe it.

The title of the thread — Dystopian robot terrifies park goers in Bishan Park — reveals a state of mind that I call dy-stupid-ian.

I have said this in edtech classes I facilitate and I will say it again: If your only reference for artificial intelligence (AI) and robotics is the Terminator franchise, then your perspective is neither informed nor nuanced.

The entertainment industry likes to paint a dystopian picture of what AI and robots will do. There is even a Black Mirror episode (Metalhead) that featured similar-looking dogs. Somehow fear and worry embedded in fantasy are entertaining.

An education about AI and robotics is more mundane and requires hard work. But most of us need not be programmers and engineers to gain some basic literacy in those fields. For that, I recommend two excellent sources.


Video playlist


Video playlist

At the very least, the videos are a good way to spend time during a COVID-19 lockdown.

