Another dot in the blogosphere?

Posts Tagged ‘ai’

Video source

A person with ALS needed to have his voice box removed. But before that happened, he recorded his voice so that computing devices would help him speak.

He recorded 3000 stock phrases and many of his own favourites so that he could artificially create new speech and call up original recordings. One of his choice phrases (at the 12min 57sec mark) was:

A little knowledge may be a dangerous thing, but it’s not half as bad as a lot of ignorance. 

I agree, and there is more than one way to interpret that statement.

The common way is to cite an example like nuclear fission. When that was discovered, it unlocked a massive potential that was as useful for energy production as it was for weapons of mass destruction. That knowledge was indeed dangerous.

Another way of interpreting the sentence starts with focusing on “little knowledge”. It could mean not enough, e.g., too little knowledge of how the SARS-CoV-2 vaccines were developed and how they work. Such scant knowledge can become the basis of conspiracy theories and pseudoscience, e.g., microchips in vaccines and learning styles, respectively.

We do not have to be experts at everything. We simply cannot be. But there is such a state as having too little knowledge. In this state, we fill the void with our own experiences, biases, and cultural cues. For example, much of our understanding of AI seems to come from movies made for entertainment, in which AI wants to dominate or destroy human life.

With enough knowledge from credible and reliable sources, we might understand the opposite. For example, the person whose voice is partly powered by AI is the roboticist Dr Peter B Scott-Morgan. In his 1984 publication, he declared (17min 25sec mark):

If the path of enhanced human is followed, then it will be possible for mankind and robot to remain on the same evolutionary branch rather than humanity watch the robots split away. In this way, mankind will one day be able to replace its all too vulnerable bodies with more permanent mechanisms and use the supercomputers as intelligence amplifiers.

This philosophy of AI as partner instead of rival flies in the face of popular culture. It stems from deep knowledge and critical practice in the field of AI and robotics. It is nowhere near as glamorous or attention-grabbing as dystopian Hollywood fare.

Dr Scott-Morgan’s bit of deep knowledge is worth more than money-spinning loads of ignorance. It offers a hopeful and productive way forward.


Video source

When I was curating resources last year on educational uses of artificial intelligence (AI), I discovered how some forms were used to generate writing.
 

Video source

YouTuber Tom Scott employed a writing AI (OpenAI’s GPT-3) to suggest new video ideas: it offered topics and even wrote scripts. The suggestions ranged from the odd and impossible to the plausible and surprisingly on point.

This was an example of AI augmenting human creativity, but it was still very much in the realm of artificial narrow intelligence. The AI did not have the general intelligence to mimic human understanding of nuance and context.
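Mechanically, this kind of idea generation is just text completion. Here is a minimal sketch, assuming the pre-1.0 OpenAI Python client that was current at the time; the prompt and settings are my own illustration, not Scott’s:

```python
# pip install openai  (the pre-1.0 client with the Completion API)
import openai

openai.api_key = "YOUR_API_KEY"

# Seed the model with the shape of the list we want it to continue.
prompt = (
    "Ideas for new Tom Scott videos about amazing places:\n"
    "1. The lake that periodically releases deadly carbon dioxide\n"
    "2."
)

response = openai.Completion.create(
    engine="davinci",   # a base GPT-3 model of that era
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,    # higher values produce odder ideas
)
print(response.choices[0].text)
```

Run it a few times and you get the same spread Scott describes: some ideas are nonsense, some are impossible, and a few are surprisingly plausible.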

I liked the generalisation about technology that Scott drew from how the AI worked (and failed) for him. He described a technology’s evolution as a sigmoid curve. After a slow start, the technology might suddenly seem to be widely adopted and improved upon. It then hits a steady state.

Tom Scott: Technology evolution as a sigmoid curve. Source: https://youtu.be/TfVYxnhuEdU?t=431
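The curve Scott sketches is just the logistic function. A minimal sketch of the three phases (my own illustration):

```python
import math

def adoption(t, k=1.0, midpoint=0.0):
    """Logistic (sigmoid) curve: slow start, rapid growth, steady state."""
    return 1 / (1 + math.exp(-k * (t - midpoint)))

for t in range(-6, 7, 2):
    print(f"t={t:+d}  adoption={adoption(t):.2f}")
# t=-6 is the slow start (~0.00), t=0 the rapid middle (0.50),
# and t=+6 the steady state (~1.00).
```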

Scott wondered if AI was at the steady state. This might seem to be the case if we only consider the boxed-in approach that the AI was subject to. If it had been given more data to check its own suggestions, it might have offered creative ideas that were on point.

So, no, the AI was not at the terminal steady state. It was at the slow start. It has the potential to explode. It is our responsibility to ensure that the explosions are controlled ones (like demolishing a building) instead of unhappy accidents that result from neglect (like the warehouse explosion in Beirut).


Video source

As I watched this video, I could hear the fear mongers and armchair experts talk about AI taking over, making reference to the fictional Skynet, or how the video foretells of robots dancing on our graves.

All this is projection without fact. Movies are not the same as critical research or reflective practice. Conjecture should not be placed at the same level as scientific advancement or nuanced policies of use.

Fiction and fantasy have their purposes, e.g., entertainment, making critical statements. But these are the easy and attention-grabbing headlines that should not be confused with the hard and mundane work of scientific endeavour.

Any projection, whether informed or not, is subject to how myopic we are. We might look back with rose-tinted glasses, but we can barely look forward beyond our noses.

For some perspective, I offer this tweet from Pessimists Archive. Technology is not all gloom and doom. It enabled many of us to continue schooling, work, and life despite the pandemic. It is already doing much good, but that does not sell the news.

Technology has the potential to do harm even when it is designed to do good. But that is not because of technology; it is because of the short-sighted and imperfect human user. If we take that perspective, we might be more mindful about how we invent and use technologies.

This is my reflection about how a boy gamed an assessment system that was driven by artificial intelligence (AI). It is not about how AI drives games.
 

 
If you read the entirety of this Verge article, you will learn that a boy was disappointed with the automatic and near-instant grading that an assessment tool provided. He got quick but poor grades because his text-based answers were assessed by a vendor’s AI.

The boy soon got over his disappointment when he found out that he could add keywords to the end of his answers. These keywords were seemingly disjointed or disconnected words that represented the key ideas of a paragraph or article. When he included them, he got full marks.

My conclusion: Maybe the boy learnt some content, but he definitely learnt how to game the system.

A traditionalist (or a magazine writer in this case) might say that the boy cheated. A progressive might point out that this is how every student responds to any testing regime, i.e., they figure out the rules and how best to take advantage of them. This is why test-taking tends to reliably measure just one thing — the ability to take the test.

If the boy had really wanted to apply what he learnt, he would have persisted with answering questions the normal way. But if he had done that, he would have been penalised for doing the right thing. I give him props for gaming a system that was gameable from the start.

This is not an attack on AI. It is a critique of human decision-making. What was poor about the decisions? For one thing, the vendor seemed to assume that the use of keywords indicated understanding or application. If a student did not use the exact keywords, the system would not detect and reward their answers.

It sounds like the AI was a relatively low-level keyword-matching system, not a more nuanced semantic one. Had it been the latter, it would have behaved more like a teacher, giving each student credit whenever they expressed the same meanings.
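The article did not describe the grader’s internals, but a naive keyword-overlap scorer behaves something like this sketch (the keywords, answers, and scoring scheme are invented for illustration):

```python
def keyword_grade(answer, keywords):
    """Award marks for each expected keyword found in the answer."""
    words = set(answer.lower().split())
    return len(words & keywords) / len(keywords)

expected = {"photosynthesis", "chlorophyll", "glucose", "sunlight"}

honest = "Plants use light energy captured by chlorophyll to make sugar."
gamed = "Plants make food. photosynthesis chlorophyll glucose sunlight"

print(keyword_grade(honest, expected))  # 0.25: penalised for paraphrasing
print(keyword_grade(gamed, expected))   # 1.0: full marks for word salad
```

A semantic system would instead compare the meaning of the answer against a model answer, which is exactly why paraphrasing should earn credit rather than lose it.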

The article did not dive into the vendor’s reasons for using that AI. I do not think the company would want to share that in any case. For me, this exhibited all the signs of a quick fix for quick returns. This is not what education stands for, so that vendor gets an F for implementation.

It is a long time before I need to facilitate a course on future edtech again, but I am already curating resources.


Video source

As peripheral as the video above might seem, it is relevant to the topic of algorithms and artificial intelligence (AI).

The Jolly duo discovered that YouTube algorithms were biased against comments written in Korean even though that was the language of their primary audience. Why? YouTube wanted to see if it could artificially drive English speakers there instead of allowing what was already happening organically.

Algorithms and AI drive edtech, and both are designed by people. Imperfect and biased people. Similar biases exist in schooling and education. One need only recall the algorithms that caused chaos for major exams: the International Baccalaureate (IB) in July and the UK’s A-level exams in August. Students received lower than expected results, and this disproportionately affected already disadvantaged students.

Students taking my course do not have to design algorithms or AI since that is just one topic of many that we explore. The topic evolves so rapidly that it is pointless to go in depth. However, an evergreen aspect is human design and co-evolution of such technology in education.

We shape our tools and then our tools shape us. -- Marshall McLuhan

Marshall McLuhan’s principle applies in this case. We cannot blindly accept that technology is by itself disruptive or transformative. We create these technologies, the demand for them, and the expectations of their use.

A small and select group has the know-how to create the technology. They create the demand by convincing administrators and policymakers who do not necessarily know any better. Since those gatekeepers are not alert, we need new expectations — we must know, know better, and do better. All this starts with knowing what algorithmic bias looks like and what it can do.

One of the simplest forms of digital curation is teaching YouTube algorithms what videos to suggest.

Curating by informing YouTube algorithms.

I do this by marking videos that I have no wish to watch as “not interested” (see screenshot above). I also remove some videos from my watch history.

Sometimes I watch videos based on a suggestion or a whim, but I find them irrelevant. If I do not remove them from my watch history, I will get suggestions that are related to those videos the next time I refresh my YouTube feed.

These simple steps are an example of cooperating with relatively simple AI so that algorithms work with and for me. This is human-AI synergy.
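Under the hood, such signals are a form of negative feedback that downweights topics in future suggestions. This toy sketch illustrates the principle only; it is not YouTube’s actual algorithm:

```python
from collections import defaultdict

# Toy recommender state: one interest weight per topic.
weights = defaultdict(lambda: 1.0)

def mark_not_interested(topic):
    weights[topic] *= 0.5   # strong negative signal

def remove_from_history(topic):
    weights[topic] *= 0.8   # weaker negative signal

def recommend(candidates, n=3):
    # Suggest the highest-weighted topics first.
    return sorted(candidates, key=lambda t: weights[t], reverse=True)[:n]

mark_not_interested("celebrity gossip")
remove_from_history("unboxing")
print(recommend(["edtech", "celebrity gossip", "unboxing", "robotics"]))
# ['edtech', 'robotics', 'unboxing']
```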

This Reddit thread was one response to the Boston Dynamics robot dog making its rounds in Bishan-Ang Mo Kio Park. It was there to monitor social distancing and to remind park users to keep their distance.

The title of the thread — Dystopian robot terrifies park goers in Bishan Park — reveals a state of mind that I call dy-stupid-ian.

I have said this in edtech classes I facilitate and I will say it again: If your only reference for artificial intelligence (AI) and robotics is the Terminator franchise, then your perspective is neither informed nor nuanced.

The entertainment industry likes to paint a dystopian picture of what AI and robots will do. There is even a Black Mirror episode (Metalhead) that featured similar-looking dogs. Somehow fear and worry embedded in fantasy are entertaining.

An education about AI and robotics is more mundane and requires hard work. But most of us need not be programmers and engineers to gain some basic literacy in those fields. For that, I recommend two excellent sources.


Video playlist


Video playlist

At the very least, the videos are a good way to spend time during a COVID-19 lockdown.

I like pointing out that the current state of artificial intelligence (AI) is no match for natural human stupidity. Consider the examples below.

Since this tweet in 2017, image recognition might have improved so that AI trained with specific datasets can distinguish between chihuahuas and muffins.

The video below highlights another barrier — AI struggles with human accents.


Video source

Overcoming this (and other barriers) might be helped by access to broader and better datasets. But such AI still operate at the level of artificial narrow intelligence (see Wheeler’s levels of AI). They are certainly not at the level of artificial general intelligence, much less artificial super intelligence.


Video source

I’ll admit it. The title of this episode did not appeal to me from the get-go. Why use artificial intelligence (AI) to figure out if there are other forms of intelligent life in the galaxy?

Here is my bias: I would rather see the power of AI developed more for enabling better life on Earth. But I remain open-minded enough to learn something about the alien effort.

According to one scientist, the last 50 years of space exploration data amount to a glass of water measured against a total data set the size of the world’s oceans. So using AI to sift through it makes sense.

I liked the honesty of another scientist who declared that he did not know exactly what he was looking for. He was simply looking for a blip of life against a sea of darkness. So again there is the counter-narrative to the press and movies — we are not looking for aliens to battle.

So how might AI detect alien life? Pattern recognition: finding rare needles in vast haystacks of data.

About halfway through the video, the content switched abruptly to synths — AI with bodies that mimic humans. Long story short, we are nowhere near what science fiction paints in books or movies. But the efforts to deconstruct and reconstruct the human body and mind are interesting (to put it mildly).

I liked how the video moved on to the ethics of synths. What rights would they have? Can they be taught good values? If they commit crimes, who is responsible? These proactive questions influence their design and development.

I think the episode was the final one. If it was, it was a good note to end on.

How might artificial intelligence (AI) prevent us from destroying ourselves? The seventh episode of this YouTube Original series provided some insights on how AI could help prevent animal extinction, famine, and war.


Video source

Take the battle against ivory poachers. Trap cameras take photos of anything that moves. They capture images of elephants, a host of other animals, and possibly the occasional poacher. But manually processing the photos for poachers is so time-consuming that it might be too late to save the elephants.

However, an AI-enabled camera, one with a vision processing unit, detects people and sends only those photos immediately to the park rangers. This gives the rangers more time to intercept the poachers.
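In code, the filtering logic is simple even if the detection model is not. A sketch under stated assumptions: the detection and uplink functions below are hypothetical stand-ins, not the actual system:

```python
def detect_person(photo):
    # Hypothetical stand-in for the camera's vision processing unit;
    # returns the probability that a person is in the photo.
    return 0.9 if "person" in photo else 0.0

def send_to_rangers(photo):
    # Hypothetical low-bandwidth uplink to the ranger station.
    print(f"ALERT: possible poacher in {photo}")

def process_capture(photo):
    """Forward only the photos that likely contain a person."""
    if detect_person(photo) > 0.8:   # threshold is illustrative
        send_to_rangers(photo)
    # Elephants and waving grass stay on the camera, so rangers
    # review a handful of photos instead of thousands.

for photo in ["elephant_01.jpg", "grass_02.jpg", "person_03.jpg"]:
    process_capture(photo)  # only the last capture triggers an alert
```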

In the second segment of the video, the focus shifted to the meat that we eat. Like it or not, animal husbandry contributes to climate change by taking away natural resources and emitting greenhouse gases. If we are to shift to not-meat but not rely on Impossible Burgers, what alternatives are there?

One is an AI called Giuseppe that does not reconstitute meat but creates the perception of meat instead. It analyses how molecules in all foods create taste and texture, and recommends blends of analogues from plants.

NotCo, the company that uses Giuseppe, has already created NotMayo, NotMilk, and NotMeat. The video featured the development of NotTuna.
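The video does not explain Giuseppe’s internals, but the matching step can be imagined as a nearest-neighbour search over taste-and-texture features. The feature vectors below are invented purely for illustration:

```python
import math

# Invented feature vectors: (umami, fattiness, fibrousness)
target = (0.9, 0.6, 0.7)  # the profile of tuna we want to mimic
plants = {
    "pea protein":   (0.5, 0.3, 0.8),
    "seaweed":       (0.8, 0.1, 0.2),
    "sunflower oil": (0.1, 0.9, 0.0),
}

# Rank plant ingredients by closeness to the target profile.
ranked = sorted(plants, key=lambda p: math.dist(plants[p], target))
print(ranked)  # ['pea protein', 'seaweed', 'sunflower oil']
# A blend would combine the top matches to approximate the target.
```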

The third part of the video focused on predicting earthquakes. Like the poacher detection tool, sensors collect more noisy data than useful data. AI can be trained to recognise cultural sounds like transportation and construction, and distinguish those from a possible earthquake.
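One plausible framing of that training is a binary classifier over vibration features. A sketch with scikit-learn, where the features and labels are invented for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Invented features per sensor window:
# (peak frequency in Hz, duration in s, relative amplitude)
X = [
    (12.0, 30.0, 0.2),   # construction
    (25.0, 5.0, 0.1),    # passing train
    (2.0, 60.0, 0.9),    # low-frequency quake-like rumble
    (1.5, 45.0, 0.8),    # quake-like
]
y = ["cultural", "cultural", "quake", "quake"]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([(1.8, 50.0, 0.85)]))  # likely ['quake']
```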

The final segment asked a broad question: Might AI be able to prevent disasters, unrest, or wars that stem from our misuse of natural resources?

To answer this question, a small company in the USA collects satellite images and relies on AI to identify and differentiate objects like solar panels and riverbeds. With AI as a tool, the company makes predictions like the output of cultivated crops in a year.

The predictions extend to man-made infrastructure and natural water sources. The example featured in the video was how measurements of snowfall could be used to predict water supply, which in turn correlates with crop yields.

If the snowfall was low, farmers could be advised to plant drought-resistant crops instead. If unrest or war stems from food or water shortages, such predictions might inform deployments of food aid before trouble erupts.
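The snowfall-to-yield chain is essentially regression. A toy sketch with invented numbers:

```python
import numpy as np

# Invented data: winter snowfall (cm) vs next season's crop yield (t/ha)
snowfall = np.array([50, 80, 120, 150, 200])
yields = np.array([1.2, 2.0, 3.1, 3.6, 4.5])

# Fit a simple linear model: yield ≈ a * snowfall + b
a, b = np.polyfit(snowfall, yields, 1)

forecast_snow = 60  # a low-snowfall winter
print(f"Predicted yield: {a * forecast_snow + b:.1f} t/ha")
# A low forecast could trigger advice to plant drought-resistant crops
# or to pre-position food aid.
```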

The overall message of this video countered the popular and disaster movie narratives of human-made AI running amok and killing us. Instead, it focused on how AI actually helps us become better humans.

