
Note: The URL of the BBC video changed after I published my reflection, so I have updated it.

Video source

This BBC video asked the question: God and robots: Will AI transform religion?

The title was clickbait because it could pull in viewers of every opinion. A superficial answer is no; a deeper, thought-experiment-based answer is possibly yes; and the current answer is that we still do not know.

While such videos highlight possibilities, they also tend to focus on OR instead of AND. The OR thinking is likely the mindset of most of the people they interviewed, e.g., a human OR a robot leading you in prayer. The reality now is that AI can assist human tasks — this is an AND perspective. The use of AI does not exclude the value of the human. 

The current reality is that the broad question asked in the video is premature. As an expert pointed out towards the end of the video, AI does not yet have superagency, i.e., it does not yet make “beneficial decisions on our behalf intentionally because it wants to”.

Rising above, I recall a framework that Steve Wheeler shared. Our development of AI is still at the level of artificial narrow intelligence. It is nowhere near where it needs to be for it to replace priests, rabbis, or other religious figures.  

This is my reflection about how a boy gamed an assessment system that was driven by artificial intelligence (AI). It is not about how AI drives games.

If you read the entirety of this Verge article, you will learn that a boy was disappointed with the automatic, near-instant grading that an assessment tool provided. He got quick but poor grades because his text-based answers were assessed by a vendor’s AI.

The boy soon got over his disappointment when he found out that he could add keywords to the end of his answers. These keywords were seemingly disjointed words that represented the key ideas of a paragraph or article. When he included them, he got full marks.

My conclusion: Maybe the boy learnt some content, but he definitely learnt how to game the system.

A traditionalist (or a magazine writer in this case) might say that the boy cheated. A progressive might point out that this is how every student responds to any testing regime, i.e., they figure out the rules and how best to take advantage of them. This is why test-taking tends to reliably measure just one thing — the ability to take the test.

If the boy had really wanted to apply what he learnt, he would have persisted with answering questions the normal way. But if he had done that, he would have been penalised for doing the right thing. I give him props for switching to a strategy that suited a system that could be gamed from the start.

This is not an attack on AI. It is a critique of human decision-making. What was poor about the decisions? For one thing, the vendor seemed to assume that the use of keywords indicated understanding or application. If a student did not use the exact keywords, the system would not detect or reward the answer.

It sounds like the AI was a relatively low-level keyword-matching system, not a more nuanced semantic one. If it were the latter, it would behave more like a teacher, giving students credit where it was due even when they expressed the same meaning in different words.
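
To make the difference concrete, here is a minimal sketch of a keyword-matching grader. This is not the vendor’s actual system; the keyword set and scoring are hypothetical.

```python
# Hypothetical keyword-matching grader, invented for illustration.
EXPECTED_KEYWORDS = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def keyword_score(answer: str) -> float:
    """Award marks for the mere presence of expected keywords."""
    words = set(answer.lower().split())
    return len(EXPECTED_KEYWORDS & words) / len(EXPECTED_KEYWORDS)

# A thoughtful answer in the student's own words scores poorly...
print(keyword_score("Plants turn light energy into sugar using a green pigment."))  # 0.0

# ...while any answer with the keywords bolted on scores full marks.
print(keyword_score("Stuff happens. photosynthesis chlorophyll sunlight glucose"))  # 1.0
```

A semantic grader would instead compare an embedding of the student’s answer with an embedding of a model answer, rewarding meaning rather than exact word matches.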

The article did not dive into the vendor’s reasons for using that AI. I do not think the company would want to share that in any case. For me, this exhibited all the signs of a quick fix for quick returns. This is not what education stands for, so that vendor gets an F for implementation.

It is a long time before I need to facilitate a course on future edtech again, but I am already curating resources.


Video source

As peripheral as the video above might seem, it is relevant to the topic of algorithms and artificial intelligence (AI).

The Jolly duo discovered how YouTube algorithms were biased against comments written in Korean even though that was the language of a primary audience. Why? YouTube wanted to see if it could artificially drive English-speakers there instead of allowing what was already happening organically.

Algorithms and AI drive edtech, and both are designed by people. Imperfect and biased people. Similar biases exist in schooling and education. One need only recall the algorithms that caused chaos for major exams in 2020: the International Baccalaureate (IB) in July and the UK’s General Certificate exams in August. Students received lower-than-expected results, and this disproportionately affected already disadvantaged students.

Students taking my course do not have to design algorithms or AI since that is just one topic of many that we explore. The topic evolves so rapidly that it is pointless to go in depth. However, an evergreen aspect is human design and co-evolution of such technology in education.

“We shape our tools and thereafter our tools shape us.” – Marshall McLuhan

Marshall McLuhan’s principle applies in this case. We cannot blindly accept that technology is by itself disruptive or transformative. We create these technologies, the demand for them, and the expectations of their use.

A small and select group has the know-how to create the technology. They create the demand by convincing administrators and policymakers who do not necessarily know any better. Since those gatekeepers are not alert, the rest of us need new expectations — we must know, know better, and do better. All this starts with knowing what algorithmic bias looks like and what it can do.

Two days ago, I mentioned that I attended a Zoom-based meeting to celebrate the graduation of a few Master’s students. I opted not to use an artificially generated background and relied on what I had in my study instead.

Obviously not all will agree with that choice. They might wish to embellish or hide natural backgrounds as a matter of personal choice.

Zoom, with natural background.

I choose a natural background in part because it suits my purpose — it is a study, it looks studious, and I teach via video conference when necessary.

It is also for pedagogical and technical reasons that I opt for a natural background. An artificially replaced background requires software algorithms to work hard to keep track of where the person is. This creates artefacts when the person moves.

At the latest Zoom meeting, a participant with an artificial background tried to show an item by holding it up. But since the Zoom algorithm is optimised for people, it removed the object from view. If a teacher did the same, her students would not be able to see what she was trying to illustrate.
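
For the technically curious, here is a minimal sketch of how virtual backgrounds typically work. It uses the open-source MediaPipe selfie-segmentation model as a stand-in for Zoom’s proprietary one; the file names are hypothetical.

```python
import cv2
import numpy as np
import mediapipe as mp

# Person/background segmentation model (a stand-in for Zoom's own).
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

frame = cv2.imread("webcam_frame.jpg")             # hypothetical captured frame
background = cv2.imread("virtual_background.jpg")  # hypothetical replacement image
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

# Estimate, per pixel, the likelihood that the pixel belongs to a person.
result = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
person_mask = result.segmentation_mask > 0.5

# Composite: keep "person" pixels, replace everything else. A book or mug
# held up to the camera falls outside the person mask and disappears.
output = np.where(person_mask[..., None], frame, background)
cv2.imwrite("composited.jpg", output)
```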

The choice of a tool is not straightforward. Once chosen, its usage is not fixed because its designers and creators cannot foresee every contextual use. This is why the choice and use should not be left only to vendors and administrators. The actual users need to weigh in as well.

I look forward to every podcast episode of Pessimists Archive, rare and irregular as it is. I wish the latest episode came out before my course finale.

The latest episode started with a “heroic” dog and ended with the war between natural ice and artificial refrigeration. Yes, the episodes are weirdly connected like that, but they all share a common theme.

Take this quote from the 23min 47sec mark:

When people face new technologies… they end up wanting… a simple heuristic to cut through complexity and allow them to make decisions that would otherwise be ambiguous or overwhelming.

Technology represents change and some people react with fear. To manage that change and fear, these people seek simple heuristics, e.g., tell me what to do, give me a formula I can follow, show me how to dumb it down and essentially do the same thing.

But such short-term thinking does us no good. Shortcuts avoid the critical and creative thinking that is necessary for problem-solving and embracing nuance. Given that my course was about new educational technologies, the quote and the thinking behind it would have made a timely and wise course conclusion.

Ah, well. This is something else to add to the 30-plus reminders I already have in my Notes app…

I like pointing out that the current state of artificial intelligence (AI) is no match for natural human stupidity. Consider the examples below.

Since this tweet in 2017, image recognition might have improved so that AI trained with specific datasets can distinguish between chihuahuas and muffins.

The video below highlights another barrier — AI struggles with human accents.


Video source

Overcoming this (and other barriers) might be helped by access to broader and better datasets. But such AI still operates at the level of artificial narrow intelligence (see Wheeler’s levels of AI). It is certainly not at the level of artificial general intelligence, much less artificial super intelligence.


Video source

I’ll admit it. The title of this episode did not appeal to me from the get-go. Why use artificial intelligence (AI) to figure out if there are other forms of intelligent life in the galaxy?

Here is my bias: I would rather see the power of AI developed more for enabling better life on Earth. But I remain open-minded enough to learn something about the alien effort.

According to one scientist, the space data explored in the last 50 years is akin to a glass of water drawn from a total data set the size of the world’s oceans. So using AI to sift through the rest makes sense.

I liked the honesty of another scientist who declared that he did not know exactly what he was looking for. He was simply looking for a blip of life against a sea of darkness. So again there was a counter-narrative to the press and the movies: we are not looking for aliens to battle.

So how might AI detect alien life? Pattern recognition: finding rare needles in vast haystacks of data.
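
As a toy illustration (not an actual SETI pipeline), that needle-hunting can be as simple as flagging samples that deviate far from the background distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.normal(0.0, 1.0, 1_000_000)  # a sea of background noise
signal[618_033] += 12.0                   # one faint, rare "blip"

# Flag anything more than six standard deviations from the mean.
z_scores = (signal - signal.mean()) / signal.std()
candidates = np.flatnonzero(np.abs(z_scores) > 6)
print(candidates)  # -> [618033]
```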

About halfway through the video, the content switched abruptly to synths — AI with bodies that mimic humans. Long story short, we are nowhere near what science fiction paints in books or movies. But the efforts to deconstruct and reconstruct the human body and mind are interesting (to put it mildly).

I liked how the video moved on to the ethics of synths. What rights would they have? Can they be taught good values? If they commit crimes, who is responsible? These proactive questions influence their design and development.

I think the episode was the final one. If it was, it was a good note to end on.

How might artificial intelligence (AI) prevent us from destroying ourselves? The seventh episode of this YouTube Original series provided some insights on how AI could help prevent animal extinction, famine, and war.


Video source

Take the battle against ivory poachers. Trap cameras take photos of anything that moves. They capture images of elephants, a host of other animals, and possibly the occasional poacher. But manually processing the photos for poachers is so time-consuming that it might be too late to save the elephants.

However, an AI-enabled camera, one with a vision processing unit, detects people and sends only those photos immediately to the park rangers. This gives the rangers more time to intercept the poachers.
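
A rough sketch of that filtering step, using the open-source Ultralytics YOLO detector as a stand-in for the camera’s vision processing unit (the file names and the uplink function are hypothetical):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # a small detector suited to edge hardware
PERSON_CLASS_ID = 0         # "person" in the COCO label set

def frame_has_person(image_path: str) -> bool:
    """Run the detector and report whether any detection is a person."""
    results = model(image_path, verbose=False)
    return any(int(box.cls) == PERSON_CLASS_ID
               for result in results for box in result.boxes)

def send_to_rangers(image_path: str) -> None:
    """Hypothetical uplink; a real camera would transmit the photo."""
    print(f"Alert: possible poacher in {image_path}")

for photo in ["capture_0001.jpg", "capture_0002.jpg"]:  # hypothetical captures
    if frame_has_person(photo):  # elephants and other animals are filtered out
        send_to_rangers(photo)
```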

In the second segment of the video, the focus shifted to the meat that we eat. Like it or not, animal husbandry contributes to climate change by taking away natural resources and emitting greenhouse gases. If we are to shift to not-meat but not rely on Impossible Burgers, what alternatives are there?

One is an AI called Giuseppe that does not reconstitute meat but instead recreates the perception of meat. It analyses how the molecules in foods create taste and texture, and recommends blends of plant-based analogues.

NotCo, the company that uses Giuseppe, has already created NotMayo, NotMilk, and NotMeat. The video featured the development of NotTuna.
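
Purely as an illustration (NotCo has not published Giuseppe’s internals), the core idea of matching taste and texture profiles can be sketched as a nearest-neighbour search over feature vectors:

```python
import numpy as np

# Hypothetical features: [umami, sweetness, fattiness, fibrousness]
tuna_profile = np.array([0.9, 0.1, 0.6, 0.7])

plant_profiles = {
    "chickpea":    np.array([0.4, 0.2, 0.3, 0.6]),
    "seaweed":     np.array([0.8, 0.0, 0.1, 0.3]),
    "pea protein": np.array([0.5, 0.1, 0.4, 0.8]),
}

# Recommend the plant ingredient closest to the target profile. A real
# system would search over blends of many ingredients, not single analogues.
best = min(plant_profiles,
           key=lambda name: np.linalg.norm(plant_profiles[name] - tuna_profile))
print(best)
```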

The third part of the video focused on predicting earthquakes. Like the poacher detection tool, sensors collect more noisy data than useful data. AI can be trained to recognise cultural sounds like transportation and construction, and distinguish those from a possible earthquake.
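
A minimal sketch of that training idea, assuming labelled sensor clips are available (the features and data here are placeholders, not a real seismic pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(waveform: np.ndarray) -> list:
    """Crude features: low- and high-frequency energy plus overall variance."""
    spectrum = np.abs(np.fft.rfft(waveform))
    return [spectrum[:50].sum(), spectrum[50:].sum(), waveform.var()]

rng = np.random.default_rng(0)
clips = rng.normal(size=(200, 1024))   # placeholder sensor recordings
labels = rng.integers(0, 2, size=200)  # 0 = cultural noise, 1 = earthquake

X = np.array([spectral_features(clip) for clip in clips])
classifier = RandomForestClassifier(random_state=0).fit(X, labels)

print(classifier.predict(X[:3]))       # classify the first three clips
```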

The final segment asked a broad question: Might AI be able to prevent disasters, unrest, or wars that stem from our misuse of natural resources?

To answer this question, a small company in the USA collects satellite images and relies on AI to identify and differentiate objects like solar panels and riverbeds. With AI as a tool, the company makes predictions such as the annual output of cultivated crops.

The predictions extend to man-made infrastructure and natural water sources. The example featured in the video was how measurements of snowfall could be used to predict water supply, which in turn correlates with crop yields.

If the snowfall was low, farmers could be advised to plant drought-resistant crops instead. If unrest or war stems from food or water shortages, such predictions might inform deployments of food aid before trouble erupts.
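
As a toy version of that prediction chain (all numbers invented), a simple regression from snowfall to expected yield is enough to trigger the advisory:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

snowfall_cm = np.array([[120], [95], [140], [60], [80]])  # past seasons (invented)
crop_yield = np.array([3.1, 2.6, 3.5, 1.8, 2.2])          # tonnes/hectare (invented)

model = LinearRegression().fit(snowfall_cm, crop_yield)
forecast = model.predict([[65]])[0]  # a low-snowfall season

if forecast < 2.0:
    print(f"Predicted yield {forecast:.1f} t/ha: advise drought-resistant crops")
```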

The overall message of this video countered the popular and disaster movie narratives of human-made AI running amok and killing us. Instead, it focused on how AI actually helps us become better humans.

Here is a phrase uttered and written so much that it has practically become a trope: Beware, robots will take our jobs.


Video source

Technology-enabled automation has always taken away old jobs, e.g., we do not need phone operators to manually connect us. But people conveniently forget how automation also creates new jobs, e.g., maintainers and improvers of phones. To that end, the video featured a truck driver whose duties evolved along with the development of automated truck-driving.

The automated truck-driving segment ended with the test driver stating that AI was not making people redundant. It was doing jobs that people no longer wanted to do.

The next video segment featured an automated sea port that moved the containers that arrived in ships. The repeated theme was that the human responsibility shifted from moving the containers to maintaining the robotic cranes and vehicles that moved the containers.

An important concept from both segments was that current AI might have good specific intelligence, but it has poor general intelligence. If an environment is controlled or if the problem is structured, AI is often safer, more efficient, and more effective than people.

The final segment was about a pizza chain’s order prediction, preparation, and delivery. It emphasised how humans and AI work together and countered the popular narrative of AI taking humans entirely out of the equation.

The underlying message was that people fight change that they do not like or do not understand. This is true in AI or practically any other change, e.g., policy, circumstance, practice.


Video source

This episode of the YouTube Original series on artificial intelligence (AI) was way out there. It focused on how AI might help us live on another planet. What follows are my notes on the episode.

If NASA’s plan to send humans to Mars by 2033 is to happen, various forms of AI need to be sent ahead to build habitats for life and work.

Current construction relies on experience and historical knowledge. AI-enabled construction (AKA generative design) compares and predicts how different designs might perform on Mars.

Side note: Closer to home, generative design also helps us make predictions by answering our what-if questions. What if this structure is placed here? What if there are more of them?
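
A toy illustration of that what-if loop (the scoring function is a made-up stand-in for a real physics simulator): enumerate candidate designs, score each one, and keep the best.

```python
import itertools

def simulate(wall_thickness_cm: float, dome_radius_m: float) -> float:
    """Hypothetical objective: radiation shielding minus material cost."""
    shielding = 0.8 * wall_thickness_cm
    material_cost = 0.05 * wall_thickness_cm * dome_radius_m ** 2
    return shielding - material_cost

# What if the walls were thicker? What if the dome were larger?
candidates = itertools.product([20, 40, 60], [3.0, 5.0, 8.0])
best = max(candidates, key=lambda design: simulate(*design))
print(best)  # the highest-scoring (thickness, radius) pair
```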

Other than modelling possibilities, AI that builds must not just follow instructions but also react, make decisions, and solve problems. A likely issue on Mars is using and replenishing resources. One building material is a biopolymer partly synthesised from maize. If AI is to farm corn on Mars, what might it learn from how we do it on Earth?

The video segued to the Netherlands, which despite its small size is the world’s second-largest exporter of fresh food. It owes this ability in large part to the AI-informed agricultural techniques developed at Wageningen University.

Most folk will probably relate to how developing AI for Mars actually helps us live better on Earth. It has the capacity to help us think and operate better in terms of how we consume and deploy resources. Imagine how much the rest of the world would benefit from scaling up the techniques developed in the Netherlands.

