Another dot in the blogosphere?

Posts Tagged ‘ai’

One of the simplest forms of digital curation is teaching YouTube algorithms what videos to suggest.

Curating by informing YouTube algorithms.

I do this by marking videos that I have no wish to watch with “not interested” (see screenshot above). I also remove some videos from my watched history listing.

Sometimes I watch videos based on a suggestion or a whim, but I find them irrelevant. If I do not remove them from my watch history, I will get suggestions that are related to those videos the next time I refresh my YouTube feed.

These simple steps are an example of cooperating with relatively simple AI so that algorithms work with and for me. This is human-AI synergy.

This Reddit thread was one response to the Boston Dynamics robot dog making its rounds in Bishan-Ang Mo Kio park. It was there to monitor social distancing and to remind park users to keep a safe distance from one another.

The title of the thread — Dystopian robot terrifies park goers in Bishan Park — reveals a state of mind that I call dy-stupid-ian.

I have said this in edtech classes I facilitate and I will say it again: If your only reference for artificial intelligence (AI) and robotics is the Terminator franchise, then your perspective is neither informed nor nuanced.

The entertainment industry likes to paint a dystopian picture of what AI and robots will do. There is even a Black Mirror episode (Metalhead) that featured similar-looking dogs. Somehow fear and worry embedded in fantasy are entertaining.

An education about AI and robotics is more mundane and requires hard work. But most of us need not be programmers and engineers to gain some basic literacy in those fields. For that, I recommend two excellent sources.

Video playlist

Video playlist

At the very least, the videos are a good way to spend time during a COVID-19 lockdown.

I like pointing out that the current state of artificial intelligence (AI) is no match for natural human stupidity. Consider the examples below.

Since this tweet in 2017, image recognition might have improved so that AI trained with specific datasets can distinguish between chihuahuas and muffins.

The video below highlights another barrier — AI struggles with human accents.

Video source

Overcoming this (and other barriers) might be helped by access to broader and better datasets. But such AI still operates at the level of artificial narrow intelligence (see Wheeler’s levels of AI). It is certainly not at the level of artificial general intelligence, much less at artificial super intelligence.

Video source

I’ll admit it. The title of this episode did not appeal to me from the get-go. Why use artificial intelligence (AI) to figure out if there are other forms of intelligent life in the galaxy?

Here is my bias: I would rather see the power of AI developed more for enabling better life on Earth. But I remain open-minded enough to learn something about the alien effort.

According to one scientist, the space data explored in the last 50 years is akin to a glass of water drawn from a total data set the size of the world’s oceans. So using AI makes sense.

I liked the honesty of another scientist who declared that he did not know exactly what he was looking for. He was simply looking for a blip of life against a sea of darkness. So again there is the counter narrative to the press and movies — we are not looking for aliens to battle.

So how might AI detect alien life? Pattern recognition of rare needles against vast amounts of hay in huge stacks.
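As a toy illustration of that needle-in-haystack idea (my own sketch, not the actual search pipeline), one could flag readings that deviate sharply from a known background:

```python
# Toy anomaly detection: flag rare "needles" -- readings that deviate
# strongly from the background noise. Real pipelines run far more
# sophisticated pattern recognition on radio telescope data.
from statistics import mean, stdev

def find_needles(background, readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the background mean."""
    mu, sigma = mean(background), stdev(background)
    return [i for i, r in enumerate(readings)
            if abs(r - mu) > threshold * sigma]

background = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0]
print(find_needles(background, [1.0, 0.98, 9.0, 1.02]))  # → [2]
```

The blip at index 2 is the kind of rare signal against a sea of noise that the scientist described.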

About halfway through the video, the content switched abruptly to synths — AI with bodies that mimic humans. Long story short, we are nowhere near what science fiction paints in books or movies. But the efforts to deconstruct and reconstruct the human body and mind are interesting (to put it mildly).

I liked how the video moved on to the ethics of synths. What rights would they have? Can they be taught good values? If they commit crimes, who is responsible? These proactive questions influence their design and development.

I think the episode was the final one. If it was, it was a good note to end on.

How might artificial intelligence (AI) prevent us from destroying ourselves? The seventh episode of this YouTube Original series provided some insights on how AI could help prevent animal extinction, famine, and war.

Video source

Take the battle against ivory poachers. Trap cameras take photos of anything that moves. They capture images of elephants, a host of other animals, and possibly the occasional poacher. But manually processing the photos for poachers is so time-consuming that it might be too late to save the elephants.

However, an AI-enabled camera, one with a vision processing unit, detects people and sends only those photos immediately to the park rangers. This gives the rangers more time to intercept the poachers.
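The filtering logic itself is simple; the hard part is the on-device vision model. In this sketch, `detect_person` is a hypothetical stand-in for the neural network running on the camera’s vision processing unit:

```python
# A hedged sketch of the trap camera's filtering step. `detect_person`
# stands in for the on-device vision model; here it just reads a label.
def detect_person(photo):
    # Hypothetical: a real camera classifies the image itself.
    return photo.get("label") == "person"

def photos_for_rangers(photos):
    """Forward only the photos that appear to contain a person, so
    rangers are not buried under thousands of animal images."""
    return [p for p in photos if detect_person(p)]

captures = [
    {"id": 1, "label": "elephant"},
    {"id": 2, "label": "person"},
    {"id": 3, "label": "antelope"},
]
print([p["id"] for p in photos_for_rangers(captures)])  # → [2]
```

Filtering at the camera means rangers see only the one photo that matters, immediately.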

In the second segment of the video, the focus shifted to the meat that we eat. Like it or not, animal husbandry contributes to climate change by taking away natural resources and emitting greenhouse gases. If we are to shift to not-meat but not rely on Impossible Burgers, what alternatives are there?

One is an AI called Giuseppe that does not reconstitute meat but instead creates the perception of meat. It analyses how molecules in all foods create taste and texture, and recommends blends of plant-based analogues.

NotCo, the company that uses Giuseppe, has already created NotMayo, NotMilk, and NotMeat. The video featured the development of NotTuna.

The third part of the video focused on predicting earthquakes. Like the poacher detection tool, sensors collect more noisy data than useful data. AI can be trained to recognise cultural sounds like transportation and construction, and distinguish those from a possible earthquake.

The final segment asked a broad question: Might AI be able to prevent disasters, unrest, or wars that stem from our misuse of natural resources?

To answer this question, a small company in the USA collects satellite images and relies on AI to identify and differentiate objects like solar panels and riverbeds. With AI as a tool, the company makes predictions like the output of cultivated crops in a year.

The predictions extend to man-made infrastructure and natural water sources. The example featured in the video was how measurements of snowfall could be used to predict water supply, which in turn correlates to crop yields.

If the snowfall was low, farmers could be advised to plant drought-resistant crops instead. If unrest or war stem from food or water shortage, such predictions might inform deployments of food aid before trouble erupts.
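The prediction chain described above can be sketched as a simple regression (my own toy model with invented numbers, not the company’s method): fit crop yield against historical snowfall, then feed in this season’s snowpack to advise farmers early.

```python
# Toy version of the snowfall -> water supply -> crop yield chain.
# All figures are invented for illustration.
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

snowfall = [80, 100, 120, 140]   # cm, past seasons (invented)
yields   = [2.0, 2.5, 3.0, 3.5]  # tonnes/ha, matching seasons

slope, intercept = fit_line(snowfall, yields)
predicted = slope * 60 + intercept  # a low-snow season
if predicted < 2.0:
    print("advise drought-resistant crops")
```

A low-snow forecast pushes the predicted yield below the historical range, which is the cue to recommend drought-resistant crops or pre-position food aid.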

The overall message of this video countered the popular and disaster movie narratives of human-made AI running amok and killing us. Instead, it focused on how AI actually helps us become better humans.

Here is a phrase uttered and written so much that it has practically become a trope: Beware, robots will take our jobs.

Video source

Technology-enabled automation has always taken away old jobs, e.g., we do not need phone operators to manually connect us. But people conveniently forget how automation also creates new jobs, e.g., maintainers and improvers of phones. To that end, the video featured a truck driver whose duties evolved along with the development of automated truck-driving.

The automated truck-driving segment ended with the test driver stating that AI was not making people redundant. It was doing jobs that people no longer wanted to do.

The next video segment featured an automated sea port that moved the containers that arrived in ships. The repeated theme was that the human responsibility shifted from moving the containers to maintaining the robotic cranes and vehicles that moved the containers.

An important concept from both segments was that current AI might have good specific intelligence, but it has poor general intelligence. If an environment is controlled or if the problem is structured, AI is often safer, more efficient, and more effective than people.

The final video was about a chain’s pizza order prediction, preparation, and delivery. It emphasised how humans and AI work together and countered the popular narrative of AI taking humans entirely out of the equation.

The underlying message was that people fight change that they do not like or do not understand. This is true in AI or practically any other change, e.g., policy, circumstance, practice.

Video source

This episode of the YouTube Original series on artificial intelligence (AI) was way out there. It focused on how AI might help us live on another planet. What follows are my notes on the episode.

If NASA’s plan to send humans to Mars by 2033 is to happen, various forms of AI need to be sent ahead to build habitats for life and work.

Current construction relies on experience and historical knowledge. AI-enabled construction (AKA generative design) compares and predicts how different designs might operate on Mars.

Side note: Closer to home, generative design also helps us make predictions by answering our what-if questions. What if this structure is placed here? What if there are more of them?

Other than modelling possibilities, AI that builds must not just follow instructions but also react, make decisions, and problem-solve. A likely issue on Mars is using and replenishing resources. One building material is a biopolymer partly synthesised from maize. If AI is to farm corn on Mars, what might it learn from how we do it on Earth?

The video segued to the Netherlands, which has the world’s second-highest fresh food production despite its small size. It owes this in large part to the AI-informed agricultural techniques developed at Wageningen University.

Most folk will probably relate to how developing AI for Mars actually helps us live life better on earth. It has the capacity to help us think and operate better in terms of how we consume and deploy resources. Imagine how much the rest of the world would benefit from scaling up the techniques developed in the Netherlands.

Video source

Can artificial intelligence (AI) emote or create art?

Perhaps the question is unfair. After all, some people we know might have trouble expressing their emotions or drawing basic shapes.

So it makes sense to see what something fuzzy like emotions might consist of. The components include the meaning of words, memory of events, and the expression of words. If that is the case, modern chat bots fit this basic bill.

On a higher plane are avatars like SimSensei that monitor human facial expressions and respond accordingly. Apparently it has been used in a comparative study for people suffering from PTSD. That study found that patients preferred the avatar because it was perceived to be less judgmental.

And then there are the robot companions that are still on the creepy side of the uncanny valley. These artificial-flesh, no-blood human analogues look and operate like flexible and more intelligent mannequins, but it is early days yet on this front.

As for whether AI can create art, consider Benjamin, an AI that writes screenplays. According to an AI expert, Pedro Domingos, art and creativity are easier for an AI than problem solving. AI can already create art that moves people and music that is indistinguishable from that of human composers.

The video does not say this, but such powerful AI are not commonplace yet. We still have AI that struggles to make sense of human fuzziness.

The third and last part of the video seemed like an odd inclusion — robot race car drivers. Two competing teams tested their robo-cars’ abilities to overtake another car. This was a test of strategic decision making and a proxy for aggression and competitiveness.

Like the previous videos in the series, this one did not conclude with firm answers but with questions instead. Will AI ever have the will to win, the depth to create, the empathy to connect on a deep human level? If humans are perpetuated biological algorithms, might AI evolve to emulate humans? Will they be more like us or not?

Video source

This was the final episode of the CrashCourse series on artificial intelligence (AI). It focused on the future of AI.

Instead of making firm predictions, the narrator opted to describe how far AI development has come and how much further it could go. He used self-driving cars as an example.

Five levels or milestones of self-driving AI.

Viewed this way, the development of AI is gauged on general milestones instead of specific states.

The narrator warned us that the AI of popular culture was still the work of science fiction as it had not reached the level of artificial general intelligence.

His conclusion was as expected: AI has lots of potential and risks. The fact that AI will likely evolve faster than the lay person’s understanding of it is a barrier to realising potential and mitigating risks.

Whether we develop AI or manage its risks, the narrator suggested some questions to ask when a company or government rolls out AI initiatives.

Questions about new AI initiatives.

I thoroughly enjoyed this 20-part series on AI. It provided important theoretical concepts that gave me more insights into the ideas that were mentioned in the new YouTube Original series, The Age of AI. Watching both series kept me informed and raised important questions for my next phase of learning.

Video source

This episode focused on how we might use artificial intelligence (AI) to augment ourselves to end human disability.

The first example in the video was artificial legs with embedded AI. The AI used machine learning to process a person’s movement to make the continuous and tiny adjustments that we take for granted. What was truly groundbreaking was how such limbs might be attached to existing muscles so that the person can feel the artificial limb.

The second example was improving existing abilities like analysis and decision-making in sports. The role of AI is to take large amounts of data and make predictions for the best payoffs. But despite the AI’s ability to process more than humans can intuit, we sometimes hold AI back because its recommendations seem contradictory.

We trust AI in some circumstances (e.g., recommending travel routes) but not in others (e.g., race strategies). The difference might be the low stakes of the former and the higher stakes of the latter.

The third example highlighted how we might enhance our vision and hearing while increasing trust in AI in high-stakes situations. It featured glasses that augmented vision for firefighters so that they could see in low or zero visibility. The camera and AI combine to detect and highlight edges like exits and victims.

The video ended with the message that increased trust in AI will make it ubiquitous and invisible. But for trust to be built, we need to remove ignorance, bias, and old perspectives.

AI can be a tool that we shape. But I am reminded of the adage that we first shape our tools and that our tools also shape us. This was true in our past and it will apply in our future.
