Another dot in the blogosphere?

Posts Tagged ‘age’

Maybe it is age catching up on me, but I still feel drained from facilitating a four-hour class yesterday.

Maybe I am more used to three-hour modules or workshops. That seems to be the norm and I have forgotten what it is like to play in overtime.

Maybe I should factor in travel time. Depending on where the class is, it takes an hour to an hour-and-a-half each way on public transport. Surely that hustle and bustle has an impact.

Maybe it is because I make it a point to arrive at least an hour before class to rearrange the physical environment of the classroom, check the lighting, and test all audio-video systems.

Maybe it is simply the accumulation of preparatory work and the sheer energy of facilitating over just didactic teaching that consumes my energy.

Maybe I should not overthink it — I am just getting old.

How might artificial intelligence (AI) prevent us from destroying ourselves? The seventh episode of this YouTube Original series provided some insights on how AI could help prevent animal extinction, famine, and war.

Video source

Take the battle against ivory poachers. Trap cameras take photos of anything that moves. They capture images of elephants, a host of other animals, and possibly the occasional poacher. But manually processing the photos for poachers is so time-consuming that it might be too late to save the elephants.

However, an AI-enabled camera, one with a vision processing unit, detects people and sends only those photos immediately to the park rangers. This gives the rangers more time to intercept the poachers.
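The camera-side filtering idea can be sketched in a few lines. This is a toy illustration, not the actual system: `looks_like_person` stands in for a real on-device vision model, and the frame data is invented.

```python
# Sketch of the camera-side filter described above: a person
# classifier runs on-device, and only frames flagged as containing
# a person are forwarded to the rangers.

def looks_like_person(frame: dict) -> bool:
    # Placeholder for an on-device vision model's prediction.
    return frame.get("label") == "person"

def frames_to_forward(frames: list[dict]) -> list[dict]:
    """Keep only frames the on-device model flags as containing a person."""
    return [f for f in frames if looks_like_person(f)]

# Invented captures: most frames are animals, one is a possible poacher.
captured = [
    {"id": 1, "label": "elephant"},
    {"id": 2, "label": "person"},
    {"id": 3, "label": "antelope"},
]
alerts = frames_to_forward(captured)  # only frame 2 reaches the rangers
```

The point of the design is bandwidth and attention: rangers see one photo instead of thousands, which buys them the interception time the video describes.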

In the second segment of the video, the focus shifted to the meat that we eat. Like it or not, animal husbandry contributes to climate change by taking away natural resources and emitting greenhouse gases. If we are to shift to not-meat but not rely on Impossible Burgers, what alternatives are there?

One is an AI called Giuseppe that does not reconstitute meat but creates the perception of meat instead. It analyses how molecules in all foods create taste and texture, and recommends blends of analogues from plants.

NotCo, the company that uses Giuseppe, has already created NotMayo, NotMilk, and NotMeat. The video featured the development of NotTuna.

The third part of the video focused on predicting earthquakes. Like the poacher detection tool, sensors collect more noisy data than useful data. AI can be trained to recognise cultural sounds like transportation and construction, and distinguish those from a possible earthquake.

The final segment asked a broad question: Might AI be able to prevent disasters, unrest, or wars that stem from our misuse of natural resources?

To answer this question, a small company in the USA collects satellite images and relies on AI to identify and differentiate objects like solar panels and riverbeds. With AI as a tool, the company makes predictions like the output of cultivated crops in a year.

The predictions extend to man-made infrastructure and natural water sources. The example featured in the video was how measurements of snowfall could be used to predict water supply, which in turn correlates to crop yields.

If the snowfall was low, farmers could be advised to plant drought-resistant crops instead. If unrest or war stems from food or water shortage, such predictions might inform deployments of food aid before trouble erupts.
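The prediction chain above (snowfall predicts water supply, which correlates with yield) can be illustrated with a toy linear fit. All the numbers here are invented; a real model would be fitted to historical satellite and harvest data.

```python
# Toy illustration of the chain described above: snowpack depth is
# fitted against past crop yields, and the fitted line turns a new
# snowfall measurement into planting advice.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

# Invented history: snowpack depth (cm) vs. relative crop yield.
snow = [80, 100, 120, 140]
yields = [0.6, 0.8, 1.0, 1.2]
a, b = fit_line(snow, yields)

def advise(snowpack_cm, threshold=0.7):
    """Advise drought-resistant crops when predicted yield is low."""
    predicted = a * snowpack_cm + b
    return "plant drought-resistant crops" if predicted < threshold else "plant as usual"
```

A thin snowpack of 60 cm predicts a poor yield and triggers the drought-resistant advice, which mirrors the scenario in the video.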

The overall message of this video countered the popular and disaster movie narratives of human-made AI running amok and killing us. Instead, it focused on how AI actually helps us become better humans.

Here is a phrase uttered and written so much that it has practically become a trope: Beware, robots will take our jobs.

Video source

Technology-enabled automation has always taken away old jobs, e.g., we do not need phone operators to manually connect us. But people conveniently forget how automation also creates new jobs, e.g., maintainers and improvers of phones. To that end, the video featured a truck driver whose duties evolved along with the development of automated truck-driving.

The automated truck-driving segment ended with the test driver stating that AI was not making people redundant. It was doing jobs that people no longer wanted to do.

The next video segment featured an automated sea port that moved the containers that arrived in ships. The repeated theme was that the human responsibility shifted from moving the containers to maintaining the robotic cranes and vehicles that moved the containers.

An important concept from both segments was that current AI might have good specific intelligence, but it has poor general intelligence. If an environment is controlled or if the problem is structured, AI is often safer, more efficient, and more effective than people.

The final video was about a chain’s pizza order prediction, preparation, and delivery. It emphasised how humans and AI work together and countered the popular narrative of AI taking humans entirely out of the equation.

The underlying message was that people fight change that they do not like or do not understand. This is true in AI or practically any other change, e.g., policy, circumstance, practice.

Video source

This episode of the YouTube Original series on artificial intelligence (AI) was way out there. It focused on how AI might help us live on another planet. What follows are my notes on the episode.

If NASA’s plan to send humans to Mars by 2033 is to happen, various forms of AI need to be sent ahead to build habitats for life and work.

Current construction relies on experience and historical knowledge. AI-enabled construction (AKA generative design) compares and predicts how different designs might operate on Mars.

Side note: Closer to home, generative design also helps us make predictions by answering our what-if questions. What if this structure is placed here? What if there are more of them?

Other than modelling possibilities, AI that builds must not just follow instructions but also react, make decisions, and problem-solve. A likely issue on Mars is using and replenishing resources. One building material is a biopolymer partly synthesised from maize. If AI is to farm corn on Mars, what might it learn from how we do it on Earth?

The video segued to the Netherlands which has the world’s second-highest fresh food production despite its small size. It owes this ability in large part to the AI-informed agricultural techniques developed at Wageningen University.

Most folk will probably relate to how developing AI for Mars actually helps us live better on Earth. It has the capacity to help us think and operate better in terms of how we consume and deploy resources. Imagine how much the rest of the world would benefit from scaling up the techniques developed in the Netherlands.

Video source

Can artificial intelligence (AI) emote or create art?

Perhaps the question is unfair. After all, some people we know might have trouble expressing their emotions or making basic shapes.

So it makes sense to see what something fuzzy like emotions might consist of. The components include the meaning of words, memory of events, and the expression of words. If that is the case, modern chat bots fit this basic bill.

On a higher plane are avatars like SimSensei that monitor human facial expressions and respond accordingly. Apparently it has been used in a comparative study for people suffering from PTSD. That study found that patients preferred the avatar because it was perceived to be less judgmental.

And then there are the robot companions that are still on the creepy side of the uncanny valley. These flesh-like but bloodless human analogues look and operate like flexible, more intelligent mannequins, but it is early days yet on this front.

As for whether AI can create art, consider Benjamin, an AI that writes screenplays. According to AI expert Pedro Domingos, art and creativity are easier for an AI than problem solving. AI can already create art that moves people and music that is indistinguishable from that of human composers.

The video does not say this, but such powerful AI are not commonplace yet. We still have AI that struggles to make sense of human fuzziness.

The third and last part of the video seemed like an odd inclusion — robot race car drivers. Two competing teams tested their robo-cars’ abilities to overtake another car. This was a test of strategic decision making and a proxy for aggression and competitiveness.

Like the previous videos in the series, this one did not conclude with firm answers but with questions instead. Will AI ever have the will to win, the depth to create, or the empathy to connect on a deep human level? If humans are perpetuated biological algorithms, might AI evolve to emulate humans? Will they be more like us or not?

Video source

This episode focused on how we might use artificial intelligence (AI) to augment ourselves to end human disability.

The first example in the video was artificial legs with embedded AI. The AI used machine learning to process a person’s movement to make the continuous and tiny adjustments that we take for granted. What was truly groundbreaking was how such limbs might be attached to existing muscles so that the person can feel the artificial limb.

The second example was improving existing abilities like analysis and decision-making in sports. The role of AI is to take large amounts of data and make predictions for the best payoffs. But despite AI's ability to process more than humans can intuit, we sometimes hold AI back because its recommendations seem contradictory.

We trust AI in some circumstances (e.g., recommending travel routes) but not in others (e.g., race strategies). The difference might be the low stakes of the former and the higher stakes of the latter.

The third example highlighted how we might enhance our vision and hearing while increasing trust in AI in high-stakes situations. It featured glasses that augmented vision for firefighters so that they could see in low or zero visibility. The camera and AI combine to detect and highlight edges, such as exits and victims.

The video ended with the message that increased trust in AI will make it ubiquitous and invisible. But for trust to be built, we need to remove ignorance, bias, and old perspectives.

AI can be a tool that we shape. But I am reminded of the adage that we first shape our tools and that our tools also shape us. This was true in our past and it will apply in our future.

Video source

The second episode of the YouTube Original series on artificial intelligence (AI) focused on how it might compensate for human disease or conditions.

One example was how speech recognition, live transcription, and machine learning helped a hearing-impaired scientist communicate. The AI was trained to recognise voice and transcribe his words on his phone screen.

Distinguishing usage of words like “there”, “their”, and “they’re” required machine learning of large datasets of words and sentences so that the AI learnt grammar and syntax. But while such an AI might recognise the way most people speak, the scientist had a strong accent and he had to retrain it to recognise the way he spoke.
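The homophone problem above can be made concrete with a toy model. Real systems learn word patterns from large corpora; the tiny bigram table here is invented purely for illustration.

```python
# Toy sketch of disambiguating homophones like "there", "their", and
# "they're" from surrounding words. Counts of (previous word,
# candidate) pairs stand in for what machine learning would extract
# from large datasets of sentences.

BIGRAMS = {
    ("over", "there"): 9,
    ("over", "their"): 1,
    ("lost", "their"): 8,
    ("lost", "there"): 1,
    ("say", "they're"): 7,
}

def pick_homophone(prev_word, candidates=("there", "their", "they're")):
    """Pick the candidate most often seen after `prev_word`."""
    return max(candidates, key=lambda c: BIGRAMS.get((prev_word, c), 0))
```

With richer context (whole sentences rather than one preceding word), the same idea scales up to the grammar- and syntax-aware transcription the video describes, and retraining amounts to updating these learned statistics for one speaker's accent.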

Recognising different accents is one thing, recognising speech by individuals afflicted with Lou Gehrig’s disease or amyotrophic lateral sclerosis (ALS) is another. The nerve cells of people with ALS degenerate over time and this slurs their speech. Samples of speech from people with ALS combined with machine learning might allow them to communicate with others and remotely control devices.

Another human condition is diabetic retinopathy — blindness brought on by diabetes. This problem is particularly acute in India because there are not enough eye doctors to screen patients. AI could be trained to read retinal scans to detect early cases of this condition. To do this, doctors grade initial scans on five levels and AI learns to recognise and grade new scans.
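The grading workflow above is a classic supervised-learning setup. A minimal sketch, with made-up single-number "features" and a one-nearest-neighbour rule standing in for a real image model:

```python
# Sketch of the retinopathy grading described above: doctors assign
# grades 1-5 to an initial set of scans, and the AI grades new scans
# by similarity to those doctor-graded examples.

def nearest_grade(feature, graded_scans):
    """Return the grade of the most similar doctor-graded scan."""
    closest = min(graded_scans, key=lambda s: abs(s[0] - feature))
    return closest[1]

# Invented (feature, doctor's grade) pairs for five severity levels.
graded = [(0.1, 1), (0.3, 2), (0.5, 3), (0.7, 4), (0.9, 5)]
```

A new scan whose feature falls near a graded example inherits that grade, which is the essence of the screening aid: doctors label once, and the model extends their judgment to the backlog of unscreened patients.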

This episode took care not to paint only a rosy picture. AI needs to learn and it makes mistakes. The video illustrated this when Google engineers tested phone-based AI on the speech patterns of a person with ALS.

Some cynics might say that the YouTube video is an elaborate advertisement for Google’s growing prowess in AI. But I say that there is more than enough negativity about AI and much of it is based on fiction and ignorance. We need to look forward with responsible, helpful, and powerful possibilities.

Video source

Would you take anything about artificial intelligence seriously if it was delivered by Robert Downey Jr (aka Tony Stark aka Iron Man)?

Well, he is the host of a scripted eight-part documentary series, so the authenticity and accuracy of the content depend on whoever curated and connected the most current information. The series is a “YouTube Original” but there is scant information beyond that.

The first episode focused on the development of digital consciousness, affective (emotional) computing, and human augmentation. The examples explored in this episode included a digital child (BabyX), customer service avatars, and advanced prosthetics.

One of the most important concepts that a layperson might take away from the episode is that AI is not an independent and all-powerful entity. The best AI now is a combination of human and machine with the latter modelled on the former.

The other concept, capturing, augmenting, and improving upon human intelligence, raises the question of how far we should go. This is the same question raised by another technological development: DNA manipulation.

The series seems like a very promising one and I hope to catch the remaining episodes.

I created this image quote in 2015 after reading a variant of the words attributed to George Bernard Shaw.

We do not stop playing because we grow old. We grow old because we stop playing.

But with every axiom comes exceptions.

Video source

According to the research cited in this video, age is a factor at the highest levels of video gaming.

However, this does not invalidate the principle that we do not have to outgrow curiosity, a sense of fun, or risk-taking. Older gamers also learn to metagame — they devise strategies to compensate for split-second slowness.

Whether you have a good memory or not, your memory is imperfect. According to this video, our memories are like fake news if we can only compare one part of a book with another part of the same book.

Video source

The video went on to explain how psychologically easy it is to implant false memories or reshape existing ones, and then influence a person to accept falsehood as fact.

If we shave our memory problem down to its core, the issue is that we do not have one or more other reliable sources for comparison.

Take the recent fake news scandal about one of our Ministers of Education — and later our Director-General of Education — supposedly claiming that Singapore was winning the wrong academic race. The person who wrote the article reported a false memory.

It was impossible to prove because the event was not open to everyone. It was only when the speech transcript and video recording were shared months after the fact that there were sources for comparison.

In a similar vein, the invisibility of a learner’s thinking can lead a teacher to make assumptions. To compare what the teacher and student know, the teacher can require the student to make their thinking and learning visible.

I do not rely on just my memory. I externalise it by tweeting, blogging, photographing, and video recording. These provide evidence of memories and learning that I can compare with what I have in my head.

A value of current technology is that it is helping us arrive at the Cyborg Age. Our memories are less fallible because we augment and improve them with various technologies.
