Another dot in the blogosphere?



Video source

Can artificial intelligence (AI) emote or create art?

Perhaps the question is unfair. After all, some people we know might have trouble expressing their emotions or making basic shapes.

So it makes sense to unpack what something as fuzzy as emotion might consist of. The components include the meaning of words, the memory of events, and the expression of words. If that is the case, modern chatbots fit this basic bill.

On a higher plane are avatars like SimSensei that monitor human facial expressions and respond accordingly. Apparently it has been used in a comparative study of people suffering from PTSD. That study found that patients preferred the avatar because it was perceived to be less judgmental.

And then there are the robot companions that are still on the creepy side of the uncanny valley. These flesh-like but bloodless human analogues look and operate like flexible, more intelligent mannequins, but it is early days yet on this front.

As for whether AI can create art, consider Benjamin, an AI that writes screenplays. According to AI expert Pedro Domingos, art and creativity are easier for an AI than problem solving. AI can already create art that moves people and music that is indistinguishable from that of human composers.

The video does not say this, but such powerful AI are not commonplace yet. We still have AI that struggles to make sense of human fuzziness.

The third and last part of the video seemed like an odd inclusion — robot race car drivers. Two competing teams tested their robo-cars’ abilities to overtake another car. This was a test of strategic decision making and a proxy for aggression and competitiveness.

Like the previous videos in the series, this one did not conclude with firm answers but with questions instead. Will AI ever have the will to win, the depth to create, the empathy to connect on a deep human level? If humans are perpetuated biological algorithms, might AI evolve to emulate humans? Will they be more like us or not?


Video source

This was the final episode of the CrashCourse series on artificial intelligence (AI). It focused on the future of AI.

Instead of making firm predictions, the narrator opted to describe how far AI development has come and how much further it could go. He used self-driving cars as an example.

Five levels or milestones of self-driving AI.

Viewed this way, the development of AI is gauged against general milestones instead of specific states.

The narrator warned us that the AI of popular culture was still the work of science fiction as it had not reached the level of artificial general intelligence.

His conclusion was as expected: AI has lots of potential and risks. The fact that AI will likely evolve faster than the lay person’s understanding of it is a barrier to realising potential and mitigating risks.

Whether we develop AI or manage its risks, the narrator suggested some questions to ask when a company or government rolls out AI initiatives.

Questions about new AI initiatives.

I thoroughly enjoyed this 20-part series on AI. It provided important theoretical concepts that gave me more insights into the ideas that were mentioned in the new YouTube Original series, The Age of AI. Watching both series kept me informed and raised important questions for my next phase of learning.


Video source

The second episode of the YouTube Original series on artificial intelligence (AI) focused on how it might compensate for human diseases or conditions.

One example was how speech recognition, live transcription, and machine learning helped a hearing-impaired scientist communicate. The AI was trained to recognise voice and transcribe his words on his phone screen.

Distinguishing between words like “there”, “their”, and “they’re” required machine learning on large datasets of words and sentences so that the AI learnt grammar and syntax. But while such an AI might recognise the way most people speak, the scientist had a strong accent and he had to retrain it to recognise the way he spoke.
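
The episode did not get into code, but the gist of that disambiguation can be sketched as a toy: score each candidate word by how often it sits next to its neighbours in some training text. This is my own illustration (a tiny bigram count in Python), not Google’s actual model, which learns far richer patterns from far more data.

```python
# Toy disambiguation of "there", "their", and "they're" using bigram counts
# from a tiny corpus. Purely illustrative; real systems use large language models.
from collections import Counter

corpus = (
    "they're going to the park . "
    "their dog is over there . "
    "there is a problem with their code . "
    "they're sure their answer is there ."
).split()

bigrams = Counter(zip(corpus, corpus[1:]))

def best_candidate(prev_word, next_word, candidates=("there", "their", "they're")):
    """Pick the candidate that most often follows prev_word and precedes next_word."""
    return max(candidates, key=lambda c: bigrams[(prev_word, c)] + bigrams[(c, next_word)])

print(best_candidate("over", "."))     # expect "there"
print(best_candidate("with", "code"))  # expect "their"
```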

Recognising different accents is one thing, recognising speech by individuals afflicted with Lou Gehrig’s disease or amyotrophic lateral sclerosis (ALS) is another. The nerve cells of people with ALS degenerate over time and this slurs their speech. Samples of speech from people with ALS combined with machine learning might allow them to communicate with others and to control devices remotely.

Another human condition is diabetic retinopathy — blindness brought on by diabetes. This problem is particularly acute in India because there are not enough eye doctors to screen patients. AI could be trained to read retinal scans to detect early cases of this condition. To do this, doctors grade initial scans on five levels and AI learns to recognise and grade new scans.
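
The actual system is far more sophisticated, but the grading idea can be sketched minimally: train a classifier on examples that doctors have already graded on the five levels, then let it grade new scans. The feature vectors below are made up; real diabetic retinopathy screening trains deep neural networks on the raw images.

```python
# Minimal sketch of learning to grade scans on five levels (0-4) with scikit-learn.
# Synthetic feature vectors stand in for doctor-graded retinal scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scans, n_features = 500, 20

X = rng.normal(size=(n_scans, n_features))      # stand-in for image features
severity = X @ rng.normal(size=n_features)      # latent disease severity
y = np.digitize(severity, np.quantile(severity, [0.2, 0.4, 0.6, 0.8]))  # grades 0-4

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```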

This episode took care not to paint only a rosy picture. AI needs to learn and it makes mistakes. The video illustrated this when Google engineers tested phone-based AI on the speech patterns of a person with ALS.

Some cynics might say that the YouTube video is an elaborate advertisement for Google’s growing prowess in AI. But I say that there is more than enough negativity about AI and much of it is based on fiction and ignorance. We need to look forward with responsible, helpful, and powerful possibilities.


Video source

Would you take anything about artificial intelligence seriously if it was delivered by Robert Downey Jr (aka Tony Stark aka Iron Man)?

Well, he is the host of a scripted eight-part documentary series, so the authenticity and accuracy of the content are subject to whoever curated and connected the most current information. The series is a “YouTube Original” but there is scant information beyond that.

The first episode focused on the development of digital consciousness, affective (emotional) computing, and human augmentation. The examples explored in this episode included a digital child (BabyX), customer service avatars, and advanced prosthetics.

One of the most important concepts that a layperson might take away from the episode is that AI is not an independent and all-powerful entity. The best AI now is a combination of human and machine with the latter modelled on the former.

The other concept, capturing, augmenting, and improving upon human intelligence, raises the question of how far we should go. This is the same question we face with another technological development: DNA manipulation.

The series seems like a very promising one and I hope to catch the remaining episodes.


Video source

This was another episode that focused on hands-on Python coding using Google Colaboratory. It was an application of concepts covered so far on dealing with biased algorithms.

The takeaway for programmers and lay folk alike might be that there is no programme free from undesirable bias. We need to iterate on designs to reduce such bias.
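
The notebook itself is in the video, but the spirit of measure-then-iterate might look like this toy check of selection rates across two groups. The scores, groups, and thresholds below are all invented.

```python
# Toy bias check: compare how often two groups are selected at different thresholds,
# then keep the design with the smallest gap and check again. Invented data.
import numpy as np

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.55, 0.15, 500),   # group A
                         rng.normal(0.45, 0.15, 500)])  # group B
groups = np.array(["A"] * 500 + ["B"] * 500)

def selection_rates(threshold):
    selected = scores >= threshold
    return {g: selected[groups == g].mean() for g in ("A", "B")}

for threshold in (0.5, 0.55, 0.6):
    rates = selection_rates(threshold)
    gap = abs(rates["A"] - rates["B"])
    print(f"threshold {threshold}: A={rates['A']:.2f}, B={rates['B']:.2f}, gap={gap:.2f}")
```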


Video source

This was an episode that anyone could and should watch. It focused on bias and fairness as applied in artificial intelligence (AI).

The narrator took care to first distinguish between being biased and being discriminatory. We all have bias (e.g., because of our upbringing), but we should prevent discrimination. Since AI adopts our bias, we need to be more aware of ourselves so as to prevent AI from discriminating harmfully by gender, race, religion, etc.

What are some examples of applied bias? Google image search for “nurse” and you are likely to see photos of women; do the same for “programmer” and you are more likely to see men in the photos.

The narrator suggested five sources of bias. I paraphrase them as follows:

  1. Existing data are already biased (e.g., the photo example above)
  2. New training data is unbalanced (e.g., providing photos of faces largely from one main race; see the sketch after this list)
  3. Data is reductionist and/or incomplete (e.g., creative writing is difficult to measure and simpler proxies like vocabulary are used instead)
  4. Positive feedback loops (e.g., past actions are repeated as future ones regardless of context)
  5. Manipulation by harmful agents (e.g., users teaching Microsoft’s Tay to tweet violence and racism)
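
To make source #2 concrete, a first sanity check is simply counting how each group is represented in the training data. The labels below are invented; this is my own sketch, not anything from the episode.

```python
# Quick check for unbalanced training data: count how often each group appears.
from collections import Counter

training_labels = ["group_a"] * 820 + ["group_b"] * 130 + ["group_c"] * 50

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} samples ({n / total:.0%})")
# If one group dominates (82% here), a model trained on this data will likely
# perform worse on the under-represented groups.
```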


Video source

Finally. An episode on how search engines use AI to help (or not help) us find answers to questions.

The narrator likened search engines to library systems: They had to gather data, organise them, and find and present answers when needed.

The gathering of data is done by web crawlers — programmes that find and download web pages. The data is then organised by reverse indexes (like those at the back of textbooks).

The indexed web content is numbered, and each word in the index points to the numbers of the content that contains it. Each time we search, the engine looks up our query terms in the index and follows those numbers back to the associated web content.

Example of indexing.
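
The video’s worked example is clearer, but a reverse index can be sketched in a few lines of Python. This is my own toy version, not how any real engine stores its index.

```python
# Toy reverse (inverted) index: each word maps to the numbered pages that contain it.
from collections import defaultdict

pages = {
    1: "crash course artificial intelligence",
    2: "intelligence of crows and other birds",
    3: "crash test results for small cars",
}

index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

print(index["crash"])                          # {1, 3}
print(index["intelligence"])                   # {1, 2}
print(index["crash"] & index["intelligence"])  # a query intersects the sets: {1}
```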

Since there is so much content, it needs to be ranked by accuracy, relevance, recency, etc. We help the AI do this with signals ranging from bounces (returning to the search results) to click-throughs (staying with what we were presented).

The narrator also explained how we might be presented with immediate answers and not just links to possibly relevant web resources. AIs use knowledge bases instead of reverse indexes.

Knowledge bases might be built with NELL — Never Ending Language Learner. The video explains this better than I can.

NELL — Never Ending Language Learner.

Fair warning: Search engines still suck at questions that are rarely asked or are nuanced. AI is still limited by what data is available. This means that it is subject to the bias of people who provide data artefacts.

The next episode is about dealing with such bias. Now the series gets really interesting!


Video source

This was an episode that would make a novice coder happy because it provided practice.

It did not apply to me because I was merely getting some basics and keeping myself up to date for a course I facilitate.

In this episode, the host led a session on how to code a movie recommendation system. To do this, he revisited concepts like pooling large datasets, getting personalised ratings, and implementing collaborative filtering. Along the way, the host suggested solutions for incomplete data, cold starts, and poor filtering.
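
The episode’s Colab notebook is the real practice; what follows is only my own minimal sketch of user-based collaborative filtering on a made-up ratings table, with a crude fallback for the cold start problem.

```python
# Minimal user-based collaborative filtering on a tiny, made-up ratings matrix
# (NaN = not yet rated). Not the episode's notebook; an illustration only.
import numpy as np

ratings = np.array([
    # MovieA MovieB  MovieC  MovieD
    [5.0,    4.0,    np.nan, 1.0],   # user 0
    [4.0,    np.nan, 4.0,    1.0],   # user 1
    [1.0,    2.0,    1.0,    5.0],   # user 2 (very different taste)
])

def predict(user, movie):
    """Predict a missing rating as a similarity-weighted average of other users' ratings."""
    weights, values = [], []
    for u in range(len(ratings)):
        if u == user or np.isnan(ratings[u, movie]):
            continue
        shared = ~np.isnan(ratings[user]) & ~np.isnan(ratings[u])
        if not shared.any():
            continue  # no overlap with this neighbour, so they cannot help
        a, b = ratings[user, shared], ratings[u, shared]
        weights.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        values.append(ratings[u, movie])
    if not weights:
        return np.nanmean(ratings[:, movie])  # cold start: fall back to the movie's average
    return np.average(values, weights=weights)

print(round(predict(0, 2), 2))  # user 0's predicted rating for MovieC
```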

The next episode promises to provide insights on how search engines make recommendations.

Two weeks ago, I shared this announcement about Singapore’s ten-year plan for AI and focused on how it might affect schooling.

I left my reflection on AI for grading on slow burn for a while. I am enjoying a break, but I also enjoy wrestling with dubious change.

Yes, dubious. But first, two caveats.

First, the vendors that the Ministry of Education, Singapore, works with are not going to be transparent with their technologies, so I cannot be absolutely certain of the AI development runway, timeline, and capabilities.

Next, the field of AI is not new and it is diverse. Parts of it evolve more quickly or slowly than we might expect. For example, handwriting recognition has been around since before Microsoft released its slate PCs. It was good enough to recognise some doctor scrawls even back then!

However, the Hollywood vision that AI will replace or even kill us off has not materialised. An expert might point out that AI is not good at making social predictions and ethical decisions. I simply point out that artificial intelligence is still no match for natural stupidity.

Back to the issue — we need to consume claims made by policymakers and edtech vendors critically. And more critically if they are reported by the mainstream media that thrives on sensationalism.

Do not take my word for it. Take this expert’s view that some claims are “snake oil”. In his slides, he put these claims into three categories: Genuine and rapid progress; imperfect but improving; and fundamentally dubious.

Slide #10 from https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf

I highlighted “automated essay grading” in the screenshot above because that coincides with our 2022 plan to “launch automated marking system for English in primary and secondary schools”.

The fundamental issue is AI’s ability to automate judgement. Some judgements are simple and objective, others are complex and subjective. Written language falls in the latter category particularly when the writers get older and are expected to write in more complex and subjective ways.

Anyone who has had to grade essays will know what rubrics and “standardisation” sessions are. Rubrics provide standards, guidelines, and point allocation. Standardisation meetings are when a group of assessors get a small and common set of essays, grade those essays, and compare the marks. Those same meetings set the standard for the definitions of subjectivity, disagreement, and frustration.

Might AI in three years be able to find the holy grail of objective and perfect grading of subjective and imperfect writing? Perhaps. If it does so, it might be less a result of rapid technological evolution and more one of social manipulation.

To facilitate AI processing of essays, students might be required to use proprietary tools and platforms. For example, they might have to use word-processed forms instead of handwriting. They could be told to write in machine-readable ways, e.g., only five paragraphs, structured paragraphs, model phrases, etc. In other words, force-fitting writers and writing.
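
To make the worry concrete, here is a deliberately crude caricature of proxy-based scoring. It is nothing like whatever the vendors might actually build (that is not public); it only shows how countable proxies reward formula over thought.

```python
# A caricature of reductionist essay scoring: it rewards countable proxies
# (vocabulary variety, a five-paragraph shape, length), not ideas.
def proxy_score(essay: str) -> float:
    words = essay.split()
    paragraphs = [p for p in essay.split("\n\n") if p.strip()]
    vocabulary_richness = len(set(w.lower() for w in words)) / max(len(words), 1)
    five_paragraph_bonus = 1.0 if len(paragraphs) == 5 else 0.0
    # Arbitrary weights; a student who knows them can game every one of them.
    return 40 * vocabulary_richness + 30 * five_paragraph_bonus + 30 * min(len(words) / 500, 1)

print(proxy_score("A thoughtful two-paragraph essay...\n\nwith original ideas."))
print(proxy_score(("Filler sentence with assorted vocabulary words. " * 20 + "\n\n") * 5))
```

The formulaic filler outscores the short, original piece, which is exactly the kind of gaming I describe below.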

This is already how some tuition and enrichment centres operate. They reduce essay writing to formulae and teach these strategies to kids. Students are not encouraged to make mistakes, learn from them, or develop creative and critical thought. They are taught to game the algorithms.

The algorithms are the teachers’ expectations and rubrics now. They could be the AI algorithms in future. But the same reductionist strategy applies because we foolishly prefer shortcuts.

The AI expert I highlighted earlier focused on how ill-equipped AI is to predict social outcomes. He concluded his talk with this slide.

Concluding slide (#21) from https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf

We might also apply his last two points to automated essay grading: resist commercial interests that aim to hide what AI cannot do, and focus on what is accurate and transparent.

This is not my way of stifling innovation as enabled by educational technology. I wear my badge of edtech evangelist proudly. But I keep that badge polished with critical thought and informed practice.

It took a while, but CrashCourse finally provided some insights into how YouTube, Netflix, and Amazon make recommendations.


Video source

Long story short: The AI recommendations are based on supervised and unsupervised learning. The interesting details are that the algorithms may be content-based, social-based, or personalised.

Content-based algorithms examine what is in, say, YouTube videos. Social-based algorithms focus on what the audience does (e.g., likes, views, time spent watching). As we have different preferences, algorithms can learn what we like and serve us similar content or content from the same provider.
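
As a rough illustration of the content-based part (my own toy example, not YouTube’s system), items can be compared by how much their descriptions overlap.

```python
# Toy content-based comparison: videos described by tags, compared with
# Jaccard similarity. Real systems learn much richer features automatically.
videos = {
    "crash_course_ai_ep1": {"education", "ai", "machine learning"},
    "crash_course_ai_ep2": {"education", "ai", "neural networks"},
    "cat_compilation_42":  {"cats", "funny", "pets"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

watched = "crash_course_ai_ep1"
for title, tags in videos.items():
    if title != watched:
        print(title, round(jaccard(videos[watched], tags), 2))
# The video sharing more tags with what we just watched would be recommended first.
```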

The recommendations we see on YouTube are a combination of all three and the process is called collaborative filtering. This relies on unsupervised learning to predict what we might like based on what other users similar to us also like/watch.

The AI might make mistakes in the recommendations. This can be due to sparse data (e.g., low views, low likes), cold starts (i.e., AI does not know enough about us initially), and statistics (i.e., what is likely is not the same as what is contextually relevant). A good example of this sort of mistake is online ads.

Some pragmatics: To get good recommendations, we might subscribe and like videos from content creators we appreciate. To avoid getting tracked, we might use the incognito mode in most modern web browsers.

