I watched Deputy Prime Minister (DPM) and Finance Minister, Mr Heng Swee Keat, deliver the Fortitude Budget in Parliament yesterday. It was the fourth budget speech in the Singapore government’s response to the COVID-19 pandemic.
I also monitored my Twitter stream for takeaways from the major English-language newspapers. None of them mentioned the schooling- and education-related headlines from DPM’s speech.
With regard to home-based learning (HBL), DPM announced a greater role for artificial intelligence (AI) and learning sciences. As there were scant details, I presume that AI will play a role in learning analytics while researchers in the learning sciences will be consulted on e-pedagogy.
DPM also announced an “accelerated timeline” for all secondary school students to receive digital devices. This was initially announced in March and the device could take the form of a tablet, laptop, or Chromebook. The goal then was for all secondary 1 students to own a device by 2024 and all secondary school students to have one by 2028.
There was no firm timeline in the budget speech for both announcements. We do not yet know what an accelerated timeline means for the ownership of devices, nor do we know how long the changes to HBL will take.
All the changes are urgent and important. They are needed immediately and over the long haul. While these changes might not be as tweet-worthy to the newspapers, I aim to read and summarise what I learn in the weeks to come.
The Age of AI 2
Posted December 27, 2019
The second episode of the YouTube Original series on artificial intelligence (AI) focused on how it might compensate for human diseases or conditions.
One example was how speech recognition, live transcription, and machine learning helped a hearing-impaired scientist communicate. The AI was trained to recognise his voice and transcribe his words on his phone screen.
Distinguishing between words like “there”, “their”, and “they’re” required machine learning on large datasets of words and sentences so that the AI learnt grammar and syntax. But while such an AI might recognise the way most people speak, the scientist had a strong accent and had to retrain it to recognise the way he spoke.
Recognising different accents is one thing; recognising the speech of individuals with Lou Gehrig’s disease, or amyotrophic lateral sclerosis (ALS), is another. The nerve cells of people with ALS degenerate over time and this slurs their speech. Samples of speech from people with ALS, combined with machine learning, might allow them to communicate with others and remotely control devices.
Another human condition is diabetic retinopathy — blindness brought on by diabetes. This problem is particularly acute in India because there are not enough eye doctors to screen patients. AI could be trained to read retinal scans to detect early cases of this condition. To do this, doctors grade an initial set of scans on five levels, and the AI learns from these to recognise and grade new scans.
This episode took care not to paint only a rosy picture. AI needs to learn and it makes mistakes. The video illustrated this when Google engineers tested phone-based AI on the speech patterns of a person with ALS.
Some cynics might say that the YouTube video is an elaborate advertisement for Google’s growing prowess in AI. But I say that there is more than enough negativity about AI and much of it is based on fiction and ignorance. We need to look forward with responsible, helpful, and powerful possibilities.