Another dot in the blogosphere?


Video source

This episode focused on how we might use artificial intelligence (AI) to augment ourselves to end human disability.

The first example in the video was artificial legs with embedded AI. The AI used machine learning to process a person’s movement to make the continuous and tiny adjustments that we take for granted. What was truly groundbreaking was how such limbs might be attached to existing muscles so that the person can feel the artificial limb.

The second example was improving existing abilities like analysis and decision-making in sports. The role of AI is to take large amounts of data and make predictions for the best payoffs. But despite AI's ability to process more than humans can intuit, we sometimes hold AI back because its recommendations seem contradictory.

We trust AI in some circumstances (e.g., recommending travel routes) but not in others (e.g., race strategies). The difference might be the low stakes of the former and the higher stakes of the latter.

The third example highlighted how we might enhance our vision and hearing while increasing trust in AI in high-stakes situations. It featured glasses that augmented vision for firefighters so that they could see in low or zero visibility. Together, the camera and AI detect and highlight edges like exits and victims.

The video ended with the message that increased trust in AI will make it ubiquitous and invisible. But for trust to be built, we need to remove ignorance, bias, and old perspectives.

AI can be a tool that we shape. But I am reminded of the adage that we first shape our tools and that our tools also shape us. This was true in our past and it will apply in our future.

I have taken notes on the fifteen episodes of the CrashCourse AI series so far. Like some, I also look forward to automated transportation in general.

Rising above, I wondered if we need AI not just for monitoring what happens outside the vehicle but also inside. After all, the slowest, dumbest, and squishy-est element of robocars is the people inside.

People do stupid things like choosing not to belt up, putting kiddie seats in the front passenger area, or, worse, relying on the “mama seat belt”. (In this part of the world, the mama seat belt is a child sitting on a caregiver’s lap.)

The mama seat belt provides a false sense of security. Anywhere in a car, human reflexes are not quick enough in an accident. In the front passenger seat, this arrangement places a small child’s head directly in the path of the explosion that is the airbag.

How about internal sensors to stop stupid people from doing stupid things in robocars? These cars could refuse to move until their human cargo is safe.

That said, I wonder if this is a fool’s errand. At the moment, current AI is still no match for natural human stupidity.

Video source

This week’s episode countered the mainstream and entertainment media message that artificial intelligence (AI) will take over all our jobs and eventually us as well. It focused on how humans and AI can collaborate and complement one another.

AI is quick, consistent, and tireless. But it is poor with insight, creativity, and nuance, traits that we possess despite ourselves. The narrator related an example of how chess players worked with AI to beat human chess masters or AI-only opponents.

Beyond chess, the narrator suggested that AI could help with medical diagnoses. It can focus on rote tasks and processing large amounts of information and combine its findings with a doctor’s experience and knowledge of a patient.

In engineering, AI could suggest basic designs of structures based on existing rules while humans might consider the practicality of those designs in context. In human development, AI could artificially give us more strength, endurance, or precision, e.g., robot exoskeletons, remote surgery.

As much as AI helps us, we also help AI. We provide data for AI every time we contribute to any online database. When AI spits out results based on algorithms, it often shows us the products but not the processes; humans can provide insights into those processes or fine-tune them.

AI has no moral value systems. That is a human thing. But so is bias, which happens to be the focus of the next episode.

It is time for a curmudgeonly rant.

Some schools and parents here seem to have forgotten to teach kids the basics. I am not referring to the three Rs.

What happened to speaking in hushed tones when in a shared or public space?

We already live in cramped environments in Singapore. This alone is a good reason for not talking loudly during conversations over a meal or when packed on public transport. A lack of volume control reveals a lack of self-awareness and is inconsiderate to others who do not want to be audience to your conversations.

What happened to taking care of personal property?

People routinely leave their bags and computing devices unattended in fast food joints or coffee places. The onus is not on others to look after your stuff; it is yours to care enough to leave someone behind to watch it or to take your things with you.

There is a reason why they are called valuables — someone had to work hard to make the money so that you have that personal property. Be grateful, not careless.

What happened to taking care of shared property?

There is no learning if kids know how to return food trays in school but do not consistently do this at a mall eatery. There is no care if you use a toilet properly at home but somehow lose your aim and decency in a public restroom.

And yes, this rant is fresh. I am drafting this at a Starbucks while surrounded by people who talk loudly and who have left a handbag and two computers at their tables. There is no toilet at this establishment, but there is one a stone’s throw away. Someone decided to pee in a sink.

Maybe I should create an option in my education consultancy called Human Decency 101. But here is the sad news: If you need it, it is probably too late.

I was taught a lot as an undergraduate majoring in biology. Not all of it was true.

One thing that a lecturer taught me was this factoid: Human DNA is almost 99% identical to a chimpanzee’s. That has stuck with me because it was so jarring.

The lesson then was that it took just 1% of evolutionary tweaking and protein-making difference to have a human. Back then I just took an expert’s word for it.

Today I have YouTube condensing the work and critique of several experts. The video below was built from five published references.

Video source

The main takeaway from the video is that the absolute number (99%) is misleading. The number was derived under conditions like ignoring portions of the genomes and applying arbitrary rules, so it is neither valid nor reliable. Change the rules and the number changes.

The larger issue is how students today might still be taught: From old textbooks, with outdated pedagogy, and without access to more than one source of information.

The biggest sin of any teacher is focusing just on content. This means delivering information and testing it by regurgitation.

Content is (or it should be) a means to an end. The end is not to reproduce that content in a test because information can be challenged and knowledge can change. Content should be a way to teach thinking.

The teaching of content today should not just be learning-about. It should focus on learning-to-be. In the chimpanzee and human DNA example, it is not just learning about the 99% factoid. It is about asking critical questions about it and knowing how to find valid and reliable answers to those questions.

Rising above, the teaching of a juicy factoid like “human DNA is 99% chimpanzee” stems from the pedagogy of answers and the attempt to engage students with interesting nuggets. The critique of such a factoid starts with the pedagogy of questions and continues with the empowerment of students to think and act critically.

Technology destroys the perfect and then it enables the impossible.

Seth Godin recently made this declaration:

Technology destroys the perfect and then it enables the impossible.

He said this while providing examples of how computers do things better, faster, or cheaper than people can. His examples were from daily life and commerce.

Something similar could be said about schooling and education. The “perfection” is the general insularity of the classroom from the outside world. Technology needs to destroy this status quo, but it is only chipping away at this mountain of change.

Today’s classroom walls are potentially more porous thanks to our phones. These allow teachers and students to connect with experts and content beyond traditional means.

Why is the change so slow in schooling and education?

The same people who use their phones in their personal lives might see how the changes are better, faster, or cheaper. However, they probably do not see how the same applies in the classroom or other learning contexts.

Technology will need a lot of help to overcome this human impasse. Training and professional development that addresses skills and behaviours will do little to make this change. To enable the impossible, we must start first with mindsets.

While the technology affords change, the teachers and leaders must allow it. They might be aware of what technology can do and perhaps even how, but they must also know why.

Today I reflect on three seemingly disparate topics. However, all have a theme of not compromising on standards. They are standards of English, decency, and learning.

I spotted this sign at a Fairprice grocery store. It urged patrons to think of the environment.

You cannot use less plastic bags, but you can use less plastic the way you can use less water. The water and plastic are uncountable. Plastic bags are countable so you should use fewer of them.

Actually, you should try not to use any plastic bags by carrying your own recyclable bags. If you do that, the sign reads another way: Do your part for the environment. Useless plastic bags!

If standards of basic English had not slipped, we would see fewer of such signs (countable property) and I would be less of an old fart (uncountable property).

Speaking of which, a fellow old fart (OF) responded to a Facebook troll who had terribly warped priorities when commenting on the kids who lost their lives during the Sabah earthquake.

The troll focused on the fact that the deceased could not take their Primary School Leaving Examinations (PSLE) later this year. When OF took the troll to task, the latter became indignant.

OF discovered that the troll was a student in a local school, and while not all kids act this way, OF wondered in subsequent comments how the standards of human decency seemed to have slipped.

I baulk at the fact that some teachers wait for official Character and Citizenship Education (CCE) materials to be prepared and distributed instead of using everyday examples like these. They are far more timely, relevant, and impactful.

To reiterate what I mentioned yesterday about bad advice for teachers: how are adults to know what kids are writing and thinking if they do not follow them on social media? You need to be on the ground to see what is good and bad about it.

Parents and teachers should not react by pushing for less social media use because that is unrealistic. When someone cannot write or speak well, you do not tell them to write or speak less; you tell them to practise more (after you coach them and provide feedback).

The third use-less/useless example comes from this Wired article about the change in Twitter leadership.

The author contrasted Twitter’s previously “unruly, algorithm-free platform” with Facebook’s. This was not a negative statement about Twitter because stalwarts value the power of human curation and serendipity.

However, those new to Twitter might view the platform as useless and choose to use less and less of it until they stop altogether. They do not stay long enough to discover its value.

The slipping standard here is learning to persist. I can see why school systems like the ones in the USA are including “grit” in their missions or using the term in policy documents.

But is grit the central issue?

What if the adults do not have a complete picture and are creating policies and curricula that are as flawed as the “use less” sign?

What if they should actually be spending more time on social media not just to monitor their kids and students but also to connect with other adults so that they learn the medium and the message deeply?

One key answer to these questions is about the ability of adults to keep on learning. We should not be holding kids and students to one standard (it is your job to study what I tell you) and holding ourselves to another (I have stopped learning or I have learnt enough).

Do this and you put yourself on the slippery slope of sliding standards. When standards slip, they are not always as obvious as badly-composed signs or insensitively-written Facebook postings. The refusal to learn can be insidious and lead to a lack of positive role models for kids and students to emulate.

Two days ago I mentioned how I could work just about anywhere by remote control. When I mix work with play and personal with professional, I can work almost any time.

But while various technologies help me work from any part of the world, I cannot work any time all the time because I operate in a time zone.

Just being in Wellington, NZ, which was five hours ahead of Singapore, meant that a reasonable 8.30am kiwi meeting actually required me to be wide awake even though my body was operating at 3.30am.

I had to give my weekly #edsg chat a miss because it was 8.00pm in Singapore and my body clock was adjusting to NZ time (1.00am).

The weak link, or rate-determining step, is human.

I think that is a principle that applies to organising an event, communicating in any context, and designing and implementing instructional interventions.

If something goes wrong, it is more likely human than technical. The sooner we admit to this or realize this, the better the technology will seem to work for us!
