Another dot in the blogosphere?

Here are principles I operated by when I used to lead a centre for e-learning. They are general enough to apply in the era of generative artificial intelligence (AI).

Good sensing: Determining gaps in current information and knowledge. In education, this typically includes information on student expectations and behaviour, and knowledge that educators need in order to assimilate change.

Good sensing might start with the critical consumption of information about de facto technology and could continue with student surveys, interviews, focus groups, etc. Good sensors keep consuming valid and reliable media and seeing how it aligns or contrasts with the data they collect.

Good sense: Shaping plans, frameworks, and policies that pull change, not push it. A large part of pull comes from teachers or faculty who realise the importance of staying current. They do this because they are good sensors and/or have access to timely data.

This phase overlaps with the previous and next elements and might be bridged with discussions with educators, e.g., departmental visits, institutional town halls, regular chats with representatives. It is a phase that does not go away or reduce in intensity. Engaging with educators so that they take ownership is long and arduous.

Good sense-making: Facilitating teaching practices that are not only relevant and powerful, but also stand the test of time. Technologies change fast; human behaviours lag. Educators need to know that they are not just responding to fickle fashion but to long-term benefit.

Other than ongoing dialogue with educators, this phase is more “active” in that it could take the form of workshops, sharing sessions by change leaders, classroom observations, etc.


Little beats “seeing is believing”, so I would also encourage the video recording of student focus groups and of educators implementing technology-mediated pedagogy. These go a long way in convincing those sitting on the fence.

One final tip: Those leading the charge need to metaphorically grow some thick skin and don noise-cancelling earphones. They will need these to ignore the noise, jeers, or other discouragement that will inevitably come their way.


The title of the video makes it sound like the error lay with the artificially intelligent (AI) agent, ChatGPT. The video report even started with a clip of the CEO of OpenAI, the company behind ChatGPT, appearing before a US Congress committee and stating that he wanted to work with the government to prevent AI from running amok.

This was a good example of bad news reporting. The agency had already decided to blame the AI for a mistake.

The actual problem lay with the professor who reportedly failed all his students. He supposedly fed their essays to ChatGPT and it responded that it had written all of them.

That is not how to use ChatGPT, nor is that a good prompting strategy. Why? Users have discovered that they can make ChatGPT agree with practically anything after it generates an initial draft.

It is not easy to detect AI-generated writing because it mimics a very structured human writer. It is also pointless to cry foul at this point — the ChatGPT Pandora’s box has already been opened, and it has revealed the weaknesses of trial-by-essay and stagnant pedagogy.

The top YouTube video comments hinted at approaches to change pedagogy, assessment, and mindset. One commenter shared how their professor required students to co-write with ChatGPT and to correct its mistakes. This not only recognises its use, it also mandates it and helps students learn to use it strategically and responsibly.

Another commenter suggested that the professor submit their own writing to ChatGPT as a control before accusing students of cheating. This applies the age-old principle of “physician, heal thyself”.

The professor might also reflect on how detecting plagiarism means detecting intent and such a process is multifaceted. Accusing someone of plagiarism is serious — a student who is academically dishonest can be expelled and faculty who do the same can be fired. You have to be absolutely certain if you are going to accuse an entire class of cheating!

The real news is old news: News agencies do not always report responsibly and some professors do not behave professionally. What might be actual news is stories of people unlearning bad habits and learning good new ones. You know, like reporting responsibly and teaching progressively.

Larry Cuban’s reflection on the Use of Technology in U.S. Classrooms concluded: 

The huge hype surrounding computer devices since their appearance in schools in the early 1980s definitely has resulted in giving children and youth access to these amazing instruments. And increased classroom and home usage has certainly accompanied this access. But both access and usage have not fundamentally altered how teachers teach. Teachers have, as they always have, adopted an innovation (e.g., film, radio, television) and adapted it to fit the contours of the age-graded classroom and social expectations of parents.

Here is a principle that is almost universal in schools anywhere: change is skin-deep. Taken at face value, the change over 40 years seems impressive because the device-to-student ratio is 1:1 or even many-to-one. Dig deeper and you will probably discover that attitudes and behaviours have not changed that much.

This is not to say that change agents should give up. It is a reminder to focus on what matters and to not rest on small victories.


This is rich. The tweet comes from a practically irrelevant video rental company giving a streaming video rental company some unsolicited advice.

You cannot compare sharing a video tape with sharing a password. The tape goes to only one other person at a time, and there are only so many times you can do this. A password can go to many users and, depending on the Netflix plan, could be used simultaneously and indefinitely.

It is no wonder that Netflix is plugging that gap. It should also be no wonder that Blockbuster is busted for its own flagging business and its terrible comparison. 

Here is a better comparison. Blockbuster’s comment on Netflix’s move is like an old teacher’s unsolicited remarks to an educator who has learnt to work in-class, fully online, and in hybrid mode. The former cannot possibly understand what the latter has to do to make things work. Sometimes you have to identify and ignore remarks that simply seek attention. 

…bad in the USA and need fixing. That was the myth that Larry Cuban tried to address in a recent blog entry, Billionaires’ Love Affair with School Reform.

He argued that the same people who say this ignore how students from these same schools go on to the same universities that attract the world’s talents. They might also fail to realise how it is the lowest tier of schools, the ones that need the most help but are the most ignored, that “policy elites and 24/7 media” conflate with all schools.

There was some brutally honest reflection and critique in his blog entry. 

I wonder if the opposite might be true of what we think or believe about Singapore schools. The local press plays up how our schools produce students who do well in international tests and measures, e.g., PISA and PIRLS.

A previous education minister once coined the phrase “every school a good school”, but it has multiple interpretations. The simplistic one is that every school in Singapore is academically excellent — this cannot be the case by any rigorous measure. Our schools are unequal and inequitable in that they differ in social stratification, access, reputation, mission, etc. The phrase was meant to assure stakeholders that there was a place for all students.

The call also started an ongoing movement to focus less on grades, reduce reliance on school rankings, and increase access to continuing and higher education. These are social experiments and improvements in progress. Any mindset about how “good” we are risks self-aggrandisement and stagnation.


This Wired piece started with how some podcasters were using artificial intelligence (AI) to generate voices in order to automate publishing. But it ended with this quote: 

…right now, discussions about AI’s impact are thoroughly distorted by hyperbole. We mistake attention-grabbing for paradigm-shifting.

Like most articles from Wired, it was well-written and researched. It did not stick simply with the “AI bad, human good” or the “people do what people do best” routes. There was space for how generative AI could summarise and pump out content regularly and quickly.

It also reminded readers how podcasts are not just about content; they are about conversations. So it is easy to focus on how some have taken the AI route — those summarising daily news nuggets — and assume this is the way. 

The new and novel grab attention, but this does not mean that podcasters are making the shift. Especially when they know their listeners follow them for the value they bring with personality, production, and patience.

A Welsh university probably took the lead in establishing a reasonable starting policy on ChatGPT use. It started well by blogging:

There is a clear consensus among experts in the sector that no AI detection tool can provide conclusive evidence.

It was clear that they had done their homework and stayed up-to-date with reliable sources of information. Its blog also has links to its sources and claims.

It then recommended that faculty take these five steps when suspecting unacceptable academic behaviour from students: 

  1. Use the Turnitin AI detection tool in conjunction with other indicators
  2. Check sources
  3. Check facts
  4. Check level of detail
  5. Hold an interview to determine authenticity

Their blog entry elaborates on each step.

These are reasonable actions. They are also not unique to dealing with academic dishonesty as enabled by generative AI. Not using a single tool, checking sources, facts, and detail, and interviewing the student are what faculty could do pre-ChatGPT. Some strategies simply stand the tests of time and tide because they are built on good foundations.


Some people are rejoicing because they will soon be able to edit WhatsApp messages up to 15 minutes after sending them. It is about time WhatsApp caught up with iOS Messages, especially since the former seems to have greater reach.

But there is something else that WhatsApp can do to make it even better. No, not hiding conversations or using one account across multiple phones. I wish it would use Natural Language Processing (NLP) to analyse text. How so?

Twitter already uses NLP to do this. I wish I had taken a screenshot of someone demonstrating how he was about to send an aggressive reply to someone else. The app detected the sentiment from the words he used. It paused the send with a notification that it had detected his tone and asked if he was sure he wanted to send the tweet.
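To make the idea concrete, here is a toy sketch of that detect-pause-confirm pattern. It is only an illustration, not Twitter’s or WhatsApp’s actual NLP; the word list and the all-or-nothing trigger are invented for the sketch.

```python
# A naive, lexicon-based tone check: a toy stand-in for the kind of
# sentiment detection described above, not any platform's real NLP.
AGGRESSIVE_WORDS = {"idiot", "stupid", "hate", "pathetic", "moron"}  # invented list

def flag_before_sending(message: str) -> bool:
    """Return True if the app should pause and ask 'are you sure?'."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    return bool(words & AGGRESSIVE_WORDS)

print(flag_before_sending("You are an idiot!"))  # True: pause and confirm
print(flag_before_sending("See you at lunch."))  # False: send as usual
```

A real system would use a trained sentiment model rather than a word list, but the interaction pattern (detect, pause, ask for confirmation) is the same.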


I am not suggesting that WhatsApp do exactly the same thing. WhatsApp messages are typically not public like most tweets are. But I wish that WhatsApp would detect conversations in group spaces that are meant to be private.

Let’s say that two people start having a dialogue in a group space for ten. The rest do not need to be notified with every exchange. This would be like being in a restaurant and hearing every word of a couple having a loud conversation or argument because they have no volume control or they know no shame.

I would like WhatsApp to detect the growing dialogue in a group space and prompt the two parties to message each other instead of broadcasting to the group. Just think of the relief of not receiving so many notifications! If people cannot exercise control or be considerate, I say we use technology to remind them.
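One way such a prompt could work is a simple heuristic over recent messages. This is a sketch of the idea, not a WhatsApp feature; the window size and the two-sender rule are invented for illustration.

```python
# If the last WINDOW messages in a group all come from the same two
# senders, suggest that the pair move to a direct chat instead.
WINDOW = 8  # invented threshold for the sketch

def should_suggest_direct_chat(senders: list[str]) -> bool:
    """senders: chronological list of who sent each group message."""
    recent = senders[-WINDOW:]
    return len(recent) == WINDOW and len(set(recent)) == 2

chat = ["ann", "ben"] * 4            # eight messages, two senders
print(should_suggest_direct_chat(chat))            # True: prompt the pair
print(should_suggest_direct_chat(chat + ["cho"]))  # False: a third voice
```

A deployed version would also consider timing and reply threading, but even a counting rule like this would catch the loud restaurant couple of the analogy above.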


I tested Bard last Monday by using the same prompts I used with ChatGPT a few days earlier.

It failed to “write a blog post about information literacy in the age of generative artificial intelligence”. So I challenged it to “write an essay about information literacy in the age of generative artificial intelligence”. To be consistent with my challenge to ChatGPT, I then asked it to “use British English spelling”. This is what Bard generated:

Sure, here is the essay about information literacy in the age of generative artificial intelligence, written in British English spelling:

Information literacy in the age of generative artificial intelligence

Information literacy is the ability to find, evaluate, and use information effectively. In the age of generative artificial intelligence (AI), information literacy is more important than ever.

Generative AI is a type of AI that can create new information, such as text, images, and audio. This technology has the potential to revolutionize* the way we access and use information. However, it also raises new challenges for information literacy.

One challenge is that generative AI can be used to create fake news and other forms of misinformation. It is important to be able to evaluate the information we find online to ensure that it is accurate and reliable.

Another challenge is that generative AI can be used to create personalised** information bubbles. This means that we may only see information that confirms our existing beliefs. It is important to be able to find information from a variety of sources to get a well-rounded view of the world.

In the age of generative AI, information literacy is more important than ever. By being able to find, evaluate, and use information effectively, we can protect ourselves from misinformation and make informed decisions.

Here are some tips for information literacy in the age of generative AI:

  • Be aware of the potential for fake news and misinformation.
  • Evaluate the information you find online to ensure that it is accurate and reliable.
  • Find information from a variety of sources to get a well-rounded view of the world.
  • Be critical of the information you find, and be willing to question its accuracy and reliability.
  • Use your critical thinking skills to evaluate the information you find.
  • Be aware of your own biases and how they might affect your interpretation of the information you find.
  • Be willing to change your mind when presented with new information.

Information literacy is an essential skill in the age of generative AI. By following these tips, you can protect yourself from misinformation and make informed decisions.

My quick critiques of Bard’s offering:

  • It did not use British English consistently, e.g., it spelt “revolutionise” as “revolutionize” (see *) but got “personalised” right (see **). 
  • It offered seven aspects of information literacy (two more than ChatGPT) but did not elaborate on them.
  • The “essay” came across more like advice or tips (this word was used specifically in the list above).
  • When challenged to provide references for its claims about what counted as literacy skills, Bard was unable to do so.

The infinite monkey theorem is sometimes rephrased as 1000 (or a million) chimpanzees on typewriters producing a work of Shakespeare by chance. Bard is a lot more sophisticated than those hypothetical apes, but it is currently more primitive than ChatGPT.
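For a sense of how long those odds are: if each keystroke picks one of 26 letters uniformly at random, the probability of producing a given n-letter passage in a single attempt is

```latex
P = \left(\frac{1}{26}\right)^{n},
\qquad \text{e.g. } n = 6 \text{ (“hamlet”)}: \quad
P = 26^{-6} \approx 3.2 \times 10^{-9}
```

That is roughly one in 309 million for a single six-letter word, which is why the theorem needs infinite monkeys or infinite time, and why Bard’s output, however flawed, is not chance.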


Here is one heuristic I use to decide if a “news” article is worth reading. If it relies on shortcut thinking, I do not read it.

For example:

It relied on the lazy and harmful use of “addiction” to create clickbait. Using a phone often and even staring at it almost all the time is not necessarily a sign of addiction. It makes light of actual addictions. 

Here is another example:

This title claims that you can “ace” interviews. One read is not going to achieve what enrichment centres charge for over several sessions. There are no shortcuts for learning how to listen, build confidence, or develop the gift of the gab.


