Another dot in the blogosphere?


I was thinking of titling today’s reflection “Another uneven distribution” after reading the short article linked in the tweet below.

STonline reported how Normal (Technical) students can use laptops instead of writing by hand in basic mother tongue language exams. On the surface, this seemed like an uneven but good distribution of technology because these students were at the periphery.

But then I read how:

MOE and SEAB said in a joint reply to The Straits Times that the latest MTL curriculum places greater emphasis on developing students’ information and communications technology skills such as text input, and encouraging “self-directed and collaborative learning”.

Using laptops for “one section of the paper” does test text input because students have to type “their responses to questions in at least 75 words”. But I also wonder if text input should include methods like voice to help students with psychomotor problems that interfere with typing.

To be constructively aligned, the supposed self-directed and collaborative forms of learning (SDL and CL) need to be evaluated as well. Perhaps these are determined elsewhere and I would have preferred to read about how these are done. Introducing typing in part of an exam has its challenges, but it is a low level outcome compared to SDL and CL.

To be fair, the article did state that students can use “text-to-speech and the e-dictionary to learn independently” in the Student Learning Space (SLS). 

To be honest, that is a poor excuse or illustration of SDL. Just how many students will bother? A more meaningful example of SDL might be how students pursue the learning of their mother tongue because of an interest or passion in a particular topic.

Alternatively, how might students be taught prompting skills when using ChatGPT or other large language model bots so that they learn language and associated skills on their own? Better still, how might they learn these in collaboration with others?

The report was a typical example of an “educational news” article. The paper reached for the low-hanging fruit because it was easy and/or because this was all the information it was offered. The normal news is “olds” and we need to be able to see through that veneer.

Larry Cuban used his blog platform to amplify voices on the potential impact of ChatGPT in education. I reflected on one such piece about a week ago.

One was from a teacher who wrote in The Atlantic; the other was from a reporter at the New York Times. Today I focus on the teacher’s article.

I will point this out first — whatever they wrote in a draft gets changed for better or worse by an editor so that the article fits the tone or agenda of the newspaper.

It is not surprising to hear the teacher start with “Teenagers have always found ways around doing the hard work of actual learning” and for The Atlantic article to be titled The End of High-School English. Newspapers seem to rely on this base strategy to lure readers.

The teacher tried to emphasise the importance of writing as a skill. For those with ambitions to be writers, this was a no-brainer. For others, the near-future and real-world applications of writing were “college admissions, writing a cover letter when applying for a job, or just writing an email to your boss”.

Then he gave examples of how ChatGPT generated a university admission essay, an application to be a manager at Starbucks, and an independent research paper for school work.

OpenAI terms and conditions, clause 1: Users must be at least 18 years old to use ChatGPT. https://openai.com/terms/

But he glossed over the fact that 1) ChatGPT was not to be used by those below the age of 18 (see screenshot and original terms), and 2) ChatGPT can write at about the level of a fairly well-informed and structured 15- or 16-year-old.

In replicating what his high school students might be told to write, he pointed this out:

The technology doesn’t go much beyond the surface, nor does it successfully integrate quotations from the original texts, but the ideas presented were on-target…

He also challenged ChatGPT to improve the essay and claimed that it could and did: 

It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. 

At this point in the article, the teacher seemed to be going down the same route as others who bemoan the loss of yet another time-honoured skill to the convenience of emerging technology. But he neither took nor set that bait.

He pointed out how teachers could revert to strategies of desperation like forcing students to write essays by hand or only doing so in class. But this avoids the change that is already happening.

The teacher seemed to be saying that ChatGPT could elevate the work of writers who struggle. Others before him have already pointed out that ChatGPT could be used as a critical writing partner, not just an essay mill.

His stance was informed by his philosophy of teaching:

I believe my most essential tasks, as a teacher, are helping my students think critically, disagree respectfully, argue carefully and flexibly, and understand their mind and the world around them. Unconventional, improvisatory, expressive, meta-cognitive writing can be an extraordinary vehicle for those things.

ChatGPT can make writing more efficient, but it still cannot (yet!) be reflective and insightful. If those are the human traits that we do not want to lose to AI, we should be teaching students how to use tools like ChatGPT strategically and ethically.

I challenged ChatGPT to write about Technology-Mediated Change. It offered the following:

The development of technology has been a driving force in shaping the world we live in. From the invention of the wheel to the internet, technological innovations have had a profound impact on our lives and societies. Technology-mediated change refers to the ways in which technology transforms our lives and the world around us.

The impact of technology-mediated change is far-reaching, affecting everything from the way we communicate and work to the way we think and learn. The widespread use of digital technologies has enabled people to communicate and collaborate across vast distances, breaking down traditional barriers to communication and opening up new possibilities for collaboration. This has had a significant impact on the way we work, with many jobs now requiring employees to be proficient in using digital technologies and tools.

In addition to its impact on the workforce, technology has also changed the way we learn and access information. The internet provides us with an unparalleled wealth of information and knowledge at our fingertips, enabling us to expand our knowledge and understanding of the world. This has revolutionized the way we learn and has made education more accessible to people around the world.

However, technology-mediated change is not without its challenges and drawbacks. The rapid pace of technological development has created new ethical and privacy concerns, as well as exacerbating existing inequalities and social issues. The increasing use of algorithms and artificial intelligence in decision-making processes, for example, has raised concerns about bias and discrimination, and the concentration of power in the hands of a few large technology companies has led to increased scrutiny of their role and influence in society.

In conclusion, technology-mediated change is a complex and far-reaching phenomenon with both positive and negative implications. It has the potential to improve our lives and transform the world in profound ways, but it also presents new challenges and ethical dilemmas that must be addressed. As technology continues to evolve and shape our lives and the world around us, it is important that we carefully consider its impact and implications and work to ensure that its benefits are distributed equitably and responsibly.

If you ask ChatGPT to write a short essay on any ill-defined topic or issue, you might get a similar response. A cursory analysis might reveal patterns like: 

  • the default use of US English spelling
  • a general introduction and definition of the topic
  • two or more paragraphs on perspectives about the topic (e.g., pros and cons)
  • no backing of claims or attribution of sources of information 
  • a generic conclusion

This essay structure was essentially the same as the ones it gave me when I first tested ChatGPT in December last year.

Someone once pointed out that the current iteration of ChatGPT writes like a 16-year-old (see tweet above). I concur. The content is not sophisticated, but it might pass the scrutiny of a layperson unfamiliar with the written topic.

I did not ask ChatGPT for a rewrite or challenge its attempt. Had I thrown questions or challenges at ChatGPT, it would have made improvements. Such critiquing is a skill that a novice writer might not possess. This is one way to catch a lazy writer because they will likely submit the first draft.

But deciding what “lazy” writing looks like is relative. The way we write now — type-written, automatic spell and grammar-checked, concurrent searching and reading of just-in-time information — is lazy compared to when handwriting was the only option. 

Heck, writing 30 years ago with a simple word processor lacked tools and refinements we have now. Relative to that, writing today could seem lazy. But that does not mean that such writing is worse, unsophisticated, or uninformed. We have learnt to use new writing tools to get better at it. 

ChatGPT is another new tool. It is no doubt a powerful tool and possibly a watershed moment in how we embrace AI. We can learn to use and improve this new writing tool, or we can pretend that the old ways are always better and try to ban the likes of ChatGPT. 

Doing the latter is as futile as standing your ground in the rising tide — you will drown. Better to move with the tide and embrace technology-mediated change.

GPT-3 or Generative Pre-trained Transformer 3 “is an autoregressive language model that uses deep learning to produce human-like text. Given an initial text as prompt, it will produce text that continues the prompt” [source].
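The quoted definition can be sketched with a toy next-token model. In this minimal illustration, a bigram table built from a tiny corpus stands in for GPT-3’s deep network (the corpus, function name, and sampling scheme are my own illustrative assumptions, not anything from OpenAI), but the loop — predict one token from the text so far, append it, repeat — is the autoregressive idea:

```python
import random

# Tiny corpus; a real model trains on billions of tokens.
corpus = "the model reads the prompt and the model continues the prompt".split()

# Count bigram transitions: which words follow which.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def continue_prompt(prompt, n_tokens=5, seed=0):
    """Append n_tokens to the prompt, one at a time (autoregression)."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:  # unseen word: nothing to predict, so stop
            break
        tokens.append(rng.choice(candidates))
    return " ".join(tokens)

print(continue_prompt("the model"))
```

Each generated word depends only on the text produced so far, which is why such a model can sound fluent while having no idea whether its claims are true.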

What is GPT-3 capable of? A group, Technophobia.org, whose Twitter byline reads “tackling reactionary technophobia in the media, government and academia whenever we see it”, challenged it to write about the moral panic around chairs.

GPT-3 might have technophobes and the igno-crati screaming about how AI is going to take all our jobs or end the world. The response to @TechnophobiaOrg’s ask of GPT-3 was perfect because it illustrated how selective our fears can be, how useful or helpful objects can be feared, and how write-ups with no backing, attribution, or referencing can seem convincing.

I welcome a tool that challenges teachers and educators. These groups should be wondering how to take advantage of such writing tools, how to teach students to think when given the option of using such tools, and how to tell the difference between AI and human-generated writing.

If teachers and educators want a low-entry attempt at GPT-3, they might start here:

I tried the chat tool and these were its responses to my prompts to contrast game-based learning and gamification as well as the flipped classroom and flipped learning.

OpenAI chat tool contrasting some terms.

I messed around with the playground tool, which is something that might be used to start or even write entire essays. I challenged it to compare and contrast game-based learning and gamification.

OpenAI playground tool contrasting GBL and gamification.

The AI got much of it factually right. I would argue that even some classroom practitioners do not possess enough nuance to distinguish the concepts I presented GPT-3 with.

It was not perfectly on point, of course. For example, a reader might be hard-pressed to see the difference that the chat tool tried to express about the flipped classroom and flipped learning. The missing nuance was that the teacher was still the primary provider of information in the flipped classroom and more of a facilitator of learning in the flipped learning model.

I could have challenged what the AI offered, but as I was new to the tool, I did not offer suggestions or critique. But I have one concern: the generated text makes blustery claims with no backing, i.e., it does not cite its sources.

The AI can only get better with use and testing. Teachers and educators can also get better by embracing and shaping this next leap in schooling and education.

Spotify source

I thoroughly enjoyed the latest Build for Tomorrow podcast episode Why People Can’t Write, and How to Fix That.

The host, Jason Feifer, started with the hook of how teachers complain that kids today cannot write properly thanks to their texting habits. Regular listeners like me might guess that Feifer would unpack this as an argument of correlation and not causation, i.e., texting is not the cause of poor writing; it is one of many possible contributing factors.

Feifer went on to interview a few language and writing experts. One in particular, Elizabeth Wardle, a professor of written communication at Miami University in Ohio, attributed the root of the rot to university writing classes in 1875 (see transcript).

Wardle went on to critique how writing was reduced to the five-paragraph structure because, as Feifer put it, this was “easy to teach, easy to grade”. Here is how Wardle phrased her argument (24-minute mark of the podcast):

There’s well-structured problems, and there’s ill-structured problems. Well-structured problems have one right answer… Ill-structured problems do not have a right answer. Every writing problem is ill-structured. There’s a bunch of ways you can do it… But school really likes well-structured problems because they’re so much easier to assess.

This is a reminder, whether we are teachers of writing or change agents: It is a mistake to oversimplify what is inherently complex.

Our real world problems tend to be ill-structured. Providing overly or well-structured crutches for our learners will not prepare them for what comes next. Such structures are also inauthentic and unmotivating. 

Embracing complexity is hard work. Anything worth doing takes hard work. Do the hard work.

Am I happy that a study and related meta research report no statistically significant advantage of handwriting over typing notes?

Sort of. In a previous reflection, I explained that it is what students do with recorded notes that matters more than how they take them. Their preferences also matter.

I am also glad that there is ammunition for me to fire back at anyone who claims “research says…” and does not go deeper than that.

But here are a few more factors to consider about this debate.

First, a quiz was the measure of ability to recall. A quiz and recall — the most basic tool for the most fallible aspect of learning. Consider these: Learning is not just a measure of basic recall and our brains are designed more to forget than to remember.

Second, the students in the study were not allowed to review their notes before the quiz. On one hand, this was good experimental treatment design as it excluded one confounding variable. On the other, it was inauthentic practice — the point of taking good notes is to process them further.

Finally, this type of research has been repeated enough times for a meta study. It is an indication of technological determinism, i.e., we attribute disproportionate effects of the type of technology (writing vs typing instruments). In doing so, we foolishly discount methods of teaching and strategies for learning.


Video source

When I was curating resources last year on educational uses of artificial intelligence (AI), I discovered how some forms were used to generate writing.
 

Video source

YouTuber Tom Scott employed a writing AI (OpenAI’s GPT-3) to suggest new video ideas by offering topics and even writing scripts. The suggestions ranged from the odd and impossible to the plausible and surprisingly on point.

This was an example of AI augmenting human creativity, but it was still very much in the realm of artificial narrow intelligence. The AI did not have the general intelligence to mimic human understanding of nuance and context.

I liked Scott’s generalisation about technology following how AI worked/failed for him. He described a technology’s evolution as a sigmoid curve. After a slow initial start, the technology might seem to suddenly be widely adopted and improved upon. It then hits a steady state.

Tom Scott: Technology evolution as a sigmoid curve. Source: https://youtu.be/TfVYxnhuEdU?t=431
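The sigmoid shape Scott describes can be sketched with the standard logistic function — a minimal illustration of the slow start, rapid middle, and plateau (the sample points and steepness are arbitrary assumptions for the sketch):

```python
import math

def logistic(t, midpoint=0.0, steepness=1.0):
    """Logistic (sigmoid) curve: slow start, rapid rise, then plateau."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Sample the curve at an early, middle, and late stage:
for t in (-6, 0, 6):
    print(t, round(logistic(t), 3))  # near 0, then 0.5, then near 1
```

The interesting question for any technology is where on this curve it currently sits — near the flat tail at the end, or still at the flat start.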

Scott wondered if AI was at the steady state. This might seem to be the case if we only consider the boxed-in approach that the AI was subject to. If it had been given more data to check its own suggestions, it might have offered creative ideas that were on point.

So, no, the AI was not at the terminal steady state. It was at the slow start. It has the potential to explode. It is our responsibility to ensure that the explosions are controlled ones (like demolishing a building) instead of unhappy accidents that result from neglect (like the warehouse in Beirut).

This hit my Twitter stream yesterday.

Confusing WHO statement about ibuprofen.

The one thing I can say for sure about this WHO statement is that it is confusing. What is the meaning of “does not recommend against the use”?

A clearer and more direct statement might have been: WHO does not have enough information at the moment about using ibuprofen to treat the symptoms of COVID-19.

I would jump on this ordinarily. But I have been sensitised to writing as a form of persuasive communication because I have been grading student essays.

Most of the essays need the same medicine. No, not ibuprofen. The medicine is clear and direct writing. This doctor says take one of each tablet (clear and direct) before, during, and after writing. It will prevent headaches.

I spent much of the week providing feedback and grading assignments that included a response essay. That part of the assessment required students to read a statement, agree or disagree with it, and then defend their stance with published evidence.

With a few exceptions, the responses and level of writing worried me. I am recording some reflections while the experience is fresh so that I can take pre-emptive steps next semester. I address two categories: Structured writing and writer mindset.
 

 
The main problem seemed to be that students did not know how to structure their responses. So I am offering some writing tips for the next batch of students. (This will add to the ones I had already written for a previous version of the assignment — crafting a teaching philosophy.)

Start with a declarative statement. When asked to make a choice or a stand, state it clearly and concisely at the beginning of the essay. There is no need for personal stories or opinions, trend analyses, waxing lyrical, and grand-standing. Get to the point.

Consider a nuanced stance. A choice is not always binary. Depending on the circumstances, both options could be valid. An option could also lie somewhere in between two extremes.

Defend your position in paragraphs. Each paragraph should contain one main idea or claim. That claim should be backed up with evidence. In the context of the assignment, the evidence should come from published research. The research should be unpacked to highlight how it supports the claim.

Consider counter arguments. A mature point of view is one that recognises contrary findings and perspectives. These might provide nuance to the overall argument or be used to strengthen the original stance.

Write the introduction and conclusion first. These are like the open and close brackets that encapsulate one’s thoughts. When a writer does this, the middle is like beads on a string. Sometimes the flow is simple and moves logically from one end to another. In other instances, the path might branch or is more elaborate. But in all cases, the conclusion reinforces the declarative statement.

I continue with writer mindset in part 2 tomorrow.

 
I think about an audience of one in two contexts.

The first is my daily musings on this platform. When people tell me they read my blog or ask me why I reflect daily, I tell them that I do not write for them; I write for me.

I think out loud, but organise my thoughts as I do. It is a habit and a discipline. It is one way I learn.

The second way I am reminded about writing for an audience of one is when I provide feedback on essays and grade papers. Students submit these papers as a requirement for course work and get a grade at the end of it.

They invest time and effort for an audience of one — me. Some do their best, some do not care. A few are skilled or learn to write for the reader, but most are not or do not.

Writing for the first audience of one is current and public. It has reach and impact that I cannot predict. I know why I keep doing it.

Writing for the second audience of one is a relic of schooling and even university education. It tends to stay in the confines of a course and makes little impact (if any). So why do we keep doing it?

