Another dot in the blogosphere?

Posts Tagged ‘e-learning’

I was a graduate student when I first found out about the disproportionate amount of time it took to prepare e-learning resources.

The ratio of development time (input) to learning time (output) varies. A fairly recent and oft quoted study by Chapman cited 127 developmental hours for every hour of e-learning (127:1). This ratio was for Level 2 e-learning developed relatively quickly from templates.

According to Chapman, the research data originated from 3,947 instructional designers (or people with similar roles) representing 249 companies.

The ratio might sound impressive because the numbers are a result of the efforts of corporate teams responsible for organisational e-learning. Such ratios are also rules-of-thumb sought by freelancers to provide estimates for potential clients.

I do not recall the number being so high when I was a graduate student. However, back then the technologies did not include the more social, augmented, and virtual ones we have now.

That said, I do not know of any responsive learning organisation that can afford to invest 127 preparatory hours for an hour of standards-based training or e-learning. A freelance instructional designer (ID) would have to work thinner, lighter, and faster to compete for and retain clients.

ID work is a small part of my consulting work as I have to factor in many other considerations, e.g., institutional policies, social contexts, group dynamics.

I have kept track of my preparatory time in my latest consulting effort. Without revealing details covered by a non-disclosure agreement, I can say that the effort focuses on a small group of educators who need guidance in a form of communication.

The situation is dynamic as I have to respond to volatile schedules. I often have little time for preparatory work. For example, I gave myself a week to prepare a just-in-time segment for participants. I took 30 hours over six days to prepare for a 3-hour blended session. This is a 10:1 ratio.

So is my effort (10:1) less worthy than a corporate one (127:1)? Based only on numbers, it is. Based on quality — my knowledge of context; the blending of content, pedagogy, and media; the attention to detail — I would argue not.
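These ratios are just preparatory hours divided by delivery hours. A minimal sketch of the comparison (the function and variable names are my own, not from any study):

```python
def dev_ratio(prep_hours, delivery_hours):
    """Development-to-learning ratio: 10.0 means 10:1."""
    return prep_hours / delivery_hours

# Figures from this post and the Chapman study
freelance_ratio = dev_ratio(30, 3)   # 30 prep hours for a 3-hour blended session
corporate_ratio = 127                # Chapman's 127:1 for templated Level 2 e-learning

print(f"{freelance_ratio:.0f}:1 (freelance) vs {corporate_ratio}:1 (corporate)")
# → 10:1 (freelance) vs 127:1 (corporate)
```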

I had an e-xhausting week conducting e-valuations of future faculty.

The work week ended with a lovely “25 years of edtech” review by Martin Weller that focused on e-learning. A paragraph that struck me was:

By 1999 elearning was knocking on the door, if not already part of, the mainstream. In a typical academic fashion we argued what we meant by it, and it was obligatory for one person at every conference to say in a rather self-satisfied manner “there’s already an e in learning”.

The people who might still point out that there is no need for the first “e” in e-learning are missing the point. Some e-learning then and now focused on teaching (and poor teaching at that).

The “e” in e-learning should not be merely enhancing teaching or e-doing without actually learning. E-learning is not for keeping students bus-ee.

The “e” in e-learning should focus on learning — it should be enabling and empowering students to learn optimally, meaningfully, authentically, and perhaps in ways they might not have imagined before.

To design this type of e-learning, the approaches should be just-for-mE and just-in-timE. The foci are the learners and an inductive mode of learning.

This is a continuation of yesterday’s rant on a poorly conceived video by Channel News Asia (CNA), “Can e-learning make you dumb?”.

The presenter (and his writers, if he had any) equated educational apps with e-learning. Any app might be used for e-learning, but apps do not represent e-learning. Furthermore, labelling an app “educational” does not make it so. It is HOW an app is used that makes it useful for schooling, education, or learning. This principle seemed to be lost on the makers of the video.

Today I critique the video in the order in which its ideas were presented.

The video started with the now iconic dragon playground as a representation of how kids used to play in the past. Its message was clear — nostalgic thinking was better even if it did not consider changing contexts and fallible memories.
Nostalgia quote.
The presenter then interviewed three sets of researchers and clinicians.

The first was a researcher from the National Institute of Education, Singapore. There was nothing new from this segment if you keep up with educator blogs or current papers on screen time.

The strategy was the same — highlight unwarranted fears and conveniently leave out the importance of supervised and strategic use of apps by children.

The most alarming segment of the video started with this question from the presenter:

These apps are just bad at teaching our children. What if they could also be messing our children’s brains in the long run?

The presenter started with a tiny sample of non-identical twins (n=2) to test executive function after one twin played with an app and the other sat and drew. He then showed how the app-using child seemed to have problems following instructions compared to his non-app-using kin.

The presenter claimed that his illustration was a “ripoff” of an actual study. So was the original study just as poorly designed and implemented? Any critical thinker or researcher worth their salt would ask questions like:

  • Were there no confounding variables that could have affected the results?
  • How can anyone control for all contributing factors?
  • Were the treatments switched after a sufficiently long rest period?

The only statement from the presenter that I agree with was his admission that “this is far from a scientific experiment”. His pantomime attempt to put the app-using child in bad light was neither valid nor reliable.
“Texting Congress 1” (CC BY-NC-SA 2.0) by afagen, on Flickr
The presenter then interviewed two clinicians. “Interview” might be too generous; it was more like selectively confirming bias.

The first item on the interview list was the fabled harmful screen time. In doing so, they conveniently lumped all devices with screens under harmful screen time and ignored more nuanced definitions and revised guidelines from authorities like the American Academy of Pediatrics (see this curated list of resources).

For example, one of the two clinicians pointed out the harm of passive screen time from watching too much TV. However, this did not discount active screen time.

If you do not know what active screen time looks like, I share a snapshot of future instructors I teach and mentor. This group was using apps with their learners.
Active screen time.
The other clinician said active use involved two-way communication or interaction with the environment. However, the video producers opted not to balance their bias with examples of such active screen time. They seemed to focus on children only as passive consumers and not active producers of content.

Not content to fearmonger about short-term effects of using apps, the interviewer also asked how the apps affect the career prospects of children. Read that again: Career prospects of children. This tangent then led to children leading lives of crime. I kid you not.

Reasonably logical and critical people do not need research or “research” to realise that the interviewer was overreaching here.

As if to appease the interviewer’s agenda, one researcher gave an example of a distracted child in a classroom. Really? This could describe any child, app user or not, or a child with ADHD.

There is no research that says that children sitting still are ultimately successful. Nor should there be. Not only are such studies unethical, they are illogical. No one can claim that a single factor (like app use) determines a child’s career prospects.

That same researcher suggested that a distracted child could suffer from bad grades, have poor health, and end up committing crimes. How can anyone draw a single, clear, and unbroken line that links a child’s app use to an adult’s job prospects or likelihood to commit crimes?

If the researcher was prone to exaggeration, then the interviewer was prone to oversimplification. He declared on camera:

I didn’t realise that just more screen time can develop to more crimes in society.

The real crime was that Channel News Asia pushed such drivel on screen.

The final expert interviewed by the presenter did what most people do with the delayed gratification study — misinterpret it.  The emphasis of the study was not IF a child delayed gratification, but HOW they did so.

The expert used the misinterpretation to highlight how apps provide instant gratification. Both the expert and the video producers conveniently ignored that both rewards and app use can be about the decision-making processes and the choices a child makes.

The CNA video was an attempt to pander to base fears instead of challenging viewers to look beyond the obvious. The question (“Can e-learning make you dumb?”) was designed as clickbait and was a misdirect.

The answers were like a poorly written General Paper essay by a scatter-brained junior college (JC) student. That JC student was not a distracted app user. She was not supervised by her parents nor guided by teachers. She was not taught to question critically or research thoroughly.

An app alone cannot teach; an adult needs to be involved to monitor, moderate, and mediate. An app alone cannot make you dumb. Uninformed use, uncritical processing of the CNA video, or misguided beliefs in misinformation make you or your children dumb.

Apps do not make you dumb or keep you ignorant. Only dumb people who choose to be wilfully ignorant do.

I discovered this piece on Channel News Asia, “Can e-learning make you dumb?” (Note: To view the video, you must reduce your browser security by unblocking all insecure elements. If you see the video on loading the page, you need to lock your browser down!)

I take exception to this question, so I will make an exception. I am going to react to it at face value first.

I could cite Betteridge’s law of headlines (see the excerpt below). The answer to the question “Can e-learning make you dumb?” is no.

Betteridge law of headlines

To be fair, the law also applies if the question was phrased “Can e-learning make you smart?” The answer is also no. The questions are oversimplifications; e-learning alone does not make you dumb or smart.

That aside, the video focused on gaming apps that vendors and providers classify as “educational”. This is not the same thing as e-learning. So the question was an intentional misdirection, and this raises another question: How might clickbait get you views?

I was dumbfounded by the original question. No, I was not speechless. I found a new level of dumb instead. The question reeked of confirmation bias, luddite thinking, and wilful ignorance.

How do I know this? The blurb for the video was that the apps were “highly addictive and they can mess with the brain”.

To provide some balance, consider how skilled educators and informed parents might turn the negative “addictive” into the positive “addicted to learning”. And “mess with your brain” is fearmongering for a fundamental cognitive process called cognitive dissonance; it is integral to learning.

The headline, blurb, and accompanying video were an effort to spout the tired and uninformed rhetoric instead of actually making a difference. If there is anything dumb, it is the messages it tries to propagate. I outline and critique those signals tomorrow.

Primary 1 to 5 students stayed at home because of the PSLE oral exams for Primary 6 students late last week. When the first group of students needed to access e-learning resources from MCOnline, the service provider’s website crashed.

Parents complained, e.g., “the website is not available for public access” and “it took us 10 hours to finish a one-hour task”.

Even when the service was available in the past, one parent said, “The website is often very slow during peak hours to the point that it kicks you out”. Another parent, who also happened to be an educator, was resigned to saying, “I’m so used to this”.

I could point out tongue-in-cheek that MCOnline servers went on MC (medical certificate, the excuse slip for missing school, duty, or work). Instead, I shall point out the excuses and non-answers.

An unnamed MC Education representative said that a third-party arrangement to increase capacity “was not activated”. Why not? There was no reason given in the article for this oversight.

Will the service provider be held accountable for this outage just like the telco providers are? The article did not mention this either.

As information about the Student Learning System (SLS) was released last week, the attention turned there. Unfortunately, the focus was on access during emergencies. That might be why e-learning in Singapore actually stands for emergency learning.

An unnamed spokesperson from MOE said that the SLS would take advantage of cloud technologies. She also mentioned how the SLS would be compatible with most devices.

The first answer was vague. Just what are cloud technologies to the layperson? Which CMS or LMS provider does not depend on cloud technologies today? Since they do, why did a crash happen anyway? What is to prevent the SLS from suffering the same fate?

The secondary mention was a redundant non-answer. What is the point of multi-device compatibility if none can access the resources when servers are down?

We do not need redundant answers. We need more “redundant” servers to share the load. This is the sort of cloud technology the spokesperson probably meant. But this answer is still vague.

A better example might be to draw on what online users already experience with YouTube or Amazon. The uptime of these services is about as reliable as our power and water supply because they rely on “cloud technologies”.
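To put “about as reliable as our power and water supply” into numbers: availability is usually quoted as an uptime percentage, and each extra nine shrinks the allowable downtime roughly tenfold. A rough sketch (the percentages are illustrative, not published figures for any of these services):

```python
def annual_downtime_hours(availability):
    """Hours of downtime per year implied by an uptime fraction."""
    return (1 - availability) * 365 * 24

# Compare a few common availability targets
for uptime in (0.99, 0.999, 0.9999):
    print(f"{uptime:.2%} uptime -> {annual_downtime_hours(uptime):.2f} h of downtime per year")
```

At “two nines” a service may be down for more than three full days a year; at “four nines” it is under an hour. That gap is what redundant, load-shared servers buy.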

Can MCOnline and the SLS promise the same reliability? These are services that we pay for with our tax money. Compare that with free and open services like YouTube. These are paid for by advertising that might be linked to our personal data, but that is not the point.

The point is that access and reliability of online learning resources come at a price. Neither cost is transparent to the average user. However, freely available services like YouTube are subject to scrutiny. Google, the parent company of YouTube, was recently fined 2.4 billion euros by the EU for anti-trust issues.

So I ask again: Will our online learning service providers be held accountable for outages like the telco providers are? Or is learning at home not as important as learning in school?

Let’s see if we put our money where our mouth is…

Yesterday I reflected on disaster-based technology integration. Today I focus on our context and what NOT to leverage on.

Singapore schools practise e-learning days where kids stay at home for lessons. Prior to this, schools send notifications to parents that explain how this helps us be prepared for the unexpected. In our context, this might mean a viral outbreak or the haze.

That type of rationale — e-learning is emergency learning — does us no favours. The viruses do not celebrate racial harmony in one day and the haze does not heed our kindness campaigns. That is my way of saying that WHEN such events occur and HOW LONG they will take is not easy to predict.

One e-learning day repeated a few times a year is not going to cut it. I know of schools that stagger e-learning content in batches to prevent server overload on that one day. How prepared are we should we require constant access over a protracted period?

If there is a model to look to, it is how Google ensures that YouTube is up 24×7. That sort of e-learning (entertainment-learning) is available all the time and any time.

When e-learning is relegated to a single day, the preparation to implement it is minimal both technologically and pedagogically. Content and platform access are outsourced to one of a few edtech vendors. There is practically no pedagogy beyond the blanket statement of encouraging students to be self-directed learners.

Being self-directed is important, but most e-learning days are not exemplars of that. Students are told exactly what to do, when, and how. They are following formulas, instructions, and recipes. They are not being independent.

What might self-direction look like? When learners have an authentic and complex problem they want to solve, they meet in a WhatsApp group they already have, watch a few relevant YouTube videos they look for, and discuss solutions.

Any parent with an e-learning notification letter can also tell you that e-learning days seem to coincide with the days or week right before vacation periods. Is the focus meaningful learning or administrative creativity? Does this mean that the e-learning is in excess, extra, or otherwise good-to-have but not essential?

Not many adults examine the quality of such “e-learning”. As a concerned educator and former head of a centre for e-learning, I offer some questions for both parents and teachers:

  • Bearing in mind what I just wrote, why do you have e-learning?
  • What do the e-learning materials and experiences do the SAME as school?
  • What do the e-learning materials and experiences do DIFFERENTLY from school?
  • What was worth the effort? What was effective and what was not? Why?
  • After answering the question above, why do you have e-learning (really)?

What might we take away when we compare our efforts with the disaster-driven technology for e-learning?

We should not be complacent when we have the time, space, and resources to do different and do better. But like the case study I summarised yesterday, we should leverage on what learners already do authentically, seamlessly, and without boundaries.

Does anyone learn anything from school-sanctioned e-learning days? Do the kids learn? Do the teachers? Do the administrators?

As a former e-learning practitioner and director, I had enough data, knowledge, and authority to say that the answers to those questions were no. I have even described many e-learning events as more e-doing than actual e-learning.

Now I ask these questions again because I only have anecdotal answers.

From my regular interactions with teachers, I find that:

  • schools still schedule e-learning days.
  • teachers require students to work only according to that schedule, e.g., students are not encouraged to access or complete e-tasks outside that time.
  • the tasks are equivalent to conventional worksheets.
  • the content might be superficial or peripheral, or easy enough to be repeated in class.

How many times do we need to test if kids can do things online that they already do in school? They already have strict school structures and class schedules for that.

Do people not see that the point of e-learning is to:

  • provide flexibility?
  • push creative pedagogy?
  • accommodate different learning needs?
  • nurture independent learning?
  • test the effectiveness of something different?

This should be the operating principle of any technology-mediated learning: It is a means of change for the better. Schools should not be doing the same old thing in a different medium.

So in the case of e-learning, school authorities and teachers should not be focusing on dealing with problems that will reduce over time. In Singapore’s context, these might include technology access and procedures to access e-platforms. These issues will not disappear entirely, but they should not be what we concern ourselves with.

Instead, we should be dealing with e-learning issues that will persist. Amongst many things, kids need to be taught how to independently or collaboratively read, listen, watch, analyze, evaluate, create, share, and critique online. These are skills and values-laden processes.

If they are not taught these, I question the validity and quality of the e-learning. The students are very likely going through the motions of e-doing instead of actually learning something valuable.

So I ask again. What do the kids learn? What do the teachers learn? What do the administrators learn?
