Another dot in the blogosphere?

Posts Tagged ‘covid-19’

I am a fan of reflective pieces like behind-the-scenes (BTS) peeks at people and processes behind prominent products.

Video source

This televised townhall featured how the Moderna SARS-CoV-2 vaccine was born over a weekend. I wonder how many people watched and listened long enough for a lead scientist to explain that it took at least a decade of work and preparation for that to happen.

I also wish that people would read about the people behind the BioNTech and Moderna vaccines [NYT] [Reuters] [StatNews]. 

There was the lead scientist, Katalin Kariko, whose ideas and findings provided the foundation for both vaccines. Kariko struggled for years to find a way to deliver therapeutic mRNA into cells. She could not get funding because her ideas were untested, and she was demoted.

Kariko’s emigration to the USA was also the stuff of movies. She hid money in her daughter’s teddy bear to avoid the US$100 export limit enforced by her home country of Hungary.

The couple behind BioNTech are of Turkish descent. BioNTech’s chief executive, Ugur Sahin, is described as “humble and personable”. The husband-and-wife team are medical professionals who built on Kariko’s proof of concept and then got Pfizer to produce their vaccine.

By contrast, the story of the Moderna vaccine is fraught with infighting and Wall Street bro culture, e.g., putting money-making potential ahead of everything else. During the vaccine development, Moderna did not publish its findings; BioNTech published about 150 articles.

I am glad that Singapore is partnering with BioNTech in establishing a regional HQ and manufacturing facility here. Good people matter.

Rising above, I am reminded why something that looks quick and/or effortless really is not. There was a lot of toil, pain, and learning from failure that led up to the glam shot.

Teachers and educators can learn from press briefings.

Press Q&As are important for both politicians and the press. Amongst other things, they allow politicians to explain policy and journalists to clarify.

But politicians must communicate as best they can first. Take this important press briefing, which reminded us about using better masks as a pandemic control measure.

Video source

The general public needed to be reminded or educated on why cloth and single layer masks were insufficient. But I wondered why the ministers and experts did not provide examples of better masks.

These examples could be images or actual samples of such masks. The visuals or physical artefacts would illustrate and reinforce the verbal message of what “better masks” meant. See what this newspaper did the next day.

As an educator, I am not about to cite the bunk myth of what we remember aurally vs visually. That pseudoscience “theory” was a misused version of Dale’s Cone of Experience.

However, there is support for providing multiple stimuli for cognitive encoding. This is why teachers are taught to provide more than one medium and method when teaching a new concept to students.

Rising above, it is easier to stick to what one is comfortable with, e.g., just speaking and expecting people to listen. The problem is that your audience or learners do not see what you see in your mind’s eye. With just a bit more effort, e.g., bringing a few different mask samples, you get your point across more efficiently and effectively. Don’t just tell, show and tell.

Video source 

Video source 

The easy thing to do with videos like these is to show them to students who complain about going to school and tell them how grateful they should be.

The more difficult thing to do is to draw out meaningful questions, generate discussion, and educate our students on empathy and action. 

How do you balance the need to create a headline good enough to get readers to click through and getting an important message across? These should not be on opposite sides, but they are in a CNA news article.

This was the tweeted headline from CNA (screenshot below, in case the original tweet is deleted).

The actual article reported this:

First, ask yourself: How many people bother to click through, i.e., read beyond the headline?

Next, if readers do not read the article, they are left with the information that there are at least 2,700 reports of adverse vaccination effects among 2.2 million doses.

The potential impact of the headline is the attention paid to the 2,700 reaction cases. This creates or reinforces fear that fuels vaccination hesitancy. 

How many then learn that the adverse effects were classified into not-so-adverse (common reactions) and actually adverse (serious reactions)? The latter was represented by 95 cases.

That number of cases is 0.004% of doses administered (95/2,213,888 x 100). The article stated 0.04%, which is 10 times higher. The same article has a table that reports the correct figure of 0.004%, so the percentage in the main body of the text does not even match the one in the table.

Finally, how many rationalise that 0.004% is a very small incident rate? How low is this chance? You have a 1 in 25,000 random chance of getting a severe reaction to the vaccination.
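The arithmetic behind these figures is simple enough to check in a few lines of Python. The counts below are the ones cited above; everything else is plain division:

```python
# Figures cited above: 95 serious reactions out of 2,213,888 doses.
serious_reactions = 95
doses_administered = 2_213_888

# Percentage of doses followed by a serious reaction.
rate_percent = serious_reactions / doses_administered * 100
print(f"{rate_percent:.4f}%")  # 0.0043%, i.e. about 0.004%, not 0.04%

# The same rate expressed as "1 in N" odds.
odds = round(doses_administered / serious_reactions)
print(f"about 1 in {odds:,}")  # about 1 in 23,304, roughly 1 in 25,000
```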

How unlikely is 1 in 25,000? I found a summary site of statistics maintained by someone who mined NSC and CDC data. If we were in the USA in 2002, each person had a 1 in 25,000 chance of being murdered with a gun.

If that is hard to relate to, then you get my point. The tiny chance and the large number of doses are difficult to rationalise. Suffice to say that the chance of reacting severely to the vaccination (or being gunned down) is very small.

Think of it this way: If you were in the USA and not terribly afraid that you were going to get shot, you should not be afraid that you are going to react severely to the vaccination.

The issue that writers and editors of newspaper headlines do not seem to understand is human psychology. People tend to focus on the part of the headline that screams “reports of suspected adverse effects”. The headline also includes the initialism HSA, for Singapore’s Health Sciences Authority, so it might come across as a warning. The number of cases could have been 27 or 270; the focus would still have been on the authority and the adversity.

The messaging is important. Recipients have a right to know the possible side effects of the vaccination. The HSA was transparent with its statistics. However, the news agency was irresponsible with the clickbait headline and the wrong calculated figure of the severe cases in the main body of its text.


Video source

If Johnny Harris has his facts straight, then there is a surprising and enlightening historical link between Belgian imperialism of the Congo and the Johnson and Johnson (J&J) COVID-19 vaccine. The video is well worth the watch for the story and the skill with which Harris serves it up.

For me, this is a reminder to always be aware of the history of any policy, process, or product. In my field, all three are ingredients of edtech.

Like the J&J vaccine, each form of edtech has a storied history. Some might have dark or dirty roots. Even though we cannot change the past, and even if the present is positively unrecognisable from a shady past, knowing the history is still valuable.

How so? After being informed about the past, I believe that we can do something about making the present policy or practice better. For example, content management systems with strict command and control might be giving way to more open platforms that encourage co-creation and collaboration.

We can also be humble about what we are involved in. This approach is particularly relevant if we are part of a successful intervention, e.g., the provision of mobile devices and Internet dongles to students-in-need. We acknowledge what others have done to enable online and distance learning, and do our part to keep everyone moving forward and upward.


Video source

This video is timely given the misleading way some people use the efficacy numbers of different COVID-19 vaccines.

The efficacy of a vaccine is not the same as its effectiveness. I recommend this NYT article for an explanation of how something like “95%” efficacy is derived.

Vaccine trial efficacy is not the same as real use effectiveness. A trial use of the vaccine includes a placebo for one sampled group of people and the vaccine for another group. Actual use only includes the vaccine and is applied across a much larger group of people.
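To make the trial arithmetic concrete, here is a sketch with made-up counts. These are not figures from any actual trial; they only illustrate how an efficacy percentage falls out of comparing the two arms:

```python
# Hypothetical trial counts, purely for illustration.
placebo_group, placebo_cases = 20_000, 180
vaccine_group, vaccine_cases = 20_000, 9

# Efficacy compares attack rates between the two arms:
# efficacy = 1 - (attack rate with vaccine / attack rate with placebo)
attack_rate_placebo = placebo_cases / placebo_group
attack_rate_vaccine = vaccine_cases / vaccine_group
efficacy = 1 - attack_rate_vaccine / attack_rate_placebo
print(f"{efficacy:.0%}")  # 95%
```

Note that the result depends on how many people in the placebo arm fell ill, which in turn depends on where and when the trial ran. That is one reason the numbers from different trials cannot be compared directly.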

Back to the video — it explains why efficacy numbers cannot be compared. For example, the Moderna trial was only in the USA. The Johnson & Johnson (J&J) trial also included countries outside the USA (Brazil and South Africa) where variants of SARS-CoV-2 emerged. It was also conducted over a more severe infection period compared to the Pfizer/BioNTech and Moderna trials.

Here is something the video did not point out. The Pfizer/BioNTech and Moderna vaccines have high efficacies after two doses. The J&J vaccine is a single-dose shot.

Screenshot of the range of outcomes after vaccination. From the Vox video (https://www.youtube.com/watch?v=K3odScka55A) on why vaccine efficacy numbers cannot be compared.

The video also highlighted that the vaccines are not designed to prevent COVID-19 symptoms absolutely. If vaccinated people develop only mild to moderate symptoms, the vaccine is still considered effective.

During the trials, all the vaccines mentioned in the video prevented hospitalisation and death among sampled participants. By that measure, all the vaccines were just as good. If we focus only on trial efficacy numbers, we lose sight of this more important outcome.

One general takeaway that applies in any problem-solving and policy-making is this: Numbers are a start, but they are not the end. The explanations and narratives that accompany them provide depth, nuance, and exceptions. If we do not go beyond the numbers, we risk misinforming ourselves and others.

The tweet and report above are fodder for anti-vaccination Facebook groups and taxi uncles alike. The headline is irresponsible because it implies causality, yet no other factors in the death were explored or considered in the tweeted article.

Contrast the lack of context and information to the tweet thread below.

If I had to fault the tweet, I would point out that it did not immediately provide sources for the numbers. However, a Guardian article in the second part of Dr Clarke’s thread reported:

The MHRA, which collects reports of side-effects on drugs and vaccines in the UK through its “yellow card” scheme, told the Guardian it had received more notifications up until 28 February of blood clots on the Pfizer/BioNTech than the Oxford/AstraZeneca vaccine – 38 versus 30 – although neither exceed the level expected in the population.

The MHRA is the Medicines and Healthcare products Regulatory Agency in the UK.

The actual numbers of blood clot cases will vary over time, but the fact remains that the incidence is so low that it is below what would be expected by chance. What does that mean?

In any population, a certain number of people will naturally get blood clots. Take this thought experiment: we inject an entire population with a saline placebo that mimics blood plasma and contains no drugs or vaccines. The result: more people would get blood clots after that saline jab than after the AstraZeneca (AZ) vaccine.

The AZ vaccine is new to use and the blood clot cases might rise. But for now the data indicate what Dr Clarke and others in the Guardian article have said: it is safe to use, and not using it is dangerous.

Thankfully, some good sense has prevailed since I started drafting this reflection. The BBC news report below revealed that the EU had declared the vaccine safe for continued use.


Video source

I have two takeaways from reading both news reports. The first is the image quote below.

It's easy to lie with statistics, but it's hard to tell the truth without them. -- Andrejs Dunkels

My second is a parallel in teaching. Just as CNA was irresponsible with its misleading article, it is just as bad to teach content without context. While vaccine use is overseen by regulatory bodies that correct wayward action, everyday teaching is not.

The AZ vaccine might see a quick comeback with investigation and regulation. But teaching that focuses primarily on content and teaching to the test has a long-term detriment: it nurtures students who cannot think for themselves.

Better edubloggers than me have reminded us why schools should not return to normal post-pandemic.

In a moment of serendipity, Seth Godin just blogged this:

…we learn in ways that have little to do with how mass education is structured…

…The educational regimes of the last century have distracted us. It turns out that the obvious and easy approaches aren’t actually the ones that we need to focus on.

How likely is meaningful change to happen? Not very, but we can hope while pushing from whatever edge and corner we are at.

If nothing substantial happens this time round, perhaps the next pandemic will bring a more forceful reminder.

History repeats itself. It has to, because no one ever listens. -- Steve Turner.

Today I reflect on a COVID-19 poll and compare it to end-of-course surveys.

Evaluating the above-mentioned poll result at face value, would you consider 6 out of 10 a good outcome?

Compared to pre-COVID-19, such a finding could be a good thing. But given how polls and surveys are not always designed and conducted scientifically, you might pause for thought.

That is not where I would stop, unless I was conducting a course on descriptive statistics and survey design. Instead, I focus on the purpose of a poll or survey: in this case, to take a snapshot of self-reported behaviours.

A critically-minded person might ask if self-reporting is sufficient. After all, it is one thing to make a claim (e.g., I will keep sanitising my hands in future) and another to actually do it. Other triangulating data, like observations of behaviour and measurements of sanitiser use, would help determine the latter.

End-of-course surveys suffer the same weaknesses. They are about perception and self-reporting behaviours of students. If we are to really get a bead on learning, we need to pursue its longer tail, e.g., if and how students actually apply knowledge and skills in context.

This is not to say that end-of-course surveys have no use. Like the post-COVID-19 behaviour poll, they are quick snapshots of user perceptions. But they must be recognised as such: a survey captures the information of a single photo compared to that of a video clip, and its limitations must be kept in mind.

Today I try to link the habits of app use to a change in teaching.

Like many Singaporeans, I have had months of practice using the location-aware app, SafeEntry, to check in and out of venues. We do this in a collective contact tracing effort during the current pandemic.

You cannot forget to check in because you need to show the confirmation screen to someone at the entrance. However, you can easily forget to check out* because, well, you might have mentally checked out or have other things on your mind.

Therein lies a flaw in the design and implementation of the app. Instead of making both processes manual, the app could be semi-automatic: a required manual check in at entrances, but automated exits.

How so? The mobile app is location-aware. It has a rough idea of where you are and can suggest where to check in. This is why the manual check in is better: the human choice is more granular.

However, when people leave a venue, the app could check them out automatically if it detects that they are no longer there over a period of, say, 10 minutes. I say give the user the option of a manual check out or an automated one.
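A rough sketch of that semi-automatic logic might look like the following. SafeEntry’s actual implementation is not public to me, so the class, method names, and the 10-minute threshold are all assumptions for illustration:

```python
from datetime import datetime, timedelta

# Assumed grace period before an automatic check out.
AUTO_CHECKOUT_AFTER = timedelta(minutes=10)

class Visit:
    """One manual check in at a venue, with an optional automatic check out."""

    def __init__(self, venue: str, checked_in_at: datetime):
        self.venue = venue
        self.checked_in_at = checked_in_at
        self.checked_out_at = None
        self.last_seen_at_venue = checked_in_at

    def update_location(self, at_venue: bool, now: datetime) -> None:
        """Called on every location fix the app receives."""
        if self.checked_out_at is not None:
            return  # already checked out
        if at_venue:
            self.last_seen_at_venue = now
        elif now - self.last_seen_at_venue >= AUTO_CHECKOUT_AFTER:
            # Away long enough: check out automatically, backdated to the
            # last time the user was actually seen at the venue.
            self.checked_out_at = self.last_seen_at_venue
```

Backdating the check out to the last location fix at the venue keeps the recorded visit duration honest even though the user did nothing.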

*The video below reported that checking out is not compulsory. But not checking out creates errors in contact tracing, i.e., we do not know exactly where a person has been and for how long. This not only affects the usability of the data but also inculcates blind user habits.


Video source

For me, this is a lesson on rethinking teaching during the pandemic by using awareness as a key design feature. It is easy to just try to recreate the classroom and maintain normal habits when going online or adopting some form of hybrid lessons.

But this does not take advantage of what being away from the classroom or being online offers. The key principle is being aware of what the new issues, opportunities, and affordances are, e.g., isolation, independence, customisation.

Making everyone check in and out with SafeEntry is an attempt to create a new habit with an old principle (the onus is all on you). This does not take advantage of what the mobile app is designed to do (be location-aware).

Likewise, subjecting learners to old expectations and habits (e.g., the need to be physically present and taking attendance) does not take advantage of the fact that learning does not need to be strictly bound by curricula and timetables.

The key to breaking out of both bad habits is learning to be aware of what app users and learners think and how they experience the reshaped world. This design comes from a place of empathy, not a position of authority.
 

