Another dot in the blogosphere?


One of the simplest forms of digital curation is teaching YouTube algorithms what videos to suggest.

Curating by informing YouTube algorithms.

I do this by marking videos that I have no wish to watch as “not interested” (see screenshot above). I also remove some videos from my watch history.

Sometimes I watch videos based on a suggestion or a whim, but I find them irrelevant. If I do not remove them from my watch history, I will get suggestions that are related to those videos the next time I refresh my YouTube feed.

These simple steps are an example of cooperating with relatively simple AI so that algorithms work with and for me. This is human-AI synergy.

While it might have seemed like I was picking apart this opinion piece on providing universal Internet access yesterday, I support most of its ideas and the principles it was based on.

For example, one of the concepts was that it was not enough to simply provide devices and broadband connections to all households. We also need to drive behavioural change, e.g., utilising the connections and devices productively and ethically.

Providing hardware and software without good “humanware”* leaves users open to potential harm. For example, they might not know how to secure their devices against hacks.

Equitable access to broadband connections and devices also does not ensure access to information. Users need to be taught how to work remotely with secure video conferencing or to participate in online learning.

*I consider the practices, attitudes, and values that are socially transmitted and negotiated to be humanware.

One barrier to the installation of humanware is another divide: access to timely advice and reputable sources of information. Consider the importance of using Virtual Private Networks (VPNs).

VPNs are not created equal even though most claim to provide secure and private Internet surfing experiences. Rerouted traffic goes through a VPN provider’s servers, and what the provider does with all that data is not immediately transparent to the average user.

VPNs also allow users to access information they need or want even if an overriding policy prohibits it. This does not have to be an illegal act.

I have a Netflix subscription and was looking forward to the interactive episode of Unbreakable Kimmy Schmidt. According to the actress who plays Kimmy, it was released in the USA over two weeks ago.

The episode has been delayed indefinitely in Singapore (see screenshot below).

Unbreakable Kimmy Schmidt interactive episode delayed.

This past week I visited Netflix hoping to watch the special episode. The page oscillated between displaying the “safety” message and telling me it was not available in my region.

I know Netflix dubs episodes in different languages to reach larger audiences, but the last time I checked, I spoke English in Singapore. I still do.

The rules for not streaming the interactive episode were unknown, they prevented access, and they made no exceptions. This was despite the comedy being mild. How mild? This IMDb Parents Guide stated that “’Shit’ is used once in season 4.”

I resorted to using my VPN service to watch the episode. As it was a choose-your-own-adventure special, I watched it with my wife to enjoy the different routes it took.

So what is my point? VPNs provide access to what you want or need even when obtuse or outdated policies hold you back. In my case, I enjoyed some harmless entertainment. In the case of a worker or student, a good VPN might not only provide a more secure web browsing experience but also a richer one. But only if this humanware is first installed and constantly updated.

Access is not just about the hardware and software, it is also about the know-how and know-why of humanware.

This opinion piece by two academics about digital access as a universal right and basic utility could not be more timely. But I seek to balance it with some critique.

The article cited a statistic that might surprise those who view affluent Singapore from the outside:

According to Professor Jean Yeung’s recent Straits Times article on her study of a nationally representative sample of over 5,000 children aged six and under, although the Wi-Fi penetration rate is near universal in Singapore, 8 per cent of families in her study who lived in rental units did not have a connection, and 44 per cent lacked a computer or a laptop at home.

The authors pressed on with this statement:

As local media reports revealed, the home-based learning experience was highly uneven across families.

Whereas affluent families fretted over higher order concerns such as the quality of online instruction and children’s excessive screen time, less well-off families grappled with basic problems of device ownership and Internet access.

I agree, but I think that we should not be looking for equality, i.e., treating everyone the same. We should be striving for equity, i.e., providing more help and resources to those who need it more. This is not just a semantic argument. It is a pragmatic one because it shapes the actions we take.

U-Save 2020.

Consider a system we already have in place, U-Save — vouchers that eligible households receive to offset the cost of utilities. The government provides more financial aid to those living in smaller apartments and less to those in larger ones. This is based on the working principle that the less well off live in small apartments and need more assistance.

The authors of the article then proposed that a system like Wireless@SG be extended to every home:

With our Nationwide Broadband Network successfully in place, offering broadband access speeds of 1Gbps and more, extending free home Wi-Fi to residential areas will not involve more than a concerted coordination with telcos outfitting every home with modems and wireless routers.

Our other utilities (electricity, gas, water) are not free, and their infrastructure needs to be maintained. Wired and wireless infrastructure also needs to be maintained and upgraded, and compared with the more established utilities it tends to be the first to fail and make headlines, e.g., StarHub and M1 each had a major outage during our circuit breaker (our shelter-in-place), in April and May respectively.

Making Internet services “free” will place the burden on taxpayers. The same taxpayers will also likely have to put up with inferior customer service since there is no commercial pressure to compete and improve.

The authors then addressed the need for digital devices:

The current NEU PC Plus scheme offered by IMDA is generous and well-intentioned.

Yet, as with all mean-tested programmes with conditions, coverage will fall short. Some who need it will not apply while some who apply will not be given.

NEU PC Plus programme by the IMDA.

[Image source]

They then pointed out how disadvantaged families tended to choose mobile phones over computers because phones cost less. Computers, if present at home, were old and shared.

Financial cost is not the core issue. A Chromebook or mid-range laptop costs less than a high-end mobile phone. You might even be able to buy two or three Chromebooks instead of a fully-specced iPhone.

The pressing issue is that learning resources and platforms tend to be optimised for computers. Computer screens are larger and computers have more processing power, storage space, and extendibility (think peripherals).

I argue that there is an urgent need to shift the mindsets of teachers, instructional designers, and platform developers. The shift is to mobile first (or even mobile only). This means that content delivery, curation, and creation, as well as cooperation and communication, should be designed with the affordances of a phone or slate first.

Such a shift highlights another need: Access to professional development for learning and platform designers to operate with such a mindset. If we design first for mobiles, we reach all who have access to mobile devices.

Thinking and doing mobile first is not reaching for low-hanging fruit. If designers and developers currently operate on the desktop paradigm, it can be challenging for them to do otherwise.

But if they do, they might discover how the many affordances of a phone — location-awareness, orientation in 3-D space, augmented audio and video among them — provide opportunities that level the playing field.

We have all probably experienced this and our reaction might be similar. But the effect and impact of clicking on a paywalled article is not that simple.

The experience and reaction are shared if you consider only your own perspective and what is immediately obvious.

It is not if you consider the possible metrics for the newspaper, e.g., clicks (successful or not; which headlines work better), insidious ads (e.g., those running in the background or whitelisted by your adblocker), records of paying and non-paying visits, cookies that track what grabs your attention (e.g., opinion pieces over fluff), etc.

In the longer term, the newspaper gets something out of you — data about your habits and preferences and maybe even some ad revenue — even if you do not get what you want from a click.

That is why one of my concerns of late is our digital rights and privacy [1] [2]. I am even more concerned when the target audience is school-aged kids and young adult learners in institutes of higher education. They are tracked as they access content and learning management systems.

Some of the tracking is necessary, e.g., to note where learners last left off and where they might need to go next. But some tracking is not, e.g., if data is mined by third parties without the knowledge of the learners or their parents.

Clicking on a link to get what you want (or to be denied access) might seem like a simple transaction. However, there are insidious transactions that we might not know or care about. This is like throwing plastic bags away and not knowing or caring where they end up.

We need to know and act better. We need to be more digitally and information literate. If the agencies that guide us do not have compasses that point north [example], we need to teach and police ourselves.

The title of this reflection is a quote from one of the participants of the video below.


Video source

The participants had to evaluate the claims made by another video producer about the properties of “real” and “fake” food. I highlighted one reaction because it was an honest and direct response to attempts to mislead.

However, it might be easier to spot misleading claims about food than misleading statements in tweets or headlines.

Thankfully that is why we have the Navigating Digital Information series by Crash Course [my thoughts and annotations on the series] and two episodes so far by TED-Ed [annotations on part 1] [annotations on part 2].

The sad thing is that the video above will probably get more views on its own than all 12 videos about being digitally literate combined. It is easier to tell people “Don’t be a dumbass!” than to get them not to be dumbasses.


Video source

The video above has a clickbait title — this one weird trick will help you spot clickbait.

The examples highlight not one but three strategies for evaluating clickbait titles of news or video reports (a small code sketch of the second and third follows the list):

  1. Drawing a line between cause and effect
  2. Understanding the impact of sample size on reported results
  3. Distinguishing between statistical or scientific significance and practical bearing
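The second and third points can be made concrete with a few lines of code. Here is a minimal sketch, using made-up numbers rather than any study cited above, that simulates a “before versus after” comparison in which nothing actually changed; the group sizes, scores, and seed are all illustrative assumptions.

    # A toy simulation: both samples come from the SAME distribution,
    # so any "improvement" a headline reports here is pure sampling noise.
    import random
    import statistics

    random.seed(42)

    def apparent_change(n):
        """Mean difference between two samples of size n from identical populations."""
        before = [random.gauss(100, 15) for _ in range(n)]
        after = [random.gauss(100, 15) for _ in range(n)]  # no real effect
        return statistics.mean(after) - statistics.mean(before)

    for n in (5, 50, 5000):
        print(f"n = {n:5d}: apparent change = {apparent_change(n):+.2f} points")

Tiny samples can swing by several points purely by chance, while the largest sample hovers near the true difference of zero. The flip side also holds: with a huge sample, even a difference too small to matter in practice can be reported as statistically significant.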

Crash Course provided a ten-part series called Navigating Digital Information. But what good is claiming to be information literate if you cannot prove it?


Video source

This TED-Ed video is a quick test on applying some of that knowledge by evaluating misleading headlines.

The video title states that this test is Level 1. So will there be more difficult tests?

I am sad. This is the last episode of Crash Course’s series on Navigating Digital Information.


Video source

This week’s focus was social media.

Host John Green started by outlining how social media has had far-reaching consequences, e.g., shaping our vocabulary, changing our expectations of privacy, and organising grassroots efforts.

But probably the most important impact of social media might be that it is now the most common source of information and news. This includes disinformation, misinformation, and fake news, all of which are easy to spread with a click or a tap.

The ease of creating, sharing, and amplifying is social media’s best and worst set of affordances. The affordances are neutral, but we can choose to bully and mislead, or make new friends and organise special interest groups.

Regardless of their purpose, social media are powered by targeted advertising and algorithms. Both affect what we read, hear, or watch in our feeds. This can create filter bubbles.

This insulation is a result of social media companies needing to keep us engaged. A consequence of this is that we might not get to process dissenting views or the truth behind the lies we are fed.

If we know what drives social media, we could take Green’s advice by:

  • Following entities that have different perspectives from ours.
  • Deactivating recommended results or top posts so that we get a more neutral feed.
  • Avoiding rabbit holes (deep dives of content or perspectives that result in more of the same or the extreme).
  • Exercising click restraint and practising lateral reading.
  • Having the courage and making the effort to correct mistakes.

This week’s episode of Crash Course’s Navigating Digital Information focused on click restraint.


Video source

Click restraint is about not relying on the first few returns in Google search. It is about scanning, analysing, and evaluating the rest of the returns. It is not about immediate gratification but about figuring out the most valid and reliable sources of information.

Why exercise click restraint?

Searches are not objective. The search algorithms (rules) are shaped by us and the results are processed by us. We do all these based on our perspectives, biases, or bubbles.

How might we exercise click restraint?

By analysing the search results first:

  1. Scan the titles and URLs of the results for their sources
  2. Read the snippets or blurbs that accompany the titles or URLs
  3. Try to determine the nature of the resource: opinion piece, satire, official report, etc. (a toy sketch of this triage follows the list)
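To make the triage above less abstract, here is a minimal sketch with entirely made-up search results and my own rough domain heuristics (an illustration, not anything prescribed by Crash Course). It only scans titles, URLs, and snippets; a human still has to read laterally before trusting anything.

    # A toy triage of hypothetical search results: look at the URL and title
    # for hints about the nature of the source before clicking anything.
    from urllib.parse import urlparse

    # Made-up results for illustration only.
    results = [
        ("Ministry report on home-based learning",
         "https://www.example.gov.sg/report",
         "Official statistics on device access in households..."),
        ("You won't BELIEVE what this teacher did",
         "https://clicksforcash.example.com/teacher",
         "Number 7 will shock you..."),
        ("Opinion: Broadband should be a utility",
         "https://news.example.com/opinion/broadband",
         "Two academics argue that..."),
    ]

    def rough_label(title, url):
        """Very rough heuristics for the likely nature of a resource."""
        host = urlparse(url).netloc.lower()
        path = urlparse(url).path.lower()
        if host.endswith(".gov.sg") or host.endswith(".edu"):
            return "official or institutional source"
        if "/opinion/" in path or title.lower().startswith("opinion"):
            return "opinion piece"
        if "believe" in title.lower() or "shock" in title.lower():
            return "likely clickbait"
        return "unclear: read the snippet and check the source laterally"

    for title, url, snippet in results:
        print(f"{rough_label(title, url):35s} | {title} | {snippet[:30]}...")

The point is not the crude rules themselves but the habit they encode: gather cheap signals from the whole results page before spending a click.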

This week’s episode of Crash Course’s Navigating Digital Information focused on data and its visual representation.


Video source

Data, whether represented by raw numbers or graphics, can seem objective. However, they are not neutral because people gather and interpret them. (As a former academic, I shuddered whenever I overheard colleagues talking about “massaging data”.)

In evaluating data, host John Green reminded us to ask:

  • Does the data support the claim? (Is it relevant?)
  • How reliable is the source of data? (Who commissioned the research and why? Who conducted it and why?)

As for data visualisations, Green reminded us to check if the graphic was based on real data (check its source) and that the data was transferred and presented accurately.

Another consideration specific to data visualisations like infographics is how complex phenomena are simplified in the creative process. This might sacrifice the accuracy of the data.

If we combine both sets of principles, we might be in a stronger position to evaluate the following example. Two organisations used the same set of data to send messages on climate change.

Organisation A’s image is on the left and B’s is on the right.

Screenshot of graphs from https://www.youtube.com/watch?v=OiND50qfCek&t=201s.

Organisation A had already concluded that temperatures were not rising globally over time, so it manipulated the y-axis to range from -10 to 110 deg F. Organisation B zoomed in on a smaller range, and the average temperature increase was more pronounced. B critiqued A’s representation as misleading.

Both organisations used relevant data that supported their claims. The data was sourced from a neutral third party (NASA’s GISS). However, the presentation was manipulated by A to obscure the trend.
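The axis trick is easy to reproduce. Below is a minimal sketch with made-up temperature values (not the actual NASA GISS series) that plots the same gentle upward trend twice: once on a -10 to 110 deg F axis, as Organisation A did, and once on a tighter range.

    # The same made-up data, plotted with two different y-axis ranges.
    import matplotlib.pyplot as plt

    years = list(range(1880, 2021, 10))
    # Illustrative global mean temperatures in deg F, rising about 2 deg F overall.
    temps_f = [56.7 + 0.14 * i for i in range(len(years))]

    fig, (ax_wide, ax_zoom) = plt.subplots(1, 2, figsize=(10, 4))

    ax_wide.plot(years, temps_f)
    ax_wide.set_ylim(-10, 110)   # a huge range flattens the trend
    ax_wide.set_title("Looks flat")

    ax_zoom.plot(years, temps_f)
    ax_zoom.set_ylim(56, 59)     # a tight range makes the same rise obvious
    ax_zoom.set_title("Same data, visible trend")

    for ax in (ax_wide, ax_zoom):
        ax.set_xlabel("Year")
        ax.set_ylabel("Global mean temperature (deg F)")

    plt.tight_layout()
    plt.show()

Neither chart changes a single number; only the framing changes, which is exactly why checking the axes is part of checking the data.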

My perspective: Seeing should not immediately lead to believing because the data might be selectively or “sexily” presented. The first is only sharing data that supports preconceived notions; the second is using elaborate or compelling-looking visuals to disinform or lie.

A side note: Have you ever noticed that “lie” sits in the middle of “believe”?

