Another dot in the blogosphere?

Posts Tagged ‘algorithm’

…the Facebook algorithms. These include those of its adopted children, Instagram and WhatsApp.

Or at least starve them by not posting, sharing, liking, etc.


It is not content that you are creating or propagating. It is data that you are creating. You and your behaviours are the data.

In the hands of responsible entities, such data might be handled with care. Facebook is irresponsible and greedy, and it craves user data. The recent tracking limitations in iOS 14.3 are useless if we do not limit ourselves.

I have been following the disaster that was the algorithmic grade prediction for the International Baccalaureate in the US and the GCEs in the UK.


Video source

The video above is one of several news reports on the issue. The NYT piece offers an excellent overview and critique.

I offer an oversimplification of the problem. Students could not take their exams due to the coronavirus pandemic. Administrators decided to rely on algorithms to predict their grades, using teacher-graded work and each school’s past performance as indicators.

Unfortunately, some students received lower-than-expected grades. Worse still, these students were disproportionately from poorer or otherwise disadvantaged schools.
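
The mechanics make the bias easy to see. Here is a minimal sketch of a school-history-weighted predictor; the blend, weight, and scale are invented for illustration and are not the actual IB or Ofqual models:

```python
# Hypothetical sketch of a school-history-weighted grade predictor.
# The weighting below is invented for illustration; the real IB and
# Ofqual models were more complex.

def predict_grade(teacher_grade: float,
                  school_history_mean: float,
                  history_weight: float = 0.6) -> float:
    """Blend a teacher-assessed grade with the school's past average.

    The higher history_weight is, the more a student is pulled toward
    the school's historical mean, regardless of individual merit.
    """
    return ((1 - history_weight) * teacher_grade
            + history_weight * school_history_mean)

# Two equally able students, as judged by their teachers...
print(round(predict_grade(teacher_grade=6.5, school_history_mean=6.0), 1))  # 6.2
print(round(predict_grade(teacher_grade=6.5, school_history_mean=4.0), 1))  # 5.0
```

The same teacher judgment produces different outcomes depending only on the school’s past, which is exactly how students from historically weaker schools were pulled down.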

A problem that administrators wanted to avoid was grade inflation, i.e., better-than-expected results compared to the previous year. This was an underhanded way of suggesting that teachers might grade their students too generously.

Another problem that they and other observers worried about was the impact of remote lessons and exam preparation on students. This was a shady way of saying that learning online was inferior and/or that teachers thrown into the deep end of emergency remote teaching did not do a good enough job.

However, what resulted were grades that were better than expected. They were so good that they were algorithmically adjusted downward.

So here is an unoriginal thought: either the teachers had no integrity (they cheated by being lenient) or online teaching and learning was better than expected. Fraud en masse among teachers is unthinkable. Some people do not seem to want to give credit to online teaching and learning. So what happened then?

It is a long time before I need to facilitate a course on future edtech again, but I am already curating resources.


Video source

As peripheral as the video above might seem, it is relevant to the topic of algorithms and artificial intelligence (AI).

The Jolly duo discovered how YouTube algorithms were biased against comments written in Korean even though that was the language of a primary audience. Why? YouTube wanted to see if it could artificially drive English speakers there instead of allowing what was already happening organically.

Algorithms and AI drive edtech, and both are designed by people. Imperfect and biased people. Similar biases exist in schooling and education. One need only recall the algorithms that caused chaos for major exams: the International Baccalaureate (IB) in July and the General Certificate Examinations in the UK in August. Students received lower-than-expected results, and this disproportionately affected already disadvantaged students.

Students taking my course do not have to design algorithms or AI since that is just one topic of many that we explore. The topic evolves so rapidly that it is pointless to go in depth. However, an evergreen aspect is human design and co-evolution of such technology in education.

We shape our tools and then our tools shape us. — Marshall McLuhan

Marshall McLuhan’s principle applies in this case. We cannot blindly accept that technology is by itself disruptive or transformative. We create these technologies, the demand for them, and the expectations of their use.

A small and select group has the know-how to create the technology. They create the demand by convincing administrators and policymakers who do not necessarily know any better. Since those gatekeepers are not alert, we need new expectations — we must know, know better, and do better. All this starts with knowing what algorithmic bias looks like and what it can do.

I put three seemingly unrelated videos in one of my private YouTube playlists to watch or use later.

The first was about chocolate, the second about non-digital special effects, and the third about an autistic man. While they seem unrelated, they are linked by what and how I watch on YouTube.


Video source

Video source

Video source

I watch SciShow religiously — I also subscribe to their podcast — so the first video is not surprising. It feeds my need for nuanced views and for correcting misconceptions.

The second might have appeared in my feed when I searched for current examples of augmented reality (AR) and virtual reality (VR) for a Master’s course I am facilitating. The video appeared after that session was over, and it was about neither AR nor VR, but it emphasised the importance of tactile manipulation in learning. It is something I can use in the closing session to highlight contextual use.

The third was a welcome surprise since I also facilitate a short course on ICT for inclusive education. The course stopped for a while as administrators worked out funding issues, but now that it is back I am glad to have another possible resource to spark discussion.

The link between these videos is how YouTube algorithms learnt my preferences and habits. While such algorithms are designed to serve up videos and ads that might be relevant to me, they do not always do this well.

The ads are driven by more than personalisation. There is the brute-force pushing and selling of products and services that have no relevance to me, e.g., how to be a Carousell or Amazon top seller. Those algorithms, if they apply at all, do not have my interests in mind.

The recommended videos are better. I help the algorithms out by occasionally pausing my watch and search history. I might also use an incognito browser window. I do this to prevent the algorithms from thinking that I am interested in something new.

I also visit my watch history and delete video listings that might misinform YouTube’s algorithms. This also helps me receive more relevant content.

The lesson is about taking control of your feeds. Do this and your feeds provide you with relevant content and serendipitous surprises. Don’t do this and you become a pawn in someone else’s game.

YouTube relies on algorithms to guess what videos you might be interested in and make recommendations.

While it is machine intelligent, it does not yet have human intuition, nuance, or idiosyncrasies.

All I need to do is search for or watch a YouTube video outside my regular viewing, and it will appear in my “Recommended” list. For example, if I search for online timers for my workshop sites, YouTube will recommend other timers.


Video source

If I watch a clip of a talk show host that I normally do not follow, YouTube seems to think I have a new interest and will pepper my list with random clips of that person.

This happens so often that I have taken to visiting my YouTube history immediately after I watch anything out of the ordinary and deleting that item. If I do not, my carefully curated recommendations get contaminated.

Some might argue that the algorithms help me discover new and varied content. I disagree. I can find that on my own, as I rely on the recommendations of a loose, wide, and diverse social network.

YouTube’s algorithms cannot yet distinguish between a one-time search or viewing and a regular pattern. They cannot determine context, intent, or purpose.
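
As a deliberately naive illustration (not YouTube’s actual recommender), consider an interest model that simply counts watches. A single stray view immediately competes with genuine interests, and a minimum-repeat rule is the kind of distinction the real system does not appear to make:

```python
from collections import Counter

# Deliberately naive interest model: every watch counts equally, so a
# single out-of-character view looks like a brand-new interest.
# The topic labels are invented for illustration.

watch_history = ["scishow", "scishow", "practical_effects",
                 "scishow", "talk_show_clip"]  # one stray click

naive_interests = Counter(watch_history)
print(naive_interests.most_common())
# [('scishow', 3), ('practical_effects', 1), ('talk_show_clip', 1)]
# The one-off 'talk_show_clip' already competes for recommendations.

# One remedy: treat a topic as an interest only if it recurs.
MIN_REPEATS = 2
real_interests = [topic for topic, count in naive_interests.items()
                  if count >= MIN_REPEATS]
print(real_interests)  # ['scishow']
```

Deleting a video from my watch history is, in effect, removing the stray entry by hand before the count treats it as a signal.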

Until they do, I prefer to manage my timeline and recommendations, and I will show others how to do the same. This is just one item on a long list of digital literacies and fluencies that all of us need in the age of YouTube.

Recently I downloaded Visr, an app that relies on algorithms to highlight questionable words and images that might appear in my son’s social media channels.

Doing this reminded me why parents and teachers cannot rely on algorithms, blacklists, whitelists, or anything that relies largely on automation.

The app provides summary reports on a schedule of your choice. It monitors the channels you choose, e.g., Google+ and YouTube, and tracks both what a child consumes and what they create in those channels.

However, I have found its algorithms to be like a fervent puritan.

Sample Visr report

This is a screenshot from the report of my son’s YouTube videos on using LEGO to build a likeness of a Team Fortress 2 sentry. The algorithm flagged the video as containing nudity when there was none.

I have noticed that the algorithm picks up faces, be they actual human faces or cartoonish ones, as nudity. Perhaps the algorithm focuses on the eyes, or the eyes and nose. By a stretch of the imagination, these might look like more private parts of the body.

The app lets you indicate if an alert is a real concern, choose to see fewer such alerts, or point out a mistake in identification. I try to teach the algorithm by telling it to ignore such images. But it does not seem to learn.
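
For contrast, here is what learning from that feedback could look like in the simplest possible form: each mistake report raises the confidence the filter demands before alerting again. This is a hypothetical sketch, not how Visr actually works:

```python
# Hypothetical sketch of a content filter that learns from feedback.
# Visr evidently does not behave this way; the threshold and increment
# below are invented for illustration.

class FlagFilter:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # minimum confidence to raise an alert

    def should_alert(self, confidence: float) -> bool:
        return confidence >= self.threshold

    def report_false_positive(self) -> None:
        # Each "this was a mistake" report demands more confidence
        # before the same kind of alert is raised again.
        self.threshold = min(0.95, self.threshold + 0.1)

nudity_filter = FlagFilter()
print(nudity_filter.should_alert(0.55))  # True: the LEGO video is flagged

nudity_filter.report_false_positive()    # parent marks it as a mistake
print(nudity_filter.should_alert(0.55))  # False: the filter has adapted
```

A filter that ignores such reports, as Visr’s appears to, will keep raising the same alert no matter how many times it is corrected.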

Therein lies the problem with using only technology (the app) to deal with what might be perceived as a technological problem (you know, kids and their devices these days). Adults forget that the issue is socio-technical in nature.

I take time to chat with my son about cultural and social norms. We talk about what is acceptable and responsible behaviour. I do not shield him from social media because that is part of life and learning. I do not ignore its pitfalls either. But I do not just rely on apps to deal with apps. Both of us will have to live and learn by trying, making mistakes, and trying again.

