YouTube relies on algorithms to guess what videos you might be interested in and make recommendations.
While it has machine intelligence, it does not yet have human intuition, nuance, or idiosyncrasy.
All I need to do is search for or watch a YouTube video I do not look for regularly and it will appear in my “Recommended” list. For example, if I search for online timers for my workshop sites, YouTube will recommend other timers.
If I watch a clip of a talk show host that I normally do not follow, YouTube seems to think I have a new interest and will pepper my list with random clips of that person.
This happens so often that I have taken to visiting my YouTube history immediately after I watch anything out of the ordinary and deleting that item. If I do not, my carefully curated recommendations get contaminated.
Some might argue that the algorithms help me discover more and new content. I disagree. I can do that on my own: I rely on the recommendations of a loose, wide, and diverse social network instead.
YouTube’s algorithms cannot yet distinguish between a one-time search or viewing and a regular pattern. They cannot determine context, intent, or purpose.
Until they do, I prefer to manage my timeline and recommendations, and I will show others how to do the same. Managing them is just one item on a long list of digital literacies and fluencies that all of us need in the age of YouTube.
Recently I downloaded Visr, an app that relies on algorithms to highlight questionable words and images that might appear in my son’s social media channels.
Doing this reminded me why parents and teachers cannot rely on algorithms, blacklists, whitelists, or anything that relies largely on automation.
The app provides summary reports on a schedule of your choice. It monitors the channels you choose, e.g., Google+ and YouTube, and covers both what a child consumes and what a child creates in those channels.
However, I have found its algorithms to be like a fervent puritan.
This is a screenshot from the report of my son’s YouTube videos on using LEGO to build a likeness of a Team Fortress 2 sentry. The algorithm flagged the video as containing nudity when there is none.
I have noticed that the algorithm picks up faces, be they actual human faces or cartoonish ones, as nudity. Perhaps the algorithm is focusing on the eyes, or the eyes and nose; by a stretch of the imagination, these might look like more private parts of the body.
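To see how a detector could make this kind of mistake, here is a toy sketch of a naive skin-tone heuristic. This is purely hypothetical — Visr has not published how its detector works, and the function names and thresholds here are my own invention — but it shows why an image that is mostly face can trip an automated filter.

```python
# Hypothetical illustration only: a naive "nudity" heuristic that flags
# any image whose proportion of skin-toned pixels exceeds a threshold.
# A cartoon face is mostly one flat skin tone, so it gets flagged even
# though nothing inappropriate is shown.

def skin_ratio(pixels):
    """Fraction of pixels falling in a rough skin-tone RGB range."""
    def is_skin(rgb):
        r, g, b = rgb
        # A crude, commonly cited rule of thumb for skin-like colours.
        return r > 95 and g > 40 and b > 20 and r > g and r > b
    return sum(is_skin(p) for p in pixels) / len(pixels)

def naive_flag(pixels, threshold=0.5):
    """Flag the image if more than half its pixels look like skin."""
    return skin_ratio(pixels) > threshold

# A "cartoon face": flat skin-tone fill with a few dark pixels for eyes.
cartoon_face = [(224, 172, 105)] * 90 + [(20, 20, 20)] * 10
print(naive_flag(cartoon_face))  # True — a harmless face is flagged

# A mostly blue image (say, a sky) passes without an alert.
blue_sky = [(10, 10, 200)] * 100
print(naive_flag(blue_sky))  # False
```

The point of the sketch is that a heuristic keyed on colour or shape alone has no notion of context: a face, a sandy beach, or a wooden table can all look "skin-like" to it, which is consistent with the false positives I keep seeing.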
The app lets you mark an alert as a real concern, ask to see fewer such alerts, or point out a mistake in identification. I try to teach the algorithm by telling it to ignore such images. But it does not seem to learn.
Therein lies the problem with using only technology (the app) to deal with what might be perceived as a technological problem (you know, kids and their devices these days). Adults forget that the issue is socio-technical in nature.
I take time to chat with my son about cultural and social norms. We talk about what is acceptable and responsible behaviour. I do not shield him from social media because that is part of life and learning. I do not ignore its pitfalls either. But I do not just rely on apps to deal with apps. Both of us will have to live and learn by trying, making mistakes, and trying again.