Teaching an algorithm
Posted on May 22, 2016
Recently I downloaded Visr, an app that relies on algorithms to highlight questionable words and images that might appear in my son’s social media channels.
Doing this reminded me why parents and teachers cannot rely on algorithms, blacklists, whitelists, or anything that relies largely on automation.
The app provides summary reports on a schedule of your choice. It monitors the channels you choose, e.g., Google+ and YouTube, and tracks both what a child consumes and what the child creates in those channels.
However, I have found that its algorithms act like a fervent puritan.
This is a screenshot from the report on my son’s YouTube videos about using LEGO to build a likeness of a Team Fortress 2 sentry. The algorithm flagged the video as containing nudity when there is none.
I have noticed that the algorithm flags faces, be they actual human faces or cartoonish ones, as nudity. Perhaps the algorithm is focusing on the eyes, or the eyes and nose, which by a stretch of the imagination might look like more private parts of the body.
The app lets you mark an alert as a real concern, ask to see fewer such alerts, or point out a mistake in identification. I try to teach the algorithm by telling it to ignore such images, but it does not seem to learn.
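To make that complaint concrete, here is a minimal sketch in Python of the kind of feedback loop I suspect is at work. Visr’s actual pipeline is not public, so everything here — the nudity_score function, the FeedbackStore, the threshold — is a hypothetical stand-in for illustration, not the app’s real code.

```python
# A minimal sketch of a per-item dismissal loop, NOT Visr's actual
# implementation. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class FeedbackStore:
    """Remembers which flagged items a parent marked as mistakes."""
    dismissed: set = field(default_factory=set)


def nudity_score(image_id: str) -> float:
    """Stand-in for the classifier; pretend LEGO faces score high."""
    return 0.9 if "lego_face" in image_id else 0.1


def should_alert(image_id: str, store: FeedbackStore,
                 threshold: float = 0.5) -> bool:
    # Suppress repeats of an item the parent already dismissed,
    # but leave the underlying classifier untouched.
    if image_id in store.dismissed:
        return False
    return nudity_score(image_id) > threshold


store = FeedbackStore()
print(should_alert("lego_face_01", store))  # True: a false positive
store.dismissed.add("lego_face_01")         # parent: "this is a mistake"
print(should_alert("lego_face_01", store))  # False: that one is suppressed
print(should_alert("lego_face_02", store))  # True again: nothing generalised
```

If the app only suppresses the specific items a parent dismisses, as in the last line of the sketch, the underlying model never changes and near-identical false positives keep arriving. That gap between per-item dismissal and actual retraining would explain what I am seeing.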
Therein lies the problem with using only technology (the app) to deal with what might be perceived as a technological problem (you know, kids and their devices these days). Adults forget that the issue is socio-technical in nature.
I take time to chat with my son about cultural and social norms. We talk about what is acceptable and responsible behaviour. I do not shield him from social media because that is part of life and learning. I do not ignore its pitfalls either. But I do not just rely on apps to deal with apps. Both of us will have to live and learn by trying, making mistakes, and trying again.