Another dot in the blogosphere?

Posts Tagged ‘crashcourse’


Video source

This was the final episode of the CrashCourse series on artificial intelligence (AI). It focused on the future of AI.

Instead of making firm predictions, the narrator opted to describe how far AI development has come and how much further it could go. He used self-driving cars as an example.

Five levels or milestones of self-driving AI.

Viewed this way, the development of AI is gauged against general milestones instead of specific states.

The narrator warned us that the AI of popular culture was still the work of science fiction as it had not reached the level of artificial general intelligence.

His conclusion was as expected: AI has lots of potential and risks. The fact that AI will likely evolve faster than the lay person’s understanding of it is a barrier to realising potential and mitigating risks.

Whether we develop AI or manage its risks, the narrator suggested some questions to ask when a company or government rolls out AI initiatives.

Questions about new AI initiatives.

I thoroughly enjoyed this 20-part series on AI. It provided important theoretical concepts that gave me more insights into the ideas that were mentioned in the new YouTube Original series, The Age of AI. Watching both series kept me informed and raised important questions for my next phase of learning.


Video source

This was another episode that focused on hands-on Python coding using Google Colaboratory. It was an application of concepts covered so far on dealing with biased algorithms.

The takeaway for programmers and lay folk alike might be that no programme is free from undesirable bias. We need to iterate on designs to reduce such bias.
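To make that iteration concrete, here is a toy Python sketch (my own invention, not the episode's Colab code) of one common first step: reweighting an unbalanced training set so that a rare outcome is not drowned out.

```python
from collections import Counter

# Invented labels from an unbalanced training set:
# 'approve' outcomes vastly outnumber 'deny' outcomes.
labels = ["approve"] * 90 + ["deny"] * 10

counts = Counter(labels)
n_samples, n_classes = len(labels), len(counts)

# "Balanced" weighting: each class contributes equally overall,
# so samples of the rare class get proportionally larger weights.
class_weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
sample_weights = [class_weights[c] for c in labels]

print(class_weights)  # approve is about 0.56, deny is 5.0
```

Reweighting alone does not make a model fair, of course; it is just one knob to turn in each design iteration.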


Video source

This was an episode that anyone could and should watch. It focused on bias and fairness as applied in artificial intelligence (AI).

The narrator took care to first distinguish between being biased and being discriminatory. We all have bias (e.g., because of our upbringing), but we should prevent discrimination. Since AI adopts our bias, we need to be more aware of ourselves so as to prevent AI from discriminating harmfully by gender, race, religion, etc.

What are some examples of applied bias? Google image search for “nurse” and you are likely to see photos of women; do the same for “programmer” and you are more likely to see men in the photos.

The narrator suggested five sources of bias. I paraphrase them as follows:

  1. Existing data are already biased (e.g., the photo example above)
  2. New training data is unbalanced (e.g., providing photos of faces largely from one main race)
  3. Data is reductionist and/or incomplete (e.g., creative writing is difficult to measure and simpler proxies like vocabulary are used instead)
  4. Positive feedback loops (e.g., past actions are repeated as future ones regardless of context; see the sketch after this list)
  5. Manipulation by harmful agents (e.g., users teaching Microsoft’s Tay to tweet violence and racism)
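To see how the fourth source snowballs, here is a tiny simulation I sketched in Python (the items and numbers are made up): recommendations are driven purely by past clicks, and being recommended is what earns new clicks.

```python
import random

random.seed(0)

# Polya-urn-style feedback loop: items are recommended in
# proportion to their past clicks, and being recommended is
# what earns new clicks; quality never enters the picture.
clicks = {"A": 1, "B": 1}

for _ in range(10_000):
    shown = random.choices(list(clicks), weights=list(clicks.values()))[0]
    clicks[shown] += 1

print(clicks)  # whatever split emerges early on gets locked in
```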


Video source

Finally. An episode on how search engines use AI to help (or not help) us find answers to questions.

The narrator likened search engines to library systems: They had to gather data, organise them, and find and present answers when needed.

The gathering of data is done by web crawlers — programmes that find and download web pages. The data is then organised by reverse indexes, more commonly called inverted indexes (like those at the back of textbooks).

Each term in the index is associated with the numbers of the pages that contain it. Each time we search with an engine, it looks our terms up and follows those numbers back to the matching web content.
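A toy version of such an index takes only a few lines of Python (the pages below are invented):

```python
# Map each term to the numbers of the pages that contain it.
pages = {
    0: "nurses care for patients",
    1: "programmers write code",
    2: "nurses and programmers use computers",
}

index = {}
for page_id, text in pages.items():
    for term in set(text.split()):
        index.setdefault(term, set()).add(page_id)

def search(query):
    """Return numbers of pages containing every query term."""
    hits = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("nurses"))       # {0, 2}
print(search("programmers"))  # {1, 2}
```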

Example of indexing.

Since there is so much content, it needs to be ranked by accuracy, relevance, recency, etc. We help the AI do this with signals ranging from bounces (returning to the search results) to click-throughs (staying with what we were presented).
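A crude sketch of how those signals might feed a ranking (the pages, counts, and scoring formula are all invented):

```python
# Invented engagement logs for two result pages.
results = {
    "page_a": {"click_throughs": 80, "bounces": 20},
    "page_b": {"click_throughs": 30, "bounces": 70},
}

def engagement(stats):
    """Fraction of visits where the user stayed with the result."""
    total = stats["click_throughs"] + stats["bounces"]
    return stats["click_throughs"] / total if total else 0.0

ranked = sorted(results, key=lambda page: engagement(results[page]), reverse=True)
print(ranked)  # ['page_a', 'page_b']
```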

The narrator also explained how we might be presented with immediate answers and not just links to possibly relevant web resources. AIs use knowledge bases instead of reverse indexes.

Knowledge bases might be built with NELL — Never Ending Language Learner. The video explains this better than I can.

NELL — Never Ending Language Learner.

Fair warning: Search engines still suck at questions that are rarely asked or are nuanced. AI is still limited by what data is available. This means that it is subject to the bias of people who provide data artefacts.

The next episode is about dealing with such bias. Now the series gets really interesting!


Video source

This was an episode that would make a novice coder happy because it provided practice.

It did not apply to me because I was merely getting some basics and keeping myself up to date for a course I facilitate.

In this episode, the host led a session on how to code a movie recommendation system. To do this, he revisited concepts like pooling large datasets, getting personalised ratings, and implementing collaborative filtering. In doing so, he suggested solutions for incomplete data, cold starts, and poor filtering.
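For flavour, here is a stripped-down version of the same idea in plain Python (my own toy, not the episode's Colab notebook; the users, movies, and ratings are invented):

```python
import math

# Sparse user-movie ratings: not everyone rates everything.
ratings = {
    "ann": {"Alien": 5, "Up": 1, "Heat": 4},
    "ben": {"Alien": 4, "Heat": 5},
    "cat": {"Up": 5, "Alien": 1},
}

def cosine(u, v):
    """Cosine similarity over the movies two users both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0  # a cold start: no overlap means no signal
    dot = sum(u[m] * v[m] for m in shared)
    norm = (math.sqrt(sum(u[m] ** 2 for m in shared))
            * math.sqrt(sum(v[m] ** 2 for m in shared)))
    return dot / norm

def predict(user, movie):
    """Similarity-weighted average of other users' ratings."""
    pairs = [(cosine(ratings[user], other), other[movie])
             for name, other in ratings.items()
             if name != user and movie in other]
    total = sum(sim for sim, _ in pairs)
    return sum(sim * r for sim, r in pairs) / total if total else None

print(round(predict("ben", "Up"), 2))  # a blend of ann's 1 and cat's 5
```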

The next episode promises to provide insights on how search engines make recommendations.

It took a while, but CrashCourse finally provided some insights into how YouTube, Netflix, and Amazon make recommendations.


Video source

Long story short: The AI recommendations are based on supervised and unsupervised learning. The interesting details are that the algorithms may be content-based, social-based, or personalised.

Content-based algorithms examine what is in, say, YouTube videos. Social-based algorithms focus on what the audience does (e.g., likes, views, time spent watching). As we have different preferences, algorithms can learn what we like and serve us similar content or content from the same provider.
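The content-based side can be sketched just as simply (the videos and tags here are invented): two videos count as similar when their tags overlap.

```python
# Invented content tags for a few videos.
videos = {
    "space_doc": {"science", "space", "documentary"},
    "mars_news": {"science", "space", "news"},
    "cat_clip": {"cats", "comedy"},
}

def jaccard(a, b):
    """Tag overlap: 1.0 means identical sets, 0.0 means disjoint."""
    return len(a & b) / len(a | b)

def similar_to(title):
    """Other videos, most similar first."""
    others = [(v, jaccard(videos[title], videos[v]))
              for v in videos if v != title]
    return sorted(others, key=lambda pair: pair[1], reverse=True)

print(similar_to("space_doc"))  # mars_news ranks above cat_clip
```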

The recommendations we see on YouTube are a combination of all three and the process is called collaborative filtering. This relies on unsupervised learning to predict what we might like based on what other users similar to us also like/watch.

The AI might make mistakes in the recommendations. This can be due to sparse data (e.g., low views, low likes), cold starts (i.e., AI does not know enough about us initially), and statistics (i.e., what is likely is not the same as what is contextually relevant). A good example of this sort of mistake is online ads.

Some pragmatics: To get good recommendations, we might subscribe and like videos from content creators we appreciate. To avoid getting tracked, we might use the incognito mode in most modern web browsers.


Video source

This week’s episode countered the mainstream and entertainment media message that artificial intelligence (AI) will take over all our jobs and eventually us as well. It focused on how humans and AI can collaborate and complement one another.

AI is quick, consistent, and tireless. But it is poor with insight, creativity, and nuance, traits that we possess despite ourselves. The narrator related an example of how chess players worked with AI to beat human chess masters or AI-only opponents.

Beyond chess, the narrator suggested that AI could help with medical diagnoses. It can focus on rote tasks and processing large amounts of information and combine its findings with a doctor’s experience and knowledge of a patient.

In engineering, AI could suggest basic designs of structures based on existing rules while humans might consider the practicality of those designs in context. In human development, AI could artificially give us more strength, endurance, or precision, e.g., robot exoskeletons, remote surgery.

As much as AI helps us, we also help AI. We provide data for AI every time we contribute to any online database. When AI spits out results based on its algorithms, it often shows us the products but not the processes; humans can provide insights into those processes or fine-tune them.

AI has no moral value systems. That is a human thing. But so is bias, which happens to be the focus of the next episode.

