Another dot in the blogosphere?

Posts Tagged ‘analytics’

This timely tweet reminded me to ask some questions.

Other than “learning styles”, are career guidance programmes here going to keep wasting taxpayer money on Myers-Briggs tests for students and the same training for teachers?

Are people who claim to be edtech, change, or thought leaders still going to talk about “21st century competencies” and “disruption” this decade?

Might people keep confusing “computational thinking” or “authoring with HTML” with “coding”?

Will administrators and policymakers lie low in the protection and regulation of the privacy and data rights of students?

Are vendors going to keep using “personalised learning” and “analytics” as catch-all terms to confuse and convince administrators and policymakers?

Are sellers of “interactive” white boards still going to sell these white elephants?

Are proponents of clickers going to keep promoting their use as innovative pedagogy instead of actually facilitating active learning experiences?

I borrow from the tweet and say: Please don’t. I extend this call by pointing out that if these stakeholders do not change tack, they will do more harm than good to learners in the long run.

I have avoided reading and reviewing the opinion piece, Analytics can help universities better support students’ learning. When I scanned the content earlier this month, my edtech Spidey sense was triggered. Why?

Take the oft-cited reason for leveraging the data: They “provide information for faculty members to formulate intervention strategies to support individual students in their learning.”

Nowhere in the op piece was there mention of students giving permission for their data to be used that way. Students are paying for an education and a diploma; they are not paying to be data-mined.

I am not against improving how students study or enabling the individualisation of learning. I am against the unethical or unsanctioned use of student data.

Consider the unfair use of student-generated data. Modern universities rely on learning management systems (LMS) for blended and online learning. These LMS are likely to integrate plagiarism checking add-ons like Turnitin. When students submit their work, Turnitin gets an ever-increasing and improving database. It also charges its partner universities hefty subscription fees for the service.

Now take a step back: Students pay university fees while helping a university partner and the university partner makes money off student-generated data. What do students get back in return?

Students do not necessarily learn how to be more responsible academic writers. They might actually learn to game the system. Is that worth their data?

Back to the article. It highlighted two risks:

First, an overly aggressive use of such techniques can be overbearing for students. Second, there is a danger of adverse predictions/expectations leading to self-fulfilling prophecies.

These are real risks, but they sidestep the more fundamental issues of data permissions and fair use. What is done to protect students when they are not even aware of how and when their data is used?

This is not about having a more stringent version of our PDPA* — perhaps an act that disallows any agency from sharing our data with third parties without our express consent.

It is about not telling students that their data is used for behavioural pattern recognition and to benefit a third party. While not on the scale of what Cambridge Analytica did to manipulate political elections, the principle is the same — unsanctioned and potentially unethical use of a population’s data.

*I wonder why polytechnics are included in the list of agencies (last updated 18 March 2013) responsible for personal data protection but universities are not.

Twitter is like a rapidly flowing stream of consciousness. Follow more people or engage in an exciting chat and your Twitter timeline might look like a torrent.

Thankfully there are ways to take the occasional snapshot and to catch a few fish so that you can get a sense of what is going on. I briefly describe three tools and strategies.

Storify allows you to archive and tell a story of, say, a Twitter-based chat. To the uninitiated, a Twitter chat can be disorienting because the chat is not threaded, the @handle replies easily lose context, and the conversation appears in reverse chronological order. Storify is a good way to consolidate a chat.

Here is an example of the affordances of Storify based on a chat on self-directed learning that #edsg had six months ago. In the example, I illustrate how to use forward chronology, chunking, and commenting to add value to an archive.

Twitter analytics used to be available only to corporate or paying customers. Now everyone on Twitter has access to their own dashboard. Now everyone can see how much (or more likely, how little) reach or impact they have!

Instead of focusing on how each and every tweet is doing, you might be more interested in where your followers are from. Assuming you have authentic followers, you can click on the “followers” section of the Twitter analytics dashboard to see this. I did this when prompted after an #NT2t question a few weeks ago.


I discovered that about a third of my followers were from Singapore, followed by the USA, Australia, UK, and everywhere else. About 60% of my followers were interested in education-related news, which is great as that is what I mostly tweet about.

I amplify my blog reflections by posting Twitter blurbs via WordPress. I do so just once a day between 10 and 11am Singapore time, so this does not necessarily reach all my readers. Despite this, there are more visitors to my blog from the USA than Singapore.

The Twitter analytics snapshot hints at other entry points and visitor expectations. For example, I now have some evidence that my Singapore followers are content to just read the Twitter blurbs while the ones from the USA take the trouble to read my blog and/or find my content some other way.

A trackback and feedback-to-source tool I discovered only a few weeks ago takes a bit of setting up to integrate with a blog, but once done, it might fill a much-needed gap.

Most bloggers discover that they lose conversations and comments about their blog content because these happen in Facebook, Twitter, and other platforms. The tool attempts to find and link some of these back to the blog.

The system is not perfect though. My WordPress count of tweets often exceeds the number of trackbacks the tool returns, but that is probably because of the Twitter preferences of my followers.

For example, the tool seems to track back only the auto-tweets WordPress sends to Twitter on my behalf. Twitter followers who tweet comments and link to my blog independently of my tweet, reply to those tweets, or link via other services are not detected.

Collectively, using these tools is like trying to catch fish from the stream with a net. You are not going to get a perfect catch all the time, but it is far better to try and to know than to operate blindly.

Tweets are fleeting and might represent a collective stream of consciousness. Anything that bobs its head several times in that stream tends to stand out.

If you are not a celebrity and something you tweeted was retweeted 66 times, you might be happy to note the agreement, endorsement, or share-ability of the idea. (By comparison, if you are a celebrity, you could tweet that you pooped and it could be retweeted thousands of times.)

However, your poopless joy should be tempered with context. Take the analytics of a recent tweet of mine.

To date, the tweet has been viewed 2,145 times, retweeted 66 times, and has an engagement rate of 3.1%. In the context of the number of views, that is a low return. In a good week, each of my tweets gets 3,000 to 4,000 views within three days. Given more views, the engagement figure is likely to drop.
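Twitter defines engagement rate as engagements divided by impressions. A quick back-of-envelope check (an assumption: treating the 66 retweets as the bulk of the engagements, though Twitter also counts clicks, replies, and likes) reproduces the figure:

```python
# Figures from the tweet discussed above.
impressions = 2145   # times the tweet was viewed
engagements = 66     # retweets; other interaction types are ignored here

# Engagement rate = engagements / impressions.
rate = engagements / impressions
print(f"{rate:.1%}")  # 3.1%
```

It also makes the last point obvious: if impressions keep growing faster than engagements, the rate can only fall.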

That is what playing only the numbers game gets you.

Exploring the context of tweeting further, Twitter analytics do not capture modified tweets. For example, someone might tweet the URL of my tweet or tweet a variation of it.

Consider another example.

I took this screenshot in October 2014 of a popular blog entry I wrote in February of the same year. The blog entry now has 107 tweets (and an unknown number of retweets). If I focused just on the numbers, I could figure out the gain of tweets per month or the average in a year.
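The “gain of tweets per month” is simple arithmetic. A sketch, assuming first-of-month dates since the post gives only the months:

```python
from datetime import date

# Assumed dates: the entry was published in Feb 2014 and the
# screenshot taken in Oct 2014 (only the months are given).
published = date(2014, 2, 1)
snapshot = date(2014, 10, 1)
tweets = 107  # tweets of the entry to date

# Whole months elapsed between the two dates.
months = (snapshot.year - published.year) * 12 + (snapshot.month - published.month)
print(f"{tweets / months:.1f} tweets per month")  # about 13.4
```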

I would rather focus on the fact that something simple I wrote still has traction today. The WordPress dashboard tells me how the entry gets found, e.g., from Google searches or the other bloggers’ efforts.

My point: Numbers can be used to tell a story, to show the making of a story, or to bypass the story altogether. People who focus on playing the numbers game do not care for the story. The lowest-hanging fruit is what matters to them. This is like focusing on grades instead of learning.

I prefer to get the fruit that is not within immediate reach. It takes more effort to climb, but the fruits of my labour are much sweeter. I also get a better view as a bonus. That is just my way of saying that I would rather use the numbers to tell a story even if that requires a bit more work.

When I first heard the news that Twitter made its analytics dashboard available to all, I jumped on it straightaway.

I was surprised to learn of my reach, or what Twitter calls impressions. That said, I have no doubt that others have a far wider reach.

But now I am wondering about the reliability of the analytics dashboard.

I discovered the analytics tools on 28 Aug. However, the date and time seem to be set for some other part of the globe. That said, my reach for 27 Aug as recorded on the morning of 28 Aug was 27,263 (see screencapture below).


The analytics engine was already collecting data for 28 Aug, as evidenced by the small bar to the right of the highlighted one.

On 29 Aug, I checked for activity on the 28th. I moused over the 27th accidentally and noticed that the count went up to 28,842 (see screencapture below).


I am not sure why the numbers changed.

Perhaps the counts got adjusted for the time and date difference. Perhaps older tweets were getting views two or three days after being posted and their hit counts were not yet stable.

The numbers seem to settle about two days into collection. It might be best then to monitor on a weekly or monthly basis.

That was lesson number one.

What is worrying is the low engagement. I have read a few reviews by other individual tweeters [example from Gizmodo] and they say the same thing.

Each of my tweets gets between 4,000 and 5,000 views. But you can count on two hands (and occasionally include the feet) the number of reader interactions with the tweets. These include retweeting, favouriting, clicking on embedded resources, etc.

The tweets with higher interactions tend to be question-oriented. Ask a question and you are likely to get responses. The tweets with lower interactions are information-oriented. Provide something of value and the consumer consumes. Do not expect a thank you, feedback, or a pass-it-on.

This behaviour is not unique to Twitter. When I was privy to my former department’s Facebook reports, our engagement rate was equally low.

If I were a company, I would be concerned that customers were not engaging with me. As an educator carefully curating and sharing, I might be a bit concerned about the viewing habits of my informal audience and learners.

I used to say today’s learner seems to move at twitch speed. This is not another way of saying they have short attention spans, because they do not. Anyone who has observed someone immersed in a game or in a state of flow will realise how much focus gamers have.

I mean to say that they move superficially from one resource to another due to the breadth of information presented to them. Their rallying cry seems to be tl;dr (too long, didn’t read).

Now I am tempted to say that my followers move at Twitter speed. That might sound like superficial consumption, but at least they read and read lots of seemingly disparate information. It is the brain foraging as this MindShift article points out.

So another lesson might be to leverage Twitter as the learner expects: not so much in a forced provide-feedback-in-a-classroom way, but in an informal, scattered-goodies way or a curiosity-driven, #hashtag-focused chat.

The K-12 Horizon Reports have been forecasting the rise of learning analytics for a few years now. But as the field is new, I think there are different perceptions of analytics.

I think that there are at least three dimensions of learning analytics.

  1. Administrative
  2. Learner
  3. Learning

Administrative analytics provide a God-mode overview of who is using WHAT and HOW OFTEN. But this does not offer insights into how the resources and tools are implemented pedagogically. In an LMS, administrative analytics might look like a dashboard that tells an administrator or policymaker how many courses embed YouTube videos, employ discussion forums, or use the assignment feature.

Learner analytics are like progress reports for the instructor. Imagine a dashboard that shows activity completion rates, which students are stuck at which points, and test scores. Learner analytics let you know WHO is WHERE.

Real learning analytics are the holy grail that rely on Big Data. A system with a backend design something like the one shown above might take the frontend form of an omnipresent AI tutor that monitors a learner’s progress and offers help. I imagine this to be like the now defunct Microsoft Clippy but on steroids.

The Amazon store might have all three equivalents. Amazon can monitor its stocks and order more from its suppliers automatically (administrative analytics). It can project what to sell based on the browsing and purchasing habits of its customers (learner analytics). It can also make recommendations aggressively to its users (learning analytics).
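To make the three dimensions concrete, here is a minimal sketch with an invented event log (the field names and figures are illustrative, not from any real LMS) showing how the same data could feed each view:

```python
from collections import Counter

# Invented LMS event log: which student used which tool in which course.
events = [
    {"course": "BIO101", "student": "A", "tool": "forum", "score": None},
    {"course": "BIO101", "student": "B", "tool": "video", "score": None},
    {"course": "CHM202", "student": "A", "tool": "quiz",  "score": 78},
    {"course": "CHM202", "student": "C", "tool": "quiz",  "score": 42},
]

# Administrative analytics: WHAT is used and HOW OFTEN, institution-wide.
tool_usage = Counter(e["tool"] for e in events)

# Learner analytics: WHO is WHERE, e.g. quiz scores per student.
progress = {e["student"]: e["score"] for e in events if e["score"] is not None}

# Learning analytics would go further: react to each learner in the moment,
# e.g. flag anyone below a pass mark for immediate, personalised help.
flagged = [student for student, score in progress.items() if score < 50]

print(tool_usage, progress, flagged)
```

The first two views only report; the third is the reactive, learner-facing layer the post argues is the actual holy grail.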

It is not a perfect analogy nor a comprehensive analysis, but it helps me make sense of this complex phenomenon.

This is an exception that I am making to a rule. I am responding to an email request to feature an infographic.

I am featuring it partly because the person asked nicely, had credentials, and responded to my queries. I am also including it here because it addresses an emerging but important trend that not many people understand.

Learning analytics is an edtech trend to watch. I highlighted this with the help of the 2011 K-12 Horizon Report to folks from Blackboard when we met at the eLearning Forum Asia 2011 (eLFA).

Based on a tool demonstration they provided some months later, I did not get a sense that Blackboard really understood what learning analytics was. I only saw administrative analytics, not learning or learner analytics.

The infographic below provides a better picture of this [source].

A learning analytics system does not just mine data. It reacts and responds as an intelligent system to every learner. It augments a human instructor by providing more immediate feedback and personalising learning.

Bottom line: A good learning analytics system is not designed with an administrator or KPIs in mind. It is designed for the learner first and foremost.

