Learning analytics or big brother?
Posted on October 26, 2011
I had two meetings with Blackboard (BB) representatives earlier this week and I need to vent.
I learnt about a new pricing model and their move towards learning analytics. I could rant about the first but I’ll limit myself to the second.
First, I’ll say that learning analytics as described by the NMC in the K-12 Horizon Report 2011 is an important forecasted trend. I borrow from their report to explain the purpose of learning analytics:
Learning analytics loosely joins a variety of data-gathering tools and analytic techniques to study student engagement, performance, and progress in practice, with the goal of using what is learned to revise curricula, teaching, and assessment in real time.
Imagine being able to determine in real time what difficulties a learner is having and addressing those needs based on the artefacts that a learner creates. In other words, the focus of learning analytics is learning and the learner.
BB showcased a prototype learning analytics tool. To their credit, the prototype system seems robust, and data is processed locally rather than sent to a remote server. This avoids data privacy issues and keeps the information out of the hands of third parties like marketers.
But what the BB representative demonstrated left me with a “big brother is watching you” feeling.
I did not get a sense that BB understood that this was a tool for educators, not just administrators and policy makers.
Why do I say this? With BB’s analytics tool, you can find out how many staff have not created discussion forums, which courses embed YouTube videos, or how one cohort of students performs against another. From a systemic point of view, this tool is great for reporting corporate-type KPIs.
But I think that the point of learning analytics is to figure out what types of learning are taking place, whether learning is happening at all, and to assist the educator in analyzing the needs of the learner.
I think that BB’s prototype system has the capacity to do this. But what was demonstrated did not focus on the learner. It focused on what a university provost or systems administrator might be interested in, e.g., which faculty use the LMS and how often users log in.
For me, this was a good example of the type of thinking and practice that makes an LMS go wrong. Where was the learning in the LMS? This was about administration and policymaking. This was also about impressing someone in higher management who is ill-equipped to make a fully informed decision.
Don’t get me wrong. It is important to have policies in place that promote things like meaningful mobile learning. But you get there by first examining what happens at the level of the learner and the class. You should not be looking at tables or charts from an ivory tower equipped with a monitoring system designed to keep you at a distance.