
In a few weeks, yet another batch of future faculty will pass through my hands. I can only hope that they remember to teach with learning and the learner in mind.

Another related task that they have to do is start a teaching philosophy statement. As this piece of writing is a challenge even for established faculty, I will be providing them links to two resources I shared in this blog:

  1. 10 tips for crafting a teaching philosophy
  2. Writing tips for future faculty

Today, I add one more simple tip: Find a balance between storytelling and citing pedagogical research.

Narratives can be compelling because they are often personal stories. However, one person’s story does not necessarily represent a system, nor is it credible on its own.

Citing rigorous and respected pedagogical research goes a long way towards lending credibility to an approach to teaching. However, it lacks personalisation.

I recommend blending the two. For example, a personal story of a bad learning experience could provide context for a new pedagogical approach.

When the strength of one method compensates for the weakness of another, it makes sense to combine both in a delicate balance.

Journalists who write for papers are fond of backing up their claims with “research says” or “according to research” without actually citing, linking to, or listing that research.

Claiming that “research says” or “according to research” sounds authoritative, but it is not. Readers should not have to take your word for it; they should be able to access the original source material and decide for themselves.

Even academics who have been brought in to write opinion or expert pieces seem guilty of doing this. However, I suspect that the academic style of using and citing references gets edited out to suit newspaper style.

All that said, even if references sneak in, they are no guarantee of accuracy or authority. A writer typically has an agenda or has to follow someone else’s agenda, so the references might be biased.

Even if a writer remains as objective as possible, the returns on what research says are often mixed. This is particularly true of the social sciences, of which educational research is firmly a part.

Consider video-based learning. In the age of YouTube, there is research on the effects of videos on learning.

There are generic and summary-oriented articles like Research On Using Video for Learning or How Students Learn From Video.

Then there are articles that claim that videos are key to learning, like Why Flipped Learning Is Still Going Strong 10 Years Later. But there are also articles like Why Videos May Not Be the Best Medium for Knowledge Retention whose title is self-explanatory. Interestingly, the contrasting articles are from the same publishing source.

Asking what “research says” is no guarantee of finding the answers you expect, need, or want. Quite the opposite. You might end up more undecided than before.

But that is partly the point of research. It is not to provide clear or definite answers. It is to roughly point the way with the help of more questions.

If you seek to indoctrinate, provide the answers. If you seek to educate, provide questions.


The video below about fidget spinners might look like clickbait, but it asks an important question: What does research say about their effectiveness?

Video source

The answers may not satisfy because the question was dealt with critically. The answers blew away personal experience and confirmation bias, and instead highlighted how little we know for sure.

“What does research say?” is a reasonable question to ask. I do not hear it as often as I would like after a presenter on education has said his or her piece. Most audiences seem to be satisfied with being inspired (which does not last) or taking snapshots of fancy diagrams (which may not transfer to practice or transform practice).

Audiences and readers should be asking the critical question of “What does research say?”. When they do, they should also be critical of the sources and the type of answers.

If the speaker is from an edtech company, was the “research” sanctioned or provided by the same company or an affiliate? What does actual research conducted by neutral third parties say?

Often the reality is that research that answers your question precisely is sparse. The tool, strategy, or idea is not quite untested or uncharted, but is not fully a sure bet either. The speaker is not likely to admit that if he or she wants to sound confident and needs to make a sale.

Research in education often reveals best guesses or recommended practices based on specific contexts and conditions. To claim otherwise is to overstate.

What does research actually say? Not much: there are conditions, there are limitations, there is no significant difference.

Do yourself a favour and do your own research on research.


The writers of Quartz, some of whom I have described as using lazy writing, wondered why one of the world’s wealthiest countries is also one of its biggest online pirates.

The country was Singapore, “the world’s fourth richest country, measured by gross national income per capita and adjusted for purchasing power”. Quartz wondered why Singaporeans still resorted to piracy despite having access to Netflix.

Does it assume that 1) there are no poor people in Singapore, 2) everyone here has heard of Netflix or other legal video streaming platforms, 3) the rich people here (all of us!) subscribe to something like Netflix, and 4) having access to legal streaming should reduce piracy significantly?

These are flawed assumptions. By not trying to answer its own question, Quartz revealed lazy thinking and research.

For example, it did not mention that Netflix Singapore offers only about 15% of the TV shows carried by Netflix USA. (The exact figure might vary over time and is available in the table at this site.)

It did not mention that we have relatively low-cost fibre optic broadband plans.

Telcos here now push 1Gbps plans. One needs only a cursory examination of this chart maintained by the Infocomm Media Development Authority (IMDA) of Singapore to see that the plans hover around S$50 now.

Low access to the full Netflix USA library, combined with ready access to high-speed Internet, points to our ability to get the same resources elsewhere.

Quartz decided to call our behaviour kiasu. That is a catchall term that avoids actual thought and explanation. The label is convenient: You are all just like that despite your money and access.

Like most sociotechnical phenomena (behaviours shaped and enabled by technology), the underlying reasons are nuanced. I have suggested just two and backed them up with data.

Image: “Caution” (CC BY 2.0) by dstrelau, on Flickr

Last week I read this blog entry, Give a kid a computer…what does it do to her social life? It summarised a research paper that claimed to study how computers influenced social development and participation in school.

The paper might seem like a good read, until you realise its limitations. The blogger pointed these out:

A few caveats of these conclusions should be borne in mind. First, the study only lasted for one school year. Second, having a smart phone, with the constant access it affords, may yield different results. Third, children were given a computer, but not Internet access. Some kids had it anyway, but the more profound effects may come from online access.

The single-year study is quite a feat, even though a longer longitudinal study would have been better. The researchers were probably limited by schooling policies and processes, like access to students and how students are grouped.

I am more critical of the other study design flaws.

My first response was: Computers only, really?

Phones are the tools, instruments, and platforms of choice among students. You can take away their computers, but you can only pry their phones from their cold, dead hands. If you want to study the impact of a technology set that is key to social development and school participation, you should focus on the influence of the phone.

My second response was: Not consider Internet access, really?

That is like studying the impact of cars on air quality or travel stress while limiting the cars to a thimble of fuel. Much of what we do with computers and phones today requires being online. You can focus on what happens offline with these devices, but that is a limited view, like observing one minute out of every hour and claiming to know what happens all day.

There might be a need to study the impact of, say, a 1:1 programme, but this would likely happen in the larger context of Internet-enabled phone use. It does not make sense to study the impact of non-Internet computer use in a silo.

My third response on reading the abstract was: Self-reporting via surveys, really?

There is nothing wrong with surveys in themselves, particularly if they are well-designed and valid. However, self-reporting is notoriously unreliable because participant memories are subject to time, contextual interpretation, emotion, and other confounding factors.

Given that the study was quasi-experimental, where were the other data collection methods to triangulate the findings? These methods include, but are not limited to, observations, interviews, focus groups, document analysis, and video analysis.

While my critique might sound harsh, this is the norm of academic review. If a study is to inform theory or practice, it must be rigorous enough to stand up to logical and impartial critique.

There is no perfect study, and on-the-ground situations can be difficult. But if researchers cannot manage the circumstances and design with better methods, then their readers should read critically with informed lenses. If readers do not have those lenses, this doctor offers this free prescription.

Although I am no longer an academic, I see research opportunities everywhere. One untapped area of research is Pokémon Go (PoGo).

I am not talking about the already done-to-death exercise studies or about the motivations to play and keep playing.

I am thinking about how sociologists might add to PoGo’s trend analysis. Number crunchers have already collected data on its meteoric rise and now its declining use. While these provide useful information to various stakeholders, I wonder if anyone has considered the impact of PoGo uncles and aunties.

I am not the first to observe that much older players have started playing PoGo. I tweeted this a while ago, and someone just started a thread in the PoGoSG Facebook group about uncles and aunties at play.

A quick search on Twitter with keywords like “pokemon go” and “auntie” or “uncle” might surprise you.
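For the curious, an illustrative query using Twitter’s standard search operators (quoted phrase, OR, and parentheses) would be “pokemon go” (auntie OR uncle).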

The PoGo aunties and uncles are quite obvious here. So far I have noticed three main types: Solo aunties, uncles in pairs or small groups, and auntie-uncle couples. There are more types, of course, but these three are common enough to blip frequently on social radar.

But I would not be content with just describing the phenomenon. I would ask if they contribute to the “death” of PoGo, just as the older set’s adoption of Facebook seemed to drive teens to migrate to Snapchat.

We should not underestimate the impact of uncles and aunties. After all, there must be a reason for this saying: Old age and treachery will always overcome youthfulness and skill.

Since some people would rather watch a video bite than read articles, I share Hank Green’s 2.5-minute SciShow critique of “learning styles”.

Video source

From a review of research, Green highlighted how:

  • the only study that seemed to support learning styles was severely flawed
  • students who perceived that they favoured one style over others actually benefitted from visual information regardless of their stated preference

This is just the tip of the iceberg of evidence against learning styles. I have a curated list here. If that list is too long to process, then at least take note of two excerpts from recent reviews:

From the National Center for Biotechnology Information, US National Library of Medicine:

… we found virtually no evidence for the interaction pattern mentioned above, which was judged to be a precondition for validating the educational applications of learning styles. Although the literature on learning styles is enormous, very few studies have even used an experimental methodology capable of testing the validity of learning styles applied to education. Moreover, of those that did use an appropriate method, several found results that flatly contradict the popular meshing hypothesis. We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice.

In their review of research on learning styles for the Association for Psychological Science, Pashler, McDaniel, Rohrer, and Bjork (2008) came to a stark conclusion: “If classification of students’ learning styles has practical utility, it remains to be demonstrated.” (p. 117)

In Deans for Impact, Dylan Wiliam noted:

Pashler et al pointed out that experiments designed to investigate the meshing hypothesis would have to satisfy three conditions:

1. Based on some assessment of their presumed learning style, learners would be allocated to two or more groups (e.g., visual, auditory and kinesthetic learners)

2. Learners within each of the learning-style groups would be randomly allocated to at least two different methods of instruction (e.g., visual and auditory based approaches)

3. All students in the study would be given the same final test of achievement.

In such experiments, the meshing hypothesis would be supported if the results showed that the learning method that optimizes test performance of one learning-style group is different than the learning method that optimizes the test performance of a second learning-style group.

In their review, Pashler et al found only one study that gave even partial support to the meshing hypothesis, and two that clearly contradicted it.
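
To make that design concrete, here is a minimal, hypothetical Python sketch of the kind of crossover experiment Pashler et al describe. It is not from any of the studies above; the group sizes, score distribution, and function names are all made up for illustration.

    # Hypothetical simulation of the crossover design Pashler et al describe.
    # All numbers are invented; scores ignore style-method matching (a null effect).
    import random
    import statistics

    random.seed(42)
    STYLES = ["visual", "auditory"]   # presumed learning styles (condition 1)
    METHODS = ["visual", "auditory"]  # instruction methods (condition 2)

    def run_study(n_per_cell=50):
        """Within each style group, randomly allocate learners to methods,
        give everyone the same final test (condition 3), return cell means."""
        scores = {(s, m): [] for s in STYLES for m in METHODS}
        for style in STYLES:
            assignments = METHODS * n_per_cell  # equal numbers per method...
            random.shuffle(assignments)         # ...allocated at random
            for method in assignments:
                # Null model: the test score does not depend on whether
                # the method "meshes" with the presumed style.
                scores[(style, method)].append(random.gauss(70, 10))
        return {cell: statistics.mean(vals) for cell, vals in scores.items()}

    means = run_study()
    for (style, method), mean in sorted(means.items()):
        print(f"{style:8} learners, {method:8} instruction: {mean:5.1f}")

    # Meshing is supported only if the best method differs between style groups.
    best = {s: max(METHODS, key=lambda m, s=s: means[(s, m)]) for s in STYLES}
    print("Crossover (meshing) pattern:", best["visual"] != best["auditory"])

Because the simulated scores ignore any match between style and method, an apparent crossover in the printout would be pure noise; a real study would test the interaction statistically rather than eyeball cell means.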

Look at it another way: We might have learning preferences, but we do not have styles that are either self-fulfilling prophecies or harmful labels that pigeonhole. If we do not have visual impairments, we are all visual learners.

Teaching is neat. Learning is messy.

Learning is messy, and teaching tries to bring order to what seems to be chaos. The problem with learning styles is that they provide the wrong kind of order. Learning styles have been perpetuated without being validated. A stop sign on learning styles is long overdue.
