Another dot in the blogosphere?

Posts Tagged ‘review’

If there was an agenda in this excellent review article, it was to provide answers to the question: What is online learning? Here is Part 1 of my notes on the article.


  • Online learning was synonymous with asynchronous, text-based learning
  • Blended learning was about mixing face-to-face and online modes of learning
  • Hybrid learning (Australia, non-USA) was synonymous with blended learning (USA)

By the late 1990s

  • Synchronous learning methods evolved
  • Examples: Basic ‘live’ sharing of resources, e.g., slides; “blended online learning”


  • Video conferencing ramped up
  • Synchronous learning was practically synonymous with video-enabled communication
  • Modalities (i.e., blended or hybrid) became largely irrelevant

In 2007: HyFlex (hybrid-flexible)

  • Combined both online and face-to-face modalities, and was flexible in that “students may choose whether or not to attend face-to-face sessions”
  • Similar to what is happening in universities during the age of COVID-19

In 2006, the author of the review article offered her own framework that mixed three modes (face-to-face, online synchronous, online asynchronous) with one dimension of access (open access or not). While I favour any experience designed with open access, I do not see the logic of the mix from a modal lens.

When viewed through the lens of learner access, however, her framework starts to make sense. The learner decides if s/he goes to campus or not, works concurrently with others or not, and has limited or unlimited access to materials.

2010s: Multi-access frameworks

  • Examples: Blended synchronous (2013) and synchronous hybrid (2014)
  • In both, students can be on campus or online, but both meet via conferencing or shared online platforms/virtual worlds/telepresence robots.

The author took a paragraph to focus on asynchronous efforts in the same time frame. Some important ideas:

  • Asynchronous communication requires more monitoring and digital literacy than synchronous-only classes
  • Those new to teaching online in general may also prefer the synchronous-only design, so as to minimise the workload creep that comes with robust asynchronous communication
  • Designs should consider… reducing synchronous instructional hours to create time for asynchronous activities and dialogue
  • Many learners… will develop their own private backchannel spaces to support learner-only asynchronous peer-to-peer communication

More notes tomorrow!

Since some people would rather watch a video bite than read articles, I share SciShow’s Hank Green’s 2.5 minute critique of “learning styles”.

Video source

From a review of research, Green highlighted how:

  • the only study that seemed to support learning styles was severely flawed
  • students who perceived that they favoured one style over others actually benefitted from visual information regardless of their preference

This is just the tip of the iceberg of evidence against learning styles. I have a curated list here. If that list is too long to process, then at least take note of two excerpts from recent reviews:

From the National Center for Biotechnology Information, US National Library of Medicine:

… we found virtually no evidence for the interaction pattern mentioned above, which was judged to be a precondition for validating the educational applications of learning styles. Although the literature on learning styles is enormous, very few studies have even used an experimental methodology capable of testing the validity of learning styles applied to education. Moreover, of those that did use an appropriate method, several found results that flatly contradict the popular meshing hypothesis. We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice.

In their review of research on learning styles for the Association for Psychological Science, Pashler, McDaniel, Rohrer, and Bjork (2008) came to a stark conclusion: “If classification of students’ learning styles has practical utility, it remains to be demonstrated.” (p. 117)

In Deans for Impact, Dylan Wiliam noted:

Pashler et al pointed out that experiments designed to investigate the meshing hypothesis would have to satisfy three conditions:

1. Based on some assessment of their presumed learning style, learners would be allocated to two or more groups (e.g., visual, auditory and kinesthetic learners)

2. Learners within each of the learning-style groups would be randomly allocated to at least two different methods of instruction (e.g., visual and auditory based approaches)

3. All students in the study would be given the same final test of achievement.

In such experiments, the meshing hypothesis would be supported if the results showed that the learning method that optimizes test performance of one learning-style group is different than the learning method that optimizes the test performance of a second learning-style group.
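The crossover condition described above can be sketched as a simple check on a table of group means. This is my own illustrative helper, not anything from Pashler et al's review; the group and method names are hypothetical:

```python
def meshing_supported(means):
    """means maps a learning-style group -> {instruction method -> mean test score}.

    The meshing hypothesis predicts a crossover interaction: the method
    that optimises test performance for one style group differs from the
    method that optimises performance for another group.
    """
    best_method = {
        style: max(scores, key=scores.get)  # best-scoring method per group
        for style, scores in means.items()
    }
    # Meshing is supported only if at least two groups have different best methods.
    return len(set(best_method.values())) > 1

# A crossover pattern (would support meshing):
crossover = {
    "visual": {"visual_teaching": 80, "auditory_teaching": 60},
    "auditory": {"visual_teaching": 65, "auditory_teaching": 82},
}

# No crossover (contradicts meshing): every group does best with visual teaching.
no_crossover = {
    "visual": {"visual_teaching": 80, "auditory_teaching": 60},
    "auditory": {"visual_teaching": 78, "auditory_teaching": 70},
}
```

Note that a design satisfying conditions 1 to 3 is necessary before this comparison even becomes meaningful; very few learning-styles studies meet that bar.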

In their review, Pashler et al found only one study that gave even partial support to the meshing hypothesis, and two that clearly contradicted it.

Look at it another way: We might have learning preferences, but we do not have styles; such labels become self-fulfilling prophecies or harmful pigeonholes. If we do not have visual impairments, we are all visual learners.

Teaching is neat. Learning is messy.

Learning is messy and teaching tries to bring order to what seems to be chaos. The problem with learning styles is that they provide the wrong kind of order. The notion has been perpetuated without being validated. A stop sign on learning styles is long overdue.

After reading this review of research on homework, my mind raced to how some people might resort to formulaic thinking.

This was the phrase that seeded it:

Based on his research, Cooper (2006) suggests this rule of thumb: homework should be limited to 10 minutes per grade level.

What follows were examples and an important caveat:

Grade 1 students should do a maximum of 10 minutes of homework per night, Grade 2 students, 20 minutes, and so on. Expecting academic students in Grade 12 to occasionally do two hours of homework in the evening—especially when they are studying for exams, completing a major mid-term project or wrapping up end-of-term assignments—is not unreasonable. But insisting that they do two hours of homework every night is expecting a bit much.
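The rule of thumb itself reduces to a one-line calculation. This sketch is mine for illustration; the function name is not Cooper's:

```python
def homework_cap_minutes(grade: int) -> int:
    """Cooper's (2006) rule of thumb: cap nightly homework at
    10 minutes per grade level, e.g. Grade 2 -> 20 minutes."""
    return 10 * grade
```

The caveat, of course, resists exactly this kind of one-line reduction.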

If you assume that people would pay more attention to the caveat than to the formula, you assume wrongly. Doing the former means thinking harder and making judgements. The latter is an easy formula.

Most people like easy.

If those people are teachers and administrators who create homework and homework policies, then everyone who is at home will likely suffer from homework blues.

Am I overreaching? I think not. Consider another example on formulaic thinking.

I provide professional development for future faculty every semester, but this semester was a bit different. There was a “social” space in the institution’s learning management system (LMS) where a certain 70:30 ratio emerged.

A capstone project for these future faculty is a teaching session. The modules prior to that prepare them to design and implement learner-centred experiences. At least one person played the numbers game and asked what proportion of the session should be teacher-centred vs student-centred.

I provide advice in person and in assignments that the relative amount is contextual. My general guideline is that student-centred work tends to require more time, since the learners are novices, and that planning should reflect this.

However, once that 70:30 ratio was suggested in the social space, it became the formula to follow. It was definite and easier than thinking for and about the learner. It allowed future faculty to stay in their comfort zone of lecturing 70% of the time and grudgingly attempt student-centred work 30% of the time.

But guess what? When people follow this formula or do not plan for more student-centred activities and time, they typically go over the 70% teacher talk time and rush the actual learning. This pattern is practically formulaic.

Formulaic thinking is easy, but that does not make it right or effective. In the case of the course I mentioned, the 70:30 folk typically return for remediation. It is our way of trying to stop the rot of formulaic thinking.

It has been a hot month of April in more ways than one. 

I rarely rely on air-conditioning, but I have had to use it several times this month to get a decent night’s sleep. 

I have also enjoyed the most varied work since striking out on my own as an education consultant in August 2014. 

In early April, I evaluated the ability of future faculty to facilitate modern learning. Last week I sat with colleagues in what might be called a Board of Examiners meeting. We were anything but bored of examining because the series of learning experiences is unlike anything I have ever been involved in. 

In the middle of April, I delivered a keynote and participated in a panel for the Social Services Institute, the professional development arm of the National Council of Social Services, Singapore. It was wonderful to see a major player wanting to shrug off the shackles of traditional education. 

Not long after that I flew to a conference overseas to facilitate conversations on the flipped classroom vs flipped learning. The strange thing was connecting with Singaporeans there whom I could more easily have met at home. 

After returning from my trip, I met with a passionate edu-preneur and professor after we connected via my blog.

Another connection was a result of my keynote. It will take place via one of two Google Hangouts that will bring April to a close. I hope that it will bring more opportunities in the months to come.  

The other Hangout is a result of my flipped learning talk last January at Bett 2015. I am tempted to call it remote mentoring and hope to repeat a strategy I tried at the more recent conference. 

The exceptionally warm weather here is not the norm at this time of year. The variety of work I have had is not the norm either. While I hope the muggy days and nights go away, I do what I can to keep the sizzling work in play.

WordPress emailed me a year-in-blogging report. Here are the highlights.

Two of the top five were not strictly about education. They were about getting prepaid SIMs overseas.

Only one of the five (the last one) was something I wrote about in 2015 in response to the Kinabalu tragedy. Are my best edublogging days behind me?

I appreciate the slide over and split views in iOS9.

Slide view

Slide over allows me to launch one app and pull up another in a small drawer on the right of my iPad screen. This only works with newer iPads and with applications optimised to do this.
Split view

Split view is a variation of this in that I can make the drawer item take half the page.

When I was reviewing my notes for a workshop yesterday, I could call up Evernote on the left and Safari on the right for that segment of the workshop.

Such multitasking has been the norm for desktop and laptop computers, but this is a milestone for slates. They are no longer devices for single-task or serial consumption and can now be taken as devices for serious content review and creation.

Tweetbot on iPhone 6

Tech blogs seemed to go ga-ga over the latest iteration of Tweetbot, a well-established alternative to Twitter’s default mobile app.

I was less impressed given how it seems targeted at the power user and is a paid app (SGD6 for a limited period). But I concede that it does what few other Twitter apps do.


  1. No group private DMs: Tweetbot supports direct messages from individuals but not groups. I cannot form private groups or receive group DMs. I have to rely on the Twitter mobile app or TweetDeck on a desktop.
  2. No scheduling of posts: I cannot prepare tweets in advance for posting on a schedule. I need to use Hootsuite on mobile or TweetDeck on a desktop.
  3. No blocking, only muting: I cannot seem to find a way to block users in Tweetbot. I get lots of users sending messages to the wrong @ashley and this makes it hard for me to focus on who and what matters. I end up using the Twitter app (tap and hold) and TweetDeck (click the “…” area) to block them.
  4. Mentions and Activity are separate: Twitter and TweetDeck collate all mentions of my user handle under Notifications. Tweetbot separates these messages to a Mentions space and an Activity subspace under the Stats space. Viewing replies or mentions should take one tap or click; it takes several in Tweetbot.

As a result of these shortcomings, I still need to have Twitter and Hootsuite on my mobile devices.


  1. No ads: Promoted (paid) tweets do not seem to appear in my Tweetbot timeline. This makes for more focused reading of my carefully curated follows.
  2. Synced sessions: This might be worth the cost of the app alone. I can start reading my timeline tweets on my iPhone and scroll back to tweets, say, three hours ago. Later I can pick up my iPad and resume from that point instead of trying to remember where I was. I process more relevant tweets that way.
  3. Uses Safari View Controller: This is another feature that makes the app worthwhile. When you click on a link, Tweetbot launches an in-app lite version of Safari that supports content and ad blockers. I get faster, more private, and ad-free reading.

I have made Tweetbot my default Twitter tool for now based on the latter two strengths. I am constantly reminded and irritated by its weak features, but they are not deal breakers.

I wish the app had a try-before-you-buy option. In its absence, I share these thoughts to help others decide whether or not to buy the app.

I upgraded my mobile devices to iOS9 the moment the update was available yesterday primarily because I wanted to test the content blocking features.

I read Techcrunch’s review of three main blockers, 1Blocker, Blockr, and Crystal.

When I tried to download all three last night, only Crystal was available. The other two displayed “Not available in your country’s store” messages. Thankfully both were available this morning.

I tested all three on STonline (@STcom) pages, which are littered with awful ads. They are so bad that they distract from reading and encourage accidental tapping.

Crystal is free to try at the moment. It did not seem to remove the ads: the pages looked the same before and after I applied this blocker.

1Blocker is free and offers in-app purchases that enable more features. I discovered that 1Blocker was heavy-handed. On applying just the ad blocking feature, entire pages in STonline would not appear. I had to load pages without blocking to read anything, but when I did this, the ads would appear.

The best content blocker was Blockr. It is also the only one of the three that does not allow you to try before you buy. It is US$0.99 (S$1.28) at the moment and worth the small amount of money. It not only blocks inline ads that interrupt reading (see my tweet above), it also blocked all the other annoying ads at the bottom of the page.

While it is very early days yet in the battle for blockers, this was a reminder that you get what you pay for.

This week I read a good critique of the way some science teachers in Singapore design test questions and grade them. The issues were a misplaced emphasis on rote learning (instead of inquiry) and the poor use of language (English and scientific) in setting test questions.

A parent wrote in to the ST Forum with a suggestion:

There seems to be something inherently wrong with how science is taught in primary schools today. Perhaps the time is ripe for a systemic review of the curriculum to address all these concerns.

This suggestion will not work alone. Curricular reviews and revisions tend to focus on content. That is only one piece of the jigsaw puzzle.

To see the whole picture, one needs to also factor in how teachers teach an academic subject (which is a function of pedagogy), and how they unlearn old habits in favour of learning new ones (professional development, leadership, incentives, and more).

A seemingly superficial or simple problem like stupid test questions or stubborn teacher behaviour has complex roots. The layperson does not dig as far and is not expected to. The real problem is when some schools, their leaders, and/or their teachers are not aware that they need to dig deep too.

Is there a social or “new” media platform that does not offer year-end reviews?

YouTube has its now traditional rewind video.

Video source

I avoid Facebook but another YouTube video informed me of personalized 2014 reviews there that did not fare too well.

Video source

Google+ Photos autoselected photos that were supposed to represent my year in a slideshow. The algorithm seemed to select only photos with people, and these were the least representative in my opinion. Google can cite data all it wants; the data do not represent emotion.

Not to be outdone, WordPress informed me that I missed just three days of reflections this year. I was very ill for a period in August.

My five most read entries for the year were a mix of old and new:

The top five referral sites to my blog were Twitter, Facebook, Google+, and NIE.

My blog attracted readers from 145 countries with the top three being Singapore, the USA, and Australia.

So now what?

I started this blog in 2008 and grew to blog daily whether or not anyone was reading. I developed this habit when I blogged on behalf of my then unborn son in 2003.

Video source

I plan on keeping that habit up just like the way the guy in the video took a photo of himself every day for 12 years.

I wonder if I can replicate that stare in writing…

