Another dot in the blogosphere?

Posts Tagged ‘research’

Although I am no longer an academic, I see research opportunities everywhere. One set of untapped opportunities lies in Pokémon Go (PoGo).

I am not talking about the already done-to-death exercise studies or about the motivations to play and keep playing.

I am thinking about how sociologists might add to PoGo’s trend analysis. Number crunchers have already collected data on its meteoric rise and now its declining use. While these provide useful information to various stakeholders, I wonder if anyone has considered the impact of PoGo uncles and aunties.

I am not the first to observe that much older players have started playing PoGo. I tweeted this a while ago, and someone just started a thread in the PoGoSG Facebook group about uncles and aunties at play.

A quick search on Twitter with keywords like “pokemon go” and “auntie” or “uncle” might surprise you.

The PoGo aunties and uncles are quite obvious here. So far I have noticed three main types: Solo aunties, uncles in pairs or small groups, and auntie-uncle couples. There are more types, of course, but these three are common enough to blip frequently on social radar.

But I would not be content with just describing the phenomenon. I would ask if they contribute to the “death” of PoGo, just as the older set’s adoption of Facebook nudged teens to migrate to Snapchat.

We should not underestimate the impact of uncles and aunties. After all, there must be a reason for this saying: Old age and treachery will always overcome youthfulness and skill.

Since some people would rather watch a video bite than read articles, I share SciShow host Hank Green’s 2.5-minute critique of “learning styles”.


Video source

From a review of research, Green highlighted how:

  • the only study that seemed to support learning styles was severely flawed
  • students who perceived that they favoured one style over others actually benefitted from visual information regardless of that preference

This is just the tip of the iceberg of evidence against learning styles. I have a curated list here. If that list is too long to process, then at least take note of two excerpts from recent reviews:

From the National Center for Biotechnology Information, US National Library of Medicine:

… we found virtually no evidence for the interaction pattern mentioned above, which was judged to be a precondition for validating the educational applications of learning styles. Although the literature on learning styles is enormous, very few studies have even used an experimental methodology capable of testing the validity of learning styles applied to education. Moreover, of those that did use an appropriate method, several found results that flatly contradict the popular meshing hypothesis. We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice.

In their review of research on learning styles for the Association for Psychological Science, Pashler, McDaniel, Rohrer, and Bjork (2008) came to a stark conclusion: “If classification of students’ learning styles has practical utility, it remains to be demonstrated.” (p. 117)

In Deans for Impact, Dylan Wiliam noted:

Pashler et al pointed out that experiments designed to investigate the meshing hypothesis would have to satisfy three conditions:

1. Based on some assessment of their presumed learning style, learners would be allocated to two or more groups (e.g., visual, auditory and kinesthetic learners)

2. Learners within each of the learning-style groups would be randomly allocated to at least two different methods of instruction (e.g., visual and auditory based approaches)

3. All students in the study would be given the same final test of achievement.

In such experiments, the meshing hypothesis would be supported if the results showed that the learning method that optimizes test performance of one learning-style group is different than the learning method that optimizes the test performance of a second learning-style group.

In their review, Pashler et al found only one study that gave even partial support to the meshing hypothesis, and two that clearly contradicted it.
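
To make those three conditions concrete, here is a minimal sketch of the design in code. It is my own illustration with made-up learners and scores, not anything from Pashler et al or Wiliam, and a real study would test the style-by-method interaction statistically rather than just comparing cell means.

```python
import random

# Hypothetical illustration only: made-up learners, methods, and scores.
random.seed(42)

STYLE_GROUPS = ["visual", "auditory"]               # condition 1: presumed styles
METHODS = ["visual_teaching", "auditory_teaching"]  # condition 2: methods

# Condition 1: allocate learners to groups based on a presumed style assessment.
learners = [{"style": random.choice(STYLE_GROUPS)} for _ in range(200)]

# Condition 2: within each style group, randomly assign a method of instruction.
# Condition 3: everyone sits the same final test (here the score is pure noise,
# i.e. this simulated world has no real meshing effect).
for learner in learners:
    learner["method"] = random.choice(METHODS)
    learner["score"] = random.gauss(70, 10)

def mean_score(style, method):
    scores = [l["score"] for l in learners
              if l["style"] == style and l["method"] == method]
    return sum(scores) / len(scores)

# Meshing hypothesis: the method that optimises one group's test performance
# should differ from the method that optimises the other group's performance.
best_method = {style: max(METHODS, key=lambda m: mean_score(style, m))
               for style in STYLE_GROUPS}
print(best_method)
print("Crossover pattern:", len(set(best_method.values())) > 1)
```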

Look at it another way: We might have learning preferences, but we do not have styles that are either self-fulfilling prophecies or harmful labels that pigeonhole. If we do not have visual impairments, we are all visual learners.

Teaching is neat. Learning is messy.

Learning is messy and teaching tries to bring order to what seems to be chaos. The problem with learning styles is that they provide the wrong kind of order. The idea has been perpetuated without being validated. A stop sign on learning styles is long overdue.

After reading this review of research on homework, my mind raced to how some people might resort to formulaic thinking.

This was the phrase that seeded it:

Based on his research, Cooper (2006) suggests this rule of thumb: homework should be limited to 10 minutes per grade level.

What followed were examples and an important caveat:

Grade 1 students should do a maximum of 10 minutes of homework per night, Grade 2 students, 20 minutes, and so on. Expecting academic students in Grade 12 to occasionally do two hours of homework in the evening—especially when they are studying for exams, completing a major mid-term project or wrapping up end-of-term assignments—is not unreasonable. But insisting that they do two hours of homework every night is expecting a bit much.
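
Since the rule of thumb is just arithmetic, a tiny sketch makes the formula plain; the grade levels below echo the examples above. The caveat, of course, cannot be coded.

```python
def homework_minutes(grade_level, minutes_per_grade=10):
    """Cooper's rough rule of thumb: about 10 minutes of homework per grade level."""
    return grade_level * minutes_per_grade

for grade in (1, 2, 6, 12):
    print(f"Grade {grade}: about {homework_minutes(grade)} minutes per night")
# Grade 12 works out to about 120 minutes, i.e. the occasional two hours above.
```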

If you assume that people would pay more attention to the caveat than to the formula, you assume wrongly. Heeding the former means thinking harder and making judgements. The latter is an easy formula to apply.

Most people like easy.

If those people are teachers and administrators who create homework and homework policies, then everyone who is at home will likely suffer from homework blues.

Am I overreaching? I think not. Consider another example on formulaic thinking.

I provide professional development for future faculty every semester, but this semester was a bit different. There was a “social” space in the institution’s learning management system (LMS) where a certain 70:30 ratio emerged.

A capstone project for these future faculty is a teaching session. The modules prior to that prepare them to design and implement learner-centred experiences. At least one person played the numbers game and asked what proportion of the session should be teacher-centred vs student-centred.

My advice, given in person and in assignment feedback, is that the relative amount is contextual. My general guideline is that student-centred work tends to require more time because the learners are novices, and that the planning should reflect this.

However, once that 70:30 ratio was suggested in the social space, it became the formula to follow. It was definite and easier than thinking for and about the learner. It allowed future faculty to stay in their comfort zone of lecturing 70% of the time and grudgingly attempt student-centred work 30% of the time.

But guess what? When people follow this formula or do not plan for more student-centred activities and time, they typically go over the 70% teacher talk time and rush the actual learning. This pattern is practically formulaic.
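
To see why the formula backfires, here is a back-of-the-envelope sketch; the 60-minute session and the 10-minute overrun are made-up numbers for illustration.

```python
def student_minutes(session_minutes, teacher_share=0.7, overrun_minutes=0):
    """Minutes left for student-centred work after teacher talk (illustrative only)."""
    teacher_minutes = session_minutes * teacher_share + overrun_minutes
    return max(session_minutes - teacher_minutes, 0)

print(student_minutes(60))                      # 18.0 minutes for learners at 70:30
print(student_minutes(60, overrun_minutes=10))  # 8.0 minutes once the talk overruns
```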

Formulaic thinking is easy, but that does not make it right or effective. In the case of the course I mentioned, the 70:30 folk typically return for remediation. It is our way of trying to stop the rot of formulaic thinking.
 

The video below highlights some research that would not pass muster today.


Video source

Today we are guided by the principle of “do no harm”, or at least “do the least harm”.

I wonder if the same could be said about social experiments that are a result of non-researchers tinkering with systems and policies.

For example, how much social experimental harm has the PSLE (Primary School Leaving Examination) caused?

Make no mistake: The PSLE has been very successful as a social experiment. It has become the operating standard, it shapes expectations, and we cannot seem to think outside it.

However, we need to ask ourselves if the PSLE embedded in our collective psyche is a good thing. Simply suggesting that it is harmful draws disbelief in some quarters and a sense of helplessness to do otherwise in others.

Just because something is successful does not make it helpful or harmless. Pandemic diseases spread with us as carriers and our technologies as enablers. Often we do not even know we are helping the disease spread until it is too late.

This doctor is highlighting some symptoms of PSLE. Are you feeling OK?

There are many things that could be said about research.

As a former academic, I share just three truisms:

  1. Publish or perish.
  2. To steal from one is plagiarism. To steal from many is research.
  3. Practice without research is blind. Research without practice is sterile.

I share a variation of the third truism as an image quotation I created some time ago.

Practice without theory is blind. Theory without practice is sterile.

Most young academics learn the first truism as graduate students, by being mentored or by observing professors carefully. If they end up in Research I universities, publish or perish becomes a constant mantra. Their jobs depend on how much and how well they publish.

The second truism is slipped into various contexts and said half in jest. It is the recognition that we stand on the shoulders of others, be they giants or not. Combined with the first truism, it makes research a dog-eat-dog world.

The third truism and its couplet are something some researchers ignore. In order to build and stay in ivory towers, no doubt funded by generous research grants, it helps to spout the rhetoric that the research adds to the pool of knowledge. It does not actually have to make a larger impact.

Research that is based on practice and informs practice is vital, but it is still sorely lacking particularly in education. Some experts play the old game because they are far removed from the ground.

If you are a practitioner, do not be tempted to ignore research as a result. Instead, set up the conditions for, and demand, research that informs practice.


Yesterday I reflected on the moral dilemma of playing the research game because it benefits only a few stakeholders. Today I continue with the processes of publishing research.

Most academics review articles and serve on editorial boards because it looks great on their CVs. For a few, this also provides the power to lord it over others by rejecting papers in the name of “objective” reviews. The same might be said of committees that determine the disbursement of research funds.

But all that is child’s play when compared to the ruse of publishers.

With one hand, publishers pull in reviewers of journal papers for free (it is a service academics provide for one another, after all). With the other, they collect money by charging top dollar to libraries, organizations, and individuals who want journal collections or specific papers.

What I have reflected on is not news. In 2002, Frey compared the publishing process to prostitution. PhD Comics had an amusing take on this in 2011.


Source

The open movement is a disruptive process that threatens the membership and rules of the game of research as currently played.

Open practice champions like Martin Weller do great work in this respect. His recent blog entry on the benefits of being open is a must-read.

Influential bodies like the Bill and Melinda Gates Foundation are insisting that research data and publications be shared under the Creative Commons Attribution licence.

A few local universities and agencies have shared some materials openly, but they are an insignificant drop in the research bucket.

Not only is the rest of published research not so freely shared, but researchers are also complicit in playing by the rules set by publishers, universities, and grant bodies.

If you are not an academic, you should be morally outraged. If you are, you should reflect critically on the state of the playing field.

There is a game that university academics play. The game has a strict selection process and the chosen must play by the rules.

However, as in casinos, the house always wins, the players think they win, and the players’ stakeholders tend to lose.

Casino Velden Panorama by geek7, on Flickr (CC BY-NC-ND 2.0)

 
The game is called research and publishing. It is a game that academics play because they are expected to. Very few seem to challenge its rules and the ethics of playing the game the same old way.

Anyone can conduct research without a grant by paying out of pocket, but why would they? They get more points in their appraisals if they successfully apply for grant money.

The money comes from a corporation, a government body, or the university itself, and there are often stringent demands when applying for funds. That is a good thing because the money ultimately comes from taxpayers and laypeople.

What might be less clear is how the money benefits these stakeholders even if researchers have to justify their research. Leaders and managers of universities and funding agencies recognize this ethical issue and take administrative and policy measures to address it. There are strict review processes, rules to protect human subjects, regular reporting processes, expectations of social responsibility or scaling up, etc.

But with the way the game is played in reality, the benefit to stakeholders seems tertiary, if it exists at all. The research money primarily benefits researchers and journal publishers, and secondarily benefits a research ecosystem.

Research money helps some academic staff publish papers and get promotions. If enough of them do this, they raise the profile and international ranking of the university. Research outputs go to journals, and publishers profit from the work of researchers. These are the primary beneficiaries.

In order to conduct research and publish, academic staff need to buy equipment, hire staff, outsource some services, arrange for conference travel, and so on. This could benefit some stakeholders by providing employment and creating a demand for assorted services. These are the secondary beneficiaries.

But research is typically funded over only two or three years. This means that funding cycles are tight and a researcher needs to be creative with resources and/or apply for multiple grants if s/he wants to sustain the research.

Sustaining a study is particularly important in educational or social science research because of the subjectivity and complexity of human factors. Such studies might also involve interventions, like technology use, which take time to develop, implement, and revise.

Sometimes researchers move from one grant to another (and therefore from one research topic to another) like slash-and-burn farmers move from plot to plot. Both leave damage in their wake. In the case of educational research, it might be schools, teachers, and students who have no support after the study team pulls out.

Closed circles are created when researchers team up with one or a few partner teachers or schools. If there is harm, it is contained. If there is good, it is highly contextualized and difficult to generalize.

The process of publishing the results or impact of research is also closed. More thoughts on that tomorrow.


