Another dot in the blogosphere?

Posts Tagged ‘oecd’

Yesterday I reflected on how our Number One ranking in OECD’s problem-solving test raised more critical questions than it provided model answers.

This tweet gave me more fuel for thought.

The processes behind the products of learning are just as important, if not more so. A Number One ranking is a product of a combination of complex processes. Actually it is a by-product because we are not schooling kids for a worldwide competition.

Topping the ranking boards can send unintended and undesirable messages. Among them might be:

  • We are the best, so there is no need to change.
  • Let’s maintain the ranking for the sake of being Number One.
  • We have little or even nothing to learn from others.
  • This is a competition to be the best, so we must guard our secrets.

Unlikely as these messages might be, they can still be normalised actively or passively. The press or authorities might actively laud these accomplishments uncritically. We might passively believe everything we hear by not questioning the processes and products of rankings.

If we want learners to be resilient and creative in the face of failure, teachers and educators must first model such thinking and actions. A single-minded focus on narrow measures of success does not reveal the stories and iterations of moving forward by falling. I say we ignore rankings and do what ranking tables and agencies do not or cannot measure.

STonline reported that a sample of Singapore students topped an Organisation for Economic Co-operation and Development (OECD) test on problem-solving.

I am glad to read this, but only cautiously so. This is partly because the press tends to report what is juicy and easy. I am cautious also because such news is not always processed critically from an educator’s point of view.

For example, how did the OECD test for problem-solving ability? According to an excerpt from the article above:

Screen capture of original article.

There were no other details about the authenticity, veracity, or adaptability of the software-based simulation. Only the makers of the software and the students who took the test might provide some clues. This test system is a closed one and lacks critical observers or independent evaluators.

Perhaps it would be better to raise some critical questions than to make blanket statements.

The product of problem-solving is clear (the scores), but not all the processes (interactions, negotiations, scaffolding, etc.). So how can we be certain that this problem-solving is authentic and translates to wider-world application? Our Ministry of Education (MOE) seemed to have the same concern.

MOE noted that the study design is a standardised way of measuring and comparing collaborative problem-solving skills, but real-life settings may be more complex as human beings are less predictable.

Our schools might have alternative or enrichment programmes — like the one highlighted in Queenstown Secondary — that promote group-based problem-solving. How common and accessible are such programmes? To what extent are these integrated into mainstream curriculum and practice?

The newspaper’s description of the problem-solving simulation sounds like some of the interactions that happen in role-playing games. How logical and fair is it to attribute our ranking only to what happens in schools? What contributions do other experiences make to students’ problem-solving abilities?

Test results do not guarantee transfer or wider-world impact. What are we doing to find out if these sociotechnical interventions are successful in the long run? What exactly are our measures for “success” — high test scores?

What is newsworthy should not be mistaken for critical information to be internalised as knowledge. The learning and problem-solving do not lie in provided answers; they stem from pursued questions.

I argue that we have more questions than answers, and that is not a bad thing. What is bad is that the current answers are inadequate. We should not be lulled into a collective sense of complacency just because we topped a test.

Late last year, the OECD released a report that declared that using educational technology did not guarantee good results.

The press had a field day with it, nay-sayers gleefully taunted “I told you so!”, and anyone associated with enabling change with ICT questioned their lot in life.

Well, this was not quite true for the last group of people.

While some suffered a dent in confidence, other educators moved beyond this argument and focused on what was and still is important: Enabling powerful and meaningful learning by students regardless of results measured only by narrow-beam tests.

The argument that technology does not help is old and invalid.

The press and nay-sayers focused on the negative and forgot to point out that the ineffectiveness could be due to teachers who do not know how to marry new tools with new strategies.

Consider a person with a hammer (old tool) and who is an expert at hammering (old strategy). Now give them a Swiss Army Knife (new set of tools). They might struggle to use the tools (poor strategy) or resort to hammering (using the old strategy regardless of tool affordances).

The argument is old because we already know that for a wide range of ICTs to be effective, there must be broad acceptance, regular use, and rigorous professional development. There must be changes in teaching behaviours before we try to measure the effectiveness of ICT.

How you measure effectiveness is also important. Schools and the OECD used tests. Do these test for knowledge, attitudes, and skills that are a result of ICT-enabled learning? For example, are the tests open, collaborative, and Google-enabled?

No, they are not. It is like the tests are designed to measure how someone can run in a straight line when you actually need to determine how well they can climb up a tree. The body motions look similar when the person is miming the actions, but climbing is very different from running. The tests are simply invalid.

Say no to the nay-sayers because they do not know what they are talking about. I have told you so. Now you tell them so!

I appreciated having access to the official transcript of the speech that Mr Ong Ye Kung, Acting Minister for Education (Higher Education and Skills), gave at the Opening of the OECD-Singapore Conference on Higher Education Futures on 14 October 2015.

The speech ticked all the right rhetorical boxes. I took comfort from the words of one of our two new Ministers for Education. To move from comfort to confidence, I await action.

Some of the action might have to start right at his doorstep. This is a screenshot I took and underlined from the TODAY copy of the transcript.

I got the message of diversifying our educational system to meet varied needs. Everything he mentioned in the latter paragraph showed thought leadership.

But does our minister have the support of people who think similarly and are able to put excellent rhetoric into play? If they are beginning presentations with similar templates, are they not reliant on cookie-cutters?

Some might point out that the same start does not mean the same path or the same end. They might also say that a common template shows shared values and unity of purpose.

However, the disruption and change described by the minister require different starts, culling of sacred cows, and striving for uncertain ends. If the situation could be likened to a biological one, then what we do not need is a small and shallow gene pool. Quite the opposite.

Are we diverse enough? Do we listen to voices in our deserts? Do we embrace our outliers?

I found this photo on Twitter taken by @garystager.

I do not have to guess that he took the photo here in Singapore because the Twitter geo tag tells me it was taken in the eastern part of our main island.

Signs like these are very common at fast food joints and upmarket coffee shops because students frequent these spots and deny other customers seats by spending long hours there.

Locals do not bat an eyelid because such signs are the norm. It takes outsiders to find them unusual or funny. When they do, they hold up a mirror with which we should examine ourselves.

Why is it not just socially acceptable but even expected that kids study in places meant for relaxation, entertainment, or a quick meal? You might even spot mothers or tuition teachers drilling and grilling their charges at fast food restaurants.

This is almost unique to Singapore. I suspect it happens (or will happen) elsewhere. Where? Any place that has high PISA scores.

So here is a tongue-in-cheek proposition for the OECD. Why not investigate the relationship between studying at places like Pizza Hut and performance in PISA tests?

Policymakers worldwide might not be aware of, or care about, the effect the tuition industry has on Singapore’s PISA test scores. But McDonald’s is everywhere. It might be an untapped solution to cure test score ills.

