Another dot in the blogosphere?

Top in problem-solving: More Q than A

Posted on: November 23, 2017

The Straits Times Online (ST Online) reported that a sample of Singapore students topped an Organisation for Economic Cooperation and Development (OECD) test on problem-solving.

I am glad to read this, but only cautiously so. This is partly because the press tends to report what is juicy and easy. I am cautious also because such news is not always processed critically from an educator’s point of view.

For example, how did the OECD test for problem-solving ability? According to an excerpt from the article above:

[Screen capture of original article.]

There were no other details about the authenticity, veracity, or adaptability of the software-based simulation. Only the makers of the software and the students who took the test might provide some clues. This test system is a closed one and lacks critical observers or independent evaluators.

Perhaps it would be better to raise some critical questions than to make blanket statements.

The product of problem-solving is clear (the scores), but not all the processes are (interactions, negotiations, scaffolding, etc.). So how can we be certain that this problem-solving is authentic and translates to wider-world application? Our Ministry of Education (MOE) seemed to share this concern.

MOE noted that the study design is a standardised way of measuring and comparing collaborative problem-solving skills, but real-life settings may be more complex as human beings are less predictable.

Our schools might have alternative or enrichment programmes — like the one highlighted in Queenstown Secondary — that promote group-based problem-solving. How common and accessible are such programmes? To what extent are these integrated into mainstream curriculum and practice?

The newspaper’s description of the problem-solving simulation sounds like some of the interactions that happen in role-playing games. How logical and fair is it to attribute our ranking only to what happens in schools? What contributions do other experiences make to students’ problem-solving abilities?

Test results do not guarantee transfer or wider-world impact. What are we doing to find out if these sociotechnical interventions are successful in the long run? What exactly are our measures for “success” — high test scores?

What is newsworthy should not be mistaken for critical information to be internalised as knowledge. The learning and problem-solving do not lie in provided answers; they stem from pursued questions.

I argue that we have more questions than answers, and that is not a bad thing. What is bad is that the current answers are inadequate. We should not be lulled into a collective sense of complacency because we topped a test.

