Another dot in the blogosphere?

Posts Tagged ‘pisa’

This article would like you to believe that students in the US are motivated by extrinsic rewards to do well in tests.

According to the article, a team of academics from the US and China conducted research on the math abilities of students from both countries. The students took a “25-minute test of 25 math questions that had previously been used on PISA”.

The treatment groups were given “envelopes filled with 25 one-dollar bills and told that a dollar would be removed for every incorrect or unanswered question”. The incentive was to get as many questions right as possible to receive the highest monetary reward.

Source: National Bureau of Economic Research

According to the article:

  • The incentives did not significantly impact the students from Shanghai, China.
  • The students from the US were more likely to attempt more questions and get more answers right with incentives.
  • The incentivised US students’ performance was equivalent to a PISA finish of 19th place instead of the actual 36th place out of 60 countries.

The researchers concluded that poor PISA test results could be due more to apathy than a lack of ability.

Tests like PISA — which have no impact on students’ grades or school accountability measures — aren’t taken as seriously as federally mandated assessments or the SAT.

All that said, the article ignored another important trend in the data: The less academically inclined students — see School 1 Low and School 1 Regular — did not do as well and were not as motivated even with incentives.

While this seems obvious even without the benefit of data, it casts light on the largely non-transparent method of selecting students for PISA.

In the OECD’s 2015 report, China was represented by Macao, Hong Kong, and the combined region of Beijing-Shanghai-Jiangsu-Guangdong (B-S-J-G). China was in the top 10 for math and science test results.

Both the comparative study and the China selection results raise questions about the selection of students for the PISA tests. For example, this Forbes article asked if PISA results could be “rigged” as a result of such selections.

If officials make disproportionate selections from rich cities, then suspicions of bias are valid. Students with higher socio-economic status have more schooling opportunities and access to better resources than those on the lower rungs who prop them up. Such students are more likely to do better in tests.

There are guidelines for selecting students for PISA testing. However, there seems to be enough wiggle room for officials to get creative (see the Malaysian example in the Forbes article).

Officials wanting to boost rankings can manipulate the selection while seemingly staying within guidelines. For example, imagine a system with 100 schools. Not all 100 can participate for pragmatic reasons, e.g., students are unavailable or unwilling, resources are poor, scheduling is inconvenient, schools see no benefits, etc. So the officials resort to stratifying the random sampling of students, i.e., selecting certain schools within each band: low-, regular-, and high-performing.

Officials might select students from the higher-performing schools in each band, or maximise the sample from the potentially highest performers while minimising the selection from the likely lowest performers. In all cases, the students are still randomly selected from the pool, but the pool itself has been stratified by bands and percentages.
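
To make the effect of such selection concrete, here is a minimal sketch in Python. Every number in it — school shares, band averages, sample weights — is invented for illustration; the only point is that the sampled average shifts when the strata are weighted unevenly, even though students are still drawn at random within each band.

```python
import random

random.seed(42)

# Hypothetical system of 100 schools grouped into three performance bands.
# All shares, means, and spreads below are invented for illustration only.
bands = {
    "low":     {"share": 0.30, "mean": 450, "sd": 30},
    "regular": {"share": 0.50, "mean": 500, "sd": 30},
    "high":    {"share": 0.20, "mean": 560, "sd": 30},
}

def sampled_average(weights, total_students=1000):
    """Draw students at random within each band, allocating the sample
    across bands according to the given weights (fractions summing to 1)."""
    scores = []
    for band, weight in weights.items():
        n = int(total_students * weight)
        mean, sd = bands[band]["mean"], bands[band]["sd"]
        scores.extend(random.gauss(mean, sd) for _ in range(n))
    return sum(scores) / len(scores)

# Proportional stratification: the sample mirrors the actual mix of schools.
proportional = {band: info["share"] for band, info in bands.items()}

# "Creative" stratification: minimise the low band, maximise the high band.
skewed = {"low": 0.10, "regular": 0.40, "high": 0.50}

print(f"Proportional sample average: {sampled_average(proportional):.0f}")
print(f"Skewed sample average:       {sampled_average(skewed):.0f}")
# Both samples are 'random', yet the skewed weights inflate the average.
```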

This practice is not transparent to the layperson or perhaps even the reporters who write news articles. But PISA results are lauded whenever they are released, and policymakers make decisions based on them. Should we not be watchdogs not just for the validity of PISA tests, but also for how students are selected to take them?

The STonline reported that a sample of Singapore students topped an Organisation for Economic Cooperation and Development (OECD) test on problem-solving.

I am glad to read this, but only cautiously so. This is partly because the press tends to report what is juicy and easy. I am cautious also because such news is not always processed critically from an educator’s point of view.

For example, how did the OECD test for problem-solving ability? According to an excerpt from the article above:

Screen capture of original article.

There were no other details about the authenticity, veracity, or adaptability of the software-based simulation. Only the makers of the software and the students who took the test might provide some clues. This test system is a closed one and lacks critical observers or independent evaluators.

Perhaps it would be better to raise some critical questions than to make blanket statements.

The product of problem-solving is clear (the scores), but not all the processes (interactions, negotiations, scaffolding, etc.). So how can we be certain that this problem-solving is authentic and translates to wider-world application? Our Ministry of Education (MOE) seemed to have the same concern.

MOE noted that the study design is a standardised way of measuring and comparing collaborative problem-solving skills, but real-life settings may be more complex as human beings are less predictable.

Our schools might have alternative or enrichment programmes — like the one highlighted in Queenstown Secondary — that promote group-based problem-solving. How common and accessible are such programmes? To what extent are these integrated into mainstream curriculum and practice?

The newspaper’s description of the problem-solving simulation sounds like some of the interactions that happen in role-playing games. How logical and fair is it to attribute our ranking only to what happens in schools? What contributions do other experiences make to students’ problem-solving abilities?

Test results do not guarantee transfer or wider-world impact. What are we doing to find out if these sociotechnical interventions are successful in the long run? What exactly are our measures for “success” — high test scores?

What is newsworthy should not be mistaken for critical information to be internalised as knowledge. The learning and problem-solving do not lie in provided answers; they stem from pursued questions.

I argue that we have more questions than answers, and that is not a bad thing. What is bad is that the current answers are inadequate. We should not be lulled into a collective sense of complacency just because we topped a test.

 
Two recent newspaper articles [1] [2] kept referring to one study that claimed that tuition did not have an impact on Singapore’s high PISA score. I question this research.

Today I reflect on how the articles might be focusing on a wrong question asked the wrong way: Does tuition impact Singapore’s PISA score?

It is a wrong question because it begs an oversimplistic “Yes” or “No” answer when the answer is likely “Depends”. There will be circumstances when tuition helps and when it does not.

Tuition is not a single entity. There are sustained forms that are remedial, enrichment, or some combination of the two. There are short interventions that focus on just-in-time test and exam strategies. There are broad forms that deal with one or more academic subjects, and there are formulaic forms that focus on specific subtopics and strategies.

Add to that messy practice the fact that a phenomenon like learning to take tests is complex and will have many contributing factors, e.g., school environment, home environment, learner traits, teacher traits, etc.

Wanting to know the impact of tuition, not just on PISA scores but also on schooling and education in Singapore’s contexts, is worth pursuing. A better way to ask one question might be: “How does tuition impact X (where X is the phenomenon)?”

This core question could be bracketed by two others: “What forms of tuition are there in Singapore?” and “What other factors influence the impact of this form of tuition?”

Methods-wise, such a study should not just play the numbers game. Narratives flesh out and make the case for the numbers, or even explain what might seem counterintuitive.

We live in a post-truth world. You cannot believe everything you read online. You cannot take what you read offline or in newspapers at face value either.

I wrote the title using Betteridge’s law of headlines. Such a headline almost always leads to “no” as the answer.

I write this in response and reflection to this STonline opinion piece, Kids with tuition fare worse.

An academic analysed PISA data from 2012 and concluded that students who had tuition:

  1. Came from countries where parents placed a premium on high-stakes examinations.
  2. Were likely to come from more affluent households.
  3. Performed 0.133 standard deviations worse than their counterparts who did not have tuition, after adjusting for “students’ age, gender, home language, family structure, native-born status, material possessions, grade-level and schools, as well as parents’ education levels and employment status”.

So does the third point not counter the Betteridge law of headlines? That is, I asked “Does tuition lead to lower PISA scores?” and the answer seemed to be yes instead of no.

The figure of 0.133 standard deviations is an effect size: it expresses the gap between the average scores of the two groups in units of the spread of the scores, not a raw mark. Some students with tuition will score above the no-tuition average and others below it; the statistic only describes the difference between group means. More to the point, just how practically significant is 0.133 standard deviations?
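
For a rough sense of scale, PISA results are commonly described as being on a scale with a standard deviation of about 100 points. Assuming that figure (it is my assumption, not something stated in the article), the reported gap works out to roughly 13 points:

```python
# Back-of-envelope conversion of the reported effect size into PISA points.
# The 100-point scale SD is an assumed, commonly cited figure,
# not a number taken from the article or the study.
effect_size_sd = 0.133   # reported difference, in standard deviation units
pisa_scale_sd = 100      # approximate SD of the PISA score scale

print(f"≈ {effect_size_sd * pisa_scale_sd:.0f} PISA points")  # prints: ≈ 13 PISA points
```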

The practical reality is that the answer varies. Treated as a faceless corpus of data for statistical analysis, the answer might be yes. Take individual cases and you will invariably get yes, no, maybe, depends, not sure, sometimes yes, sometimes no, and more.

More important than the statistic are the possible reasons for why students with tuition might perform worse than their counterparts without. The article mentioned:

  • They are already weak in the academic subjects they receive tuition for.
  • Forced to take tuition, they might grow to dislike the subject.
  • Tuition recipients become overly dependent on their tuition teachers.

 

 
There are at least three other questions that the article did not address. Questions with social significance might include:

  1. What kind of tuition did the students receive (remedial, extra, enrichment, other)?
  2. If the tuition is the remedial type and the kids are already struggling or disadvantaged, why do we expect them to do as well as or better than others?
  3. Why must the comparison be made between the haves and have-nots of tuition, particularly those of the remedial sort, when the improvement should be a change at the individual level?

The article hints at tuition of the enrichment, or better-the-neighbours, sort. However, students get tuition for other reasons. The original purpose of tuition was remediation for individuals or small groups when schools dropped the ball thanks to large class enrolments.

Tuition is not a single practice and is sought for a variety of reasons — from babysitting to academic help — and needs to be coded and analysed that way.

If the point of the article was to dissuade parents from having tuition for its own sake or for competition, then I am all for that message.

On the other hand, if the point was to actually help each child be the best they can be academically, then a comparison — even one that says tuition does not help — is not helpful. Some kids might benefit from individualisation and close attention that remedial tuition affords.

So my overall response to my own question “Does tuition lead to lower PISA scores?” is that it does not matter, as long as each child and their learning are at the centre of any effort.

Rankings from surveys and studies like TIMSS and PISA are released around this time of year.

We expect Singapore to be at the top or very near it. We are so used to our heady heights that to not be there would be embarrassing.

The next few days and weeks will see opinion pieces in papers and blogs about how Singapore does it. Practically everyone will try to sound original, but you will hear the same refrains from every tune: our methods, our teachers, our culture.
 

 
I say these three things:

  1. If we are to take rankings seriously, we should not cherry-pick only the ones that make us look good. We should also focus on where we do not do as well and seek to do better.
  2. If we are to attribute what creates good test results, I say we should not ignore a) critics like Yong Zhao, and b) the effect of tuition and test preparation.
  3. Tests are just that, and so are their results. They are not necessarily designed to be predictive, nor do they guarantee transfer of test-based skills to wider-world application. The context of a test is the test.

I also ask this question: Ever notice how the rankings focus on mathematics and science? Have you wondered why there are no rankings for the humanities or our humanity? Have you reflected on your answer to the last question?

I found this photo on Twitter taken by @garystager.

I do not have to guess that he took the photo here in Singapore because the Twitter geo tag tells me it was taken in the eastern part of our main island.

Signs like these are very common at fast food joints and upmarket coffee shops because students frequent these spots and deny customers seating by spending long hours there.

Locals do not bat an eyelid because such signs are the norm. It takes outsiders to find them unusual or funny. When they do, they hold up a mirror with which we should examine ourselves.

Why is it not just socially acceptable but even expected that kids study in places meant for relaxation, entertainment, or a quick meal? You might even spot mothers or tuition teachers drilling and grilling their charges at fast food restaurants.

This is almost unique to Singapore. I suspect it happens (or will happen) elsewhere. Where? Any place that has high PISA scores.

So here is a tongue-in-cheek proposition for OECD. Why not investigate the relationship between studying at places like Pizza Hut and performance in PISA tests?

Policymakers worldwide might not be aware of or care about the effect that the tuition industry might have on Singapore’s PISA test scores. But McDonald’s is everywhere. It might be an untapped solution to cure test score ills.

This tweet was not an April Fools joke.

Being Number 1 in problem-solving is something to be proud of. The problem with that is twofold: 1) some people do not understand that there are different kinds of problems and problem-solving, and 2) the report brushes aside important details in favour of the numbers game.

The problems featured and tested in the report were the academic sort. They were certainly made more realistic, but they do not measure complete problem-solving ability.

For example, try providing neat responses to:

  • How do I stop this bully?
  • Should I marry this person?
  • How am I going to get by this month?
  • Why should I (not) leave this game guild?
  • How do we get newspapers to report more thoroughly?

I was privileged to hear Andreas Schleicher present in greater detail the comparative problem-solving abilities of 15-year-olds around the world. I Storified some quick notes here.

I reshare a photo I took of the slide on the Singapore sample’s ability to solve “interactive” problems. Schleicher used the word “dynamic” when he presented. We are not Number 1 in this aspect.

One might argue that situations where the variables keep changing all the time are harder problems to solve. These also mirror life more accurately.

Let’s not sit on our laurels. Let’s not be fooled by a headline.

On a separate and unrelated note, I really enjoyed Mojang’s juvenile but funny April Fools prank. They replaced the usual Minecraft startup music with the Game of Thrones theme song.


Video source
