Another dot in the blogosphere?

Posts Tagged ‘scores’

Last Saturday, STonline reported the International Baccalaureate (IB) performances by Singapore schools. As usual, it featured pass rates and the number of perfect scores.

The local rag does this with our PSLEs, GCE O-Levels, and GCE A-Levels, so the article almost writes itself with a template. To be fair, the template has been updated to include human interest stories — the people behind the numbers — but these can seem like afterthoughts or filler that occupies newspaper space for a few days.

The IB results article fit the mould perfectly. It featured the usual suspects with the usual stellar results. Then it zoomed in on twins from the School of the Arts who got perfect scores.

What is wrong with doing this?

There is nothing wrong with human interest stories, provided the children are not coerced into doing them and the overcoming-the-odds stories inspire others.

What is wrong is the almost perverse fascination with quantitative results. It is one thing for schools and the Ministry of Education to keep track of these statistics; it is another to tout them and lead stories with them.
 

 
The health of our schooling system is not just measured by numbers. This would be like diagnosing sick patients by measuring only their temperatures and blood pressure. Even a layperson would say that stopping at such triage is irresponsible. The same could be said of the STonline reporting.

About five years ago, the MOE stopped revealing the names and schools of the top Primary School Leaving Examination (PSLE) students. It also discouraged the ranking and banding of schools into socially-engineered leagues in order to operate by its “every school a good school” principle. The move was meant to emphasise the holistic development of each child.

The IB results article and its ilk hold us back. Yes, the template includes human interest stories and background information about the IB. But the newspaper conveniently forgets or ignores that the IB is practically an alternative form of assessment. From the article:

Founded in Geneva in 1968, the programme is now available in 4,783 schools in over 150 countries and territories.

IB diploma students take six subjects and Theory of Knowledge, a course that combines philosophy, religion and logical reasoning. They also learn a second language, write a 4,000-word essay and complete a community service project.

Why not focus on how the thinking and value systems are nurtured? What are the impacts of the community service projects on all stakeholders? How might the rest of the schooling system learn from the IB process? Finding these things out is not easy. Then again, nothing worth doing is easy. Using a writing template is easy.

STonline might think it has the perfect template for reporting academic results. It might. But this template has lost relevance given MOE policy changes. In emphasising the numbers game, it creates speed bumps and barriers in a schooling system that is trying to plod slowly forward.

I wrote the title using Betteridge’s law of headlines. Such a headline can almost always be answered with a no.

I write this in response to, and reflection on, this STonline opinion piece, Kids with tuition fare worse.

An academic analysed PISA data from 2012 and concluded that students who had tuition:

  1. Came from countries where parents placed a premium on high-stakes examinations.
  2. Were likely to come from more affluent households.
  3. Performed 0.133 standard deviations worse than their counterparts who did not, after adjusting for “students’ age, gender, home language, family structure, native-born status, material possessions, grade-level and schools, as well as parents’ education levels and employment status” (a sketch of what such an adjusted comparison could look like follows this list).
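
Purely for illustration, here is a minimal sketch of what an adjusted comparison of that kind could look like. The study’s actual method is not described in the article, so the file name and every column name below are hypothetical, not taken from PISA or the study.

```python
# Hypothetical sketch of an adjusted tuition effect, not the study's actual code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pisa_2012_sample.csv")  # hypothetical extract of student records

# Standardise the outcome so the tuition coefficient is in standard-deviation units.
df["math_z"] = (df["math_score"] - df["math_score"].mean()) / df["math_score"].std()

# has_tuition is assumed to be a 0/1 indicator; the controls mirror the ones the
# article lists (age, gender, home language, family structure, and so on).
model = smf.ols(
    "math_z ~ has_tuition + age + C(gender) + C(home_language)"
    " + C(family_structure) + C(native_born) + possessions_index"
    " + C(grade_level) + C(school_id) + parent_education + C(parent_employment)",
    data=df,
).fit()

# A coefficient of roughly -0.133 on has_tuition would match the reported finding.
print(model.params["has_tuition"])
```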

So does the third point not counter Betteridge’s law of headlines? That is, I asked “Does tuition lead to lower PISA scores?” and the answer seemed to be yes instead of no.

An effect size of 0.133 standard deviations means that the average score of students with tuition was about 0.133 of the overall spread of scores below the average of students without. There will be students with tuition scoring above that mean and others below it, and the two groups overlap heavily. Furthermore, just how practically significant is 0.133 standard deviations?
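
To put that number in context, here is a rough back-of-envelope reading, assuming the usual PISA scaling of roughly 500 points for the mean and 100 points for one standard deviation (the exact values vary by subject and cycle):

```python
# Back-of-envelope reading of a 0.133 standard deviation gap on the PISA scale.
from scipy.stats import norm

effect_sd = 0.133   # reported gap, in standard deviation units
pisa_sd = 100       # approximate standard deviation of the PISA scale

gap_in_points = effect_sd * pisa_sd
print(f"Gap of about {gap_in_points:.0f} PISA points")  # roughly 13 points

# If both groups were roughly normal with equal spread, the chance that a randomly
# chosen student without tuition outscores one with tuition is Phi(d / sqrt(2)).
p_superiority = norm.cdf(effect_sd / 2 ** 0.5)
print(f"Chance the no-tuition student scores higher: {p_superiority:.2f}")  # about 0.54
```

A gap of about 13 points, and odds barely better than a coin flip, is hardly the stuff of headlines.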

The practical reality is that the answer varies. Treat students as a faceless corpus of data for statistical analysis and the answer might be yes. Take individual cases and you will invariably get yes, no, maybe, depends, not sure, sometimes yes, sometimes no, and more.

More important than the statistic are the possible reasons why students with tuition might perform worse than their counterparts without. The article mentioned:

  • They are already weak in the academic subjects they receive tuition for.
  • Forced to take tuition, they might grow to dislike the subject.
  • Tuition recipients become overly dependent on their tuition teachers.

 

 
There are at least three other questions that the article did not address. The questions that have social significance might include:

  1. What kind of tuition did the students receive (remedial, extra, enrichment, other)?
  2. If the tuition is the remedial type and the kids are already struggling or disadvantaged, why do we expect them to do as well as or better than others?
  3. Why must the comparison be made between the haves and have-nots of tuition, particularly those of the remedial sort, when the improvement should be a change at the individual level?

The article hints at tuition of the enrichment, or better-the-neighbours, sort. However, students get tuition for other reasons too. The original purpose of tuition was remediation for individuals or small groups when schools dropped the ball because of large class sizes.

Tuition is not a single practice and is sought for a variety of reasons — from babysitting to academic help — and needs to be coded and analysed that way.
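
As a minimal sketch of what that coding could look like, with categories and records made up purely for illustration:

```python
# Toy example: code tuition by type before comparing scores, so that remedial help
# is not lumped together with enrichment classes or glorified babysitting.
import pandas as pd

records = pd.DataFrame(
    {
        "student": ["A", "B", "C", "D", "E"],
        "tuition_type": ["remedial", "enrichment", "none", "babysitting", "remedial"],
        "score": [52, 78, 70, 61, 58],
    }
)

# Analyse each type separately instead of a single haves-versus-have-nots split.
print(records.groupby("tuition_type")["score"].describe())
```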

If the point of the article was to dissuade parents from having tuition for its own sake or for competition, then I am all for that message.

On the other hand, if the point was to actually help each child be the best they can be academically, then a comparison — even one that says tuition does not help — is not helpful. Some kids might benefit from individualisation and close attention that remedial tuition affords.

So my overall response to my own question “Does tuition lead to lower PISA scores?” is that it does not matter, as long as each child and that child’s learning are at the centre of any effort.

The first three parts of my reflections on PSLE2021 were like reviewing the good, the bad, and the ugly.

  • Part 1: The good change is the move to criterion-based testing
  • Part 2: The bad is that the assessment is still summative
  • Part 3: The ugly is how T-score differentiation goes away only to be replaced by course granularity

 

 
Most people know that the current A grade in the PSLE spans scores of 75 and above. Some have pointed out that the new Achievement Levels (ALs) 1 to 4 will together be equivalent to the current A.

The concern seems to be that the old A was attainable at 75, while straight As (75s) under the new scheme result in four AL4s and an aggregate of 16. The new aggregate will not look and feel as pretty.

Others have focused on the disparity in the score spans of the ALs, which I illustrate in the table below.
 

AL   Raw score range   Score span (marks)
1    ≥90               11
2    85 to 89          5
3    80 to 84          5
4    75 to 79          5
5    65 to 74          10
6    45 to 64          20
7    20 to 44          25
8    <20               20
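
For readers who prefer to see the bands as a lookup, here is a small sketch based on the table above. The aggregate is simply the sum of the ALs of the four subjects; the function name is mine, not MOE’s.

```python
# Map a raw PSLE subject score (0 to 100) to its Achievement Level, per the table above.
def achievement_level(raw_score: int) -> int:
    bands = [(90, 1), (85, 2), (80, 3), (75, 4), (65, 5), (45, 6), (20, 7)]
    for lower_bound, level in bands:
        if raw_score >= lower_bound:
            return level
    return 8  # anything below 20

# Straight 75s: four of the old As, but four AL4s and an aggregate of 16.
scores = [75, 75, 75, 75]
print(sum(achievement_level(s) for s in scores))  # 16

# Four scores of 90 or more would give the best possible aggregate of 4.
print(sum(achievement_level(s) for s in [90, 92, 95, 98]))  # 4
```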

 
But those who think this way are missing the point.

Not only do the ALs try to introduce some granularity to the grades, I speculate that they are also an attempt to 1) prevent grade inflation, and 2) insidiously reintroduce the bell curve.

Grade inflation is the ease with which students get an A or even an A* for each of their examinable subjects. It is more commonly talked about in the context of school, university, or workplace admission offices. The people who work in these offices help decide which students get entry, and they struggle to distinguish between numerous diplomas filled with straight As.

This is the source of the snippet on grade inflation that I tweeted last year.

The “finer”-grained ALs help separate the good As from the not-so-good As. This punctures grade inflation and, very likely, egos and morale too.
 

 
In Part 1 of this series, I wrote about how the future standards- or criterion-based testing was better than the current norm-referenced testing. I described it as an important fundamental shift in the PSLE. Implemented well, it could put the focus squarely on the learner and learning instead of on sorting.

However, administrators and policymakers like “God views” of their system. Reducing people to numbers, data points, statistics, and diagrams is their work (and could be their idea of fun). The bell curve is too sexy to let go because phenomena only seem normal if there is a normal distribution.

Things seem neater and safer under the umbrella of a bell curve. You can be sure that one or more groups have crunched numbers with existing data to see if the ALs might insidiously recreate a normal distribution.

With some logical guesswork, you can see how this pattern might emerge as well.

It is a fairly safe assumption that many kids taking the PSLE have been “tuitioned” and/or drilled in school. Quite a few will get As. ALs 1 to 4 will spread them out: there will be fewer AL1s than AL4s. The curve draws itself with greater granularity.
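
A quick, hypothetical simulation shows how this could play out. The score distribution below is assumed purely for illustration (most candidates clustering in the high 70s); it is not based on actual PSLE data.

```python
# Thought experiment: if subject scores cluster high, the narrow AL1 to AL4 bands
# spread the top scorers back out and the counts trace a curve again.
import numpy as np

rng = np.random.default_rng(0)
scores = np.clip(rng.normal(loc=78, scale=8, size=10_000), 0, 100)  # assumed, not real

def achievement_level(raw_score: float) -> int:
    for lower_bound, level in [(90, 1), (85, 2), (80, 3), (75, 4),
                               (65, 5), (45, 6), (20, 7)]:
        if raw_score >= lower_bound:
            return level
    return 8

levels, counts = np.unique([achievement_level(s) for s in scores], return_counts=True)
for level, count in zip(levels, counts):
    print(f"AL{level}: {count}")
# With this assumed distribution there are far fewer AL1s than AL4s, and the
# counts rise and then fall across the levels, much like a bell curve.
```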

TL;DR? The uneven AL bands in PSLE might not just be for increasing the granularity of measuring achievement. It might actually help administrators and policymakers prevent A-grade inflation and recreate the bell curve.

