Another dot in the blogosphere?

Posts Tagged ‘test’

Yes, kids should learn from mistakes. But they are not likely to do this as a result of high-stakes summative exams.

Tests are not the best method for developing resilience and critical reflection. A major exam like the PSLE has one main purpose — to sort. 

Assessment and evaluation experts know this. The layperson does not. It will take a lot of re-education of learners young and old to beat the exam mindset into submission. I doubt we can do this. But we might be able to get enough people to realise the limits of tests and exams.

If you read between the lines, this tweeted headline says this: how to overpromise and oversimplify.

A listicle (an article driven by a list) attempts to distill what its writer thinks are ultimate strategies, or worse, so-called best practices, for people to follow.

One problem with this is that formulae do not fit everyone. Another is that the chase for such gains propagates a mindset of taking shortcuts and/or focusing only on the short term.

Buying in to such a mindset and practice starts with asking what harm there is in following such advice and thinking that you can abandon it later. We need to get over transactional thinking.

Instead we need to operate over a longer term. Oral skills are built over more than the one month that the article warns of. Such a test is also about confidence and fluency, which go beyond the classroom even though they are tested in one.

More than anything, we need to get over high-stakes tests that measure narrowly. They do not account for actual use, continued practice, remediation, or attitude of use. If they did have longer-term and broader consequences, we would see and hear the results for ourselves in the public sphere.

Video source

Oh, the humble IQ test.

Not so fast. This video by TED-Ed provides insights into what the IQ test was originally for (identifying students for remediation) and what it has become (sorting, categorising, and labelling, not always with good intent or consequences).

Caveat emptor — let the buyer beware. If you rely on IQ tests, be aware of what you might be buying into and perpetuating.


How might an educator reflect on or respond to such a tweet?

One general reflection might be to not ask questions that you do not want answers to. These questions invite trouble.

The student was scientifically correct, but socially not. If this was a science test, then the student should get the marks; if this was a social science test, then the student might not. The teacher might respond by teaching both content and values.

If we consider the SAT, the prime test for entrance to US universities, what does that test actually measure?

The video below provides insights into the history and design of the SAT.

Video source

It concludes with this sobering thought:

The SAT was created in the pursuit of precision. An effort to measure what we’re capable of — to predict what we can do. What we might do. What we’ve forgotten is that, often, that can’t be untangled from where we’ve been, what we’ve been through, and what we’ve been given.

The same could be said about practically any other academic test taken on paper.

Video source

Even though I might have referred to the marshmallow study a few times in the past, I misrepresented it. I simply passed on what I had heard instead of being more critical and nuanced.

In 2014, I learnt that the original study was less about how childhood traits like self-control (delaying gratification) were predictors of adult success. It was more about the children’s coping mechanisms and decision-making. The researcher behind the study, Walter Mischel, said so.

The press, YouTube video creators, and even Sesame Street do not always get it right, especially when there seems to be an obvious link. If they take the bait instead of exploring nuance, they put marshmallow in the horse’s mouth and end up with egg in their faces.

This is the MOE press release that accompanied the announcement on reducing tests in Singapore schools.

First comes the policy shift (long overdue, in my opinion). Then might come the years-long mindset shifts. Next are the decades- or generations-long behavioural shifts.

The press release ends as most documents that herald change do.

You could apply points 15 and 16 to any change in schooling, but that does not make them any less true.

The stakeholders hardest to reach and change lie immediately outside the school arena, i.e., parents and enrichment tuition centres. This is what makes the change process arduous.

Like teaching, the policy announcement is neat. And like learning, the actual change processes are messy. It is time to muck about.

If you are going to use video game-based teaching to have video game-based learning, you also need to align the assessment with video games.

What schools and educational institutions often do instead is use video games to try to teach content. The more informed ones might focus on attitudes and skills, but most stop at content acquisition. That is why the tests remain in the traditional realm.

Video source

The pedagogy needs to be aligned with the assessment. So what might assessment that leverages video games look like? The video above provides some clues. Spoiler: The tests are performative, not just cognitive.

If we measure only for cognitive outcomes, other methods might already be efficient and/or effective for the teaching and learning of content. This is not to say that we should not also test for cognitive outcomes. But we need to be aware that our current assessment falls short. This is why new interventions often show negative results or “no significant differences”.


One basic aspect of assessment literacy is question design. There are several principles in the case of multiple choice questions. The tweet below illustrates a few.

The options cannot be so obvious as to not challenge the learner. No one lives to be 500, and even a child without siblings knows a grandparent cannot be five years old.

The choices should not just be about content and standards; they also have to be authentic. To avoid embarrassment and mistakes, it helps to think like and for the learner.

272/365: Student by Rrrodrigo, on Flickr, under a Creative Commons Attribution-NonCommercial 2.0 Generic licence.

Recently I read an article in The Atlantic, The End of Paper-and-Pencil Exams?

The headline asked a speculative question, but did not deliver a clear answer. It hinted at mammoth change, but revealed that dinosaurs still rule.

Here is the short version.

This is what 13,000 4th grade students in the USA had to do in an online test that was part of the National Assessment of Educational Progress. They had to respond to test prompts to:

  • Persuade: Write a letter to your principal, giving reasons and examples why a particular school mascot should be chosen.
  • Explain: Write in a way that will help the reader understand what lunchtime is like during the school day.
  • Convey: While you were asleep, you were somehow transported to a sidewalk underneath the Eiffel Tower. Write what happens when you wake up there.

This pilot online assessment was scored by human beings. The results were that 40% of students struggled to respond to the question prompts, as they were rated a 2 (marginal) or 1 (little or no skill) on a six-point scale.

This was one critique of the online test:

One downside to the NCES pilot study: It doesn’t compare student answers with similar questions answered in a traditional written exam setting.

I disagree that this is necessary. Why should the benchmark be the paper test? Why is a comparison even necessary?

While the intention is to compare the questions, what a paper vs computer-based test might do is actually compare media. After all, the questions are essentially the same, or by some measure very similar.

Cornelia Orr, executive director of the National Assessment Governing Board, stated at a webinar on the results that:

When students are interested in what they’re writing about, they’re better able to sustain their level of effort, and they perform better.

So the quality and type of questions are the greater issues. The medium and strategy of choice (going online and using what is afforded there) also influence the design of questions.

Look at it another way: Imagine that the task was to create a YouTube video that could persuade, explain, or convey. It would not make sense to ask students to write about the video. They would have to design and create it.

If the argument is that the YouTube video’s technical, literacy, and thinking skills are not in the curriculum, I would ask why that curriculum has excluded these relevant and important skills.

The news article mentioned some desired outcomes:

The central goal of the Common Core is deeper knowledge, where students are able to draw conclusions and craft analysis, rather than simply memorize rote fact.

An online test should not be a copy of the paper version. It should have unGoogleable questions so that students can still Google, but they must be tested on their ability to “draw conclusions and craft analysis, rather than simply memorize rote fact”.

An online test should be about collaborating in real-time, responding to real-world issues, and creating what is real to the learners now and in their future.

An online test should not be mired in the past. It might save on paper-related costs and perhaps make some grading more efficient. But that focuses on what administrators and teachers want. It fails to provide what learners need.
