Another dot in the blogosphere?

Posts Tagged ‘testing’

We live in testing times, not least because of people like Trump and the consequences of their thoughtlessness.

Last week, the local press bragged about how Singapore universities were moving towards electronic examinations.

This sounds oh-so-progressive until you read excerpts like:

  • “laptops to replace pen-and-paper exams because students are losing the ability to write by hand”
  • “online exams save paper”
  • “efficiency in distribution of exam papers, marking and collating results”

The reasons for changing the medium of exams were relatively superficial. Better legibility and saving paper are natural consequences of switching media. That is like saying switching from a bicycle to a plane lets you travel further and faster, and gives you a bird’s-eye view. Of course it does!

There was no mention of how switching to electronic forms is more aligned with how we consume media today and with how many students already take their notes. The latter, in turn, is linked to the practice medium matching the task medium. If you do not understand the last point, consider a common response from teachers: Why should we use computers when students still have to take exams with pen and paper?

“Efficient” or “efficiency” was mentioned at least four times in the short article. Apparently, more effective ways of measuring learning were not on the radar.

The paper claimed that universities were “adopting more creative ways of assessment… audio or video segments, and interactive charts and graphics”. Again, those are simply functions of richer media.

But can students also respond in equally creative and critical ways? Apparently not, since “the students will have a ‘lock-down browser mode’ to prevent cheating, which cuts access to the Internet”.

Those who prepare the e-exams would rather set the same type of lower-level, Google-able, app-solvable questions than change their methods and set unGoogle-able questions or tasks instead.

I said it in my tweet and I will say it again: This is a change in exam media, but not a shift in method or mindset.

Still on the topic of tests, I tweeted a WaPo article last night.

TLDR? Here is a comic I found in 2014 that summarises the take-home message.

Tests. I can take tests.
 

The WaPo article did an excellent review of a national exam in the USA and tested the test with the help of three researchers. The researchers were experts in the content area of the test (history) and in assessment in general.

The researchers found that the tests only functioned to teach test-takers how to take tests. The questions did not necessarily test critical thinking skills like:

  • “explain points of view”
  • “weigh and judge different views of the past,” and
  • “develop sound generalizations and defend these generalizations with persuasive arguments”

Those tests were also going electronic or online. But again the change in medium was apparent; the change in method was not.

If we are going to design better forms of assessment and evaluation, we need to think outside the traditional test. This Twitter jokester gives us a clue on how to do this.

The test looks like a simple two-choice series of questions. However, the test-taker has the liberty of illustrating their answers. This provides insights into their mindsets, belief systems, and attitudes.

This makes such tests harder to quantify, but that is what changing the method entails. It is not just about increasing the efficiency of tests; it is also about being more effective in determining if, what, and how learning takes place.

 
Two recent reads articulated what I sometimes struggle to put into words: What seems to work in schools is sometimes an illusion.

I elaborate on the first today: an Edsurge article that explained how much “education” research is based on flawed designs.

One example was how interventions are compared to lectures or didactic teaching. With the baseline for comparison so low, it was (and still is) easy to show how anything else could work better.

Then there is the false dichotomy of H0 (the null hypothesis) and H1 (the alternative hypothesis). The conventional wisdom is that if you can show that H0 is false, then H1 must be true. This is not necessarily the case because you might be ignoring other contributing or causal agents.

Finally, if there is no significant difference (NSD) between a control and the new intervention, then the intervention is judged to be just as good. Why is it not just as bad?
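To make that trap concrete, here is a tiny sketch of my own in Python. The scores are invented, not from any study mentioned in the article; they simply show how a small, noisy comparison can fail to reach significance even when the intervention group actually did worse.

```python
# A rough sketch with invented scores, not data from any study cited here.
# It illustrates the "no significant difference" trap: a small, noisy comparison
# can fail to reach significance even though the intervention group scored lower.
from scipy import stats

control = [62, 71, 55, 68, 60, 74, 58, 66]        # hypothetical exam scores
intervention = [54, 63, 70, 49, 61, 57, 66, 52]   # hypothetical exam scores

t_stat, p_value = stats.ttest_ind(control, intervention)
print(f"control mean = {sum(control) / len(control):.1f}")                 # about 64.3
print(f"intervention mean = {sum(intervention) / len(intervention):.1f}")  # about 59.0
print(f"p = {p_value:.2f}")  # roughly 0.15, i.e. "no significant difference"

# The intervention group scored about five points lower, yet the test cannot
# call the difference significant. That says something about the study's power,
# not that the intervention is "just as good" as the control.
```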

This makes it easy for unscrupulous edtech vendors to sell their wares by fooling administrators and decision-makers with a numbers game.

There was something else that the article skimmed over that was just as important.

This graph was the hook of the article. If the data are correct, then the number of movies that Nicolas Cage appeared in from 1999 to 2009 eerily correlates with the number of swimming pool drownings during the same period.

No one in their right mind would say that Cage being in movies caused those drownings (or vice versa). Such a causal link is ridiculous. What we have is a correlation of unrelated phenomena.

However, just about anything can be correlated if you have many sources and large corpuses of data. So someone can find a correlation between the use of a product and better grades. But doing this ignores other possible causes like changes in mindsets, expectations, or behaviours of stakeholders.
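To see how easily unrelated numbers line up, here is another small sketch of my own. The figures are invented (they are not the actual Cage filmography or drowning counts), but the point holds: two series with nothing in common can still correlate almost perfectly.

```python
# Invented yearly counts for two things that have nothing to do with each other.
import numpy as np

films_per_year = np.array([2, 3, 4, 4, 5, 6, 7])                    # made up
drownings_per_year = np.array([110, 118, 131, 127, 140, 151, 158])  # made up

r = np.corrcoef(films_per_year, drownings_per_year)[0, 1]
print(f"Pearson r = {r:.2f}")  # roughly 0.99 for these invented values

# A correlation this strong proves nothing about causation. Trawl enough
# unrelated data series and some of them will line up this neatly by chance.
```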

So what are educators, decision-makers, and concerned researchers to do? The article recommends a three-pronged approach:

  1. Recognise that null hypothesis significance testing does not provide all the information that you need.
  2. Instead of NSD comparisons, seek work that explains the practical impacts of strategies and tools.
  3. Instead of relying on studies that obscure by “averaging”, seek those that describe how the intervention works across different students and/or contexts.

This is good advice because it saves money, invests in informed decision-making, and prevents implementation heartache.

I have seen far too many edtech ventures fail or lose steam in schools not just because the old ways accommodate and neutralise the new ones. They stutter from the start because flawed decisions are made by relying on flawed studies. Pulling the wool away from our eyes is long overdue.

If this tweet were a statement in a sermon, I would say amen to that.

Teachers, examiners, and administrators disallow and fear technology because doing what has always been done is just more comfortable and easier.

Students are forced to travel back in time and not use today’s technologies in order to take tests that measure a small aspect of their worth. They bear this burden because their parents and teachers tell them they must get good grades. To some extent that is true as they attempt to move from one level or institution to another.

But employers and even universities are not just looking for grades. When students interact with their peers and the world around them, they learn that character, reputation, and other fuzzy traits not measured in exams are just as important, if not more so.

Tests are losing relevance in more ways than one. They are not in sync with the times and they do not measure what we really need.

In an assessment and evaluation Ice Age, there is cold comfort in the slowness of change. There is also money to be made from everything that leads up to testing, the testing itself, and the certification that follows.

 
Like a glacier, assessment systems change so slowly that most of us cannot perceive any movement. But move they do. Some glaciers might even be melting in the heat of performance evaluations, e-portfolios, and exams where students are allowed to Google.

We can either wait the Ice Age out or warm up to the process of change.

By reading what thought leaders share every day and by blogging, I bring my magnifying glass to examine issues and create hotspots. By facilitating courses in teacher education I hope to bring fuel, heat, and oxygen to light little fires where I can.

What are you going to do in 2014?

 
I finally read a tab I had open for about a week: A teacher’s troubling account of giving a 106-question standardized test to 11-year-olds.

This Washington Post blog entry provided a blow-by-blow account of some terrible test questions and an editorial on the effects of such testing. Here are the questions the article raised:

  • What is the purpose of these tests?
  • Are they culturally biased?
  • Are they useful for teaching and learning?
  • How has the frequency and quantity of testing increased?
  • Does testing reduce learning opportunities?
  • How can testing harm students?
  • How can testing harm teachers?
  • Do we have to?

The article was a thought-provoking piece that asked several good questions. Whether or not you agree with the answers is moot. The point is to question questionable testing practices.

I thought this might be a perfect case study of what a poorly designed test looks like and what its short-term impact on learning, learners, and educators might be.

The long-term impact of bad testing (and even just testing) is clear in a society like Singapore. We have teach-to-the-test teachers, test-smart students, and grade-oriented parents. We have tuition not for those who need it but for those chasing perfect grades. And meaningful learning takes a back seat or is pushed out of the speeding car of academic achievement.

We live in testing times indeed!

I was thinking about how testing and grading were contributing to things like competitive tuition syndrome here and the race to the bottom in the US.

I was also wondering why politicians and policymakers were hitting the panic button when students in their countries did not do well in international tests like TIMSS and PISA. Was it really possible to draw a straight line from grades to economic success?

I seriously doubted it as there are many more important factors that contribute to the well-being of a country. Put another way, who cares if your students are not test-smart but are world leaders and world beaters in various fields?


Video source

I gained some perspective when I watched the closing ISTE 2012 keynote by Yong Zhao. The video is long, but the important bit starts at the 54-minute mark.

In short, Yong Zhao illustrated how there was a negative correlation between test scores like PISA and entrepreneurial indicators. A country whose students did well in tests was not guaranteed economic success.

But correlations do not explain phenomena. Low test scores do not cause high entrepreneurial capability. The numbers do not reveal truths, but neither do they lie. They merely hide deeper issues that need to be explored and explained.

Yong Zhao did this by asking and answering three questions in his keynote:

  1. What matters more? Test scores or confidence?
  2. Are you tolerant of talent? Do you allow it to exist? Do you support it?
  3. Are you taking advantage of the resources you have? Or do you impoverish yourself in the pursuit of test scores?

I think he had one statement that practically addressed all three questions. In explaining why US students did poorly in tests but well on the world economic stage, he said:

Creativity cannot be taught… but it can be killed. American schools don’t teach it better. We kill it less successfully.
Video source

Our schooling system pins creativity to the ground and mindless tuition applies the coup de grâce.

I wish we could be less successful killers of creativity too…

This article begins with an intriguing question: When is a test not a test? It cites a tweet by @Scott_E_Benson:

Then it dances around the benefits and pitfalls of tests before suggesting how one might assess and evaluate without the tests that we are most familiar with.

It suggests gamification and gaming strategies. It suggests portfolios, self-assessment, and peer accountability. It suggests measures that are more progressive than the quality control tests that are relevant only for the industrial age.

Thinking gamers might tell you that they are being tested all the time, but the tests do not feel like tests. That is when a test is not a test.


Video source

A game can be pure fantasy, be based on reality, or be a hybrid like the one featured above. Unlike most video games, this one does not have obvious quests and thus mirrors much of life.

It has also been said that, unlike school, life throws tests at you whether or not you are ready. When that happens, you experience a knowledge gap and you need to problem-solve. That makes the seeking, analysis, and use of information relevant.

Despite the surprises that life throws at you, this form of insidious testing seems natural. School-based testing does not.

Like other creatures in the animal kingdom, we start learning by play. Why not be tested by play?

I read this NYT article and I loved this response by Cathy Davidson.

One of the main ideas of the NYT article was that the push to adopt various technologies was not leading to higher test scores. One of Davidson’s responses was that we should not be integrating technology to raise test scores but to promote meaningful learning and prepare learners for the way they will live.

I agree. The problem is not that technology use is not raising test scores. The problem is the view that test scores should be the indicator of successful technology integration in the first place. Traditional test scores should not be the benchmark for determining whether technology adoption or integration is successful.

I would go so far as to propose that if you only want higher test scores, forget about creative or meaningful use of ICT or interactive digital media (IDM). Just focus on test preparation!

To put it more simply, if you aren’t going to change anything about schooling, then don’t use technology. After all, today’s technologies serve as disruptive forces to leverage, as this blogger argues.

If you really want higher scores, then have newer tests that measure the other opportunities technology brings to the classroom. As one teacher in the NYT article pointed out:

… look at all the other things students are doing: learning to use the Internet to research, learning to organize their work, learning to use professional writing tools, learning to collaborate with others.

To reinforce that point, I quote Davidson:

We must, if we are responsible, educate them for the world they already inhabit in their play and will soon inhabit in their work. The tests we require do not begin to comprehend the lives our kids lead.

So measure these life skills and test if you must. But let us also look at the learner’s ability to organize, evaluate and collaborate.

Tigers Play Fighting in Water 5 by Abysim, on Flickr (Creative Commons Attribution-Share Alike 2.0 Generic License)

The other thing I took away from that quote is the importance of play.

In the animal kingdom, other mammals prepare for life through play. While our own mammalian lives are, by our own measures, more complex, I think that basic principle still holds true.

Somehow we are schooled to leave this behaviour behind even though it is the most natural of instincts. We label such behaviour childish.

Do mammals outgrow the need to play? They seem to, but they retain that capacity. Humans are the slowest among the primates to develop independence, so we play more and longer. We retain our capacity to be child-like.

Our sense of play should be encouraged instead of being stifled. It is what makes us explore, take risks and learn from experience. It is tools like the iPad that encourage play and that is why they are so natural and popular.

What modern day kids do in their play is relevant to the world that they will inherit and inhabit. I am certainly not the only one who believes this. The Davidsons, Gees, Squires, Appelmans, and McGonigals of the world certainly seem to think so.

For now we live with traditional tests. Gee would argue that games are essentially one series of tests after another. They just do not look like tests as we know them, and they measure different things. So it is a testing time in more ways than one. I say we deal with it with some serious play.

