Another dot in the blogosphere?

We live in testing times, not least because of people like Trump and the consequences of their thoughtlessness.

Last week, the local press bragged about how Singapore universities were moving towards electronic examinations.

This sounds oh-so-progressive until you read excerpts like:

  • “laptops to replace pen-and-paper exams because students are losing the ability to write by hand”
  • “online exams save paper”
  • “efficiency in distribution of exam papers, marking and collating results”

The reasons given for changing the medium of exams were relatively superficial. Better legibility and paper savings are natural consequences of switching media. That is like saying switching from a bicycle to a plane lets you travel further and faster, and gives you a bird’s eye view. Of course it does!

There was no mention of how switching to electronic forms is not only more aligned with how we consume media today, but also with how many students already take notes. The latter, in turn, is linked to matching the practice medium with the task medium. If you do not understand that last point, consider a common response from teachers: Why should we use computers when students still have to take exams with pen and paper?

“Efficient” or “efficiency” was mentioned at least four times in the short article. Apparently, more effective ways of measuring learning were not on the radar.

The paper claimed that universities were “adopting more creative ways of assessment… audio or video segments, and interactive charts and graphics”. Again, those are simply functions of richer media.

But can students also respond in equally creative and critical ways? Apparently not, since “the students will have a ‘lock-down browser mode’ to prevent cheating, which cuts access to the Internet”.

Those who prepare the e-exams would rather set the same type of lower-level, Google-able, app-solvable questions than change their methods and set un-Google-able questions or tasks instead.

I said it in my tweet and I will say it again: This is a change in exam media, but not a shift in method or mindset.

Still on the topic of tests, I tweeted a WaPo article last night.

[TWEET]

TLDR? Here is a comic I found in 2014 that summarises the take-home message.

Tests. I can take tests.

The WaPo article did an excellent review of a national exam in the USA and tested the test with the help of three researchers. The researchers were experts in the content area of the test (history) and of assessment in general.

The researchers found that the tests only functioned to teach test-takers how to take tests. The questions did not necessarily test critical thinking skills like:

  • “explain points of view”
  • “weigh and judge different views of the past,” and
  • “develop sound generalizations and defend these generalizations with persuasive arguments”

Those tests were also going electronic or online. But again the change in medium was apparent; the change in method was not.

If we are going to design better forms of assessment and evaluation, we need to think outside the traditional test. This Twitter jokester gives us a clue on how to do this.

The test looks like a simple two-choice series of questions. However, the test-taker has the liberty of illustrating their answers. This provides insights into their mindsets, belief systems, and attitudes.

This makes such tests harder to quantify, but this is what changing the method entails. It is not just about increasing the efficiency of tests; it is also about being more effective in determining if, what, and how learning takes place.

19 Oct: News broke of the NetsPay app that would allow more widespread contactless and cashless payments.

This was a long-awaited move by NETS, given that its cashless, PIN-based option was 32 years old. It languished in inertia while others elsewhere overtook us — the oft-cited street vendors in parts of China being a prime example.

20 Oct: Launch day saw users facing teething problems. Hopefully these will go away quickly, because technical issues are relatively easy to solve.

What is less easy to deal with are mindsets. Let me share two examples: store readiness and app design.

I was out and about on the 20th and asked staff from two stores at point-of-sale terminals if I could use NetsPay. I might as well have been speaking Swahili.

Both seemed unsure about what I was talking about despite me showing the app on screen. Both said no.

The user interface of the app is basic. It does not have to be complicated, but it should look better than something designed by interns who did not communicate with one another.

NetsPay app help screen.

The help screen uses a font similar to Comic Sans and is a mixture of hand-drawn and typed material. The screenshot above also illustrates a clearly misplaced circle. It should be over the “+” sign.

This look and feel might be acceptable in a mock-up, but not in an app that is already out of beta.

Am I being petty? I think not, particularly when 1) small things add up, and 2) these details indicate what the design and rollout processes were like.

Collectively, the cold response from store staff, the technical glitches, and the amateurish design point to rushed development and insufficient evaluation.

While some might argue that there is no better test bed than real use, there is no excuse for poor preparation, design, and rollout. These are things developers can control.

Such control needs to be driven primarily by a user-centric mindset. This means asking questions and experiencing processes from the user’s point of view. This means making the app easy and convenient to use. This means treating the user with savvy and as savvy.

Ultimately being user-centric means putting users first, not just having your administrative and regulatory bases covered.

I only have myself to blame…

I wrote “10 tips for crafting a teaching philosophy” a year ago with the intent of sharing it with new batches of learners. I forgot to do so, and now have to deal with poorly organised statements.

It is not that the tips would have guaranteed good writing. They would simply have provided a scaffold to help inexperienced writers craft a challenging piece of writing.

I am not forgetting this time around. The pain of providing repetitive feedback on disorganised essays has reminded me to create a link in the Google Site that is my workshop resource.

I have two more tips for future faculty or anyone who has to write academically.

One, avoid passive voice. This tweet might help you spot passive voice:

So write “Students perform task X” instead of writing “Task X was performed… by zombies”.

Two, when learning to prepare lesson plans, write them for someone else to teach. This means stepping outside yourself to see what someone else might not understand about your learners, intent, content, strategies, assessment, etc.

Just as you try to teach in a student-centred way, you should write in a reader-centred manner. Aim for clarity, not complexity. You must convince, not confuse.

Picture a difficult student or an indifferent teacher. What is worse coming from both is not “I have done my part” or “I do not know”; it is “I do not care”.

“I have done my part” and “I do not know” often stem from ignorance. This can be remedied with teaching, modelling, mentoring, coaching, practice, and monitoring.

“I do not care” comes from a place of willful ignorance. Learners might be made aware of a harmful mindset or behaviour, but they choose not to change.

It is easy enough to school the “I have done my part” and “I do not know” learner. But the “I do not care” individuals need a sustained and long-term education.

This sort of education is not always pleasant. It requires the unlearning of old and bad habits and the learning of new ones.

I like to think of the process as smashing glassware, melting the shards, and shaping the sludge into something new. The process is hot, sweaty, and requires much experience and skill.

You can teach an old dog new tricks. Just remember that it is tough on the dog and the trainer.

I have been facilitating a series of workshops for future faculty for several semesters. I also provide feedback on written assignments and conduct performative evaluations.

This academic semester is the first to worry me because of a few cases of plagiarism.

But first, some background.

Like some universities, the one I work with relies on Turnitin to pre-process assignments. Turnitin is embedded in the institutional learning management system (LMS) and provides summary scores based on how much each assignment matches documents already in the Turnitin database.

Plagiarism is a huge sin in academia. It is passing someone else’s work off as your own. If severe, it can get a faculty member fired or a student expelled.

Plagiarism is a human intent to cheat. It is an attitude or belief system that manifests in behaviours. Algorithms can try to make sense of patterns that result from those behaviours, but they cannot judge if a person has plagiarised.

So I do not take the matching scores at face value. As I have explained before, a high score might not be evidence of plagiarism, while a relatively low score might hide it.

This semester I detected a few cases of plagiarism in the assignments of every group of adult learners I processed. I used to get a clear case or two once in a blue moon. The number of incidents this semester made it feel like there was an epidemic.

Thankfully there was no epidemic of dishonesty. But one case is one too many because the adult learners I educate today are tomorrow’s lecturers and professors.

So I provide a warning and follow the procedure of arranging for counselling by a central office.

The common response to “Why did you do this?” is “I did not know”. I do not buy that, since there are briefings about plagiarism and its consequences, as well as an academic culture that avoids it.

“I did not know” might be an authentic answer. It might also be a convenient excuse. Both stem from not attending the briefings, or being new or blind to the culture. If this is the case, there is a more serious problem than skipping briefings or experiencing the world blind and deaf. It is being immune to change and living by another phrase: “I do not care”.

More on this phrase tomorrow.

A reply like “I’ve done my part” sounds innocuous, right?

This was what an adult learner said to me when I asked him why he was not contributing to his group’s discussion.

I was surprised, angry, and disappointed, roughly in that order. He had not “done his part” despite sharing his views because he did not listen to his peers, offer responses, or raise questions.

He did the bare minimum and expected the rest to carry the weight of the discussion.

Anyone who has done group work or projects for school or work knows at least one person like that. People with bad attitudes are why group work and projects have a bad name.

I did not let “I’ve done my part” get away with it. I gently but firmly reminded him of his other responsibilities to the group.

He was not done. But he might be, in a different sense. I do not forget a face and I will remember his name. I take my role as watchdog as seriously as my role as educator.

As a teacher educator, I was aggressive in making sure that student teachers with bad attitudes did not go on to affect and infect children in schools.

As an educator of future faculty, I will not claim “I’ve done my part”. I still have lots to do.

By the time you read this, I might be done with four intense sessions of performative evaluations of adult learners over the last two days.

Long story short: My learners need to provide evidence that they can facilitate student-centric lessons in a university context. This is as challenging to do as it is to evaluate.

To complicate evaluative matters, I have two batches of learners. I will need to grade the final written assignments of the second cohort as I evaluate the performative skills of the first.

This got me thinking about how much grading can be like the Rotten Tomatoes (RT) rating system for movies. If you are not familiar with RT, the video below provides a concise description and criticism of its flaws.

Video source

People might refer to a movie’s RT score as a representation of its quality and then decide whether or not to watch it. While a single number is quick and convenient, it may not be valid or reliable.

The same could be said about essay grading and performative evaluations. As much as we use guidelines, standards, or rubrics, subjectivity is still an important factor.

I have reflected before on why and how I embrace subjectivity. This is particularly important when we need to find a balance between maintaining standards and treating each learner as an individual.

I am heavily influenced by Todd Rose’s book, The End of Average. We need to learn to recognise the contexts in which using an average is meaningless. If we insist on grading strictly on a curve or against an unrealistic standard, we do more harm than good.

This harm affects our learners because we do not treat them as individuals and start where they are. The harm taints the teaching profession because we practice blindly. The harm persists when we act without question.

Does it take a rotten tomato hitting us in the face before we change?
