Another dot in the blogosphere?

Posts Tagged ‘evaluation’

I am recreating some of my favourite image quotes from some time ago. This time I am using Pablo by Buffer and indicating attribution and the CC license.

Tomorrow's educational progress cannot be determined by yesterday's successful performance.

I like this quote because it addresses how academic progress is often measured largely, or even solely, by paper-based tests. Such tests are yesterday’s measure: they are relatively easy to prepare for and to score.

Today’s educational progress and successful performance have higher-order demands and outcomes. Consider soft skillsets like communication and collaboration; factor in digital and scientific literacies; think about metacognition and value systems.

We cannot test those things; they must be experienced, performed, and reflected on. We need to be designing and implementing performative evaluations and e-portfolios. We need to get learners to constantly create, not just consume.

Note: I am on vacation with my family. However, I am keeping up my blog-reflection-a-day habit by scheduling a thought a day. I hope this shows that reflections do not have to be arduous to provoke thought or seed learning.

… is another man’s poison.

That was the saying that came to mind when I read this student’s feedback on teaching.

A reporting officer or an administrator might view this feedback on teaching negatively.

A teacher who focuses on content as a means of nurturing thoughtful learners might view this positively.

I am not describing a false dichotomy. I am summarising reality.

The word “evaluation” might have been ill-defined and misused.

I was surprised to read someone like Senge reportedly saying this about evaluation.

Evaluation is when you add a value judgment into the assessment. Like, ‘Oh, I only walked two steps. I’ll never learn to walk.’ You see, that’s unnecessary. So, I always say, ‘Look, evaluation is really optional. You don’t need evaluation. But you need assessment.’

Evaluation is about adding a value judgement into assessment. That is why it is called eVALUation. But that does not make evaluation negative or optional.

Student A might get an assessment score of 60/100. Student B might get an assessment score of 95/100. One way to evaluate the students is to compare them and say that student B performed better than A. More is better and that is the value, superficial as doing that may be.

If you consider that Student A previously got a score of 20/100 and B a previous score of 90/100, the evaluation can change. Student A improved by 40 points; student B by 5 points. The evaluation: Student A made much more improvement than Student B.
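The two evaluations above rest on the same raw numbers; only the value judgement differs. A minimal sketch in Python, using the scores from the example (the student labels and dictionary layout are illustrative, not from any real grading system):

```python
# Raw assessment data from the example: previous and current scores out of 100.
scores = {
    "Student A": {"previous": 20, "current": 60},
    "Student B": {"previous": 90, "current": 95},
}

# Evaluation 1: value the raw score ("more is better").
best_raw = max(scores, key=lambda s: scores[s]["current"])

# Evaluation 2: value the improvement (gain score = current - previous).
gains = {s: v["current"] - v["previous"] for s, v in scores.items()}
best_gain = max(gains, key=gains.get)

print(best_raw)   # Student B scores higher outright
print(best_gain)  # Student A shows the larger improvement
```

Same data, two defensible but different evaluations, depending on whether the evaluator values absolute attainment or growth.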

The value judgements we bring into assessments are part of evaluations. Assessments alone are scores and grades, and not to be confused with the value of those numbers and letters.

In the context of working adults who get graded after appraisals, a B-performer is better than a C-performer. The appraisal or assessment led up to those grades; the worker, reporting officer, and human resource manager place value in those letters (no matter how meaningless they might actually be).

The assessments of children and adults are themselves problematic. For kids, it might be a broad way of measuring a narrow band of capabilities (academic). For workers, it might be an overly simplistic way of assessing complex behaviours. So the problem might first lie with assessment, not evaluation.

As flawed as different assessments may be, they are simply forms of measurement. We can measure just about anything: Reasoning ability, level of spiciness, extent of love, degree of beauty, etc. But only evaluation places value on those measurements: Einstein genius, hot as hell, head over heels, having a face only a mother could love.

I have noticed people — some of them claiming to be teachers or educators — not understanding the differences between assessment and evaluation. As the terms have not been made more distinct, evaluation has been misunderstood and misused.

Evaluation is not a negative practice and it is not optional. If evaluations seem overly critical (what went wrong, how to do better), they merely reflect the values, beliefs, and bias of the evaluator. We do not just need assessment, we also need evaluation to give a measurement meaning.

What is wrong with designing a teaching resource because it is cute, fun, and current?

Nothing, if there is good reason for it.

A good reason for an exit ticket is to find out if and what students think they learnt. Another is to get feedback about a teacher’s instruction.

The tweeted idea is a more current version of the traditional smiley sheet. In evaluative terms, it is barely Level 1 of Kirkpatrick’s evaluation framework. The emoticon sheet might provide answers to whether students liked the instruction. However, liking something does not mean you learnt anything.

It is important to find out how students feel after a lesson. It is more important to find out if they learnt anything.

The fascination with scores, symbols that can be codified to numbers, and distractions from learning undermines what a teacher needs to find out with an exit ticket.

There are at least three critical questions exit tickets should address in well-thought-out, curriculum-oriented teaching:

  1. Did the students learn?
  2. What did they learn?
  3. What needs to happen next?

You might be able to get away with just the first two if the session is standalone or discrete (e.g., a TED talk).

Designing only with aesthetics and/or numbers in mind is not enough. Good educational theory that is based on rigorous research and/or critical, reflective practice should be applied to the design of learning experiences and resources. To do anything less is to do a disservice to our learners.

I have been on the circuit as an independent education consultant for over a year. I continue what I did before as a teacher, educator, and teacher educator in that I conduct seminars and facilitate workshops. Despite the difference in job title, the job scope remains the same: Trying to win hearts and minds, and creating the push and pull for change.

Anyone who stands up to this task will want to know how successful their sessions are. The success of such interventions can be measured several ways: Involvement of participants before, during, and/or immediately after the event; longer term follow up after the event; scores in a feedback form; the “feels”.

Most event organizers rely heavily or even primarily on a feedback form. They forget or ignore the backchannels, the one-on-one conversations, the informal follow-ups that lead to loose online communities, etc. A feedback form is limited in scope and Kirkpatrick might say that this is only Level 1 evaluation.

Most speakers and facilitators are used to relying on the sense they get after an event (the “feels”). Depending on their experience and how sensitive their radars are, this might be a gauge that can dovetail with other methods. The problem with relying solely on this method is that a person can experience 99 positive things and just one negative thing, but choose to dwell on the latter.

I have discovered another measure that has strong predictive and evaluative effects. This is the energy of the room. The room is often the combination of the physical venue and the people in it. It can also be online spaces for interacting with others and getting feedback.

The energy of a room takes many forms. For example:

  • How many people are there early or on time?
  • Are they smiling?
  • Do they make the effort to participate?
  • What is their body language as they sit or stand?
  • What types of questions do they ask?
  • Are they there for just one session or many in a series?
  • Do they get the nuances or jokes?

The most important question to find answers to is: Are they there because they have to or want to? If they are part of the event by their own choice, half the battle is won. They will participate more willingly and they are likely to follow up with some action on their part.

Unfortunately, I cannot fully control this factor as I design learning experiences. I can merely influence it by urging organizers to carefully select participants or skillfully craft their communication. I take the trouble to do this because the energy from a room is infectious. It gives me the energy to keep doing what I do. It is also the initial tank of fuel for my participants’ journeys of change.

If this tweet was a statement in a sermon, I would say amen to that.

Teachers, examiners, and administrators disallow and fear technology because doing what has always been done is simply more comfortable and easier.

Students are forced to travel back in time and not use today’s technologies in order to take tests that measure a small aspect of their worth. They bear with this burden because their parents and teachers tell them they must get good grades. To some extent that is true as they attempt to move from one level or institution to another.

But employers and even universities are not just looking for grades. When students interact with their peers and the world around them, they learn that character, reputation, and other fuzzy traits not measured in exams are just as important, if not more so.

Tests are losing relevance in more ways than one. They are not in sync with the times and they do not measure what we really need.

In an assessment and evaluation Ice Age, there is cold comfort in the slowness of change. There is also money to be made from everything that leads up to testing, the testing itself, and the certification that follows.

Like a glacier, assessment systems change so slowly that most of us cannot perceive any movement. But move they do. Some glaciers might even be melting in the heat of performance evaluations, e-portfolios, and exams where students are allowed to Google.

We can either wait the Ice Age out or warm up to the process of change.

By reading what thought leaders share every day and by blogging, I bring my magnifying glass to examine issues and create hotspots. By facilitating courses in teacher education I hope to bring fuel, heat, and oxygen to light little fires where I can.

What are you going to do in 2014?

One of the announcements at this year’s National Day Rally was a wider spectrum of entry criteria for the Direct School Admission programme.

Some might say the DSA makes a mockery of standardized exams because it allows Primary school students to get into the Secondary school of their choice. While Primary School Leaving Examination (PSLE) results are still used as criteria once they are released, the student with entry via DSA already has a foothold that non-DSA students do not.

A few might wonder if the PSLE is even necessary if such an alternative form of evaluation exists. Others might argue that the DSA criteria are not enough.

That brings us back to increasing the selection criteria for DSA. What traits might students be evaluated on? Leadership? Character?

When those traits were bandied about in popular media, people asked if things like character and leadership could be measured among 12-year-olds.

You can measure just about anything, even fuzzy, hard to quantify things like happiness [happiness index]. But let us not kid ourselves into thinking that these measures are absolute, objective, or universal.

A trait like creativity is due to many things, and an instrument no matter how elaborate, cannot measure all aspects of creativity. Most fuzzy concepts, like beauty, are subjective no matter how much you quantify them. Ask anyone to define creativity or beauty and you will get different answers; there is no single understanding.

Whenever you measure anything, there are margins of error that originate from the measurer and the measuring instrument. Sometimes the object or subject measured introduces error. Consider what happens if person A measures a fidgety person B’s height with a tape measure.

Let us say that you could measure leadership or character precisely. Just because you can does not mean you should. How different is a person when he is 6, 12, 18, 24 or 36? What if a value judgement at 12 puts a child on a trajectory that s/he is not suitable for?

We learnt that the hard way when we started streaming kids when they were 10 (Normal, Extended or Monolingual). Thankfully that process has been removed from our schooling system. Actually, I take that back. We still test for “giftedness” at 10. Some schools start pre-selecting at 9.

That said, we would be foolish to think that we do not already gauge people on fuzzy traits like character. It happens in the hiring and firing of employees. Some might argue that we are just bringing that process up the line of development.

There are many ways to measure fuzzy traits. At a recent #edsg conversation, I tweeted:

Whether or not these alternative evaluation measures are implemented, we will read rhetorical statements like “parents must change their mindsets” in forum letters, blog entries, and Facebook posts.

Of course they must. But they are not going to do so automatically.

Folks who highlight mindset sometimes fail to realize that you have to start somewhere with behaviour modification. In systemic change, you start with one or more leverage points. In our case, it is the way people are evaluated.
