Another dot in the blogosphere?

Posts Tagged ‘evaluation’

The simple message in the tweet below hides a profound principle of evaluation.

Why is the message a timely one? On many university campuses all over the world, an academic semester is nearing its end or has already ended. It is time for the end-of-course evaluations.

Instructors who do not have teaching backgrounds, those who resent teaching, or those who cannot teach well are dreading these evaluations. If only they would collectively point out that such exercises are based on a flawed approach.

Many end-of-course evaluations (otherwise known as student feedback on teaching, or SFTs) read like customer satisfaction surveys because they are often designed by administrators, not assessment and evaluation experts.

Even a well-prepared instructional designer should be able to point out that SFTs often operate only at Level 1 of the Kirkpatrick evaluation model. They are a simple measure of each student’s snapshot reaction to weeks or months of coursework.

SFTs should be about the effectiveness of teaching and the quality of learning. But if you unpack the questions in most evaluation forms, you will rate such “evaluations” as satisfaction surveys instead. A researcher with rudimentary knowledge of data collection will tell you that such information is not valid — it does not measure what it is supposed to measure.

I have reflected before on how I do not place much stock in SFTs if they are not well designed and implemented. I ignore the results even though I do well in them. How can I when I know that they are not valid measures? Why should I be satisfied with unsatisfactory practices?


Today I offer another reason why one-size-fits-all end-of-course evaluations are not valid.

I have reflected on how I design and implement my classes and workshops to facilitate learning. I do not try to deliver content. The difference is like showing others how to prepare meals vs serving meals to them.

You would not evaluate a chef and a Grab delivery person the same way. Each has their own role and worth, so each should be judged for that. Likewise, student feedback on teaching must cater to the design and implementation of a course.

I have never placed much weight on end-of-course feedback, even when the results were favourable. Why? My knowledge of research on such feedback and my experiences with the design of questions hold me back.

In my Diigo library is a small sample of studies that highlight gender, racial, and other biases in end-of-course feedback tools. These biases make the data invalid. The feedback forms do not measure what they purport to measure, i.e., the effectiveness of instruction, because students are influenced by distractors.

Another way that feedback forms are not valid is in their design. They are typically created by administrators who have different concerns from instructors. The latter are rarely, if ever, consulted on the questions in the forms. As a result, students might be asked questions that are not relevant.

For example, take one such question I spotted recently: “The components of the module, such as class activities, assessments, and assignments, were consistent with the course objectives.” This seems like a reasonable question, and it is an important one to both administrator and instructor.

An administrator wants alignment, particularly if a course is to be audited externally or benchmarked against similar offerings elsewhere. An instructor needs to justify that the components are relevant to a course. However, there are at least three problems with such a question.

First, the objectives are not as important as outcomes. Objectives are theoretical and focus on planning and teaching, while outcomes are practical and emerge from implementation and learning. Improvement: Focus on outcomes.

The second problem is that it takes only one component — an activity, an assessment, or an assignment — to throw the question off. A student might also choose to focus on just one, two, or all three components. Improvement: Each component needs to be its own question.

Third, not all components might be present. Getting personal: one of the modules I facilitate has no traditional or formal assessments or assignments. A student cannot gauge a non-existent component, so the question is not valid. Improvement: Customise end-of-course forms to suit the modules.
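
To make the last two improvements concrete, here is a minimal sketch of how a per-module feedback form might be assembled. The question bank, component names, and wording are hypothetical, not taken from any actual form:

```python
# Hypothetical question bank: one item per component (problem 2),
# worded around outcomes rather than objectives (problem 1).
QUESTION_BANK = {
    "activities": "The class activities were consistent with the course outcomes.",
    "assessments": "The assessments were consistent with the course outcomes.",
    "assignments": "The assignments were consistent with the course outcomes.",
}

def build_form(components):
    # Include only items for components the module actually has (problem 3).
    return [QUESTION_BANK[c] for c in components if c in QUESTION_BANK]

# A module with activities but no formal assessments or assignments:
print(build_form(["activities"]))
```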

Another broad problem with feedback forms is that they are not reliable. The same questions can be asked of different batches of students, and assuming that nothing else changes, the average ratings can vary wildly. This is a function of the inability to control for learner expectations and a lack of reliability testing for each question.
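
As a rough illustration of that unreliability, here is a sketch of how the same item can swing between batches. The ratings are made up; a proper check would use formal reliability statistics such as Cronbach’s alpha:

```python
# Made-up ratings of the same question by two batches of students.
from statistics import mean, stdev

batches = {
    "Batch 1": [4, 5, 3, 4, 4, 5],
    "Batch 2": [2, 3, 2, 4, 3, 2],
}

for name, ratings in batches.items():
    print(f"{name}: mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")

# If the means swing widely even though nothing about the course has
# changed, the item is picking up learner expectations, not teaching quality.
```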

End-of-course evaluations are convenient to organisers of courses and modules, but they are pedagogically unsound and lazy. I would rely more on the critical reflection of instructors and facilitators, as well as their ability to collect formative feedback during a course to make changes.

Trying to incorporate studio-like sessions into traditional structures is challenging, but the evaluative equivalent is worse.

I had to work with and against the expectation of essay-as-evidence. Outside of exam papers, the semestral take-home essay is the most common form of assessment in higher education. Even exams have essays.

The next most common form of assessment might be projects, but these are not always possible because they are even more difficult to grade. The further one moves from paper-based assessments, the more difficult learning might seem to quantify.

And that is what evidence of learning in most institutes of higher education looks like — graded assessments. Not evaluations, just grades.

I work against assessments and move towards evaluations by designing and implementing mixed experiences. I work within the system of essays but 1) make them challenging, and 2) embrace praxis.

Praxis is theory put into practice and theory-informed practice. So my evaluations of learning are based not just on what my learners claim to do, they are also based on what they can actually do. I get them to perform by sharing, teaching, critiquing, and reflecting.

Reflection is particularly important. In my latest design of one Masters-level challenge, I asked learners to look back, look around, and look forward at their writing and their practice. This was challenging not just because of the three prongs but because most students do not seem to reflect deeply and regularly.

But my students rise to such challenges and impress me with their writing and performance.

I am now near the end of providing feedback on and grading a written component that incorporated the three prongs. This has been a challenge for me since each student’s work has required two to three hours to evaluate. This means I process no more than two students’ work each day.

I shall be working over the weekend to tie up loose ends. This means giving all their work a second look and completing an administrative checklist.

Why bother? Because I care about putting evaluation over assessment, measuring studio-based learning with praxis, and nurturing critical reflection.

I am recreating some of my favourite image quotes from some time ago. This time I use Pablo by Buffer and indicate attribution and the CC license.

Tomorrow's educational progress cannot be determined by yesterday's successful performance.

I like this quote because it addresses how academic progress is often measured largely or even solely by paper-based tests. Such tests are yesterday’s measure, and they are relatively easy to prepare for and to score.

Today’s educational progress and successful performance have higher-order demands and outcomes. Consider soft skillsets like communication and collaboration; factor in digital and scientific literacies; think about metacognition and value systems.

We cannot test those things; they must be experienced, performed, and reflected on. We need to be designing and implementing performative evaluations and e-portfolios. We need to get learners to constantly create, not just consume.

Note: I am on vacation with my family. However, I am keeping up my blog-reflection-a-day habit by scheduling a thought a day. I hope this shows that reflections do not have to be arduous to provoke thought or seed learning.

… is another man’s poison.

That was the saying that came to mind when I read this student’s feedback on teaching.

A reporting officer or an administrator might view this feedback on teaching negatively.

A teacher who focuses on content as a means of nurturing thoughtful learners might view this positively.

I am not describing a false dichotomy. I am summarising reality.

The word “evaluation” might have been ill-defined and misused.

I was surprised to read someone like Senge reportedly saying this about evaluation.

Evaluation is when you add a value judgment into the assessment. Like, ‘Oh, I only walked two steps. I’ll never learn to walk.’ You see, that’s unnecessary. So, I always say, ‘Look, evaluation is really optional. You don’t need evaluation. But you need assessment.’

Evaluation is about adding a value judgement into assessment. That is why it is called eVALUation. But that does not make evaluation negative or optional.

Student A might get an assessment score of 60/100. Student B might get an assessment score of 95/100. One way to evaluate the students is to compare them and say that student B performed better than A. More is better and that is the value, superficial as doing that may be.

If you consider that Student A previously got a score of 20/100 and B a previous score of 90/100, the evaluation can change. Student A improved by 40 points; student B by 5 points. The evaluation: Student A made much more improvement than Student B.
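
A toy sketch makes the distinction plain. The numbers are the hypothetical scores above; the printed measurements are the assessment, and the competing readings of them are the evaluations:

```python
# Hypothetical scores from the example above: (previous, current).
scores = {"A": (20, 60), "B": (90, 95)}

for student, (before, after) in scores.items():
    print(f"Student {student}: score={after}/100, improvement={after - before:+d}")

# Valuing raw scores: B (95) outperforms A (60).
# Valuing improvement: A (+40) outperforms B (+5).
# Same measurements; different value judgements; different evaluations.
```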

The value judgements we bring into assessments are part of evaluations. Assessments alone are scores and grades, and not to be confused with the value of those numbers and letters.

In the context of working adults who get graded after appraisals, a B-performer is better than a C-performer. The appraisal or assessment led up to those grades; the worker, reporting officer, and human resource manager place value in those letters (no matter how meaningless they might actually be).

The assessments of children and adults are themselves problematic. For kids, it might be a broad way of measuring a narrow band of capabilities (academic ones). For workers, it might be an overly simplistic way of assessing complex behaviours. So the problem might first lie with assessment, not evaluation.

As flawed as different assessments may be, they are simply forms of measurement. We can measure just about anything: Reasoning ability, level of spiciness, extent of love, degree of beauty, etc. But only evaluation places value on those measurements: Einstein-level genius, hot as hell, head over heels, having a face only a mother could love.

I have noticed people — some of them claiming to be teachers or educators — not understanding the differences between assessment and evaluation. Because the terms have not been kept distinct, evaluation has been misunderstood and misused.

Evaluation is not a negative practice and it is not optional. If evaluations seem overly critical (what went wrong, how to do better), they merely reflect the values, beliefs, and biases of the evaluator. We do not just need assessment, we also need evaluation to give a measurement meaning.

What is wrong with designing a teaching resource because it is cute, fun, and current?

Nothing, if there is good reason for it.

A good reason for an exit ticket is to find out if and what students think they learnt. Another is to get feedback about a teacher’s instruction.

The tweeted idea is a more current version of the traditional smiley sheet. In evaluative terms, it is barely Level 1 of Kirkpatrick’s evaluation framework. The emoticon sheet might provide answers to whether students liked the instruction. However, liking something does not mean you learnt anything.

It is important to find out how students feel after a lesson. It is more important to find out if they learnt anything.

The fascination with scores, symbols that can be codified to numbers, and distractions from learning undermines what a teacher needs to find out with an exit ticket.

There are at least three critical questions exit tickets should address in well-thought-out but curriculum-oriented teaching:

  1. Did the students learn?
  2. What did they learn?
  3. What needs to happen next?

You might be able to get away with just the first two if the session is standalone or discrete (e.g., a TED talk).
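
For illustration only, here is one way to capture those three questions as a record. The field names are my own invention, not a prescribed format, and the third field could be dropped for a standalone session:

```python
# Illustrative sketch of an exit ticket built around the three questions.
from dataclasses import dataclass

@dataclass
class ExitTicket:
    learned_something: bool   # 1. Did the student learn?
    what_was_learnt: str      # 2. What did they learn?
    next_steps: str           # 3. What needs to happen next?

ticket = ExitTicket(
    learned_something=True,
    what_was_learnt="The difference between assessment and evaluation",
    next_steps="Redesign my exit ticket beyond a smiley sheet",
)
```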

Designing only with aesthetics and/or numbers in mind is not enough. Good educational theory that is based on rigorous research and/or critical, reflective practice should be applied to the design of learning experiences and resources. To do anything less is to do a disservice to our learners.

I have been on the circuit as an independent education consultant for over a year. I continue what I did before as a teacher, educator, and teacher educator in that I conduct seminars and facilitate workshops. Despite the difference in job title, the job scope remains the same: Trying to win hearts and minds, and creating the push and pull for change.

Anyone who stands up to this task will want to know how successful their sessions are. The success of such interventions can be measured in several ways: Involvement of participants before, during, and/or immediately after the event; longer-term follow-up after the event; scores in a feedback form; the “feels”.

Most event organizers rely heavily or even primarily on a feedback form. They forget or ignore the backchannels, the one-on-one conversations, the informal follow-ups that lead to loose online communities, etc. A feedback form is limited in scope and Kirkpatrick might say that this is only Level 1 evaluation.

Most speakers and facilitators are used to relying on the sense they get after an event (the “feels”). Depending on their experience and how sensitive their radars are, this might be a gauge that can dovetail with other methods. The problem with relying solely on this method is that a person can experience 99 positive things and just one negative thing, but choose to dwell on the latter.

I have discovered another measure that has strong predictive and evaluative effects. This is the energy of the room. The room is often the combination of the physical venue and the people in it. It can also be online spaces for interacting with others and getting feedback.

The energy of a room takes many forms. For example:

  • How many people are there early or on time?
  • Are they smiling?
  • Do they make the effort to participate?
  • What is their body language as they sit or stand?
  • What types of questions do they ask?
  • Are they there for just one session or many in a series?
  • Do they get the nuances or jokes?

The most important question to find answers to is: Are they there because they have to or want to? If they are part of the event by their own choice, half the battle is won. They will participate more willingly and they are likely to follow up with some action on their part.

Unfortunately, I cannot fully control this factor as I design learning experiences. I can merely influence it by urging organizers to carefully select participants or skillfully craft their communication. I take the trouble to do this because the energy from a room is infectious. It gives me the energy to keep doing what I do. It is also the initial tank of fuel for my participants’ journeys of change.

If this tweet was a statement in a sermon, I would say amen to that.

Teachers, examiners, and administrators disallow and fear technology because doing what has always been done is just more comfortable and easier.

Students are forced to travel back in time and not use today’s technologies in order to take tests that measure a small aspect of their worth. They bear this burden because their parents and teachers tell them they must get good grades. To some extent that is true as they attempt to move from one level or institution to another.

But employers and even universities are not just looking for grades. When students interact with their peers and the world around them, they learn that character, reputation, and other fuzzy traits not measured in exams are just as important, if not more so.

Tests are losing relevance in more ways than one. They are not in sync with the times and they do not measure what we really need.

In an assessment and evaluation Ice Age, there is cold comfort in the slowness of change. There is also money to be made from everything that leads up to testing, the testing itself, and the certification that follows.

Like a glacier, assessment systems change so slowly that most of us cannot perceive any movement. But move they do. Some glaciers might even be melting in the heat of performance evaluations, e-portfolios, and exams where students are allowed to Google.

We can either wait the Ice Age out or warm up to the process of change.

By reading what thought leaders share every day and by blogging, I bring my magnifying glass to examine issues and create hotspots. By facilitating courses in teacher education I hope to bring fuel, heat, and oxygen to light little fires where I can.

What are you going to do in 2014?

