Posts Tagged ‘evaluation’
Put yourself in my shoes for a moment. They are not too smelly, I promise.
I am part of a committee that evaluates presentations of edtech projects that compete for funding.
Let us say that we have to listen to 10-minute presentations, have five minutes to ask questions, and have five more minutes to deliberate. I do not think this is fair to the presenters or to us. Why?
A clear picture of a project cannot be accurately painted in just 10 minutes. With only five minutes to clarify issues and another five to decide a project's fate, you are lucky to get a word in edgewise (if you can think of something worth saying fast enough). We invariably take more than the allocated time for questions and deliberation, and this makes for very long meetings.
There have been efforts to streamline this task, including providing shorter write-ups prior to the meetings and holding more meetings. Whether by design or by accident, the write-ups have arrived well in advance of meetings or just before them. But these measures do not deal with the core issues.
One core issue is having enough information to evaluate. The other is how we evaluate that information: a panel cannot possibly provide feedback of sufficient quantity and quality in such a short period of time.
I do not think it right to be a representative from my field without bringing some unique value or perspective. I am not a cushion to decorate or warm a seat. I am not a complainer either. I have made a few suggestions.
If we were to bring back the longer write-ups but disseminate them based on our interest areas or areas of expertise, this could result in better use of our time and in better evaluation feedback for presenters.
During the presentations, panel members could listen actively by using an electronic rubric (say a Google Form) with clear criteria and a means to collate data that members submit. We could use the same form to ask questions or state concerns.
All the responses should be viewable to all members so we can see for ourselves what common concerns or unique perspectives we might have. Think of this as some sort of backchannel.
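To make the idea concrete, here is a hypothetical sketch of how a panel's rubric responses might be collated once submitted through a form. The criteria, panel members, scores, and comments below are all invented for illustration; any real form export would look different.

```python
# Hypothetical sketch: collating panel rubric responses so every member
# can see average scores and shared concerns per criterion.
# All names, criteria, and scores here are invented for illustration.
from collections import defaultdict
from statistics import mean

# Each submission: (panel member, criterion, score out of 5, comment)
responses = [
    ("Member A", "Clarity of goals", 4, "Well scoped"),
    ("Member B", "Clarity of goals", 3, "Outcomes vague"),
    ("Member A", "Feasibility", 2, "Timeline too tight"),
    ("Member B", "Feasibility", 3, ""),
]

def collate(rows):
    """Group scores and comments by criterion so the whole panel
    can see averages and common concerns at a glance."""
    scores = defaultdict(list)
    comments = defaultdict(list)
    for member, criterion, score, comment in rows:
        scores[criterion].append(score)
        if comment:  # keep only non-empty remarks for the backchannel
            comments[criterion].append(f"{member}: {comment}")
    return {
        criterion: {"average": round(mean(vals), 2),
                    "comments": comments[criterion]}
        for criterion, vals in scores.items()
    }

summary = collate(responses)
for criterion, detail in summary.items():
    print(criterion, detail["average"], detail["comments"])
```

A shared spreadsheet behind the form could do the same job without any code; the point is simply that collation should be automatic, visible to all members, and organised by criterion rather than by speaker.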
I think that these measures would allow presenters to make their case more fairly and evaluators to ask more timely and critical questions.
This article begins with an intriguing question: When is a test not a test? It cites a tweet by @Scott_E_Benson:
The future of testing will be tests that students, teachers and parents do not think of as tests.—
Scott Benson (@Scott_E_Benson) April 26, 2012
Then it dances around the benefits and pitfalls of tests before suggesting how one might assess and evaluate without the tests that we are most familiar with.
It suggests gamification and gaming strategies. It suggests portfolios, self-assessment, and peer accountability. It suggests measures that are more progressive than the quality control tests that are relevant only for the industrial age.
Thinking gamers might tell you that they are being tested all the time but the tests do not feel like tests. That is when a test is not like a test.
A game can be pure fantasy, be based on reality, or be a hybrid like the one featured above. Unlike most video games, this one does not have obvious quests and thus mirrors much of life.
It has also been said that, unlike school, life throws tests at you whether or not you are ready. When that happens, you experience a knowledge gap and you need to problem-solve. That makes the seeking, analysis, and use of information relevant.
Despite the surprises that life throws at you, this form of insidious testing seems natural. School-based testing does not.
Like other creatures in the animal kingdom, we start learning by play. Why not be tested by play?
The Onion, the news satire website, is always good for a laugh, that is, provided you know that it’s poking fun at real life events or people!
One of their latest “news reports” was a stab at Justin Bieber (gag!).
But not everyone realized that it was satire. Here is a snapshot of a group of local students and a teacher having a Facebook conversation about it. I have blocked out the names and faces to protect their identities. (Bieber, on the other hand, needs no protection. Quite the opposite, really.)
It’s enough to make you cry. I’m not referring to The Onion, but to the use of English and the digital ignorance.
I won’t say much about the teaching and learning of English because that is the domain of English teachers. I will say that what I have captured is quite typical and yet still decipherable. (It is almost impossible to read the tweeny and teeny tweets that come my way accidentally because my handle is @ashley.)
What worries me is that the analysis and evaluation of digital resources does not seem to feature prominently in our schools. It is not taught or modelled in any significant way. You don’t need a special course or teacher to do this. It should be done in every academic subject by every teacher!
Yes, what I have captured is a snapshot. But any teacher who takes advantage of social media experiences this every day, perhaps several times a day. Put all these snapshots together and you see the bigger picture.
We need to teach our learners how to peel onions (or Onions) apart, layer by layer, to figure out if they are edible (have any worth). The process won’t be pleasant, but they must do this because they already live, study and work in the digital world.
The BBC has an interesting article on educational policy: Signs of a turning tide on tests. It was interesting to me because the Commons Schools Committee advocated that stakeholders “trust the teachers” instead of relying heavily on things like standardized tests.
From the article:
The report did not argue for an end to all external assessment. But it called for a shift toward more within-school, teacher-led assessment. This, it said, would save not only money but also a lot of the teaching time that is lost to exam preparation and administration.
And this is the key point: it is not about dropping school accountability altogether, but about making sure it does not obstruct teaching and learning.
I hope that the UK does this while the rest of the world watches and learns. I also hope that we in Singapore act on this same issue before it is too late.
I think a scheme like this will work only if a) teachers are treated and nurtured as professionals, b) we expect them to behave as such, and c) we hold them accountable for what they do. The measure of accountability should not just be exam results; otherwise the test tide will return.
Instead, evidence of student ability, attitudes and skills could be recorded in portfolios, community involvement, personal and group projects, etc. In other words, more authentic, meaningful and rigorous assessments.
It’s almost the end of a long teaching semester. For reasons too long and boring to mention, some of my colleagues and I had to start next semester’s teaching this semester.
The two things that usually happen at the semester’s end are I fall ill and I think about what to do next. So I type now before the flu completely takes over!
One thing that doesn’t usually happen at the end of the semester is a huge grading load to process during the break, before the second half of the course resumes next year. This is why I found Siemens’s recent comments on grading and evaluation particularly relevant. Some snippets:
Grading is a waste of time. We only do it in schools and universities. It’s a sorting technique, not truly an evaluation technique. Iterative and formative feedback is what’s really required for learning.
Agreed! Our teacher education university is still in sorting mode, but for reasons that are no longer relevant. Why? First, trainees are selected by interviews (coarse sorting). Second, the few bad apples who beat this filter, or the trainees who cannot handle teaching, will drop out on their own (self sorting). Third, the sorting is based only on academic results. If anyone wants to sort trainees, do so along the lines of their values and attitudes, because they must be role models and lifelong learners (yes, values and attitudes can be measured). Lastly, even after they are sorted, teacher trainees graduate and end up in schools irrespective of their grades. It is not as if A-grade teachers end up in some schools and C-grade teachers in others. So why sort at all?
Siemens concludes with:
The authors of the HASTAC post are not trying to do away with grading (as I would suggest we should). They are trying to use technology to make grading more “modern” or “in line” with society’s needs today. I think that’s exactly the wrong way to go about it. Question the model, don’t modernize it.
Thought-provoking and something I thoroughly agree with. If you consider the concepts of assessment of, for and as learning, I’d argue that most of what we do is, at best, only assessment of learning. Furthermore, assessment is just a number. Unlike evaluation, the value of that number is not made clear.
So what’s on my agenda? This year my guiding principle as I facilitated the ICT course was to get my teacher trainees to use what their students were already using in terms of technology. Next year my approach will likely be “question the model, don’t modernize it”.
I started thinking about how we assess teacher trainees here in NIE. While my thoughts are still forming, I think that we should be evaluating them instead.
It never ceases to amaze me how well some of my teacher trainees write when they reflect in their blogs. It also never surprises me how poorly constructed some of their written assignments are. I am reminded of the latter now that I have collected essays from all four of my classes.
I have blogged before about how I enjoy reading my trainees’ blogs because it allows me to get inside their heads. Blogging seems to give them the time to think deeply and write thoughtfully.
So should an essay assignment. But an assignment is not necessarily as meaningful as blog entries. The assignment is high-stakes, defined by a small group of “experts” and heavily influenced by university policies. I think that an assignment makes the most sense to the teacher and not the student, but students do them because they have to and are conditioned that way.
Assessments only put numbers or grades to performances. They measure cognitive ability and often only just that. Evaluations, on the other hand, determine the VALUE of a test, a portfolio, a performance, etc. If we want to start thinking about “alternative assessments”, we should really be asking ourselves how we are evaluating teachers-to-be.
I would love to see a portfolio system become the norm in teacher education. After all, teachers are practitioners and they must show evidence of what they can do, not just write about it or create flashy presentations of what they might do.
I think that reflective blogging has a place in the portfolio system. Both the teacher trainee and I can monitor how their knowledge, skills, attitudes and values change over time. Much of what is written is self-reported, and skeptics might argue that teachers-to-be might say only the “right” things. I’d argue that such skeptics do not get the idea of portfolios, nor have they taken the opportunity to build trust and to emphasize core educator values.