Posts Tagged ‘feedback’
One of my favourite tools for written feedback is Turnitin’s document viewer.
The name “viewer” is a misnomer since the tool also allows an evaluator to annotate digital documents.
The screenshot above is of the side bar of the tool and this is what makes it useful and powerful. (I pixellated some of the customised content to respect the work of others and left one of my examples plain to see.)
As a lone evaluator, I can add frequently-used comments to the side bar. When I notice something in an essay that triggers concern, I highlight some text in the essay and click on a button that represents that comment.
For example, I might find that someone has a misplaced trust in “learning styles”. I highlight those words in the online document and click on my “learning styles” button. My entire comment (text in bottom window) is added to the document as feedback in a speech bubble.
Users who receive feedback do not need to install anything or have a browser extension. They revisit their graded work and hover a cursor over the speech bubbles in order to read the feedback.
Even better than this convenience is another affordance: When evaluating as a group, each member can add their own comments which other evaluators can see and use. The tool becomes a pool of distilled wisdoms in the form of critical feedback.
Unfortunately, this tool is limited to educational institutions that pay for the service and add-on. I had long wished for a similar tool that was more open and preferably free. I might have found something close in the form of JoeZoo.
JoeZoo is a Chrome extension and I have yet to explore it fully. It promises the efficiency of reusing comments, but it does not seem to pool shared wisdoms.
As with Turnitin, students receiving feedback on their work do not need to install anything on their computing systems to view the feedback.
If you are security-conscious like me, you might block third-party cookies in your browser. If you do, JoeZoo will not work. To get around this issue, you will need to create a cookie exception for [*.]googleusercontent.com.
My ideal feedback and grading tool would be a hybrid of these two tools.
- Very simple to use like Turnitin’s side bar
- Visually appealing for teachers and students like JoeZoo
- Shared or pooled comments like Turnitin
- Free and open like JoeZoo
Disclosure: I have not been paid or otherwise compensated for mentioning Turnitin or JoeZoo.
Formative feedback: It is a pillar that upholds learning. Without it a student gets grades and the learning stops. Why? The student does not know what exactly went wrong or right, and why. As a result, that student does not reflect and change strategies.
Ideally the feedback is meaningful and timely. For feedback to be meaningful, the student needs to know: Why is this important? How do I make sense of it? For it to be timely, the student wants to know: How soon can you give it to me? Am I ready to receive it?
Trying to provide feedback that answers these questions is a big problem for any educator. The problem has a bad side and a good one.
Feedback is often given from the point of view of an expert who cannot remember what it was like to struggle with learning. This creates a disconnect.
An educator trying to provide good feedback will also realise that quality soon leads to quantity. This could be in terms of time spent with individuals, or the amount of written or otherwise recorded feedback.
Both these problems stem from the fact that traditional grading and feedback depends on an audience of one — the teacher. The students write for one person, and that person has to be director, manager, applauding audience member, performance critic, publicist, and popcorn seller.
There is far too much for one person to do and too little time to do it in. So what is an educator to do?
Some might point to the future of artificial intelligence (AI). Already some AI can fool very educated academics into thinking that another expert gave them feedback.
However, most teachers need solutions now. Not solutions like robots that scan bubble sheets or testing programmes that tally answers. Those tend to be summative, relatively quick, and as they involve grades, may not focus on learning.
Current technologies for providing feedback on written work (like Google Docs, Kaizena, JoeZoo) or performances (like video capture and annotation) require an investment in time.
Again, there is far too much for one person to do and too little time to do it in. So what is an educator to do?
The problem presents opportunities. These are good problems and I present them as questions.
- How might our learners be more peer-driven and collaborative?
- How can we be more open so that other experts contribute to the process?
- How might the tasks be more authentic or otherwise more representative of the wider world?
- How do we filter noise from signal?
If there is power in peer teaching, then the same could be said about peer assessment. While learners do not have the same content expertise or thinking ability as an expert, they are cognitively closer to each other than the teacher is to them. They will use language and examples in ways that a teacher cannot.
Opening up assessments to a wider audience also places peer pressure on learners. The wider audience could include other educators and experts in the field.
So far the suggestions operate in the classroom and academic bubble. Step outside of it and consider what happens in social media and YouTube: Feedback is constant, brief, brutally honest, occasionally pleasant, but always real. It can be messy and the learner has to decide what is important to take in and what to ignore.
Even before students leave school or university, they are already operating outside that bubble. When they eventually leave, they will spend even more time there. They are learning how to operate in the social media and YouTube world largely without the benefit of a teacher. Their audience of potentially many is missing that audience of one.
Why are we not using strategies that already work outside our bubble? What is holding some of us back from embracing the new normal in the wider world? What is more important: Our fears or our learners?
Facilitating a course at a university means there are assignments to grade and provide feedback on.
The assignments I grade are cumulative — #1 is the foundation of #2 which leads to #3 — so they increase in complexity. They take longer to process too.
However, some semesters are interrupted by breaks and holidays, effectively dividing a cohort into two. This was one of those semesters: the two batches were separated by about two weeks, so the teaching and grading got a bit confusing.
Some weeks ago I was facilitating module 1 for two classes and module 3 for another set. The facilitators also swop classes so we get to see almost everyone, but this might be confusing for the learners as well.
This week I also have assignments crossing lanes and piling up because of that.
Educators worth their salt know how important it is to give timely feedback, so our group of facilitators gave itself a rough target of a week between electronic submission and electronic markup plus feedback.
Any educator honest enough will also tell you how easy it is to get worked up while doing some serious grading. So it was nice to receive and recall some bouquets out of the blue. For example, one email query ended with a bouquet like this:
At the end of my sessions, I use the one-minute paper strategy in Padlet so my learners can express what they will take away. I give them the option of leaving feedback as well. Here are a few from the semester so far:
As I receive these rose petals, I am aware that the feedback that I provide might look like thorns.
I make the effort to highlight what is good about what I read in the assignments. After all, if positive feedback feels good and energises me, it will do the same to my learners. However, there are two things I watch out for.
One, if the feedback is positive but not specific, it might have a feel-good factor but it goes nowhere. Two, the positive feedback must be deserved, not given for its own sake.
Every rose has its thorns. If you are going to pick roses, be prepared to get pricked. My feedback might feel thorny, but I am being cruel to be kind. If I do not highlight mistakes now, my learners will carry them forward and accumulate them.
As my learners’ final assessment is performance-based, I chose to be strict with their drafts and “scripts”. Better to hear the tough words and listen to the unpleasant music now than to be booed on stage later when it matters.
My reflection starts with an Apple Pay verification process and ends with lessons on teaching and assessment.
When Apple Pay launched in Singapore in May, I jumped on the bandwagon by verifying one of my credit cards. The process was quick and painless: Scan the card details into the Wallet app and verify the card by SMS.
I tried the process with another eligible card, but did not receive SMS verification. I put that down to early implementation issues.
However, I tried about ten times between the launch in May and this month and was still unsuccessful. The Wallet app provided the alternative verification process of calling the credit card issuing bank’s customer service.
I dread using such customer “service” because the process makes me feel like a rat being tested in a maze.
I had to get through several layers of number pressing before getting the option to speak with a human. Once there, I was informed that they were “experiencing a high call volume”.
I missed having an old phone with a receiver I could slam down.
This particular bank provided the option of leaving my contact number so that I would receive a call-back in four hours. That must have been some really high call volume!
I received one shortly before the four-hour mark and explained how I did not receive SMS verification for Apple Pay from that bank’s system. I also mentioned that I had done the verification for another bank’s card quickly and seamlessly with the same process.
The customer service representative (CSR) was puzzled, checked the messaging records, and told me that SMS had been sent to my phone. I wanted to reply that I was not an idiot, but I bit my tongue. I repeated that I did not receive any despite several attempts over two months.
The CSR then advised me not to use my bank-issued security dongle. I told him that the dongle was irrelevant because it was not a verification option in Apple’s Wallet app. So he said he needed to look into my case and asked if he could call me back in an hour.
As soon as we disconnected, something connected. A long time ago, I blocked a few of the bank’s SMS numbers because I kept getting marketing messages despite telling them I did not want any. I wondered if the SMS verification shared one of those numbers.
I figured out how to unblock the numbers and tested the SMS verification for that bank card. It worked as quickly as my first card.
This was not the fault of the bank. It was mine for blocking the numbers, irritating as their messages were.
I reminded myself of two lessons on teaching:
- You should not just stick to a script. It is important to first listen to the learner’s problem before suggesting a learning solution. The CSR’s advice not to use the dongle was obviously part of a recommended script, but it was irrelevant in this context. Mentioning the dongle not only failed to help, it added to my frustration.
- Thinking out loud is one of the best ways to learn. I knew what the symptom of my problem was (no SMS from the bank), but I did not know its root cause (I had blocked some SMS numbers). Speaking to someone helped me pull thoughts to the surface and helped me find my own solutions.
When the CSR called back, I explained how I had solved the problem myself. He was relieved. I was relieved.
Right after we disconnected, he triggered an SMS to me to rate the customer service by text. It was like being pranked.
I did not respond to the SMS because the ratings were too coarse: Below, Meet, Exceed.
The phone service took place over more than one call and had multiple components. Averaging the experience was not meaningful. Detailed feedback on what was good or not good about the experience and analysing a recording of the exchanges are more tedious but better options.
I thought of two lessons on assessment:
- The administrative need to collect and collate data drives such bad practice. Collecting the data does not make it meaningful or help CSRs improve. Administrative needs should not drive assessment.
- The average rating approach is a hallmark of summative assessment. It is grading an experience. If the CSR received “Exceed”, did he get a pat on the back? If the feedback was “Meet”, would he just keep reading from scripts? If the grade was “Below”, what can he do with that information? Good assessment is based on quality feedback, not just grades.
It does not take special events, teacher observations, prescribed professional development, or even a personal learning network to learn how to teach or assess better. The lessons and reminders are everywhere, even in the Apple Pay card verification process. You just have to pay attention.
It has taken me a month into 2015 to change two things in my social media presence.
First, I have updated my Twitter profile. I used to mention that I was the Head of the Centre for e-Learning. Proud as I am of what I was and did, that is the past.
Now I merrily proclaim that:
I’m a child in an adult’s body wanting to show other adults how to educate with technology. Founding member of #edsg & member of TEDxSG Brain Trust.
This will invariably invite questions and comments both online and offline. It is also what I am and do.
Second, I am also doing something a bit different at my Presentations page. Not only am I providing some insights into why I designed a presentation a particular way, I intend to share some audience feedback if I have it.
I am not including all the feedback there though as doing this seems self-congratulatory. I will keep those as Twitter favourites, backchannel comments, or other feedback channel artefacts. Such encouragement serves as support for the glass stage on which speakers stand.
My workshops are like roller coasters. By this I mean the rides I design for my participants and the feelings I generate.
I take the work in workshops seriously. I believe that for people to learn they must not just do, they must work actively and meaningfully. So I challenge workshop participants to teach, create, destroy, and rebuild.
This leaves them experiencing highs and lows, twists and turns. The lows might be frustrations in the form of obstacles or problems; the highs are the a-ha moments or the positive feedback they get from their peers. But just when they think they have got something, they question what they know at the next turn.
Even though it sounds unpleasant, there is a simultaneous rush and relief at the end of the ride. The best part is the feedback: It was over too fast. Can I go again?
I hid yesterday’s entry from general view because of some possibly sensitive information. But I share the second, generic half of my reflection here in case it helps someone. It is about getting timely feedback directly from your learners.
… instructors who take things into their own hands can create simple Google Forms to get feedback if they need to quantify things. I also ask for feedback regularly on Edmodo. If you do this, you should be aware that your learners may take many surveys and you will want to keep things simple.
One of my participants remarked: “I always receive instant feedback for my assignment and I appreciate that.” I know they do, which is why I go out of my way to respond as quickly as I humanly can.
Hattie, in his synthesis of meta-analyses, identified feedback as among the most powerful influences on effective instruction. He summarised it this way: The most simple prescription for improving education must be “dollops of feedback.”
I do not think I do dollops, but I try to offer timely feedback.
Just as learners appreciate timely feedback, so do instructors. If you do not get this feedback as an instructor, you can seek it by taking matters into your own hands. If you leverage technology like Google Forms and Edmodo, you have data that you can use to your advantage.
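If you do collect feedback through something like Google Forms, the responses can be exported as a CSV file and tallied with a few lines of code. Here is a minimal sketch in Python; the question columns and sample responses are invented for illustration, as a real export will use whatever question titles you set in the form:

```python
import csv
import io
from collections import defaultdict

# Invented sample standing in for a Google Forms CSV export.
sample_export = """Timestamp,Pace of lesson (1-5),Clarity of feedback (1-5)
2015/01/05,4,5
2015/01/05,3,4
2015/01/06,5,5
"""

def average_ratings(csv_text):
    """Return the mean rating for each numeric question column."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        for question, answer in row.items():
            if question == "Timestamp":  # skip the non-numeric column
                continue
            totals[question] += float(answer)
            counts[question] += 1
    return {q: totals[q] / counts[q] for q in totals}

print(average_ratings(sample_export))
```

Averages alone are coarse, as the Apple Pay episode showed, but they can flag which questions deserve a closer look at the open-ended responses.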