Another dot in the blogosphere?

Posts Tagged ‘plagiarism’

I have been facilitating a series of workshops for future faculty for several semesters. I also provide feedback on written assignments and conduct performance evaluations.

This academic semester is the first to worry me because of a few cases of plagiarism.

But first, some background.

Like some universities, the one I work with relies on Turnitin to pre-process assignments. Turnitin is embedded in the institutional learning management system (LMS) and provides summary scores of the assignments based on how much they match the ones already in the Turnitin database.

Plagiarism is a huge sin in academia. It is passing someone else’s work off as your own. If severe, it can get a faculty member fired or a student expelled.

Plagiarism is a human intent to cheat. It is an attitude or belief system that manifests in behaviours. Algorithms can try to make sense of patterns that result from those behaviours, but they cannot judge if a person has plagiarised.

So I do not take the matching scores at face value. As I have explained before, a high score might not be evidence of plagiarism while a relatively low score might hide it.

This semester I detected a few cases of plagiarism in the assignments of every group of adult learners I processed. I used to get a clear case or two once in a blue moon. The number of cases this semester made it feel like there was an epidemic.

Thankfully there was no epidemic of dishonesty. But one case is one too many because the adult learners I educate today are tomorrow’s lecturers and professors.

So I provide a warning and follow the procedure of arranging for counselling by a central office.

The common response to “Why did you do this?” is “I did not know”. I do not buy that since there are briefings about plagiarism and its consequences as well as an academic culture that avoids it.

“I did not know” might be an authentic answer. It might also be a convenient excuse. Both stem from not attending the briefings, or being new or blind to the culture. If this is the case, there is a more serious problem than skipping briefings or experiencing the world blind and deaf. It is being immune to change and living by another phrase: “I do not care”.

More on this phrase tomorrow.

Every semester I provide formative feedback on written work submitted by graduate students. Before I do this, the students submit their assignments to Turnitin via an institutional LMS to determine the extent to which their work matches other work in the database.

Every semester I get at least one email from a concerned student worrying about the matching score. The worry is good in that the plagiarism talks they attend have an impact. However, the worry is bad because they misunderstand what plagiarism is and how tools like Turnitin work.

Turnitin runs on formulae and algorithms. It has a huge database of references and previously submitted work. Any new student work is compared against this content. The extent to which the new content matches with the existing work is a percentage that I call the matching score.
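Turnitin’s actual algorithms are proprietary, but the idea behind a matching score can be conveyed with a toy sketch: count how many overlapping word sequences in a submission also appear in a reference corpus. Everything below (the 5-gram size, the tiny corpus) is illustrative only, not how Turnitin really works.

```python
# Toy illustration only: Turnitin's real algorithm is proprietary.
# Here, the "matching score" is the percentage of a submission's word
# 5-grams that also appear somewhere in a reference corpus.

def ngrams(text, n=5):
    """Return the set of word n-grams in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def matching_score(submission, corpus, n=5):
    """Percentage of the submission's n-grams found in the corpus."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    known = set()
    for doc in corpus:
        known |= ngrams(doc, n)
    return 100.0 * len(sub & known) / len(sub)
```

A verbatim copy scores 100%, unrelated text scores 0%, and a partly copied submission lands somewhere in between. Note what the number does not tell you: whether the matching text was quoted and cited properly.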

Some students seem to think that the matching score is the same as plagiarism. This is not necessarily the case.

If a student uses a template provided by a curriculum committee or tutor, the headers and helping text will match. If another student correctly and ethically cites common quotations and lists references, these will match other existing work. All this means that the matching scores go up, but it does not mean the students have plagiarised.

In 2009, I provided examples of how the scores alone are not valid or reliable indications of plagiarism. A low score could hide plagiarism while a high score could come from the work of a conscientious student with lots of correctly cited references.

Both the students and I have the benefit of not just the quantitative matching scores, but also the qualitative highlights of matching text. The latter should allay fears of plagiarism or flag what is potentially plagiarism. The student can take remedial action and I can determine if a score actually indicates plagiarism.

The problem with the system is the human element. Grading teams, administrators, librarians, advisers, and supervisors often arbitrarily set ranges of matching scores to mean no plagiarism, possible plagiarism, or definite plagiarism. The numbers are an easy shortcut because they take out human decision-making. The reports with highlighted text require reading and evaluation and thus mean a bit more work.

Both faculty and students need to be unschooled in focusing on numbers and playing only the numbers game. Life is not just about what can be quantified. Neither is the quality of a student’s assignment and their mindset on attribution.

I hope that a particular link does not work by the time you read this. Why? Every now and then someone will copy entire entries from my blog and paste them into their own blog without acknowledgement or my permission.

How do I know this happens? Pingbacks.

I received an email notification yesterday that my reflection on the cost of textbooks was pinged in another of my musings about novice teaching mistakes.

Blog post plagiarism.

Screenshots of my blog entry on the left and the copy-and-paste job on the right.

I am choosing not to share the link to the plagiarising blog. Instead I share screenshots of my original and the copy [1] [2].

On visiting that blog, I noticed that it harvested an assortment of posts probably in a bid to get attention, increase its search engine optimisation (SEO), and benefit from ad revenue.

I will be taking steps like contacting the owner to have the copied entry removed, asking Google to counter the copy’s SEO attempt, and notifying the copycat’s web host.

I know for a fact that people steal the ideas that I share as I reflect openly in my blog. But I am not too worried.

When I say steal, I mean that people take the credit for my work (or even make a profit off an idea) and fail to properly attribute me or my blog as a source.

Part of the problem lies with the prevalent mentality that “if it is online, it is free for all”. That could not be further from the truth from a legal standpoint, but try arguing with the thieves and you will get nowhere.

It is sometimes difficult to lay claim to an idea or definitively identify the source of an idea. There are very few unique ideas. All of us stand on the shoulders of some other giant.

The knee-jerk reaction is to not share at all or to create in a closed environment. I do not think this is helpful because it does not allow for a diversity of ideas that result from cross-pollination.

Another reaction is to remain open. I do not mind if some of my ideas get taken and developed for the greater good. But I do ask that people respect the Creative Commons license I share them under (scroll down and look to the right).

Putting your ideas online, well formed or not, will date and time-stamp them. In the absence of a patent or intellectual property office, this allows you to lay claim to an idea quickly and freely.

That aside, I believe that what goes around comes around. If you steal or fail to give credit where it is due, your actions will return to haunt you. You will get away with it some of the time, but you will not get away with it all of the time.

Image: “Victorian mindmapped man” by LukePDQ on Flickr (Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License)

A teacher laments that we have a problem when she finds out that a student cheated on a class assignment. I agree with that teacher, but not in the way you might expect.

The complaint and the rest of the story are told at Teachers Put to the Test by Digital Cheats. (Many thanks to @hychan_edu for sharing this.)

While the article says that the problem lies with students (the erosion of values that comes with ease of access to information), I think that is only half the story. The missing half is the problem that lies with teachers.

If you set questions that a student can Google answers to, the problem is yours. You set the wrong type of question. If a student would use Google in real life, why would s/he hesitate in the classroom context?

If you set a complex question that a student can get a complex answer to thanks to an answer mill AND you have no idea that this happens, you have a problem. But in this case I agree that the student has a problem too.

You cannot just blame the cases of cheating on the ease of access students have to resources and to each other. This is the world we live in. As technology evolves, behaviours change and so do some values. One teacher acknowledged this:

What the educator needs to do is adapt to the age of technology and change the question… Maybe what (students) are learning should change. Maybe how they’re learning should change. Now the challenge to me is to match that technology and say what I’m doing needs to change.

Not maybe. Definitely!

I think the deeper problem lies with the mindsets of teachers and students.

I think some teachers do not recognize that they are setting bad questions and/or not keeping up with the times.

There will be an erosion of student values if teachers do not go beyond talk to walk. Talk example: Plagiarism is wrong and this is why. Walk example: I caught you cheating and this is what is going to happen. Another walk example: I was tempted to plagiarize but this is what I did instead and why I did it.

I am not absolving kids of the blame. I am saying that they are a product of their environment and their nurturing. We, as adults, shape both.

The other interesting thing about the article was right at the end. The author mentioned that the root of the cheating could also be attributed to the need to do well in tests:

Anderman, the Ohio State researcher, said one thing has been proven to cut down on cheating, but installing it would require a sharp cultural change in an educational system that is placing ever more importance on test results.

“The bottom line in our research is pretty simple,” he said. “Where teachers are really emphasizing the test, you’re more likely to get cheating. When teachers are emphasizing the learning more than the test, you get less cheating.”

One of the things I did prior to grading essays was to submit all 130 of them to SafeAssign (SA). This online tool is embedded within my university’s learning management system (LMS) and it compares each essay with other works in a large database.

Both my trainee teachers and I get to use SA to remove possible instances of plagiarism. My trainees get to submit a draft of their assignments which they can then edit after SA provides them with a report; I get to submit their final versions.

But there are at least two things that I do not like about its use. Don’t get me wrong. I like SA as it is a useful tool. It is the implementation of SA that bugs me.

The instructions in the LMS used to refer to the “plagiarism score” that users see in a post-submission report. It has thankfully been corrected to “matching score” because that is all it is: The report that my trainees and I can view indicates how much of a person’s work matches someone else’s. It is plagiarism only if that person opts not to take corrective action, e.g., properly citing another person’s work. But the common lingo used by many students and instructors alike is “plagiarism score”. This assumes guilt before action!

My second bugbear is my institution’s guidelines on what score indicates plagiarism and what does not. While I do not have the liberty to say what that percentage is, I think that any number is ridiculous. For the sake of argument, let us say that the number is 33%. This means that up to a third of a person’s essay can match someone else’s and that person is safe.

I think that this number does a disservice because it 1) provides a “safe zone” and 2) hides actual plagiarism. If a person has a score of 30%, then s/he is not obliged to edit his/her work further to bring the score down. This perpetuates a wrong value system.

I’d also argue that someone with a score of just 15% could have plagiarised work while someone with a score of 40% might not. For example, the first person (with the 15% matching score) could have copied an entire paragraph, but s/he is within the safe zone. The second person (with the 40% matching score) could have lots of references which were properly cited. These cited works invariably bring up the matching score, but the second person is not guilty of plagiarism. This is the numbers game. Looking only at the final matching scores assumes the first person is innocent and the second person guilty, when in reality, the opposite is true.
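The asymmetry can be made concrete with a minimal sketch. The 33% threshold below is the hypothetical figure from the argument above, not any institution’s actual cut-off:

```python
# Hypothetical threshold policy: flag a submission only if its matching
# score exceeds a fixed cut-off. The 33% figure is invented for the sake
# of argument, as in the text above.
THRESHOLD = 33.0

def flag_by_score(score):
    """Naive policy: judge plagiarism purely by the matching score."""
    return score > THRESHOLD

# Student with 15%: copied a whole paragraph, yet sits in the "safe zone".
print(flag_by_score(15.0))  # False -> plagiarism slips through
# Student with 40%: every match is a properly cited reference.
print(flag_by_score(40.0))  # True -> honest work is wrongly flagged
```

Any score-only policy produces both verdicts above, which is exactly why the highlighted reports, not the numbers, have to be read.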

Now any reasonable grader will realise this and examine the SA reports to determine if a 33% matching score is or is not indicative of plagiarism. But will they do this 130 times like I did? It really does not take that much time to eyeball the reports because the matching parts are highlighted and you can make judgments immediately. Out of 130 scripts, I have detected three clear cases of plagiarism. All three were well below the limit set by my institution.

Am I terribly concerned? Not particularly. Philosophically speaking, I think practices like mashups and creative commons licences will gain ground over concepts like intellectual property and copyright.

But practically speaking, I am more concerned about the trainee teacher behaviour that stemmed from attitude. Did the three who plagiarised bother to use SA to check their drafts? If they did that and read the reports, why did they opt not to edit their work? They may be good teachers by various measures, but if they do not have the industry or integrity to check their work, they are not good teachers in my book.

On further reflection, I realise how this reinforces what I thought earlier on issues surrounding grading. The assignment and the grading itself are not likely to reveal much to me about my trainees. But their actions around it already have.

[The following rant reflects my opinion and in no way represents the thoughts of my colleagues or policies of NIE. This is another medium for discourse and I encourage any stakeholders to comment openly and maturely.]

This week my trainees will be submitting their first assignment using a tool called SafeAssign (SA). SA is recognised mostly as a tool for detecting plagiarism.

I was one of a few faculty members who tested a similar version of this tool last semester. It worked well then because trainees were able to submit drafts of their assignment to SA one week before submitting their final versions. This allowed them to 1) experience the submission process, 2) revise their submissions based on the similarity score, and 3) evaluate the effectiveness of such a tool.

I call it the “similarity score” instead of the plagiarism score because SA simply reports the portions of their assignments that are similar to the database. So common references, questions, and even instructions also get detected and these are in no way instances of plagiarism!

What I really dislike about having to use SA this semester is that the draft submission is no longer possible (the draft option is there, but it does not work). I am told that this is due to limitations of BlackBoard and the new SA service provider. This means that my trainees have only one chance to submit their work and cannot revise it and resubmit it to SA. If they did, the system would compare their latest submission with the previous one and report a very high similarity score!

To be fair, some of my other colleagues are going to allow their trainees to submit hardcopies if they choose to revise their assignments. But I think this defeats the purpose of telling them to submit their FINAL versions of their assignment to SA. This also creates logical and logistical confusion on the submission process: Trainees in those classes have to submit hardcopies and the most recent softcopies via thumbdrive or email.

If we stick to just SA, all the final softcopies can be downloaded as one ZIP file! The documents can then be marked up and graded digitally… but, sigh, that is another step that people are not comfortable with. We are comfortable with allowing so much of our lives to be managed electronically, e.g., bill payment, purchases, changes in particulars, etc. But grading electronically and providing immediate feedback? Oh, no!

But back to my rant. The single-and-final submission is very bad modelling of how to use such a tool. I mentioned this in class, but I think I have to reiterate this point because it will be more real to my trainees this week. I should also remind those who had high “plagiarism” scores not to worry because most of the time the reports highlight exactly what is in common with the database, i.e., references and questions.

Like most tools, SA is dumb. It is only a means to an end (checking for plagiarism). Human judgment is still necessary. A report might indicate a high similarity score, but the trainee might have used APA referencing well only for SA to detect the references as common to another item in the database (here is one example). Another trainee might have a low plagiarism score but copied critical chunks of main text.

I’ll have to repeat the moral of the story to this batch of trainees: Let technology do what it does well (make rapid comparisons); let humans do what they do well (make informed judgements).
