I read an article on The Verge this morning which sparked my interest.
These students figured out their tests were graded by AI — and the easy way to cheat
The student in the article was taking an online test; it wasn’t multiple choice, but required short-form answers to questions.
…he’d received his grade less than a second after submitting his answers. A teacher couldn’t have read his response in that time, Simmons knew — her son was being graded by an algorithm.
What the parent found was that by padding an answer with a mix of keywords, or “word salad”, the system would mark it as correct. So the student could “cheat” the system!
The article itself stemmed from a Twitter thread.
Algorithm update. He cracked it: Two full sentences, followed by a word salad of all possibly applicable keywords. 100% on every assignment. Students on @EdgenuityInc, there's your ticket. He went from an F to an A+ without learning a thing.
— Dana Simmons (@DanaJSimmons) September 2, 2020
I posted the link to the article on Twitter (as I often do with links) and it generated quite a response. It didn’t go viral or cause a Twitterstorm, but the article got people thinking about the nature of assessment and marking when AI is involved.
Quite a bit of the feedback argued that this wasn’t cheating at all, but simply providing an answer to the question that the AI would mark as correct.
I'd be tempted to see this as students learning how to successfully complete assignments. Good students learn what you're testing. If what you're testing is not what you really want them to learn, they won't learn what you want them to. This is a problem with human marking too. https://t.co/PZ6j7qB3YJ
— Doug Clow (@dougclow) September 3, 2020
Others felt that this wasn’t AI.
That's not even AI if it just looks for keywords.
How pathetic edtech turns out to be.
— Joëlle Nebbe-Mornod (@iphigenie) September 3, 2020
I would agree: this was being called AI, but it was really a system that matched keywords in students’ answers against a list provided by the teacher. The system wasn’t analysing what an answer said or how it was written; it was a text-matching process.
A flawed approach to testing, which resulted in students being able to “game” the system and score 100%.
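To see why the word-salad trick works, here’s a minimal sketch of that kind of text matching in Python. The keyword list, threshold and function name are my own assumptions for illustration, not Edgenuity’s actual code, but the principle is the same: any answer containing enough of the expected keywords passes, regardless of whether it makes sense.

```python
import string

# A minimal, purely illustrative sketch of keyword-matching "grading".
# The keyword list, threshold and function name are assumptions,
# not Edgenuity's actual implementation.

def grade_by_keywords(answer: str, keywords: list[str], threshold: float = 0.5) -> bool:
    """Mark an answer correct if enough teacher-supplied keywords appear in it."""
    words = {w.strip(string.punctuation) for w in answer.lower().split()}
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return hits / len(keywords) >= threshold

keywords = ["photosynthesis", "chlorophyll", "sunlight", "glucose"]

# A genuine answer and a "word salad" both pass, because only
# keyword presence is checked, not meaning, grammar or coherence.
genuine = "Plants use chlorophyll to capture sunlight and produce glucose."
salad = "Photosynthesis chlorophyll sunlight glucose energy plant leaf."

print(grade_by_keywords(genuine, keywords))  # True
print(grade_by_keywords(salad, keywords))    # True
```

Both answers score as correct, but only the first demonstrates any understanding, and a matcher like this can’t tell the difference.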
The lesson here, for anyone looking at automated online assessment, is that if there is a way the system can be manipulated, then it probably will be.