As artificial intelligence gets “smarter” and becomes more commonplace, educators are trying to ensure students don’t submit assignments completed by AI.
Dr. Lloyd Smith, professor of computer science at Missouri State University, has tested ChatGPT on an assignment from his multimedia programming course. He noted that the work was pretty good but contained errors.
Smith also tasked ChatGPT with writing song lyrics, blog posts, a story proposal and a travel article.
His overall impression? Good but not outstanding – and certainly not “A”-worthy.
“The song lyrics were cute, but I don’t think they would be commercially competitive,” Smith said. “The blog posts and travel article were bland.”
In the classroom
In high school and college classrooms, essays are a common test of students’ knowledge and critical thinking.
But how can professors be sure that homework reflects a student’s ability rather than the AI’s?
“That’s where the academic integrity challenge lies,” he said. “Our students aren’t going to build skills if AI is doing their assignments.”
The first solution is simple: students complete essays in the classroom with a teacher present. But this isn’t ideal for larger assignments that can’t be finished in a single class period.
To combat this, Smith suggests using plagiarism checkers. He adds that researchers have begun studying how reliably AI-written work can be identified.
“Another approach that’s been suggested is to collect samples of a student’s writing then perform author-attribution analysis on assignments,” he said. “An AI program could probably do that analysis for us.”
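As a rough illustration of what that kind of analysis could involve, the sketch below compares a new submission against a student’s known writing using character n-gram similarity, a common stylometry technique. The samples, the library choice (scikit-learn) and the way the score is read are illustrative assumptions, not the specific method Smith describes.

```python
# Illustrative sketch only: a toy stylometric comparison, not Smith's method.
# Assumes scikit-learn is installed; the writing samples are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Writing samples previously collected from the student.
known_samples = [
    "I think the experiment worked because the control group stayed stable.",
    "My main takeaway was that the results depend heavily on sample size.",
]

# The newly submitted assignment to check.
submission = "The results depend a great deal on how large the sample is."

# Character n-grams capture spelling, punctuation and phrasing habits.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
matrix = vectorizer.fit_transform(known_samples + [submission])

# Compare the submission (last row) against each known sample.
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
print(f"Similarity to known writing: {scores.mean():.2f}")
# A low score might prompt a closer look; it is not proof of AI authorship.
```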
Simplifying AI
As a computer scientist himself, Smith is excited by the possibilities of AI.
He explains that AI stores many small chunks of input, then strings those chunks together in new ways.
“Because they’re regurgitating previously seen input, I don’t think we can expect them – yet – to do anything truly innovative,” he said.
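To make that description concrete, the toy sketch below uses a simple bigram (Markov-chain) generator – a far older and cruder technique than the neural networks behind ChatGPT – to show what “storing chunks of input and stringing them together in new ways” can look like. The training text and output are purely illustrative.

```python
# A minimal sketch of the idea Smith describes: a toy bigram generator that
# memorizes word pairs from its input and recombines them. Real systems like
# ChatGPT use neural networks, not lookup tables; this only illustrates the
# notion of recombining previously seen input.
import random
from collections import defaultdict

corpus = (
    "the song lyrics were cute but the blog posts were bland and "
    "the travel article was bland but the lyrics were cute"
)

# Record which words follow each word in the training text.
followers = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

# Generate new text by chaining previously seen word pairs together.
random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))
```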