Last year, artificial intelligence entered the global spotlight when large language models such as ChatGPT displayed significant technological advancement in generating text, audio clips, and pictures. Since then, professors at ATU have had to adapt to keep up with the use of AI in their classrooms.
When ChatGPT gained the capacity to generate essays, many plagiarism detectors, such as TurnItIn, added an AI detector to their systems. However, some professors have found this tool unreliable and unnecessary. One such professor is Dr. Jacob Siebach, an assistant professor of geology.
“I had students last semester for my geology class that were using AI for the problem sets … It’s easy to tell. Just most students don’t write that well. I mean, most people don’t write that well. It’s the level of writing you see in a textbook or an official article,” Siebach said.
On the other hand, Siebach said that catching AI in essays isn’t always so straightforward.
“If you think that 30% of your class is potentially doing this, well, I mean, you’re talking about hours and hours of work,” Siebach said. He continued, “So, the real question is ‘How much time would you like to spend being a detective?’”
Because of this, he has changed his essay assignment and adopted a writing guide by psychologist Dr. Jordan Peterson that breaks the process down into steps, making it easier for students to write legitimately rather than turn to AI. This way, he can ensure that students are learning the content rather than manufacturing an essay.
While AI has presented challenges, it has also provided a wide range of benefits, according to Dr. Jason Warnick, a professor of psychology and behavioral sciences.
Warnick cited a couple of examples of how he has used AI, ranging from planning one of his vacations to helping him train his dog. He also mentioned a new AI program named “Sophia” that is able to compile and analyze psychological test results, saving school psychologists hours of time.
“Now they have, you know, four to six hours that they can devote to other things,” Warnick said. “Actually working with kids, one-on-one, providing counseling services, doing things like that.”
Warnick also said that these past few years have reminded him of the beginning of the internet.
“It reminds me of that time in a lot of ways where I think faculty might be more aware of it than the students are. And when you do see students using it, it’s still at that clumsy level,” Warnick said.
Warnick also suggested that students consider using programs like Google Docs to track their writing process as a way to prove legitimacy, because AI detectors and AI generators are, as he put it, locked in a “cat and mouse game,” each trying to keep up with the other.
“We shouldn’t be scared of AI. It is a tool for us to use,” Warnick said.
However, he also said that AI is not a substitute for skill. Rather, it should be used as an assistive tool whose output is constantly checked for errors. According to Dr. Edward Greco, a professor of electrical engineering, one error that AI is known to produce is a “hallucination”: sources, references, or pieces of information the AI has invented that do not exist. He said this has been a known problem with large language models such as ChatGPT for some time now, though the problem is getting better.
Siebach suggested that AI could make skills such as coding more accessible to the public. Greco, however, said that this wasn’t entirely the case.
“Packages [AI] can generate code for you,” Greco said. “But without having some background in coding, you may or may not get what you want. So, I don’t think it makes everybody a computer programmer.”
Greco also noted that the technology itself is not new. “It’s not been available to the general public like it is now,” Greco said. “But it’s been used in academia and the industry for some time.”