How ChatGPT is affecting academic integrity at KPU
With an increase in students using AI tools in coursework, KPU is adapting to the new technology
Since ChatGPT, an artificial intelligence (AI) chatbot, launched in November 2022, it has permanently changed the landscape of academia.
It has since raised several questions, chief among them whether it threatens the concept of academic integrity.
Created by OpenAI, an AI research company, ChatGPT is a language model that allows users to compose emails and essays, summarize documents, and complete assignments.
Last month, the chatbot received an update that allows users to have voice conversations and interact using images. Whether it’s narrating bedtime stories or solving your troubles with the snap of a picture, the new update makes it possible for the chatbot to do it all.
Kwantlen Polytechnic University recognizes ChatGPT has the potential to both enhance education and hinder academic integrity. With AI writing being detectable, the unauthorized use of ChatGPT can be considered an academic integrity violation.
However, if an instructor permits it, the chatbot can be used for assignments when properly cited. For such assignments, the instructor may also ask students to provide the prompts they used and the outputs they received.
“It’s sort of like a really scaled up version of autocorrect on your phone,” says Kwantlen Student Association Advocacy Coordinator John O’Brian.
“It’s absolutely dominating the discussions about academic integrity. As far as I can tell, every school is looking at it, thinking about it, and having similar experiences, it seems.”
O’Brian says there was a considerable increase in academic integrity violations and allegations at the beginning of the COVID-19 pandemic. While the rise is generally seen as a byproduct of online learning, the number of cases did not go down after everything returned to normal.
While it appears a lot of people might be using the chatbot for assignments, O’Brian says it is hard to be sure whether the work is AI generated or simply poorly written.
“A lot of the hints that somebody who’s used one of these tools to write something is just that it’s very bad writing,” he says.
A student caught using the chatbot unethically undergoes an academic integrity process that begins with an email from the dean’s office about the allegation. If a student wishes to dispute the allegation, they can set up a meeting with a university representative to make their case. Based on the evidence, the representative then makes a decision.
Students have the right to appeal the decision in light of new information about the incident or if they believe that the process was not carried out accurately. They are also allowed to bring a support person to the meeting with the university representative whether it’s a friend, family member, or someone from the KSA.
Once a university representative determines a breach of academic integrity has occurred, sanctions are decided upon. The sanctions depend on the extent, impact, and circumstances of the breach.
A first offence often results in a failing grade on the assignment, a second offence in a failing grade for the course, and a third offence in suspension from the university.
One sign that a student has used ChatGPT in their work is stilted, repetitive language. To investigate further, a student’s writing style in emails may be compared to the assignment they handed in.
“It feels like it could fundamentally shift what we do at university, … and technological change works like that,” says Rebecca Yoshizawa, a sociology instructor at KPU. “Technology changes faster than society can adapt a lot of the times.”
Yoshizawa has experimented with ChatGPT herself, whether it’s designing assignments that allow students to make use of the chatbot, using it to prepare a course outline, or writing up a lecture.
She found that when students were allowed to use the chatbot, they recognized the value of their own work compared with what ChatGPT produced.
Yoshizawa says the chatbot usually reproduces widely held beliefs and ideas rather than complex or developed arguments. It lacks human imagination and puts forward mundane, often common knowledge.
“I found this is nothing new under the sun. When the internet first came out it was like, ‘Oh, this is going to change academic research.’ And it did, but we adapted, and we’re still here for better or worse,” Yoshizawa says.
“I found the best that ChatGPT can do is inspire a flow of ideas. So, when you’re trying to write something, sometimes it helps to start reading before you start writing. It inspires things to move.”
Yoshizawa says a lot of instructors feel threatened by the chatbot since it challenges the essence of being an academic or educator. One solution, she says, is to treat the chatbot as just another resource, used in ways that do not harm integrity.
Her other suggestion is a broader pedagogical shift in how students are taught and how assignments are designed to incorporate AI.
One way Yoshizawa has encouraged analysis and engagement in her classes is by asking questions that provoke analysis rather than asking students to report back memorized facts.
“I found that this has been really effective and motivated students to think the way that I want them to think in my classes, which is not just to memorize things, but actually engage and apply sociological skills to explain, describe, and explore real world situations,” she says.
Yoshizawa says academic integrity is not a list of things that can be checked off but rather “a moving target.” It is not a new concept but something that is continually being worked on and revised.
“We can hide, or we can embrace, and we can [find] different ways to embrace and different ways to hide,” Yoshizawa says.
She also says a punitive approach to the use of ChatGPT might not have the same effect as a non-disciplinary approach that would allow students to make use of the chatbot in a responsible way.
As an instructor, Yoshizawa says it can be disappointing to receive work that has questionable integrity from students, especially after the work the instructor has put into designing a course and assignments.
Amid the rise of generative AI, Yoshizawa says the way university leadership approaches its use will determine the nature of the relationship between students and instructors.
“I think that our university is putting a lot of work into addressing these issues,” she says.
Anna Robinson, manager of academic integrity at KPU, has been working to figure out what academic integrity will look like in the age of AI.
“Artificial intelligence technology has been around for quite some time; however, the emergence of ChatGPT and other generative artificial intelligence technology has provided an opportunity for some deep thought around how technology is used and for what purposes,” Robinson wrote in an email statement to The Runner.
Robinson also wrote that the appropriate use of AI depends on the context and on how students are engaging with course material. It is up to the discretion of the dean’s office and faculty to determine, based on the balance of probabilities, whether a student was plagiarizing or cheating.
“A balance of probabilities means that an act is more likely than not to have occurred,” reads the ST2 Student Academic Integrity Policy at KPU.
“It is important that students make sure they understand the expectations around this for each of their courses,” Robinson wrote.
The main challenge with the use of AI in coursework is helping people understand the rules and expectations around academic integrity. The technology offers both opportunities and limitations, and requires users to approach it with awareness and curiosity.
“If an instructor allows students to use ChatGPT in their assignments, it is necessary to acknowledge and appropriately cite its use,” Robinson wrote.
“There cannot be a one-size-fits-all approach to this type of technology because its use is very dependent on the content and assessment methods used in each course by each instructor,” she wrote.
Robinson also predicts more discussions in the future about the responsible use of technology as well as biases and limitations that come with it.
“Many educators are already adjusting their methods of teaching and assessment, and I imagine we will see more educational activities that incorporate the use of this technology in some way as we all learn more about it and see how it is being used outside of academia and in different industries,” she wrote.
Haley Stock, a journalism student at KPU, thinks using the chatbot is unethical because students end up submitting work that is not their own, but she also recognizes its use is inevitable.
Stock says it’s easy to detect when ChatGPT has been used for an assignment because of its distinctive writing style. She is also interested in how educational institutions will cope with proving whether a student has used the chatbot in their work.
“There are a lot of sites that say that they can detect ChatGPT in writing and some of them will say that the American Constitution has been AI generated when clearly it hasn’t. So, a lot of them are false positives, which can make it even more tricky,” she says.
Stock also says students accused of using the chatbot should be considered innocent until proven guilty, which is not always the case.
“I think teachers should familiarize themselves with ChatGPT so that they might be able to better identify when it’s being used instead of relying on sites that aren’t accurate,” Stock says.
According to KPU’s library guide, the use of AI is considered appropriate, when permitted by an instructor, as a study aid for exams, to improve understanding, or to help with critical discussions.
“These tools are also quickly evolving, so it is important to stay on top of developments,” Robinson wrote.