Study shows thousands of UK university students have used AI to cheat

A recent survey of academic integrity violations found nearly 7,000 confirmed cases of AI-related cheating in 2023-24.
Thousands of university students across the United Kingdom have used artificial intelligence tools to cheat on academic work, and experts warn the problem is growing as AI becomes more widely available.

According to a report by The Guardian, a recent survey of academic integrity violations found nearly 7,000 confirmed cases of AI-related cheating in 2023-24. That equals 5.1 cases per 1,000 students, a significant increase from the 1.6 cases per 1,000 recorded in 2022-23. The number is expected to climb to approximately 7.5 cases per 1,000 students this year.

Before generative AI tools became easily accessible, traditional plagiarism made up nearly two-thirds of academic misconduct in 2019-20. That figure rose during the COVID-19 pandemic as assessments shifted online. However, the latest data shows a sharp drop in conventional plagiarism, coinciding with a rise in AI-related misconduct.

While confirmed cases are up, the actual scope of AI misuse is likely far greater. A February survey by the Higher Education Policy Institute found that 88 percent of students admitted to using AI tools for assessments. Separately, researchers at the University of Reading tested their own assessment systems last year and found that AI-generated work went undetected 94 percent of the time.

“I would imagine those caught represent the tip of the iceberg,” said Dr Peter Scarfe, an associate professor of psychology at the University of Reading who co-authored the study. “AI detection is very unlike plagiarism, where you can confirm the copied text. As a result, in a situation where you suspect the use of AI, it is near impossible to prove, regardless of the percentage AI that your AI detector says (if you use one). This is coupled with not wanting to falsely accuse students.”

Scarfe added that while institutions could return to entirely in-person assessments, that approach is not practical. 

“Yet at the same time the sector has to acknowledge that students will be using AI even if asked not to and go undetected,” he added.
