The widespread application of generative artificial intelligence in academic writing has sparked controversy over the definition of plagiarism. Large language model (LLM) tools like ChatGPT can improve writing efficiency but also make plagiarism harder to detect.
Many researchers accept that AI tools can be used in certain situations, provided their use is fully disclosed. However, LLMs generate text by digesting vast quantities of published articles, a process that can itself resemble plagiarism. Researchers who use these tools therefore risk passing off machine-generated content as their own work, or producing text so similar to someone else's that it amounts to copying without attribution.
In a survey of 1,600 researchers, 68% of respondents said that AI will make plagiarism both easier to commit and harder to detect. Experts also worry that LLMs could be used to disguise deliberately plagiarized text.
There is also debate over whether using unattributed AI-generated text constitutes plagiarism at all. Some experts argue it should be classified as "unauthorized content generation" rather than plagiarism per se, while others contend that generative AI tools infringe copyright.
Since the release of ChatGPT, the use of AI in academic writing has grown explosively. Research estimates that LLMs were used in at least 10% of biomedical abstracts published in the first half of 2024, and papers from countries such as China and South Korea show more signs of LLM use than those from English-speaking countries.
Despite the controversy, many researchers see genuine value in AI tools for academic writing, which can improve clarity and lower language barriers. Yet confusion remains about where legitimate use ends and plagiarism or other ethical breaches begin.
At present, many academic journals permit limited use of LLMs but require full disclosure, including which systems and prompts were used. Authors remain responsible for the accuracy of their work and for ensuring that it contains no plagiarism.
As AI technology develops, the academic community will need to reach a consensus on how to use these tools appropriately while maintaining research integrity.