ChatGPT has only been around for a few months, but it is already listed as an author of two hundred books on Amazon. After it appeared as a co-author on several research papers, publishers of scientific journals began banning or restricting its use in articles.
ChatGPT has appeared as a co-author on papers exploring the potential for AI-assisted medical education, as well as in articles in the academic journals Nurse Education in Practice and Oncoscience. The chatbot is generally used for drafting and writing papers or gathering data, and it was listed as an author because researchers believed it had contributed intellectually to the papers' content.
Now science journals are changing their policies on AI authorship. ChatGPT isn't going anywhere anytime soon, but they have decided to ban listing it as an author.
What is ChatGPT?
It's an AI-powered chatbot that interacts with you conversationally. OpenAI launched the bot in November 2022, and it is currently free for anyone to use. It answers questions, writes code, and drafts all sorts of text, including emails, essays, poetry, and even legal documents. The quality of the writing varies, but the sheer range of what it can do is impressive.
Problems with authorship
Journals are banning the chatbot as an author because they believe the software cannot fulfill the duties of authorship, which extend beyond publication. Though this may be true for research papers, Euronews published an article in which four paragraphs were written by AI to see if readers could tell the difference. The AI's contribution held up surprisingly well. My favorite part of the AI excerpt was when it described content writing as a ‘low-skilled’ job.
The other problem with crediting AI as an author of the work it produces is that it cannot claim intellectual property rights. Because it is not a person, it cannot correspond with scientists or answer questions from the press about its work. It is also still at risk of making mistakes. Human or robot, there is always a margin for error.
The most crucial issue, which applies to anything ChatGPT produces, is that it cannot create an original idea. It can process vast amounts of information and data, but it cannot make something new. This also raises the problem of plagiarism: if ChatGPT cannot produce original ideas and composes text based on everything it has processed, is it producing plagiarised work?
AI in education
ChatGPT has already caused problems in schools and universities. The chatbot will happily answer any question you ask it, and naturally, students have started using it to do their homework for them. Is it slightly lazy? Yes. But you have to admire their determination to avoid putting in effort.
The BBC reported on a university student who used the chatbot to write one of his essays ‘as an experiment.’ The essay scored 53%, 12 marks below the student’s usual mark of 65%. He described the result as ‘scary,’ but it suggests that AI cannot yet match the human standard of written work. ChatGPT can’t quite live up to ‘low-skilled’ jobs like writing.
AI isn’t quite ready to take our jobs
The journal Science has changed its AI policy to read: “Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in scientific journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper.”
Nature has also declared that it will not allow AI to be credited as an author on research papers.
While scientific journals have rejected the use of AI and refused to list it as an author, it continues to be used in the literary world. From children’s books to chatbot guides, there is no stopping ChatGPT. Technology has always driven us forward, but this does feel like a step too far. Let’s leave the writing to humans.