AI writes good papers, if you don’t look too closely



To the untrained eye, ChatGPT seems to crank out plausible journal articles.


Your Back Page correspondent is a big fan of AI, but not for the reasons you might expect.

While there is a grudging respect for the speed at which a tool such as ChatGPT can perform helpful data analysis, we’re mostly in it for the entertainment value.

Whenever AI is used to do something genuinely creative with the information it's gathered, the results are usually hilariously bad, woefully inaccurate, or both.

A fine example comes from a recent study conducted at the Indiana University School of Medicine, where researchers decided to test how good ChatGPT might be at writing articles for a scientific journal.
They took three topics – fractures and the nervous system, Alzheimer's disease and bone health, and covid and bone health – and asked the subscription version of ChatGPT to create scientific articles about them.

The researchers used three approaches – human only, AI only, or a combination of both – to create 12 articles, which they published in a special edition of Current Osteoporosis Reports.

“The standard way of writing a review article is to do a literature search, write an outline, start writing, and then faculty members revise and edit the draft,” Melissa Kacena, PhD, vice chair of research and a professor of orthopaedic surgery at Indiana, told media.

“We collected data about how much time it takes for this human method and how much time it takes for ChatGPT to write and then for faculty to edit the different articles.”

For the articles written by AI alone, up to 70% of the references were wrong.

When humans were brought in to help, the reference errors went down, but the amount of plagiarising went up – especially when the tool was given more references at the start to work with.

Then there was the writing style. According to the study team, even though the AI tool was prompted to use a higher level of scientific writing, the words and phrases were “not necessarily written at the level someone would expect to see from a researcher”.

Overall, while AI decreased the time required to write the articles, this was counteracted by the increased time needed for fact-checking.

“It was repetitive writing and even if it was structured the way you learn to write in school, it was scary to know there were maybe incorrect references or wrong information,” one of the human co-authors on nine of the papers told media.

Which is good news if you are someone, such as your correspondent, who still makes a living as a keyboard warrior and likes to check his facts.

And the fact that AI cannot – for the time being at least – think outside the box illustrates that while the technology may be smart, it certainly can’t be called intelligent.

Send story tips with citations in APA style to penny@medicalrepublic.com.au.
