Pulmonology. Vol. 30, Issue 5, Pages 413-415 (September - October 2024)
Editorial
How Artificial Intelligence is changing scientific publishing? Unrequested advices for young researchers II
Stefano Mazzoleni a,b,*, Nicolino Ambrosino c
a Department of Electrical and Information Engineering, Politecnico di Bari, Italy
b The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
c Istituti Clinici Scientifici Maugeri IRCCS, Pulmonary Rehabilitation Unit of Montescano Institute, Italy
* Corresponding author: stefano.mazzoleni@poliba.it

“I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you”. HAL 9000.

Stanley Kubrick, director: 2001: A Space Odyssey, 1968

Manuscripts prepared with the support of Artificial Intelligence (AI) tools are being submitted at an increasing rate. While most journals require that such use be declared, young researchers in particular should be aware of the potential and the limits of any innovation in scientific research.

We therefore report below, in italics, the answer to a question posed to ChatGPT itself.

“Artificial intelligence, including models like ChatGPT, has the potential to dramatically change scientific publishing in several ways:

1) Automated Literature Review: AI tools can efficiently scan and summarize vast amounts of existing literature on a given topic. This capability streamlines the process of conducting literature reviews, enabling researchers to stay abreast of the latest research findings more effectively and saving time in the initial stages of research.

2) Language Editing and Proofreading: AI-driven language models can assist authors in improving the clarity, coherence, and grammar of their manuscripts. They can provide real-time suggestions for sentence restructuring, word choice, and grammatical corrections, thereby enhancing the quality of scientific writing and reducing the need for extensive manual editing.

3) Data Analysis and Interpretation: Algorithms can analyse complex datasets more quickly and accurately than humans, facilitating the interpretation of experimental results and the generation of insights. This capability can accelerate the publication process by expediting data analysis and enabling researchers to draw conclusions more efficiently.

4) Automated Peer Review: AI-powered systems could potentially assist with the peer review process by automatically evaluating quality and validity of manuscripts. While fully automated peer review may not replace human judgment entirely, AI tools can assist in identifying potential reviewers, detecting plagiarism, and assessing the technical soundness of manuscripts, thereby expediting the peer review process.

5) Personalized Recommendations: AI algorithms can analyse researchers' publication histories, interests, and citation patterns to provide personalized recommendations for relevant journals, collaborators, and funding opportunities. This personalized approach can help researchers identify the most suitable publishing venues for their work and enhance their visibility within their respective fields.

6) Enhanced Accessibility and Translation: AI tools can facilitate the translation of scientific publications into multiple languages, making research findings more accessible to a global audience. Furthermore, AI-driven text-to-speech technologies can enable visually impaired individuals to access scientific literature more easily, promoting inclusivity and widening the dissemination of knowledge.

7) Content Generation: AI models like ChatGPT can generate scientific content, such as summaries, abstracts, and even entire manuscripts, based on input from researchers. While human oversight and editing are essential to ensure accuracy and coherence, AI-generated content can serve as a valuable starting point for authors, particularly in fields with established frameworks and methodologies.

Overall, AI tools have the potential to streamline various aspects of scientific publishing, from literature review and data analysis to language editing and peer review. By leveraging AI-driven technologies, researchers, publishers, and other stakeholders can enhance the efficiency, accessibility, and quality of scientific communication, ultimately advancing the pace and impact of research across disciplines".
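As a purely illustrative aside on point 1 above: the retrieval step of such an "automated literature review" is already within reach of a few lines of code against the public NCBI E-utilities interface, as in the minimal Python sketch below. The query string, result count and helper names are our own illustrative assumptions, and the summarization step itself, which would require an LLM and careful human verification, is deliberately not shown.

```python
# Minimal sketch of the retrieval step behind an "automated literature review":
# query PubMed through the public NCBI E-utilities API and collect basic
# bibliographic records that an LLM could then be asked to summarize.
# The query string and result count are illustrative assumptions.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed identifiers (PMIDs) matching the query."""
    r = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": max_results},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

def fetch_summaries(pmids: list[str]) -> list[dict]:
    """Return basic bibliographic records for a list of PMIDs."""
    if not pmids:
        return []
    r = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "json"},
        timeout=30,
    )
    r.raise_for_status()
    result = r.json()["result"]
    return [result[uid] for uid in result["uids"]]

if __name__ == "__main__":
    pmids = search_pubmed("pulmonary rehabilitation AND artificial intelligence")
    for rec in fetch_summaries(pmids):
        print(f'{rec["pubdate"]}: {rec["title"]} ({rec["source"]})')
    # An LLM-based summary of the retrieved abstracts would be a separate,
    # human-verified step; it is deliberately not shown here.
```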

Having reported that, we now present an analysis of the text returned by ChatGPT. This free-to-use AI chatbot was released on November 30, 2022 by OpenAI in San Francisco, California. By the beginning of 2023 ChatGPT had already been listed as an author on several research papers.

Academic publishers have announced policies on the use of ChatGPT and other large language models (LLMs) in the writing process. By October 2023, 87 of the top 100 scientific journals had provided instructions to authors on the use of generative AI, which can create text, images and other content.1 As generative AI continues to improve, ethical issues have been raised, considering the current rules governing the publication of scientific research.2,3

In a 2023 survey of more than 1600 scientists, almost 30% reported that they had used generative AI tools to help write manuscripts, and about 15% said they had used them to help write grant applications.4 Beyond the ways mentioned by ChatGPT itself in which these tools are changing scientific publishing, LLMs can help scientists write code and brainstorm research ideas; more recently, LLMs from other companies have been rapidly improving in performance.

Among the benefits perceived by researchers is the ability to edit and translate writing for those whose first language is not English.1 In addition, generative AI is expected to reduce language barriers in research by 2030 and could take on repetitive tasks, such as literature reviews.5

It is clear that such tools can help researchers write papers at a faster pace. But are there drawbacks, and what are they? LLMs can still make language mistakes, which is one of the reasons why researchers have to acknowledge the use of LLMs in their publications. Moreover, there is the risk of stumbling into so-called "AI hallucination", a phenomenon whereby an LLM perceives patterns or objects that are nonexistent or imperceptible to human observers, producing outputs that are nonsensical or altogether inaccurate. These misinterpretations arise from various factors, including overfitting, biased or inaccurate training data, and high model complexity.
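To give a concrete, if deliberately toy, sense of one of these factors, the short Python sketch below shows overfitting in its simplest form: a high-degree polynomial that reproduces a handful of noisy training points almost exactly, yet extrapolates to confidently wrong values, loosely analogous to a model producing plausible-looking but inaccurate output. All numerical values are arbitrary, and the example makes no claim about any specific LLM.

```python
# Toy illustration of overfitting, one of the factors listed above: a degree-7
# polynomial fitted to 8 noisy samples of a sine wave matches the training data
# almost perfectly, but its prediction outside the training range is far from
# the true value -- confident output, poor generalization.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, x_train.size)  # noisy samples

# 8 points, 8 coefficients: the fit essentially interpolates the noise.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)

print("max error on training points:", np.max(np.abs(overfit(x_train) - y_train)))  # ~0
print("prediction at x = 1.5:", overfit(1.5))  # typically far from the true value
print("true value at x = 1.5:", np.sin(2 * np.pi * 1.5))  # 0
```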

The increase in the number of scientific publications generated with the help of LLMs may have detrimental effects on the peer review process, as there might not be enough people available to continue doing peer review for free. Furthermore, concerns have been raised about the use of AI in the review process, as submissions under review are privileged communications. Communicating any component of a submitted manuscript to online services, including LLMs, may violate that confidentiality, as these and similar LLMs can incorporate all user interactions and materials into their data stores.6 It should be noted that the American Association for the Advancement of Science forbids the use of LLMs during the peer review procedure, and Springer Nature's policy prohibits peer reviewers from uploading manuscripts into generative AI tools.7,8

Coming back to the "publish or perish" model, it can be argued that a shift towards prioritising quality over quantity should be considered. In any case, the use of LLMs should be documented in the methods or another section of the manuscript. Recently, several organizations have issued defensive statements requiring authors to acknowledge the use of generative AI. Regardless of the policies that will be proposed, and periodically changed, the fundamental requirements of originality and quality of scientific publications remain, and authors must still take full responsibility for their work.

In other words, responsible use of every available resource seems to be the answer, together with a change in the reward model in force so far towards high-quality publications, thus abandoning quantity as the absolute and determining factor in judging researchers' track records and scientific careers.

No scientific innovation is good or bad per se; its use may be, however: think of nuclear medicine versus nuclear war. As far as the role of AI in publishing is concerned, young researchers should remember that a scientific journal is (hopefully) meant to spread science, not to support individual academic careers. Therefore, do not stop thinking with your own mind, and always remember your ethical and scientific responsibility when using these tools to write papers.3 Do not waste time and energy fighting a posteriori with co-authors (and with chief editors) to be included as the corresponding author of the manuscript.

PS: This editorial has not been supported by AI.

References
[1]
C. Ganjavi, M.B. Eppler, A. Pekcan, B. Biedermann, A. Abreu, G.S. Collins, et al.
Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis.
[2]
R. Watkins.
Guidance for researchers and peer-reviewers on the ethical use of Large Language Models (LLMs) in scientific research workflows.
[3]
N. Ambrosino, F. Pacini.
Publish or perish? Perish to publish? (Unrequested advices to young researchers).
Pulmonology, 28 (2022), pp. 327-329
[4]
R. Van Noorden, J.M. Perkel.
AI and science: what 1,600 researchers think.
Nature, 621 (2023), pp. 672-675
[5]
ERC.
Foresight: Use and Impact of Artificial Intelligence in the Scientific Process.
European Research Council (2023).
[6]
D.S. Chawla.
Is ChatGPT corrupting peer review? Telltale words hint at AI use.
Copyright © 2024. Sociedade Portuguesa de Pneumologia