Tools such as ChatGPT threaten transparent science; here are our ground rules for their use

ChatGPT threatens the transparency of methods that are basic to science. Credit: Tada Images/Shutterstock

It has been clear for several years that artificial intelligence (AI) is gaining the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by people. Last year, Nature reported that some scientists were already using chatbots as research assistants: to help organize their thinking, generate feedback on their work, assist with writing code and summarize the research literature (Nature 611, 192–193; 2022).

But the release of the AI chatbot ChatGPT in November has brought the capabilities of such tools, known as large language models (LLMs), to a mass audience. Its developers, OpenAI in San Francisco, California, have made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, and the result has been an explosion of fun and sometimes frightening writing experiments that have only heightened the excitement and anxiety around these tools.

ChatGPT can write presentable student essays, summarize research papers, and answer questions well enough to pass medical exams and generate useful computer code. It has produced research abstracts good enough that scientists found it hard to tell that a computer had written them. Worryingly for society, it could also make spam, ransomware and other malicious outputs easier to produce. Although OpenAI has tried to put guard rails on what the chatbot will do, users are already finding ways around them.

The big worry in the research community is that students and scientists could deceitfully pass off LLM-written text as their own, or use LLMs in a simplistic fashion (such as to conduct an incomplete literature review) and produce work that is unreliable. Several preprints and published articles have already credited ChatGPT with formal authorship.

That is why it is high time researchers and publishers laid down ground rules about using LLMs ethically. Nature, along with all Springer Nature journals, has formulated the following two principles, which have been added to the existing guide to authors (see go.nature.com/3j1jxsw). As Nature's news team has reported, other scientific publishers are likely to adopt a similar stance.

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

Pattern recognition

Can editors and publishers detect text generated by LLMs? Right now, the answer is "perhaps". ChatGPT's raw output is detectable on careful inspection, particularly when more than a few paragraphs are involved and the subject relates to scientific work. This is because LLMs produce patterns of words based on statistical associations in their training data and the prompts that they see, meaning that their output can sound bland and generic, or contain simple errors. Moreover, they cannot yet cite sources to document their outputs.
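To make that statistical principle concrete, here is a minimal toy sketch: a bigram model that picks each next word from the words that followed it in its training text. Real LLMs use neural networks trained on vast corpora, but the underlying idea of sampling words from learned associations is the same; the training text and function names below are purely illustrative.

```python
import random
from collections import defaultdict

# Toy illustration only: real LLMs are far more sophisticated, but the
# core mechanism -- choosing words from statistical associations learned
# during training -- is the same.
training_text = (
    "the model writes text the model learns patterns "
    "the patterns come from training data the data shapes the output"
).split()

# Record which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Sample a word sequence from the learned bigram statistics."""
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# e.g. "the model learns patterns the data shapes the output"
```

Because every word is drawn from what was statistically common in the training data, the output tends toward the fluent but generic prose the editorial describes, with no underlying record of sources.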

But in future, AI researchers might be able to get around these problems: there are already some experiments linking chatbots to source-citing tools, for example, and others training chatbots on specialized scientific texts.

Some tools promise to spot LLM-generated output, and Nature's publisher, Springer Nature, is among those developing technologies to do just that. But LLMs will improve, and quickly. There are hopes that the creators of LLMs will be able to watermark their tools' outputs in some way, although even this might not be technically foolproof.
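One watermarking idea from the research literature, sketched below for illustration only (it is not necessarily what any LLM vendor will deploy), is to bias generation toward a secret, pseudo-randomly chosen "green list" of words, so that a detector that knows the secret can check whether suspiciously many words land on the list. The secret key and function names here are hypothetical.

```python
import hashlib

# Illustrative sketch of a 'green list' watermark detector. The
# generator would prefer words whose seeded hash marks them green;
# ordinary human text lands on the green list only about half the
# time, while watermarked text does so far more often.
SECRET = "publisher-secret-key"  # hypothetical shared secret

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to the green list."""
    digest = hashlib.sha256(f"{SECRET}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words that land on their green list."""
    words = text.split()
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# A detector would flag text whose green fraction is far above 0.5.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

The caveat in the editorial applies here too: a determined user can paraphrase watermarked text and wash the signal out, which is why such schemes may never be technically foolproof.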

From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness on which knowledge generation depends can be maintained if they or their colleagues use software that works in a fundamentally opaque way.

That is why Nature is setting out these principles: ultimately, research must have transparency in methods, and integrity and truth from its authors. This is the foundation that science relies on to advance.
