Best Practices for Generative AI in Research

Generative artificial intelligence (AI) is transforming academic research and scholarly writing. Ethical concerns persist, and the consensus is that researchers should always disclose AI use in publications. Human oversight is crucial to ensure accuracy and to address the limitations of AI-generated content.

Updated on December 15, 2023


In today's digital era, the advancement of artificial intelligence (AI) is revolutionizing numerous fields, including academic research and scholarly writing. Generative AI, for example, uses specialized models to create new content, including images, text, and code.

Like countless other people, researchers and scholars around the world are exploring the value and utility of generative AI. When applied to academic writing, this set of AI tools has the potential to support researchers in all aspects of their writing, from brainstorming topics and formulating hypotheses to drafting abstracts and editing manuscripts.

There is debate, however, surrounding the ethics of using generative AI in research and academic writing. Questions like, “How will it affect the quality and originality of my work?” and “How can I avoid plagiarism and make the best use of this tool?” are at the top of many researchers’ minds.

Here, we will explore these questions and outline some agreed-upon best practices for the responsible use of generative AI in research.

Do I need to disclose the use of generative AI?

The short answer is ‘yes,’ always.

To ensure transparency and reproducibility, researchers must treat anything produced by generative AI just as they would any other source of information, so that their work can withstand scrutiny and avoid any appearance of malfeasance.

How exactly to accomplish this is the tricky part.

Initially, there were instances of researchers listing a specific AI tool as a co-author on their manuscripts. This practice sparked immediate backlash and rebuttal in both the scientific and publishing communities, and for good reason: AI, machine learning, and algorithmic tools in general do not meet the criteria for authorship. Most importantly, they cannot take responsibility for, or be held accountable for, the content.

There was also discussion about implementing a disclosure system: adding a disclaimer to written works that would help readers quickly identify who or what contributed the information. Robert J. Gates proposed a simple model:

  • Disclosure: The following content was generated entirely by an AI-based system based on specific requests asked of the AI system.
  • Disclosure: The following content was generated by me with the assistance of an AI-based system to augment the effort. 
  • Disclosure: The following content was generated entirely by me without assistance from an AI-based system. 

The current standard of practice for acknowledging the use of generative AI in research projects is outlined in the authorship and publishing guidelines of major scholarly journals, for example:

  • Science journals (summarized): Each author agrees to be personally accountable for the author’s own contributions and for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated, resolved, and documented in the literature. In addition, artificial intelligence tools cannot be authors.
  • JAMA Network (extracted): The submission and publication of content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

How to cite the use of AI is still evolving. Understanding when it is necessary can be confusing, but it is always best to err on the side of caution. Follow your institution’s and intended journal’s guidelines explicitly. Never assume ‘they won’t find out.’

AI detectors exist, they are growing in number, and they are becoming more accurate every day. They are used extensively by professors, peers, editors, and anyone else who wants to know how a piece was written. Many of these tools detect AI-generated text and images, estimate what percentage of the work was produced by AI, and highlight the passages in question.

Does a human need to oversee all the AI outputs?

Again, the answer is a definitive ‘yes.’ AI is a technology, a tool; it has no sense of morality or reason. It simply formulates answers to prompts based on a finite dataset, without any true understanding of its inputs or outputs.

Because generative AI has no means of interpreting the data it works with, its answers are often incorrect. AI can only combine and recombine information in ways that potentially satisfy a prompt. Responsibility for accuracy falls squarely on the user’s shoulders.

For this reason, it is of the utmost importance that researchers are diligent in reviewing, verifying, and correcting all AI-generated content. Some of the most common mistakes are found in:

  • Information presented as facts
  • Copyrighted information
  • Quotations
  • Citations
  • Mathematical computations

Examples: Getting the best possible outputs

When using generative AI, the quality of the output corresponds directly to the effort put into the input. By carefully structuring and restructuring their prompts, researchers can ensure the best possible outputs.

Try this formula and examples:

Acting as [ROLE], perform [TASK] in [FORMAT]: [insert unique data]


Role

  • Expert Science Writer
  • Ronald C. Kessler
  • Professor XYZ
  • Editor from [target] journal
  • PhD student in [specific field]

Task

  • Formulate research questions
  • Write an abstract
  • Perform an analysis
  • Condense lists
  • Draft references

Format

  • PDF
  • Bullet points 
  • Summary 
  • Table/Chart 
  • Citations (MLA, APA, Chicago)

Example 1: Acting as an expert science writer, write an abstract in PDF format: (paste your manuscript here)

Example 2: Acting as a PhD student in Chemistry, draft references in APA citation style and alphabetize them: (paste a list of your sources here)

Use this formula as a springboard for identifying the key facets of your prompt. Then, continue reworking it and adding details until you are satisfied with the results.
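For researchers who assemble prompts programmatically (for example, when scripting many similar requests) rather than typing them into a chat interface, the same role/task/format formula applies. The sketch below is a minimal Python illustration only; the build_prompt helper and the placeholder values are hypothetical, and the resulting string would be passed to whichever model or tool you actually use.

```python
# A minimal sketch of the "Acting as [ROLE], perform [TASK] in [FORMAT]" formula.
# The build_prompt helper and the example values below are hypothetical
# illustrations, not part of any specific AI tool's API.

def build_prompt(role: str, task: str, output_format: str, data: str) -> str:
    """Assemble a prompt from the role/task/format formula plus your unique data."""
    return f"Acting as {role}, {task} in {output_format}:\n\n{data}"


if __name__ == "__main__":
    # Example 2 from above, expressed programmatically with placeholder sources.
    prompt = build_prompt(
        role="a PhD student in Chemistry",
        task="draft references",
        output_format="APA citation style, alphabetized",
        data="Author A, Example Title One, 2021\nAuthor B, Example Title Two, 2019",
    )
    print(prompt)  # Paste or send this string to the generative AI tool of your choice.
```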

What is the best way to use generative AI?

When an author takes the time to learn and understand the capabilities and limits of generative AI, it can be a valuable asset. Think of this tool as an assistant for supporting various aspects of the writing process:

  1. Discovery/investigation: You can use an AI text summarizer, such as Quillbot or ChatPDF, to quickly sift through potential sources and find those most pertinent to your work. Then, employ an AI tool like INK to generate research questions that reflect your project’s unique path.
  2. Prewriting: Expand on your research questions with an AI brainstorming session utilizing HyperWrite or another similar tool. Turn those ideas into an outline with an AI outline generator like Wordtune, and then develop a strong thesis with an AI tool like Smodin.
  3. Drafting: A content generator is a convenient way to kickstart the drafting phase. By offering personalized content that reflects your varied sources and ideas, AI tools such as SEO.ai, Jasper, and Copy.ai can streamline this process.
  4. Editing: Most authors apply an AI proofreader, such as Grammar Check or a Microsoft Word add-in, throughout the writing process to catch straightforward spelling and grammar errors. Once the draft is complete, though, it is imperative to also have an AI editor, like Curie, improve the clarity, conciseness, flow, and overall readability of your work.

It is important to note the variety of AI choices in these examples. Each tool is trained on a unique set of data that determines its effectiveness in performing specialized tasks. Pinpointing and selecting the one that is most appropriate for your needs is essential for getting the most out of these tools.

Bottom line

Like any new tool or technology, generative artificial intelligence is surrounded by a buzz of excitement and a host of concerns. It has the potential both to revolutionize academic research and scholarly writing and to undermine their ethical foundations.

Cultivating, following, and sharing responsible best practices for these tools is imperative to upholding the integrity of the research process. By embracing transparency, providing human oversight, and acknowledging the limitations of generative AI, researchers will remain at the forefront of ethical innovation.
