As the technical capabilities of Artificial Intelligence (AI) advance, and its widespread availability grows, the use of AI in research will become more prevalent. The regulatory landscape surrounding the responsible use of AI is in its infancy, which may create uncertainty regarding appropriate ways to implement AI in research. Utah State University strongly encourages faculty to consider the following when determining if AI use is appropriate for their research:
- Confidentiality – Researchers should not input any confidential, proprietary, or restricted (e.g., human subjects) data into a generative AI tool. Significant questions involving data privacy, ownership, and access when using generative AI tools warrant caution.
- Reliability – Carefully review any AI-generated data or information before incorporating it into your research. Content generated by an AI tool may be outdated, inaccurate, or biased.
- Plagiarism – AI tools may not provide proper citation of source materials. Researchers are responsible for verifying and giving appropriate credit for another person’s ideas, processes, results, or words. Plagiarism falls under the definition of Research Misconduct and may lead to inquiries or investigations.
- Publications – Publishers have yet to reach consensus on whether AI-developed content is acceptable for publication. Some journals prohibit AI-generated text, while others allow AI use so long as the author provides a disclosure in the article. Researchers should verify whether a publisher has any AI restrictions and adjust proposed articles accordingly.
- Understanding Expectations – Prior to implementing an AI tool, researchers should verify that the sponsor funding the research has not imposed any restrictions or limitations on AI use. Similarly, faculty should communicate clearly with co-investigators, subawardees, and collaborators to ensure all parties have the same understanding of how AI will be incorporated into the project and what restrictions apply.
Federal Agency Guidance
- National Science Foundation –
- On December 14, 2023, NSF released Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process. The two biggest takeaways from the notice are:
- NSF reviewers are prohibited from uploading any content from proposals, review information, and related records to non-approved generative AI tools.
- Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.
- On December 8, 2025, NSF released a Policy Notice for the Proposal and Award Policies and Procedures Guide (PAPPG) 24-1, Supplement 1, incorporating the use of AI tools into its research misconduct definition. The bolded text has been added:
- RESEARCH MISCONDUCT means fabrication, falsification, or plagiarism, whether committed by an individual directly or through the use or assistance of other persons, entities, or tools, including artificial intelligence (AI)-based tools, in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF.
- National Institutes of Health –
- On June 23, 2023, NIH released The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process, stating that “NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.”
- On July 17, 2025, NIH released Supporting Fairness and Originality in NIH Research Applications, stating that “NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants. If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement action including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination. NIH will only accept six new, renewal, resubmission, or revision applications from an individual Principal Investigator/Program Director or Multiple Principal Investigator for all council rounds in a calendar year.”
USU will share guidance and updates on this website as they are received from federal agencies and other reputable sources.