Generative AI Policy
Overview
FIRE recognises that generative artificial intelligence (GenAI) tools are increasingly present in academic research and writing. This policy establishes transparent, consistent expectations for the use of GenAI across all stages of the publication process — from manuscript preparation to peer review and editorial decision-making — in line with recommendations from leading international bodies including STM (2025), COPE, WAME, and Scopus.
Our approach is moderate: we permit the responsible use of GenAI tools where they support clarity, accessibility, and research quality, provided that their use is transparently disclosed. The intellectual integrity of all submitted work remains the full responsibility of the human authors.
1. Policy for Authors
1.1 Permitted Uses
Authors may use GenAI tools for the following purposes, subject to the disclosure requirements in Section 1.3:
- Language editing and proofreading (grammar, spelling, punctuation, and readability improvements to human-authored text)
- Paraphrasing or restructuring sentences for clarity, provided the intellectual content originates with the author
- Literature search assistance and reference organisation
- Translation support for non-native English speakers, where the scientific content is authored by the researcher
- Formatting and structural suggestions
1.2 Prohibited Uses
The following uses of GenAI are not permitted:
- Generating the substantive intellectual content of a manuscript (arguments, findings, interpretations, or conclusions)
- Fabricating, hallucinating, or misrepresenting data, citations, or factual claims
- Listing an AI tool as an author — AI cannot fulfil the responsibilities of authorship (COPE, WAME, ICMJE)
- Using AI to generate peer review reports or editorial assessments (see Section 2)
- Submitting AI-generated images or figures as original research data without explicit disclosure
- Using public GenAI tools in ways that compromise the confidentiality of unpublished manuscripts or reviewer comments
1.3 Disclosure Requirements
Authors must declare any use of GenAI that goes beyond basic spell-checking. Disclosure should be included in a dedicated statement at the end of the manuscript, before the reference list, under the heading "Use of Generative AI." The statement should specify:
- The name and version of the GenAI tool(s) used
- The purpose for which each tool was used (e.g., language editing, literature search)
- Confirmation that all AI-assisted content was reviewed and verified by the author(s), who take full responsibility for it
Example disclosure statement: "ChatGPT (OpenAI, GPT-4, accessed March 2026) was used to improve the clarity and readability of selected passages in this manuscript. All content was subsequently reviewed and verified by the authors, who take full responsibility for the accuracy and integrity of the published work."
1.4 AI-Generated Images and Figures
Authors who include AI-generated visual content must declare this in both the figure caption and the GenAI disclosure statement. Authors remain responsible for ensuring that images do not misrepresent data, contain biases, or infringe on third-party intellectual property.
2. Policy for Peer Reviewers
Peer review is a confidential process that relies on the expert judgement of human scholars. Reviewers are expected to write their evaluations independently. The following rules apply:
- Reviewers must not upload any part of a submitted manuscript into a public GenAI tool, as this constitutes a breach of confidentiality.
- Reviewers must not use GenAI tools to generate review reports or substantive evaluations on their behalf.
- Limited use of GenAI for minor language assistance in writing the review text is permitted, provided it does not compromise the independence or quality of the evaluation.
- Any use of GenAI in the review process must be disclosed to the editor at the time of submission of the review.
3. Policy for Editors and Editorial Decisions
Editors at FIRE are responsible for ensuring fair, rigorous, and transparent editorial decisions. In this context:
- Editors may use secure, publisher-approved AI tools (e.g., plagiarism detection software) as part of the editorial workflow.
- Editors must not use public GenAI tools to assess manuscript quality or generate editorial decisions.
- Where AI-assisted tools are employed in the editorial process, this will be disclosed on the journal website.
- Submitted manuscripts must not be uploaded to external AI platforms at any stage of the editorial process.
4. Compliance and Violations
Failure to disclose the use of GenAI tools, or use of AI in ways that violate this policy, may result in manuscript rejection, retraction of published work, or notification of the author's institution. The editors reserve the right to request additional information about AI use at any stage of the submission and review process. All cases will be handled in accordance with COPE guidelines on publication ethics.
5. Policy Updates
The field of artificial intelligence is evolving rapidly. FIRE is committed to reviewing and updating this policy regularly to reflect developments in technology, emerging best practices, and guidance from international bodies. Authors, reviewers, and editors are encouraged to consult this page before each submission or review cycle.
For further information, please visit:
- STM Association (2025). Recommendations for a Classification of AI Use in Academic Manuscript Preparation. https://stm-assoc.org
- COPE (2023). Authorship and AI tools. https://publicationethics.org
- WAME (2023). Chatbots, Generative AI, and Scholarly Manuscripts. https://wame.org
- Elsevier (2023). The use of generative AI and AI-assisted technologies in the review process. https://elsevier.com
- Scopus (2024). Scopus Content Policy and Selection Criteria — Generative AI Policies. https://scopus.com