Artificial Intelligence Policy
The journal recognizes that artificial intelligence–based tools, including generative AI and large language models, are increasingly used in research and scholarly communication. To safeguard academic integrity, transparency, and trust in the scientific record, all participants in the publication process are required to comply with the following principles governing the responsible use of AI and automated tools.
The use of AI does not replace human responsibility. Authors, reviewers, and editors remain fully accountable for the content, evaluations, and decisions associated with the publication process.
For Authors
Authors may use AI-assisted tools for limited and appropriate purposes, such as language improvement, grammar correction, formatting, or readability enhancement. Any use of AI tools that goes beyond basic linguistic or editorial assistance must be explicitly disclosed in the manuscript, either in the Acknowledgements section or in a dedicated declaration.
Authors are fully responsible for verifying the accuracy, originality, completeness, and integrity of all content generated or assisted by AI tools. The use of AI does not absolve authors of responsibility for errors, omissions, misleading statements, fabricated content, or inappropriate citations.
Generative AI tools do not meet authorship criteria and must not be listed as authors or co-authors. AI systems cannot take responsibility for the work and therefore cannot be credited with an intellectual contribution.
AI-generated content must not introduce fabricated references, data, images, or results. Authors are responsible for ensuring that all cited sources are original, verifiable, and appropriately referenced. AI tools must not be cited as primary sources of information.
The use of AI for data analysis, image generation, data simulation, or figure creation must, where applicable, be clearly described in the Methods section, in sufficient detail to ensure transparency and reproducibility. Any ethical, legal, or copyright implications arising from such use remain the sole responsibility of the authors.
Failure to disclose the use of AI tools, or misuse of AI that compromises scientific integrity, may be considered a breach of publication ethics and may result in rejection, correction, or retraction.
For Peer Reviewers and Editors
Peer reviewers and editors must not use generative AI tools to create, draft, or substantially generate peer review reports, editorial assessments, or publication decisions. This restriction is necessary to protect manuscript confidentiality, avoid unauthorized data processing, and prevent superficial, biased, or fabricated evaluations.
Reviewers and editors may use AI tools only for limited editorial assistance, such as minor language editing of their own comments, provided that such use does not involve uploading confidential manuscript content to external systems and that it is disclosed to the editorial office when relevant.
Editorial decisions must be based on human academic judgment, supported by expertise, ethical responsibility, and critical evaluation. AI tools must not be used to replace or automate editorial responsibility.
Journal Use of Automated Tools
The journal may employ automated tools for administrative and integrity-related purposes, such as plagiarism detection, text similarity analysis, reference checking, or image screening. All such tools are used under human supervision, and their outputs are reviewed and interpreted by qualified editorial staff.
Automated tools are used to assist, not replace, editorial judgment. Any concerns identified through automated checks are evaluated case by case, and authors are given the opportunity to respond when appropriate.
Alignment with Elsevier Policies
This policy is fully aligned with Elsevier’s policies on the use of artificial intelligence and generative AI in scholarly publishing, including principles of transparency, human accountability, authorship integrity, and responsible disclosure.
For further reference, see Elsevier’s official policy on AI and generative AI:
https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals