Generative AI Policies
These policies were initially developed in response to the emergence of generative AI and AI-assisted technologies, whose growing adoption by researchers was anticipated and has since been confirmed. To align with evolving best practices, the policies have been revised to enhance transparency and provide clear guidance for authors, reviewers, editors, readers, and other contributors within the Jurnal Elemen ecosystem.
For Authors
The Use of Generative AI and AI-Assisted Technologies in Manuscript Preparation
Jurnal Elemen recognizes the increasing significance of generative artificial intelligence (GenAI) and AI-assisted tools in facilitating scholarly endeavors. When employed responsibly and under critical human oversight, these tools can assist researchers in streamlining literature synthesis, exploring research questions, identifying knowledge gaps, generating hypotheses, organizing content, and enhancing language clarity and readability.
Authors preparing manuscripts for Jurnal Elemen may utilize AI tools to aid in these tasks. However, AI must never supplant the author’s intellectual contribution, critical analysis, or scholarly judgment. Human supervision is imperative at every stage—from input and generation to review, editing, and final validation.
Ultimately, authors bear full responsibility for the integrity, accuracy, and originality of their work. This includes:
- rigorously verifying all AI-generated content—including factual claims, logic, and especially references, as AI models may produce plausible but fabricated or inaccurate citations;
- thoroughly revising and adapting AI-generated text to ensure the manuscript reflects the author’s own reasoning, interpretation, and scholarly voice;
- disclosing the use of AI tools transparently in a dedicated statement upon submission (see Disclosure below);
- safeguarding confidentiality, data privacy, and intellectual property by reviewing the terms of service of any AI tool used—particularly concerning input data (e.g., unpublished manuscripts) and output rights.
Responsible Use of AI Tools
Authors must ensure their use of AI tools aligns with ethical and legal standards:
- Avoid inputting personally identifiable information, confidential data, or third-party copyrighted material (e.g., images, text, or voice) into AI systems unless explicitly permitted by the tool’s terms and relevant rights holders.
- Refrain from generating images, figures, or artwork that replicate, imitate, or modify protected content, including likenesses of real individuals or branded products.
- Critically assess outputs for factual accuracy, logical coherence, and potential algorithmic bias.
- Confirm that the tool’s licensing terms do not claim ownership over input data (e.g., for model training) and do not impose restrictions on how AI-generated outputs may be used, including publication in academic journals.
Disclosure Requirements
Transparency is essential. Authors are required to disclose any use of AI tools in the preparation of their manuscripts through a dedicated AI Disclosure Statement submitted alongside the manuscript. A standardized version of this statement will be included in the published article.
The disclosure should detail:
- The name and version of the AI tool(s) used (e.g., ChatGPT-4o, May 2025),
- The purpose of use (e.g., language editing, structuring arguments, summarizing literature),
- The extent of human oversight and validation.
When AI tools play a crucial role in the research methodology, such as in AI-driven data analysis or simulation, their application must be thoroughly detailed in the Methods section, rather than merely mentioned in the disclosure statement.
Authorship and Attribution
AI tools cannot be listed as authors or co-authors, nor credited as such. Authorship entails intellectual responsibility, accountability, and the ability to approve the final work—roles that only human contributors can fulfill. All listed authors must:
- have made significant intellectual contributions to the conception, analysis, or interpretation of the work,
- approve the final manuscript and agree to its submission, and
- be able to address questions regarding the accuracy or integrity of the work.
Authors are expected to familiarize themselves with Jurnal Elemen’s Publication Ethics Policy, which aligns with COPE and national standards.
Use of AI in Figures, Images, and Artwork
The use of GenAI or AI-assisted tools for generating or manipulating scientific images, figures, or data visualizations is strictly forbidden, unless such use is an explicit, reproducible component of the research methodology (e.g., AI-based medical image reconstruction or segmentation). In these methodological instances:
- The role of the AI tool must be thoroughly documented in the Methods section, including the tool's name, version, manufacturer, and a detailed step-by-step workflow.
- Authors may be required to provide raw or unprocessed images or intermediate outputs for editorial verification.
- Adjustments to brightness, contrast, or color balance are allowed only if they do not distort, obscure, or eliminate original data features.
For non-scientific visuals:
- Graphical abstracts and article artwork must be created by humans; AI-generated artwork is not allowed.
- Cover art may include AI-generated components only with prior written approval from the Editor-in-Chief and Publisher, proof of rights clearance, and proper attribution.
For Reviewers
The Use of Generative AI and AI-Assisted Technologies in the Peer Review Process
Peer review is fundamental to scholarly integrity, and Jurnal Elemen is committed to maintaining the highest standards of confidentiality, trust, and intellectual accountability throughout this process.
When reviewers are invited to assess a manuscript, they must treat the submission with strict confidentiality. Under no circumstances should they upload the manuscript—or any part of it, including figures or data—into generative AI or AI-assisted tools. Doing so could violate the authors’ proprietary rights, breach data privacy (especially if the manuscript contains personally identifiable or sensitive information), and undermine the ethical foundation of peer review.
This confidentiality also applies to the review report itself, which may contain unpublished insights, critiques, or information about the authors. Therefore, reviewers must not input their draft or final review reports into AI tools, even for purposes such as language polishing or readability enhancement, as this could compromise confidentiality and expose sensitive content to third-party systems.
Importantly, the substantive evaluation of a manuscript must be conducted solely by the human reviewer. Generative AI tools are not allowed to assist in the scientific assessment of a paper, including evaluating originality, methodology, validity of conclusions, or scholarly contribution. Such tasks require critical judgment, domain expertise, and ethical discernment—capacities that remain uniquely human. AI-generated critiques may be inaccurate, incomplete, biased, or lack contextual understanding, thereby jeopardizing review quality and fairness.
Reviewers are fully responsible and accountable for the content, tone, and integrity of their reports.
Authors who used AI tools during manuscript preparation are required to disclose this in a dedicated AI Disclosure Statement (typically placed before the References section), as outlined in Jurnal Elemen’s Author Guidelines. Reviewers may consult this statement to inform—but not substitute—their independent assessment.
For Editors
The Use of Generative AI and AI-Assisted Technologies in the Journal Editorial Process
Manuscripts submitted to Jurnal Elemen are considered confidential from submission until either publication or formal rejection. Editors are prohibited from uploading any part of a manuscript—such as text, data, figures, or supplementary files—into generative AI or AI-assisted tools. Doing so could compromise authors' intellectual property rights, breach confidentiality agreements, and, if the manuscripts contain personally identifiable or sensitive information, violate data privacy regulations.
This confidentiality obligation extends to all editorial correspondence, including decision letters, handling editor notes, reviewer invitations, and internal communications. Editors must avoid inputting such materials into AI tools, even for minor language refinement, as this could inadvertently expose confidential or sensitive information to external systems.
Editorial decision-making is a fundamental scholarly responsibility that requires independent professional judgment, contextual understanding, and ethical discernment. Consequently, Jurnal Elemen prohibits the use of generative AI or AI-assisted tools for:
- evaluating a manuscript’s scientific merit, originality, or significance;
- synthesizing or interpreting reviewer comments; or
- drafting editorial decisions or recommendations, such as acceptance, revision, or rejection.
While AI may assist with administrative tasks (see below), final editorial judgments must be made solely by human editors, who are fully accountable for the integrity of the process, the soundness of decisions, and all communications with authors and reviewers.
Authors are allowed to use AI tools during manuscript preparation, provided they transparently disclose this use in a dedicated AI Disclosure Statement, typically placed before the References section. Editors should review this statement as part of the initial screening, but it does not replace the critical appraisal of the manuscript’s content and compliance with journal policies.
If an editor suspects misuse of AI, whether by an author (such as undisclosed AI-generated content or fabricated references) or by a reviewer (such as an AI-generated review report), they must document their concerns and escalate the matter to the Editor-in-Chief or Publisher for investigation and action in accordance with Jurnal Elemen's Publication Ethics Policy and COPE guidelines.