Policy on the Use of Artificial Intelligence (AI)
The Journal of Social Sciences and Humanities (JSSH) recognizes the increasing use of artificial intelligence (AI) and automated tools in scholarly communication, including generative AI systems such as large language models (LLMs). While these technologies may support certain aspects of manuscript preparation and editorial workflows, their use must not compromise research integrity, transparency, or confidentiality.
This policy outlines the acceptable use of AI and automated tools by authors, peer reviewers, editors, and the journal.
Use of AI by Authors
Authors may use automated tools to assist with language correction, grammar improvement, formatting, or editing for clarity. However, any use of generative AI tools beyond basic language editing must be clearly disclosed in the manuscript. The disclosure should name the tool and state the purpose for which it was used. Authors remain fully responsible for the content of their manuscripts, including verifying the accuracy, validity, and originality of any output generated by automated tools during the research or writing process. Artificial intelligence systems and automated tools cannot be listed as authors because they cannot take responsibility for the work, approve the final manuscript, or meet the authorship criteria required in scholarly publishing. Generative AI tools must not be cited as primary sources in scholarly references.
Use of AI by Peer Reviewers
Peer reviewers must maintain strict confidentiality regarding all manuscripts submitted to the journal. Reviewers should not upload or share manuscript content with generative AI systems or automated tools for the purpose of generating review reports or evaluations. The use of AI to produce peer review assessments is not permitted due to risks including breaches of confidentiality, superficial or non-specific feedback, potential bias, hidden prompts, and the generation of false or fabricated information such as non-existent references. Limited use of tools for language editing or rewriting of review comments may be acceptable if such use does not involve sharing confidential manuscript content and is transparently disclosed to the journal.
Use of AI by Editors
Editors must ensure that editorial decisions are based on scholarly judgment and peer review. Generative AI tools must not be used to independently assess the scientific merit of manuscripts or to generate editorial decisions regarding acceptance, revision, or rejection. Editors may use automated tools to support administrative or technical tasks, such as plagiarism detection, text similarity checking, workflow management, or identifying potential peer reviewers. However, all such tools must be evaluated for reliability, confidentiality, and potential bias before use.
Use of AI Tools by the Journal
The journal may employ automated systems to assist with routine editorial processes, including text similarity detection, screening for research integrity issues, or reviewer suggestions. Any such use is overseen by human editors or journal staff, and automated outputs are always verified before any action is taken. Editorial decisions and integrity assessments remain the responsibility of human decision-makers.
Oversight and Responsibility
The use of artificial intelligence and automated tools must always be subject to human oversight. Authors, reviewers, and editors are responsible for ensuring that these tools are used ethically, transparently, and in accordance with the journal’s publication ethics policies. Failure to comply with this policy may result in actions such as manuscript rejection, correction, retraction, or other measures consistent with the journal’s ethical guidelines.