AI Policies
International Journal of Linguistics, Literature, Language Teaching, and Culture Studies recognizes the growing role of artificial intelligence (AI) and machine learning technologies in scholarly publishing. With the increasing accessibility of generative AI tools such as ChatGPT, Gemini, Claude, and others, the journal underscores the need to balance their potential benefits with the accompanying ethical responsibilities.
This policy establishes the journal’s official position on the use of AI tools by authors, reviewers, and editorial staff. It aims to ensure transparency, accountability, and adherence to the ethical standards set by Elsevier and the Committee on Publication Ethics (COPE).
1. Use of AI Tools by Authors
Authors may employ AI tools to assist in the preparation of manuscripts, provided such use is conducted transparently, responsibly, and ethically. Acceptable applications include language refinement, grammar enhancement, and reference formatting. However, AI tools must not be used to generate substantive content that replaces original scientific reasoning, conceptualization, or interpretation of results.
Authors retain full responsibility for the accuracy, integrity, and originality of all content submitted, including sections where AI tools have been employed.
In line with the COPE Position Statement (2023), AI tools cannot be credited as authors under any circumstances. Authorship requires accountability, consent, and intellectual contribution—criteria that AI tools cannot meet. Accordingly, all listed authors must be human individuals who satisfy established authorship standards.
2. Disclosure of AI Use
Authors must provide full disclosure of any AI tools used during manuscript preparation. This includes, but is not limited to, applications for text generation, image creation, data analysis, coding support, or translation.
Such disclosures should be placed in the Acknowledgements section and must include the name of the AI tool, its version, and the purpose of use.
Example disclosure statement:
“The authors used OpenAI’s ChatGPT (version [X]) to refine the wording of the Introduction section. All generated outputs were reviewed and verified by the authors for accuracy and integrity.”
Failure to disclose AI use constitutes a violation of ethical publishing standards and may lead to manuscript rejection or post-publication retraction.
3. Author Responsibility and Accountability
Authors bear complete responsibility for the content and scholarly validity of their manuscripts. When AI tools are used, authors must ensure that all outputs are accurate, original, unbiased, and ethically sound.
Authors are expected to verify that AI-generated text does not contain:
- Fabricated or “hallucinated” references
- Factual inaccuracies or unsupported claims
- Biased or discriminatory language
- Plagiarized or misattributed content
The submission of wholly or predominantly AI-generated manuscripts without human oversight or disclosure constitutes unethical conduct and will be handled in accordance with COPE and International Journal of Linguistics, Literature, Language Teaching, and Culture Studies’ editorial integrity policy.
4. Use of AI in Peer Review
International Journal of Linguistics, Literature, Language Teaching, and Culture Studies requires all peer reviewers to uphold the principles of confidentiality, integrity, and scholarly competence. Reviewers must not use AI tools to generate or structure peer review reports, nor may they input confidential manuscript content into AI systems, as this may breach data protection and confidentiality obligations.
If reviewers wish to use AI tools for non-content-related purposes (e.g., linguistic refinement of their review text), they must ensure that no confidential information is shared and must disclose such use to the handling editor. The editorial board reserves the right to reject reviews found to have been produced through inappropriate AI assistance.
5. Editorial Use of AI
Editorial staff at International Journal of Linguistics, Literature, Language Teaching, and Culture Studies may employ AI tools for administrative and non-decisional tasks such as plagiarism detection, reference verification, formatting checks, and language editing. However, AI tools will not be used to determine editorial outcomes.
All acceptance and rejection decisions are made exclusively by qualified human editors to preserve accountability, transparency, and ethical rigor in the decision-making process.
6. Ethical Considerations and Bias Prevention
AI applications must not compromise ethical integrity or academic objectivity. Authors, reviewers, and editors are urged to critically evaluate AI-generated outputs to prevent bias, misrepresentation, or the inclusion of misleading or culturally insensitive content.
Over-reliance on AI tools, particularly in tasks requiring critical interpretation, evaluative judgment, or creative synthesis, should be avoided. The journal encourages responsible innovation grounded in ethical scholarship.
7. Violations and Consequences
Any instance of ethical misconduct involving the misuse of AI—such as the fabrication of data or references, undisclosed AI-generated content, or misrepresentation of authorship—will be treated seriously. International Journal of Linguistics, Literature, Language Teaching, and Culture Studies reserves the right to:
- Reject the manuscript upon submission
- Request formal corrections or revisions
- Retract published articles
- Notify the authors’ affiliated institutions when warranted
All such cases will be investigated following the COPE guidelines on publication misconduct.
8. Policy Review and Updates
This AI policy will undergo periodic review and revision to reflect technological developments and evolving best practices in academic publishing. International Journal of Linguistics, Literature, Language Teaching, and Culture Studies remains committed to fostering responsible use of AI while upholding the quality, transparency, and integrity of scholarly communication.
References
- Elsevier (2023). Generative AI Policies for Journals. Retrieved from: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
- Committee on Publication Ethics (COPE) (2023). Position Statement on Authorship and AI Tools. Retrieved from: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
- Committee on Publication Ethics (COPE) (2024). Discussion Document on AI and Peer Review. Retrieved from: https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review
- Committee on Publication Ethics (COPE) (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI in Publishing. Retrieved from: https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers
