AI Policy

The Journal of Sustainable Equity and Social Research (JSESR) supports the use of contemporary methods, tools, and technologies in the conduct and presentation of scientific research. At the same time, the journal prioritizes academic integrity, ethical principles, and human-centered knowledge production in all stages of scholarly publishing.

While Generative Artificial Intelligence (GAI) tools offer significant opportunities and practical advantages for researchers through their capacity to generate text, images, and data, their rapid development also raises substantial ethical risks when these tools are used without appropriate oversight. JSESR therefore adopts an AI policy grounded in transparency, accountability, and scholarly responsibility.

Potential Ethical Risks in the Use of Generative AI

The use of GAI tools in scientific research and publication may lead to ethical violations in the following situations:

  1. Failure to disclose the use of GAI in content production,
  2. Unauthorized use of content produced by others,
  3. Improper quotation or use of existing literature without appropriate citation,
  4. Use of incorrect or misleading information generated by GAI without verification,
  5. Inclusion of non-replicable or non-transparent data and results in academic literature,
  6. Deepening discrimination against vulnerable or sensitive groups due to biased or limited data,
  7. Collection, storage, transfer, or reuse of personal data in violation of applicable legislation.

Fundamental Principle

Authorship of a scientific article, peer review, and editorial decision-making are responsibilities that can only be attributed to humans.
The Committee on Publication Ethics (COPE) clearly states that artificial intelligence tools:

  • cannot qualify as authors,
  • cannot declare conflicts of interest, and
  • cannot manage copyright or licensing responsibilities.

Accordingly, GAI tools cannot be listed as authors in JSESR and cannot assume scholarly responsibility.

Guidelines for Editors

  • To protect confidentiality and intellectual property rights, editors must not upload submitted manuscripts, or any part thereof, to GAI tools.
  • GAI tools must not be used at any stage of the editorial process, including language editing or stylistic revision.
  • If an editor suspects a violation of the journal’s AI policy by an author or reviewer, the editorial board must be informed.
  • Editors should carefully review authors’ AI usage declarations and may request additional clarification or documentation.
  • Manuscripts suspected of unauthorized GAI use may be investigated editorially and, if necessary, rejected.

Guidelines for Reviewers

  • Manuscripts under review are confidential documents.
  • Reviewers must not upload manuscripts or any part of them to GAI tools, as this may pose serious risks to confidentiality and intellectual property.
  • Peer review reports must not be generated, edited, or enhanced using AI tools, even for language or readability purposes.
  • If a reviewer suspects unauthorized GAI use beyond the uses permitted by this policy, they are expected to inform the handling editor.

Guidelines for Authors

  • Authors are responsible for ensuring that submitted manuscripts are original, scientifically sound, and ethically produced.
  • A manuscript may not be written in whole or in part by GAI tools.
  • GAI tools cannot be listed as authors and cannot assume authorship responsibility.
  • The use of AI-generated text, images, audio, video, or other materials that may violate copyright or personal rights is strictly prohibited.
  • Manuscripts in which GAI replaces the author’s intellectual contribution will not be considered for review.
  • Submissions may be screened with plagiarism detection software and AI-detection tools when necessary.
  • Manipulation of images, tables, or figures using AI (e.g., enhancement, removal, concealment, or addition of elements) is not permitted. Adjustments such as brightness, contrast, or color balance are acceptable only if they do not obscure or alter original information.

Permitted Uses of Generative AI

When used ethically and transparently, GAI tools may be employed in the following limited contexts:

  • Idea generation during the early stages of research (brainstorming, identifying research gaps, hypothesis development),
  • Supporting online literature searches and classification of sources (authors remain responsible for verifying accuracy and quality),
  • Data analysis using AI models trained by the authors themselves, provided that code sources are clearly stated,
  • Visualization of research steps or transformation of original data into tables or graphs,
  • Translation and language editing (final responsibility rests with the authors).

Reference management software (e.g., Zotero, Mendeley, EndNote) is excluded from this policy and may be used without declaration.

Mandatory Disclosure of AI Use

If GAI tools are used in a study, authors must clearly specify:

  • the full name and version of the AI tool,
  • which stage of the research it was used in,
  • how it was used, and
  • its contribution to the study.

This information must be provided in the Methodology or Acknowledgements section.
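For example, a disclosure placed in the Acknowledgements section might read (illustrative wording only, not a required template): “ChatGPT (GPT-4o, OpenAI) was used during the literature review stage to classify candidate sources by theme; all suggestions were verified against the original publications by the authors.”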

Citing Artificial Intelligence Tools (APA 7)

In-text citation:
… may lead to ethical concerns (OpenAI, 2025).

Reference list:
OpenAI. (2025). ChatGPT (GPT-4o) [Large language model]. https://chat.openai.com/

Final Note

Generative Artificial Intelligence refers to systems such as ChatGPT or DALL·E that produce content in the form of text, images, or other media.
Even if substantial modifications are made afterward, content whose primary creator is an AI tool is considered “AI-generated.”

Authors bear full responsibility for the originality, accuracy, reliability, validity, and integrity of their submissions. JSESR expects all authors to use AI tools responsibly, in accordance with publication ethics and the journal’s editorial policy.

For Further Information

  • Council of Higher Education (Türkiye): Ethical Guidelines on the Use of Generative AI in Scientific Research and Publication
  • Committee on Publication Ethics (COPE): Authorship and AI Tools