Generative AI usage key principles
In response to the increasing use of Generative AI and Large Language Models (LLMs) in academic writing, The CAAGA Conference maintains a strict policy to ensure transparency, accountability, and academic integrity. Authors must clearly distinguish between original human-authored content and content assisted by automated tools. The following principles outline what is permissible and what is not regarding the use of generative AI during the research and publication process:
- Writing any part of an article using a generative AI tool/LLM is not permissible, including the generation of the abstract or the literature review; the author(s) must remain responsible for the work and accountable for its accuracy, integrity, and validity.
- The generation or reporting of results using a generative AI tool/LLM is not permissible; the author(s) must be responsible for the creation and interpretation of their work and accountable for its accuracy, integrity, and validity.
- The in-text reporting of statistics using a generative AI tool/LLM is not permissible due to concerns over the authenticity, integrity, and validity of the data produced; however, the use of such a tool to aid in the analysis of the work is permissible.
- Copy-editing an article using a generative AI tool/LLM to improve its language and readability is permissible, as this mirrors standard tools already employed to correct spelling and grammar and works from existing author-created material rather than generating wholly new content. The author(s) remain responsible for the original work.
- The submission and publication of images created by generative AI tools or large-scale generative models is not permitted.