This version is effective as of January 14, 2026. Links to previous versions of these guidelines can be found at the end of this document.
The history of these guidelines and the current version can be found at https://www.etla.fi/en/ai-ethics.
ETLA requires that research personnel comply with the guidelines mentioned here regarding the professional use of artificial intelligence.
If artificial intelligence has been used in the production of ETLA publications or other documents (e.g., PowerPoint), it should be referenced as follows:
"Artificial intelligence has been used to support human work in the production of this [publication/presentation, etc.] in accordance with Etla's ethical guidelines (version 22.11.2025, see https://www.etla.fi/en/ai-ethics/)."
Principles
ETLA Economic Research is a trusted source of reliable, independent, and scientifically based information that supports political and governmental decision-making. Artificial intelligence (AI) does not alter the fundamental nature of this task; it is a tool for carrying it out more efficiently.
Etla’s position on the use of AI is proactive and supportive. The organization is committed to scientific integrity in research, recognizing the importance of human expertise and the responsibilities that come with it.
It is recommended that AI be used for tasks such as text editing, summarization, and preliminary brainstorming, freeing up time for in-depth analysis, policy conclusions, and other expert work.
Instructions
- Artificial intelligence is an aid and a tool in our work, but at ETLA, we always put human expertise first. AI should be tried and used with an open mind, but in such a way that the researcher is the expert guiding its use, making critical choices and taking responsibility for the result.
- ETLA does not publish any material produced purely by AI in its research activities (exceptional cases are decided separately by management). Researchers are required to check and finalize all material produced by AI, and they remain personally responsible for their output. An error made by AI is never an acceptable explanation or mitigating factor. For example, if an internal draft contains unchecked and unedited AI material, this must be clearly indicated in the draft.
- If AI is a key method in the actual analysis of the research, its use must be reported in the method description in a neutral and honest manner, as with any other analysis software (state the platform or software name and version number, note the time of use, and save the raw output). Note that peer review, for example, may require more extensive information about other uses of artificial intelligence.
- All facts and information derived from artificial intelligence, particularly sources, must be meticulously verified. In its current state, AI is never entirely error-free, and it reflects the stereotypes and biases of the material it is trained on; researchers must work to correct these issues. At ETLA, we are committed to respecting copyright and ensuring that the use of AI does not lead to plagiarism. The work of others must always be referenced in accordance with established scientific standards.
- Artificial intelligence is not a co-author. When AI has been used in accordance with these guidelines, its use should therefore not be indicated in the output, except in the statement on compliance with the ethical guidelines (see the beginning of these guidelines; also section 3, which may require more extensive documentation).
- Never enter confidential material, personal data, or trade secrets into public, unprotected AI platforms. Only fully anonymized or public data may be used in free versions. Even when using secure AI platforms, consider how far the platform in question can be trusted. In general, anonymizing prompts does not affect the result (the actual names of institutions or individuals and specific descriptions of the context of use are not needed), so prompts should be anonymized as a rule. It is also safest not to allow your materials and inputs to be used for AI training; this can be checked and, if necessary, changed in the settings of most AI services.
- If an Etla researcher acts as a peer reviewer or in another role evaluating another person or their output, final conclusions must not be outsourced to artificial intelligence.
Previous versions of the guidelines