Generative AI use in the PhD programme – Traffic light model principles
Alignment with Institutional, TENK, and EU Guidelines
These guidelines are informed by:
- The Finnish National Board on Research Integrity (TENK)
- The European Commission’s Living Guidelines on the Responsible Use of Generative AI in Research (2024)
- Hanken’s Guidelines on the Responsible Use of Generative AI in Research
Doctoral researchers must adhere to these frameworks, including GDPR, data protection, and ethical conduct. Generative AI is not a co-author and cannot be held accountable for research outputs.
Transparency and Documentation
All generative AI use in doctoral studies must be transparently documented (see section 3), and doctoral researchers should always be able to provide intermediate versions of their work created before any generative AI was applied. Doctoral researchers are ultimately responsible for the ethical and transparent use of generative AI in their work. Supervisors, examiners, and doctoral committees provide guidance, but responsibility and decision-making lie with the individual doctoral researcher. Doctoral researchers are encouraged to reflect on generative AI's role in research and ethics when reporting.
Traffic Light Categories and Principle
The traffic light model is based on the principle of intellectual authorship: tasks requiring original academic contribution must be performed by the doctoral researcher, while supportive tasks may involve generative AI under strict transparency.
🟥 Red – Not allowed
Tasks where the doctoral researcher’s own authorship and intellectual work are essential and cannot be compromised. Generative AI must not be used in any way for these tasks.
🟨 Yellow – Conditional use
Tasks where generative AI can assist but must not replace critical thinking or originality. Use requires transparency, documentation, and critical evaluation.
🟩 Green – Allowed
Tasks that may involve generative AI, provided doctoral researchers adhere to institutional, TENK, and EU guidelines and document all use transparently.
Important! Usage guidelines can vary by course and research context. Practices do not always follow these generic examples; therefore, always refer to the course- or context-specific description and its stated generative AI instructions and policy. Examiners are responsible for clearly informing doctoral students about how generative AI may be used during the course or activity. They may also require disclosure of the tools used.
1. Generative AI Use in Doctoral-Level Courses
Applicability:
The traffic light model below applies to all doctoral-level courses given at Hanken, including core research methodology, discipline-specific seminars, electives, and workshops.
Important Notes:
- Yellow and Green categories require full reporting of generative AI use and retention of intermediate versions of work created prior to generative AI involvement.
- Instructions provided by the examiner or course instructor take precedence over these general guidelines.
- Some learning tasks may require active collaboration with generative AI to achieve the intended learning outcomes.
| Category | Guidelines – Activities |
| --- | --- |
| 🟥 Red | Any task that demonstrates your own academic competence—such as writing essays, research proposals, or assessed papers—must be completed entirely by the student. Generative AI cannot be used to create, rewrite, or substantially shape content that is graded or used to evaluate your learning. |
| 🟨 Yellow | Generative AI may support peripheral tasks like brainstorming ideas or improving clarity, but it cannot be used to provide factual content, perform analysis, or write sections that demonstrate your mastery of the subject. Always verify course-specific rules and critically review any generative AI-assisted input. |
| 🟩 Green | Tasks that do not compromise authorship—such as literature searches, language polishing, code generation, or creating visualizations—can involve generative AI. Some tasks may even require you to collaborate and co-create with generative AI; if so, this will be specified in the course instructions. Document all use and ensure outputs are critically assessed. |
2. Generative AI Use in Doctoral Thesis
Applicability:
The traffic light models below apply to all components of the doctoral thesis at Hanken, including the kappa, essays, and literature review, as well as the research process. The declaration and documentation of generative AI use must be stated in the kappa.
Yellow and Green categories require full reporting of generative AI use and retention of intermediate versions of work created prior to generative AI involvement.
Thesis Work (Kappa, Essays, Literature Review):
| Category | Guidelines – Activities |
| --- | --- |
| 🟥 Red | Core intellectual work—analysis, interpretation, discussion, and conclusions—must be the student's own. Generative AI cannot be used to write or rephrase these sections or substitute for your academic contribution. |
| 🟨 Yellow | Generative AI may assist with language refinement or structural suggestions, but it cannot be used to create substantive content or arguments. All generative AI-supported input must be critically evaluated and integrated with your own reasoning. |
| 🟩 Green | Generative AI can help with tasks that do not affect authorship, such as compiling references, mapping literature, visualizing trends, or generating code or figures to support analysis. Document all use and review outputs for accuracy and relevance. |
Note: Published articles must follow journal-specific generative AI disclosure policies. For unpublished work, include the generative AI use declaration in the kappa.
Research Process (Planning, Research Seminar, Logbook, Supervision):
| Category | Guidelines – Activities |
| --- | --- |
| 🟥 Red | Personal reflections, decision-making, and communication with supervisors must remain entirely the student's own. Generative AI cannot be used to write reflective logs or make research decisions. |
| 🟨 Yellow | Generative AI may support planning or suggest ideas but cannot replace the student's judgment. Reflections supported by generative AI must include the student's own insights and reasoning. Always verify information and references against original sources to avoid plagiarism, fabricated content, or fabricated references. |
| 🟩 Green | Generative AI can assist with structuring project plans, creating timelines, and suggesting resources. Document all use and critically assess its relevance. |
3. Declaration and Documentation of Generative AI Use
To ensure transparency and ethical integrity in doctoral research, all use of generative AI must be both declared and documented by the doctoral researcher. This requirement aligns with TENK guidelines, EU guidelines, and Hanken's policies. Important! Doctoral researchers should always be able to show intermediate versions of their work from before and after generative AI was used.
Placement of the Declaration in the Thesis
The declaration and documentation of how generative AI was used must be stated in the kappa.
Important! For published articles, follow the journal's disclosure requirements.
The documentation log must include the following:
• Whether generative AI was used
If not used, state: “No generative AI tools were used in the preparation of this work/thesis.”
• Where and how it was used
Specify the work/thesis components (e.g., kappa, essays, literature review, planning).
- What specific tasks did you use generative AI for?
- How did generative AI use affect your work?
- What were the benefits or limitations?
- How did you make sure that the information was correct?
• Which tools were used
Full citation (e.g., Microsoft Copilot v1.0, ChatGPT-4, Elicit). The tools need to be properly referenced in the bibliography.
• Purpose and prompts
Describe the task in more detail (e.g., language editing, literature mapping, figure generation).
Provide representative examples of the prompts you used, not necessarily every single one. For instance:
- “Summarize the key findings of this article in 150 words.”
- “Generate a conceptual diagram illustrating the research process.”
• Ethical and legal considerations
Address privacy, data protection, intellectual property, and transparency.
- Did you anonymize data?
- Did you avoid copying AI-generated text?
- Did you cite tools used?
• Researcher’s own contribution
Reflect on how generative AI supported and contributed to—not replaced—your intellectual work.
- How did the text benefit from the use of generative AI in different stages of the writing process?
- How do you think the text would have evolved without the use of generative AI?
- How did generative AI contribute and support your own learning and work?
4. Action plan in case of suspected misconduct
If there are indications that the doctoral researcher may not have followed these guidelines and principles, the examiner or supervisor should address the matter with the doctoral researcher as early as possible.
Initial Discussion
- Supervisor/examiner meets with the doctoral researcher promptly.
- Researcher presents intermediate versions and AI usage log (documentation).
- Presumption of innocence applies throughout the process.
Consultation
- If concerns persist, supervisor/examiner consults the Programme Director.
- If needed, escalate to the Director for Education and Digital Services for procedural guidance.
Formal Hearing
- Convene a hearing if suspicion remains unresolved.
- Ensure:
- Adequate representation (doctoral researcher, supervisor, programme director).
- Clear explanation of potential repercussions.
- Opportunity for the doctoral researcher to respond.
Decision and Reporting
- If the suspicion cannot be dispelled, the supervisor/examiner submits a formal report to the Disciplinary Committee.
- Committee determines sanctions in accordance with institutional policy.