Generative AI Use
This section includes important information and guidance for PGR students on the use of Generative AI in summative assessments.
Generative AI Use for PGR Summative Assessment
Generative Artificial Intelligence (Gen AI) tools are defined as end-user applications whose technical implementation employs a generative model based on deep learning methodologies [1]. The divisions encourage the responsible exploration and informed use of Gen AI tools (including but not limited to ChatGPT, Claude, Copilot, Bing Chat) within postgraduate academic study and research. Gen AI represents a significant extension of established digital resources such as search engines, spell-checkers, and code debuggers, offering enhanced capabilities in information synthesis, written communication, and code generation.
The breadth and rapid development of these technologies and their application require careful consideration in relation to originality, attribution, and the maintenance of academic integrity. This document provides clarification of University-level guidance in the context of postgraduate research student summative assessment stages, including transfer and confirmation of status and DPhil and MSc by Research thesis production and examination. It should be read in conjunction with the University’s policy on the use of generative AI in Research.
Students are encouraged to reflect critically on their AI use and should be prepared to have a nuanced discussion of the appropriate use of AI during any viva voce examinations should the examiners require.
Transfer and confirmation assessors and DPhil examiners are invited to discuss Gen AI use with the student. This will include questions on how far Gen AI has been used by the student in their research and preparation of materials submitted for or presented at transfer and confirmation of status or in preparation of the final thesis. Students should be able to give a detailed and critically reflective account of their use (or lack of use) of Gen AI to their assessors/examiners.
General considerations for the use of Gen AI
The use of AI tools will be ubiquitous in future research and workplaces, opening up new opportunities but also carrying significant risks. It is essential that students acquire a critical familiarity with these tools as they develop, and with how they can be used responsibly within a research context and beyond.
To enable and support the safe and productive use of generative AI tools, the University has agreements with a number of generative AI providers and recently entered into an agreement with OpenAI to provide ChatGPT Edu licences to all members of the University, including students at all levels. This version of ChatGPT is specifically built for universities and includes enterprise-level security and controls. The University's ChatGPT Edu provision includes assurances that interactions will not be used for further training of AI models, providing some level of security for inputs to the system; however, this should not lead to complacency when working with private or confidential data.
The University publishes broad guidance on the safe and responsible use of generative AI tools. Although this guidance is not specifically targeted at research students, it nonetheless constitutes a resource that students should be aware of as a starting point for safe and responsible use of generative AI tools.
It is worth stressing that all DPhil and MSc by Research students are bound by the University's declaration of authorship for all theses, by which they pledge "that the work [they] are submitting is entirely [their] own work, except where otherwise indicated."
Permissible use
The use of local editing tools—such as grammar assistants, code debuggers, and spell-checkers—is permitted and need not be declared. These tools only make small, local changes (for example, fixing spelling, grammar, or small pieces of code), usually affecting just a few words or tokens at a time.
The use of AI tools for background research, language translation, creation of bibliography indices and general subject understanding is allowed and does not have to be declared. However, while Gen AI can assist, it should not replace a student's own critical engagement with their subject, which will be examined by assessors at transfer, confirmation and in the final DPhil viva voce examination. Assessors and examiners are expected to test that the student fully understands the work that they are doing at a deep, technical level.
Use of Gen AI for coding purposes is permitted where the coding serves a purpose in the research but is not the substantive output of the project. Use of Gen AI for this type of coding is now common practice in many research areas and does not differ significantly from sourcing code from open source repositories. Students are responsible for ensuring that any code they use is correct, regardless of its source, and use of Gen AI-generated code should be declared (debugging excluded).
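Where Gen AI has assisted with research code, one lightweight way to keep that use declarable and traceable is a provenance note at the point of use. The sketch below is purely illustrative, not a mandated or University-endorsed format; the function, its docstring wording, and the placeholder for the tool name are all assumptions introduced for this example.

```python
# Illustrative convention (an assumption, not a mandated format) for
# recording Gen AI assistance in research code so it can later be
# declared in a thesis or transfer/confirmation submission.

def moving_average(values, window):
    """Simple moving average over a list of numbers.

    Provenance: initial draft generated with a Gen AI assistant
    (tool name and version would go here), then reviewed, tested,
    and edited by the author, who remains responsible for its
    correctness.
    """
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    # Average each contiguous run of `window` values.
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

print(moving_average([1, 2, 3, 4], 2))  # prints [1.5, 2.5, 3.5]
```

Keeping such notes alongside the code as it is written makes it far easier to assemble an accurate acknowledgment statement at submission time than trying to reconstruct the history later.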
For any use of AI tools, it should be remembered that they may “hallucinate” details or make factual errors, while producing very believable outputs. They may also reproduce others’ content without acknowledgement (leading to classical plagiarism and copyright infringement).
Good academic standards require fact-checking for correctness and, where appropriate, citation of suitable sources for all information presented within the thesis. AI-generated content must not be presented as your own, whether copied or paraphrased, unless explicitly authorised.
Non-Permissible use
University policies on plagiarism and research integrity will be applied to all summative assessments, and students whose work falls short of the standards expected can expect to face action under those policies, including where such failures result from the use of Gen AI.
Substantive original writing by Gen AI, whether used verbatim or closely paraphrased, for chapters or parts of chapters (for example, an introduction, a conclusion, or a literature review) falls under the definition of plagiarism or otherwise constitutes a failure of research integrity, and is therefore not permissible.
The use of generative AI to produce plots or data visualisations directly from prompts is prohibited. Such tools can obscure the data-generation process, alter or invent data, and prevent verification or reproducibility. To ensure transparency and academic integrity, all plots and data visualisations must be created using approved, auditable methods rather than AI-generated outputs. This restriction does not apply to the use of Gen AI for refining existing, verifiable plotting code, where the underlying data and methods remain fully transparent and reproducible. In such cases, the code used should be made available to the examiners without reservation and without exception.
Private or confidential data must not be entered into third-party AI tools. Such information may be stored, transmitted, or reused—either in its original or a processed form—for purposes such as training AI systems or for distribution to external parties. Examples of confidential data include (but are not limited to) non-anonymised interview data, protocols and IP owned by your supervisors’ laboratories, proprietary information supplied by an industrial partner, and data obtained under a licensing agreement. Where confidential data relates to IP, any specific departmental IP policies will apply to permitted use of the data.
Acknowledging and referencing use of generative AI tools
Students are required to include a statement on their use of Gen AI in their final submitted thesis. This requirement is effective as of submission in Trinity Term 2026, but it is recommended that such a statement is included in every thesis submitted from the point of publication of this guidance. The statement should be placed immediately after the abstract. It must include: a formal declaration that any Gen AI use complies with University, divisional and (where applicable) departmental guidance; an account of where and how Gen AI has been used in preparation of the thesis; and a summary of how specific uses of Gen AI will be referenced in the text (this could simply be a reference to a scientifically accepted standard in place at the time of submission of the thesis).
Students should include a statement on their use of Gen AI in their submissions for transfer and confirmation of status. As departmental practice varies as to the format of work submitted for transfer and confirmation, this guidance is neutral as to the placement of such a declaration. It should, however, include: a formal declaration that any Gen AI use complies with University, divisional and (where applicable) departmental guidance; an account of where and how Gen AI has been used in preparation of the submitted/presented work; and a summary of how specific uses of Gen AI will be referenced in the text/presentation (this could simply be a reference to a scientifically accepted standard in place at the time of submission of the work).
A statement of acknowledgment shows the reader/examiner which AI tools have been used and for what specific purpose. The statement should include the following information:
- Name and version of the generative AI tool e.g. ChatGPT5, Copilot
- Publisher (name of company that provides the Gen AI system) e.g. Microsoft, OpenAI
- Brief description of the way in which the tool was used (e.g. background research, refining code)
It is worth remembering that there may be considerable time between your use of AI and writing up your final thesis. You are therefore encouraged to keep a record of any use of generative AI tools which will need to be acknowledged in your final submission. Whereas you can always go back and check references for journal articles, use of generative AI may not be easily reproducible at a later date.
