Researchers from across the University gathered last week for a full day of discussion and debate on the rapidly evolving relationship between artificial intelligence, ethics, and society.

Vice-Chancellor Professor Irene Tracey gives her opening remarks as the conference opens in the Mathematical Institute

Setting the Tone: Leadership Reflections on the AI Landscape

The day opened with a warm welcome from Vice-Chancellor Professor Irene Tracey, who offered a candid assessment of the current AI landscape and the pressures students now face in an increasingly competitive job market. She encouraged delegates to think beyond immediate challenges: to be ambitious, socially responsible, courageous in their ideas, and active in shaping opportunities that serve both individuals and society.

Following this, Professor Tom Stoneham explored the ethical complexities of AI systems designed to predict human behaviour. His talk highlighted key concerns including embedded bias, diminishing diversity, and the persuasive authority that chatbots and automated systems can subtly hold over users. His remarks underscored a central theme of the day: AI ethics is as much a human issue as a technical one.

AI in Practice: Understanding Before Application

The morning continued with Dr Heloise Stevance, who shared insights from astrophysics, where AI is transforming data analysis and discovery. She reminded participants that while AI can assist in doing the science, it cannot replace deep understanding – “If it works but you don’t understand it, it doesn’t work.”

A lively series of lightning talks followed, showcasing innovative research from departments across MPLS and beyond. Topics ranged from machine learning methods and responsible deployment to applications in environmental science, health, and engineering.

Afternoon Focus: Grimpact, Responsibility, and the Real-World Impact of AI

The afternoon commenced with Professor Gemma Derrick, who introduced the concept of 'grimpact': the unintended negative societal consequences of research and AI. She emphasised the importance of anticipating risks early, maintaining accountability, and remaining alert to how convenience-driven AI can undermine public trust.

Delegates then heard from a diverse range of speakers:

  • Sam McIlroy – Academic Writing with AI: A Practical Introduction for Researchers
  • Dr Dominik Lukeš – Teaching During the Cognitive Revolution: The Future of Learning in the Age of AI
  • Amelia Griffiths – Navigating AI and Intellectual Property: From Protection to Commercialisation
  • Elisha Ward – Beyond the Model: Considering the Equality Dimensions of AI in Research

Each session prompted thoughtful discussion on how AI is reshaping academic practice – from writing and teaching to the way knowledge is governed, shared, and commercialised.

Panel Reflections: A Provocative End to the Day

To close the conference, panellists reconvened for a “suitably provocative and challenging” question time session, chaired by Prof. Jim Naismith, with Prof. Sir Nigel Shadbolt joining the keynote speakers for the roundtable. Conversation flowed across topics including the future of research careers, the role of AI in peer review, the tension between efficiency and integrity, and the broader ethical responsibilities of researchers navigating automated tools. The audience brought sharp and thoughtful questions, ensuring a lively final hour.

Celebrating Excellence: Poster and Lightning Talk Awards

The day concluded with recognition of standout contributions from early‑career researchers.

Best Poster Awards

  • Winner: Lena Easton-Calabria – Predicting heat, reproducing risk? Ethical dimensions of artificial intelligence and machine learning in climate adaptation.
  • Runner-up: Sanaz Kazeminia – Controlling structure-based diffusion models with reinforcement learning: The challenge of reward design.

Best ECR Lightning Talk Awards

  • Theme: Data Science & Quantum Technologies I; Methods & Modelling – Benjamin Walker, Mathematical Institute – Structured linear controlled differential equations.
  • Theme: Data Science & Quantum Technologies II; Governance, Safety & Assurance – Dr Jakob Zeitler, Statistics – Causality, AI and ethics: What works and what doesn’t.
  • Theme: AI, Creativity & Human-Centred Computing – Leslye Dias Duran, Computer Science – How can AI support children’s agency?
  • Theme: Health & Biomedical AI – Dr Yvonne Lu, Engineering Science – Low-latency, privacy-preserving remote health monitoring with in-network machine learning.
  • Theme: Food Security, Biodiversity & Evolution – Jessica Frater, Biology – Participant characteristics and the social acceptability of increased deer culling in Scotland.
  • Theme: Climate, Materials & Space – Dr Xuesong Lu, Engineering Science – Simulation and assessment of ammonia reduction of iron ores in the shaft furnace.

A Community Effort

We extend our sincere thanks to everyone involved in planning the conference, our speakers for their expertise and energy, and all attendees for contributing to a vibrant, thoughtful, and inspiring day. Events like this highlight the extraordinary breadth of research across MPLS – and the importance of coming together to reflect on the ethical, social, and scientific questions that will shape the next decade of AI.