
The Department of Computer Science will lead the EPSRC-funded project entitled ‘ReEnTrust: Rebuilding and Enhancing Trust in Algorithms’, while researchers from the Faculty of Law will play a key role in a related project to make future artificial intelligence systems more transparent and accountable.

Oxford to lead major new project to help establish trust in artificial intelligence

To help rebuild public trust in artificial intelligence (AI) online systems, Oxford will lead a project to develop a tool that allows users to evaluate and critique algorithms.

The ‘ReEnTrust: Rebuilding and Enhancing Trust in Algorithms’ project has received a grant of almost £1 million from the EPSRC. As well as setting up an online tool to help users assess major web-based platforms, the initiative aims to develop techniques to allow all parties to explain their views and suggest possible compromise solutions when trust in algorithms has been lost.

Researchers from the Universities of Oxford, Nottingham and Edinburgh also plan to develop a methodology for deriving a ‘trust index’ that would allow users to easily assess the trustworthiness of web-based platforms.

‘We will be working with industrial partners to make sure that our research relates to real-world examples, and that it fairly represents all those affected by this work,’ said Professor Marina Jirotka from Oxford’s Department of Computer Science, who is leading ReEnTrust. ‘We also believe that by talking extensively to users, platform service providers and other stakeholders while developing this tool, we will gain a deeper understanding of what makes AI algorithms trustable.’

Marina’s team will be using their expertise in responsible research and innovation to help create methodologies that support responsible development of AI online systems. The project will also consider to what extent user trust can be regained through technological solutions, and whether it might be necessary and appropriate to seek other ways to rebuild trust, for instance through policy, regulation or education.

The ReEnTrust project is one of 11 initiatives that have been allocated funding by the EPSRC after a call for research to further the understanding of Trust, Identity, Privacy and Security (TIPS) issues in the digital economy. Collectively, the projects will receive almost £11 million in funding over the next three years.

Oxford is also involved in the Realising Accountable Intelligent Systems (RAInS) project, a multi-disciplinary initiative led by the University of Aberdeen with £1.1 million EPSRC funding. Working with the public, the legal profession and technology companies, RAInS aims to create prototype solutions to allow developers to provide secure, tamper-proof records of intelligent systems’ characteristics and behaviours.

Professor Rebecca Williams, Professor of Public Law and Criminal Law at Oxford’s Law Faculty, will work on the project with researchers from the Universities of Aberdeen and Cambridge. For more on this project, see the University news page.