To help build public trust in online artificial intelligence (AI) systems, Oxford will lead a project to develop a tool that allows users to evaluate and critique algorithms.
The ‘ReEnTrust: Rebuilding and Enhancing Trust in Algorithms’ project has received a grant of almost £1 million from the EPSRC. As well as setting up an online tool to help users assess major web-based platforms, the initiative aims to develop techniques to allow all parties to explain their views and suggest possible compromise solutions when trust in algorithms has been lost.
Researchers from the Universities of Oxford, Nottingham and Edinburgh also plan to develop a methodology for deriving a ‘trust index’ that would allow users to easily assess the trustworthiness of web-based platforms.
‘We will be working with industrial partners to make sure that our research relates to real-world examples, and that it fairly represents all those affected by this work,’ said Professor Marina Jirotka from Oxford’s Department of Computer Science, who is leading ReEnTrust. ‘We also believe that by talking extensively to users, platform service providers and other stakeholders while developing this tool, we will gain a deeper understanding of what makes AI algorithms trustable.’
Marina’s team will use their expertise in responsible research and innovation to help create methodologies that support responsible development of AI online systems. The project will also consider to what extent user trust can be regained through technological solutions, and whether it might be necessary and appropriate to seek other ways to rebuild trust, for instance through policy, regulation or education.
The ReEnTrust project is one of 11 initiatives that have been allocated funding by the EPSRC after a call for research to further the understanding of Trust, Identity, Privacy and Security (TIPs) issues in the digital economy. Collectively, the projects will receive almost £11 million in funding over the next three years. Oxford is also involved in the Realising Accountable Intelligent Systems (RAInS) project, a multi-disciplinary initiative led by the University of Aberdeen with £1.1 million EPSRC funding. Working with the public, the legal profession and technology companies, RAInS aims to create prototype solutions to allow developers to provide secure, tamper-proof records of intelligent systems’ characteristics and behaviours. Professor Rebecca Williams, Professor of Public Law and Criminal Law at Oxford’s Law Faculty, will work on the project with researchers from the Universities of Aberdeen and Cambridge. For more on this project, see the University news page.