Meet the women driving Oxford’s AI research

Once upon a time the concept of machines that could think and act like people was a fantasy - or, more often than not, the recipe for a male-dominated blockbuster movie. Fast forward thirty years and artificial intelligence is transforming, at pace, both the world around us and the way we live, work and communicate within it.

Despite its topical prevalence, AI remains a social hot potato, regarded as a gift or a curse depending on who you talk to, and its purpose is widely debated. Well-documented issues include the technology’s impact on the labour market and concern around the gender gap behind the scenes - evidenced by the perceived white male bias in the algorithms being generated. However, the field is gradually changing, and more women are not only building a future in tech but driving some of the incredible breakthroughs that are shaping our society.

As the University prepares for its first AI Expo event next week, the women closing the interdisciplinary AI research gender gap at Oxford University discuss their experiences, career highlights, and some of the biggest challenges facing the industry with the Science Blog.


Professor Marina Jirotka. Image credit: Marina Jirotka

Marina Jirotka is Professor of Human-Centred Computing, Associate Director of the Oxford e-Research Centre and Associate Researcher at the Oxford Internet Institute.


Putting people at the heart of computing

As Professor of Human-Centred Computing, Associate Researcher at the Oxford Internet Institute and Governing Body Fellow at St Cross College, Marina Jirotka's work focuses on keeping people at the heart of technological innovation. Her research group undertakes projects that aim to enhance the understanding of how technology affects human collaboration, communication and knowledge exchange across all areas of society, in order to inform the design and development of new technologies.

What is human-centred computing and how did you come to specialise in it?

Human-centred computing puts people at the heart of computing, so that they have some control over how technology affects their lives. However, as technology has become more advanced, particularly with new developments in AI and machine learning, this becomes harder. I am very keen to keep people at the centre of the drive towards machine learning.

I became interested in computational models of the brain in the 1980s when I was studying anthropology, and my interest in AI and its societal impact grew from there. I took further studies in computing and artificial intelligence after that.

My first research position was on one of the Alvey projects. Alvey was a large UK government-sponsored research programme in IT and AI which ran from 1983 to 1987. The programme was a reaction to the Japanese fifth generation computer project and defined a set of strategic priorities for channelling British research into IT improvements. I was involved in building a planner to give people advice about the welfare benefits system. The final product was a great example of early interdisciplinary collaboration, fusing STEMM technology with social sciences understanding.


What drew you towards a career in science and AI?

Science has always been a big part of my family; my parents and my grandfather were chemists in the Czech Republic. My mother was actually one of the first women in the country to get a degree at Charles University.

I became interested in computational models of the brain in the 1980s when I was studying social anthropology and psychology. My interest in AI and its societal impact grew from there when I studied computing and artificial intelligence.

As a society we are striving to create artificial intelligence without really understanding what intelligence is, or how to get the most from it, and that is a problem. I personally like seeing where AI can actually go and where it can take us - what it really can do compared with the Hollywood hype, and then using that knowledge to hopefully make a difference.

How has the field changed for you as a woman in AI, and what can be done to encourage more women to join the field?

When I first started I was a real oddball, not only a woman but also a social scientist. But now that there are more of us, I notice it less. I can’t generalise, but I think the human-centred theme could be a big draw. In my experience women are keen to see the outcome of an application and understand what their work and contribution will actually achieve, whereas some of my male colleagues are more driven by product development and the theoretical side.

What are the biggest challenges facing the field?

It is important to consider the kind of world that we want to live in, build from there and start thinking about the impact that developments will have on society and institutions. At the same time, it is paramount to involve and engage people in those visions, so that human society is taken on the AI journey as well, rather than left behind.

What research are you most proud of?

In the early days it was my contribution to Alvey, but currently I would say it is the Digital Wildfire Project.

The project grew from a desire to understand and address the spread of hate speech and misinformation online - for example, public reaction to events such as Hurricane Sandy, during which false reports about the New York Stock Exchange flooding spread rapidly, and crucially the spread and impact of hate speech.

In everyday society there are safeguards in place to protect people from hate speech, but in an online environment these defences do not exist. As a result, people sometimes feel that they can say things and behave in ways that they wouldn’t in any other area of life. People are subjected to abuse that they would not normally be faced with.

Our research looked at this phenomenon and offered advice to people on how to engage with and control it. We worked with a number of different stakeholders, from those trying to prevent and manage it, such as the police and schools, to those who are most vulnerable: children.

More recently we have worked with policy makers, such as the House of Lords Select Committee on Communications, to advise on and support children’s digital rights. I was specialist advisor to the committee, which produced a report, “Growing Up with the Internet”, making recommendations on how internet policy should involve the participation of multiple stakeholders and promote digital literacy for children. The report was debated in the House of Lords. Following this, the Secretary of State for Digital, Culture, Media and Sport responded to the Committee’s report and announced the launch of the Government’s Green Paper for an Internet Safety Strategy, which proposed a digital literacy programme involving different stakeholders in order to protect children when they are online.

I have learned so much from this project, particularly about how government works. It has also been a great way of engaging with the public. We have worked with technology companies and sponsors like Santander to engage with young people and get them to share their experiences online through art and other channels.

When I first started I was a real oddball, not only a woman but also a social scientist. But now that there are many more of us, it isn’t unusual - it’s vital. The challenges that we have to face in the 21st century can’t be solved by one discipline or mindset alone.

What excites you most about the future of AI?

Given the state of the planet, the ways in which AI is being used in areas that humans have not been able to access, such as extreme environments, and to help wildlife and conservation are really exciting.

I’m also equally interested in, and worried by, transhumanism - the notion of embedding technology into a human in order to give them superhuman abilities. There is already research taking place in the US which aims to improve people’s cognitive faculties through neuroscience.

What can be done to help public understanding of AI?

People want to know how things apply to them and how something is going to affect them. We need to convey the current knowledge about machine learning to people so that they understand its potential and capabilities. In many ways this is much more interesting than the current media hype.

What role does interdisciplinary collaboration play in machine learning and AI?

It is imperative - to the point that research councils now actively encourage interdisciplinary work. The challenges that we have to face in the 21st century may not be solved by one discipline alone.

You are chairing the AI & Ethics debate panel at next week's AI Expo, what are your thoughts on the event?

The AI Expo is a great idea that will hopefully serve as a reminder of Oxford’s commitment to supporting well-considered machine learning progression. I hope it will inspire more events of its kind in the future.

Learn more about Professor Jirotka’s research:

 Digital Wildfire Project: #TakeCareOfYourDigitalSelf


In part two we meet a tech lawyer working to support transparency in the use of AI and robotics in society.

Story courtesy of The University of Oxford Science Blog