Test bed description

Reliance on video data in disciplines such as animal and wildlife sciences has increased in recent years with technological advances in data collection. The accumulation of large video databases has huge potential for tackling scientific questions that require longitudinal data. However, management of these resources is lacking, and limited access, together with bottlenecks created by manual processing, is constraining the flexible application of cutting-edge artificial intelligence research.

An international collaboration led by Oxford has pioneered a new field of ‘computational ethology’. It has already produced deep learning computer vision models that can identify wild animals from video footage and automatically process and analyse decades’ worth of data collected in the field.

Despite this progress, bringing together methods and workflows from these disparate fields poses considerable challenges: much of the data is hosted on local servers, restricting remote access for international collaborators and the computer vision experts who train deep learning models. There is a need for resources to host and process this data, as well as for new reproducible workflows that can manage increasingly unmanageable ‘data lakes’ of multimedia streams collected in the field. The goal of this test bed is to address these challenges by leveraging cloud-based workflows which standardise the storage, annotation and processing of video data using computer vision pipelines.
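To make the intended workflow concrete, the sketch below shows one way video metadata might be standardised ahead of annotation and model training. All names and the schema are illustrative assumptions, not the project's actual pipeline:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema: a standardised record for one video clip in the
# shared store, so annotation tools and training pipelines agree on fields.
@dataclass
class VideoClip:
    clip_id: str                  # unique identifier within the archive
    site: str                     # field site where footage was collected
    year: int                     # year of collection
    annotated: bool = False       # has the clip been labelled yet?
    labels: List[str] = field(default_factory=list)

def pending_annotation(clips: List[VideoClip]) -> List[VideoClip]:
    """Return clips still awaiting manual or model-assisted annotation."""
    return [c for c in clips if not c.annotated]

def annotate(clip: VideoClip, labels: List[str]) -> VideoClip:
    """Attach labels and mark the clip as processed."""
    clip.labels = list(labels)
    clip.annotated = True
    return clip

# Example: two clips from a hypothetical archive; one already labelled.
clips = [
    VideoClip("site_a_001", "site_a", 2012),
    VideoClip("site_a_002", "site_a", 2013, annotated=True,
              labels=["chimpanzee"]),
]

todo = pending_annotation(clips)          # only the unlabelled clip
annotate(todo[0], ["chimpanzee"])         # label it, marking it done
```

In a cloud deployment, records like these would live alongside the video objects themselves, letting remote collaborators query for unannotated footage without downloading the full archive.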

PIs: Andrew Zisserman, Susana Carvalho, Daniel Schofield
