Researchers in the Department of Engineering Science have developed new machine learning approaches that significantly reduce the time needed to design and control semiconductor quantum devices, while automating complex tuning processes.

Image: generated illustration of an AI processor chip labelled ‘AI’, surrounded by glowing blue and orange circuit pathways and vertical data streams.

Researchers in the Department of Engineering Science, led by Professor Natalia Ares, have developed machine learning methods that make it faster and easier to design, set up and control semiconductor quantum devices. Working with NVIDIA's accelerated computing platform, the group has demonstrated new ways to cut simulation times, automate the process of tuning devices and allow systems to adapt more quickly to new hardware.

Quantum computers rely on qubits, the quantum equivalent of bits. Unlike the 0s and 1s of conventional computing, qubits can exist in a superposition of both states at once, providing computational capabilities that could solve certain problems exponentially faster than today's machines. Realising this potential remains difficult. Even devices built to the same design often need painstaking adjustment before they behave as reliable qubits. Researchers can spend hours or even days carrying out tuning, often guided by extensive simulations.
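To make the idea of superposition concrete, here is a minimal sketch in Python of a single qubit represented as a two-component state vector. This is a generic textbook illustration, not code from the research; the variable names are ours.

    import numpy as np

    # Basis states |0> and |1> as two-component vectors
    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])

    # An equal superposition: (|0> + |1>) / sqrt(2)
    psi = (zero + one) / np.sqrt(2)

    # On measurement, each outcome's probability is the squared
    # magnitude of its amplitude
    probs = np.abs(psi) ** 2
    print(probs)  # [0.5 0.5] -- an even chance of reading 0 or 1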

‘For a long time, tuning semiconductor quantum devices has been like searching for a needle in a haystack,’ said Professor Ares. ‘With our new approach, we could achieve this in as little as 15 minutes on average.’

The research team has been tackling these bottlenecks by using artificial intelligence (AI) to remove three of the slowest steps. In design, they have trained neural operators (AI models that learn the physics of how devices behave) to predict the behaviour of a device layout in milliseconds instead of hours, and even to suggest designs that match a desired outcome. In experiment, they have built a transformer-based model, TRACS, that automatically identifies the right settings for qubit control. And to cope with scarce data, they have shown that adaptive learning techniques let models adjust to new devices from very limited information, reflecting the device-to-device variability that is common in practice.
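As a rough illustration of the first of these steps, the sketch below shows the general idea of a learned surrogate: a small neural network maps design parameters to a predicted device response in one fast forward pass, replacing an expensive simulation. The architecture, parameter names and dimensions here are assumptions made for this example, not the team's actual neural operators.

    import torch
    import torch.nn as nn

    # Hypothetical surrogate: maps a handful of design parameters
    # (e.g. gate geometry, voltages) to a simulated response curve.
    class SurrogateMLP(nn.Module):
        def __init__(self, n_params=4, n_outputs=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_params, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_outputs),
            )

        def forward(self, x):
            return self.net(x)

    model = SurrogateMLP()
    designs = torch.rand(8, 4)   # a batch of candidate design parameters
    response = model(designs)    # millisecond-scale prediction once trained
    print(response.shape)        # torch.Size([8, 64])

Once trained on simulated data, a model of this kind can also be searched over to suggest designs that match a desired outcome, which is the role the neural operators play in the design step.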

These AI breakthroughs are enabled by GPU acceleration. Generating the large training datasets needed for these models would take weeks or months with standard computing methods, but using the NVIDIA CUDA-Q platform, the team has achieved speed-ups of more than 100 times. The platform also makes it possible to simulate larger systems and push performance further with multi-GPU computing.
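For readers unfamiliar with the platform, the snippet below shows the general shape of CUDA-Q's Python interface: a quantum kernel defined as a decorated function and executed on a GPU-accelerated simulator backend. It is a minimal, generic example (preparing an entangled Bell state), not the group's simulation code.

    import cudaq

    # Select the GPU-accelerated simulator backend
    cudaq.set_target("nvidia")

    @cudaq.kernel
    def bell():
        qubits = cudaq.qvector(2)      # two qubits, initialised to |00>
        h(qubits[0])                   # put the first qubit in superposition
        x.ctrl(qubits[0], qubits[1])   # entangle with a controlled-NOT
        mz(qubits)                     # measure both qubits

    # Sample the kernel; outcomes cluster on 00 and 11
    result = cudaq.sample(bell, shots_count=1000)
    print(result)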

Taylor Lee Patti, senior research scientist at NVIDIA, said: ‘AI-assisted optimisation of traditional computer hardware is a ubiquitous and powerful practice. The use of CUDA-Q to unlock the same techniques for quantum computing hardware is key for developing lower noise and larger scale quantum devices.’

This integration of design, simulation and experiment is unusual, and points to a more automated and scalable approach to developing quantum hardware. Though still at an early stage, these are working methods rather than proposals: TRACS has been trained and benchmarked, the neural operators are already delivering rapid predictions, and the adaptive learning models have been shown to adapt from scarce data.

The team now aims to integrate these tools into control routines and to pair them with optimisation strategies, paving the way for more autonomous operation of future quantum processors. 'We are now extending this approach to more complex quantum device architectures,' said Professor Ares. 'Efficient and reliable tuning is essential to scale up quantum technologies.'

Read the blog Enabling Semiconductor Quantum Computing by Neural Operators, Transformers, and Meta-Learning online at https://eng.ox.ac.uk/case-studies/enabling-semiconductor-quantum-computing-by-neural-operators-transformers-and-meta-learning/

