Solving intelligence through research.

We’re working on some of the world’s most complex and interesting research challenges, with the ultimate goal of solving intelligence. To do this, we’ve developed a new way to organise research that combines the long-term thinking and interdisciplinary collaboration of academia with the relentless energy and focus of the very best technology start-ups. This approach is yielding rapid progress on a set of exceptionally tough scientific problems: our team has achieved two Nature front covers in under a year, received numerous awards, and published over 200 peer-reviewed papers. And we’re hiring!

Featured Publications

Nature 2017

Mastering the game of Go without Human Knowledge

Authors: D Silver, J Schrittwieser, K Simonyan, I Antonoglou, A Huang, A Guez, T Hubert, L Baker, M Lai, A Bolton, Y Chen, T Lillicrap, F Hui, L Sifre, G van den Driessche, T Graepel, D Hassabis

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here, we introduce an algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
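
To make the training loop described above concrete, the short Python sketch below plays a toy take-away game by the same recipe: a network stand-in proposes move priors and a value, a small search turns them into visit-count-based move probabilities, the program plays against itself, and the network is then trained to predict the search's move choices and the eventual winner. The toy game, the TabularNet class, and the rollout-based one-ply search are illustrative assumptions of ours, not AlphaGo Zero's actual architecture or Monte Carlo tree search.

# Minimal self-play training loop in the spirit of the abstract (toy game, not Go).
import math
import random
from collections import defaultdict

START_STONES = 9  # toy game: take 1 or 2 stones per turn; taking the last stone wins

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

class TabularNet:
    """Stand-in for the policy/value neural network: lookup tables, not a deep net."""
    def __init__(self):
        self.policy = defaultdict(lambda: {1: 0.5, 2: 0.5})  # state -> move priors
        self.value = defaultdict(float)                      # state -> expected outcome

    def predict(self, stones):
        priors = {m: self.policy[stones][m] for m in legal_moves(stones)}
        total = sum(priors.values())
        return {m: p / total for m, p in priors.items()}, self.value[stones]

    def train(self, examples, lr=0.2):
        # Targets: the search's move probabilities and the final game outcome
        # from the perspective of the player to move.
        for stones, search_probs, outcome in examples:
            for m, p in search_probs.items():
                self.policy[stones][m] += lr * (p - self.policy[stones][m])
            self.value[stones] += lr * (outcome - self.value[stones])

def search(net, stones, n_sim=50):
    """One-ply search guided by the network's priors; returns move probabilities
    proportional to visit counts, the quantity the network learns to predict."""
    priors, _ = net.predict(stones)
    counts = {m: 0 for m in priors}
    values = {m: 0.0 for m in priors}
    for _ in range(n_sim):
        total = sum(counts.values()) + 1
        m = max(priors, key=lambda a: values[a] / (counts[a] + 1)
                + 1.5 * priors[a] * math.sqrt(total) / (counts[a] + 1))
        s, mover_is_me = stones - m, True
        while s > 0:                       # random rollout to the end of the game
            s -= random.choice(legal_moves(s))
            mover_is_me = not mover_is_me
        counts[m] += 1
        values[m] += 1.0 if mover_is_me else -1.0
    visits = sum(counts.values())
    return {m: counts[m] / visits for m in priors}

def self_play(net):
    """Play one game against itself, recording (state, search probs, player to move)."""
    history, stones, player = [], START_STONES, 0
    while stones > 0:
        probs = search(net, stones)
        history.append((stones, probs, player))
        moves, weights = zip(*probs.items())
        stones -= random.choices(moves, weights=weights)[0]
        winner = player                    # whoever moved last took the last stone
        player = 1 - player
    return [(s, p, 1.0 if pl == winner else -1.0) for s, p, pl in history]

net = TabularNet()
for iteration in range(20):                # alternate self-play and training
    examples = []
    for _ in range(10):
        examples.extend(self_play(net))
    net.train(examples)
print(net.predict(START_STONES))           # learned priors and value at the opening state

Even in this toy setting, the structural point of the abstract is visible: the search output, not human data, supplies the training targets, so each iteration's stronger network yields stronger self-play for the next.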

arXiv 2016

WaveNet: A Generative Model for Raw Audio

Authors: A van den Oord, S Dieleman, H Zen, K Simonyan, O Vinyals, A Graves, N Kalchbrenner, A Senior, K Kavukcuoglu

This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
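
As a way to picture the autoregressive setup described above, the sketch below generates a waveform one quantised sample at a time, drawing each new sample from a distribution conditioned on everything generated so far. The predict_next_distribution function and its receptive-field constant are toy stand-ins of our own; in WaveNet that distribution is produced by a deep stack of dilated causal convolutions over the waveform history.

# Minimal sketch of sample-by-sample autoregressive audio generation.
import numpy as np

QUANT_LEVELS = 256          # WaveNet models 8-bit, mu-law-companded audio
RECEPTIVE_FIELD = 1024      # how much history the stand-in predictor looks at

def predict_next_distribution(history, rng):
    """Stand-in for the network: a categorical distribution over the next
    quantised sample given the history. A real WaveNet computes this with
    dilated causal convolutions, so the prediction at time t never sees
    samples at or after t."""
    context = history[-RECEPTIVE_FIELD:]
    last = context[-1] if context else QUANT_LEVELS // 2
    # Toy heuristic: prefer values near the most recent sample (smoothness).
    logits = -np.abs(np.arange(QUANT_LEVELS) - last) / 16.0
    logits += 0.1 * rng.standard_normal(QUANT_LEVELS)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(n_samples, seed=0):
    """Sample a waveform one step at a time, feeding each output back in."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        probs = predict_next_distribution(samples, rng)
        samples.append(int(rng.choice(QUANT_LEVELS, p=probs)))
    return np.array(samples, dtype=np.int64)

waveform = generate(16000)  # one second of audio at 16 kHz, as quantised integers
print(waveform[:10])

Note that generation is inherently sequential, because each sample must be fed back as input before the next one can be predicted; training, by contrast, can condition on the full recorded waveform at once.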


Nature 2016

Hybrid computing using a neural network with dynamic external memory

Authors: A Graves, G Wayne, M Reynolds, T Harley, I Danihelka, A Grabska-Barwinska, S Gomez Colmenarejo, E Grefenstette, T Ramalho, J Agapiou, A Puigdomènech, K M Hermann, Y Zwols, G Ostrovski, A Cain, H King, C Summerfield, P Blunsom, K Kavukcuoglu, D Hassabis

Nature 2016

Mastering the game of Go with Deep Neural Networks & Tree Search

Authors: D Silver, A Huang, C J Maddison, A Guez, L Sifre, G van den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, S Dieleman, D Grewe, J Nham, N Kalchbrenner, I Sutskever, T Graepel, T Lillicrap, M Leach, K Kavukcuoglu, D Hassabis

Nature 2015

Human-Level Control Through Deep Reinforcement Learning

Authors: V Mnih, K Kavukcuoglu, D Silver, A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A Fidjeland, G Ostrovski, S Petersen, C Beattie, A Sadik, I Antonoglou, H King, D Kumaran, D Wierstra, S Legg, D Hassabis

View All Publications

Working at DeepMind

We're looking for exceptional people.

Meet some of the team

Andreas Fidjeland

Andreas is our Head of Research Engineering and joined DeepMind in 2012. One of his earliest memories of DeepMind is having meetings on the 'meeting picnic blanket' in Russell Square, after the team had run out of space in the first office! Previously, Andreas was a postdoc at Imperial College London, working on spiking neural network simulations using GPUs in the Cognitive Robotics Lab. His team accelerates the research programme at DeepMind by providing the software used across all research projects, as well as working directly on research itself. Andreas’ main focus is making sure his team gets to work on interesting problems and that the research team functions smoothly and has the tools and support it needs. He says DeepMind is a "great collaborative environment and the best place to be at the forefront of developments in AI."

Raia Hadsell

Raia is a Senior Research Scientist working on deep learning at DeepMind, with a particular focus on solving robotics and navigation using deep neural networks. She joined DeepMind following positions at Carnegie Mellon and SRI International, as she saw the combination of research into games, neuroscience, deep learning and reinforcement learning as a unique proposition that could lead to fundamental breakthroughs in AI. She says one of her favourite moments at DeepMind was watching the livestream of Lee Sedol playing AlphaGo at 4am, surrounded by the rest of the team, despite the time difference!

Frederic Besse

Frederic joined as a Research Engineer in July 2015. Prior to DeepMind, he was a research engineer at The Foundry, a VFX software company. Frederic’s job is to accelerate research and to take the lead on the engineering side of projects. He mainly focuses on generative models, a family of models in the field of unsupervised learning. He describes his job as trying to teach a computer to process data like the human brain: “To dream and imagine things that it has never seen before. One way to achieve this is to show the computer a lot of data and let it figure out why things look like they do.” Frederic joined DeepMind to be part of our exciting and challenging mission to solve intelligence. His favourite DeepMind memory was watching the AlphaGo vs Lee Sedol match: “The suspense and atmosphere in the office was amazing.”