The VoxCeleb Speaker Recognition Challenge 2022
A large scale audio-visual dataset of human speech

Welcome to the 2022 VoxCeleb Speaker Recognition Challenge! The goal of this challenge is to probe how well current methods can recognize speakers from speech obtained 'in the wild'. The data is obtained from YouTube videos of celebrity interviews, as well as news shows, talk shows, and debates - consisting of audio from both professionally edited videos as well as more casual conversational audio in which background noise, laughter, and other artefacts are observed in a range of recording environments.

Timeline (Tentative)

This is the rough timeline of the challenge. We will post the exact dates as soon as possible.

  • Late June: Release of training data and development kit
  • Early July: Release of test data
  • Late July: Evaluation server opens
  • Early September: Deadline for submission of results; invitation to workshop speakers
  • September 23rd: Challenge workshop


VoxSRC-2022 will feature four tracks. Tracks 1, 2 and 3 are speaker verification tracks, where the task is to determine whether two samples of speech are from the same person. Track 4 is a speaker diarisation track, where the task is to break up multi-speaker audio into homogeneous single-speaker segments, effectively solving ‘who spoke when’. Details will be provided soon.

Track 1
Fully supervised speaker verification (closed)
Track 2
Fully supervised speaker verification (open)
Track 3
Semi-supervised speaker verification (closed)
  • In this track, the participants will be allowed to use a large set of unlabelled data and a small set of labelled data to train the speaker model. Details will be announced later.
Track 4
Speaker diarisation (open)
  • Participants are allowed to use any data except the challenge test data
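In the verification tracks, a system is typically scored on trial pairs by comparing fixed-dimensional speaker embeddings extracted from the two utterances. The sketch below illustrates this common scoring step only; the embedding extractor is not shown, and the `threshold` value is an illustrative placeholder, not anything specified by the challenge:

```python
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings (higher = more similar)."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))

def same_speaker(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Decide a verification trial: accept as 'same speaker' above the threshold.

    The threshold here is a hypothetical value; in practice it is tuned on
    development trials (e.g. to minimise EER or a detection cost).
    """
    return cosine_score(emb_a, emb_b) >= threshold
```

Submitted systems usually output the raw similarity score per trial pair, with thresholding applied only at evaluation time.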


The data will be provided soon.

Challenge registration

Details will be provided soon.

Previous Challenges

Details of the previous challenges can be found below. You can also find the slides and presentation videos of the winners on the workshop websites.

Challenge Links
VoxSRC-19 challenge / workshop
VoxSRC-20 challenge / workshop
VoxSRC-21 challenge / workshop


Note: the evaluation server is NOT active yet.


Andrew Brown, VGG, University of Oxford,
Jaesung Huh, VGG, University of Oxford,
Arsha Nagrani, Google Research,
Joon Son Chung, KAIST, South Korea,
Andrew Zisserman, VGG, University of Oxford,
Daniel Garcia-Romero, AWS AI,
Jee-Weon Jung, Naver Corporation, South Korea


Mitchell McLaren, Speech Technology and Research Laboratory, SRI International, CA,
Douglas A Reynolds, Lincoln Laboratory, MIT.

Please contact abrown[at]robots[dot]ox[dot]ac[dot]uk or jaesung[at]robots[dot]ox[dot]ac[dot]uk if you have any queries, or if you would be interested in sponsoring this challenge.


Will be determined soon