
The VoxCeleb Speaker Recognition Challenge 2023

Welcome to the 2023 VoxCeleb Speaker Recognition Challenge! The goal of this challenge is to probe how well current methods can recognize speakers from speech obtained 'in the wild'. The data is drawn from YouTube videos of celebrity interviews, as well as news shows, talk shows, and debates, spanning both professionally edited footage and more casual conversational audio in which background noise, laughter, and other artefacts occur across a range of recording environments.
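For illustration only (this is not the challenge baseline or any team's method): speaker verification systems commonly map each utterance to a fixed-dimensional speaker embedding and then compare two utterances with cosine similarity, thresholding the score to decide whether they come from the same speaker. The toy vectors below are made up; real embeddings typically have hundreds of dimensions.

```python
import math

def cosine_score(emb_a, emb_b):
    """Cosine similarity between two speaker embeddings."""
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(a * a for a in emb_a))
    norm_b = math.sqrt(sum(b * b for b in emb_b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (hypothetical values for illustration).
same_speaker = cosine_score([0.9, 0.1, 0.3, 0.2], [0.8, 0.2, 0.25, 0.3])
diff_speaker = cosine_score([0.9, 0.1, 0.3, 0.2], [0.1, 0.9, 0.2, 0.8])

# A higher score suggests the two utterances share a speaker; a system
# compares the score against a tuned threshold to accept or reject a trial.
print(same_speaker > diff_speaker)
```

In 'in the wild' conditions like those described above, the challenge is precisely that noise, laughter, and varied recording environments perturb these embeddings, pushing same-speaker scores down and different-speaker scores up.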

More details coming soon....



Previous Challenges

Details of the previous challenges can be found below. You can also find the slides and presentation videos of the winners on the workshop websites.

Challenge Links
VoxSRC-19 challenge / workshop
VoxSRC-20 challenge / workshop
VoxSRC-21 challenge / workshop
VoxSRC-22 challenge / workshop

Technical Description

All teams are required to submit a brief technical report describing their method. Reports must be a minimum of 1 page and a maximum of 4 pages, excluding references. Descriptions for multiple tracks may be combined into a single report. Reports must be written in English.

See here, here, and here for examples of reports.


Details coming soon....


Jaesung Huh, VGG, University of Oxford
Jee-weon Jung, Naver, South Korea
Andrew Brown, VGG, University of Oxford
Arsha Nagrani, Google Research
Joon Son Chung, KAIST, South Korea
Andrew Zisserman, VGG, University of Oxford
Daniel Garcia-Romero, AWS AI


Mitchell McLaren, Speech Technology and Research Laboratory, SRI International, CA
Douglas A. Reynolds, Lincoln Laboratory, MIT

Please contact jaesung[at]robots[dot]ox[dot]ac[dot]uk if you have any queries, or if you would be interested in sponsoring this challenge.


This work is supported by the EPSRC (Engineering and Physical Sciences Research Council) programme grant EP/T028572/1: Visual AI.