Challenge Results

We are pleased to announce the official results of the TidyVoice 2026 Challenge. We sincerely thank all 42 participating teams for their outstanding contributions to advancing cross-lingual speaker verification.

Deeper Analysis: We have prepared a detailed analysis for each team, available via the corresponding link in the "Deeper Analysis" column of the leaderboard below. You are welcome to use this analysis in your paper.

Ground Truth Release: The ground truth labels for speaker identity and language will be released in March 2026, after the Language Recognition Challenge at Odyssey 2026, which uses the same dataset. After that, the dataset will serve as a public benchmark for both Automatic Speaker Verification (ASV) and Language Identification (LID).

Call for System Descriptions: All participating teams are encouraged to submit a paper describing their system and approach through the Interspeech 2026 submission platform. Acceptance decisions are made by the reviewers through the official Interspeech review process.

During the submission process, in the "Subject Areas" section, please select: "14.15 TidyVoice Challenge: Cross-Lingual Speaker Verification".

Submission deadline: 25 February 2026.

If your paper is publicly available (e.g., on arXiv), we would be happy to link to it here for the community. Please send the link to: aref.farhadipour@uzh.ch


Team | tv26_eval-A EER (%) | tv26_eval-A minDCF | tv26_eval-U EER (%) | tv26_eval-U minDCF | Deeper Analysis | Paper
T01 1.39 0.097 1.95 0.058 Download
T02 2.21 0.180 2.99 0.205 Download
T03 2.43 0.175 2.84 0.189 Download
T04 2.46 0.206 4.45 0.288 Download
T05 2.52 0.189 3.43 0.224 Download
T06 2.53 0.190 3.40 0.196 Download
T07 2.61 0.195 3.39 0.220 Download
T08 2.74 0.253 2.86 0.281 Download
T09 3.56 0.272 5.57 0.367 Download
T10 3.64 0.236 9.16 0.359 Download
T11 3.70 0.278 6.41 0.329 Download
T12 3.92 0.296 5.59 0.391 Download
T13 4.29 0.285 5.82 0.332 Download
T14 4.31 0.330 5.22 0.370 Download
T15 4.34 0.290 6.13 0.347 Download
T16 4.75 0.306 6.30 0.390 Download
T17 4.81 0.350 7.01 0.422 Download
T18 5.07 0.363 9.89 0.440 Download
T19 5.10 0.335 7.74 0.567 Download
T20 5.49 0.434 7.24 0.421 Download
T21 5.50 0.349 7.24 0.448 Download
T22 5.94 0.384 8.84 0.545 Download
T23 6.12 0.410 9.47 0.624 Download
T24 6.64 0.502 6.30 0.445 Download
T25 6.68 0.498 6.03 0.423 Download
T26 7.02 0.524 9.84 0.525 Download
T27 8.35 0.402 9.79 0.566 Download
T28 8.40 0.649 12.15 0.630 Download
T29 8.96 0.544 10.62 0.525 Download
T30 9.06 0.658 11.60 0.607 Download
Baseline 9.06 0.658 11.60 0.607 Download Evaluation Plan
T31 9.18 0.573 10.71 0.539 Download
T32 9.27 0.512 11.28 0.508 Download
T33 10.47 0.508 11.06 0.729 Download
T34 10.54 0.728 11.18 0.710 Download
T35 12.43 0.702 13.68 0.765 Download
T36 13.99 0.860 15.22 0.865 Download
T37 15.03 0.731 18.41 0.717 Download
T38 15.05 0.700 19.24 0.804 Download
T39 15.51 0.674 17.93 0.739 Download
T40 20.15 0.993 20.91 0.970 Download
T41 21.25 0.999 22.36 0.996 Download
T42 21.44 0.987 23.03 0.974 Download

Teams are ranked by EER (%) on tv26_eval-A. The Baseline row shows the performance of the official baseline system.
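
For reference, below is a minimal NumPy sketch of how EER and minDCF are commonly computed from a list of trial scores. This is an illustration only: the function name is ours, and the DCF parameters (p_target, c_miss, c_fa) are placeholder defaults rather than the official challenge settings; please consult the Evaluation Plan for the exact operating point used in the leaderboard.

import numpy as np

def eer_and_min_dcf(scores, labels, p_target=0.01, c_miss=1.0, c_fa=1.0):
    """EER and minDCF from trial scores (sketch; DCF parameters are illustrative).

    scores: similarity scores, higher = more likely same speaker.
    labels: 1 for target trials, 0 for non-target trials.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_target = labels.sum()
    n_nontarget = len(labels) - n_target

    # Sweep every score as a candidate threshold: accept the top-k trials.
    order = np.argsort(-scores)
    sorted_labels = labels[order]
    p_fa = np.cumsum(1 - sorted_labels) / n_nontarget   # non-targets accepted
    p_miss = 1.0 - np.cumsum(sorted_labels) / n_target  # targets rejected

    # EER: operating point where miss and false-alarm rates cross
    # (simple nearest-point approximation, no interpolation).
    idx = np.argmin(np.abs(p_miss - p_fa))
    eer = (p_miss[idx] + p_fa[idx]) / 2.0

    # minDCF: minimum detection cost over all thresholds,
    # normalized NIST-style by the cost of the best trivial system.
    dcf = c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa
    min_dcf = dcf.min() / min(c_miss * p_target, c_fa * (1 - p_target))
    return eer, min_dcf

# Example: eer, min_dcf = eer_and_min_dcf(scores, labels)
# Report eer * 100 as EER (%), matching the leaderboard columns.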


Citation

If you use the TidyVoice dataset in your work, please cite the following:

@misc{farhadi2026tidyvoice,
  title={TidyVoice: A Curated Multilingual Dataset for Speaker Verification Derived from Common Voice},
  author={Aref Farhadipour and Jan Marquenie and Srikanth Madikeri and Eleanor Chodroff},
  year={2026},
  eprint={2601.16358},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2601.16358},
}