The objective of this study is to generate high-quality speech from silent talking face videos, a task also known as video-to-speech synthesis.
A significant challenge in video-to-speech synthesis lies in the substantial modality gap between silent video and multi-faceted speech.
In this paper, we propose a novel video-to-speech system that effectively bridges this modality gap, significantly enhancing the quality of synthesized speech.
This is achieved by learning hierarchical representations that bridge video and speech.
Specifically, we gradually transform silent video into acoustic feature spaces through three sequential stages: content, timbre, and prosody modeling.
In each stage, we align visual factors (lip movements, face identity, and facial expressions) with their corresponding acoustic counterparts to ensure a seamless transformation.
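To make the staged transformation concrete, the following is a minimal, hypothetical sketch of a content-timbre-prosody refinement pipeline in PyTorch. The module names, fusion scheme, and dimensions are illustrative assumptions, not the paper's actual architecture; it only shows how a running representation could be successively conditioned on lip motion, face identity, and facial expression before projection into an acoustic feature space.

```python
# Hypothetical sketch: hierarchical visual-to-acoustic refinement.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class Stage(nn.Module):
    """One refinement stage: fuses the running representation with a visual cue."""

    def __init__(self, dim: int, cue_dim: int):
        super().__init__()
        self.proj = nn.Linear(cue_dim, dim)
        self.mix = nn.GRU(dim, dim, batch_first=True)

    def forward(self, h: torch.Tensor, cue: torch.Tensor) -> torch.Tensor:
        # h:   (B, T, dim)     running video-to-speech representation
        # cue: (B, T, cue_dim) visual factor (lip motion / identity / expression)
        out, _ = self.mix(h + self.proj(cue))
        return out


class HierarchicalV2S(nn.Module):
    """Content -> timbre -> prosody refinement toward acoustic features."""

    def __init__(self, dim: int = 256, n_mels: int = 80):
        super().__init__()
        self.content = Stage(dim, cue_dim=dim)   # aligned with lip movements
        self.timbre = Stage(dim, cue_dim=dim)    # aligned with face identity
        self.prosody = Stage(dim, cue_dim=dim)   # aligned with facial expressions
        self.to_acoustic = nn.Linear(dim, n_mels)

    def forward(self, lips, identity, expression):
        h = self.content(torch.zeros_like(lips), lips)
        h = self.timbre(h, identity)
        h = self.prosody(h, expression)
        return self.to_acoustic(h)  # (B, T, n_mels) coarse acoustic features


# Usage example with random tensors standing in for per-frame visual features.
model = HierarchicalV2S()
lips = identity = expression = torch.randn(2, 50, 256)
mel = model(lips, identity, expression)  # shape: (2, 50, 80)
```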
Additionally, to generate realistic and coherent speech from the visual representations, we employ a flow matching model that estimates direct trajectories from a simple prior distribution to the target speech distribution.
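As an illustration of the flow matching idea (straight trajectories from a Gaussian prior to the target distribution), the sketch below shows a conditional flow matching training objective. The velocity network `v_theta`, the mel-spectrogram targets, and the conditioning interface are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a conditional flow matching objective with straight
# prior-to-data paths; an illustration of the general technique, not the
# paper's exact model.
import torch


def flow_matching_loss(v_theta, x1, cond):
    """v_theta: velocity network; x1: target features (B, T, D); cond: visual conditioning."""
    x0 = torch.randn_like(x1)                            # sample from the simple prior
    t = torch.rand(x1.size(0), 1, 1, device=x1.device)   # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                           # point on the straight path
    target_velocity = x1 - x0                            # constant velocity of that path
    pred = v_theta(xt, t.squeeze(-1).squeeze(-1), cond)  # predicted velocity at (xt, t)
    return torch.nn.functional.mse_loss(pred, target_velocity)
```

At inference time, speech features would be generated by integrating the learned velocity field from a Gaussian sample toward t = 1, for example with a small number of Euler steps.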
Extensive experiments demonstrate that our method achieves exceptional generation quality comparable to real utterances, outperforming existing methods by a significant margin.