Researchers at Korea University have developed a new speech synthesizer called HierSpeech++. The work aims to produce synthetic speech that is robust, expressive, natural, and human-like, without relying on a text-speech paired dataset, and to address the shortcomings of existing models. HierSpeech++ is designed to bridge the gap between semantic and acoustic representations in speech synthesis, ultimately improving style adaptation.
Until now, LLM-based zero-shot speech synthesis has suffered from limited robustness and slow inference. HierSpeech++ was developed to address these limitations and improve robustness and expressiveness. Using a text-to-vec framework that generates self-supervised speech representations and F0 (pitch) representations from text and prosody prompts, HierSpeech++ is shown to outperform both LLM-based and diffusion-based models. These gains in speed, robustness, and quality establish HierSpeech++ as a powerful zero-shot speech synthesizer.
HierSpeech++ uses a hierarchical framework for generating speech without prior training on the target speaker. A text-to-vec module produces self-supervised speech representations and F0 representations from text and prosody prompts. Speech is then synthesized by a hierarchical variational autoencoder conditioned on the generated vector, F0, and a voice prompt. The method also includes an efficient speech super-resolution framework. Comprehensive assessment uses various pre-trained models and implementations, with objective and subjective metrics including log-scale Mel error distance, perceptual evaluation of speech quality (PESQ), pitch, periodicity, voiced/unvoiced F1 score, naturalness mean opinion score, and voice similarity MOS.
HierSpeech++ achieves superior naturalness in zero-shot scenarios, with enhancements in robustness, expressiveness, and speaker similarity. Subjective metrics, naturalness mean opinion score and voice similarity MOS, were used to assess the naturalness of the speech, and the results showed HierSpeech++ even surpassing ground-truth speech. Incorporating a speech super-resolution framework from 16 kHz to 48 kHz further improved naturalness. Experimental results also demonstrated that the hierarchical variational autoencoder in HierSpeech++ outperforms LLM-based and diffusion-based models, making it a robust zero-shot speech synthesizer. Zero-shot text-to-speech synthesis with noisy prompts further validated the effectiveness of HierSpeech++ in generating speech from unseen speakers. The hierarchical synthesis framework also allows for versatile prosody and voice style transfer, making synthesized speech even more flexible.
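HierSpeech++'s super-resolution stage is a trained neural network, but the sample-rate arithmetic behind "16 kHz to 48 kHz" is easy to show with a naive stand-in: since 48,000 / 16,000 = 3, upsampling inserts two new samples between each pair of originals. The linear interpolation below is only an illustration of that ratio, not the paper's method:

```python
import numpy as np

sr_in, sr_out = 16_000, 48_000
factor = sr_out // sr_in                             # 48 kHz / 16 kHz = 3

# One second of a 220 Hz tone at 16 kHz.
t = np.linspace(0.0, 1.0, sr_in, endpoint=False)
wav_16k = 0.5 * np.sin(2 * np.pi * 220.0 * t)

# Naive super-resolution stand-in: linear interpolation onto a 3x denser grid.
x_new = np.arange(factor * len(wav_16k)) / factor
wav_48k = np.interp(x_new, np.arange(len(wav_16k)), wav_16k)

print(len(wav_16k), len(wav_48k))  # 16000 48000
```

A neural super-resolution model differs from this in that it hallucinates plausible high-frequency content above 8 kHz rather than merely interpolating, which is what makes the 48 kHz output sound fuller than the 16 kHz input.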
In conclusion, HierSpeech++ presents an efficient and potent framework for achieving human-level quality in zero-shot speech synthesis. By disentangling semantic modeling, speech synthesis, and super-resolution, and by facilitating prosody and voice style transfer, it makes synthesized speech more flexible. The system demonstrates improvements in robustness, expressiveness, naturalness, and speaker similarity even with a small-scale dataset, and offers significantly faster inference. The study also explores potential extensions to cross-lingual and emotion-controllable speech synthesis.
Check out the Paper, Project, and GitHub. All credit for this research goes to the researchers of this project.
Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.