AI Multi-Speaker Lip-Sync Check


Introducing AI Multi-Speaker Lip-Sync: The Latest Advancement in AI Technology


Rask AI, a company specializing in video and audio localization tools, has rolled out its new Multi-Speaker Lip-Sync feature. This AI-driven lip-syncing lets the platform’s 750,000 users translate videos into over 130 languages while making speakers look and sound as natural as native speakers. Previously, dubbed content often had mismatched lip movements and voices, which many experts believe is why dubbing never caught on in English-speaking countries. The new technology makes localized content feel more realistic and engaging.

Research by linguistics expert Professor Yukari Hirata has shown that watching lip movements can significantly help in understanding difficult sounds in a second language. It’s also a key part of how we generally learn to speak.

Rask’s new feature takes this to the next level. The AI smartly adjusts the lower part of the face in videos, considering the speaker’s appearance and what they’re saying, to make everything look more natural.

How AI Multi-Speaker Lip-Sync works:
  1. Upload a video with one or more people speaking.
  2. Translate the audio into a different language.
  3. Use the ‘Lip Sync Check’ feature to see if the video is suitable for lip-syncing.
  4. If it passes, hit ‘Lip Sync’ and wait for the process to complete.
  5. Download your updated video.
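The five steps above can be sketched as a small pipeline. Everything here is an illustrative stand-in: the function name `lip_sync_pipeline`, the `faces_visible` field, and the status values are hypothetical, not Rask AI's actual API.

```python
# Hypothetical sketch of the Lip-Sync workflow described above.
# Names and fields are illustrative, not Rask AI's real interface.

def lip_sync_pipeline(video, target_language):
    # Step 2: translate the audio into the target language.
    translated = dict(video, language=target_language)

    # Step 3: the "Lip Sync Check" — here modeled as a simple
    # suitability flag on the uploaded video.
    if not translated.get("faces_visible", False):
        return {"status": "rejected", "reason": "faces not clearly visible"}

    # Step 4: run lip sync on the translated video.
    synced = dict(translated, lip_synced=True)

    # Step 5: the finished video is ready to download.
    return {"status": "done", "video": synced}


# A video that passes the check is translated and lip-synced…
result = lip_sync_pipeline(
    {"title": "demo", "language": "en", "faces_visible": True}, "es")

# …while one that fails the check is rejected before processing.
rejected = lip_sync_pipeline(
    {"title": "offscreen", "language": "en", "faces_visible": False}, "fr")
```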

Maria Chmir, the founder and CEO of Rask AI, says this feature will help content creators reach a wider audience. The AI adjusts the lip movements so the characters seem to be fluently speaking the new language.

This technology is powered by a generative adversarial network (GAN), which pairs a content generator with a quality-checking discriminator. The two networks are trained against each other, continually improving the output quality.
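As a rough illustration of that adversarial setup, here is a minimal toy GAN in pure Python: a tiny generator learns to produce numbers near a "real" value of 5.0, while a logistic discriminator learns to tell real samples from generated ones. All parameters and values are illustrative; Rask's actual model operates on video frames, not scalars.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples clustered around 5.0.
def real_sample():
    return 5.0 + random.gauss(0.0, 0.1)

# Generator G(z) = a*z + b, fed noise z ~ U(-1, 1).
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.0, 0.0
lr_d, lr_g = 0.05, 0.01

for _ in range(4000):
    z = random.uniform(-1.0, 1.0)
    fake = a * z + b
    real = real_sample()

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real samples from fakes.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr_d * ((1.0 - d_real) * real - d_fake * fake)
    c += lr_d * ((1.0 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (non-saturating loss),
    # i.e. get better at fooling the discriminator.
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g = (1.0 - d_fake) * w   # gradient of log D(fake) w.r.t. fake
    a += lr_g * g * z
    b += lr_g * g
```

After training, the generator's offset `b` has been pushed toward the real value 5.0, even though it was never shown that target directly — only the discriminator's feedback. This tug-of-war between the two networks is the mechanism that drives output quality up over time.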

The beta version of this feature is now available to all Rask subscription customers.
