Google’s prototype AI translator translates your tone as well as your words

We all know that communication depends on more than just what you say. How you say it is often just as important. That’s why Google’s latest prototype AI translator doesn’t just translate the words coming out of your mouth, but also the tone and cadence of your voice.

The system is called Translatotron, and Google’s researchers go into detail on how it works in a recent blog post. They don’t say that Translatotron will be coming to commercial products any time soon, but that will probably happen in time. As Google’s head of translation explained to The Verge earlier this year, the company’s goal at the moment is to add more nuance to its translation tools, creating more lifelike speech.

You can hear what this sounds like in the audio samples below. The first clip is the input; the second is the basic translation; and the third tries to capture the original speaker’s voice.

[Audio samples]
- Input (Spanish)
- Translatotron translation
- Translatotron translation with inflection

As you can hear, it’s not a seamless translation, but it’s impressive nonetheless. You can listen to many more audio samples from Translatotron here.

Although capturing the inflection of a speaker’s voice is what impresses laypeople most, Translatotron’s attraction for AI engineers is that it translates speech directly from audio input to audio output, without converting it to text as an intermediate step.

This sort of AI model is known as an end-to-end system, because there are no stops for subsidiary tasks or actions. Google says making translation end-to-end produces results faster while avoiding the risk of introducing errors during multiple translation steps.
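To make the contrast concrete, here is a toy sketch in Python. The function names are illustrative stand-ins, not Google’s actual models or API: a cascaded system chains three separate models (speech recognition, text translation, speech synthesis), and each hop is a chance to introduce errors and lose vocal character, while an end-to-end system maps source speech to target speech in a single step.

```python
# Toy stubs standing in for real models; all names here are hypothetical.
def speech_to_text(audio):
    # Stage 1 (ASR): speech is flattened to text, discarding tone and cadence.
    return "hola mundo"

def translate_text(text):
    # Stage 2 (MT): text-to-text translation.
    return {"hola mundo": "hello world"}[text]

def text_to_speech(text):
    # Stage 3 (TTS): synthesize speech in a generic voice.
    return f"<audio:{text}>"

def cascade(audio):
    # Three hops, three chances to introduce errors.
    return text_to_speech(translate_text(speech_to_text(audio)))

def end_to_end(audio):
    # One model maps source speech directly to target speech
    # (spectrogram in, spectrogram out in Translatotron's case),
    # so vocal characteristics can survive the mapping. Stubbed here.
    return "<audio:hello world>"

print(cascade("<audio:hola mundo>"))     # <audio:hello world>
print(end_to_end("<audio:hola mundo>"))  # <audio:hello world>
```

Both toy pipelines land on the same output here, but the structural point is the number of intermediate representations: the cascade has two (source text, target text), the end-to-end model has none.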

Perhaps even more interestingly, the data the model is processing isn’t raw audio. Instead, it uses spectrogram data, or detailed visualizations of sound. In essence, that means we’re translating speech from one language to another using pictures, which is mind-boggling.
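For readers curious what that "picture of sound" is, here is a minimal sketch of computing a magnitude spectrogram with NumPy. This is a generic short-time Fourier transform, not Google’s actual preprocessing; frame and hop sizes are arbitrary choices for illustration.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: slice the waveform into overlapping
    windowed frames, then take the magnitude of each frame's FFT.
    The result is a 2-D array (time frames x frequency bins) --
    effectively an image of the sound."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz tone sampled at 8 kHz: its energy should
# concentrate in the frequency bin nearest 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)                  # (61, 129): 61 time frames, 129 freq bins
print(spec.mean(axis=0).argmax())  # 14, i.e. the bin nearest 440 Hz
```

A model like Translatotron consumes and emits arrays of this shape, treating translation as a mapping from one such "image" to another; a vocoder then turns the output spectrogram back into a waveform.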

As ever with Google’s translation efforts, there’s reason to be skeptical about how systems like this will work in the wild. The company often unveils ambitious new speech and translation tools, and they often perform less fluidly than we’d hope. Still: the future marches on, and AI translation is only getting better.

This article is from The Verge
