VALL-E's quickie voice deepfakes should worry you, if you weren't worried already

The emergence in the last week of a particularly effective voice synthesis machine learning model called VALL-E has prompted a new wave of concern over the possibility of deepfake voices made quick and easy — quickfakes, if you will. But VALL-E is more iterative than breakthrough, and the capabilities aren’t so new as you might think. Whether that means you should be more or less worried is up to you.

Voice replication has been a subject of intense research for years, and the results have been good enough to power plenty of startups, like WellSaid, Papercup, and Respeecher. The latter is even being used to create authorized voice reproductions of actors like James Earl Jones. Yes: from now on Darth Vader will be AI generated.

VALL-E, posted on GitHub by its creators at Microsoft last week, is a “neural codec language model” that uses a different approach to rendering voices than many before it. Its larger training corpus and some new methods allow it to create “high quality personalized speech” using just 3 seconds of audio from a target speaker.

That is to say, all you need is an extremely short clip like the following (all clips from Microsoft’s paper):

To produce a synthetic voice that sounds remarkably similar:

As you can hear, it maintains tone, timbre, a semblance of accent, and even the “acoustic environment,” for instance a voice compressed into a cell phone call. I didn’t bother labeling them because you can easily tell which of the above is which. It’s quite impressive!

So impressive, in fact, that this particular model seems to have pierced the hide of the research community and “gone mainstream.” As I got a drink at my local last night, the bartender emphatically described the new AI menace of voice synthesis. That’s how I know I misjudged the zeitgeist.

But look back a bit: as early as 2017, all you needed was a minute of someone's voice to produce a fake convincing enough to pass in casual use. And that was far from the only project.

The improvement we’ve seen in image-generating models like DALL-E 2 and Stable Diffusion, or in language ones like ChatGPT, has been a transformative, qualitative one: a year or two ago this level of detailed, convincing AI-generated content was impossible. The worry (and panic) around these models is understandable and justified.

Contrariwise, the improvement offered by VALL-E is quantitative, not qualitative. Bad actors interested in proliferating fake voice content could have done so long ago, just at greater computational cost, and compute is not particularly difficult to come by these days. State-sponsored actors in particular would have plenty of resources at hand for the kind of compute jobs necessary to, say, create a fake audio clip of the President saying something damaging on a hot mic.

I chatted with James Betker, an engineer who worked for a while on another text-to-speech system, called Tortoise-TTS.

Betker said that VALL-E is indeed iterative, and like other popular models these days gets its strength from its size.

“It’s a large model, like ChatGPT or Stable Diffusion; it has some inherent understanding of how speech is formed by humans. You can then fine-tune Tortoise and other models on specific speakers, and it makes them really, really good. Not ‘kind of sounds like’ good,” he explained.

When you “fine tune” Stable Diffusion on a particular artist’s work, you’re not retraining the whole enormous model (that takes a lot more power), but you can still vastly improve its capability of replicating that content.
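The arithmetic behind that point can be sketched in a few lines. In this toy example (the layer names and parameter counts are invented for illustration, not taken from any real model), a frozen backbone holds nearly all of the weights and only a small per-speaker adapter is actually updated:

```python
# Hypothetical illustration, not actual VALL-E or Tortoise-TTS code: the
# point is that fine-tuning updates only a small adapter on top of a
# frozen base model, a fraction of the cost of full retraining.

# Toy parameter inventory, loosely modeled on a large generative model.
model = {
    "backbone":        {"params": 900_000_000, "trainable": False},  # frozen pretrained weights
    "speaker_adapter": {"params": 5_000_000,   "trainable": True},   # small per-speaker layer
}

def trainable_fraction(layers):
    """Share of the weights that fine-tuning actually updates."""
    total = sum(layer["params"] for layer in layers.values())
    tuned = sum(layer["params"] for layer in layers.values() if layer["trainable"])
    return tuned / total

print(f"fine-tuning touches {trainable_fraction(model):.2%} of the weights")
# well under one percent of the model
```

Real fine-tuning setups vary, but the ratio is the takeaway: updating a sliver of the weights is enough to nail a specific speaker.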

But just because it’s familiar doesn’t mean it should be dismissed, Betker clarified.

“I’m glad it’s getting some traction, because I really want people to be talking about this. I actually feel that speech is somewhat sacred, the way our culture thinks about it,” he said; in fact, he stopped working on his own model as a result of these concerns. A fake Dalí created by DALL-E 2 doesn’t have the same visceral effect on people as hearing something in their own voice, that of a loved one, or of someone admired.

VALL-E moves us one step closer to ubiquity, and although it is not the type of model you can run on your phone or home computer, that isn’t too far off, Betker speculated: a few years, perhaps, before you can run something like it yourself. As an example, he sent a clip of Samuel L. Jackson’s voice that he had generated on his own PC using Tortoise-TTS, based on audiobook readings of his:

Good, right? And a few years ago you might have been able to accomplish something similar, albeit with greater effort.

This is all just to say that while VALL-E and the 3-second quickfake are definitely notable, they’re a single step on a long road researchers have been walking for over a decade.

The threat has existed for years, and if anyone had cared to replicate your voice, they could easily have done so long ago. That doesn’t make it any less disturbing to think about, and there’s nothing wrong with being creeped out by it. I am too!

But the benefits to malicious actors are dubious. Petty scams that use a passable quickfake based on a wrong-number call, for instance, are already super easy because security practices at many companies are already lax. Identity theft doesn’t need to rely on voice replication because there are so many easier paths to money and access.

Meanwhile the benefits are potentially huge — think about people who lose the ability to speak due to an illness or accident. These things happen quickly enough that they don’t have time to record an hour of speech to train a model on (not that this capability is widely available, though it could have been years ago). But with something like VALL-E, all you’d need is a couple clips off someone’s phone of them making a toast at dinner or talking with a friend.

There’s always opportunity for scams and impersonation and all that, though more people are parted from their money and identities via far more prosaic means, like a simple phone or phishing scam. The potential of this technology is huge, but we should also listen to our collective gut, which says there’s something dangerous here. Just don’t panic. Yet.
