An AI company has built a fake Joe Rogan that has to be heard to be believed
By now, all of us should be well and truly aware that we live in a horrifying sci-fi dystopia. But judging by this recent bit of news, we’ve probably all been picturing just how AI might end up shafting us all wrong. Yeah, nah, it looks like it won’t f**k us over by controlling our machines and having our kitchen appliances whisk, grill and blend us to death. That’s because it doesn’t need to. It turns out it can just impersonate voices… That way, it can give orders to whoever it needs to and bring about the end of the world as we know it…
Yeah, nah, this little bit of AI f**kery is a pretty quick and simple demonstration of how Orwell didn’t quite get it right. Big Brother won’t only be watching us. He’ll be f**ken chatting to us under the guise of our lawmakers, politicians and loved ones.
Anyway, getting to the aforementioned technology, Canada-based company Dessa has created an AI that mimics the voice of Joe Rogan. Obviously, most of you blokes and blokettes know who Rogan is, but if you don’t, he’s the guy who gave Elon Musk his first hit of whacky tobaccy.
He also has a podcast and various stand-up shows. He’s a known quantity – and one of the things he’s known for is his voice. With that in mind, it should be pretty easy to pick the real Rogan’s voice, right? Well, the Dessa technology, called RealTalk, is a pretty f**ken kickass audio deepfake.
The script it speaks is pretty humorous, but we’ll let you judge that side of things for yourself. Once you’ve thought about the potential nefarious uses of this s**t, we’ll give you some peace of mind. It turns out we don’t have to stress about the implications of this just yet. Dessa reckon they won’t be releasing the technology because it’s too dangerous.
Alex Krizhevsky, Dessa’s principal machine learning architect, reckons, “[It’s] one of the coolest, but scariest, things I’ve seen yet in artificial intelligence. Unlike The Singularity, which is this theoretical thing that could happen in 40, 100 years, speech synthesis is soon going to be a reality everywhere.”
Final thought: The whole concept of audio deepfakes is potentially troubling for a few reasons, but just imagine if this technology was used to impersonate someone like a Prime Minister, or a Government official. F**ken hell.
Just in case you missed it, here’s one of Ozzy’s latest commentary videos…Ozzy Man Reviews: Window Battle
Video Link: RealTalk
Video Link: Joe Rogan