AI voice deepfakes pose growing scam threat: how to protect yourself

Artificial intelligence is advancing rapidly, and soon it will be “nearly impossible” to distinguish fake voices from real ones, experts have warned. AI voice “deepfakes” are becoming so convincing that scammers could exploit the technology to impersonate loved ones and request money.

Security expert Paul Bischoff explained that AI can clone a voice in seconds, making it possible for criminals to pose as a spouse or friend over the phone. Detecting AI in text or voice conversations will only get harder, he warned, but using safe phrases and unexpected prompts during live conversations can help expose fakes. Pre-recorded messages and voicemails, however, are far harder to tell apart from AI-generated voices.

Jamie Beckland, chief product officer at APIContext, emphasized that legitimate companies using AI will inform users, but scammers won’t. To protect yourself, he suggested switching from a voice call to a video call when something seems suspicious, as deepfake tools struggle with real-time video. Engaging in casual conversation and trusting your instincts can also help reveal scams.