This free Google Chrome plugin provides accurate deepfake voice detection to quickly spot manipulated audio

  • Deepfake detection tool available as Google Chrome browser extension
  • Provides a ‘deepfake score’ indicating whether media is authentic
  • Hiya claims its service can detect issues with only one second of audio

One of the most concerning developments in AI technologies is the rise of deepfakes, highly realistic audio and video forgeries able to mislead audiences and disrupt businesses.

As deepfake tools become more accessible, the need for reliable ways to detect them grows, especially for professionals relying on accurate information to make critical decisions.

Hiya has introduced its Deepfake Voice Detector, a free extension for Google Chrome that identifies manipulated audio and video content in seconds, making it easy for users to spot suspicious media.

The fight against deepfakes intensifies

By integrating AI-powered detection capabilities directly into the browser, Hiya claims its tool offers a practical solution for businesses, journalists, and individuals navigating an increasingly complex information landscape.

The Deepfake Voice Detector harnesses the power of AI to identify manipulated audio and video with up to 99% accuracy. The tool analyzes voice patterns within online content and provides results in just seconds, giving users a fast way to evaluate suspicious material, regardless of the audio channel or language.

With the ability to analyze as little as one second of audio, the tool provides real-time detection and multi-language support. Once a clip has been analyzed, the extension gives an authenticity score ranging from 0 to 100, with 100 indicating a genuine voice and 0 signaling a deepfake.
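
Hiya hasn't published a developer API for the extension, so purely as an illustration, the short TypeScript sketch below shows how an authenticity score on that 0 to 100 scale might be translated into a plain-language verdict. The function name and the thresholds are assumptions made for this example, not values from Hiya.

```typescript
// Illustrative only: interprets a hypothetical 0-100 authenticity score
// as described above (100 = genuine voice, 0 = deepfake). Not Hiya's API.
type Verdict = "likely genuine" | "uncertain" | "likely deepfake";

// The cut-off values are arbitrary assumptions for the sketch, not published figures.
function interpretScore(score: number): Verdict {
  if (score < 0 || score > 100) {
    throw new RangeError("Authenticity score must be between 0 and 100");
  }
  if (score >= 70) return "likely genuine";
  if (score <= 30) return "likely deepfake";
  return "uncertain";
}

// Example: a clip scoring 12 would be flagged as a probable deepfake.
console.log(interpretScore(12)); // "likely deepfake"
console.log(interpretScore(85)); // "likely genuine"
```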

The browser plugin is designed for use on social media and news platforms, helping newsrooms and businesses validate the content they rely on.

Several media and fact-checking organizations, including AFP Fact Check, RTVE.es, the Deepfake Analysis Unit, and TrueMedia.org, already rely on Hiya’s solution. Brad Smith, Microsoft’s vice chair and president, also recently praised the tool, calling it an excellent example of “using good AI to combat bad AI.”

“Deepfake scams can lead employees to share confidential company information or expose critical IT system passwords,” said Kush Parikh, President at Hiya. “The consequences of falling for these scams are immense, especially as vishing is increasingly used with deepfakes to extort or blackmail individuals.”
