Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition, a new startup is tackling AI impersonation…legal AI startup Harvey raised $160 million at an $8 billion valuation…VC ‘kingmaking’ is happening earlier than ever with AI startups…why AI writes like that…Microsoft lowers sales staff’s growth targets for newer AI software.

A year ago, I
spoke to several cybersecurity leaders at companies like SoftBank and
Mastercard who were already sounding alarms about AI-powered impersonation threats, including deepfakes and voice clones. They warned that fraud would evolve quickly: The first wave of scams was about scammers using deepfakes to pretend to be someone you know. But attackers would soon begin using AI-generated video and audio to impersonate strangers in trusted roles, such as a help-desk rep from your bank or an IT administrator at work.
A year later, this is exactly what’s happening: The Identity Theft Resource Center reported a 148% surge in impersonation scams between April 2024 and March 2025, driven by scammers spinning up fake business websites, deploying lifelike AI chatbots, and generating voice agents that sound indistinguishable from real company representatives. In 2024 alone, the Federal Trade Commission recorded $2.95 billion in losses tied to impersonation scams.
Now, a new startup is stepping directly into the breach: imper.ai aims to stop AI impersonation attacks in real time, and today announced its public launch and $28 million in new funding. Redpoint Ventures and Battery Ventures led the investment round, with participation from Maple VC, Vessy VC, and Cerca Partners.
Instead of trying to spot visual or audio anomalies—an approach that is rapidly becoming almost impossible—imper.ai says it analyzes the digital breadcrumbs attackers can’t fake. These include device telemetry (the background data your device gives off, like location, operating system, hardware details, and network behavior), network diagnostics, and environmental signals. Its platform runs silently across systems including Zoom, Teams, Slack, WhatsApp, Google Workspace, and IT help-desk environments, flagging risky sessions before a human ever gets deceived.
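For readers who want a concrete picture of what metadata-based screening might look like, here is a minimal sketch in Python. To be clear, the signal names, weights, and threshold are illustrative assumptions on my part, not imper.ai's actual system; the point is simply that the scoring looks at where and how a session originates rather than at the audio or video itself.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Hypothetical environmental metadata that is hard for an attacker to forge consistently
    geo_country: str          # where the connection originates
    asn_type: str             # "residential", "datacenter", "vpn", ...
    device_fingerprint: str   # stable hardware/OS fingerprint
    known_device: bool        # has this fingerprint been seen for this identity before?
    os_matches_profile: bool  # does the OS match the claimed employee's managed device?

def risk_score(s: SessionSignals, expected_country: str) -> float:
    """Toy risk score in [0, 1] built from session metadata, not message content."""
    score = 0.0
    if s.geo_country != expected_country:
        score += 0.3
    if s.asn_type in ("datacenter", "vpn"):
        score += 0.3
    if not s.known_device:
        score += 0.2
    if not s.os_matches_profile:
        score += 0.2
    return min(score, 1.0)

# Example: a "help-desk" caller joining from an unfamiliar datacenter IP on an unknown device
session = SessionSignals("RO", "datacenter", "fp-9a1c",
                         known_device=False, os_matches_profile=False)
if risk_score(session, expected_country="US") >= 0.7:
    print("Flag session for review before the user is asked to share anything")
```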
CEO Noam Awadish, a veteran of autonomous-driving pioneer Mobileye and a longtime member of Israel’s 8200 cyberwarfare unit, said AI has supercharged classic social-engineering tactics—the kind of attacks that manipulate people into giving up sensitive information or approving actions that compromise security. Whether through impersonation, fake urgency, or psychological pressure, attackers are increasingly using AI to trick victims into revealing passwords, financial details, or remote access.
A recent example is Jaguar Land Rover. Last month, hackers carried out coordinated phishing and “vishing” (voice-phishing) campaigns, impersonating JLR’s IT support staff to harvest credentials and gain access. The attack forced the automaker to shut down critical IT systems and ultimately its production lines, resulting in estimated losses of $1.5 billion so far.
Imper.ai’s founders, Awadish and fellow 8200 veterans Anatoly Blighovsky and Rom Dudkiewicz, believe their background as both cyber attackers and defenders gives them an edge. “I think that people don’t understand that most of the major breaches start with social engineering,” Awadish told me, adding that AI is a game changer because emails, videos, and voice clones have become almost perfect.
In addition, he pointed out that collaboration tools have multiplied far beyond email and phone calls. Now attackers have dozens of communication channels to exploit, and AI lets them generate “spear-phishing” messages (personalized phishing emails), cloned voices, and deepfake videos at massive scale and speed.
That’s why imper.ai avoids trying to out-detect AI impersonation directly from the AI-generated content itself. “We don’t want to get into an AI arms race,” Awadish said. Instead, the startup focuses on what attackers cannot fake—mostly metadata.
As the company’s traction has accelerated, so has investor interest. “We want to build a platform that safeguards the entire communication space,” Awadish said. “It’s not something small, it’s not like a plugin that one of the giants is going to build.” With the new funding, he said that the company can double its R&D headcount and triple its go-to-market organization in the US.
“At the moment, there is really high traction, so we need to keep up with the pace, so we need to grow,” he said.
Note: I am super-excited to be headed to San Francisco for Fortune Brainstorm AI on Monday and Tuesday! I’ll be interviewing Prakhar Mehrotra, SVP and global head of AI at PayPal, and Marc Hamilton, VP of solutions architecture and engineering at Nvidia, on the main stage. I’ll also be moderating a spicy roundtable session all about AI data centers. Plus, I’m looking forward to seeing some of the other speakers, including actor Joseph Gordon-Levitt, OpenAI COO Brad Lightcap, and Ali Ghodsi, CEO of Databricks. And with that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman