
Google introduces Gemini Nano-powered scam detection in Chrome 137
Google is stepping up its fight against online scams with a new layer of protection in Chrome 137. Using on-device artificial intelligence, specifically the Gemini Nano large language model (LLM), Chrome can now detect and block tech support scams more effectively.
What are tech support scams?
Tech support scams are a type of online fraud where users are tricked into believing their computer has a serious problem. Scammers typically display fake pop-up warnings that mimic legitimate security alerts. These messages often create panic by using full-screen takeovers or disabling keyboard and mouse input. The goal is to convince the victim to pay for unnecessary services or software—or worse, to grant remote access to their device.
How does Chrome 137 fight back?
With the release of Chrome 137, Google is using its on-device Gemini Nano LLM to help identify suspicious behavior in real time. When a user lands on a potentially dangerous page, Chrome triggers the Gemini Nano model to analyze the content of that page. This AI-powered analysis looks for known signals of tech support scams, such as use of the Keyboard Lock API.
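For context, the Keyboard Lock API is a real, Chromium-only web API that lets a fullscreen page capture keys such as Escape. The sketch below shows the full-screen takeover pattern that abuses it; the page logic around the API call is illustrative, not taken from any actual scam page.

```typescript
// Sketch of the full-screen + keyboard-lock pattern tech support scam
// pages abuse, and one of the signals Chrome's on-device model looks for.
// The Keyboard Lock API (navigator.keyboard.lock) is real but Chromium-only
// and experimental; the surrounding page logic is illustrative.

// Keyboard Lock is not yet in TypeScript's standard DOM typings,
// so access it through a narrow cast.
type KeyboardLock = { lock(keyCodes?: string[]): Promise<void>; unlock(): void };

function wireFakeAlert(button: HTMLElement): void {
  // Fullscreen requires a user gesture, so scam pages attach the takeover
  // to a click on something like a fake "Scan now" button.
  button.addEventListener("click", async () => {
    // Step 1: go full screen so the fake alert hides the browser UI.
    await document.documentElement.requestFullscreen();

    // Step 2: lock Escape so the user cannot easily exit fullscreen.
    // Keyboard lock only takes effect while the page is fullscreen.
    const keyboard = (navigator as Navigator & { keyboard?: KeyboardLock }).keyboard;
    await keyboard?.lock(["Escape"]);
  });
}
```

A page combining a fullscreen request with a keyboard lock and alarming "security warning" content is a strong hint of the scam pattern described above.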
The LLM determines the intent of the page and generates security signals. These signals are sent to Google’s Safe Browsing service, which uses them along with other data to assess the threat level. If Safe Browsing flags the page as likely malicious, Chrome will immediately display a warning to the user.
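Chrome's internal interfaces are not public, so the following is only a hypothetical sketch of the flow just described; runGeminiNano, safeBrowsingVerdict, and the signal shape are assumed names, not real Chrome APIs.

```typescript
// Hypothetical sketch of the detection flow Google describes.
// None of these types or functions are Chrome's real internals;
// they only mirror the published sequence of steps.

interface ScamSignals {
  intent: "tech_support_scam" | "benign" | "unclear"; // LLM's judgment of page intent
  usesKeyboardLock: boolean;                          // known scam signal
}

// Assumed helpers, standing in for the on-device Gemini Nano call
// and the Safe Browsing lookup respectively.
declare function runGeminiNano(pageContent: string): Promise<ScamSignals>;
declare function safeBrowsingVerdict(
  url: string,
  signals: ScamSignals
): Promise<"malicious" | "safe">;

async function checkPage(url: string, pageContent: string): Promise<void> {
  // Step 1: the on-device LLM analyzes the rendered page and emits signals.
  const signals = await runGeminiNano(pageContent);

  // Step 2: the signals go to Safe Browsing, which combines them with other data.
  const verdict = await safeBrowsingVerdict(url, signals);

  // Step 3: if Safe Browsing flags the page, show a warning to the user.
  if (verdict === "malicious") {
    showWarningInterstitial(url);
  }
}

function showWarningInterstitial(url: string): void {
  console.warn(`Blocked likely tech support scam: ${url}`);
}
```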
Why use on-device AI?
The decision to use AI locally on the user’s device is crucial for both speed and privacy. Malicious websites often exist for just a few minutes, making it difficult for traditional web crawlers to catch them in time. On-device models can respond instantly — seeing threats as users see them.
Moreover, scammers often serve a site's content differently depending on who is visiting, a cloaking technique that helps them evade automated scanning tools. By analyzing pages on-device, Chrome evaluates them exactly as they are presented to real users.
Privacy and performance considerations
Despite the complexity of AI analysis, Google says performance won’t take a hit. The LLM runs asynchronously and is triggered only under specific conditions. Resource usage is managed through token limits, GPU throttling, and quota controls.
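Google has not published the exact throttling mechanics, but the pattern it describes (asynchronous triggering, a token cap, usage quotas) might look roughly like the hypothetical sketch below; the constants and the runGeminiNano helper are invented for illustration.

```typescript
// Hypothetical sketch of the resource guards Google mentions: a token
// cap on LLM input and a quota on how often the model may run.
// The constants and structure are illustrative, not Chrome's real values.

const MAX_INPUT_TOKENS = 1024; // illustrative token limit
const MAX_RUNS_PER_HOUR = 10;  // illustrative quota

let runsThisHour = 0;

declare function runGeminiNano(content: string): Promise<unknown>;

async function maybeAnalyze(pageContent: string): Promise<unknown | null> {
  // Quota control: skip analysis once the hourly budget is spent.
  if (runsThisHour >= MAX_RUNS_PER_HOUR) return null;
  runsThisHour++;

  // Token limit: truncate the input so the on-device model stays cheap.
  // (Real tokenization is model-specific; whitespace splitting is a stand-in.)
  const tokens = pageContent.split(/\s+/).slice(0, MAX_INPUT_TOKENS);

  // Asynchronous trigger: the analysis runs off the critical path,
  // so page loading is never blocked on the model.
  return runGeminiNano(tokens.join(" "));
}
```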
Importantly, the system only shares LLM-generated signals with Google’s Safe Browsing service if users have opted into Enhanced Protection mode. Users on Standard Protection won’t share this data but will still benefit as Chrome updates its blocklists based on new threats detected by others.
What’s next?
This new LLM-powered feature is just the beginning. Google plans to expand this technology to identify other scam types, including package tracking scams and unpaid toll scams. Later this year, the same protections will roll out to Chrome on Android.
Google is also working on ways to prevent potential exploits of AI, such as prompt injection or timing-based attacks, and is collaborating with research teams to explore long-term solutions.