Claude Opus 4.6 found 22 vulnerabilities in Firefox — a lesson for software cybersecurity and Ukraine’s defense

An experiment by Anthropic and Mozilla showed that AI can rapidly find critical bugs — both an opportunity and a warning about the technology’s dual-use.

Illustrative photo: Depositphotos

Experiment result

Anthropic, in collaboration with Mozilla, launched a test in which the Claude Opus 4.6 model analyzed the Firefox browser code. Over two weeks the AI found 22 vulnerabilities, of which 14 were classified as critical. Some of the issues discovered have already been fixed in the Firefox 148 (February) update.

Key details

The model began working with the code and in less than 20 minutes found a use-after-free bug in a component related to JavaScript execution. During the analysis Claude reviewed about 6,000 C++ files and sent more than 100 reports to the Mozilla team.

Anthropic also tested whether the model could create exploits, that is, code that actually triggers the discovered vulnerabilities. Despite hundreds of attempts and roughly $4,000 in API costs, working exploits were produced in only two cases.

"The results demonstrate that AI can become a powerful auxiliary tool for continuous security monitoring of complex software, but it requires clear usage rules and oversight"

— Anthropic researchers

"Most of the issues found have already been fixed in the Firefox 148 update"

— a Mozilla spokesperson

Risks and limitations

The experiment highlighted two key points. First, AI can significantly speed up vulnerability discovery and reduce the resources needed for code review. Second, the capability is inherently dual-use: the same AI that helps close bugs can potentially assist in creating exploits. In this context, not only the technical results matter but also access policy, auditing, and control over model use.

There is also a geopolitical dimension to regulation: Claude was placed on the Pentagon's "blacklist," and there have been reports that some Chinese companies trained their own models on Claude without Anthropic's consent. This underscores issues of intellectual property and the risks of technology spreading without control.

What this means for Ukraine

First, for Ukrainian developers and government agencies this is a signal: investing in AI-based tools for vulnerability discovery is smart and effective. Second, it is necessary to account for the expected increase in availability of such tools in the hands of adversaries: automation of bug discovery can accelerate cyberattacks on critical infrastructure. Finally, this is an argument in favor of international cooperation in cybersecurity and clear rules for model transfer and use.

Conclusion

The Anthropic and Mozilla experiment is an example of how artificial intelligence can change software security practices: finding problems faster, but also creating new challenges for control and ethics. For Ukraine, it is an opportunity to strengthen protection of digital infrastructure and a reminder of the need for policies that separate the beneficial from the dangerous.
