Hacker News with Generative AI: Vulnerability Research

Using Large Language Models to Catch Vulnerabilities (blogspot.com)
In our previous post, Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models, we introduced our framework for large-language-model-assisted vulnerability research and demonstrated its potential by improving the state-of-the-art performance on Meta's CyberSecEval2 benchmarks. Since then, Naptime has evolved into Big Sleep, a collaboration between Google Project Zero and Google DeepMind.
Escaping the Chrome Sandbox Through DevTools (ading.dev)
This blog post details how I found CVE-2024-6778 and CVE-2024-5836, two vulnerabilities in the Chromium web browser that allowed a sandbox escape from a browser extension (with a tiny bit of user interaction). Google eventually paid me $20,000 for the bug report.
Advancing Memory Safety (googleblog.com)
Error-prone interactions between software and memory are widely understood to create safety issues in software. It is estimated that about 70% of severe vulnerabilities in memory-unsafe codebases are due to memory safety bugs. Malicious actors exploit these vulnerabilities and continue to cause real-world harm. In 2023, Google's threat intelligence teams conducted an industry-wide study and observed a near-all-time-high number of vulnerabilities exploited in the wild.
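To make the statistic concrete, here is a minimal C sketch (an illustration, not an example from the post) of a use-after-free, one of the memory safety bug classes that figure cites:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(16);
        if (!name) return 1;
        strcpy(name, "alice");
        free(name);              /* the object's lifetime ends here */

        /* Use-after-free: the pointer still holds the old address,
         * but the allocator may have reused that memory for another
         * allocation. Reading or writing through it is undefined
         * behavior, and it is exactly the kind of bug attackers
         * turn into exploits. */
        printf("%s\n", name);    /* BUG: dereferences freed memory */
        return 0;
    }

Memory-safe languages rule out this class of bug by construction (via garbage collection, or ownership and borrowing as in Rust), which is the motivation behind the industry shift the post describes.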