This post comes from Aisle, an AI security company. Anthropic recently reported finding 500 vulnerabilities across various products. There's a problem with that report, though: it doesn't discuss the severity breakdown, target selection, or maintainer response at all. Aisle, by contrast, tests only against the most secure software projects, with no retrospective comparisons.
The Aisle tool recently found twelve new vulnerabilities in OpenSSL. One of these was a buffer overflow in the CMS message parsing that could have been remotely exploitable without valid key material, with a severity rating of 9.8 out of 10. In five of the twelve cases, the AI system even proposed the fix.
Daniel Stenberg, the creator of curl, recently closed curl's bug bounty program due to LLM-generated spam. Even so, he noted that AI can be effective for open-source security when used responsibly. It's an interesting perspective, given the slop he has had to wade through in his own bug bounty program. Aisle previously identified three vulnerabilities in curl, which were reported and fixed.
A great quote: "There's a temptation in this space to lead with big numbers. Five hundred vulnerabilities sounds impressive. But the number that actually matters is how many of those findings made the software more secure." The failure mode is drowning maintainers in noise and declaring victory, rather than actually improving the security posture. AI is collapsing the median via slop while raising the ceiling; which effect you see depends on which side you're on.
Aisle also has a PR review tool that routinely finds bugs; Daniel Stenberg even runs it on his own pull requests. It recently caught a buffer overflow in a curl PR, as well as two use-after-frees in OpenSSL changes. The goal is to prevent vulnerabilities before they ever ship. A good report on what good AI security looks like!