To answer the question in the title: It's SAST (Static Application Security Testing) + LLMs. Traditional static analysis tools are poor at detecting certain bug classes, such as authorization and business logic bugs, while an LLM can sometimes understand code context well enough to identify those issues.
At the beginning, they do outline some issues with the approach. First is cost: tokens are cheap because they're subsidized, but not that cheap, and there's a ton of software to analyze. Second is context rot: LLMs recall data from the beginning and the end of a prompt, but not as much from the middle. Finally, LLMs are not deterministic; the AI may review the same code differently each time it sees it.
The author sees four inputs for LLM-native tools: the main input, the prompt, RAG, and context. The main input is the suspected vulnerable code, or whatever code we're trying to look at. The prompt is the objective, such as "does this code have XSS in it?" RAG (Retrieval-Augmented Generation) is a framework for retrieving data to add to the call as context, with more specific information about the task, such as XSS payloads and descriptions of XSS.
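The inputs above can be sketched as a single prompt-assembly step. Everything here (the function name, the formatting) is an illustrative assumption, not something from the original post:

```python
# Hypothetical sketch: combine the prompt (objective), RAG retrievals,
# and main input (code under review) into one LLM call payload.

def build_analysis_prompt(objective: str, code: str, rag_snippets: list) -> str:
    """Assemble objective + retrieved reference material + target code."""
    context = "\n".join(f"- {s}" for s in rag_snippets)
    return (
        f"Objective: {objective}\n\n"
        f"Reference material:\n{context}\n\n"
        f"Code under review:\n```\n{code}\n```"
    )

prompt = build_analysis_prompt(
    "Does this code have XSS in it?",
    "element.innerHTML = userInput",
    ["XSS occurs when untrusted input is rendered as HTML without escaping."],
)
```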
They have a few different mechanisms for using SAST with AI. The first one is Prompt + Code: simply give the AI code and tell it to analyze it. This is simplistic, but better than not doing it at all. It can be paired with AI analyzing pull requests, or used as a classifier/triager before passing code to a more expensive model.
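The classifier/triager pairing might look like this sketch, where `call_model` and the model names are placeholders rather than a real API:

```python
# Hypothetical sketch: a cheap model filters diffs, and only the
# "maybe vulnerable" ones get escalated to a stronger, pricier model.

def triage_then_review(diff, call_model):
    """call_model(model_name, prompt) -> str is an injected stand-in
    for whatever LLM client you actually use."""
    verdict = call_model(
        "cheap-model",
        f"Answer YES or NO: could this diff be security-relevant?\n{diff}",
    )
    if verdict.strip().upper().startswith("YES"):
        # Escalate only when the cheap pass flags the diff.
        return call_model(
            "strong-model",
            f"Review this diff for vulnerabilities:\n{diff}",
        )
    return None  # filtered out; no expensive call made
```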
The next mode of operation is Prompt + Agent: prompt the AI to find issues, give it a set of tools to work with, and come back a few hours later to see what it's found. This is like the first approach, but with more specific prompts about the particular targets you gave it, to see if it can find anything interesting.
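A minimal agent loop along these lines might look like the sketch below, with an injected `decide()` callback and a tool table standing in for a real model and real tooling (both are assumptions for illustration):

```python
# Toy agent loop: the model picks a tool, we run it, feed the result
# back into the transcript, and repeat until it reports findings.

def run_agent(decide, tools, max_steps=10):
    """decide(transcript) -> (action, arg); tools maps names to callables."""
    transcript = []
    for _ in range(max_steps):
        action, arg = decide(transcript)   # model chooses the next step
        if action == "report":
            return arg                     # final findings
        result = tools[action](arg)        # e.g. grep, read_file, run_tests
        transcript.append((action, arg, result))
    return None                            # gave up within the step budget
```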
The third one is Tailored Prompt + SAST Result: run a SAST tool and give the AI tailored prompts based on its findings. For very precise SAST rules this isn't helpful, but for noisier "hotspot" types of rules it can significantly reduce the noise. Adding data flow analysis makes the AI more useful still.
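Turning SAST findings into tailored prompts could look like this sketch. The finding shape mirrors Semgrep's `--json` output (`results`, `check_id`, `path`, `start.line`), but the prompt wording is our own assumption:

```python
import json

# Sketch: convert Semgrep-style JSON findings into one tailored
# triage prompt per finding for an LLM second-pass review.

def prompts_from_findings(semgrep_json: str) -> list:
    findings = json.loads(semgrep_json)["results"]
    return [
        (
            f"A SAST rule ({f['check_id']}) flagged line {f['start']['line']} "
            f"of {f['path']}. Given the surrounding code, is this a true "
            f"positive or noise? Explain the data flow."
        )
        for f in findings
    ]
```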
The final one they raise is Agent + Code Graph + SAST MCP. The author mostly uses and recreates an existing tool called ZeroPath for this. According to them, it uses Tree-sitter to parse the function graph and then enhances the steps with AI, such as adding notes for CSRF protection. MCP (Model Context Protocol) servers give the AI the ability to use tools, such as Semgrep, source code reading, and many other things. They also explain embedding models, which allow for better data retrieval than MCP tool calls.
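A toy illustration of embedding-based retrieval: rank snippets by cosine similarity to a query vector. The `embed()` here is a trivial bag-of-letters stand-in for a real embedding model, chosen only so the sketch runs anywhere:

```python
import math

def embed(text: str) -> list:
    """Stand-in embedding: count of each letter a-z (a real system
    would call an embedding model instead)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, snippets: list, k: int = 1) -> list:
    """Return the k snippets most similar to the query."""
    q = embed(query)
    return sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]
```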
According to the author, the more you hold the AI's hand with tooling, the better the results. We still need static analysis tools to augment the LLMs, as they can't fully understand complicated code on their own yet. Overall, a good post on the state of AI and how this engineer uses it themselves.