The impact of security vulnerabilities is hard to price, unless you're dealing with funds locked in smart contracts. So Anthropic decided to see how well AI could find vulnerabilities in smart contracts. Smart contracts are also small, with a well-defined set of security properties, making them ideal for measuring AI capabilities.
First, they created a benchmark of 405 smart contract vulnerabilities across three EVM-compatible chains, spanning 2020 to 2025. The agent was given a large set of tools via MCP and a 60-minute time limit. They evaluated 10 different models on this benchmark, producing working exploits for 51% of the vulnerabilities. They also evaluated a set of 34 problems from after the training cut-off date, where about 50% of the exploits succeeded. Finally, they tried to uncover some zero-days and found two bugs where the exploit, at about $3.4K, was slightly more profitable than the API cost.
The first novel vulnerability it found was an access-control issue: the contract omitted the `view` modifier on a function that was supposed to be read-only but actually changed the caller's funds. By calling it repeatedly, the agent could claim all funds held by the contract. The bot stole the funds and sold them for a profit entirely on its own. Crazy!
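The shape of that bug can be sketched in a few lines of Python (all names here are hypothetical, not from the post): a function that callers treat as a read-only query actually credits the caller on every invocation, so looping over it drains the contract.

```python
# Hypothetical sketch of the missing-`view` bug: a function intended as a
# read-only balance query instead mutates state on every call.

class VulnerableContract:
    def __init__(self, total_funds):
        self.total_funds = total_funds
        self.balances = {}

    def claimable(self, caller):
        """Meant to behave like a Solidity `view` function, but it
        moves funds to the caller each time it is invoked."""
        reward = min(100, self.total_funds)
        self.total_funds -= reward                              # state change!
        self.balances[caller] = self.balances.get(caller, 0) + reward
        return self.balances[caller]


contract = VulnerableContract(total_funds=1000)
# The attacker simply calls the "query" in a loop until nothing is left.
while contract.total_funds > 0:
    contract.claimable("attacker")
print(contract.balances["attacker"])  # → 1000: the attacker holds everything
```

In real Solidity the compiler would reject state writes inside a function declared `view`; the bug is precisely that the modifier was never declared, so nothing stopped the mutation.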
The second vulnerability was an input-validation issue. The contract was a one-click token launcher. When a token was created, the contract collected trading fees associated with it, split between the contract and a beneficiary address specified by the token creator. If the beneficiary wasn't specified, the contract neither forced a default nor validated the field, so anybody could claim the fees on behalf of the token creator. This was used to steal about $1K worth of funds in the real world.
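A minimal Python sketch of that missing check (again with hypothetical names, not the real contract's code): the constructor accepts an unset beneficiary, and the claim path skips its permission check whenever the field was never filled in.

```python
# Hypothetical sketch of the unvalidated-beneficiary bug.
ZERO = "0x0"  # stand-in for Solidity's zero address

class TokenLaunch:
    def __init__(self, creator, beneficiary=ZERO):
        self.creator = creator
        # Bug: neither defaults an unset beneficiary to the creator
        # nor rejects the zero address.
        self.beneficiary = beneficiary
        self.fees = 0

    def accrue(self, amount):
        self.fees += amount  # trading fees collected over time

    def claim(self, caller):
        # Bug: when the beneficiary was left unset, the equality check
        # is skipped, so ANY caller collects the creator's fee share.
        if self.beneficiary != ZERO and caller != self.beneficiary:
            raise PermissionError("not the beneficiary")
        payout, self.fees = self.fees, 0
        return payout


launch = TokenLaunch(creator="0xCreator")  # beneficiary never specified
launch.accrue(1000)
print(launch.claim("0xAttacker"))  # → 1000: attacker walks away with the fees
```

The fix is one line in the constructor: fall back to `creator` (or revert) when no beneficiary is given.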
The cost to run these models was about the same as the profit gained. In practice, they argue, attackers could use better heuristics for finding vulnerable code, and the cost of tokens keeps falling. According to the post, the median price per token has declined by about 70%, a roughly 3.4x increase in tokens per dollar.
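The two figures are the same claim stated two ways, which a quick check confirms: a price decline of fraction d multiplies the tokens a fixed budget buys by 1 / (1 - d).

```python
# Sanity-checking the cost arithmetic: a ~70% drop in per-token price
# means the same dollar budget buys 1 / (1 - d) times as many tokens.
# A 3.4x increase corresponds to d ≈ 0.705, which rounds to "70%".
decline = 0.705  # assumed per-token price decline
multiplier = 1 / (1 - decline)
print(round(multiplier, 2))  # → 3.39, i.e. roughly the post's 3.4x
```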
The AI agent has gone from exploiting 2% of vulnerabilities to 55% within the benchmark. They claim that more than half of the blockchain exploits in 2025 could have been carried out by autonomous attackers. I feel this is somewhat exaggerated, given that the total stolen amount in their set was only $4.6 million, while the actual amounts stolen since March are MUCH higher. I'd also like to see it reason about more complicated bugs rather than simple input-validation or access-control issues.