Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

AI will make you a faster security researcher. It will also make you worse.

Martin Marchev, posted 4 days ago
  • AI is starting to become REALLY good at finding security vulnerabilities. Is it going to replace us? The author's claim is that it hollows you out and makes you dumb. This post follows the journey of this Certora researcher as they fold AI into their security workflow.
  • Early on, AI was a huge help. Understanding codebases quickly, bouncing reasoning off something... lots of hard tasks suddenly got easy. This was great, until they realized something: they were reaching for AI earlier and earlier in the process. They started using it NOT for context, but for judgment calls. Is this code really exploitable? They would just ask the AI instead of tracing through everything themselves, and accept the answer.
  • There is a blurry line between good and bad prompts. One kind asks the AI to do all of the work; the other asks a comprehension question and leaves the reasoning to you. The latter is a force multiplier; the former is where the hollowing-out happens. Worse, because you never reasoned through the process yourself, you can't tell when the logic is off. You never had a mental model to begin with. The LLM may be wrong, and you would never know.
  • Threat modeling is a muscle. Sitting with a hypothesis for hours, not knowing whether it will hold or break, is a skill. That uncertainty is uncomfortable, so it's tempting to let the AI resolve it and hand you confidence. But according to the author, the feeling of "I think there's something here but I can't prove it yet" IS a major part of the process. It's important to sit in it.
  • The author says that many folks are delegating this at great cost. They are faster and can cover more code. But their hit rate hasn't gone up. AI gave them breadth but took their depth. This is a bad, bad trade in the world of security research.
  • Here's the process that the author now uses:
    1. Write the attack scenario in plain language.
    2. Use AI to verify the mechanics. Execution paths, state transitions, etc. AI excels at understanding the logic of a complex chain of calls. If the AI is wrong somewhere, you can quickly disprove it with your context.
    3. Try to disprove the finding. The AI is useful for gathering evidence here.
  • The author leaves us with a good quote: "The security researchers who will thrive with AI are the ones who treat it like a debugger. A tool that extends your reach without replacing your judgment. The ones who will quietly decline are those who let it think for them, one prompt at a time. They will never notice the moment they stopped being the researcher and became a triage layer for an LLM."
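
The three-step workflow above lends itself to a small harness, if only as a thought experiment. Everything below is my own hypothetical sketch, not code from the post: `ask_llm` is a placeholder for whatever model API you use, and the prompts are illustrative. The design point is that the human supplies the hypothesis and renders the verdict; the model only answers mechanical comprehension questions.

```python
# Hypothetical sketch of the three-step workflow described above.
# `ask_llm` stands in for any model call; nothing here is a real API.

def ask_llm(prompt: str) -> str:
    """Placeholder: swap in your actual model call."""
    raise NotImplementedError

def investigate(hypothesis: str, code_context: str, ask=ask_llm) -> dict:
    # Step 1: the researcher writes the attack scenario in plain language.
    scenario = hypothesis

    # Step 2: use the model to verify mechanics only (execution paths,
    # state transitions) -- never the judgment call "is this exploitable?".
    mechanics = ask(
        "Trace the execution path for this scenario and list the state "
        f"transitions involved.\n\nScenario: {scenario}\n\nCode:\n{code_context}"
    )

    # Step 3: actively try to disprove the finding; the model gathers
    # evidence, the researcher decides whether the hypothesis survives.
    counter_evidence = ask(
        "List concrete reasons this scenario might NOT be exploitable "
        f"(guards, checks, access control).\n\nScenario: {scenario}"
    )

    return {
        "scenario": scenario,
        "mechanics": mechanics,
        "counter_evidence": counter_evidence,
        "verdict": None,  # deliberately left to the human, not the model
    }
```

Because `ask` is injectable, the structure can be exercised with a stub before any model is wired in, which also makes the "human owns the verdict" contract explicit in the return value.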