Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

A White Mage’s Guide to Web3 Bug Hunting

WhiteHatMage · Posted 1 month ago
  • WhiteHatMage was in the top 3 on both Immunefi and HackenProof for web3 bug bounties last year. This post explains how they identify projects and the realities of finding vulnerabilities in live projects with impact.
  • What makes vulnerabilities more likely? First, bugs hide within complexity. Most serious issues they find are simple mistakes buried in layered, complicated systems; many fixes are one-line changes. Next, innovation creates space for new bugs: when a project adopts a novel approach, its designers are unlikely to have considered every attack path correctly. Implementation-level innovation matters too, since new implementation experiments bring subtle bugs. Finally, as ecosystems mature, there are fewer bugs: new chains, uncommon languages, and the like tend to have more basic issues simply because fewer people have looked at them.
  • Optimizations are at the root of a lot of evil. Heavy assembly usage, manual memory management, rewritten math expressions... optimizations often obscure edge cases that developers did not anticipate. Next, code quality tells a story. When developers rush a feature or lack attention to detail, bugs are more common, and poor code is hard to secure. Even non-functional issues, such as sloppy comments, are noteworthy. Ignoring best practices and missing basic security patterns like checks-effects-interactions (CEI) are all warning signs. Projects with poor code quality are high-risk for vulnerabilities.
  • Audit reports provide useful context. Multiple critical findings are a serious red flag, and every fix carries the risk of introducing another issue or applying an improper patch. Depending on the quality of the codebase and the auditing firm, they look for different types of issues:
    • Good codebase with good audits: novel or very complex bugs.
    • Good codebase with average audits: complex paths and known security pitfalls.
    • Average codebase with good audits: review audit fixes and leftover weak design.
    • Average codebase with average audits: missed but not extremely complex exploit paths.
  • Being first is super important. Right after a big launch, the chance of vulnerabilities is high, so the author will often speedrun basic security checks on a project as soon as they hear about a launch. In the first few weeks, more complex attack paths may be discovered that auditors didn't identify. When a project first launches its bug bounty program, the competition is intense, so they tend to check early and then come back for a deeper pass later.
  • The approach for finding critical bugs is very similar to my process. They focus only on critical paths, asking which invariants must hold for the system to be secure. Over time, this builds intuition. They also return to a codebase after a while, which pays off because they may have new techniques or the system may have changed. It's important to note that time is limited, so every decision matters.
  • They add a list of bug bounty archetypes:
    • The Digger. Goes super deep into a single program.
    • The Differ. Compares one mechanism across many different projects.
    • The Speedrunner. Reviews new programs quickly.
    • The Watchman. Monitors deployments and upgrades.
    • The Lead Hunter. Develops ideas around lesser-known attack vectors.
    • The Scavenger. Inspired by obscure writeups or little-known incidents.
    • The Scientist. Builds major tooling for analysis.
  • When choosing a bounty program, they also consider the project's own reputation. If a project is well known for lowballing or not paying, it's not worth your time. Do they have the money to pay you in the first place? Are the rewards and scope clear? Do they take security seriously via audits, or is it just a checkbox? Once they find a single bug, they report it and see how the process goes; only after this do they look for more. For them, red flags are vague rules, low caps, prior disputes, or a lack of response.
  • A fantastic article from a fantastic security researcher. Thanks for taking the time to write it up!
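
The "rewritten math expressions" pitfall mentioned above can be shown with a toy example. Here's a minimal Python sketch (all names hypothetical, not from the original post) that mimics Solidity-style truncating integer division, where an "optimized" reordering silently zeroes out a fee:

```python
# Toy example (hypothetical): reordering integer math changes behavior.
# Solidity-style integer division truncates, so a rewritten expression
# can silently lose precision for certain inputs.

FEE_BPS = 30        # 0.30% fee, expressed in basis points
BPS_DENOM = 10_000

def fee_correct(amount: int) -> int:
    # Multiply before dividing: truncation happens once, at the end.
    return amount * FEE_BPS // BPS_DENOM

def fee_rewritten(amount: int) -> int:
    # A rewritten form that divides first, e.g. "to avoid overflow".
    # For any amount below BPS_DENOM, this truncates straight to zero.
    return amount // BPS_DENOM * FEE_BPS

print(fee_correct(9_999))    # 29
print(fee_rewritten(9_999))  # 0 -- the fee vanished
```

The two expressions are algebraically identical over the rationals, which is exactly why such a rewrite survives review: the bug only shows up under integer truncation on certain input ranges.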