Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Cosmos SDK Security Discussion/Presentation - 1678

Alpin Yukseloglu & Spearbit    Reference → Posted 8 Months Ago
  • The video explores the Cosmos SDK and highlights some of the key security considerations. The person giving the talk is a protocol engineer at Osmosis, a very prominent blockchain in the web3 space.
  • The Cosmos SDK provides developers with significantly more control over the environment in which they work. Many of the issues surrounding Cosmos SDK chains stem from a central concept: "with great power comes great responsibility."
  • With general-purpose smart contract platforms, many of these issues are taken care of for you. For instance, smart contract platforms charge gas for each instruction that is executed. They also handle panics for you. In the world of the Cosmos SDK, this is not the case; all of this needs to be manually considered for each blockchain.
  • In the BeginBlocker/EndBlocker, the code is free of most restrictions. There is no gas, there are no timeouts, and there are no panic handlers. So, ensuring that a malicious adversary cannot trigger a Go panic in this section of code is essential. It's common for projects to have generic panic handlers to deal with this.
  • Unbounded operations cannot exist here. Apparently, it's common to make a sudo call via CosmWasm into a user-controlled contract. Since there is no gas limit, a user can run an infinite loop, letting the call continue indefinitely. Simply adding a gas meter on user-controlled operations is a wise move.
  • Another big one is non-determinism issues. This means code that may run differently on another validator's machine, leading to a consensus failure. Things like time-based checks, random number generators, floats, and Go iteration over maps are not guaranteed to give the same result. The main solution is simply to avoid functionality that does these things.
  • Most L1s handle fees for you. In Cosmos, you can create your own fee markets. For instance, you can make execution free, or free in specific scenarios. However, it's important to recognize the potential for abuse - if transactions can be submitted for free without limit, an attacker can spam the chain to a halt.
  • Overall, a good video from a knowledgeable developer/auditor. It's interesting because most of these issues stem from real-world issues found on Osmosis.
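
The generic panic-handler pattern mentioned above can be sketched in Go. Note that safeBeginBlocker and the hook signature are hypothetical names for illustration, not actual Cosmos SDK API:

```go
package main

import "fmt"

// safeBeginBlocker runs a block-lifecycle hook and converts any Go panic
// into an error, instead of letting one bad code path halt every node.
// Hypothetical sketch; real Cosmos SDK modules structure this differently.
func safeBeginBlocker(hook func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from BeginBlocker panic: %v", r)
		}
	}()
	return hook()
}

func main() {
	err := safeBeginBlocker(func() error {
		var xs []int
		_ = xs[3] // out-of-range access panics at runtime
		return nil
	})
	fmt.Println(err) // the panic surfaces as an error, not a chain halt
}
```

Because the recover sits in a deferred closure that assigns to the named return value, the panic is contained to the single hook invocation.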

“Localhost tracking” explained. It could cost Meta 32 billion. - 1677

Zero Party Data    Reference → Posted 8 Months Ago
  • Meta is known for not taking people's privacy seriously. It wants to track people and sell the data at all costs. This post is about a mechanism for tracking on Android that bypassed the sandbox restrictions to link what you do in the browser with your real identity, even if you never logged into your account in the browser.
  • The Meta Pixel is a piece of code used to measure the effectiveness of advertising. It is embedded on many, many websites and helps track individual users. The Facebook app, once opened on Android, keeps running in the background and opens a listener on a TCP or UDP port on the device. This is not that abnormal for an app to do.
  • The combination of the two above is what causes the issue. When you visit a website that has the Meta Pixel on Android, the page will attempt to connect to this port. In particular, it will send the _fbp cookie, which is scoped to a particular browsing session. Based upon this cookie, Facebook knows which website the visit came from. Once it's sent to the app, where you're logged in, Facebook knows exactly who was visiting the site!
  • What's crazy about this is that you could be on a VPN or in incognito mode and it can still track you. This has been coined "localhost tracking". The captured data includes browsing history, products, registrations on websites, and more. The author estimates that fines could be around 32 billion, which is an insane amount.
  • Localhost tracking is an interesting technique! It's sad that this was found in the wild though.
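
The receiving half of the mechanism can be modeled in a few lines of Go. This is an illustrative sketch only: the port choice and payload are made up, not Meta's actual values.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// captureOnce models the app side of "localhost tracking": a native app
// listens on a loopback port and reads whatever a web page's JavaScript
// sends to 127.0.0.1. Illustrative sketch, not Meta's implementation.
func captureOnce(payload string) string {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // app opens a local port
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	// "Browser" side: any page can open a connection to localhost and
	// leak a first-party cookie such as _fbp to the native app.
	go func() {
		conn, err := net.Dial("tcp", ln.Addr().String())
		if err != nil {
			return
		}
		fmt.Fprint(conn, payload)
		conn.Close()
	}()

	// App side: receive the cookie; since the app knows who is logged
	// in, the browsing session is now tied to a real identity.
	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	data, _ := io.ReadAll(conn)
	return string(data)
}

func main() {
	fmt.Println(captureOnce("_fbp=fb.1.1700000000.123456789"))
}
```

No special permissions are involved on either side, which is why sandboxing, incognito mode, and VPNs do not help: both endpoints are on the same device.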

Bypassing GitHub Actions policies in the dumbest way possible - 1676

ENOSUCHBLOG    Reference → Posted 8 Months Ago
  • GitHub Actions provides a policy mechanism to limit the kinds of actions and reusable workflows that can be used. The policies eliminate the failure mode of adding malicious or harmful workflows without further consideration.
  • The restrictions can be applied to specific tags or commit hashes, as well as to particular organizations or repositories. It's a reasonably practical system for ensuring that a developer doesn't harm themselves.
  • This policy system can be "bypassed" by calling git clone on the repository and using a relative path. To me, this is sane: if you downloaded something locally, then you're making an active choice to run the code. At the same time, it does work around a policy meant to prevent foot-guns.
  • The author suggests adding a new policy type that can explicitly allow or deny local usage of workflows. I'm personally on the fence about this though. Regardless, an interesting thing to know about for GitHub Actions.

Bringing ‘Clarity’ to 8 Dangerous Smart Contract Vulnerabilities - 1674

Jude Nelson - Stacks    Reference → Posted 8 Months Ago
  • The primary smart contract development language is Solidity. However, it contains many, many footguns that the developers of Stacks have tried to fix. This post goes into the design of their custom language, Clarity, and the vulnerabilities that it helps prevent.
  • The most famous vulnerability in Solidity is reentrancy. This is when a smart contract calls into another contract that eventually calls back into your contract, allowing manipulation of state in ways that shouldn't be possible. Clarity doesn't allow for reentrancy at all. Integer overflows/underflows are prevented at the VM level as well.
  • Clarity requires external contract calls to be explicitly handled. This ensures that errors are handled properly. On top of this, all functions return something similar to a Result type in Rust. This makes the error handling very explicit in Clarity.
  • The exact gas cost of any Clarity code is known before execution. To make this possible, unbounded iteration and dynamic lists are not allowed. This prevents many standard classes of DoS vulnerabilities seen in Solidity. Clarity also contains a native VRF function that the VM can call to obtain actual random data, preventing the weak-randomness vulnerabilities common on Ethereum-based blockchains.
  • The final section is my favorite: unknown unknowns. To prevent users from getting exploited, they have post-conditions on the outcome of the contract execution. If these are violated, then the contract just fails. This helps proactively protect assets even if you don't know the vector.
  • Overall, a good post on the benefits of the Clarity language.
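
The post-condition idea can be sketched as a Go analogue. This is not Clarity syntax or the Stacks runtime, just a hypothetical model of the mechanism: the caller declares a bound on the outcome, and the runtime rolls the whole call back if the bound is violated, even when the exploit vector is unknown.

```go
package main

import "fmt"

// postCondition checks an invariant on the outcome of a contract call.
type postCondition func(balanceBefore, balanceAfter int64) bool

// executeWithPostCondition runs a call, then verifies the declared
// post-condition; on violation, all effects are rolled back and the
// call fails. Hypothetical sketch of Clarity-style post-conditions.
func executeWithPostCondition(balance *int64, call func(), cond postCondition) error {
	before := *balance
	call()
	if !cond(before, *balance) {
		*balance = before // roll back: the contract "just fails"
		return fmt.Errorf("post-condition violated")
	}
	return nil
}

func main() {
	balance := int64(1000)
	// Declare: this call may spend at most 100 tokens.
	atMost100 := func(before, after int64) bool { return before-after <= 100 }

	// A buggy or malicious call tries to drain 500 tokens.
	err := executeWithPostCondition(&balance, func() { balance -= 500 }, atMost100)
	fmt.Println(err, balance) // violation reported; balance back at 1000
}
```

The point of the pattern is that the defender never needed to know how the 500-token drain happened; the declared bound on the outcome catches it regardless.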

Unexpected security footguns in Go's parsers - 1673

Vasco Franco - Trail of Bits    Reference → Posted 8 Months Ago
  • Golang's parsing for JSON, XML, and YAML has some peculiar properties that the author of this post decided to investigate. When unmarshalling JSON, struct fields in Golang can be explicitly annotated with tags that control decoding. For instance:
    type User struct {
        Username string `json:"username_json_key,omitempty"`
        Password string `json:"password"`
        IsAdmin  bool   `json:"is_admin"`
    }
    
  • If the json: tag is not included, then Golang will still unmarshal into the field using its exact name - in this case Username. A less-senior developer may not know this and assume that a field without a tag like `json:"is_admin"` cannot be set at all. To actually tell the parser to skip a field, the - tag must be used.
  • There's a funny quirk about this though! If - is combined with anything else, then the parser assumes that - is the literal field name! For instance, with the definition `json:"-,omitempty"`, the field is serialized under the key -. The author found two occurrences of this that they reported as vulnerabilities, and several thousand more currently exist on GitHub. Another misuse is using omitempty as the key name instead of as an option. Both of these can be trivially found with Semgrep rules.
  • The next class of issues revolves around parser differentials. They label several common issues of misuse: duplicate fields, and case-insensitive key matching. This mostly applies when parsing data in one language and then having it be processed by another.
  • The final bug class is data format confusion. In some cases, parsers are too lenient and try to extract valid data out of whatever you feed them. The example they use is parsing a JSON file with an XML parser. In the case of HashiCorp, Felix from P0 found that they could smuggle XML into an endpoint intended for JSON. By doing this, the controlled XML was processed instead of the legitimate data. Eventually this led to an auth bypass.
  • The XML parser will accept leading or trailing garbage data. All of the parsers in Golang will accept unknown keys that don't match the struct. Although this doesn't have an impact by itself, it helps construct malicious payloads when exchanged between parsers.
  • Overall, a good post into the weirdness of parsing libraries in Golang.

Crowdsourced Audits Timelines - 1672

VigilSeek    Reference → Posted 8 Months Ago
  • A list of all crowdsourced audit platforms. Code4rena (C4), Cantina, Sherlock, and HackenProof are all on there. This makes it easier to choose a contest platform by being informed about what's going on.

LLM users consistently underperformed at neural, linguistic, and behavioral levels - 1671

Rohan Paul    Reference → Posted 9 Months Ago
  • Rohan summarized a research paper about the effects of LLMs and Google on brain usage and effectiveness. The study placed people in three situations when writing essays: brain only, Google + brain, and LLM + brain.
  • For brain usage, it's what you would expect: usage was highest with brain only, lowest with the LLM, and in the middle with Google.
  • Essays produced with ChatGPT were clustered in terms of words and thoughts. Google essays were more spread out but very much influenced by the search engine results. Brain-only essays were the most creative.
  • In terms of memory, Google and Brain Only were the best - they were able to recall most passages from the essay. With ChatGPT, only 17% of sentences were remembered.
  • The scariest part to me was the lingering effects. When a ChatGPT-only user tried to write using only their brain, they showed 32% less brain activity. I guess the brain expects the tool to come back?
  • When the brain-only writers switched to ChatGPT, their revisions were fantastic and brain usage increased. To me, this demonstrates that starting with only your brain is better than starting with the LLM tools.
  • Overall, an interesting study into the effects of LLMs!

Prompt Engineering by Google - 1670

Lee Boonstra - Google    Reference → Posted 9 Months Ago
  • Tips and strategies for writing better prompts. In short, be concise + positive, use examples, and specify the format of output.

How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation - 1669

Sean Heelan    Reference → Posted 9 Months Ago
  • The author of this post found a vulnerability in the Linux kernel SMB implementation, then used o3 to rediscover it and, in the process, find a variant elsewhere in the codebase. This is the story of that happening.
  • CVE-2025-37778 is a use-after-free vulnerability and the original bug that was spotted. In the session setup request for Kerberos, if the state is SMB2_SESSION_VALID, then the sess->user object is freed. This is done in order to prevent a UAF later. Sadly, there is a code path that allows this freed object to be used anyway due to concurrency issues. This is the basis of the vulnerability.
  • The general prompt contained the following:
    1. Look for use after free vulnerabilities.
    2. A deep explanation on ksmbd, its threat model and architecture.
    3. Be cautious. Favor not reporting false positives.
  • At the end of this, the author repeated the experiment 100 times. Out of these runs, 8 found the bug, 66 didn't find it, and 28 reported false positives. Running with ALL of the command handlers in context at once led to a 1-out-of-100 discovery rate. It's interesting to see the discovery rate fluctuate so much.
  • While running these scans on their vulnerability, a new bug was reported: a UAF via bad concurrency handling in the SMB2_SESSION_LOGOFF code. The author shows the direct output from the LLM and it's pretty precise! It's able to reason about two workers hitting the code at the same time, leading to a UAF.
  • The noise is high: the signal-to-noise ratio is about 1:50. Still, this is a good step in the right direction, and the tooling will only get better going forward. Awesome write-up on vulnerability discovery in the Linux kernel using LLMs!

CVE-2025-47934 – Spoofing OpenPGP.js signature verification - 1668

Thomas Rinsma    Reference → Posted 9 Months Ago
  • OpenPGP.js is a JavaScript implementation of the OpenPGP standard (RFC 9580). It's used for encrypted emails, signing git commits, and many other things.
  • The PGP payload consists of a list of packets with no overarching header. The packets implement a custom binary protocol that can be sent as-is or base64 encoded. The format is VERY flexible as a result. Different types of packets can be sent in any order.
  • The vulnerability is around the unnecessary parsing of extra data on a PGP packet. The signature data should be the final part of the packet according to the specification. Crazily enough, it doesn't have to be!
  • The verification code only iterates over packets until it reaches the signature packet. However, the consuming code takes all of the packets. This means that dangling data at the end is still treated as valid, even though it was never verified. This applies to both encryption and signature verification.
  • Overall, a good post! These issues around double parsing of blocks are becoming more and more relevant, and this is a trick to keep in mind.
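
The verified-versus-consumed mismatch can be illustrated with a toy packet stream. The types and logic below are hypothetical, not OpenPGP.js internals: the verifier only covers packets before the signature, while the consumer concatenates every data packet, so anything appended after the signature rides along unverified.

```go
package main

import "fmt"

// packet is a toy model of the flat, headerless packet list described
// above. Illustrative only; real OpenPGP packets are a binary format.
type packet struct {
	kind string // "literal" or "signature"
	data string
}

// verifiedData mimics the verifier: it covers only packets that appear
// BEFORE the signature packet, then stops.
func verifiedData(msg []packet) string {
	out := ""
	for _, p := range msg {
		if p.kind == "signature" {
			return out
		}
		out += p.data
	}
	return out
}

// consumedData mimics the consumer: it concatenates every literal
// packet, including any that dangle after the signature.
func consumedData(msg []packet) string {
	out := ""
	for _, p := range msg {
		if p.kind == "literal" {
			out += p.data
		}
	}
	return out
}

func main() {
	msg := []packet{
		{"literal", "pay alice 10"},
		{"signature", "<sig over the packet above>"},
		{"literal", "0000"}, // attacker-appended, never verified
	}
	fmt.Println(verifiedData(msg)) // pay alice 10
	fmt.Println(consumedData(msg)) // pay alice 100000
}
```

The fix is the obvious one: the consumer must refuse, or at least ignore, any packet the verifier did not cover.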