Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

The Path to Memory Safety is Inevitable - 1656

Shaen Chang - Hardened Linux    Reference →Posted 9 Months Ago
  • Memory safety is a common topic when discussing programming languages. However, it's not well-defined what is being talked about.
    • Compiler-based static analysis.
    • Compiler-based code creation that is expected to be memory-safe.
    • Runtime mechanisms like garbage collection and array bounds checking.
    • Security hardening. Aka, just trying to make the system safe against attackers.
  • Lisp, traditionally a memory-safe language, has features that can still allow memory corruption. C/C++, traditionally considered memory-unsafe, can also be used safely: static analysis tools, strict code review, and good runtime detectors go a long way. This demonstrates that memory safety isn't the sole responsibility of the compiler or the runtime - it's a coordinated effort. The author of this post comes from HardenedLinux, a project whose goal is to create a version of Linux that is resistant to compromise.
  • The lifecycle of vulnerability has a few stages:
    1. Identifying a bug and assessing whether it can be exploited.
    2. Writing a PoC.
    3. Adapting the PoC into a stable exploit.
    4. Digital arms dealers integrating it into a weaponized framework.
  • Most of the effort in preventing vulnerabilities goes into ensuring a bug doesn't exist in the first place. However, there are other ways to keep users safe besides removing bugs - we could instead make them unexploitable, so they never turn into RCE.
  • Fil-C is a customization of the Clang/LLVM compiler, made at Epic Games, that catches many memory safety vulnerabilities through a combination of garbage collection and capability checks on pointer accesses. Buffer overflows, type confusions, use-after-frees, and many other classes of vulnerabilities can be prevented with it.
  • Another strategy is around mitigation techniques at the hardware or software level. NX, CET, etc. are good examples of this. Many vulnerabilities would have been harder to exploit with some of these protections, if not outright impossible. Every protection is another roadblock that makes it less likely that exploitation will occur.
  • Practically speaking, I like this take on simply rewriting software: "Rewriting software and libraries using memory-safe languages is an expensive endeavor. If you have thoroughly considered this approach and decide to proceed, please consider rewriting them in Lisp/Scheme." Great post on the practicalities of exploiting systems!

CVE-2025-30147 - The curious case of subgroup check on Besu - 1655

Antonio Sanso - Ethereum    Reference →Posted 10 Months Ago
  • Elliptic Curve Cryptography is the basis of most signature verification, hence identity, in modern blockchains. Prior to the recent Pectra release, only the bn254 elliptic curve was allowed. There are precompiles for curve pairing checks and point addition/multiplication that were defined in previous releases for gas-efficient computation.
  • Invalid curve attacks are a known issue in elliptic curve cryptosystems. For curves whose order is not prime, it's important that points lie in the proper subgroup; if a point is not in the correct subgroup, cryptographic operations can be manipulated or compromised. To validate a point, two things must be checked: it must be on the curve and it must belong to the right subgroup. If P is untrusted, these verifications are crucial.
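To make the two-part validation concrete, here is a toy sketch in Python (a small illustrative curve over GF(17), not bn254, and `is_valid_point` is my name for the check, not Besu's). Note that the addition law never uses the constant B, which is exactly why a subgroup-only check can be satisfied by points taken from a different (isomorphic) curve:

```python
# Toy short-Weierstrass curve y^2 = x^3 + 7 over GF(17); its group has
# order 18, so we use the order-9 subgroup (cofactor 2).
P_MOD, A, B = 17, 0, 7
R = 9            # subgroup order
INF = None       # point at infinity

def on_curve(pt):
    if pt is INF:
        return True
    x, y = pt
    return (y * y - (x ** 3 + A * x + B)) % P_MOD == 0

def add(p, q):
    # NOTE: the formulas below use A but never B -- an off-curve point
    # still "adds" consistently, which is why subgroup-only checks fail.
    if p is INF: return q
    if q is INF: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(k, pt):
    acc = INF
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

def is_valid_point(pt):
    # Both checks are required: Besu's bug was skipping the first one.
    return on_curve(pt) and mul(R, pt) is INF
```

Multiplying an arbitrary on-curve point by the cofactor (here, 2) always lands it in the subgroup, which is a common way honest implementations produce valid points.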
  • In the Besu implementation of the EVM, is_valid_point was not checking whether the point was on the curve - it was only checking that it was in the subgroup. So, can you create a point that lies in the correct subgroup but off the curve? This requires a very carefully chosen point - in particular, one on an isomorphic curve. There are more details on the math but I don't really understand them :)
  • Why does all of this matter? In this case, the main issue was a consensus failure: since the Besu implementation was the only one with this particular bug, it would have diverged from the other clients, potentially leading to a chain fork. Beyond this, they imply there are other security concerns but don't spell them out.
  • To me, uptime is not a huge concern compared to the benefit of multiple clients. For a loss-of-funds bug in the EVM to be exploited, it would have to appear in 66% of the clients; this is the benefit of client diversity. A good bug that was very specific to cryptography nonetheless.

Solana: The hidden dangers of lamport transfers - 1654

OtterSec - Nicola Vella    Reference →Posted 10 Months Ago
  • Lamports are the smallest denomination of SOL on Solana. Sending SOL to an account can cause major havoc to an executing program in certain situations.
  • They built a game called King of SOL as a demonstration. At a high level, whoever has donated the most SOL wins, and the contract reimburses 95% of the funds to the original king. However, several DoS bugs are lurking in this codebase.
  • In Solana, an account (the place where data is stored) must hold a minimum balance of lamports to stay alive, since storage has a cost. This is used to combat account DoS attacks. Rent exemption is itself an attack vector, though: consider a transfer from one account to another. If the transfer would take the account below the rent-exemption threshold, the transaction will always fail.
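A minimal sketch of this failure mode (the names and the minimum value below are illustrative, not the real Solana runtime):

```python
# Hypothetical model of rent-exemption and executable-account transfer
# rules; the constant is an example value in lamports, not the real one.
RENT_EXEMPT_MINIMUM = 890_880

class Account:
    def __init__(self, lamports, executable=False):
        self.lamports = lamports
        self.executable = executable

def transfer(src, dst, amount):
    # A transfer that strands the source below rent exemption always fails,
    # so an attacker can wedge a program whose logic depends on this path.
    if src.lamports - amount < RENT_EXEMPT_MINIMUM:
        raise ValueError("source would fall below rent-exempt minimum")
    # Executable accounts cannot receive lamports this way (second DoS).
    if dst.executable:
        raise ValueError("executable accounts cannot receive lamports")
    src.lamports -= amount
    dst.lamports += amount
```

If a program's only code path routes funds through such a transfer, either condition turns into a permanent denial of service.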
  • Accounts in Solana have a few properties - readable, writable, and executable. An account that is executable cannot receive SOL via set_lamports, so forcing a transfer to happen this way will also lead to a DoS.
  • Some accounts are silently downgraded from writable to read-only. This happens for reserved system programs/accounts. In Anchor, marking an account as writable is common. By combining both of these, we can create situations where a transfer of lamports will always fail.
  • Overall, this is an interesting article on transferring lamports and the security consequences associated with it. I didn't know all of these!

Post-Mortem: PT Collateral Pricing Incident - 1653

Loopscale    Reference →Posted 10 Months Ago
  • Loopscale is a modular lending protocol deployed on Solana. It recently suffered a $5.7M hack, which affected many of the platform's users. So, what was the bug?
  • In Solana, all programs and accounts to be interacted with must be specified up front. A program's behavior can drastically change if these addresses are not properly checked. In this situation, a cross-program invocation was being made to the RateX vault; however, the RateX vault account was not correctly verified on the call.
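The class of missing check can be sketched like this (all names are invented; the real Loopscale/RateX interfaces differ):

```python
# Hypothetical address-pinning check before a cross-program invocation.
# Without it, a caller can pass a lookalike vault that reports any price.
EXPECTED_RATEX_VAULT = "RateXVau1t11111111111111111111111111111111"

def checked_cpi(vault_address, fetch_price):
    # Pin the CPI target to the known-good address before trusting output.
    if vault_address != EXPECTED_RATEX_VAULT:
        raise ValueError("unverified CPI target")
    return fetch_price()
```

The fix amounts to a one-line comparison, which is part of why these account-validation bugs are so easy to miss in review.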
  • I'm not sure what value was supposed to be returned from the RateX contracts, but it was something important for tracking assets. From reading tweets, it appears that the prices were being manipulated. Of course, if you can specify the incorrect price, you can perform trades at terrible price points to steal money.
  • Otherwise, the program had a good design. The exploit was limited to RateX principal tokens, which meant that no other vaults or lending positions were affected. Market isolation and collateral segregation really helped reduce the impact. In the future, they are adding time-based limits, exposure limits, and manual approval for giant loans, giving the protocol further control. Finally, several updates will be gated by a multisig.
  • Going forward, they will expand their audit coverage. Small changes can have devastating consequences, so to combat this issue, they plan on having all code reviewed before launching. They also plan on launching a bug bounty program. Overall, an interesting report and set of takeaways from a real world hack.

AI Slop Is Polluting Bug Bounty Platforms with Fake Vulnerability Reports - 1652

Sarah Gooding    Reference →Posted 10 Months Ago
  • Bug bounty programs allow security researchers to disclose vulnerabilities so they can be patched. Many of these programs pay money for reported issues. Given that there's money on the line, there's an incentive to chase a payout even when there's no real vulnerability.
  • LLMs are great at generating content. Unfortunately, they can generate content about anything, including bug bounty reports. Security is very contextual, and subtle details can change whether something is exploitable or not. Because of this, incorrect LLM-generated reports are becoming a major issue in the security realm.
  • The problem with these reports is that, at a glance, they seem legitimate. Disproving an issue requires a large amount of context on the codebase and a deep understanding of security. Historically, we have assumed "good faith" research, but this is starting to be abused. The problem is that triaging these issues takes a large amount of time.
  • Some projects do not have the bandwidth to handle these security reports. So, they end up just paying a small bounty to avoid the delay and PR fallout. It's just cheaper to pay for the bug than hire an expert to perform the true analysis.
  • In the case of curl, they receive a large number of LLM-generated reports. The curl team has very technical folks who are able to handle these; they can usually identify fake reports, but it still takes time. If this keeps up, bug bounty programs may add restrictions on the users submitting them.
  • What's the solution? Detectors and verification in my opinion. A few detectors:
    • It's common for these reports to not include reproduction steps, making the vulnerability impossible to reproduce. So, adding a hard requirement on PoCs that run would be useful.
    • It's common for reports to have illegitimate code links. If the code being linked doesn't exist, then it's likely trash.
    • Vulnerability write-ups that are needlessly complex.
    • The distinctive styling of ChatGPT and other LLMs: Markdown with a lot of bullets.
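The detector ideas above could be sketched as a toy scorer (the thresholds and signals are my assumptions, not any platform's actual policy):

```python
# Toy heuristic scorer for likely AI-slop reports, using the signals
# listed above. Real triage would be far more careful than this.
def slop_score(report: str) -> int:
    text = report.lower()
    score = 0
    # No reproduction steps or PoC -> the issue can't be reproduced.
    if "steps to reproduce" not in text and "poc" not in text:
        score += 2
    # LLM-style wall of Markdown bullets.
    if text.count("\n- ") + text.count("\n* ") > 10:
        score += 1
    # Needlessly complex grab-bag of unrelated vulnerability classes.
    buzzwords = ("race condition", "prototype pollution", "memory corruption")
    if sum(w in text for w in buzzwords) >= 2:
        score += 1
    return score
```

A score like this could gate which reports a human triager sees first, rather than auto-rejecting anything outright.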
  • On the other side is verification. Platforms like HackerOne need better account verification. Once an account has been flagged for submitting spam, they need to ban the account, the IP, and the email going forward - sort of like cheat-detection repercussions on chess websites. Eventually, the beg-bounty crowd would likely stop reporting things altogether.
  • This is a hard problem to solve but it'll eventually be worked out!

XNU VM_BEHAVIOR_ZERO_WIRED_PAGES behavior allows writing to read-only pages - 1651

Ian Beer    Reference →Posted 10 Months Ago
  • The proof of concept starts by writing a bunch of A's to a root-owned, read-only file. Next, they execute a C program that uses mlock on that file. The file is still read-only and owned by root, but now contains a bunch of 0's.
  • VMEs define the privileges a particular map has over a region's vm_object. The behavior VM_BEHAVIOR_ZERO_WIRED_PAGES can be set by a task on any vm_entry; however, there are no permission checks on this, so the zero_wired_pages flag gets set regardless. In vm_map_delete, the unwire path looks up the page of the underlying object and zeros that portion of it out. Again, no permissions are checked.
  • The next challenge is getting the page wired while mapped to something interesting. mlock is a wrapper around mach_vm_wire_kernel, which contains the ability to do writes. Using this, it's possible to mmap an interesting part of a page, mark it with VM_BEHAVIOR_ZERO_WIRED_PAGES, mlock the page, and have parts of the data zeroed out.
  • A pretty classic, yet complicated to exploit, permissions issue. Neat!

Bug Disclosure: Reentrancy Lock Bypass - 1650

Bunni    Reference →Posted 10 Months Ago
  • The contract BunniHub is a pool contract. A vulnerability allowed calling back into this code, via a user-defined hook, while the pool was in an unintended state - classic reentrancy. Inevitably, this would have led to lost user funds. Pashov Audit Group found this reentrancy vulnerability during their audit.
  • To mitigate the original issue, they introduced a pair of functions to prevent reentrancy: lockForRebalance and unlockForRebalance. The lock is taken before a rebalance order and released once the order has executed. Crucially, these locks are per contract, not per pool.
  • A Bunni pool can have a hook contract, registered by anyone, that triggers this functionality. Since the locks are global, an attacker can create their own hook contract, call it, and disable the reentrancy lock themselves. Manipulation then works just as before and leads to loss of funds. Cyfrin, a web3 auditing company, found this bypass.
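The bypass is easy to see in a sketch (invented names, not the actual Solidity): a single contract-wide flag means any pool's hook can clear the lock for everyone.

```python
# Hypothetical model of the contract-global rebalance lock described above.
class BunniHubSketch:
    def __init__(self):
        self.rebalance_locked = False   # one flag shared by ALL pools

    def lock_for_rebalance(self):       # callable from any pool's hook
        self.rebalance_locked = True

    def unlock_for_rebalance(self):     # ...and so is the unlock
        self.rebalance_locked = False

    def sensitive_action(self):
        # Guarded path that must not run mid-rebalance.
        if self.rebalance_locked:
            raise RuntimeError("reentrancy blocked")
        return "state manipulated"
```

Because `unlock_for_rebalance` isn't scoped to the pool that took the lock, an attacker's hook can release a victim pool's lock mid-rebalance and reach the guarded path. A per-pool lock (or restricting who may lock/unlock, as the hotfix did) closes the hole.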
  • To patch the issue immediately, they created a whitelist on who is able to execute rebalancing actions. The attack was prevented, theoretically. To be cautious, they asked Cyfrin if any other reentrancy attacks were still possible and they did more research into it. They found a similar vulnerability when interacting with a malicious ERC-4626 vault that broke the accounting of the pool to withdraw more assets than they should be able to. To resolve this new issue, all functionality was paused until a proper fix could be made.
  • The contracts were audited by Pashov Audit Group and Trail of Bits, and they are currently being audited by Cyfrin as part of the Uniswap Foundation Security Fund. Patching vulnerabilities is hard; patches need to be taken just as seriously as the original code. Otherwise, you'll end up with more issues like this.

Hash What You Mean - 1649

Giuseppe Cocomazzi    Reference →Posted 10 Months Ago
  • The Horton Principle is "mean what you sign and sign what you mean". The reference comes from a Dr. Seuss book but has a profound impact: when signing or hashing data, no changes to it should be possible. Although this sounds simple, even a single item is difficult to handle correctly; with multiple items, it becomes very complex.
  • For multiple items, it's common to serialize the data into a string and hash that. But this just moves the problem to the serialization. Using something like ASN.1, JSON, or Protobuf mostly solves the problem: these make the representation unambiguous, which isn't so simple to do by hand.
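A tiny example of why naive serialization violates the principle, and how length-prefixing (the kind of disambiguation ASN.1/Protobuf-style encodings give you) fixes it:

```python
import hashlib

# Plain concatenation is ambiguous: ("ab","c") and ("a","bc") collide.
def bad_hash(items):
    return hashlib.sha256("".join(items).encode()).hexdigest()

# Length-prefix each field so the item boundaries are part of the hash.
def good_hash(items):
    blob = b"".join(len(s).to_bytes(4, "big") + s.encode() for s in items)
    return hashlib.sha256(blob).hexdigest()
```

With `bad_hash`, a signature over ("ab","c") is also a valid signature over ("a","bc") - exactly the kind of malleability the Horton Principle warns about.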
  • When doing this for a Merkle tree, a load of problems come into effect. The depth, the data in the nodes, and the levels all matter for the tree. In a binary Merkle tree, how we hash the leaves matters. What happens if the number of leaves isn't a perfect power of 2? Do we pad from the left or the right? All of these matter for the Horton Principle.
  • Domain separation is important - prefix the hashed data with distinguishing bytes, for instance a 0x00 tag for a leaf entry. In CometBFT, the hashed byte slices are ordered and order-preserving. Ethereum does something different: all unpopulated leaves are simply empty, which means the empty hash is used a lot, making the tree perfectly balanced in theory. Both approaches satisfy the Horton Principle.
  • The OpenZeppelin library takes an input of L leaves. It pairs from the tail and uses a double hash of the concatenation to disambiguate internal nodes from single leaves. A major deviation is that each pair is sorted prior to hashing, which means the tree does not preserve the order of the input sequence. The documentation says the inputs are assumed to be ordered.
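A simplified sketch of this sorted-pair construction (sha256 standing in for keccak256, pairing from the head rather than the tail) shows how sorting discards input order:

```python
import hashlib

def h(b: bytes) -> bytes:          # stand-in for keccak256
    return hashlib.sha256(b).digest()

# Leaves are double-hashed to domain-separate them from internal nodes.
def leaf(data: bytes) -> bytes:
    return h(h(data))

# Each sibling pair is sorted before hashing (the OpenZeppelin deviation).
def node(a: bytes, b: bytes) -> bytes:
    lo, hi = sorted((a, b))
    return h(lo + hi)

def root(items):
    level = [leaf(d) for d in items]
    while len(level) > 1:
        nxt = [node(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd leftover is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Because `node` sorts its inputs, swapping two sibling leaves yields the same root - which is fine when the verifier enforces ordering elsewhere, and disastrous when (as in the Omni port) nothing does.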
  • A cross-chain protocol called Omni Network ported the OpenZeppelin version to Golang for their own use. Naturally, they overlooked that the pair sorting is optional and always sort. The sorting is done by the blockchain for transmission but isn't actually required by the verifier. Great.
  • The Omni Network Merkle tree data contains a LogIndex, a monotonically increasing value. However, this value is NOT part of the data used for hashing. Combining the optional leaf sorting with the omission of this value from the hash means the ordering of messages is not preserved. Practically speaking, differently ordered xchain.Msg sequences lead to the same Merkle root.
  • Even crazier, the LogIndex can be changed too: {"Ping", 1} and {"Pong", 2} is just as valid as {"Pong", 1} and {"Ping", 2}. The ordering is not preserved, as we can see. The author includes a PoC.
  • How did this sneak through the cracks of the project? The author has a great quote on this: "As seen too often with cryptographic constructions, too many moving parts and options become the ingredients for the proverbial 'hazardous material'." With the port copied from OpenZeppelin and a strange assumption baked into the library, it was bound to lead to an issue. Great write-up!

Google Cloud Account Takeover via URL Parsing Confusion - 1648

Mohamed Benchikh    Reference →Posted 10 Months Ago
  • On the Google Cloud CLI gcloud, the authentication process uses OAuth with a server that is briefly set up on the computer at localhost:50000. This means that http://localhost is actually a valid redirect_uri for OAuth! With a browser parsing the redirect and a backend parser validating it, this becomes the perfect place to find an account takeover via an evil redirect.
  • At first, this didn't make sense to me: most of the time, these are static string checks against an allowlist, giving very little wiggle room. The author tried 127.0.0.1 and noticed that it worked, which meant the backend was parsing the URL rather than doing a static string comparison. With two parsers in play, it's time to find a difference!
  • They wrote a Python script that performed a large number of URL mutations - encoding tricks, private IPs, weird schemes, IPv6, etc. After running the fuzzing script for a while, they found a match:
    http://[0:0:0:0:0:ffff:128.168.1.0]@[0:0:0:0:0:ffff:127.168.1.0]@attacker.com/
    
  • This URL is super weird. The @ symbol separates the userinfo (username and password) from the host in a URL, and it's actually invalid to have two of them. Chrome mitigates this edge case by encoding non-reserved characters and earlier occurrences of reserved characters. The backend parser, however, likely ignored the attacker.com part of the URL and grabbed its data from fixed positions. Neat!
  • What's interesting is that this only happened when using IPv6. When using IPv4, this didn't work. A working redirect_uri is as follows: http://[::1]@[::1]@attacker.com. The server would parse the second [::1] as the server information and skip the attacker.com entirely. However, Chrome would parse attacker.com as the host.
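The parser differential can be reproduced in miniature (the naive "backend" below is invented to mirror the behavior described in the post; Python's urlsplit happens to agree with the browser here, taking the host after the last '@'):

```python
from urllib.parse import urlsplit

url = "http://[::1]@[::1]@attacker.com/callback"

# Hypothetical backend validator: treats everything up to the FIRST '@'
# as userinfo, then reads the host up to the next '@', ignoring the rest.
def naive_backend_host(u: str) -> str:
    netloc = u.split("://", 1)[1].split("/", 1)[0]
    after_userinfo = netloc.split("@", 1)[1]
    return after_userinfo.split("@", 1)[0]

backend_host = naive_backend_host(url)   # "[::1]"      -> looks local, allowed
browser_host = urlsplit(url).hostname    # "attacker.com" -> where the user goes
```

The validator sees a loopback redirect_uri while the browser actually sends the OAuth response to attacker.com - the whole bug in four lines.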
  • Mixing this with OAuth gave the author an arbitrary redirect to steal the OAuth code and log into the user's account. A pretty rad bug with good visuals and background.

Exploiting the Synology DiskStation with Null-byte Writes - 1647

Jack Dates - Ret2    Reference →Posted 10 Months Ago
  • Pwn2Own is a hacking competition with fairly large prizes. In 2023, no compromises of the Synology DiskStation had been found, so they decided to add a few non-default but first-party packages to the scope. Packages are add-ons that can be installed on the device.
  • One of the services they analyzed was the replication service, synobtrfsreplicad, which listens on port 5566. It is very highly privileged and easy to communicate with from the outside world. The service is a forking server that continually accepts connections from remote clients.
  • Each request carries a cmd, sequence, length, and a data section. If the length of the data is larger than 0x10000, an error should be returned by the cmd-receiving function. However, there's a bad error-handling bug here: the code returns the error value from a previous function call instead of setting a real error, so the error is ignored!
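The error-handling pattern can be sketched like this (invented names; the real code is C inside synobtrfsreplicad):

```python
# Hypothetical model of the bug: the length check fires, but the function
# returns a stale status from an earlier call, so the caller sees success
# and the oversized length flows into the null-byte write.
MAX_DATA = 0x10000
SUCCESS, E_TOOBIG = 0, -1

def recv_cmd_buggy(length: int) -> int:
    ret = SUCCESS                # status left over from a previous call
    if length > MAX_DATA:
        pass                     # meant to set ret = E_TOOBIG; never does
    return ret                   # always SUCCESS: the check is dead code

def recv_cmd_fixed(length: int) -> int:
    if length > MAX_DATA:
        return E_TOOBIG          # the actual patch: return a real error
    return SUCCESS
```

Compilers and linters rarely flag this shape, since `ret` is initialized and returned on every path - only the *meaning* of the value is wrong.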
  • Directly after the failed length check is a null-byte write into a buffer indexed by the packet's length. This creates a relative write past the buffer, but only of a null byte. It really does look like a CTF challenge! The device ships with all mitigations enabled, so exploitation was going to be tricky.
  • To break ASLR, they abused two key properties: this is a fork server that reuses the same address space for each child, and a crash in one child has no effect on the rest of the service. Instead of brute-forcing it straight up, they do some crazy pointer shenanigans to build useful oracles for leaking offsets. This part is worth a read :)
  • Using the primitive from before, they corrupt a heap pointer in the .bss section. Since they control this address and can force it to be freed, they can corrupt the chunk to perform tcache poisoning. Now they can add arbitrary addresses to the tcache, giving them an arbitrary write primitive.
  • With the arbitrary write, they overwrote the GOT entry for delete with system. When delete is called with a controlled pointer, it executes their bash command, giving them RCE on the box! The patch was simply to return 1 instead of 0. Nice!