Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

codehash check in factory contracts does not account for non-empty addresses- 1305

MiloTruck - Wildcat C4    Reference →Posted 2 Years Ago
  • In the Wildcat protocol, the WildcatMarketControllerFactory is used for deploying contracts. The factory determines whether a contract has already been deployed to an address by checking if the codehash is bytes32(0). At first glance, this seems reasonable, but it has a subtle flaw.
  • Addresses only return 0x0 as the codehash if they are COMPLETELY empty - no code, no balance, no nonce. If the account exists but has no code (aka funds exist there), then the hash of empty code, keccak256(""), is returned instead, which is non-zero.
  • If anyone transfers 1 wei to that address, then the protocol is harmed: deployments to it will simply NOT work. MiloTruck seemed very proud of this finding, and I agree, it's pretty sick! I didn't know the difference between an empty account and a non-existent account for a codehash.
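The distinction can be sketched in a few lines of Python (the two hash constants are the real EVM values; the mock state and the `has_been_deployed` helper are my own illustration, not Wildcat's code):

```python
# Sketch of EVM codehash semantics behind the bug. ZERO_HASH and
# EMPTY_CODE_HASH are real EVM constants; the dict-based "state"
# and has_been_deployed() are illustrative stand-ins.
ZERO_HASH = "0x" + "00" * 32                  # non-existent account
EMPTY_CODE_HASH = (                           # keccak256("") for an existing,
    "0xc5d2460186f7233c927e7db2dcc703c0"      # code-less account
    "e500b653ca82273b7bfad8045d85a470"
)

def codehash(state: dict, addr: str) -> str:
    if addr not in state:                     # account never touched
        return ZERO_HASH
    return state[addr].get("codehash", EMPTY_CODE_HASH)

def has_been_deployed(state: dict, addr: str) -> bool:
    # The flawed factory-style check: treats "no code" as codehash == 0.
    return codehash(state, addr) != ZERO_HASH

state = {}
assert not has_been_deployed(state, "0xabc")  # fresh address: deploy allowed
state["0xabc"] = {"balance": 1}               # attacker sends 1 wei
assert has_been_deployed(state, "0xabc")      # deployment now wrongly blocked
```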

Users using EIP-1271 for signatures can be forced into loans as the wrong party without consent- 1304

MiloTruck    Reference →Posted 2 Years Ago
  • When a user is creating a loan, they provide the signature of the opposite party. If the sender of the call is the borrower, then the signature provided must be the lender's. All of these signatures are validated within the protocol.
  • The side of the signature is not validated though. As a result, an attacker can force a signature to be used for the wrong role. In particular, a signature intended for borrowing can be used to force a user to become a lender.
  • Cryptography is great! However, the use case is just as important as the math.
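A toy sketch of the underlying fix (all names here are mine, and HMAC stands in for the real EIP-1271/ECDSA signatures): binding the intended role into the signed message means a signature produced for one side cannot be replayed for the other.

```python
import hashlib
import hmac

def sign(key: bytes, loan_terms: str, role: str) -> bytes:
    # Bind the intended role into the digest, so a signature made as
    # "borrower" cannot be replayed to place the signer as "lender".
    msg = f"{loan_terms}|role={role}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, loan_terms: str, role: str, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, loan_terms, role), sig)

key = b"victim-signing-key"
sig = sign(key, "loan#1", "borrower")
assert verify(key, "loan#1", "borrower", sig)       # intended use: OK
assert not verify(key, "loan#1", "lender", sig)     # replay as wrong party fails
```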

A year of Competitive Audits - 1303

MiloTruck    Reference →Posted 2 Years Ago
  • In 2023, MiloTruck made the most money on Immunefi at 172K. In this post, they go through the year and what they learned. I'll be going through some of their takeaways, as these provide the most value.
  • The first takeaway was that math is complicated. They learned this by finding the only medium in a contest via an unsafe cast. If it's complicated to you, it's likely complicated to the developer. If it's complicated to the developer, there are likely to be bugs.
  • Later down the road, they did 5 audits in one month, spending only a small amount of time on each. This was a mistake, since many bugs come from a deep, deep understanding of the code. During this month, they learned that contests should be chosen according to your skill level. Additionally, simple and small contests should be avoided: there aren't many bugs, and any that exist will be duped to hell.
  • The next big audit was Chainlink CCIP. This had a payout of $185K for the H/M pot. For this, they went all in. They read through documentation, similar protocol audits and talks before the contest. By doing this, they understood the bugs to look for and gained a deep understanding of the protocol quite quickly. This led to 3/3 highs with an 8th place finish.
  • During an audit where they found 8/8 mediums, they only reported 6. They didn't report 2 of them because they considered the issues acceptable risks. They learned to always ask the protocol team whether the behavior you described is intended. Worst case scenario, you report it as well.
  • In an audit of the Wildcat protocol, they learned to always read the documentation and whitepaper of the protocol. This allows you to understand the expected use cases of a protocol, which may not always be obvious.
  • At the end, they mention why they were able to achieve this: being super competitive and holding themselves to a high standard. By reviewing previous misses, they were able to adapt their auditing methodology to not miss similar bugs in the future. Second, they wrote PoCs for every bug where possible and wrote very, very good reports, which earned them nice bonuses. Additionally, really understanding protocols in depth, including the hard edge cases, is what they seem to be good at.
  • The final section might be the most interesting: Are Contests Worth It? You can't earn millions from C4 with how much competition there is. So, the top talent is moving away to other locations. Sherlock tried fixing this with a Lead Senior Watson, which takes a fixed amount of the pool. Private audits, Immunefi, auditing firms and Spearbit are much more lucrative.
  • So, should you do contests? It's a great place to learn and get opportunities. From this, they got offers from Trust Security and Spearbit. Overall, an awesome post on their learnings and perspective from a year of auditing.

All of Beanstalk's BEAN could be drained by a hacker due to a bug in convertFacet and LibWellConvert- 1302

Beanstalk    Reference →Posted 2 Years Ago
  • Beanstalk is a stablecoin protocol. In order to peg the price of BEAN, the convert() function on the ConvertFacet is used. When passing in token addresses for the stablecoin pool, there was no validation that the well address is valid.
  • The function convert() takes in three parameters: a convertData structure, an array of ints called stems and an array of ints called amounts. When providing the lists of stems and amounts, there is no validation that these are NOT zero length. I imagine that a loop contained some validations but didn't consider this case.
  • If the convertData has a type of WELL_LP_TO_BEANS, it contains a well address. When using this, the well address was not verified to be an allowlisted value. This allows an expected and trusted contract to be spoofed with arbitrary values.
  • Later on, a call to _wellRemoveLiquidityTowardsPeg is made. This calls removeLiquidityOneToken on the well, which can return extremely small values. So, the conversion is credited as a BEAN deposit without withdrawing any real tokens. Eventually, these can be claimed by an attacker through a different function call.
  • There are two issues within the code. The first is the lack of validation on zero-length arrays, which leads to several values being zero during the conversion. The second is the missing validation of the trusted well address. Together, these lead to a horrible issue where 22M could have been stolen.
  • Overall, a fairly simple attack. The zero-length array trick was fascinating to me though. It bypassed some of the math and sanity checks, which was super cool to see.
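The zero-length trick generalizes nicely; a minimal Python sketch (the function and checks are mine, not Beanstalk's actual code): per-element validation inside a loop silently passes when the arrays are empty, because the loop body never runs.

```python
# Illustrative only: per-item sanity checks live inside the loop,
# so empty input arrays mean no check ever fires.
def convert(stems: list, amounts: list) -> int:
    assert len(stems) == len(amounts)
    total = 0
    for stem, amount in zip(stems, amounts):
        assert amount > 0, "zero amount"   # validated per element...
        total += amount
    # ...but with stems == amounts == [], the loop never executes and
    # total stays 0: the "conversion" proceeds with zero real input.
    return total

assert convert([], []) == 0        # no validation ever fired
assert convert([1], [5]) == 5      # normal path still works
```

A length check (`require(stems.length > 0)`) before the loop is the usual fix for this class of bug.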

Oracle Multithreading- 1301

Oracle    Reference →Posted 2 Years Ago
  • When writing code that needs to be high performance with multithreading, data may need to be read or written across various threads. If you do not do this securely, then you end up with race conditions, putting the system into weird states. So, these are strategies on how to do it securely.
  • There are three main types of data: stack, global/static and heap. Stacks are per thread, while global/static and heap data are per process. So, locking needs to be performed on shared global/static and heap information.
  • The first method is the simplest: mutual exclusion locks. With this method, there is a variable associated with the rest of the data that determines if the information can be accessed. If the variable is 0, you can access the data, but you must acquire the mutex by setting it to 1. Once you're done, release it by setting it to 0. If you come across the lock at 1, the thread sleeps until it's 0 again.
  • The second pattern is readers/writers locks. This is used in cases where the data needs multiple simultaneous readers with only infrequent writes. To obtain an initial lock, call the function rw_enter. Many threads can hold this for a read at once.
  • If a thread needs to write, the lock can be upgraded with a call to rw_tryupgrade. Once done with the write, a call to rw_downgrade can be made to move this back to a reader lock. Finally, a call to rw_exit can be used to drop the lock entirely. Although this isn't explicitly stated, I'm guessing that the writer lock waits for all reads to finish and prevents any future reads from occurring.
  • Semaphores are the final method. A mutex is a binary semaphore; a general semaphore contains an integer with the maximum number of items that can be shared. These have two operations: wait and signal.
  • Some datatypes are atomic - meaning that they are updated within a single instruction or operation. This is useful with small datatypes, like integers, across threads.
  • When using locks, there are several things that can go wrong that lead to panics that are not intuitive. First, re-entering a mutex from the same thread will cause a kernel panic. Second, releasing a mutex that the thread doesn't hold will also cause a panic. Of course, a common one is forgetting to release the mutex, which leads to deadlocks as well.
  • From a security perspective, the mutex or locking operation needs to be used consistently. If the programmer forgets to use the locking mechanism and accesses the data directly, then threads can still interfere with each other. Denial of service can occur when bad code is written as well, like multi-enters and so forth.
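The mutex and semaphore patterns above map directly onto Python's threading primitives; a minimal sketch (the worker/counter setup is mine, not from the article):

```python
import threading

counter = 0
lock = threading.Lock()              # the mutual exclusion lock

def worker():
    global counter
    for _ in range(100_000):
        with lock:                   # acquire; sleeps if another thread holds it
            counter += 1             # critical section on shared data
                                     # released automatically on block exit

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 400_000            # with the lock, no lost updates

# A counting semaphore caps how many threads share a resource at once.
sem = threading.Semaphore(3)         # at most 3 concurrent holders
with sem:                            # wait()
    pass                             # ...use the shared resource...
                                     # signal() happens on exit
```

Without the `with lock:` line, `counter += 1` is a read-modify-write and the final count can come up short, which is exactly the race condition the article warns about.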

CVE-2023-22524: RCE Vulnerability in Atlassian Companion for macOS- 1300

Ron Masas    Reference →Posted 2 Years Ago
  • Recently, some researchers found a vulnerability within the Atlassian Companion App. The issue was that the program has a blocklist of about 350 file types. The author of the original post found that .class files were not in the blocklist, giving them RCE.
  • Why the blocklist? Atlassian does not add the quarantine attribute to files downloaded because it would make the user experience worse. However, a blocklist is commonly a bad idea, as an attacker just needs to find one file type that works.
  • The file type .fileloc is similar to a symbolic link: it acts as a shortcut on macOS and accepts a full path to another file on the system. Since this was not a blocked file type, it was a good candidate for exploitation. While reverse engineering the application, they noticed that files in the blocklist were still downloaded but inaccessible. Weird!
  • The name of this directory was random, and they needed the macOS username as well. So, they found a websocket API that would return the folder UUID and another that leaked the username in an error message. With all of this, we have a full chain.
    1. Make a websocket API call to leak the UUID.
    2. Make a websocket call to leak the macOS username.
    3. Download a malicious file that should be blocked. It will be stored on the system but we now know the path.
    4. Download the .fileloc file, which points to the absolute path of the malicious file above.
    5. Pop a shell!
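For reference, a .fileloc is just a small XML plist whose URL key holds an absolute file path; the step-4 payload would look roughly like this (the path shown is illustrative, not from the post):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>URL</key>
    <string>file:///Users/victim/known-download-dir/payload.class</string>
</dict>
</plist>
```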

Arbitrary Address Spoofing Attack: ERC2771Context Multicall Public Disclosure- 1299

Open Zeppelin    Reference →Posted 2 Years Ago
  • Sometimes, security bugs do not come from an individual issue but from combining technologies together without considering the implications. There are two separate contracts in this story: Multicall and ERC2771.
  • Multicall is a method of calling multiple functions within a contract at a given time. This is useful because it saves on gas when performing multiple calls at once.
  • ERC-2771 is a standard for meta-transactions. It standardizes how the caller address should be resolved for calls made by a trusted relayer when the user cannot submit the transaction themselves. In the ERC2771Context implementation, this is done by overriding the _msgSender() and _msgData() functions.
  • So, what's the issue? When these two contracts are combined, the sender address is spoofable. By going through the trusted forwarder with a multicall(), an attacker can make the resolved address be that of a victim.
  • By making the resolved address the victim's, we can act on their behalf. I love bugs that are not a vulnerability in a single thing but a result of joining things together.
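A rough Python model of the interaction (all names are mine, modeling the Solidity behavior): the forwarder appends the real sender to the outer calldata, but multicall re-dispatches each inner calldata blob verbatim, so the attacker controls the last 20 bytes that each subcall's sender resolution reads.

```python
TRUSTED_FORWARDER = b"\x0f" * 20

def msg_sender(caller: bytes, calldata: bytes) -> bytes:
    # ERC2771Context-style resolution: if the call comes from the
    # trusted forwarder, the "real" sender is the calldata suffix.
    if caller == TRUSTED_FORWARDER:
        return calldata[-20:]
    return caller

def multicall(caller: bytes, subcalls: list) -> list:
    # The flaw: each subcall's bytes are attacker-supplied, yet the
    # sender is re-derived from them inside each delegatecalled subcall.
    return [msg_sender(caller, data) for data in subcalls]

victim = b"\x01" * 20
attacker_subcall = b"transferEverything()" + victim   # forged 20-byte suffix
[resolved] = multicall(TRUSTED_FORWARDER, [attacker_subcall])
assert resolved == victim   # the contract now believes the victim is calling
```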

Twitter XSS + CSRF Issue- 1298

Chaofan Shou    Reference →Posted 2 Years Ago
  • Recently, somebody posted about an XSS on the analytics.twitter.com domain. At first glance, this looks to be nothing more than an alert popper since the cookies are HTTPOnly, there are CSRF tokens on Twitter and the SameSite cookie flag is set to strict.
  • Some APIs on api.twitter.com will accept cookies. So, this solves problem 1. Reading the JavaScript reveals that the CSRF token is just a hash of the cookie csrf_id, which is NOT HTTPOnly. So, we can read the cookie and forge the token as well.
  • SameSite doesn't kill everything! Think about the settings of cookies and the protections in place, as doing security across a large list of sub-domains is difficult. It must be well thought out to ensure that compromise of one subdomain doesn't affect the rest of the website.
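A sketch of why a derivable token is worthless once script can read the cookie (the specific hash function is an assumption; the post only says the token is a hash of csrf_id):

```python
import hashlib

def server_expected_token(csrf_id: str) -> str:
    # Server-side check: the submitted token must equal a hash of the
    # csrf_id cookie. No server-side secret is mixed in.
    return hashlib.sha256(csrf_id.encode()).hexdigest()

def attacker_xss(readable_cookies: dict) -> str:
    # XSS payload: csrf_id is NOT HttpOnly, so document.cookie exposes
    # it, and the derivation is public in the site's JavaScript.
    return hashlib.sha256(readable_cookies["csrf_id"].encode()).hexdigest()

cookies = {"csrf_id": "abc123"}
assert attacker_xss(cookies) == server_expected_token("abc123")
```

Mixing a server-side secret into the token (or keeping the input cookie HttpOnly) breaks this forgery.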

Finding a Critical Vulnerability in Astar- 1297

Zellic    Reference →Posted 2 Years Ago
  • Polkadot is a multi-chain environment that uses a lot of cross-chain communication. Each specialized blockchain is known as a parachain. Astar, the focus of this post, is a Polkadot parachain which supports both WASM and the EVM. Parachains like Astar are written in Rust with a framework called Substrate, which has many different modules to choose from. For EVM support, Frontier is used.
  • Frontier allows users to add their own precompiled contracts as well. One of these is called assets-erc20, which allows developers to deploy native assets. It adheres to the ERC20 standard, where the precompile address has its top four bytes set to 0xFFFFFFFF and the asset ID in the bottom part of the address.
  • Slots in Ethereum are 32 bytes or 256 bits in size. The amount in the precompile contract was stored as a u128. Why is this bad? Integer truncation is absolutely horrifying when handling assets like this.
  • When calling transfer() on the precompiled contract, the integer would be truncated to 128 bits. So, if an attacker passed in type(uint128).max + 1, the value being transferred would be zero. However, the calling smart contract assumed that the full transfer succeeded, resulting in its accounting tracking that many tokens! This allows for free transfers.
  • This bug is awful. However, in practice, Uniswap and PancakeSwap pools were not vulnerable, since they do an actual balance check. They found a project called Kagla Finance that did not perform balance checks and assumed that the transfer worked. After successfully setting up a test environment, they used the bug to exploit the contract for 260K. Neat! This vulnerability existed in another parachain as well, but they couldn't find a good way to exploit it.
  • This bug was very similar to the pwning.eth bug in Frontier in the past, which I was thinking about while reading it. So, how did this pop up again? The Frontier EVM has a fundamental flaw: Rust doesn't support u256 natively, forcing developers to truncate without special focus on this.
  • Overall, a very interesting post on an integer overflow vulnerability. I didn't know much about Substrate based chains but this contains a good background as well.
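The truncation itself is tiny; a Python model of the lossy cast (Python ints are unbounded, so the masking here stands in for the Rust `as u128` behavior):

```python
U128_MAX = (1 << 128) - 1

def truncate_to_u128(amount: int) -> int:
    # Models the lossy cast from the EVM's 256-bit word into the
    # u128 used by the precompile's amount field.
    return amount & U128_MAX

assert truncate_to_u128(U128_MAX) == U128_MAX
assert truncate_to_u128(U128_MAX + 1) == 0   # type(uint128).max + 1 -> 0
# A caller that only checks "transfer didn't revert" now believes an
# astronomically large transfer succeeded while 0 tokens actually moved.
```

This is also why the balance check saved Uniswap-style pools: comparing balances before and after catches the zero-value transfer regardless of what the cast did.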

That's FAR-out, Man- 1296

Dataflow Security    Reference →Posted 2 Years Ago
  • This report is about an information leak that was discovered by accident. In some cases, userland processes on an XNU system could crash with a kernel pointer in the FAR register. This was a great information leak that could be done over and over again. However, the cause of the bug is very complicated.
  • Exceptions happen when the userland program has done something illegal, such as access unmapped memory. On arm64, the Vector Base Address Register holds a table where the kernel should jump to. There are many different exception types that are all handled by specific functions.
  • The addresses we see in GDB aren't real - they're virtual. The CPUs create an abstraction layer between the physical memory and the virtual memory using page tables. This allows for the reuse of the same page and separate permission settings on that.
  • When a fault occurs, the address will be put into the FAR_ELn register for some exception types. However, if the register is not used on the exception, it's also NOT cleared. This creates some ambiguity on what to do with it.
  • Within XNU, the entire core state is copied into the thread's data structure. This is done in order to optimize the hot code, as opposed to only copying things in conditionally. Finally, we have enough to understand the bug!
  • Consider the following scenarios:
    1. A write triggers a fault on an unmapped page, resulting in a data abort exception.
    2. The address of the data is copied to FAR_EL1.
    3. The exception is handled by XNU.
    4. The core switches to executing in userland once again.
    5. Another exception occurs, such as a breakpoint debug instruction.
    6. FAR_EL1 is NOT updated because of the specific exception type but other data is.
    7. XNU copies the core's state.
  • Using the scenario above, steps 6 and 7 result in stale data being copied in. This is a typical uninitialized memory issue. This isn't just a standard memory leak though! The leaked value is a pointer to thread information that can be used to query the crashing state of a task. Since this uses the wrong task, we can leak information from various other tasks and leak kernel pointers. Sick!
  • To make this more reliable, they found that forcing crashes at a specific memory offset within an unmapped page helped. An additional technique was relying on the object having a pre-known size, which would leak a bunch of information after that.
  • The bug is fascinating. I do not fully understand where the information is truly leaked, but I enjoyed it regardless. Sometimes, observing strange behavior can lead you to crazy bugs.
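The steps above can be condensed into a toy simulation (all structure and names are mine; this only models the "not updated, not cleared, copied anyway" pattern, not real XNU code):

```python
# Toy model of the stale-FAR_EL1 leak: hardware only writes FAR_EL1 for
# some exception classes, never clears it, and the kernel blanket-copies
# the whole core state into the thread on every exception.
FAR_UPDATING = {"data_abort"}        # exception classes that fill FAR_EL1

class Core:
    def __init__(self):
        self.far_el1 = 0

    def take_exception(self, kind: str, fault_addr: int = 0) -> dict:
        if kind in FAR_UPDATING:
            self.far_el1 = fault_addr        # steps 1-2: hardware fills FAR
        # ...other classes leave FAR_EL1 untouched (it is NOT cleared)...
        return self.save_state()             # step 7: blanket state copy

    def save_state(self) -> dict:
        # Unconditional copy of the core state, FAR_EL1 included.
        return {"far": self.far_el1}

core = Core()
core.take_exception("data_abort", fault_addr=0xFFFFDEADBEEF)  # steps 1-3
state = core.take_exception("breakpoint")                     # steps 5-7
assert state["far"] == 0xFFFFDEADBEEF   # stale value exposed with the crash
```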