Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Exploiting null-dereferences in the Linux kernel - 1083

Seth Jenkins - Project Zero (P0)    Reference →Posted 3 Years Ago
  • Null dereferences are commonly thought of as unexploitable bugs. Sure, they're a denial of service, but not much else. Well, in some situations, we can make them more. In Linux, when the kernel detects a problem like a bad dereference, it attempts to recover; this is called an oops. The oops recovery path does NOT clean up the existing state correctly. Most of the time, it simply leaves the state alone and kills the entire task.
  • In C, the main way to track the number of open references to an object is a counter known as a reference counter (refcount). When a new reference is taken, the counter is incremented; when one is dropped, it's decremented. If the refcount hits 0, the object is freed, since nothing references it anymore.
  • Leaving the state alone sounds like it would be fine. In fact, it's very scary to trigger these recovery paths, since the interrupted code never runs to completion. In the case of a refcount, we could trigger an oops with open references that never get closed. Given enough time, a 32-bit unsigned counter could overflow. Eventually, when it wraps back around to 0, the object will be freed, creating a use-after-free on the object.
  • The author had written a bug report about a null dereference in the kernel. Simply put: when a task's VMA is not mapped at all, the mm_struct's mmap pointer will be null. Accessing it leads to a null dereference in the kernel, triggered simply by reading a process's mapping files. Once the kernel oops occurs, a few things are left in weird states:
    • struct file, mm_users, and the task struct each have a refcount leak.
    • Two locks are never released, resulting in certain operations hanging forever.
  • Because of the locks, only the refcount leak on mm_users has exploitation potential. Even then, it uses the overflow-safe atomic_t type. With some Linux shenanigans that I don't fully understand, this doesn't matter. After avoiding deadlocks and other mm_users-specific problems, the counter can be overflowed.
  • The author estimates that on a server that prints serial logs to the console, this would take over 2 years to exploit. On a Kali Linux box with a GUI, it took 8 days to hit. To turn this into an actual exploit, the author uses the UAF to cause further havoc within the AIO subsystem. They took this to a double-free crash but didn't exploit it any further.
  • The solution to this new attack idea? Introduce an oops limit that causes the kernel to panic after enough oopses occur. However, I doubt this will be picked up by most distributions, since little bugs happen all the time. It would be a bummer if your server crashed because it hit an oops once a month. This is a very clever exploit strategy that I hope to see more of in the future!
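  • To get a feel for the numbers, here's a back-of-the-envelope sketch in Python. The oops rates are assumptions chosen to roughly reproduce the article's estimates (real rates depend on console logging and hardware):

```python
# Each oops leaks one mm_users reference, so wrapping a 32-bit
# counter takes ~2^32 oopses. Rates below are illustrative
# assumptions, not measured values.
leaks_needed = 2**32

for label, oopses_per_sec in [("serial console logging", 60),
                              ("no console logging", 6000)]:
    days = leaks_needed / oopses_per_sec / 86400
    print(f"{label}: ~{days:,.0f} days")
```

At ~60 oopses/second the wrap takes over two years; at ~6000/second it drops to about 8 days, which lines up with the figures in the write-up.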

Exploit Report - Equalizer Finance - 1082

Equalizer Finance    Reference →Posted 3 Years Ago
  • Equalizer Finance is a decentralized market focused on flash loans.
  • The number of liquidity tokens created while minting was calculated based upon the amount of underlying tokens in the contract. The code for calculating this is shown below:
    uint256 denominator = stakedToken.balanceOf(address(this)) 
                         * factor / total_supply();
    
    tokensToMint = amount * factor / denominator 
    
  • The amount of LP tokens owned by the contract can be manipulated using the flash loan functionality. By calling flashloan, the balance of the contract's LP tokens is drastically decreased. Why is this bad? The denominator shown above takes into account the amount of LP tokens the contract owns! The smaller the denominator (i.e., the fewer tokens in the contract), the more tokens get minted.
  • The flow of the attack is as follows:
    1. Take out a flash loan from the vault for almost all of the LP tokens. In the real attack, they took half of the funds (50K USDC).
    2. Add the flash-loaned funds as liquidity to the protocol. This brings the vault back up to 100K USDC, and we get about 50K in LP tokens.
    3. Remove the liquidity. The flawed calculation shown above for minting returns a ridiculous amount of USDC back: 100K USDC.
    4. Pay back the flash loans. A massive profit has been gained.
  • This service had been audited by Certik once, and this vulnerability was missed. To me, this is a fairly straightforward attack, so it's scary that it was missed. Overall, a good write-up and explanation of the bug!
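  • The flawed formula above can be modeled in a few lines of Python (numbers simplified; FACTOR is a stand-in for the contract's fixed-point scaling constant):

```python
# Toy model of the flawed mint math. Draining the vault's staked-token
# balance via the flash loan shrinks the denominator, inflating the
# number of LP tokens minted for the same deposit.
FACTOR = 10**18

def tokens_to_mint(amount, vault_balance, total_supply):
    denominator = vault_balance * FACTOR // total_supply
    return amount * FACTOR // denominator

total_supply = 100_000
print(tokens_to_mint(50_000, 100_000, total_supply))  # fair: 50,000 LP
print(tokens_to_mint(50_000, 50_000, total_supply))   # mid-flash-loan: 100,000 LP
```

Same deposit, double the LP tokens, purely because the vault balance was temporarily halved.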

XNU VM copy-on-write bypass due to incorrect shadow creation logic during unaligned vm_map_copy operations - 1081

Ian Beer - Project Zero (P0)    Reference →Posted 3 Years Ago
  • Copy on Write (CoW) is kernel functionality (in XNU here, just as in Linux) for remapping memory only once it has been written to after a fork. This is a major optimization, since a forked process can share memory with its parent. The copy occurs only once a write to the address space occurs.
  • The function vm_map_copy_overwrite handles large copies with two different routes: unaligned and aligned. With the unaligned route, the extra condition checks whether the mapping is VM_PROT_WRITE. If this is true, it will create a shadow copy of the page only once it is writable.
  • The condition of needs_copy together with VM_PROT_WRITE should NOT be possible, since this code runs later in the chain. However, it can be raced! If we change the page protection after the verification in the upper code path but BEFORE the shadow copy call, we can hit this condition.
  • How can we exploit this?
    1. Start a privileged process.
    2. Fork from the process with readable regions, such as the code sections.
    3. Use the vulnerability above with unaligned mappings to make the address mappings editable.
    4. Edit the code from the privileged process! I would guess this can be used to get root relatively easily.
  • Overall, I love this vulnerability. This is a major memory corruption vulnerability that would NOT have been picked up by Rust, since the page mapping issue is a logic bug. A good explanation of this can be found at DayZeroSec as well.
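  • The heart of the bug is a classic time-of-check to time-of-use (TOCTOU) race. Here is a sequential Python sketch of the window (a toy model, not XNU code; the flag name is a stand-in):

```python
# The "kernel" checks a protection flag, then acts on the result
# later. Flipping the flag inside that window defeats the check.
class Mapping:
    def __init__(self):
        self.writable = False        # stand-in for VM_PROT_WRITE

m = Mapping()

# 1. Verification in the upper code path: the mapping is read-only.
check_passed = not m.writable

# 2. Attacker wins the race: protection changes inside the window.
m.writable = True

# 3. The later code still trusts the stale result from step 1.
print(check_passed and m.writable)   # True: check and reality disagree
```

In the real bug the two steps run in the kernel while a second thread re-protects the mapping, so winning the window takes retries rather than a single sequential flip.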

Trader Joe v2 contest Findings - 1080

Code4rena    Reference →Posted 3 Years Ago
  • Code4rena is a crowdsourced security audit platform. Recently, Trader Joe V2 received a security audit. Joe V2 is a decentralized exchange based on Liquidity Book, an AMM protocol.
  • The first major vulnerability was bad bookkeeping caused by temporary variables. The code snippet is very short:
    uint256 _fromBalance = _balances[_id][_from];
    uint256 _toBalance = _balances[_id][_to];
    
    _balances[_id][_from] = _fromBalance - _amount;
    _balances[_id][_to] = _toBalance + _amount;
    
  • What's the big deal? There's no verification that _to differs from _from. Walking through the values when they are the same, with a starting balance of 10 and an amount of 10:
    1. _fromBalance = 10
    2. _toBalance = 10
    3. _balances[_id][_from] = 10 - 10 = 0
    4. _balances[_id][_to] = 10 + 10 = 20
  • The fourth line does not take the third line into account. Hence, the amount of funds in the account can be infinitely doubled. The second high finding was an incorrect calculation between V1 and V2 pools. This miscalculation could have led to funds being lost by the user, but not stolen; findings 3 and 4 were similar. Weird to me that this is considered a security finding.
  • Trader Joe is similar to most LP pools: a user can call mint() to provide liquidity and get an LP token in return. They can also call burn() to return the LP token and get their underlying assets back, as well as call collectFees() to get the fees they are owed.
  • The fees are calculated using a debt model. In other words, the contract keeps track of the fees the user has already been credited for and subtracts this debt from the amount owed. This ensures the user doesn't collect fees for the entire growth of the token; they should only get fees for the time they have been active.
  • When calling a multitude of functions for token swaps (mint, burn, transfer), the hook _beforeTokenTransfer is called to update the user's debts. The hook only updates the cached fees via _cacheFees() if the address is not 0 and does not equal the address of the pool itself. This was done because the contract is never expected to own its own LP tokens.
  • So, what's the bug? When LP tokens are sent to the pair contract itself, _cacheFees() is never triggered! This means the debt we came in with does NOT get added to the calculations. When calling mint, the recipient of the tokens can be chosen freely. Because of further bad bookkeeping, the funds can be taken out as well.
  • Steps for the exploit:
    1. Transfer funds to the address to mint in the next step.
    2. mint with the recipient being the contract itself.
    3. Call collectFees() with the pair address as the account. It will send itself the fees. Now the token believes the user sent it money, even though the contract paid itself! This happens because it keeps an internal balance and compares that to its actual balance. The difference convinces the protocol that we indeed sent money to it!
    4. Call swap to collect all of the fees.
  • There is an integer overflow within an unchecked block that allows the fee calculation to go rogue. This allows the attacker to steal the entire reserves instead of only the accrued fees. I absolutely love this bug! A super crazy attack that starts by sending your money away.
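  • The self-transfer bug from the first finding is easy to model in Python (the Solidity snippet's logic with dictionaries standing in for the balance mappings):

```python
# Sketch of the self-transfer bug: both balances are cached before
# either write, so when _to == _from the second write clobbers the
# first with a stale value.
balances = {"alice": 10}

def transfer(frm, to, amount):
    from_balance = balances.get(frm, 0)   # cached: 10
    to_balance = balances.get(to, 0)      # same stale 10 if to == frm
    balances[frm] = from_balance - amount # alice -> 0
    balances[to] = to_balance + amount    # alice -> 20, overwriting the 0

transfer("alice", "alice", 10)
print(balances["alice"])  # 20: balance doubled out of thin air
```

The fix is either to reject `to == frm` or to read and write the balances one at a time instead of caching both up front.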

Huawei Secure Monitor Vulnerabilities - 1079

Alexandre Adamski & Maxime Peterlin - Impa Labs    Reference →Posted 3 Years Ago
  • Huawei's security hypervisor leverages the virtualization extensions of the ARMv8-A architecture. Additionally, it makes use of ARM TrustZone, a hardware-enforced separation between the Normal World and the Secure World. Communication between the worlds goes through a Secure Monitor, which runs at EL3, the highest privilege level with TrustZone. The implementation is based on the ARM Trusted Firmware (ATF) project.
  • The Secure Element (SE, aka HISEE) is a peripheral on HiSilicon-equipped devices that is only accessible from the Secure World. Of course, the Normal World needs a way to access it; this is done via the Secure Monitor with SMC commands and the hisee driver.
  • In the Linux kernel, it is important that a user making a syscall doesn't provide addresses that point into the kernel; hence, there is a wrapper function that does all of the sanity checks for us. A similar function exists within the Secure Monitor code. However, there are locations where the addresses are not validated! As a result, an attacker can pass in an addr and size outside of the shared CMA region. We can write either 0xAABBCC55 to addr + 0x4 or a value between 0x0 and 0xC to addr + 0xC.
  • The vulnerable command is CMD_HISEE_FACTORY_CHECK, used in the logging component shown above. They change the address of g_cma_addr to 0xC and the size to 0xAABBCC55. This expands the range of allowed addresses from the CMA region to practically infinite. With the Secure Monitor's address-space verification defeated, we can use other functions to perform even worse operations.
  • Using CMD_HISEE_FACTORY_CHECK a third time writes data from the SE to a destination address of our choice. This primitive can be used to hijack many of the function pointers in the data section; I am assuming there are no write protections in the Secure World. They overwrote one of the SMC handlers to point at 0x14230238, where the gadget BLR X2 lives; it can be used to jump to an arbitrary location via the controlled X2 register. They use this to obtain a temporary arbitrary write primitive with ROP.
  • They build an even better read/write primitive by using the one above to overwrite even more SMC handler pointers. The next step was full code execution, even though the code sections are labeled read-only. This was done by remapping the same physical memory at a different virtual address as RWX instead of read-only. With the double-mapped address, code execution is gained by writing shellcode.
  • There is an issue with shared memory checks as well. There is functionality that simply copies logs. When performing this operation, the input from the user is an address and size. The code is shown below for sanity checks:
    buf_addr >= 0x3c000000 && buf_addr + buf_size - 1 < 0x40000000
    
  • Do you see the mistake? The addition in the second check can wrap around (integer overflow) if buf_size is large enough, so the check passes. This oversight lets us write to addresses relative to our chosen buffer address, with some limitations on which memory we can overwrite. The shared control structure for logging (hlog_header) keeps track of the current position in the shared memory buffer. The structure sits in a location our attack can modify, allowing us to change its fields, including a size and an address value.
  • Using a memset gadget that zeroes bytes, the author overwrites the first byte of a function pointer that sits close to the BLR X2 gadget with 0x00, which lands the pointer right on the gadget. From there, they use the same exploitation strategy as before.
  • Overall, an amazing article! Great diagrams, code snippets, explanations and background on strange functionality. I wish more articles were like this!
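  • The broken bounds check can be reproduced in Python by emulating fixed-width arithmetic (32-bit assumed here for illustration):

```python
# Modeling the flawed check: in C, buf_addr + buf_size - 1 silently
# wraps at the register width, so a huge size passes the upper bound.
M = 2**32

def check(buf_addr, buf_size):
    end = (buf_addr + buf_size - 1) % M     # wraps like the C expression
    return buf_addr >= 0x3c000000 and end < 0x40000000

print(check(0x3c000000, 0x1000))        # True: legitimate buffer
print(check(0x20000000, 0x1000))        # False: below the region, rejected
print(check(0x3c000000, 0xF0000000))    # True: huge size wraps past the check
```

A safe version would check `buf_size` against the region's length rather than computing an end address that can wrap.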

ApolloX Signature Replay Attack - 1078

p0n1    Reference →Posted 3 Years Ago
  • ApolloX Finance is a decentralized derivatives exchange (shorts, longs, etc.) with the token APX. The contract used a signature system for various calls issued by an off-chain variant of the service itself.
  • The signature system was used when a user wanted to make a withdrawal. The off-chain system would send a signature to the user, which could be used to call claim and get their funds. During this, the signature is validated as originating from the backend's signer. There is also a claim history to ensure the signature cannot simply be replayed.
  • There were two other functions using the signing system as well. In particular, there is a separate contract with the same claim function, the only difference being an extra field taken from the signed message. By complete luck, the deadline (a date) in the second contract occupied the same field position as the reservedAmount variable. Since the amount was large enough (and interpreted as the deadline date), the verification would pass.
  • This means the signatures of previously made transactions could be replayed against the other functions, with the funds going to whoever makes the call. An attacker can grab the signed object themselves and perform the action three times instead of once, effectively tripling the amount of money obtainable from a single signature.
  • Overall, an interesting case of reusing cryptographic keys across different areas. This could have been solved by using different keys for different contracts or by having a global system for keeping track of signatures. Good find by the hacker on this one!
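  • The cross-contract replay and the domain-separation fix can be sketched with HMAC standing in for the backend's signatures (all names here are illustrative, not ApolloX code):

```python
import hashlib
import hmac

KEY = b"backend-signing-key"  # stand-in for the backend's private key

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

# Without domain separation, one signature verifies everywhere:
msg = b"claim:user=0xabc:amount=100"
sig = sign(msg)
print(sign(msg) == sig)                      # True: contract A accepts
print(sign(msg) == sig)                      # True: contract B accepts the replay

# Binding the contract identity into the signed payload fixes this:
sig_a = sign(b"contractA|" + msg)
print(sign(b"contractB|" + msg) == sig_a)    # False: replay rejected
```

The same idea underlies EIP-712's domain separator: the verifying contract's identity is part of what gets signed.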

Horton Hears A Who! - 1077

Nadav Hollander     Reference →Posted 3 Years Ago
  • The Horton Principle should always be followed in cryptography: "mean what you sign and sign what you mean". Any time it can be violated, there is a major security problem.
  • The Wyvern protocol is a decentralized digital asset exchange running on Ethereum. This is the standard listings and offers style of market, such as trading NFTs.
  • Wyvern listings contain different parameters used to indicate listing or offer information and authenticate other involved smart contract calls. This is aggregated into a single commitment that the user signs and the contract checks. This is done in order to save funds, since it is signed off chain, and can be signed with expiration times as well. Pretty neat functionality!
  • Aggregating data is sketchy business if you come from the world of website hacking. The bytes being concatenated had no delimiter. Why is this bad? Consider A as 0x01 and B as 0x0101. If we concatenate A + B, we end up with 0x010101. However, the signed data would be identical if A were 0x0101 and B were 0x01! This violates the Horton Principle mentioned above. In the context of the smart contract, this means we can change which parameter the bytes were signed for!
  • Practically, this can be exploited in a very bad way. In the Wyvern system, NFTs are swapped for other digital items. To allow this, Wyvern listings have a calldata field to allow a callback with specific parameters. In some cases, the calldata must be mutated at fulfillment time. This is done via a replacementPattern, a bitmask used to alter the calldata. This is necessary for OpenSea: the address of the offer taker must be added to the calldata.
  • In calldata, the first 4 bytes are the function selector. In the context of OpenSea, we need to call transferFrom(address,address,uint256). Using the modification primitive from above, an attacker could shift bits between the calldata and replacementPattern to modify the function selector! The closest selector is getApproved(uint), only 10 bits of difference away.
  • There is a 1 in 1024 chance that a random 4-byte bitmask (with the shift) flips the selector to the right value. Additionally, the address could start at any point in the pattern. This brought the exploit down to roughly 1 out of 64 offers being exploitable. Using the getApproved(0) primitive, an attacker could have taken WETH from these users, despite them never approving the transfer or still owning the NFT.
  • Overall, a super interesting bug that was never exploited. Handling signing and data parsing is not nearly as simple as one would think.
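  • The delimiter-free concatenation ambiguity is easy to demonstrate, along with the standard fix of length-prefixing each field:

```python
# Two different (A, B) pairs produce the same signed bytes when the
# fields are concatenated with no delimiter.
def commit(a: bytes, b: bytes) -> bytes:
    return a + b                      # the flawed aggregation

print(commit(b"\x01", b"\x01\x01") == commit(b"\x01\x01", b"\x01"))  # True

# Length-prefixing each field restores a unique, unambiguous encoding.
def commit_safe(a: bytes, b: bytes) -> bytes:
    return (len(a).to_bytes(4, "big") + a +
            len(b).to_bytes(4, "big") + b)

print(commit_safe(b"\x01", b"\x01\x01") == commit_safe(b"\x01\x01", b"\x01"))  # False
```

Any injective encoding (length prefixes, fixed-width fields, ABI-style tuples) restores the "sign what you mean" property.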

Missing zero-address check for the beneficiary address - 1076

code423n4    Reference →Posted 3 Years Ago
  • In the smart contract code, there is a function that takes in several addresses for storing an NFT. This includes the deployer (owner) and the beneficiary.
  • When it does the saving, there is no validation that the beneficiary address is not 0x0. This is rated as a high finding because of the major loss of funds that could occur. It's especially bad since there is no way to change the storage afterwards, even as an administrator, to fix the mistake. Should they ALWAYS check that the destination address is valid? Hmmm. To me, in the context of this contract, setting the beneficiary to any wrong address (not just 0x0) would be bad. I don't understand why only 0x0 is called out.
  • The 0x0 address is very special. In the ERC20/ERC721 specifications, burning transfers to the zero address and minting transfers from it. In this case, those code paths are entirely separate. However, it is not uncommon to see this code shared between other functions. With a shared code path and a 0x0 address, this could lead to an accidentally burned NFT. Yikes!
  • Overall, simple issue with weird impact. Thanks to bytes032 for all of the fun Solidity challenges lately.
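  • A minimal Python stand-in for the missing check (in Solidity this would be a `require(beneficiary != address(0))` at the top of the function; the function name here is hypothetical):

```python
# Reject the zero address up front, before it is written to
# unchangeable storage.
ZERO_ADDRESS = "0x" + "00" * 20

def set_beneficiary(beneficiary: str) -> str:
    if beneficiary == ZERO_ADDRESS:
        raise ValueError("beneficiary must not be the zero address")
    return beneficiary

print(set_beneficiary("0x" + "ab" * 20))   # normal address accepted
try:
    set_beneficiary(ZERO_ADDRESS)
except ValueError as e:
    print(e)                               # zero address rejected
```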

External Burn Function Allows Market Manipulation - 1075

CertiK    Reference →Posted 3 Years Ago
  • The Mint function is used to create tokens; Burn is used to destroy them. Both are standard with ERC20 tokens. This is the case with cryptoBurgers (BURG), a token on the Binance Smart Chain.
  • The Mint and Burn functions should not be publicly callable. Normally, they are called internally once some operation has been performed, such as sending ETH to the platform in exchange for the token.
  • The Burn function is marked external in the source code, which allows the number of tokens in the pool to be arbitrarily decreased. Why is this bad? It breaks the pricing of AMMs and token pairs.
  • The Hospo token had the exact same vulnerability. It was exploited by doing a major burn on the token, syncing the price, then performing a swap. Naturally, the price had been drastically manipulated, giving the attacker a major profit.
  • The tool ethtx.info is used to make the transactions look really nice here! Overall, two really simple bugs; it's amazing this made it through an audit...
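  • A toy constant-product AMM shows why an externally callable burn() wrecks pricing (the reserves and burn amount are illustrative):

```python
# Constant-product AMM: spot price is eth_reserve / token_reserve.
# Burning tokens out of the pool directly shifts that ratio.
token_reserve, eth_reserve = 1_000_000, 1_000

price_before = eth_reserve / token_reserve      # ETH per token
token_reserve -= 900_000                        # attacker burns pool tokens
price_after = eth_reserve / token_reserve

print(f"{price_after / price_before:.1f}x")     # 10.0x price jump
```

After syncing the pool to the new reserves, a swap executes at the manipulated price, which is exactly the Hospo attack flow described above.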

Taking home a $20K bounty with Oasis platform shutdown vulnerability - 1074

Trust Security    Reference →Posted 3 Years Ago
  • Oasis offers leveraged trading and borrowing, among other things. Oasis Earn had recently been added to the scope on Immunefi. Since the code had been audited before with no findings, it was worth taking some time to go through the new parts.
  • Oasis Earn offers a variety of trading strategies. This is done by users keeping their funds in a private DSProxy smart wallet. To perform a trade, they delegate execution to the Earn smart contracts, which implement the trading strategy. Users send ETH to DSProxy, which will perform the specified actions for trading automatically.
  • The Earn smart contracts use a delegateCall pattern. This allows state variables to be edited from any location, but it is a very dangerous construct. Earn has a large collection of Actions, such as Swap and SendToken. The real meat of this call is executeOp and its allowances:
    • ServiceRegistry: A mapping between service names and their ETH address.
    • OperationRegistry: An array of allowed actions.
    • OperationStorage: State for the operation execution. This is done because the local state belongs to the user's smart wallet.
    • OperationExecutor: Contract called directly from the original DSProxy call. This receives an array of calls with calldata to execute the operations explained above.
  • What's the bug then? Oasis Earn made a nice but terrible assumption: that executeOp will always run within the context of a user's DSProxy. What can we do with this? Storage is unique per execution (as explained above), but a selfdestruct() would be a permanent brick if we could make it happen.
  • This attack had a few constraints that made exploitation extremely difficult and limited the options:
    • Each action must appear in the OperationsRegistry's array of actions for the given operation name.
    • delegateCall target must be within the allowlisted service addresses.
    • OperationExecutor storage must be empty, since we didn't call it from the proper location - DSProxy.
  • From here, the author thought about code-reuse attacks (ROP, JOP, etc.) from the binary exploitation world. The primitive is an arbitrary delegateCall to a limited set of contracts with arbitrary calldata. The list of callable contracts covers the entire Oasis platform, NOT just Earn. The author wrote a script to enumerate all of the potential functions they could jump to using this method.
  • InitializableUpgradeabilityProxy is a proxy for the AAVE LendingPool implementation. If the data provided to the initialize call is empty and the previous implementation is 0, we create a strange state-confusion problem. The best part: there's a delegateCall at the end of this with data we control! Now we've turned a limited-address delegateCall into an arbitrary-address one!
  • One last trick... the call we want to make would be rejected by verifyAction, since LendingPool was not an action registered by Oasis Earn. The function hasActionsToVerify checks whether there is a list of actions for the operation; if so, it verifies them. However, passing in a CustomOperation bypasses this verification while keeping the action usable. Neat!
  • The POC:
    1. Create the selfdestruct contract.
    2. Generate the calldata passed to executeOp.
    3. Call executeOp() with the initialize() calldata, targetHash = the InitializableUpgradeableProxy service hash, and operationName = CustomOperation.
    4. Hit the boom! The contract is destroyed and the funds are lost.
  • Overall, a very complicated article and project! First, a design decision had unintended consequences. Then, the complexity of the modular system created the opportunity for exploitation via a classic binary exploitation technique. Pretty rad bug and exploit!
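  • The verifyAction bypass can be sketched in Python (the registry contents and control flow are assumptions based on the write-up, not the actual Oasis code):

```python
# Toy model: operations registered in the OperationRegistry get their
# actions verified; an unregistered operation name skips verification
# entirely while the actions still execute.
REGISTRY = {"KnownOperation": ["Swap", "SendToken"]}

def has_actions_to_verify(operation_name):
    return operation_name in REGISTRY     # unknown name -> nothing to verify

def execute_op(operation_name, actions):
    if has_actions_to_verify(operation_name):
        allowed = REGISTRY[operation_name]
        assert all(a in allowed for a in actions), "verifyAction: rejected"
    return f"executed {actions}"

# A registered operation is checked and the rogue action is rejected...
try:
    execute_op("KnownOperation", ["LendingPool"])
except AssertionError as e:
    print(e)
# ...but an unregistered "CustomOperation" skips verification entirely.
print(execute_op("CustomOperation", ["LendingPool"]))
```

The safer design is the inverse default: refuse to execute any operation whose name is not in the registry, rather than skipping the check.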