Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Lending/Borrowing DeFi Attacks- 1155

Dacian    Reference →Posted 2 Years Ago
  • In DeFi, there are many lending and borrowing platforms. Users on these platforms can either lend tokens to receive interest or borrow tokens to conduct other activities. Naturally, the borrowers need to provide collateral to take out these loans.
  • If the borrower doesn't keep to the repayment schedule or the collateral drops below a certain price threshold, then the loan is liquidated. This allows other players to pay back the loan's assets. In exchange, they get a discount on the collateral that the user provided. The original borrower keeps the borrowed funds in this case.
  • The first vulnerability is Liquidation Before Default. The liquidation process allows for a liquidator to make money by purchasing the collateral at a discount. If it's possible to force this early, then money can be stolen.
  • In the first example case, there is a function that returns whether a payment is on time or not. If payments are overdue, then a user can liquidate the loan. If the loan's first payment hasn't been made, then the due date is 0, which makes the loan instantly liquidatable. Another case allows the loan parameters to be changed after the loan has been given. Another was a failure to update a variable, which led to loans being instantly written off.
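The first finding can be sketched in a few lines. This is a minimal model of the pattern, not the audited code; the names (`next_payment_due`, etc.) are hypothetical:

```python
# Toy model of "liquidation before default": a brand-new loan's due date
# defaults to 0, so a naive "is it overdue?" check passes immediately.

class Loan:
    def __init__(self):
        self.next_payment_due = 0  # never set for a fresh loan

def is_liquidatable(loan: Loan, now: int) -> bool:
    # Vulnerable check: an unset due date (0) is always "in the past".
    return now > loan.next_payment_due

def is_liquidatable_fixed(loan: Loan, now: int) -> bool:
    # Fixed: a loan with no scheduled payment cannot be overdue.
    return loan.next_payment_due != 0 and now > loan.next_payment_due

fresh_loan = Loan()
now = 1_700_000_000
print(is_liquidatable(fresh_loan, now))        # True: instantly liquidatable
print(is_liquidatable_fixed(fresh_loan, now))  # False
```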
  • The next bug is that the borrower can't be liquidated. If a borrower can devise a loan offer whose collateral can never be liquidated, this creates a major problem. The AddressSet type will overwrite existing values. So, an attacker with a loan could potentially overwrite their collateral with 0, making it impossible to seize any collateral. Another case is providing a malicious oracle.
  • If it's possible to keep the collateral and keep the borrowed funds, the whole system falls apart. In the example, a user can take out multiple loans. When they close out a non-existent loan, it subtracts from the loan counter without actually doing anything. Hence, a user can take out a large loan, decrement the loan counter and keep the collateral. Some of the other examples are logic flaws that allow the code to perform unintended executions.
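The counter bug boils down to missing existence checks. A minimal sketch, assuming a hypothetical accounting layout (not the real contract):

```python
# Toy model: closing a loan ID that was never opened still decrements the
# active-loan counter, so a borrower can "close" fake loans until the
# protocol believes their real loan is gone.

class LendingBook:
    def __init__(self):
        self.active_loans = {}   # loan_id -> amount
        self.loan_count = 0

    def open_loan(self, loan_id, amount):
        self.active_loans[loan_id] = amount
        self.loan_count += 1

    def close_loan_vulnerable(self, loan_id):
        # Bug: no check that the loan exists before updating the counter.
        self.active_loans.pop(loan_id, None)
        self.loan_count -= 1

book = LendingBook()
book.open_loan("real", 1_000_000)
book.close_loan_vulnerable("fake")   # decrements counter, touches nothing else
print(book.loan_count)               # 0: protocol thinks no loans are open
print("real" in book.active_loans)   # True: the real debt still exists
```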
  • The next two findings are related to access controls: repayments paused while liquidations enabled and Collateral Pause Stops Existing Repayment & Liquidation. It's common for smart contracts to be pausable in the event of a vulnerability or market crash. However, should everything be pausable?
  • Depending on the case, this can cause problems. For instance, if repayment is blocked, then a crash of the cryptocurrency asset being used as collateral could drastically affect the protocol. Additionally, if repayments are not allowed but liquidations are, then it's not fair, since proper payments cannot be made on the funds.
  • A few more: a liquidator paying back the loan on behalf of the borrower, but with an insufficient amount; and infinite loan rollover, where the borrower continually gets an extension on the loan without the lender ever being able to reclaim it.
  • There are many more ways that loans can go wrong: denial of service and several other finance-related issues. Overall, a very interesting post on loan-related issues with good links to the previous findings.

Uncovering Real-Life Examples of Denial of Service Attacks on Smart Contracts- 1154

Bloqarl    Reference →Posted 2 Years Ago
  • Denial of Service (DoS) attacks come from disallowing access to a service. In the context of blockchain applications, this can mean completely blocking access to the service for somebody else, or for yourself accidentally. Both of these are valid findings for auditors on Code4rena.
  • The first one the author mentions is DoS by underflow. In Solidity, there are now built-in protections to prevent overflows and underflows. However, if there is a flaw in the contract, the protection itself can become a DoS. An example of the underflow DoS was a contract with a senior and a junior vault.
  • When the junior vault transferred funds to the senior vault, it had to convert from WETH to USDC to stake. The senior vault gains value by increasing the debt of the junior vault. There is validation to ensure that the vault is not beyond its cap, and this arithmetic is covered by integer underflow protection. If the cap is exceeded, the same check causes both withdraw and deposit to revert. The fix is to return 0 instead of underflowing.
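The shape of the bug, as a sketch. The `OverflowError` below stands in for Solidity 0.8's checked-arithmetic revert; the capacity math is illustrative, not the real vault's:

```python
# Toy model: `cap - used` reverts (in Solidity >= 0.8) when used > cap.
# If both deposit and withdraw paths compute this value, the vault is
# bricked once `used` drifts above `cap`. Returning 0 keeps calls working.

def remaining_capacity_vulnerable(cap: int, used: int) -> int:
    if cap - used < 0:
        # Stand-in for Solidity 0.8's checked-arithmetic revert.
        raise OverflowError("underflow: transaction reverts")
    return cap - used

def remaining_capacity_fixed(cap: int, used: int) -> int:
    return cap - used if used <= cap else 0

try:
    remaining_capacity_vulnerable(cap=100, used=150)
except OverflowError as e:
    print(e)  # every deposit AND withdraw hitting this check reverts

print(remaining_capacity_fixed(cap=100, used=150))  # 0: calls proceed
```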
  • Another common bug is DoS via the gas limit. In the example, user deposits were tracked in an array. Later on, when withdrawing USDC, the contract iterates from index 0 over the entire array. By making the array too big, users may be denied access due to gas limitations. This can be solved by avoiding user-controllable arrays, removing previously used values that are no longer needed, or using indexed access in cases where the array is too long.
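A sketch of the gas-limit DoS, modeling gas as a fixed cost per loop iteration (the constants are illustrative, not real EVM costs):

```python
# Toy model: withdrawals iterate over every historical deposit, so an
# attacker can dust the array until the loop no longer fits in a block.

GAS_PER_ITERATION = 5_000
BLOCK_GAS_LIMIT = 30_000_000

def withdraw_all(deposits: list) -> int:
    gas_used = 0
    total = 0
    for amount in deposits:           # unbounded loop over user deposits
        gas_used += GAS_PER_ITERATION
        if gas_used > BLOCK_GAS_LIMIT:
            raise RuntimeError("out of gas: withdrawal permanently blocked")
        total += amount
    return total

print(withdraw_all([100] * 10))       # fine for small arrays
try:
    withdraw_all([1] * 10_000)        # attacker-inflated array
except RuntimeError as e:
    print(e)
```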
  • DoS by the nonReentrant modifier. The example starts with a contract that had various operations related to staking, rewards and transferring funds, all with nonReentrant modifiers on them. There existed an unstaking code path that hit a transfer function where BOTH functions had the nonReentrant modifier. Because of this, all calls to the unstake function would fail whenever there were vault rewards. The problem could be solved in a few ways; in particular, by having an external function with the modifier and an internal function without it to access the functionality.
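The self-DoS is easy to model. This sketch assumes a hypothetical staking vault, with the reentrancy guard modeled as a shared flag (like OpenZeppelin's nonReentrant):

```python
# Toy model: unstake() internally calls transfer_rewards(); because BOTH
# hold the same guard, the nested call always reverts.

class Vault:
    def __init__(self):
        self._locked = False
        self.rewards = 10

    def _non_reentrant(self, fn):
        if self._locked:
            raise RuntimeError("ReentrancyGuard: reentrant call")
        self._locked = True
        try:
            return fn()
        finally:
            self._locked = False

    def transfer_rewards(self):
        return self._non_reentrant(lambda: "rewards sent")

    def unstake(self):
        def _unstake():
            if self.rewards > 0:
                self.transfer_rewards()  # nested guarded call -> revert
            return "unstaked"
        return self._non_reentrant(_unstake)

vault = Vault()
try:
    vault.unstake()
except RuntimeError as e:
    print(e)  # unstaking fails whenever rewards exist
```

The fix mirrors the article's suggestion: expose a guarded external entry point that delegates to an unguarded internal function.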
  • The next item refers to external calls reverting. The example they give is a Chainlink oracle that may revert calls. The solution appears to depend on the external call being made. However, to me, having a fallback for handling an oracle is dumb... the whole point is that this is a trusted entity that is secure. Any logic working around this could lead to compromise.
  • DoS via malicious receiver. There is commonly logic that allows for arbitrary callbacks to contracts. If this is implemented for various bits of functionality, an attacker could force a revert to happen, blocking the contract forever. The example case is a contract with basic loan functions liquidate() and endAuction(), which call a user-defined function. An attacker can revert all of these calls to make it impossible to end an auction or liquidate.
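A sketch of the malicious-receiver pattern, assuming a hypothetical auction contract (the callback name is invented):

```python
# Toy model: the borrower's callback runs inside liquidate(), so a receiver
# that always reverts blocks liquidation forever.

class MaliciousReceiver:
    def on_liquidate(self):
        raise RuntimeError("revert")  # attacker-controlled callback

class Auction:
    def __init__(self, borrower):
        self.borrower = borrower
        self.open = True

    def liquidate(self):
        self.borrower.on_liquidate()  # untrusted external call before state change
        self.open = False

auction = Auction(MaliciousReceiver())
try:
    auction.liquidate()
except RuntimeError:
    pass
print(auction.open)  # True: the auction can never be closed
```

A common mitigation is the pull-over-push pattern: record what the receiver is owed and let them claim it in a separate call, so their revert only hurts themselves.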
  • Overall, interesting post on denial of service problems in Solidity.

Ocean Life token hack analysis — Flash Loan Attack- 1153

MetaTrustAlert    Reference →Posted 2 Years Ago
  • The Ocean Life token on BSC was hacked. Ocean Life is a deflationary token, meaning that over time the supply shrinks. Why? With less supply comes more demand. For more on deflationary tokens, read understanding-deflationary-tokens-and-their-benefits.
  • The balanceOf mapping is normally straightforward. With deflationary tokens, it is dynamic and calculated based upon the supply.
  • How does this removal of assets occur? The function _reflectFee() takes a small fee every time transfer() is called, sending it to charity and a few other places. The totalSupply() variable _tTotal is reduced, and some internal accounting tracks the amount of funds now owned by the token.
  • Control over the totalSupply or balanceOf is generally a bad idea. But why? Many protocols calculate the price of a token based upon the amount of tokens in a pool. By being able to burn() an arbitrary amount of tokens, we can manipulate the price of funds in a pool. Or can we?
  • The deflationary token developers thought of this problem for AMMs. So, there is a denylist of addresses that are given their true balance instead of the dynamic balance. How did this go wrong then?
  • The vulnerability in this contract isn't the dynamic supply... the PancakeSwap pool was NOT included in the denylist of addresses. This means that the theorized attack about manipulating the supply of Ocean Life tokens by making them more scarce is possible. This was misreported in a few places like here.
  • In this attack, the attacker did a few things:
    1. Took out a large flash loan to get OLIFE tokens.
    2. Swapped with themselves continuously. This was done in order to force a large burn/destruction of tokens.
    3. Called sync() on PancakeSwap to update the price in the pool.
    4. Traded OLIFE tokens for BNB at the inflated rate to get much more BNB than should be possible.
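The steps above can be sketched with a toy constant-product pool. The numbers are illustrative, not the real attack values; the point is that sync() adopting a reflection-shrunk balance inflates the OLIFE price:

```python
# Toy x*y=k pool: because the pair's OLIFE balance is dynamic, reflection
# "burns" shrink it; sync() then adopts the smaller balance as the reserve.

class Pool:
    def __init__(self, olife, bnb):
        self.balance_olife = olife  # token.balanceOf(pool), dynamic
        self.reserve_olife = olife
        self.reserve_bnb = bnb

    def sync(self):
        self.reserve_olife = self.balance_olife

    def swap_olife_for_bnb(self, amount_in):
        k = self.reserve_olife * self.reserve_bnb
        new_olife = self.reserve_olife + amount_in
        out = self.reserve_bnb - k / new_olife
        self.reserve_olife, self.reserve_bnb = new_olife, self.reserve_bnb - out
        return out

pool = Pool(olife=1_000_000, bnb=100)
fair = pool.swap_olife_for_bnb(10_000)          # price before manipulation

pool2 = Pool(olife=1_000_000, bnb=100)
pool2.balance_olife = 10_000                    # reflection shrank pool balance
pool2.sync()                                    # reserves now reflect the burn
inflated = pool2.swap_olife_for_bnb(10_000)     # same tokens, far more BNB out

print(f"{fair:.4f} vs {inflated:.4f}")          # inflated is far larger
```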
  • Overall, a super interesting vulnerability that is simply a configuration problem.

Admin Brick & Forced Revert- 1152

Dacian    Reference →Posted 2 Years Ago
  • The project being tested was from Alchemist. They developed the Fjord Foundry platform which has an ERC-20 token called MIST, a staking and rewards platform called Aludel and a smart wallet called Crucible.
  • Within the ERC-20 token, the function advance() mints new inflationary supply according to the newly set parameters. This is a two-step process within a TimelockConfig, which is controlled by a multi-sig admin wallet.
  • Changes are proposed in step 1 with the requestChange() with a waiting period of 7 days. This can be cancelled with cancelChange(). Both of these functions are only accessible to the administrators with the onlyAdmin modifier.
  • The confirmChange() function is used to enact the proposed change. This does not have an administrative modifier on top of it. At first glance, this seems fine... the validation of the date works as expected. However, this does open up a new attack surface!
  • The vulnerability is that confirmChange() assumes that a change has been proposed for a given ID via this two step process. In reality, an external user can call this function without any proposals for a given ID. The only validation is that the block.timestamp is greater than the proposed time.
  • Here's the problem: if the configID doesn't exist, the mapping from configID to timestamp will return 0! Since 0 is less than the current timestamp, the function believes this is a valid call. In most languages, a missing dictionary entry will crash the program... but not Solidity!
  • In the rest of the function, it sets the new admin config. Since all of these values are set to 0, it bricks parts of the contract. In particular, it's impossible to call advance() on the smart contract anymore. Overall, a bad developer assumption caused a major security flaw.
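The default-zero behavior is the whole bug. A minimal sketch of a simplified TimelockConfig, with Solidity's missing-key-returns-zero semantics modeled via dict.get(..., 0):

```python
# Toy model: confirming a change for a config ID that was never proposed
# passes the timelock check, because the proposed timestamp defaults to 0.

class TimelockConfig:
    def __init__(self):
        self.change_time = {}   # config_id -> proposed timestamp
        self.config = {}

    def confirm_change(self, config_id, value, now):
        # Bug: a never-proposed config_id yields 0, and now > 0 always holds.
        if now < self.change_time.get(config_id, 0):
            raise RuntimeError("timelock not elapsed")
        self.config[config_id] = value

tl = TimelockConfig()
tl.confirm_change("admin", value=0, now=1_700_000_000)  # no proposal needed!
print(tl.config["admin"])  # 0: admin config zeroed, bricking advance()
```

The fix is an existence check: require that a proposal was actually recorded (a nonzero timestamp) before confirming.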

Hack Analysis: Platypus Finance, February 2023- 1151

Immunefi    Reference →Posted 2 Years Ago
  • Platypus Finance is an AMM protocol on the Avalanche blockchain. It has asset liability management and swapping capabilities. In February of 2023, they introduced USP, a new stablecoin.
  • MasterPlatypusV4 is the MasterChef-like orchestrator. The emergencyWithdraw() function allows users to withdraw their LP tokens from a given pool without caring about rewards. The contract literally has "EMERGENCY ONLY" within the code lolz.
  • When performing this operation, the only check on the function is whether the user is solvent enough. What does this do? It validates that the user's debt is less than or equal to the USP borrow limit. The better question is "what doesn't this do?"
  • The contract does not check if the user has taken out funds via a loan. Because of this, an attacker can withdraw their collateral and keep the funds from their loan. Yikes! In the real attack, $8.5M in assets was stolen.
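A sketch of the missing check, using toy positions and an invented 50% borrow ratio (the real MasterPlatypusV4 compared USP debt against a borrow limit):

```python
# Toy model: emergencyWithdraw only checks current solvency, never that the
# debt is repaid first, so collateral can be pulled while the loan stands.

class Position:
    def __init__(self, collateral, borrow_limit_ratio=0.5):
        self.collateral = collateral
        self.debt = 0
        self.ratio = borrow_limit_ratio

    def borrow(self, amount):
        assert self.debt + amount <= self.collateral * self.ratio
        self.debt += amount

    def emergency_withdraw_vulnerable(self):
        # Bug: passes as long as debt <= limit at this instant.
        if self.debt <= self.collateral * self.ratio:
            withdrawn = self.collateral
            self.collateral = 0
            return withdrawn
        raise RuntimeError("insolvent")

pos = Position(collateral=1_000)
pos.borrow(500)                            # max out the USP borrow
out = pos.emergency_withdraw_vulnerable()  # collateral comes back out
print(out, pos.debt)                       # 1000 500: attacker keeps both
```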
  • To perform this attack, an attacker can take out a flash loan. Once they do, they deposit the funds into the Platypus pool for LP tokens. TODO....

Deposit Proxy Contract Post Mortem- 1148

DyDx    Reference →Posted 2 Years Ago
  • DyDx is a trading platform for perpetuals, leveraged trading and general trades that runs on Ethereum.
  • DyDx had a smart contract for currency conversion that worked by trading all assets to USDC and then depositing to the DyDx exchange in a single transaction. The goal was to make this doable with 0x API calls.
  • Initially, this was achieved by taking in user-provided input for an exchange address and the call data. This is shown below:
    (bool success, bytes memory returndata) = exchange.call(data); 
    require(success, string(returndata)); 
    
  • Being able to control the location of the call and its parameters from within the contract feels bad. However, this was okay because of the design. There are three contracts: the currency converter proxy, the exchange wrapper and the exchange.
  • A user would call approve() for an asset on the currency converter proxy. Then, the funds would flow to the wrapper and eventually the exchange. The call() with the arbitrary data was being made from within the wrapper, after some input had already been validated. Since the wrapper didn't hold the approvals, there were no funds at risk.
  • The problem came with a redesign of the system. Instead of having three contracts for the one call, they used only TWO. Now, the currency converter proxy was the contract with the arbitrary call() within the code. Since this contract held the approvals, this was bad.
  • A malicious user could provide an arbitrary ERC-20 token and arbitrary calldata. Since the contract was approved by users, a malicious user could steal anybody's funds! How did this get through? The code had 100% coverage. It seems the reason it slipped through the cracks was a lack of attention on a small contract and a last-minute change made for performance reasons.
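The attack shape: point the "exchange" at the token itself so the forwarded calldata becomes transferFrom(victim, attacker, ...). A sketch with a toy ERC-20 and proxy (all names invented):

```python
# Toy model: the proxy holds user approvals AND forwards an attacker-chosen
# call, so the call executes with the proxy's allowance.

class Token:
    def __init__(self):
        self.balances = {"victim": 100, "attacker": 0}
        self.allowance = {}  # (owner, spender) -> amount

    def approve(self, owner, spender, amount):
        self.allowance[(owner, spender)] = amount

    def transfer_from(self, caller, src, dst, amount):
        assert self.allowance.get((src, caller), 0) >= amount
        self.allowance[(src, caller)] -= amount
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class Proxy:
    NAME = "proxy"
    def call(self, target, fn, *args):
        # exchange.call(data): the proxy blindly forwards attacker calldata,
        # and the callee sees the PROXY as msg.sender.
        return getattr(target, fn)(self.NAME, *args)

token, proxy = Token(), Proxy()
token.approve("victim", "proxy", 100)            # normal user approval
proxy.call(token, "transfer_from", "victim", "attacker", 100)
print(token.balances)  # {'victim': 0, 'attacker': 100}
```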
  • Overall, interesting yet simple vulnerability. The finder of this bug received 500K and all of the funds were whitehat hacked by DyDx.

Security Guide to Proxies - 1147

yAcademy     Reference →Posted 2 Years Ago
  • This article goes into the security problems that can occur while using proxies. The website is meant to be a home for all the research to do with proxies in the blockchain space.
  • The first vulnerability mentioned is the uninitialized proxy. When updating the proxy to use a new implementation, there's a problem: the constructor will not be run on the new contract. Naturally, we need to initialize state when adding a new implementation, so the implementation contract should have an initialize() function. The initialization step must happen separately from the deployment, so there is a race window in which the function could be called. If an attacker called it, or the call was forgotten about, major havoc could follow.
  • To test for the uninitialized contract, a few cases should be run:
    • Is the contract initialized?
    • Can it be reinitialized?
    • Is there a race condition between implementation deployment and initialization execution?
    • Is there access control on this function?
    Wormhole and OpenZeppelin are great examples of this.
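The race in the checklist above can be sketched with a toy implementation contract (the one-shot flag mimics OpenZeppelin's Initializable; the flow is simplified):

```python
# Toy model: initialize() must run exactly once, immediately after
# deployment; whoever calls it first becomes the owner.

class Implementation:
    def __init__(self):
        self.owner = None
        self.initialized = False

    def initialize(self, caller):
        if self.initialized:
            raise RuntimeError("already initialized")
        self.initialized = True
        self.owner = caller  # first caller wins the race

impl = Implementation()        # deployment: the constructor ran here...
impl.initialize("attacker")    # ...but an attacker front-ran initialize()
print(impl.owner)              # 'attacker' now owns the contract
try:
    impl.initialize("deployer")
except RuntimeError as e:
    print(e)                   # the legitimate deployer is locked out
```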
  • The second vulnerability is storage collision. When calling delegatecall, the implementation contract uses the storage of the proxy contract. If there is a collision between these two contracts' variables, then havoc can ensue. In the case of Audius, the proxy admin was stored in the same slot as the Initializable fields of the contract. This allowed the contract to be reinitialized and the funds to be stolen.
  • To test for this vulnerability, sol2uml can be used to visualize the storage slots from proxy to implementation. Additionally, diffing the storage-layout artifacts between compilations of different versions works too. This vulnerability is particularly common with upgrades, since an upgrade can reorder the variables.
  • Function clashing occurs when the 4-byte function selectors of two functions are the same. If a proxy admin function and an implementation function have the same selector, this can cause problems. Slither is able to detect this problem automatically.
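The dispatch mechanism that makes clashing dangerous, as a sketch. Solidity derives selectors from keccak256; sha256 stands in here to keep the sketch stdlib-only, and the routing table is hypothetical:

```python
# Toy model: a proxy answers any selector it recognizes BEFORE falling
# through to the implementation, so a clashing implementation function
# silently becomes unreachable (or worse, hits admin logic).

import hashlib

def selector(signature: str) -> bytes:
    # Solidity uses keccak256(signature)[:4]; sha256 is a stand-in.
    return hashlib.sha256(signature.encode()).digest()[:4]

proxy_admin_table = {selector("upgradeTo(address)"): "PROXY: upgrade logic"}

def dispatch(signature: str) -> str:
    sel = selector(signature)
    if sel in proxy_admin_table:           # proxy answers first...
        return proxy_admin_table[sel]
    return f"IMPLEMENTATION: {signature}"  # ...everything else falls through

print(dispatch("upgradeTo(address)"))      # routed to the proxy
print(dispatch("totalSupply()"))           # routed to the implementation
# With only 4 bytes of selector space, unrelated signatures CAN collide;
# that collision is exactly what Slither's clash detector looks for.
```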
  • The next two vulnerabilities involve delegatecall. Redirecting a delegatecall to an arbitrary contract allows that contract to alter the caller's internal variables. The other issue is a selfdestruct being reached through the delegatecall from the proxy. When this happens, the proxy's address and storage are ruined forever.
  • The final issue is not checking the result of a delegatecall. A delegatecall to an address with no code does not revert; it simply returns a success boolean. By not checking the result, the function appears to have executed while nothing actually happened.
  • Overall, a good read into proxy based vulnerabilities.

What Is Primacy Of Impact?- 1146

Immunefi    Reference →Posted 2 Years Ago
  • In some way, shape or form, a bug bounty program needs a documented scope. On Immunefi, this typically lists the contracts or websites in scope and the assets at risk. So, what happens when the company writes a new contract but it is not put on the scope list? Well, the contract is not in scope!
  • To me, this is really dumb. Obviously, you want to define what you pay out for. At the same time, shouldn't it just be that there are customer funds at risk? Funds at risk is funds at risk. If whitehats find a bug and don't feel they can get paid out, they may cross over to the dark side.
  • The new Primacy of Impact rule is meant to get rid of this. We didn't list that contract in scope, yet you can steal millions worth of assets? Yep, we'll pay out for that! This rule tries to prevent programs from refusing to pay out for bugs while still considering the bug bad enough to warrant a fix. If there are funds at risk, then a payout should occur.
  • In the DeFi space, where million dollar hacks happen regularly, it makes sense to have this rule. I think it's a good step forward for security.

yAcademy Proxies Research - 1145

yAcademy    Reference →Posted 2 Years Ago
  • The article dives into the different proxy patterns and the security issues that can arise from them. First, there is the simple proxy. This makes a delegatecall to the implementation code. The delegatecall is used so that the proxy's storage is decoupled from the code. The original proxy did not have the ability to upgrade though.
  • Obviously, we want to be able to upgrade the implementation address. This is done by storing the implementation address within the proxy contract and having an admin function to update it. An Initializable Proxy is a proxy that can be initialized. This is necessary because the constructor only runs on deployment, so if we set a new implementation, we need a way to initialize or update the state of new variables.
  • The patterns above have a few problems though. The proxy can suffer storage collisions if a developer is not careful. To account for this, EIP-1967 was created for unstructured storage. This stores the implementation address at a pseudo-random slot (derived from a hash) within the contract to ensure that collisions don't occur.
  • The second problem is function clashing when the proxy and implementation share a function selector. In the Transparent Proxy pattern, the admin of the contract hits a special section of code only meant for admins while the rest of the users hit the normal fallback code.
  • In all of the cases before, the upgrading was happening within the proxy contract. The Universal Upgradeable Proxy Standard (UUPS) flips this on its head. The upgrading functionality is in the implementation contract instead. This allows for editing the upgrade functionality, which can be a good thing.
  • The Beacon Proxy pattern is useful when a large number of proxy contracts is needed. If the implementation address changed, then all of the proxies would need to change as well. Instead of the proxy->implementation flow, this pattern adds in a third contract. The flow is now proxy->beacon->implementation.
  • The Diamond Proxy has a list of implementation contracts and routes to a different contract depending on the function being called. This allows for smart contracts larger than the official size limit. However, this does have the limitation of storage being quite awkward across the different contracts.
  • The Metamorphic contract is weird in that it doesn't have a proxy for updating. Instead, it calls selfdestruct then redeploys the contract to the same address. Honestly, this pattern makes the most sense to me for user engagement.
  • This is a really good article on the types of proxy contracts. I felt like they explained the pros, cons and gave example implementations on the way.

Yearn Finance Hacked from Misconfigured Tokens- 1144

Rekt    Reference →Posted 2 Years Ago
  • Yearn Finance is a suite of products to earn yield on digital assets. This includes staking tokens to earn interest and selling/buying votes. For the yield-bearing assets, users can put positions into Aave, Compound, dYdX and bZx's Fulcrum.
  • The bad contract was the legacy iEarn USDT token contract. What was wrong with it? There was no inherent security issue with the code; it was a misconfiguration of the pools being used. In particular, the contract used the Fulcrum USDC address instead of the USDT address.
  • Why is this bad? It made manipulation of the pool possible. From my basic understanding, the attacker needed to hit the code path for this misconfiguration. This was done by rebalancing the protocol to use Fulcrum instead of Aave and Compound. To make the hack as financially profitable as possible, they forced all of the funds to be sent back to the contract instead of being in the pools.
  • Finally, to exploit the misconfiguration, they sent a single USDT token to the pool. Since the pool didn't hold any USDT, the reported USDT amount (which was really the USDC amount) was divided by the real amount of yUSDT, which was 1 from the donation. This led to 1.2 quadrillion yUSDT being minted when it shouldn't have been. Yikes!
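The divide-by-one mint is just standard vault share math fed a wrong balance. A sketch with illustrative numbers (the real figures differ):

```python
# Toy model: shares minted are proportional to the depositor's share of the
# pool. If the reported pool balance comes from the wrong token (and the
# real balance is the attacker's single donated token), the mint explodes.

def shares_to_mint(deposit, pool_balance_reported, total_shares):
    # Standard vault math: new shares proportional to pool ownership.
    return deposit * total_shares // pool_balance_reported

# Correct configuration: balance and shares track the same asset.
print(shares_to_mint(deposit=1_000_000, pool_balance_reported=1_000_000,
                     total_shares=1_000_000))   # 1_000_000: fair mint

# Misconfigured: the reported balance is 1 (the single donated token),
# while the deposit reflects the large (wrong-token) amount.
print(shares_to_mint(deposit=1_000_000, pool_balance_reported=1,
                     total_shares=1_000_000))   # 10**12: wildly inflated
```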
  • The attacker traded these funds to other locations in the Yearn Finance ecosystem in order to profit heavily from the issue. This dumb copy-and-paste issue required a complex manipulation to exploit, which is pretty wild. Overall, a good read but hard to follow without understanding the codebase.