People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!
Ion uses ExtractLinearSum to convert a value into a linear-sum expression. For instance, (x + (2 + 3)) - (-3) can be transformed into x + 8. The resulting type contains three parameters.
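As a rough illustration, the folding described above can be sketched in Python. This is a minimal model, not SpiderMonkey's actual C++; the node encoding and the two-field sum type are assumptions made for the example.

```python
# Minimal sketch of linear-sum extraction: a value is reduced to a
# symbolic term plus an integer constant, folding nested add/sub nodes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinearSum:
    term: Optional[str]  # symbolic part, e.g. "x"; None for a pure constant
    constant: int

def extract_linear_sum(node):
    """node: int constant, variable name, or ('add'|'sub', lhs, rhs)."""
    if isinstance(node, int):
        return LinearSum(None, node)
    if isinstance(node, str):
        return LinearSum(node, 0)
    op, lhs, rhs = node
    l, r = extract_linear_sum(lhs), extract_linear_sum(rhs)
    if op == "add" and (l.term is None or r.term is None):
        return LinearSum(l.term or r.term, l.constant + r.constant)
    if op == "sub" and r.term is None:
        return LinearSum(l.term, l.constant - r.constant)
    raise ValueError("not a simple linear sum")

# (x + (2 + 3)) - (-3)  folds to  x + 8
expr = ("sub", ("add", "x", ("add", 2, 3)), -3)
print(extract_linear_sum(expr))  # LinearSum(term='x', constant=8)
```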
ExtractLinearSum is used in multiple places in the Ion compiler, one of which is folding, or simplifying, linear expressions. The function TryEliminateBoundsCheck tries to merge bounds checks on the same object to simplify things. For instance, array[i+4]; array[i+7] will generate two bounds checks. To merge them, Ion creates a bounds-check object that keeps track of what's going on, eventually leading to a single value of 7 being checked against the length. While MathSpace is useful, it's not rigorously verified, and in the case of bounds checks, that seems pretty important! Modulo makes sense in some math cases but doesn't make sense for bounds checks; Infinite does. So, what if we can force the numbers used in this operation on a bounds check to be of type Modulo? Suppose i is slightly less than 2^32: array[(i+5)|0]; array[(i+10)|0]. The |0 forces the value to 32 bits. The check will overflow because the MathSpace is set to Modulo, leading to a faulty bounds check. This is only exploitable with really large arrays, requiring typed arrays to be practically feasible. Map objects were nice for building addrOf and fakeObj primitives; once there, exploitation is trivial.

Halo2 keys queries by the pair (Commitment, QueryPoint), each mapping to a Value. This key isn't unique enough! A "query collision" can occur, where two independent queries share the same key even though their values are expected to differ. In the context of Halo2, the consequence is horrible: one evaluation can silently overwrite the other. This means that it's possible to forge proofs in many situations.

One quirk of GMX is in increasePosition, which did NOT update the globalShortAveragePrices in the ShortsTracker contract. When the position is later decreased, the value is updated, though: decreases update it, but increases do not. This is not really a vulnerability by itself but a quirk of the protocol. Additionally, enableLeverage must be called on the core code before performing any of the trades.
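The stale-average-price quirk can be captured in a toy model. The field names follow the write-up, but the PnL formula and the concrete numbers are simplified assumptions, not GMX's actual Solidity.

```python
# Toy model of the ShortsTracker accounting quirk: increases grow the
# global short size but leave the global average price stale, so later
# global-PnL math against that stale price misprices the shorts.
class ShortsTracker:
    def __init__(self, avg_price: float):
        self.global_short_avg_price = avg_price  # can be left stale
        self.global_short_size = 0.0

    def increase_position(self, size: float):
        self.global_short_size += size
        # QUIRK: the average price is NOT updated on increase

    def global_short_pnl(self, mark_price: float) -> float:
        # Shorts lose when the mark is above their (recorded) entry price;
        # short losses show up as gains for the pool (higher AUM).
        return self.global_short_size * (
            self.global_short_avg_price - mark_price
        ) / self.global_short_avg_price

t = ShortsTracker(avg_price=300.0)     # stale: far below the real price
t.increase_position(size=1_000_000.0)  # short actually opened near the mark
pnl = t.global_short_pnl(mark_price=17_000.0)
print(pnl)  # hugely negative: shorts look deeply unprofitable, inflating AUM
```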
There was a backend off-chain service, a Keeper, that would trigger this functionality. While the Keeper made this call, it was possible to redirect execution and call the GMX contract while leverage was still enabled; this is the vulnerability that makes the attack possible. The Keeper would call the PositionManager to enable leverage. The Orderbook would then execute executeDecreaseOrder(), update the attacker's position, and pass execution to the attacker's contract via the collateral token being WETH. The attacker's fallback function would transfer 3000 USDC to the vault and open a 30x leveraged short against WBTC using increasePosition. Because of the second design flaw, the globalShortAveragePrices were not updated. During a future call to the ShortsTracker contract, the globalShortAveragePrices would finally be updated, dropping the recorded price of WBTC to about 57x less than it should have been.

From there, the attacker would call mintAndStakeGlp to mint a lot of GLP. Next, they would call increasePosition to deposit a large amount of USDC against WBTC. This would update the globalShortSizes, resulting in AUM increasing dramatically. Finally, the attacker would call unstakeAndRedeemGlp to redeem way more tokens than they were entitled to. But why? The globalShortAveragePrices had been corrupted, while globalShortSizes was not. When performing calculations on the trades, the manipulated value of the trade was far above the market price, making the trades appear deeply unprofitable. Naturally, this increases AUM by a lot. By doing this over and over, they got more funds from trading GLP than they actually should have.

An unprotected initialize function can be called by an attacker before the real user, setting malicious values. In reality, if this happened, a legitimate developer should recognize the failure and just try again. At least, that's the argument I've been hearing for a long time. So, what's different here?

The target was guest.microsoft.com. Once logged in via a phone number, no information was shown; this seemed like a site that wasn't meant to be publicly accessible. One endpoint was /api/v1/config/, which took a JSON parameter called buildingIds.
Since they had not visited any buildings, no information was returned and the array of buildings was empty. By providing an ID of 1, they were able to see some building information. Another endpoint was /api/v1/host: by providing an email, PII about the employee, such as phone number, office location, mailing address, and more, was returned. The same issue existed for guests, keyed on their email, as well. By sending ..%2f..%2f..%2f (../../../ URL-encoded), they were able to reach an Azure Functions page. But why!? The proxy was decoding the URL-encoded /, and the decoded path was used by the actual Azure function. Neat! They then tried /api/visits/visit/test and eventually got it working to retrieve a wide range of invitation and meeting information. Sadly, they got nothing for the vulnerability: it was moved to review/repo, fixed, and no payment was ever made. Regardless, it was a good set of vulns!

There was a PUT /api/lead/cem-xhr endpoint that fetched data, likely proxying information to a Candidate Experience Manager (CEM) via an XHR request. It contained a lead_id parameter.

ProcessUnstakeWithdrawals iterates over a list of unbonded entries. This loop fails to handle multiple withdrawal requests coming from the same delegator. Via some funky state handling, this led to a panic from too many coins being burned.
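A toy model of that loop shows how duplicate entries from one delegator can push the burn past the module's balance. The structure here is assumed for illustration; the real implementation is the chain's Go module code.

```python
# Toy model of the ProcessUnstakeWithdrawals flaw: each unbonded entry is
# burned independently, with no handling of multiple entries from the
# same delegator, so the total burned can exceed the module's balance.
class BankModule:
    def __init__(self, balance: int):
        self.balance = balance

    def burn(self, amount: int):
        if amount > self.balance:
            # in the real chain this is a panic, halting the node
            raise RuntimeError("panic: insufficient funds to burn")
        self.balance -= amount

def process_unstake_withdrawals(bank: BankModule, entries):
    for delegator, amount in entries:  # BUG: no per-delegator handling
        bank.burn(amount)

bank = BankModule(balance=100)
# one delegator with a 100-coin stake files two withdrawal requests
entries = [("alice", 100), ("alice", 100)]
try:
    process_unstake_withdrawals(bank, entries)
except RuntimeError as err:
    print(err)  # panic: insufficient funds to burn
```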