Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

LeftoverLocals: Listening to LLM responses through leaked GPU local memory - 1365

Tyler Sorensen and Heidy Khlaaf    Reference → Posted 2 Years Ago
  • GPUs are parallel, fast co-processors designed to handle high-throughput graphics and machine learning workloads. They are made up of compute units that all share a global memory, so a GPU has both compute and memory components.
  • Some GPUs have a section of memory called local memory that is private to a given compute unit. It acts as a fast cache for the processing elements within that compute unit when global memory is too slow.
  • The execution model is different than most programs: GPU programs, called kernels, are written in a shader language like OpenCL, Metal, or Vulkan, with a single entrypoint function executed by many parallel invocations.
  • The vulnerability is that the local memory of programs executed by different users on a compute unit is not properly cleared. As a result, it's possible to steal information across different program runs!
  • For instance, if a GPU job is being executed by one process, a malicious process could execute a job directly after it and steal information from the original job's local memory.
  • The rest of the post covers pulling this off on specific platforms and actually extracting the leaked information. The main interesting thing is that on many platforms (like Android), all applications have access to the GPU, making any application a potential attacker. Although this is interesting, I don't think it's worth putting into this post but may be worth coming back to.
  • The disclosure process was coordinated with GPU vendors such as Apple, AMD, ARM and many others. Many of the issues were fixed, which is awesome. Overall, a good post for a relatively simple bug. I personally felt that it was dramatized a bit much with the name, impact, images, etc. I just love when bugs are talked about and explained :)
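As a mental model, the bug class boils down to scratch memory that outlives a kernel launch. Here's a minimal Python simulation (all the classes, names, and the secret are invented for illustration; the real attack is written as actual GPU kernels in OpenCL/Metal/Vulkan):

```python
# Toy simulation of the LeftoverLocals bug class. The key property:
# per-compute-unit local memory is NOT zeroed between kernel launches
# from different processes.

class ComputeUnit:
    def __init__(self, local_mem_size=64):
        # Local memory persists across kernel launches unless explicitly cleared.
        self.local_memory = bytearray(local_mem_size)

    def run_kernel(self, kernel):
        return kernel(self.local_memory)

def victim_kernel(local_mem):
    # e.g. an LLM inference kernel staging output tokens in fast local memory
    secret = b"my password is hunter2"
    local_mem[:len(secret)] = secret

def attacker_kernel(local_mem):
    # a "listener" kernel that simply dumps whatever was left behind
    return bytes(local_mem).rstrip(b"\x00")

cu = ComputeUnit()
cu.run_kernel(victim_kernel)             # victim's job runs first...
leaked = cu.run_kernel(attacker_kernel)  # ...attacker runs next on the same CU
print(leaked)  # b'my password is hunter2'
```

The fix the vendors shipped amounts to clearing local memory between launches (the equivalent of zeroing `self.local_memory` in `run_kernel`).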

Joomla: PHP Bug Introduces Multiple XSS Vulnerabilities - 1364

Stefan Schiller    Reference → Posted 2 Years Ago
  • SonarSource people go crazy on web security issues! Definitely one of the best blogs to read through for cutting-edge security research. In this case, they have a wild XSS in the Joomla CMS.
  • The CMS was removing all HTML tags besides the ones explicitly allowed. To do this, the function cleanTags removes all of the illegal content from the tag itself (attributes and such) but leaves the value within the tag alone.
  • This code is very security sensitive. So, while reviewing the implementation in detail, they noticed that mb_strpos and mb_substr handle invalid UTF-8 sequences differently. For mb_strpos, if it encounters an invalid sequence it resumes at the second byte of that sequence. The other function skips over the continuation bytes when this happens.
  • The inconsistency creates a major problem: it may be possible to smuggle in angle brackets and other useful characters by abusing it. Since one function uses a different index than the other, they end up processing different data.
  • By inserting multiple invalid UTF-8 sequences, the offset math of the two functions can be thrown out of sync. For instance, with \xF0\x9FAAA<BB, one function sees the invalid sequence and ends up treating <BB as a valid part of the processing even though much of the input was thrown out.
  • PHP actually fixed the underlying inconsistency but didn't backport the fix because they didn't consider it a security issue. Overall, a fascinating case of exploiting the intricacies of multibyte characters. Super post with awesome diagrams explaining the vulnerability.
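To see how two decoders can disagree on offsets, here's a simplified Python model of two recovery strategies run over the post's \xF0\x9FAAA<BB payload. This is not PHP's exact algorithm, just an illustration of how two plausible invalid-sequence policies produce different "character" offsets for the same bytes:

```python
def pos_restart(data, target):
    """Strategy 1 (mb_strpos-like): on an invalid byte, resume scanning
    at the very next byte, counting it as one character."""
    i = chars = 0
    while i < len(data):
        if data[i:i+1] == target:
            return chars
        i += 1
        chars += 1
    return -1

def pos_skip(data, target):
    """Strategy 2 (mb_substr-like): on an invalid lead byte, swallow the
    whole expected sequence length, skipping the following bytes."""
    i = chars = 0
    while i < len(data):
        if data[i:i+1] == target:
            return chars
        b = data[i]
        if b >= 0xF0:
            i += 4        # 4-byte lead: swallow 3 continuation bytes
        elif b >= 0xE0:
            i += 3
        elif b >= 0xC0:
            i += 2
        else:
            i += 1
        chars += 1
    return -1

payload = b"\xF0\x9FAAA<BB"
print(pos_restart(payload, b"<"))  # 5
print(pos_skip(payload, b"<"))     # 2 -- the two offsets disagree
```

A sanitizer that *finds* the `<` with one policy but *cuts* with the other slices the wrong span, which is how the bracket survives filtering.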

Chainlink CCTP Multi-Sig - 1363

Daniel Von Fange    Reference → Posted 2 Years Ago
  • Multi-signature wallets are a mechanism to defend against a single key compromise leading to the stealing of all funds. Additionally, it's common for timelocks to exist to allow for auditing of changes.
  • However, this creates an issue: what if there's an emergency? We need to be able to act first to make changes sometimes. So, there's a balance to be had here.
  • Chainlink CCTP has a one-day timelock. However, for an instant action, 6 out of 6 signatures can be used; with 4 signatures, the change executes after the 24-hour delay; and 2 signatures act as a complete veto.
  • Overall, it's interesting. This feels like a good balance of decentralization as well as the ability to act fast in an emergency.
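A sketch of how such a tiered scheme could be wired up, in Python. All names and the exact mechanics here are a toy model of the thresholds described above, not Chainlink's implementation:

```python
# Toy model: 6-of-6 executes instantly, 4+ signatures queue behind a
# one-day timelock, and any 2 signers can veto a queued action.

DAY = 24 * 60 * 60

class TieredMultisig:
    def __init__(self, signers):
        self.signers = set(signers)
        self.queued = {}      # action -> queue timestamp
        self.vetoed = set()

    def propose(self, action, approvals, now):
        approvals = set(approvals) & self.signers
        if len(approvals) == len(self.signers):   # full quorum: instant
            return "executed"
        if len(approvals) >= 4:                   # partial quorum: timelocked
            self.queued[action] = now
            return "queued"
        return "rejected"

    def veto(self, action, vetoers):
        if len(set(vetoers) & self.signers) >= 2:  # any 2 signers veto
            self.vetoed.add(action)

    def execute(self, action, now):
        if action in self.vetoed:
            return "vetoed"
        queued_at = self.queued.get(action)
        if queued_at is not None and now - queued_at >= DAY:
            return "executed"
        return "not ready"

ms = TieredMultisig(range(6))
print(ms.propose("upgrade", range(6), now=0))   # 6 of 6: executes instantly
print(ms.propose("tweak", range(4), now=0))     # 4 sigs: queued behind timelock
print(ms.execute("tweak", now=DAY))             # ready after one day
```

The design point is that the fast path requires *more* agreement than the slow path, while a small minority can always stop a queued change.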

Debugging Deployed Smart Contracts from Etherscan - 1362

bytes032    Reference → Posted 2 Years Ago
  • Being able to debug live code deployed on mainnet is a real pain in the butt. So, this is a strategy to do that.
  • First, fork the chain you want to work with using Foundry. This gives us control over a network to do what we want.
  • Next, download the flattened source code from Etherscan. Now, we can make changes to the code to help out, such as adding print statements. We need to be careful not to modify the state or storage slots of anything.
  • Finally, call vm.etch with our new code. This will overwrite the code at our target contract with our debug version but with the state of the mainnet one! Just a small tip to debug live contracts deployed on mainnet better.

CRLF injection in Twitter - 1361

S3C    Reference → Posted 2 Years Ago
  • Carriage Return - Line Feed (CRLF) injection, or response splitting, is a vulnerability where a newline can be added to an HTTP response in order to modify it. For instance, it can be used to inject headers, force-save a response and much more. How it works has always felt like a mystery to me, so I just read through some reports.
  • The one linked is fairly simple: add in a %0d%0aKey:Value. The %0d%0a allows the adding of an arbitrary header. This report also links to more interesting related reports on Twitter.
  • This one is interesting because the CRLF injection did not work with CRLF. Instead, they had to do some funky unicode encoding with %E5%98%8A. If I had to guess, this was a server-level protection and had nothing to do with the software that Twitter built.
  • Another pattern I noticed was this occurring with redirects. With these, a redirect from http to https reflects the request path into the Location header of the response. Since the newlines weren't being escaped, this led to a serious CRLF injection within the redirect.
  • I tend to blame the server implementation for this. Anything being added into an HTTP response that contains a newline should simply be escaped - there's no reason this shouldn't be the standard. According to DayZeroSec, this is also a common Nginx misconfiguration, with some variables being used in locations that are unintended.
  • Super weird bug class but CRAZY impact when discovered. Redirects and different encodings seem to do the trick in many cases.
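A minimal Python sketch of the redirect case and the fix, assuming a naive server that drops a client-supplied path straight into the Location header (`build_redirect` and the domain are hypothetical):

```python
from urllib.parse import unquote

def build_redirect(path):
    # Naive: percent-decoded client input flows straight into a header.
    return ("HTTP/1.1 302 Found\r\n"
            f"Location: https://example.com{unquote(path)}\r\n\r\n")

# Attacker-controlled path where %0d%0a smuggles an extra header:
evil = "/home%0d%0aSet-Cookie:%20session=attacker"
response = build_redirect(evil)
print("Set-Cookie: session=attacker" in response)  # True -- injected header

def build_redirect_safe(path):
    # The fix: reject (or escape) CR/LF before emitting the header value.
    decoded = unquote(path)
    if "\r" in decoded or "\n" in decoded:
        raise ValueError("CR/LF in header value")
    return ("HTTP/1.1 302 Found\r\n"
            f"Location: https://example.com{decoded}\r\n\r\n")
```

The unicode-encoding bypasses mentioned above work the same way: the server's decoding step produces a CR/LF byte that the filter never saw.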

Seneca Protocol - REKT - 1360

Rekt.news    Reference → Posted 2 Years Ago
  • Seneca did virtually everything wrong and then got hacked. So, sort of a funny setup.
  • Seneca was supposed to do an audit with Sherlock, but it was suddenly closed for code licensing issues. They decided to launch with only an audit from Halborn (which reported a similar bug but not the terrible one that was used).
  • In the post mortem, Seneca details the vulnerability. The function PerformOperations contained a mechanism for making an arbitrary call to an arbitrary contract from the context of their contract. There is a denylist, but evidently not a great one.
  • With an arbitrary call from the context of the Seneca smart contract, there are many paths to take. The easiest is abusing allowances from other users: by making calls to tokens that had approvals from other users, a malicious actor can trick the contract into sending them funds.
  • To me, this is a pretty clear sink within the smart contract. Arbitrary calls like this are catastrophic. This would have been quickly found by lots of people on Sherlock; it's weird that this wasn't found by Halborn tbh.
  • Overall, the takeaway is take security seriously! If you don't you'll get hit hard. As security folks, it's okay to call things out as insecure in order to protect users.
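A toy Python model of the allowance-abuse path (all names hypothetical): a contract that makes arbitrary calls from its own context effectively hands its ERC-20 approvals to anyone.

```python
# Dict-based stand-in for an ERC-20 token with standard approve/transferFrom
# semantics, plus the flawed "arbitrary call from the contract" primitive.

class Token:
    def __init__(self):
        self.balances = {}
        self.allowances = {}  # (owner, spender) -> amount

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, caller, owner, to, amount):
        # `caller` spends `owner`'s approval -- standard ERC-20 behavior.
        assert self.allowances.get((owner, caller), 0) >= amount, "no allowance"
        assert self.balances.get(owner, 0) >= amount, "no balance"
        self.allowances[(owner, caller)] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

def perform_operations(contract_addr, fn, *args):
    # The flawed primitive: attacker-chosen call made FROM the contract's
    # own address, so the token sees the contract as the caller.
    fn(contract_addr, *args)

token = Token()
token.balances["victim"] = 100
token.approve("victim", "seneca_contract", 100)  # victim trusted the protocol

# Attacker routes a transferFrom through the contract's context:
perform_operations("seneca_contract", token.transfer_from,
                   "victim", "attacker", 100)
print(token.balances["attacker"])  # 100 -- victim drained
```

The token behaves exactly as specified; the bug is purely that the contract lets anyone borrow its identity as the approved spender.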

WOOFi sPMM exploit post-mortem - 1359

Woo    Reference → Posted 2 Years Ago
  • Woo is some sort of finance platform that is on various blockchains. Recently, they had deployed everything on Arbitrum.
  • WOOFi has a system that adjusts oracle prices based on trade value. By using oracle manipulation within a low-liquidity environment, it was possible to drop the price of the asset and steal funds.
  • The attacker borrowed 7.7M WOO and sold it into WOOFi. The pricing algorithm then incorrectly produced an extreme price close to zero. From there, the attacker swapped out 10M WOO for almost nothing in USDC. They did this 3 times to make a large profit of about $8.75M.
  • Instead of using a standard Automated Market Maker (AMM), they used their special sPMM (synthetic proactive market maker). Within their protocol, the error resulted in the price going outside of its expected range, down to $0.00000009. In theory, a fallback to Chainlink should have kicked in, but the threshold wasn't reached, resulting in this major issue.
  • A few things stood out to me and rekt.news. First, deploying to different chains doesn't come without risk; low liquidity opens the door to these types of attacks. Second, things that are not battle-tested and well-audited shouldn't have millions in them.
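WOOFi's sPMM is not a constant-product AMM, but a constant-product pool makes the low-liquidity point easy to see. The reserve numbers below are invented purely for illustration:

```python
# In a constant-product pool (x * y = k), dumping a position several times
# larger than the reserves collapses the post-trade spot price.

def sell_into_pool(woo_reserve, usdc_reserve, woo_in):
    k = woo_reserve * usdc_reserve
    new_woo = woo_reserve + woo_in
    new_usdc = k / new_woo
    usdc_out = usdc_reserve - new_usdc
    spot_price = new_usdc / new_woo   # post-trade WOO price in USDC
    return usdc_out, spot_price

# A shallow, hypothetical pool: 2M WOO / 1M USDC (spot price $0.50).
usdc_out, price = sell_into_pool(2_000_000, 1_000_000, 7_700_000)
print(f"price after dump: ${price:.6f}")  # collapses from $0.50 to ~$0.02
```

On a deep pool the same 7.7M WOO barely moves the price; on a freshly deployed chain with thin liquidity, it's enough to wreck any on-chain price source that trusts the pool.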

Multiple bugs chained to takeover Facebook Accounts which uses Gmail - 1358

Youssef Sammouda    Reference → Posted 2 Years Ago
  • Facebook has an extra security mechanism after logging in to ensure the user is valid. This could be a captcha or MFA, and is commonly referred to as a checkpoint. It is implemented within an iframe on a sandboxed URL.
  • When implementing this, the outer page's URL is shared with the sandboxed URL. Since this could happen within an OAuth flow, the OAuth code could be leaked inside of this iframe. So, if we could communicate with this iframe, we could potentially steal OAuth codes to take over accounts.
  • By chance, the author already had an XSS on the domain www.fbsbx.com. Since the domain is a sandbox, that's actually by design. On a particular page, it's possible to upload HTML files to this domain.
  • To steal the URL passed to the iframe we need to have a relation to the two windows. First, we need access to the Facebook checkpoint page. Next, we need a window with the XSS on the Facebook sandbox referenced as well.
  • To do this, we can open the first window in a new tab from our malicious website. For the second one, we can create an iframe on this page with the sandbox within it.
  • The iframe for the sandbox and the window with the login code have the same origin. Hence, they are able to read some of the same information! In particular, the location.href can be stolen to get the code.
  • The one other complication is that we need to force the account into this checkpoint state. So, we can use a login CSRF and a logout CSRF to do the trick. However, if we do this on Facebook directly, we won't get anything out of it. So, the trick is to do it against an OAuth provider, like Gmail.
  • To perform the attack, do the following steps:
    1. Logout CSRF
    2. Login CSRF
    3. Open up the Gmail OAuth provider with a redirect to Facebook with window.open(). This will redirect to Facebook's checkpoint page with a code in the iframe.
    4. Wait a few seconds for all of the interactions to happen.
    5. Access the frames information using the XSS to get the Google OAuth code.
    6. Start the password recovery process on Facebook.com and connect via Gmail.
    7. Use the code and state in order to login to Google.
  • This hacker's deep understanding of client-side security always amazes me. On top of that, just having an XSS in the pocket was awesome. I wonder if there's a good place to find all of the client-side security things? If you see one, please let me know!

SolChat Messages Insecure Encryption Method - 1357

h0wl    Reference → Posted 2 Years Ago
  • SolChat claimed to be an encrypted chat application with audio calls over WebRTC. So, the author decided to take a look at it.
  • They first took to reviewing the JavaScript code. Since the JS map files were easy to download, they could deobfuscate it using sourcemapper. While doing this, they discovered that the encryption/decryption of messages was happening client side with a hardcoded key!
  • In particular, these were stored within process.env.REACT_APP_API_URL. Since React inlines these environment variables into the client-side bundle at build time, the secret information was exposed to anyone. I'm guessing that the developer didn't understand the difference between frontend and backend.
  • So, they took some messages off of Solana and decrypted them on the spot. Talk about a horrible blunder! Even if this only happened server side, it would still be bad that a single entity holds the keys for decrypting all messages.
  • I love that the author went and verified these claims. People who make bad claims about the security of something need to be exposed in order for the world to be more secure.
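To make the blunder concrete, here's a Python sketch with a stand-in XOR cipher (the actual cipher and key names are not stated in my notes; the point is where the key lives, not the algorithm):

```python
# If the decryption key ships in the client bundle, "encryption" is just
# obfuscation: anyone with the frontend code can decrypt every message.

HARDCODED_KEY = b"react-app-bundled-secret"  # visible to anyone via the JS maps

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Encrypted" message pulled off the public chain:
ciphertext = xor_crypt(b"meet me at midnight", HARDCODED_KEY)

# Any reader of the frontend code can decrypt it on the spot:
print(xor_crypt(ciphertext, HARDCODED_KEY))  # b'meet me at midnight'
```

Real end-to-end encryption keeps keys on the endpoints only; neither the bundle nor the server should ever be able to decrypt everyone's messages.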

Learning by Breaking - A LayerZero Case Study - Part 3 - 1356

Trust Security    Reference → Posted 2 Years Ago
  • In the first two posts they found two vulnerabilities that were already patched in LayerZero. This time, they go through a vulnerability in a different section of code.
  • When calling an external smart contract, you can specify the amount of gas for the call. At most 63/64 of the remaining gas can be passed into the call. Why? Because the current contract also needs some gas to finish executing! If all the gas were passed, there's a chance that the current one would run out of gas. Even with the 63/64 rule, it's possible for the reserved 1/64 to be eaten up, which is what this bug exploits.
  • LZ has two cross-chain token implementations: ONFT (ERC721) and OFT (ERC20). The ONFT contract implements the specification correctly, including the onERC721Received() callback. There is no gas limit on this external call, which means we can eat up a lot of gas.
  • In the _blockingLzReceive() function, it will try to store the reason for the revert. However, if we burn the forwarded 63/64 from the other contract (which we can from the onERC721Received() entrypoint), we can force this store to revert with not enough gas.
  • To freeze an ONFT, a user can bridge a low-value NFT to a malicious contract on the destination. To unfreeze it, the NFT owner can call forceResumeReceive() to remove the payload from the endpoint. Then, it can be resubmitted with 1.5M gas.
  • When disclosed to LZ on Immunefi, it was pointed out that all NFT/ONFT payouts were limited to the low range of 1K-10K, compared to the normal 25K-250K range for a high impact. The easy fix is to override _safeMint() to pass only the gas needed for the transfer iteration and not 63/64 of the remaining gas.
  • The small payout for this bug, with the previous 2 bugs being dups, really disincentivized further research. To make it worse, it seems that many of the bugs are not fixed within the core contracts, making it very difficult to test issues. On top of this, cross-chain things are just difficult to test in general.
  • The series was an interesting perspective into LayerZero bug bounty program and DoS bugs in weird spots. To me, some of the design decisions, like force removing a bad TX, are interesting for limiting the actual damage on the blockchain. LZ seems to generally have good defense in depth and design features.
  • Besides this, it seems like LZ has some sketchy fixes and practices for bug bounty. From these readings, I personally wouldn't tackle their bug bounty program for the claimed $15M as it's A) very secure with good design, B) complicated to test with the fixes being all over the place, and C) a tad sketch on payouts.
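The 63/64 arithmetic behind the gas-starvation bug above can be sketched in a few lines of Python (the gas numbers and the revert-storage cost are illustrative, not real EVM figures):

```python
# Simplified model of the EVM's 63/64 rule: a callee given an unmetered
# call can burn everything it receives, leaving the caller only 1/64 of
# its gas -- sometimes too little to store a revert reason.

def forward(gas_remaining):
    """Max gas forwardable to a sub-call under the 63/64 rule."""
    return gas_remaining - gas_remaining // 64

gas = 200_000
sent = forward(gas)   # what the callback (e.g. onERC721Received) receives
burned = sent         # a malicious callee burns all of it
left = gas - sent     # caller keeps only 1/64
print(sent, left)     # 196875 3125

STORE_REVERT_REASON_COST = 5_000  # hypothetical cost to record the failure
print(left >= STORE_REVERT_REASON_COST)  # False -> the receive path bricks
```

This is why the suggested fix forwards only the gas the transfer iteration actually needs instead of the full 63/64: it guarantees the caller retains enough to record the failure.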