Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Rook to XSS: How I hacked chess.com with a rookie exploit - 1335

jake.skii    Reference → Posted 2 Years Ago
  • Chess.com is a very popular online chess platform. The author decided to look into this site for security issues.
  • On the platform, you can add friends. Reviewing this flow showed that accepting an invite is just a GET request to an invite URL containing a long hash. So, if a user were to click such a link, they would accept the friend request.
  • To make this more exploitable, they learned you could add an image link to your profile. This image link could be a direct link on chess.com! So, the URL for the image could easily be changed to the friend request URL, which, when loaded as an image on the profile, would force the viewer to accept the request. Pretty neat! When the developers tried to fix this, it was easy to bypass via a domain redirect.
  • They wanted to find true XSS on the website, which led them to a TinyMCE editor. They started reading its configuration and noticed that the background-image style attribute was in the allowlist. When this attribute was rendered, a double quote was added to the URL, resulting in a context break!
  • Since the double quote was added, the given attribute context could be escaped, which let them add extra attributes to the tag. They were able to add an onload attribute with JavaScript in it! However, many characters were blocked, like parens and backticks.
  • The author goes through a long process of enumerating the restrictions of the exploit: only being able to use a ? once, dots being allowed, and many other things. When you're testing black-box, figuring out what you have and its limitations is super important. To start, they realized they could read cookies and redirect the page to exfiltrate them.
  • Overall, an interesting XSS article. I wish the explanation of why they tried specific things were clearer, but I enjoyed the vulnerability.
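To make the context break concrete, here's a hedged sketch in Python; the sanitizer behavior, markup and payload are all invented for illustration, and the real TinyMCE output differed:

```python
# Hypothetical sketch of the context break described above: the editor
# allowlists the background-image style property, but its rendering wraps
# the URL in raw double quotes inside url(...) without HTML-encoding them,
# so the first inner quote terminates the style attribute early.
def render_profile_html(style_url: str) -> str:
    # imagined sanitizer output -- the unescaped quotes are the bug
    return '<div style="background-image: url("' + style_url + '")">hi</div>'

# A crafted "URL" now lands in tag context, so it can smuggle in an event
# handler. Per the article, parens and backticks were blocked, so the
# payload avoids them; window.name is just one hypothetical no-paren sink.
payload = 'x onload=window.name=document.cookie'
html = render_profile_html(payload)
print(html)
```

The key observation is that the injected text appears after the quote that closes the style attribute, so a browser would parse `onload` as a new attribute of the div.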

SMTP Smuggling - Spoofing E-Mails Worldwide - 1334

sec-consult    Reference → Posted 2 Years Ago
  • SMTP, the Simple Mail Transfer Protocol, is the base email protocol that helps run the world today. Finding bugs in mail servers could allow for terrible email spoofing and mass havoc. The SMTP protocol is newline based, similar to HTTP. Unlike HTTP, it has commands, which include setting the sender, recipient, subject and more. The sequence \r\n.\r\n marks the end of the message data.
  • HTTP smuggling is a vulnerability where two different interpreters of the protocol (nginx vs. Apache) see the incoming data differently. This leads to an attack where one server may think the data is one thing while the other sees it as another. Being able to break the underlying parser in this way can allow an attacker to smuggle in unintended information. The authors decided to look for a similar type of bug, but in SMTP, to smuggle commands.
  • SMTP servers support SMTP pipelining for sending a series of requests. Breaking out of a message would let an attacker change the information used for the next set of emails. They decided to try various ending sequences that are invalid per the specification but might still be supported: \n.\n, \r\n and many other things were tried. GMX was vulnerable to the \n.\r\n sequence.
  • On Microsoft Exchange, \n.\r\n broke the parsing as well. They were using BDAT, where the size of the data is specified; however, if the server doesn't support BDAT, then it will fall back to DATA once the mail comes from Outlook. This worked on their own server as well as Amazon, PayPal, eBay and many others. Note that it was the outbound side that caused problems.
  • They then started fuzzing various servers on the inbound side. If the connection timed out, the EOL sequence was not accepted; otherwise, they had found something. This was useful for testing interpretation more quickly than just sending emails and looking at the responses. It turns out that \r.\r was accepted by Cisco Secure Email.
  • Many of the protections on email are bypassed by this method. I found that particularly interesting.
  • The response from Microsoft was very sad. They claimed it wasn't a big deal since it required a non-standard sequence to be understood on the other side. Honestly, going forward, I expect to see more servers vulnerable to SMTP smuggling, similar to how HTTP smuggling took off once the technique became well known.
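The parser differential at the heart of this can be sketched in a few lines of Python, assuming a strict outbound parser and a lenient inbound one (message contents and the helper name are invented):

```python
# Sketch of the SMTP smuggling parser differential: an outbound server that
# only treats \r\n.\r\n as end-of-data forwards the whole blob as one body,
# while an inbound server that also accepts \n.\r\n splits it into two
# messages -- the second one fully attacker-controlled.
def split_messages(raw: bytes, terminator: bytes) -> list:
    """Split a DATA stream into messages on the given end-of-data sequence."""
    return [m for m in raw.split(terminator) if m]

blob = (b"From: alice@example.com\r\nTo: bob@example.com\r\n\r\nhi"
        b"\n.\r\n"                      # non-standard terminator smuggled in the body
        b"MAIL FROM:<admin@bank.example>\r\n"
        b"RCPT TO:<victim@example.com>\r\n"
        b"DATA\r\nspoofed mail\r\n.\r\n")

# Strict outbound server: the whole blob is one message body.
print(len(split_messages(blob, b"\r\n.\r\n")))  # 1

# Lenient inbound server: \n.\r\n ends the first message early, and the
# remainder is interpreted as fresh SMTP commands (a spoofed second mail).
print(len(split_messages(blob, b"\n.\r\n")))    # 2
```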

Cookie Crumbles: Breaking and Fixing Web Session Integrity - 1333

USENIX Security 2023 - Various Authors    Reference → Posted 2 Years Ago
  • Cookies are a major security surface in the browser. They can hold secrets thanks to the HttpOnly flag, can be restricted to HTTPS via the Secure flag, and help prevent CSRF via the SameSite flag. We always assume that cookies operate correctly. There are many fields in a cookie, such as the path, expiry time, host and more. However, with specific primitives, we can cause massive damage when cookies aren't handled as they should be.
  • The authors of this paper consider a few different attackers. First, a same-site attacker gained from a sub-domain takeover, bad site design or XSS. The second is the same but without an HTTPS/TLS connection. The final attacker they consider has full control over the cleartext traffic being sent back via a MitM attack. These specific roles are important, as they grant the attacker access to specific abilities in the browser.
  • There are some old-school techniques to talk about. First, cookie tossing. Most web servers only parse the first cookie of a particular name in the request. So, how are cookies of the same name ordered? First by path, then by creation time. If an attacker can set a cookie of the same name with a more specific path, it will get parsed first! For the double-submit CSRF pattern, this is a devastating attack. Second, the cookie jar on Firefox and Chrome has a limit of 180 cookies per schemeful site, with the oldest cookies getting kicked out. So, an attacker can set a ton of cookies to boot out even HttpOnly cookies.
  • Nameless cookies are cookies with a value but no name, added to the specification in 2020 because many servers were doing this anyway. However, as cookies are just strings, the security implications of concatenating regular cookies with nameless cookies were NOT considered. A nameless cookie like =sid=evil; would be parsed as sid=evil;. This could even be used to spoof __Host- and __Secure- cookies from an insecure origin. Firefox and Chrome followed the standard to a tee, resulting in CVEs on their side. The solution was to drop nameless cookies whose value begins with __Host or __Secure.
  • They then looked at how cookies are parsed on the server side. Since there isn't a firm standard on this, many browsers and servers differ. For instance, PHP rewrites all dots, spaces and square brackets in cookie names to underscores. So, an attacker can spoof a secure or host-only cookie by setting ..Host-sid=evil, which will be translated into __Host-sid=evil. ReactPHP URL-decoded the cookie name, causing a similar type of issue. Werkzeug, which is used by Flask, and API Gateway from AWS removed leading equals signs, leading to the same issue as before.
  • Firefox had an issue that desynced the actual cookies being sent from what was returned by the document.cookie API. Since anti-CSRF tokens sent in HTTP headers are often read from cookies, this could create an issue. To find these types of issues, they reviewed the cookie standard thoroughly, then tested both browser and server parsing. Discrepancies revealed interesting desync issues like the one mentioned above.
  • Cookie fixation is a method of setting a cookie as an attacker and having that value be used by the service down the road. For session cookies and CSRF protections, this can be really bad, as the attacker knows a secret value that bypasses protections. They found that many frameworks that use a CSRF token set as a cookie prior to login would also use it while authenticated. Express/Koa/Fastify in the NodeJS stack, Symfony, Flask and CodeIgniter4 were all vulnerable to this fixation issue.
  • They found that the Passport, Fastify and Sails frameworks were vulnerable to trivial session fixation by simply setting the session cookie prior to login. Yikes! That's an easy account takeover. Overall, an amazing set of tricks and things to keep in mind when trying to escalate XSS on a crappy domain or a sub-domain takeover. Both old and new things were presented, which was awesome.
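The PHP name-mangling trick above can be sketched in a few lines; php_style_cookie_name is a stand-in that mimics, not reproduces, PHP's behavior:

```python
# Sketch of the cookie-name mangling spoof: dots, spaces and square
# brackets in incoming cookie names become underscores (mimicking PHP's
# historical request-variable rewriting), so a name an attacker CAN set
# from an insecure context collides with the protected __Host- prefix.
def php_style_cookie_name(name: str) -> str:
    for ch in (".", " ", "["):
        name = name.replace(ch, "_")
    return name

# The browser refuses to let a non-secure page set "__Host-sid"...
attacker_cookie = "..Host-sid"
# ...but after server-side mangling it becomes the protected name anyway.
print(php_style_cookie_name(attacker_cookie))  # __Host-sid
```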

Uncovering a crazy privilege escalation from Chrome extensions - 1332

Derin Eryilmaz    Reference → Posted 2 Years Ago
  • Chrome extensions have lots of power but do have limitations. They can read the DOM, but they can't execute exe files, change settings or many other things. Preventing Chrome extensions from taking over your computer is an important part of the browser's security model.
  • File access for extensions is weird. Extensions can't read files unless the "allow access to file URLs" flag is turned on. Some apps, like the Text app, are able to edit local files as well. Being able to read or write arbitrary files would be really bad for the system.
  • ChromeOS is a super weird OS based upon Chrome. There are no executable files and you do everything in your browser. Since the OS and browser are coupled, a user accesses files through chrome://file-manager and settings through chrome://os-settings. If an extension can run code within the context of one of these chrome:// pages, it can do whatever it wants to the system.
  • The author was poking around the chrome://file-manager page when they saw the URL filesystem:chrome://file-manager/external/Downloads-878f28a3486b11359f7db348414fed3b5a15e573/file.txt in the local storage of the page. Functionally, this is just like a file:// URL but without as many restrictions.
  • From this, they started playing around to see what permissions this had and what could be done. They opened a file that had HTML and a simple JavaScript tag in it. To their surprise, it executed! No CSP or anything else blocked it. This is super weird, especially considering the creation of Trusted Types and other protections.
  • With XSS on a super-privileged chrome:// URI, they knew this could be a big one. So, they dove into what this page had access to. They could read other pages, issue requests to preferences, read/write local files and more. Hype! But how do we get the user to execute this, especially with the random hash in the file name?
  • A Chrome extension can easily download files, so that isn't a problem. As for the hash, it's simply a hash of the username, which can be queried from the standard Chrome extension APIs. With this, they could download a file, derive the hash, open the file and perform very bad actions on the device. Awesome!
  • While reviewing other sections, they found a very similar bug in the filesystem:chrome-extension URI that is specific to each Chrome extension. The URL can read from chrome://resources. More importantly, it can execute scripts in the context of this page as well, giving another chrome:// XSS!
  • This bug existed for 7 years within Chrome! That's pretty wild given how impactful it is. But why did it occur? ChromeOS extended the legacy filesystem URL scheme but didn't consider that it could be rendered from the browser, since it was never meant to be. Since protections were never put in place, it led to an easy XSS.
  • The privilege escalation itself came from a change that made filesystem:chrome:// a real Chrome URI, giving it access to more features. This small change allowed the XSS to go too far. The author has a great takeaway from this: "I think this type of bug is really interesting because it shows that vulnerabilities don't always come from simple mistakes; sometimes, decade-long design choices in massive and complex projects like Chrome/ChromeOS can be exploited in creative ways."

Alchemix Missing Solvency Check Bugfix Review - 1331

Immunefi    Reference → Posted 2 Years Ago
  • Alchemix Finance is a synthetic asset protocol built around tokenizing future yield. Using the DAO, it's possible to access that future yield. This is done by issuing a synthetic token that represents a fungible claim on the assets left by the depositors.
  • The synthetic debt tokens, or alAssets, represent the user's future yield. These are backed by the corresponding underlying asset at a near 1:1 value. They can be realized by swapping the underlying asset for the alAsset over time, or traded on the open market.
  • Within loan-based protocols, there is a process called liquidation. This happens when a user's collateral drops below a certain threshold; loans are over-collateralized so the protocol doesn't lose money. When this happens, the protocol wants the original funds back, but the user took them out with their loan. So, a liquidation is the process of trading the collateral at a discount to recover the loaned asset.
  • The vulnerability is a bad liquidation check, leaving the protocol with bad debt. In particular, a sandwich attack can be used to mess up the price of a given asset. The exploitation of this is pretty weird but makes sense in context.
  • Here's the step by step on how to exploit this missing check:
    1. Deposit 100 stETH and take a loan for 50 alETH. The maximum is 50% in order to prevent the protocol from losing money.
    2. Call the liquidate() function on the loan that was just taken out. Crucially, set the minOut to be 0. We are trying to sandwich the trade call made within this.
    3. The unwrap() function is susceptible to sandwich attacks. So, we can make the trading ratio between the two tokens next to nothing!
    4. The protocol sends the liquidator the collateral for providing the small amount of loaned out value.
    5. The protocol didn't get enough of the tokens to pay off the bad debt but still paid out for the liquidation! We profited almost 50% from this.
  • This essentially allows us to bypass the typical solvency checks on the account. The attack gives us back all of the collateral and lets us keep the loaned asset. This is done by sandwiching our own trade in order to create a really bad ratio of asset A to asset B. Neat attack!
  • Overall, a weird use case for sandwiching. Most of the time, we're sandwiching other users; in this case, we sandwiched ourselves to break the math being done. To fix this, Alchemix now ensures that the remaining collateral from a liquidation (if any) divided by the debt is above the minimum collateralization ratio.
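The numbers in the steps above can be sketched as toy arithmetic; the values are illustrative only, rigged_rate is an invented variable, and alETH is treated as roughly equal to stETH at the honest peg:

```python
# Toy walk-through of the self-sandwich (not the real Alchemix accounting).
deposit = 100.0       # stETH posted as collateral
loan = 50.0           # alETH borrowed at the 50% maximum, kept by the attacker

# After sandwiching the pool used by unwrap(), 1 stETH buys a huge amount
# of alETH, so repaying the 50 alETH debt costs almost no collateral.
rigged_rate = 100.0   # alETH received per stETH, post-sandwich (invented)

collateral_spent = loan / rigged_rate        # 0.5 stETH covers the whole debt
collateral_returned = deposit - collateral_spent

# With minOut=0, the tiny swap proceeds are accepted without complaint, so
# the attacker ends with most of the collateral PLUS the borrowed alETH.
attacker_net = collateral_returned + loan - deposit
print(attacker_net)   # 49.5 -- nearly 50% profit on the deposit
```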

Metamask Snaps: Playing in the Sand - 1330

OtterSec    Reference → Posted 2 Years Ago
  • Metamask is a popular crypto wallet in the web browser. Even if you're not using it to store your funds, it's likely interacting with your hardware wallet. Obviously, having a safe crypto wallet is a must.
  • Metamask supports snaps, which are modules built to extend the functionality of Metamask. These run in a sandboxed environment with very serious permission boundaries. Metamask warns users of each permission that a snap wants, putting the burden of security onto the user in this case.
  • The sandbox is composed of three parts: an iframe, LavaMoat and the SES sandbox. Browser iframes are a well-known way to isolate risky code. In this case, Metamask has written an API that allows the iframe to communicate with Metamask to perform various actions.
  • LavaMoat is a policy mechanism that heavily limits the permissions a given snap has. To prevent supply chain attacks, there are limitations on which packages can interact with the Metamask post-message API.
  • The final layer of protection is the Secure EcmaScript (SES) sandbox. It locks down the JavaScript builtins to prevent prototype pollution bugs and removes sensitive info from some functions and objects. SES also has compartments that restrict the globalThis variable to only secure functions.
  • With all of these protections in mind, they set out to break the security model. When processing an incoming call from the sandbox, much validation is done. However, we can do some shenanigans with JSON objects to cause problems! Using the Metamask iframe API, we can overwrite a call to toJSON() with our malicious content. Since this function is used later in the process, we pull the ol' switcheroo on the running code!
  • The impact of this is quite severe. The promised validation and permissions model has been broken. A prompt to sign a malicious transaction can be triggered from the snap, even if its permissions say that it can't. If you're reading this and are confused, go read the proof of concept in the post; it was helpful for seeing what's going on.
  • Overall, a good breakdown of the Metamask snap security model. Even with this bug, arbitrary transactions cannot be run, since users still have to sign off on them.
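The toJSON() switcheroo is JavaScript-specific (JSON.stringify invokes an object's toJSON hook), but the time-of-check/time-of-use pattern behind it can be sketched in Python; the class and values here are invented analogies:

```python
# Python analogy of the snap bug: validation reads one value, but the value
# that actually gets serialized and acted on later is different.
import json

class Switcheroo:
    """Reports a benign value the first time it's read, malicious after."""
    def __init__(self):
        self.reads = 0
    def value(self):
        self.reads += 1
        return "benign" if self.reads == 1 else "malicious"

req = Switcheroo()
assert req.value() == "benign"                    # the permission check passes...
serialized = json.dumps({"method": req.value()})  # ...but this is what runs
print(serialized)                                 # {"method": "malicious"}
```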

Rounding Bugs: An Analysis - 1329

Robert Chen - OtterSec    Reference → Posted 2 Years Ago
  • Rounding bugs that lead to massive loss of funds have eluded me for a while. I see them in large hacks but don't understand where they're useful and how to find them. This post is a good step for me on that journey, through the lens of accounting in the share and token model.
  • In Solidity, there is only fixed-precision integer math. For instance, you can have 1 or 2 but nothing in between. Some things, like ERC20 tokens, treat the last 18 digits of a value as decimals, though. Most of the time, the difference between 1 and 2 may not be a big deal. However, given the right circumstances, it can make a huge difference.
  • Many systems, such as ERC-4626 tokenized vaults, use the token-to-share model. When depositing tokens, the user gets back shares, a percentage claim on the system. These shares directly correspond to the value provided but may help with rewards as well. Over time, these shares can accrue more and more value.
  • The authors give an example. Say shares and tokens begin at a one-to-one ratio and we start with 1000 shares. After accruing fees, the new ratio is 1001 tokens to 1000 shares. If a user redeems 999 of these shares, we run into a precision problem. Should they get 1000 tokens or 999 tokens?
  • This demonstrates the first of the issues: rounding direction. Generally speaking, we should round against the user. So, we'll round to 999 tokens in this case. The direction of rounding is an important thing to consider, as small discrepancies can lead to lost value over time if exploited millions of times.
  • Now, back to the weird situation from above: after that redemption, only 2 tokens and 1 share are left in the vault, so the ratio is now 2:1 for tokens to shares! If a token is donated, this becomes even higher at 3:1. There are many cases of this inflation leading to weird situations that resulted in stolen funds.
  • For Radiant Capital, which was hacked for $4.5M, it was a simple issue like the one described above. If the inflation was done to make each share worth $1000, then when a user withdrew $1999, only a single share could be burned. This gives them a free $999, because they still have one share left.
  • In Wise Lending, there was a rounding issue in liquidation code - you shouldn't be able to bankrupt yourself. This is best explained by example. We have one share worth $1000 and borrow $500 against it. Now, we try to withdraw $1 of our collateral. The code rounds up to use our one share worth $1000, making the position liquidatable. This is rounding against the user but still causes problems. If they round down instead, they also have a bug, since we could steal money that way. The fix? Force shares to be withdrawn and not the token amount itself.
  • In DeFi, there are a few important prerequisites for exploiting these rounding attacks. First, the share value needs to be inflatable via the methods described above. Next, we need a nearly empty pool. Finally, the weird rounding or accounting needs to occur. To prevent this, the easiest way is to add assets to the vault at deployment or mint artificial shares at the beginning.
  • If we zoom out from the vault case, there are some bigger takeaways. First, having large and small values interact can cause major issues with rounding. Second, rounding directions are important to consider. By default, Solidity rounds down, so evaluating each division and the consequences of its rounding is a good takeaway. Finally, edge cases are where the exploitability of these things lives. Good post!
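The share math above can be checked with Solidity-style integer division, using the toy numbers from the example:

```python
# Integer-only vault math, Solidity-style: 1001 tokens backing 1000 shares.
tokens, shares = 1001, 1000

def shares_to_tokens(s: int) -> int:
    return s * tokens // shares      # floor division: round against the user

print(shares_to_tokens(999))         # 999, not 1000

# Radiant-style inflation: if each share is pushed to be worth ~1000 tokens,
# a withdrawal of 1999 tokens burns only floor(1999 / 1000) = 1 share.
price_per_share = 1000
burned = 1999 // price_per_share
print(burned)                        # 1 share burned for 1999 tokens out
```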

How to get tf out (crypto bridge edition) - 1328

Madhav Goyal    Reference → Posted 2 Years Ago
  • A blockchain bridge is used when you want an asset owned on one blockchain to be usable on another. Having funds spread across different blockchains makes them harder to use, so bridges are a good thing. This post dives into the basic components of a good bridge.
  • Three components are necessary. One contract needs to accept proof that a state change indeed happened. Another contract needs to handle incoming calls for the asset transfer. Finally, a relayer acts as a communication layer between the two blockchains.
  • The relayer and proof mechanisms are the big differentiators between bridges. One method is a Trusted Threshold Network: a mechanism to handle cross-chain requests that are verified by some trusted party. This is used by Gravity Bridge and Wormhole.
  • The other type of relayer is a light client bridge. In this type, a quick proof is run by the client to see if a transition occurred or not. Here, we're relying more on the underlying cryptography than on an honest voting process, but this has its limitations.
  • Does ZK solve this? It allows for some confidentiality when going between chains but comes at the cost of large computational overhead. Since the ZK verification must be done on chain, its computational requirements are important to consider for gas costs. Gnosis Succinct Bridge and Layer Zero are using ZK light clients with some chains.
  • How do we actually transfer value? Well, we don't. Instead, we use lock/mint and burn/unlock. When value is sent from chain A to chain B, we lock the asset in the smart contract on chain A. On chain B, we create a wrapped version of the asset that is pegged to the original. Going backwards, we burn the representation on chain B, then release the original on chain A.
  • What if a bridge gets hacked? Some bridges have a robust set of nodes, making this difficult to do. However, some have a single node for proofs and other duties, making it entirely possible. Many bridges have escape hatches that allow for quick exits for a user.
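The lock/mint and burn/unlock flow can be sketched as toy bookkeeping; there are no proofs or relayer here and all names are invented:

```python
# Minimal sketch of bridge accounting: the wrapped supply on chain B should
# always mirror the amount locked on chain A.
locked_on_a = 0      # originals held by the bridge contract on chain A
wrapped_on_b = 0     # wrapped supply minted on chain B

def bridge_a_to_b(amount: int) -> None:
    global locked_on_a, wrapped_on_b
    locked_on_a += amount    # lock the asset on chain A
    wrapped_on_b += amount   # mint the pegged representation on chain B

def bridge_b_to_a(amount: int) -> None:
    global locked_on_a, wrapped_on_b
    assert wrapped_on_b >= amount, "cannot unwrap more than was minted"
    wrapped_on_b -= amount   # burn the wrapped asset on chain B
    locked_on_a -= amount    # release the original on chain A

bridge_a_to_b(10)
bridge_b_to_a(4)
print(locked_on_a, wrapped_on_b)   # 6 6 -- wrapped supply matches locked funds
```

A bridge hack is, in these terms, anything that lets wrapped_on_b exceed locked_on_a, leaving wrapped tokens unbacked.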

Solidity Mutation Testing - 1327

Rare Skills    Reference → Posted 2 Years Ago
  • Finding bugs dynamically via testing frameworks is amazing for a development team. Fewer security issues and general bugs get through, and it requires less manual effort. There are many ways to go about testing; in this article, they introduce the concept of mutation testing.
  • The idea is simple: intentionally introduce bugs into the code and see if the test suite catches them. Removing a modifier, flipping comparison operators and deleting code are all great examples of this. By doing this, you test the capability of the test suite to actually find bugs.
  • Doing this manually would theoretically work; however, it would be time consuming. So, the people at Rare Skills built a tool that does this for Solidity! The tool, Vertigo-rs, a Foundry add-on, finds testing gaps by randomly mutating the code under test. Overall, an interesting way to test code; I'm curious to see how much this takes off.
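A toy version of the idea in Python (this is not how Vertigo-rs works internally; it mutates Solidity, not Python):

```python
# Tiny illustration of mutation testing: flip an operator in the code under
# test, re-run the suite, and see whether the mutant "survives".
def make_is_adult(src: str):
    namespace = {}
    exec(src, namespace)          # compile the (possibly mutated) source
    return namespace["is_adult"]

original = "def is_adult(age):\n    return age >= 18\n"
mutant   = original.replace(">=", ">")   # the mutation: >= becomes >

def weak_suite(fn):
    # misses the boundary case entirely
    return fn(30) and not fn(10)

def strong_suite(fn):
    # covers the boundary at age == 18
    return fn(30) and not fn(10) and fn(18)

print(weak_suite(make_is_adult(mutant)))    # True  -- mutant survives: gap found!
print(strong_suite(make_is_adult(mutant)))  # False -- mutant killed
```

A surviving mutant doesn't mean the code is broken; it means the test suite wouldn't have noticed if it were.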

[GitLab] Account Takeover via password reset without user interactions - 1326

DayZeroSec    Reference → Posted 2 Years Ago
  • GitLab is a platform similar to GitHub. Recently, a researcher found an awful password reset issue that borks the security of the entire system.
  • I love the opening sentence from the DayZeroSec folks: "Dynamic typing strikes again!" Languages like Java, C# and others are strict about the data structures being passed in. In Ruby, PHP, Python and others, there are virtually no rules. I've definitely written code over the years that returns different types in different situations, which I know I shouldn't do.
  • When passing in an array for the email instead of a string, weird things happened. The lookup function for emails took in an array OR a string, but would only parse the first email in the array.
  • When actually sending out the password reset tokens, though, all of the emails in the array would be used. According to Z on the audio version of the podcast, the function for parsing the email to reset had email in its name, while the second one had emails.
  • Using this, an attacker can trigger a password reset on a victim that sends the link to the attacker's own email. To fix this issue, you can't even specify an email anymore. Instead, it's derived from the user record itself, which is much more secure. How do people find these types of bugs!? Gotta love the creativity of these folks.
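The type confusion can be sketched like so; the function names are hypothetical and the real code is Ruby inside GitLab:

```python
# Sketch of the email/emails mismatch: one code path takes only the first
# entry of an array, while another happily iterates over all of them.
def find_user(email):
    # lookup accepts a string OR an array, but only checks the first entry
    if isinstance(email, list):
        email = email[0]
    return {"user": "victim", "lookup_email": email}

def send_reset_tokens(email, token="reset-123"):
    # the mailer, however, sends the token to EVERY address it was given
    targets = email if isinstance(email, list) else [email]
    return [(t, token) for t in targets]

payload = ["victim@example.com", "attacker@evil.example"]
print(find_user(payload)["lookup_email"])   # victim@example.com -- lookup succeeds
print(send_reset_tokens(payload))           # token also mailed to the attacker
```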