Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

ASA-2025-004: Non-deterministic JSON Unmarshalling of IBC Acknowledgement can result in a chain halt - 1606

Zygimantass    Reference →Posted 1 Year Ago
  • Blockchains are effectively code that runs on a bunch of different computers. Naturally, it's important that all of these computers have the same result. If the output is different between computers, then consensus can fail. If consensus fails, then the entire blockchain will likely stop working.
  • This vulnerability affects the Cosmos SDK IBC Go library. When deserializing a cross-chain acknowledgement, the JSON can be unmarshalled differently in some cases. Why is JSON non-deterministic here? Probably because it didn't NEED to be deterministic in the past. Even an extra space can cause issues here.
  • Here's the PR for the fix. It simply unmarshals and remarshals the ACK packet data, then compares the values. If some weird non-deterministic behavior were happening here, this comparison would fail.
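The round-trip idea behind the fix can be sketched in Go. This is an illustration of the concept, not the actual ibc-go code; the function name is mine:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// roundTripsExactly reports whether raw survives an unmarshal→remarshal
// cycle byte-for-byte. If two nodes could interpret the bytes differently
// (extra whitespace, odd encodings), the round trip exposes it.
func roundTripsExactly(raw []byte) bool {
	var v interface{}
	if err := json.Unmarshal(raw, &v); err != nil {
		return false
	}
	out, err := json.Marshal(v)
	if err != nil {
		return false
	}
	return bytes.Equal(raw, out)
}

func main() {
	fmt.Println(roundTripsExactly([]byte(`{"result":"AQ=="}`)))    // canonical form: true
	fmt.Println(roundTripsExactly([]byte(`{ "result": "AQ==" }`))) // extra spaces: false
}
```

Rejecting anything that doesn't round-trip forces every node to agree on one canonical byte representation.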

Hacking High-Profile Bug Bounty Targets: Deep Dive into a Client-Side Chain - 1605

Vitor Falcao    Reference →Posted 1 Year Ago
  • Client-side path traversal (CSPT) is classic path traversal, but on the client side. In particular, it's about tricking client-side code into making requests to the wrong API endpoint. This can be used to get XSS via an incorrect content type, among several other issues. Pairing it with open redirects can be useful as well.
  • At this private in-person competition, the author noticed an API vulnerable to CSPT. This API was used for signing an S3 bucket path on the backend, where the user-controlled filename was part of the URL. By URL-encoding the path, we get /categories/..%2Fredirect%3Furl%3Dmalicious.com, which resolves to /categories/../redirect. Neat!
  • By itself, this isn't very helpful, though. A friend of the author's noticed that using ?redirect=true on the API would result in a 301 redirect. This returns the file URL instead of the raw contents of the file. This means that we may be able to get XSS from it!
  • Initially, they ran into some CORS issues. Since CloudFront was used for caching, it was caching the initial download of the file without the CORS header. When trying to access the file from the page, it would then fail with a CORS violation. By ONLY using the redirect route instead of the regular file download, this issue can be avoided. Sometimes, debugging web things is complicated and annoying.
  • This XSS is a self-XSS because the store is bound to another account or only during the buying process. The application was vulnerable to a login/logout CSRF as well, giving them another primitive to work with.
  • The final piece of the puzzle surrounds cookies. Cookies are scoped to a particular path, and browsers limit how many cookies can be set. Both of these are important for exploitation.
  • Here's the full exploit path:
    1. Add the malicious cart item with the self-XSS to a dummy account.
    2. Convince a user to visit your web page. Using this site, a logout/login CSRF logs them into the attacker's dummy account.
    3. Self-XSS is triggered on the dummy account. This will cookie bomb (hit cookie limit) the current session, then add our own cookie at the proper path for the CSPT.
    4. The server will force logout the user because of the cookie bomb.
    5. The original XSS opens the login page to prompt the user to log in again. This is the trick - there are two logged-in users now! The actual user and the dummy user at the particular cookie path.
    6. The logged-in user is redirected to the CSPT-vulnerable path for the second XSS payload.
    7. The second XSS payload calls any of the state-changing APIs they want with the main user's creds.
  • The usage of scoped cookies to have two session cookies be valid is super clever! It's crazy how much effort was required to exploit this. Client-side security is baffling to me.
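The core CSPT trick from step one of the chain is a decode-then-resolve mismatch. A purely illustrative Go sketch (the target's backend code is not public, so this only models the behavior described above):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// backendResolve mimics a backend that percent-decodes the request path and
// then resolves it. The client-side code never decoded, so "..%2F" looked
// like a harmless filename segment to it.
func backendResolve(requestPath string) string {
	decoded, err := url.PathUnescape(requestPath)
	if err != nil {
		return requestPath
	}
	return path.Clean(decoded) // ".." segments are resolved here
}

func main() {
	fmt.Println(backendResolve("/categories/..%2Fredirect%3Furl%3Dmalicious.com"))
	// -> /redirect?url=malicious.com
}
```

The client sees one opaque segment; the backend sees a traversal plus a query string, which is exactly the gap the exploit rides.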

Do you know this common Go vulnerability? - 1604

LiveOverflow    Reference →Posted 1 Year Ago
  • Go is built to run concurrent code. In this CTF challenge, a subtle concurrency issue is abused.
  • The challenge has a key-value store HTTP service. The service also has an arbitrary file read vulnerability via the file name parameter. However, the flag file cannot simply be read, because there is a flag string check.
  • This protection can be bypassed using the /proc file system. However, this requires the process to have an open file descriptor for the flag. This works but wasn't the intended solution. Still, a super clever abuse and solve!
  • The intended solution is a subtle Go issue. The /get and /set HTTP handlers allow concurrent access, and the err variable used for the arbitrary file read's flag check is global! This means that other handlers, such as /set, write to the same variable.
  • So, here's how to exploit it:
    1. Use the arbitrary file read to read the flag on the /get API. This will return an error because of the string check.
    2. Use /set to overwrite the shared error variable; a successful call clears it to nil.
    3. The err != nil check in /get no longer triggers, because err was cleared from the other goroutine.
    4. Flag is read!
  • Concurrency in Go is a core component of the design. As a result, insecure concurrent usage should be checked for. I loved both the intended and unintended solutions for this!
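The steps above can be reconstructed in miniature. This is a sketch of the bug's shape, not the challenge's actual code: the handler names are mine, and a channel forces the racy interleaving deterministically (in the real challenge you'd spam /get and /set until the timing lines up):

```go
package main

import (
	"errors"
	"fmt"
)

// err is declared at package level, so every handler shares it -- this is
// the bug. A local `err` inside each handler would be safe.
var err error

// step exists only to force the race's losing interleaving on demand.
var step = make(chan struct{})

// get models the /get handler: the blocklist check sets the shared err,
// then checks it -- but another goroutine can clear it in between.
func get() string {
	err = errors.New("file contains the flag string")
	step <- struct{}{} // hand control to set()...
	<-step             // ...and take it back before checking
	if err != nil {
		return "denied"
	}
	return "flag contents leaked"
}

// set models a successful /set call clobbering the shared err with nil.
func set() {
	<-step
	err = nil
	step <- struct{}{}
}

func main() {
	go set()
	fmt.Println(get()) // the flag check is bypassed
}
```

With a handler-local `err := ...` instead of the package-level variable, the interleaving would be harmless, which is the whole lesson.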

No More Bets - How Ctrl+F led to breaking Polymarket's polling markets - 1603

Trust Security    Reference →Posted 1 Year Ago
  • Gnosis wrote the Conditional Token Framework (CTF). It is a complex tree of tokens, each representing some subset of choices. When a bet is made, users deposit collateral for a "full" (all options) token. Then, they trade sub-tokens in an external market to eventually arrive at a final position. Polymarket is a web3 prediction market company that uses CTF. While looking for variants of a bug found during an audit of Butter Conditional Funding Markets, Trust found a vulnerability in Polymarket.
  • In CTF, the function prepareCondition() creates the new condition for a position. This takes in an oracle, a question, and an answer count as parameters. After this has been done, splitPosition is used to split into the various outcomes. There is a very crucial property: prepareCondition() can only be called once per condition.
  • Before Trust audited Butter, a patch with pretty clear security implications had been submitted to CTF. Most protocols using it are permissionless; if an attacker can submit arbitrary parameters to prepareCondition(), they can prevent others from doing so in the future. This is a clear denial-of-service issue in integrations of the CTF library.
  • When Trust comes across a bug, they check whether others have made the same mistake. So, they went to GitHub and searched for prepareCondition() calls that weren't wrapped correctly. In Polymarket, they noticed that an admin calls initialize() to create a new poll. By frontrunning this submission, it's possible to ensure that no questions can ever be answered.
  • Trust is/was on the Immunefi suspension list, so they tried to reach out to Polymarket directly. When doing so, Trust initially didn't want to disclose the exact issue until a payment range was decided. Even though Polymarket refused to give a range, the report was handed over anyway.
  • According to Polymarket, the bug had already been reported to them through Immunefi, making this a dup. Since it had been known prior and things like Polygon fastlane can prevent it, it wasn't an issue to them. Additionally, Polymarket pointed out that this impact wasn't directly in scope, which is a terrible reason given there's real user impact. The bug having already been paid out, and therefore nothing going to Trust, is a legit reason not to pay, though.
  • Unfortunately, Polymarket is not going to fix the issue. This puts users at risk, and all it takes is a bad actor to prevent any/all usage of the platform. According to Trust, this was to dodge paying a big bug bounty under Immunefi terms, but it's hard to say. I do wish the security issue were documented as a known issue though - that's probably something more programs should do imo.
  • Trust seems to be very good at variant analysis, which is an awesome way to find bugs. I've been doing this lately and had pretty good success at it. Interesting bug and bug discovery!
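The once-only semantics plus a frontrun is the entire DoS. A toy Go stand-in for the Solidity contract (all names invented, purely to show the pattern):

```go
package main

import (
	"errors"
	"fmt"
)

// registry mimics the once-only behavior of prepareCondition: the first
// caller to claim a condition ID wins; everyone after gets an error.
type registry struct{ prepared map[string]bool }

func (r *registry) prepareCondition(id string) error {
	if r.prepared[id] {
		return errors.New("condition already prepared")
	}
	r.prepared[id] = true
	return nil
}

func main() {
	r := &registry{prepared: map[string]bool{}}
	// The attacker sees the admin's pending initialize() in the mempool
	// and frontruns it with the same condition ID:
	fmt.Println(r.prepareCondition("poll-42")) // attacker: <nil>
	fmt.Println(r.prepareCondition("poll-42")) // admin's call now fails forever
}
```

On-chain, "first caller wins" plus a public mempool means any permissionless once-only function is a frontrunning target.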

LayerZero’s Cross-Chain Messaging Vulnerability - 1602

Heuss    Reference →Posted 1 Year Ago
  • Layer Zero is a cross-chain messaging protocol. It allows for the customization of various entities involved in the protocol. In particular, the relayer who triggers the message on the destination chain can be set arbitrarily.
  • Blockian found a race condition in this functionality. By having the relayer set to LZ, changing it to your own with zero fees, and then switching it back, LZ would relay the message for free. The remediation was to modify the protocol contracts, which led to another, even worse security issue.
  • The function setConfig is used to change the oracle/relayer of a UA. If this is set in the same transaction that a message is sent, then the relayer should NOT relay the message. Only the owner of a UA is able to change the configuration. So, this seems like a sane remediation.
  • The problem was that the relayer only checked whether an AppConfigUpdated event happened at all. It wasn't checking that the UA that triggered the update was the same one whose message was being executed. This made it possible to get the relayer to drop messages from legitimate apps, such as Stargate.
  • The consequences are somewhat LZ-specific. LZ has an increasing nonce that requires every message to be processed in perfect order. By dropping one message, all messages after that point become stuck. In the case of a well-used app like Stargate, this is pretty neat.
  • LZ could manually force the stuck message through, though. Although things would be stuck for a bit, it wouldn't be permanent, so this was paid out as a medium instead of a critical. To fix the issue, the relayer just needs to check that the SetConfig event's UA matches the TX being submitted.
  • An interesting part is that the testing had to be performed on a testnet, since none of the off-chain infrastructure is open source. If somebody malicious had discovered this first, major damage could have been caused to LZ.
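The vulnerable check versus the fix might look roughly like this. A hypothetical Go sketch only; the relayer is closed source, so the event shape and function names are invented:

```go
package main

import "fmt"

// configEvent models a SetConfig/AppConfigUpdated-style event observed in
// the transaction being processed.
type configEvent struct{ ua string }

// shouldRelayVulnerable drops the message if ANY config update happened,
// regardless of which UA it belonged to -- so an unrelated app updating
// its own config could get a Stargate message dropped.
func shouldRelayVulnerable(msgUA string, events []configEvent) bool {
	return len(events) == 0
}

// shouldRelayFixed only drops the message when the SAME UA updated its
// config in the same transaction.
func shouldRelayFixed(msgUA string, events []configEvent) bool {
	for _, e := range events {
		if e.ua == msgUA {
			return false
		}
	}
	return true
}

func main() {
	events := []configEvent{{ua: "attacker-app"}}
	fmt.Println(shouldRelayVulnerable("stargate", events)) // false: dropped!
	fmt.Println(shouldRelayFixed("stargate", events))      // true: relayed
}
```

The fix is a one-line scoping change: check the event's subject, not just its existence.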

Blackboxing LayerZero Labs’ off-chain Relayer for 25,000$ - 1601

Blockian    Reference →Posted 1 Year Ago
  • Layer Zero is a cross-chain messaging protocol. The architecture is as follows:
    1. User Application (UA) calls endpoint.
    2. Endpoint emits an event on chain A.
    3. Off-chain infrastructure attests the message.
    4. Relayer sends the message through on chain B.
    5. UA receives the message on chain B.
  • The application allows for configuration of the relayer and the oracle per application. This means that anyone can implement an off-chain relayer and use it themselves. The author had a question: "when does the LayerZero Labs Relayer stop listening to messages?"
  • In the contract UltraLightNodeV2, the function send() handles the event emission process for a cross-chain message. Interestingly, the event does NOT emit the relayer address itself. This piqued the author's interest! If it's not in the event, then the LZ relayer must keep track of each User Application (UA) that it supports. This feels racy.
  • Remember, there's no source for the off-chain infra! So, they started asking questions... what happens if a user changes their config? They submitted an on-chain PoC where the relayer and oracle price submissions were 0, then changed the oracle/relayer back to the original LZ default.
  • By doing this, the LZ relayer relayed the transaction without getting paid during the submission process. This means you can use LZ for free and drain the funds from the LZ relayer wallet. Naturally, if these funds are drained, the other apps would no longer work.
  • The smart contracts are open source but none of the off-chain code is. The author decided to black-box test the code to see how it would react. To me, this is interesting but crosses an important trust threshold. What if a malicious actor was watching these transactions and then mimicked the exploit? Unlike web2, where your traffic is your own, doing live testing on-chain could lead to further issues.
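The config-flip race can be modeled as a toy state machine. Hypothetical Go sketch; the real relayer logic is closed source, so every name here is invented:

```go
package main

import "fmt"

// ua models a User Application and its currently configured relayer.
type ua struct{ relayer string }

// msg records which relayer was configured (and therefore paid) at send time.
type msg struct {
	paidTo string
	fee    uint64
}

func (u *ua) send(fee uint64) msg { return msg{paidTo: u.relayer, fee: fee} }

// Vulnerable check: LZ relays any message from a UA that is CURRENTLY
// configured to use it, without confirming LZ was paid at send time.
func lzRelaysVulnerable(u *ua, m msg) bool { return u.relayer == "LZ" }

// Fixed check: also require that the send-time fee went to LZ.
func lzRelaysFixed(u *ua, m msg) bool { return u.relayer == "LZ" && m.paidTo == "LZ" }

func main() {
	app := &ua{relayer: "LZ"}
	app.relayer = "attacker" // step 1: point the config at a zero-fee relayer
	m := app.send(0)         // step 2: send, paying nothing
	app.relayer = "LZ"       // step 3: flip back before LZ processes it
	fmt.Println(lzRelaysVulnerable(app, m)) // true: free relay
	fmt.Println(lzRelaysFixed(app, m))      // false: unpaid message refused
}
```

The race exists because the relayer's decision reads *current* state while payment happened under *past* state; the fix pins the decision to send-time facts.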

Microsoft Edge Developer VM Remote Code Execution - 1600

Roman Mueller    Reference →Posted 1 Year Ago
  • The Microsoft Edge Developer VMs were images that Microsoft published to make testing different versions of Edge or IE easier. One day, while looking at processes on Windows, the author noticed a Ruby script associated with Puppet running. Puppet is a configuration management system that they had seen in the past.
  • Confusingly, the Puppet configuration was never set up. By initializing it yourself, you're able to take control of the instance. This requires the ability to make the hostname puppet resolve to a particular IP, though.
  • Software that is left unconfigured, or that an attacker can reconfigure, is a real bug class that needs to be considered. Low impact, but still interesting nonetheless.

Achieving RCE in famous Japanese chat tool with an obsolete Electron feature - 1599

RyotaK    Reference →Posted 1 Year Ago
  • Chatwork is a Japanese chat application similar to Slack. It is an Electron desktop app.
  • While reviewing JavaScript files, they noticed the usage of shell.openExternal(). In Electron, this is a known-bad sink that can open arbitrary URLs. Notably, passing in a file:// URL with a user-controlled file can lead to code execution. This was available in the preload context, meaning it was reachable before the node API was disabled in the web-browser portion. This isn't code execution yet, but it is a good start.
  • Digging deeper into the code, they found an instance of BrowserWindow with webviewTag set to true. This is a deprecated feature with dire security consequences when handled incorrectly. By injecting arbitrary webview tags, it's possible to disable security features in that window, including in the preload context.
  • Again, we have a way to execute arbitrary code within a window, but we still need a way to load our own content. The vulnerable code path was opened via the function createNewWindow with a user-controlled but validated URL. In particular, a list of very specific patterns was verified to prevent adding the webview tag that the author wanted.
  • Upon testing the service, they found that the server-side and client-side validation differed slightly. The backend server URL-decoded the request path but the Electron app did not. This means we can use a directory traversal against the Chatwork app with something like https://www.chatwork.com/gateway/download_file.php%2F..%2F..%2F to circumvent the location check. Now, using the OAuth redirect, we can go to an arbitrary page!
  • Here's the full flow of the attack:
    1. User clicks on a malicious link.
    2. The link uses a directory traversal and URL encoding to use a redirect from the OAuth page to an attacker controlled site to be rendered within Electron.
    3. The malicious site injects a webview tag, which loads a file from an SMB share.
    4. The file from the SMB share then exploits openExternal to execute native code on the computer.
  • Overall, a great chain of bugs! The progression and timing of each bug was interesting to me. Some folks start from JavaScript control, yet others start from the bottom of the exploit chain. To me, it depends on where you see the impact. The ability to load a web page in the context of Electron is a good primitive, but not a game-over bug by itself.
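The validation bypass in this chain is the same validate-raw / decode-later mismatch seen elsewhere on this page. A Go sketch of the shape (the prefix pattern and target path are invented stand-ins for the app's real allowlist):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// clientAllows models the Electron-side check: a match on the RAW,
// still-encoded URL string.
func clientAllows(raw string) bool {
	return strings.HasPrefix(raw, "https://www.chatwork.com/gateway/download_file.php")
}

// serverPath models what the backend actually serves: it percent-decodes
// the path (which the client-side check never did) and resolves "..".
func serverPath(raw string) string {
	u, err := url.Parse(raw) // url.Parse stores the decoded path in u.Path
	if err != nil {
		return ""
	}
	return path.Clean(u.Path)
}

func main() {
	crafted := "https://www.chatwork.com/gateway/download_file.php%2F..%2F..%2Foauth"
	fmt.Println(clientAllows(crafted)) // true: the prefix check passes
	fmt.Println(serverPath(crafted))   // /oauth -- somewhere else entirely
}
```

Whenever two components disagree on when decoding happens, the stricter check is effectively checking a different string than the one that gets used.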

Make Invalid States Unrepresentable - 1598

Andrew Watson    Reference →Posted 1 Year Ago
  • The article begins with a hypothetical. You have a class Person with a field called age. What type should it be?
  • The first suggestion is a String. This is obviously wrong, but why is it bad? It's bad because validation would need to be performed on any and every operation. An example would be the age "Jeff". This could be done with "stringly-typed" data but is super annoying to do.
  • The next is an Int. It's easier to write and read, and it fails fast. This is better than the String type because we remove the capability for many invalid states! The point of the article is that those invalid states are now unrepresentable.
  • There are still many invalid states with an Int, though. For instance, -1 and 90210 are representable in the program but are invalid ages. The goal is to constrain the type to make these invalid states also unrepresentable.
  • In a statically-typed language, runtime assertions can be added. For instance, an assertion that throws an error if the age is less than 0 or greater than 150. This is an integer with constraints.
  • The next, and final, option they consider is a dedicated age type. One problem with the current approach is that an integer used for an age is interchangeable with an integer used for a weight. So, having an explicit type for the age, as opposed to a bare integer, works well. They use the newtype pattern from Haskell to illustrate this.
  • They do note that the model needs to be done correctly, which takes time. It's easier to move from more specific to less specific types than the reverse. So, prefer specificity over generalization. The restrictions being added should always be carefully thought out.
  • Overall, a great post. The core concept of making invalid states unrepresentable is a good development principle that will stick with me for a while!
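Go has no Haskell-style newtype, but a struct with an unexported field plus a validating constructor gets close. A sketch using the article's 0–150 bound:

```go
package main

import (
	"errors"
	"fmt"
)

// Age is a distinct type: you can't accidentally pass a weight where an
// age is expected, even though both are backed by an int. The unexported
// field means code outside this package must go through NewAge.
type Age struct{ years int }

// NewAge is the only constructor, so every Age in the program is known to
// be in range -- the invalid states are unrepresentable past this point.
func NewAge(years int) (Age, error) {
	if years < 0 || years > 150 {
		return Age{}, errors.New("age out of range")
	}
	return Age{years: years}, nil
}

func main() {
	if _, err := NewAge(-1); err != nil {
		fmt.Println("rejected:", err)
	}
	a, _ := NewAge(42)
	fmt.Println("ok:", a.years)
}
```

Downstream code that receives an Age never re-validates it; the type itself is the proof.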

What Okta Bcrypt incident can teach us about designing better APIs - 1597

n0rdy    Reference →Posted 1 Year Ago
  • Okta had an interesting security incident. If the username was 52 characters or longer, then ANY password would be sufficient for logging in. If the username was 50 characters, only the first two characters of the password mattered. In Okta, the password hash input was the concatenation of the userId, username, and password.
  • Why did this happen? The hashing function: BCrypt! BCrypt limits its input to 72 bytes. Since the user ID and username were included, it was possible to go above this. In some libraries, this leads to silent truncation of the input. With how much data Okta was prepending before the password, the entire password could be truncated away.
  • The author was curious why the library even allowed this in the first place. A simple check on the input length in the library would be sufficient to prevent this from happening. They evaluated several library implementations in different languages, all of which handled this case differently. Some errored out and some silently truncated the data. Why the truncation? They were conforming to the BSD implementation from years ago!
  • My personal favorite part of the article is the end, where the author goes into secure API design. Recently, this has become a bigger concern of mine at my job, so it was interesting to see these points laid out.
  • The first point is "Don't let people use your API incorrectly." An API should explicitly reject invalid input in order to prevent errors like this. If the dangerous functionality is required, gate it behind a feature flag or an unsafe variant of the function.
  • The next point was Be predictable. Good API design should be intuitive and obvious. Of course, this is subjective but we can use some common sense here.
  • No ego is the next one. Expecting users to read every bit of documentation or fully understand the implementation is totally unreasonable. Making systems easy to use for novice users should be the goal, with more advanced functionality reserved for advanced users. Good input validation goes a long way here.
  • The final point of note is Be Brave. To the author's point, be the new solution that is better. It's easy to fall back onto old implementations since they've always been there. Do something to make the world a better place.
  • Overall, an interesting read that further evaluates the Okta issue. I enjoyed the parts about secure API design the most that used Okta as a case study.
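The truncation failure mode is easy to demonstrate with the standard library. Here sha256 stands in for bcrypt (which isn't in Go's stdlib); the silent 72-byte truncation, not the hash itself, is the point, and the userId/username values are invented:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// truncatingHash models a bcrypt implementation that silently drops any
// input past 72 bytes before hashing.
func truncatingHash(input string) [32]byte {
	b := []byte(input)
	if len(b) > 72 {
		b = b[:72]
	}
	return sha256.Sum256(b)
}

func main() {
	// userId + username already fill the 72-byte window.
	prefix := "00u-someuserid-0123456789" + "a.very.long.username.at.example.com/1234567890123456"
	fmt.Println(len(prefix) >= 72) // true: the password never reaches the hash
	fmt.Println(truncatingHash(prefix+"correct horse battery staple") ==
		truncatingHash(prefix+"totally-wrong-password")) // true: any password "matches"
}
```

An API that rejected over-length input with an error, instead of truncating, would have turned this silent auth bypass into a loud bug during development.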