Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Fuel Network Argument Parsing Vuln- 1535

minato7namikazi    Reference → Posted 1 Year Ago
  • The Fuel Network ran an Immunefi contest for the entire network. From their custom VM to compilers to the bridge... lots of attack surface. The author of this post dove into the compiler and contract runtime.
  • When contract A calls contract B, an ABI enforces type safety. The arguments are encoded into raw bytes in order to make the actual call into the callee contract, whose bytecode carries implicit type information based on the source code. One side has compile-time checks and the other has runtime verification of the values.
  • In the EVM, extra data at the end or in the middle of a structure is ignored. If the type is completely incorrect (like a string where an integer should be), then the call reverts. This is more a protection added by the Solidity compiler than one enforced by the VM itself.
  • In Fuel, if an extra value is added to a struct it's not ignored - it corrupts the next value! For instance, say a struct only had a 32-byte value called key but we passed in an extra u8: the value of the u8 is added into the next field instead of being ignored. All types keep their size but can be shifted to unexpected values. I'm guessing that this corruption happens after the verification of the type, but I'm not entirely sure from the post.
  • Why is this useful? The boolean type is usually guaranteed to be either a 0 or a 1. Given that the compiler knows this, it will do checks in ways that may be bypassable. The author provides an if statement with two options, option == true and option == false, without an else clause. Since a boolean value of 100 doesn't fall into either branch, we can break logic that assumes a boolean is binary.
  • An additional impact is that a boolean could be stored in storage with a value that is neither 0 nor 1. This could cause a DoS when loading the value or cause further corruption. An interesting consequence is that since the bug lives in the compiled code, all deployed contracts would have to be rebuilt with a fixed compiler and redeployed.
  • I'm slightly confused about why the corruption must happen; from my end, it appears that we could just make a boolean any value. My guess is that verification in the compiled code happens first, then the decoding happens and corrupts the values. Interesting bug and thanks for sharing!
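A toy sketch of the corruption described above, assuming fixed-offset decoding on the callee side (the names and layout are illustrative, not Fuel's actual ABI):

```python
def encode(key: bytes, flag: bool, extra: bytes = b"") -> bytes:
    # Honest encoding: a 32-byte key followed by a 1-byte bool.
    # A malicious caller can smuggle `extra` bytes between the fields.
    return key + extra + bytes([int(flag)])

def decode(buf: bytes):
    # The callee decodes at fixed offsets: key at [0:32], bool at [32].
    # Any extra bytes shift what lands in the bool slot.
    return buf[:32], buf[32]

key = b"\x00" * 32
_, flag = decode(encode(key, True))                 # bool decodes as 1
_, corrupted = decode(encode(key, True, b"\x64"))   # extra u8 becomes the "bool"
```

Here `corrupted` ends up as 100, exactly the non-binary boolean the post uses to break the if/else logic.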

Why exploits prefer memory corruption- 1534

sha1lan    Reference → Posted 1 Year Ago
  • Memory corruption vulnerabilities make up 60%-70% of the issues exploited in the wild. There are many other classes of bugs, so why are these so popular? That is what the article tackles. Ironically, the answer is that it's the simplest way to get what attackers want.
  • It all comes down to how expressive and unconstrained memory corruption vulnerabilities are. A call to system() runs attacker-controlled input on a computer, giving us lots of freedom. Memory corruption is the same; we can carve our own path through the infinite space of a weird machine. This expressive nature is really only offered by a few bug classes.
  • A simple logic bug, like an authorization issue, is limited in nature; its capabilities are much more narrowly defined. Meanwhile, things like MTE and the movement towards memory-safe languages like Golang and Python are making memory corruption bugs harder and harder to exploit.
  • The author does make a distinction between memory corruption and memory unsafety: memory corruption is commonly the effect of a memory-unsafety bug, and they reference a type confusion leading to memory corruption as an example. The author believes that true memory corruption vulnerabilities, like bugs in the kernel's page table management, will stay around, but memory-unsafety bugs will start to die off.
  • At the end of the day, memory corruption vulnerabilities are still likely to be used. They provide huge capabilities that cannot be paralleled. Additionally, they are easily abstractable: if I find an arbitrary read/write primitive, I can hide the details behind an API of sorts and keep reusing it across exploits. This does not work well with logic bugs most of the time.
  • Overall, a good post on why we like memory corruption vulnerabilities so much! They create reusable primitives in an environment that can be repeated. Other bug classes can't provide the same thing, making them harder to find and harder to exploit for real gain.
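The "abstractable primitive" point can be sketched concretely: wrap whatever read/write primitive a given bug yields behind a stable interface, and every later exploit stage stays reusable. A minimal illustration (the backing memory here is a stand-in, not any real target):

```python
class Mem:
    """Hides an exploit-specific arbitrary read/write primitive behind a
    stable API so later exploit stages can be reused unchanged."""
    def __init__(self, read_prim, write_prim):
        self._read = read_prim      # callable(addr, size) -> bytes
        self._write = write_prim    # callable(addr, data) -> None

    def read_u64(self, addr: int) -> int:
        return int.from_bytes(self._read(addr, 8), "little")

    def write_u64(self, addr: int, value: int) -> None:
        self._write(addr, value.to_bytes(8, "little"))

# Stand-in "target memory" for demonstration; a real exploit would plug in
# primitives built from the underlying corruption bug.
ram = bytearray(64)
mem = Mem(lambda a, n: bytes(ram[a:a + n]),
          lambda a, d: ram.__setitem__(slice(a, a + len(d)), d))
mem.write_u64(8, 0xDEADBEEF)
```

Swap the two lambdas for the next bug's primitives and the rest of the exploit never changes; that is the reusability a logic bug rarely offers.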

Code auditing is not the same as vulnerability research- 1533

sha1lan    Reference → Posted 1 Year Ago
  • Cybersecurity is an incredibly broad topic. Even the category of offensive cybersecurity is quite broad. In this article, they do a comparison between code auditing and vulnerability research.
  • Vulnerability research is all about understanding the practical threat landscape of a system or area of code. In this work, vulnerabilities are not enough. Instead, we care about how exploitable these bugs are and the real impact they can have given the constraints of real attackers. The output of a real proof of concept can even be helpful.
  • Code auditing has the goal of improving security within an area of code over a given time frame. This is usually about finding the greatest number of bugs without an emphasis on real exploitability. Code quality or configuration improvements, like missing binary protections, can be found here as well as actual bugs.
  • Both of these are valuable but serve different purposes. If it's a new codebase that's about to ship, then a code audit to find many issues is a good idea. If vulnerability research were done instead, it would likely surface only a few horrible things while leaving many risks and bugs that weren't worth tracking down.
  • Sometimes, it's the opposite though - vulnerability research is needed instead of a code audit. A large codebase carrying lots of risk, as in a merger and acquisition or a bug bounty program, is a good example of when this is necessary. If fuzzing is done on a library with little to no exposure to the outside world and lots of shallow bugs are found, it's not a realistic view of the security of the application. Instead, decisions should be made about the most impactful locations, and bug hunting should be focused there.
  • According to the author, the latter mismatch is the more likely one. A common issue is a client signaling that a higher quantity of bugs is better than a few high-impact ones, which leads to a code audit when vulnerability research was needed. A good way to assess this (to me) is with likelihood and impact metrics.
  • Overall, a good article on the differences between a code audit and vulnerability research! They are similar-looking disciplines with different goals, and conflating them leads to issues within various organizations.

Filecoin Boost: Clients can create PublishConfirmed but never-AddedPiece (handoff) deals- 1532

Qiuhao Li    Reference → Posted 1 Year Ago
  • Filecoin is a decentralized p2p network allowing users to store and retrieve files on the Internet. Users (data owners) pay to store their files with storage providers (computers that store files). Filecoin does this using a blockchain to record all of the information, with IPFS under the hood.
  • A deal is a contract between the user who owns the data and the storage provider agreeing to store the information for them. The code checks Proposal.StartEpoch to ensure that a proposed deal's start epoch hasn't already elapsed; this is to ensure there's enough time to perform the operation.
  • In AddPiece(), the code is run by the miner every 5 minutes until 6 hours have been reached.
  • The deal's start epoch (group of blocks) is checked against the current epoch + a sealing buffer (480 epochs). For a deal to be created, accepted and closed takes time. An attacker can create a deal in which the start epoch is close to the current epoch, which will pass verification. However, after the deal is published but before the piece is added, the current epoch will grow past the specified start epoch.
  • This exploits the weird boundary on timing between the various actions: one check doesn't take the StartEpochSealingBuffer into consideration while the other one does. By doing this, AddPiece() will always fail! This loses gas for the storage provider. Additionally, it could lead to a denial of service if the collaterals reach their limits.
  • Race condition vulnerabilities are commonly hard to find/understand but can show a fundamental weakness in the software design. Concurrency is nearly impossible to get 100% correct. Good write up but I do wish there was a little more background since I had no idea what Filecoin was prior to reading this.
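The mismatched checks can be sketched as two predicates: the publish-time check ignores the sealing buffer, while the handoff-time check requires it. Function names and epoch values here are illustrative, not Boost's actual code:

```python
SEALING_BUFFER = 480  # epochs needed to seal a sector, per the post

def publish_ok(start_epoch: int, current_epoch: int) -> bool:
    # Publish-time check: only requires the start epoch not to have elapsed.
    # It does NOT account for the sealing buffer.
    return start_epoch > current_epoch

def add_piece_ok(start_epoch: int, current_epoch: int) -> bool:
    # Handoff-time check: requires enough headroom to actually seal.
    return start_epoch > current_epoch + SEALING_BUFFER

current = 1000
start = 1010   # attacker picks a start epoch just past the current one:
               # the deal publishes, but AddPiece can never succeed
```

With start = 1010 and current = 1000, publish_ok passes but add_piece_ok is already doomed, burning the provider's gas exactly as described.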

$150,000 Evmos Vulnerability Through Reading Documentation- 1531

jayjonah.eth    Reference → Posted 1 Year Ago
  • EVMOS is a Cosmos SDK blockchain that integrates the EVM into it. From reading the documentation (shown in the next bullet point), they sent the distribution module some tokens. As stated in the documentation, this broke an invariant and crashed the program.
  • The author talks about just reading documentation to find the vulnerability but I think there is a lot more going on here! The docs say: "The x/bank module accepts a map of addresses that are considered blocklisted from directly and explicitly receiving funds. Typically, these addresses are module accounts. If these addresses receive funds outside the expected rules of the state machine, invariants are likely to be broken and could result in a halted network."
  • So, what's really going on? The Cosmos SDK has a set of invariants that run at the end of every block. In the distribution module, one of these is that the accounting and actual tokens must line up. By sending tokens to the module, this invariant breaks and crashes the blockchain.
  • So, why can we send tokens to this account then? The Cosmos SDK Bank module initialization contains a list of blockedAddrs. According to the documentation, this should include all module accounts, as sending to them may brick the chain. In the case of EVMOS, the list did not include all of the modules whose invariants would break.
  • The EVMOS project has now been on Immunefi for a long time - I'd guess two years. So, this vulnerability is quite old. If I had to guess, the author of the post popped every chain they could with this misconfiguration and only then published this. It's funny how the news picked this up considering how old the vulnerability must have been.
  • Overall, a good vulnerability, but the post is somewhat deceptive. Although it was "just reading documentation", the why and the how are important for popping this. Additionally, not talking about disclosure timelines also feels wrong. I'm curious to see whether Cosmos changed the invariants that led to this vulnerability as well.
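The invariant break is easy to model: the module's internal accounting and its actual balance must match at the end of every block, and a raw bank send to the module address moves only one side. A toy model per the post's description (names and numbers are illustrative, not the actual Cosmos SDK code):

```python
class DistributionModule:
    def __init__(self):
        self.balance = 500     # tokens actually held by the module account
        self.accounted = 500   # what internal bookkeeping says it holds

    def receive_tracked(self, amount: int) -> None:
        # Normal path through the state machine: both sides move together.
        self.balance += amount
        self.accounted += amount

    def receive_direct_send(self, amount: int) -> None:
        # A raw bank send to the module address bypasses the bookkeeping.
        self.balance += amount

    def invariant_holds(self) -> bool:
        # Run at the end of every block; a mismatch halts the chain.
        return self.balance == self.accounted
```

One direct send of any amount desynchronizes the two counters, and the end-of-block invariant check halts the network.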

Exploiting a Blind Format String Vulnerability in Modern Binaries: A Case Study from Pwn2Own Ireland 2024 - 1530

synacktiv    Reference → Posted 1 Year Ago
  • Pwn2Own is a prestigious hacking competition for various devices. This entry was for the Synology TC500 camera running ARM 32-bit. The authors found a format string vulnerability in a custom print_debug_msg function that was passing inputs into vsnprintf.
  • Since the format string output went to a debug log, the bug was blind. Additionally, ASLR, NX, Full RELRO, and PIE were all enabled on this device. On top of this, the payload was restricted to 128 bytes and could not contain null bytes or characters lower than 0x1F.
  • Format string vulnerabilities are ridiculously powerful: the specifiers allow reading from and writing to arbitrary spots in memory if you know what you're doing. Initially, they used the vulnerability to perform a single-byte overwrite of a stack pointer that pointed to a loop variable, redirecting it elsewhere on the stack. That variable was then written to with their input. In practice, this let them change where data was about to be written using relative byte writes, giving an effective relative out-of-bounds write primitive.
  • Once they had an arbitrary write on the stack, they needed to build a ROP chain, which they placed in unused stack space within the vulnerable function. Using the format specifier %*X$c, it's possible to read a value on the stack at a specific offset; this value is added to printf's internal character counter. A following %Y$c increases the counter further by a value we control. Since the first value comes from the stack and we control the second, we can effectively bypass ASLR and PIE!
  • Once the counter holds the desired value, %Z$n can be used to write it onto the stack. Doing this over and over gave them a solid ROP chain that eventually calls system(). To hijack control flow, the same relative-write trick was used to overwrite the return address on the stack to point at the ROP chain.
  • Modern binary protections are not enough to stop capable folks like the ones at Synacktiv. An awesome post on their exploit path for this. It's sad that this was patched before the competition :(
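The counter trick can be sketched as a payload builder. The specifier forms follow the post's description; the offsets and the helper itself are hypothetical illustration, not Synacktiv's actual exploit code:

```python
def fmt_write(leak_off: int, pad: int, ptr_off: int) -> str:
    """Build one blind format-string write:
    %*X$c  pads output by the value at stack offset X (e.g. a saved code
           pointer), folding an ASLR/PIE-dependent value into printf's
           internal character counter;
    %Yc    pads by a fixed, attacker-chosen amount to adjust the counter;
    %Z$n   stores the counter through the pointer at stack offset Z."""
    return f"%*{leak_off}$c%{pad}c%{ptr_off}$n"

payload = fmt_write(7, 4660, 12)   # "%*7$c%4660c%12$n"
```

Chaining several of these writes, each targeting the next slot of the chain, is what lets the stack-relative ROP chain be assembled without ever seeing the output.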

An analysis of the Keycloak authentication system- 1529

Maurizio Agazzini - HN Security    Reference → Posted 1 Year Ago
  • Keycloak is a single sign-on provider. While on a project for a client, the author identified a flaw in its authentication system.
  • In Keycloak, the level of security depends on the level of authentication. Level 1 is just username and password; level 2 is username, password and OTP. According to the setup guide, the default browser flow is used by most apps.
  • This levels system sounds good in theory but has a flaw: level 1 authentication has access to account settings. An attacker could log in with credentials to a level 1 website, add a new OTP method, then use it on the level 2 website. This creates a really dumb bypass for 2FA. The vulnerability was already known, according to the security team, but took 10 months to fix.
  • Several of the administrative endpoints were reachable by an unprivileged user. Of these, testLDAPConnection was the most serious because it could be used to steal LDAP creds by pointing the connection at an attacker-controlled location. This required some information that could be queried using this same vulnerability on a different API.
  • The final issue was poor brute force protections. The protections were turned off by default but were insufficient anyway. It was possible to send multiple requests simultaneously to allow more login attempts than what should be allowed. Use those locks!
  • Overall, a series of fairly simple yet impactful bugs. Good writeup!
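The brute-force race comes down to the attempt check and the attempt increment being separate steps, so N simultaneous requests can all pass the check before any attempt is recorded. A minimal sketch of the flawed pattern and the locked fix (hypothetical code, not Keycloak's implementation):

```python
import threading

MAX_ATTEMPTS = 3

class Limiter:
    def __init__(self):
        self.attempts = 0
        self._lock = threading.Lock()

    def try_attempt_racy(self) -> bool:
        # Check-then-increment without a lock: concurrent requests can all
        # observe attempts < MAX before any of them records an attempt.
        if self.attempts >= MAX_ATTEMPTS:
            return False
        # ... password verification would happen here ...
        self.attempts += 1
        return True

    def try_attempt_locked(self) -> bool:
        # Holding the lock across check AND increment closes the window.
        with self._lock:
            if self.attempts >= MAX_ATTEMPTS:
                return False
            self.attempts += 1
            return True
```

With the locked variant, no interleaving of requests can exceed MAX_ATTEMPTS; the racy one permits extra guesses exactly as the post describes.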

Cracking into a Just Eat / Takeaway.com terminal with an NFC card- 1528

Marcel    Reference → Posted 1 Year Ago
  • Takeaway.com is an online food delivery system. The author of this post found an Android-based kiosk online for super cheap so they decided to buy one.
  • Their goal was a kiosk escape that would let them abuse the system as a malicious actor. After several dead ends, such as keyboard shortcuts, they found that Android will open apps automatically via NFC. So, they wrote a particular package name to an NFC card and Android opened it! In their example, they used the Android settings.
  • They used the settings to enable the status bar and navbar, which makes working on the Android device much easier. Using a file system app on the device, they were able to extract the APK to reverse engineer it. They found two hardcoded dial codes: 14611 sent the device into a factory test menu and 59047 opened an app launcher.
  • Using a male-to-male USB cable, it would be possible to connect via ADB, since the device runs a userdebug ROM in production. This would allow dumping the file system, overriding the OS and many other things. Good jailbreak post!
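The NFC trick works because Android dispatches NDEF messages containing an Android Application Record (external type android.com:pkg) straight to the named package. A minimal sketch of building such a record by hand, following the NDEF short-record layout (illustrative, not the author's tooling):

```python
def android_app_record(package: str) -> bytes:
    # NDEF short-record header byte: MB(0x80) | ME(0x40) | SR(0x10) |
    # TNF=0x04 (external type) = 0xD4, followed by type length,
    # payload length, the type string, then the payload.
    rtype = b"android.com:pkg"
    payload = package.encode("ascii")
    header = bytes([0xD4, len(rtype), len(payload)])
    return header + rtype + payload

# Writing this record to a tag makes Android launch Settings on scan.
record = android_app_record("com.android.settings")
```

Swap in any installed package name and the kiosk's Android OS will foreground that app when the card is tapped, which is the whole escape.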

“CrossBarking” — Exploiting a 0-Day Opera Vulnerability with a Cross-Browser Extension Store Attack- 1527

Guardio    Reference → Posted 1 Year Ago
  • Browser extensions have extra capabilities compared to web pages but are still sandboxed from running full code on the system. Extensions have access to some extra APIs but it's still quite restrictive.
  • Some domains and extensions have "special" privileges in the Opera browser, which is the focus of this research. For instance, the Pin add-on quickly takes a screenshot of a page, which requires extra permissions. The author decided to see if any domains in the privileged list were no longer registered to Opera.
  • Several domains, such as crypto-corner.op-test.net, were found to be unregistered even though they had access to these APIs. So, the authors bought the domains to gain the special privileges that came with them. What can we do with these privileged APIs?
  • The chrome.cookies API can be used to extract all session cookies and hijack user accounts. Additionally, settingsPrivate allows changing various browser settings; an attacker can even change the DNS settings to set up a man-in-the-middle attack. Although, since most traffic uses TLS, I'm not sure how practical that is.
  • Opera carefully reviews extensions before adding them to its store, so the authors were afraid their proof of concept would be vetoed. Instead, they found a workaround: Opera allows Chrome extensions to be used! So, they wrote their proof of concept as a Chrome extension that another user would download.
  • To remediate it, Opera did a few things. First, they removed content scripting on high-permission domains (to prevent obfuscation, I think). Next, they removed the privileges from some domains entirely. Overall, a fun vulnerability with some clever workarounds. Personally, I found the article's ordering surprising, which confused me on my initial read-through.

Hacking 700 Million Electronic Arts Accounts- 1526

Sean Kahler    Reference → Posted 1 Year Ago
  • Bug bounty is great for finding bugs that span multiple products at a company and have massive impact; this is one of those vulnerabilities, on Electronic Arts. At the beginning of the article, they got access to one of EA's development environments for EA Desktop by finding a privileged access token in a game's executable. But they had no idea what it was used for or what they could do with it.
  • They decided to scan for API documentation to see what the token could do. On /connect, they got a 404 HTML page with a server response that made it clear this was a reverse proxy. When connecting to /connect/api-docs, no data was returned, indicating that a different service must exist there. After some more fuzzing, they got a swagger file with some unexpected docs.
  • EA Desktop has a GraphQL API called the Service Aggregation Layer to combine multiple backend APIs into one. The api-docs did not work on this site though, hiding a lot of routes. When querying on the testing environment, the routes are returned, giving us much more to work with. More recon!
  • This API required a specific OAuth scope. After searching around, they found some creds that worked. After fiddling around for hours, they started messing with the /identity/pids/{pidId}/personas/{personaId} API. What's a persona? It seems to be extended account information and settings, like displayName. Given that they could update their own status to banned or unbanned, this does not seem like access end users were meant to have.
  • One of the fields was pidId, the account ID associated with the persona. They decided to update this to their friend's account ID and their own Steam ID. Shockingly, this worked, and they had successfully gotten access to an EA account that wasn't theirs! Unfortunately, 2FA blocked the full account takeover, so now what?
  • To work around this limitation, they could go the other direction! Instead of adding another Steam ID to their persona, they could add another persona to their Steam ID. This gave them the ability to ban players, steal usernames and more. Still, 2FA stood in the way... it was based on trusted networks.
  • Eventually, they figured out a way around the trusted-network check. First, move an Xbox persona onto an account that is trusted on your network. Next, log into an EA game on an Xbox using this account. Finally, log in to the victim's account on your network, since it is now trusted from the persona step. This leads to a complete account takeover, which is wild!
  • With anything in life that is first come, first served, you need to do something better than everyone else - this is where the real work is. In this case, the author of this post really did his homework on recon, opening up attack surface that others had not seen. The understanding of the underlying system needed to exploit this was pretty wild and time-consuming as well. Awesome bug report!