Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

NES DPCM Workaround Vulnerability Leads to ACE in SMB3- 1515

100th Coin    Reference →Posted 1 Year Ago
  • The Nintendo Entertainment System (NES) was built in the era of CRT TVs, where rendering worked entirely differently than modern LED displays. Most graphical changes must happen during a blanking period, so there is an interrupt to ensure this is the case. The VBlank interrupt is a Non-Maskable Interrupt (NMI).
  • The console also has Interrupt Requests, or IRQs for short. Depending on the current game mode, the IRQ behaves differently. Additionally, the NES organizes logical blocks of code and assets into banks, where only one bank can be loaded at a time.
  • The NMI handler swaps out the PRG bank during graphics changes. By the end of the NMI, the proper banks are swapped back in. What if we could trick code into running with the improper banks loaded? That is exactly how the vulnerability works!
  • DPCM audio samples can corrupt controller inputs because a register is shifted one too many times. Since the DMA read is asynchronous and this is a hardware issue, games must work around it in software. The most common fix was to simply poll the controller over and over until the same buttons were seen twice in a row.
  • So, what's the bug? By changing buttons at a rate of 8K inputs per frame, we can trick the controller polling code into getting stuck forever! Paired with an interrupt, this leads to a situation where code from a bank never intended to be executed in this context will be run!
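A minimal Python sketch of that debounce loop (not the actual 6502 code) shows why attacker-paced inputs starve it:

```python
import itertools

def read_controller_debounced(read_fn, max_reads=10):
    # Common software fix for the DPCM glitch: keep polling until two
    # consecutive reads return the same button state.
    prev = read_fn()
    for _ in range(max_reads - 1):
        cur = read_fn()
        if cur == prev:
            return cur
        prev = cur
    # Real games have no bail-out like this; the loop simply never
    # exits, stalling the main code until an NMI fires mid-loop.
    raise TimeoutError("inputs never matched twice in a row")

# A stable input settles immediately:
stable = iter([0x40, 0x40])
assert read_controller_debounced(lambda: next(stable)) == 0x40

# Inputs that change on every read (the 8K-per-frame trick) never settle:
toggling = itertools.cycle([0x01, 0x02])
try:
    read_controller_debounced(lambda: next(toggling))
except TimeoutError:
    pass  # on real hardware this is an infinite stall
```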
  • By some miracle, the code runs fine. Eventually, an RTS instruction jumps execution to $0000 via an address on the stack. The NMI continues to fire every frame, recording button inputs to $17, $18, $F5, $F6 and $F8. Through careful planning, the controller inputs can be used to write somewhat arbitrary assembly to execute.
  • $17 holds the total buttons held on controller 1 and $18 holds the newly pressed buttons, with one bit per button. $F5, $F6 and $F8 have similar limitations to $17/$18, which restricts which bytes can be used for the second byte of an instruction. Additionally, left and right, as well as up and down, cannot be held at the same time, further limiting the available instructions.
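As a rough illustration of how much the d-pad constraint shrinks the usable opcode space, here's a quick count in Python (the bit layout is my assumption, not from the article):

```python
# One bit per button; the exact bit positions here are assumed.
UP, DOWN, LEFT, RIGHT = 0x08, 0x04, 0x02, 0x01

def physically_possible(buttons: int) -> bool:
    # A stock d-pad cannot report Up+Down or Left+Right simultaneously.
    if buttons & UP and buttons & DOWN:
        return False
    if buttons & LEFT and buttons & RIGHT:
        return False
    return True

valid = [b for b in range(256) if physically_possible(b)]
# 4 unconstrained buttons (16 states) x 3 legal states per d-pad axis
assert len(valid) == 16 * 3 * 3  # 144 of 256 byte values remain
```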
  • With these limitations in mind, our goal is to warp to the end credits. There are 6 criteria that need to be met, 3 of which (relating to banks) are already satisfied once we start. Of the rest: the stack pointer must be larger than 0x30, the NMI mode byte at address $100 must be 0x20, and we need to jump to $B85A.
  • Previous versions of the TAS had to work around the limitations above. However, the author found a special case: the game uses bytes $00-$02 as scratch addresses at the end of an NMI, and they happen to hold controller inputs INCLUDING the conflicting inputs. By using this property, we have more control over these bytes, which happens to be enough :)
  • The TAS is 3 frames long of game play. Here is what happens:
    1. Write JSR $9000 at the scratch address using two controllers. Using only controller inputs, push a value of 0xFA to the SP register.
    2. The next NMI occurs and writes our controller inputs to the stack. This time, our inputs result in JSR $0000 being executed.
    3. JSR $9000 executes from the previous write after our jump occurs. Since the SP is sane, this works.
  • The video explains a slightly simplified version, which is what the example is based on. However, the concepts are the same. A funny change they made was using a different version of the game because the addresses are slightly different.
  • Overall, the article and video are amazing resources! Beating SMB3 in less than a second is hilarious and I very much enjoyed learning about this. From the vulnerability itself to making the exploit work, it's truly magic :)

gaining access to anyones browser without them even visiting a website- 1514

Eva    Reference →Posted 1 Year Ago
  • Arc is a new browser focused on security and privacy. They recently added cloud functionality, called boosts, for storing CSS and JavaScript browser customizations.
  • Firebase is a database-as-a-service. Instead of writing a full backend, you write security rules for what users can and can't do. Although this tool is awesome, many folks have messed up the rules in the past.
  • Reading the Firebase security rules, we can't modify other users' data directly because it's queried by creatorID. However, we can specify our boost to have another user's ID! Most of the time, blindly adding data to another user's account isn't helpful. When that data is JavaScript run in their browser, though, it's real bad.
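A toy Python model of the flawed trust boundary (the field name creatorID follows the write-up; the storage layer is made up):

```python
boosts_db = {}

def create_boost(session_user: str, boost: dict):
    # BUG: the creatorID is taken from attacker-supplied request data
    # instead of the authenticated session, so anyone can store a boost
    # under any user's ID.
    boosts_db.setdefault(boost["creatorID"], []).append(boost)

def load_boosts(session_user: str):
    # The victim's browser loads -- and executes -- every boost stored
    # under their own ID.
    return boosts_db.get(session_user, [])

create_boost("attacker", {"creatorID": "victim", "js": "alert(document.domain)"})
assert load_boosts("victim")[0]["js"] == "alert(document.domain)"
```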
  • To find user IDs, an attacker can look for referrals, published boosts and whiteboards. To make matters worse, privileged pages in Chromium, such as chrome://settings, were affected by this. Since these pages have special permissions, it's likely that RCE was possible.
  • Arc decided to migrate off of Firebase in light of this issue. I personally haven't spent too much time looking at Firebase but it seems popular yet difficult to use securely. Good find!

Vest in Peace: Freezing Cosmos account funds through invalid vesting periods- 1513

ForDefi     Reference →Posted 1 Year Ago
  • In the Cosmos SDK, a vesting account is a type of account whose coins are locked on some vesting schedule. A periodic vesting account releases funds at defined intervals. A clawback account has an additional locking period, after which the vested funds are received.
  • Neither periodic nor clawback accounts validate their input upon account creation: the code fails to check that the amount in each vesting period is positive. Several forks of the Cosmos SDK are missing the same input validation as well.
  • So, what's the impact? Initialize a vesting account whose funds are impossible to withdraw. By adding negative token amounts such as -1stake, the bank module's check that a user isn't overdrawing will panic.
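The Cosmos SDK is written in Go; this Python sketch of the missing check (field names simplified) captures the shape of the bug:

```python
def validate_periods_vulnerable(periods):
    # Mirrors the bug: only the period length is checked, so a negative
    # coin amount sails through account creation.
    return all(p["length"] > 0 for p in periods)

def validate_periods_fixed(periods):
    # The fix: every amount must be strictly positive as well.
    return all(p["length"] > 0 and p["amount"] > 0 for p in periods)

poison = [
    {"length": 60, "amount": 100},
    {"length": 60, "amount": -1},  # e.g. -1stake
]
assert validate_periods_vulnerable(poison)  # accepted: funds frozen later
assert not validate_periods_fixed(poison)   # rejected up front
```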
  • To make this work, the authors note that you would want to watch for a new account being created, frontrun it and poison it. This account can now receive funds but cannot send them out. Frontrunning is unlikely to occur in Cosmos but is technically possible.
  • To fix the bug, simply validate that all amounts are positive. Overall, a good read and a nice introduction to vesting accounts in the Cosmos SDK.

Ruby-SAML / GitLab Authentication Bypass (CVE-2024-45409)- 1512

Project Discovery    Reference →Posted 1 Year Ago
  • SAML is a common protocol for exchanging authentication and authorization data between Identity Providers (IdPs) and Service Providers (SPs). SAML messages are written in XML.
  • In SAML, the core element is the Assertion, which in most cases holds information about the user. To ensure it hasn't been tampered with, the assertion is hashed and then verified with a digital signature.
  • The signature value is passed inside the SignatureValue element. The hashed data is in the SignedInfo block, which contains a DigestValue and a Reference URI pointing to the assertion.
  • To verify the signature, a service provider receives the SAML response and performs two checks: digest verification and signature verification. Digest verification hashes the Assertion data and checks that it matches the DigestValue in the SignedInfo block, preventing tampering. Next, the digital signature over that hash is validated.
  • The Ruby-SAML library performs several validations before the signature validation. In XPath, used for finding elements in an XML document, / selects from the root of the document and // selects matching nodes anywhere in the document.
  • Finally, on to the vulnerability! When getting the DigestValue via XPath, the query was //ds:DigestValue. This finds the first DigestValue anywhere in the document, which allows an attacker to smuggle their own value into it!
  • This is bad! In the SAML validation, we can bypass the verification with the following flow:
    1. Insert a DigestValue, matching a modified Assertion block, into an unsigned element.
    2. XPath extracts the smuggled value instead of the one from the SignedInfo block. This bypasses the first step above of checking that the DigestValue is correct.
    3. Signature verification occurs on the DigestValue from the SignedInfo block. The previous check is assumed to have proven that the actual hash and the one in this block match.
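The flow above can be sketched in Python with the standard library's limited XPath support (namespaces and real SAML structure omitted; the element layout is illustrative):

```python
import xml.etree.ElementTree as ET

# Toy SAML-ish response. The attacker smuggles a DigestValue -- the hash
# of their tampered Assertion -- into an unsigned element that appears
# before the signed SignedInfo block.
doc = ET.fromstring("""
<Response>
  <Extensions>
    <DigestValue>HASH_OF_TAMPERED_ASSERTION</DigestValue>
  </Extensions>
  <SignedInfo>
    <DigestValue>HASH_SIGNED_BY_IDP</DigestValue>
  </SignedInfo>
</Response>
""")

# Vulnerable lookup, like //ds:DigestValue: the first match anywhere in
# the document wins, so the attacker's value is returned.
smuggled = doc.find(".//DigestValue").text

# Scoped lookup: only consider the DigestValue inside SignedInfo.
scoped = doc.find("./SignedInfo/DigestValue").text

assert smuggled == "HASH_OF_TAMPERED_ASSERTION"
assert scoped == "HASH_SIGNED_BY_IDP"
```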
  • The author includes an example XML document that is super interesting to look at from a security perspective. An awesome find in a technology that I'm not super familiar with, but enjoyable nonetheless.

Eliminating Memory Safety Vulnerabilities at the Source - 1511

Google Security Blog    Reference →Posted 1 Year Ago
  • The blog post revolves around Google Android's security program, but the results apply elsewhere. Android has been producing more and more code in memory-safe languages like Rust instead of unsafe ones like C. The analysis in this post centers on the number of memory corruption vulnerabilities over the years.
  • Over the course of 6 years, most new development has occurred in memory-safe languages. Even though the amount of memory-unsafe code is still slowly growing and the original unsafe code still exists, the number of memory corruption bugs has dropped significantly. Why, though? Doesn't all memory-unsafe code need to be rewritten?
  • According to this article, the answer is no. Vulnerabilities are much more likely to be discovered in new code, as found by a Usenix paper from years ago. According to data from Android and Chromium bugs, 5-year-old code is 3.4 to 7.4 times less likely to have a bug than new code. So, as the remaining unsafe Android code ages, it becomes much less likely to have bugs in it. As a result, we don't need to rewrite all memory-unsafe code, saving lots of money and bugs along the way.
  • In terms of designing software, killing bug classes from the beginning is the way to go. If you use a memory-safe language, you kill a bug class entirely, which is amazing. This is opposed to the traditional and expensive style of reactive patching, exploit mitigations like ASLR and NX, and proactive vulnerability discovery. Overall, a great article on where to hunt for bugs!

Web3 Ping of Death: Finding and Fixing a Chain-Halting Vulnerability in NEAR- 1510

Faith - Zellic    Reference →Posted 1 Year Ago
  • Rust is perfectly safe and we never have to worry again, right? In Rust, error handling is tedious and must be done explicitly. Because of this, many denial of service (DoS) vectors revolve around error handling in Rust.
  • In P2P networking, you are communicating with other computers which in turn communicate with other computers. This communication is a necessity in a blockchain network and must be externally exposed in some way.
  • The author of this post found two locations where errors were not being handled correctly. First, when verifying a public key, the from_slice() function requires that the key be 32 bytes in length. When processing this in the P2P handshake code, expect(), a nice wrapper around unwrap(), is called. If the public key isn't 32 bytes, a panic is triggered.
  • The second vulnerability has to do with signature parsing. The ECDSA from_i32() function converts the recovery ID from a single byte to an i32. The value is required to be between 0 and 3, but in reality it can be 0-255. Later on, unwrap() is called, causing a panic when the error path is taken.
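The write-up's Rust isn't reproduced here; this Python analogue (error type and byte layout assumed) shows the unwrap-versus-handle difference:

```python
def parse_recovery_id(b: int) -> int:
    # Analogue of from_i32(): only 0-3 are valid ECDSA recovery IDs,
    # but a remote peer can put any byte 0-255 on the wire.
    if not 0 <= b <= 3:
        raise ValueError(f"invalid recovery id: {b}")
    return b

def handle_vulnerable(raw: bytes) -> int:
    # unwrap()-style: the exception escapes the message handler and, in
    # the real node, the panic takes down the whole process.
    return parse_recovery_id(raw[0])

def handle_fixed(raw: bytes):
    # Treat malformed input as a protocol error: drop the message (and
    # ideally the misbehaving peer) instead of crashing.
    try:
        return parse_recovery_id(raw[0])
    except ValueError:
        return None

assert handle_fixed(bytes([2])) == 2
assert handle_fixed(bytes([200])) is None  # node survives the bad peer
```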
  • Both of these vulnerabilities cause a panic that crashes the node. To me, it's weird that a small parsing issue crashes the node with no recovery, unlike what Golang can do with recover(). Between the two vulnerabilities, they earned $150K in bug bounties, which is awesome! It's fascinating how such small error-handling functions can have catastrophic consequences for the uptime of the software.

CharismaBTC hack incident analysis- 1509

ExVul    Reference →Posted 1 Year Ago
  • The smart contract runtime in this exploit was Stacks, a Bitcoin layer 2 solution that uses the Clarity smart contract language. Honestly, I couldn't follow this article, and I don't know how Stacks/Clarity works either, so I had to ask a friend how this exploit worked. Take this with a grain of salt.
  • In most smart contract runtimes, you send funds alongside your call. In Stacks, the end user's wallet specifies post conditions that determine what can be done, making this fail open instead of fail closed. In theory, if there is no post condition that disallows a contract taking all of your tokens, then it's legal to do.
  • In Solidity, there are two senders: tx.origin and msg.sender. One is the original executor of the transaction and the other is the most recent caller. The same concept exists in Clarity as well.
  • When making an external call to another contract, the AsContract command can override the tx.origin of the original caller. This is super important because the post conditions are based on it!
  • The post conditions can only be set by the original executor and NOT the smart contract. When the AsContract command is used on a call to an untrusted contract, there are no post conditions restricting where the money can go! This lack of access control on the smart contract call is the root cause of the bug. By becoming the contract, we can drain all of its funds. Yikes!
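My loose mental model of why the fail-open check misses the contract's own funds, in Python (all names and the checking logic are illustrative, not real Clarity semantics):

```python
def check_post_conditions(transfers, max_out_from_sender, tx_sender):
    # Fail-open: post conditions are written about tx-sender's address,
    # so transfers from any OTHER principal are simply not checked.
    for sender, amount in transfers:
        if sender == tx_sender and amount > max_out_from_sender:
            raise RuntimeError("post-condition violated")

# The wallet caps outflow from the user's own address at 100 tokens...
check_post_conditions([("user", 50)], 100, "user")  # fine

# ...but AsContract makes the CONTRACT the sender of the inner transfer,
# so draining all of its funds trips no condition at all.
check_post_conditions([("contract", 1_000_000)], 100, "user")  # also "fine"!
```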
  • The existence of AsContract is weird to me. I get that there are situations where you want to act as the contract, but it's such a security liability here. Again, not a great write-up, but an interesting vulnerability class nonetheless.

CSP Bypass Website- 1508

renniepak    Reference →Posted 1 Year Ago
  • Content Security Policies (CSP) are an XSS defense mechanism. Of course, if you've found XSS, you want to circumvent the CSP. This is a website of known XSS gadgets on various popular programs.

Content Type XSS Research - 1507

BlackFan    Reference →Posted 1 Year Ago
  • The Content-Type response header tells the browser how to render a file. This page is a list of Content-Type headers, along with the format they render as, that can be used for XSS. It even has a list of the browsers each one works on.
  • Many of the types are obvious, like text/html rendering as HTML. Even weirder ones are referenced too, like text/xsl being rendered as HTML.
  • A space and a semicolon (;) can both be used as MIME type separators. For instance, text/plain; x=x, text/html is a valid HTML format when rendered by the browser.
  • Additionally, ( and 0x9 (tab) can be used as separators. For instance, text/html(xxx is a valid content type that will be rendered as HTML.
  • A comma can also be used to list multiple content types. Typically, the last one is the content type processed.
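These separators are exactly what defeats naive server-side checks. A hypothetical exact-match filter (my example, not from the page) blocks none of them:

```python
def filter_blocks(content_type: str) -> bool:
    # Hypothetical defense: reject responses declared exactly text/html.
    return content_type.strip().lower() == "text/html"

# Per the research, some browsers still render each of these as HTML,
# yet none matches the exact string the filter looks for:
bypasses = [
    "text/plain; x=x, text/html",  # comma list: last type wins
    "text/html(xxx",               # '(' acts as a separator
    "text/xsl",                    # rendered as HTML by some browsers
]
assert not any(filter_blocks(ct) for ct in bypasses)
```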
  • My personal favorite part is that they have links of each content type to a website that will prove that this works on the spot. Amazing and simple resource that I love.

Bedrock vulnerability disclosure and actions- 1506

Dedaub    Reference →Posted 1 Year Ago
  • Bedrock protocol is a liquid staking protocol for various assets, one of which is Bitcoin.
  • The Dedaub team discovered an issue in the protocol and messaged the developers on Twitter about it. After not getting a response for 20 minutes, they messaged SEAL 911 to create a war room to contain the issue. During the two hours of the war room, the vulnerability was exploited for $2M. In the end this turned out fine, because the third-party protocols that could have been rugged were contacted and turned the functionality off.
  • At first glance, 20 minutes seems too aggressive an escalation to a third party outside the company. The Twitter message at the bottom of the post has a "please don't ignore me" follow-up after three minutes, which seems fast. Somebody could just be in the shower or sleeping. However, given that the bug was immediately exploited, the urgency seems warranted. To me, it's weird that two groups found the same vulnerability in a live contract at the same time.
  • The vulnerability was in the mint() function. On the BTC vault, there was a 1-to-1 mapping from ETH to BTC. Since BTC is much more expensive, performing this trade results in an instant profit for the attacker. Although the BTC contract couldn't be called directly, the vault was a trusted minter that could still trigger the mint.
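Back-of-the-envelope math on why a 1-to-1 mint is instant profit (prices are illustrative, not from the post):

```python
ETH_PRICE = 2_400    # USD per ETH (assumed)
BTC_PRICE = 60_000   # USD per BTC (assumed)

def vulnerable_mint(eth_in: float) -> float:
    # The bug: BTC-denominated tokens are minted 1:1 against the ETH
    # deposited, ignoring the huge price gap between the two assets.
    return eth_in

eth_deposited = 10
btc_tokens = vulnerable_mint(eth_deposited)
profit_usd = btc_tokens * BTC_PRICE - eth_deposited * ETH_PRICE
assert profit_usd == 576_000  # ~$24K of ETH becomes ~$600K of BTC claims
```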
  • A fairly simple bug, but it's always interesting to see the incident response around these!