Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Sui Temporary Total Network Shutdown Bugfix Review - 1257

Immunefi - F4lt    Reference → Posted 2 Years Ago
  • Sui is a layer 1 blockchain famous for its speed and concurrency. By being architected this way and using Rust under the hood, it hits incredible speeds. The tl;dr of the vulnerability is an out-of-memory denial-of-service bug that's not particularly interesting. However, the explanation of the ecosystem is interesting and I'll post that for myself here.
  • Sui uses Narwhal as its mempool (pending transaction list) implementation and Bullshark as its consensus engine (synchronizing the network between validators). Narwhal orders transactions into batches in parallel, and Bullshark builds a DAG over these batches to reach agreement. Under the hood, Bullshark uses a BFT consensus algorithm.
  • Sui network transactions happen with the following steps:
    1. Send the transaction to a full node, which forwards it to all of the other validators, each of which performs checks on it.
    2. A quorum of 2/3 (weighted by validator voting power) is collected. Once this happens, the information about the vote is broadcast across the network with a combined certificate.
    3. Each validator checks the certificate. If it's valid, it will execute the transaction locally.
    4. Optionally, the quorum driver can collect an effects certificate based on the previous step and return it to the sender as proof of finality.
  • When processing an incoming certificate, the logic does not account for a malicious user. A user can pack an arbitrarily large number of digests into the certificate, and the node then fetches the corresponding certificates for those digests. By providing a huge number of digests pointing at large certificates, this turns into a denial-of-service vulnerability.
  • Sending a 37MB payload with 1.2M digests triggers an out-of-memory exception, crashing the blockchain. Honestly, I wish the report were shorter; most of the information wasn't required to understand the bug... but a DoS that takes down a blockchain is interesting nonetheless.
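A minimal Python sketch of the amplification at play (illustrative only, not Sui's actual Rust code; the cap, function names and per-certificate size are assumptions):

```python
# Hypothetical model: an unbounded digest list in one certificate causes the
# node to buffer one fetched certificate per digest, a memory-amplification
# DoS. MAX_DIGESTS and CERT_SIZE are illustrative numbers, not Sui's.

MAX_DIGESTS = 1_000          # assumed mitigation: cap digests per certificate
CERT_SIZE = 32_000           # assume each fetched certificate is ~32 KB

def bytes_buffered(digests, enforce_limit=False):
    """Bytes a node would buffer while fetching one certificate per digest."""
    if enforce_limit and len(digests) > MAX_DIGESTS:
        raise ValueError("too many digests in certificate")
    return len(digests) * CERT_SIZE

print(bytes_buffered(["d1", "d2"]))        # small request: tiny footprint
# Scale of the reported attack, 1.2M digests:
print(1_200_000 * CERT_SIZE / 2**30)       # tens of GiB of memory pressure
```

The fix is the boring-but-effective kind: validate attacker-controlled list lengths before allocating on their behalf.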

Binarly REsearch Uncovers Major Vulnerabilities in Supermicro BMCs - 1256

Binarly    Reference → Posted 2 Years Ago
  • Baseboard Management Controllers (BMCs) are used for the remote monitoring of systems. Typically, this is a specialized chip on a server, on a separate wired connection from the server itself. It can be used to change or update low-level items like UEFI or to give console access to the server.
  • Since this can be accessed remotely, ensuring that the BMC is secure is incredibly important. One way of accessing it is via the IPMI protocol; the device also exposes a web interface. The first vulnerability is a command injection within the email notification functionality, though it does require an administrative login to set up.
  • The next three vulnerabilities are all reflected XSS bugs. Using these, an attacker can trick a user into visiting a maliciously crafted link to create a user account or perform other bad actions.
  • Paired together, these vulnerabilities allow for a one-click RCE: by chaining the XSS (to create an account) into the command injection, RCE is gained. Overall, the bugs are pretty standard and nothing special. The interesting part to me is the impact and the target being hit.
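A quick sketch of the command-injection pattern in Python (the actual Supermicro code is firmware, and the `sendmail` invocation here is a hypothetical stand-in for its email notification handler):

```python
# Hypothetical sketch: building a shell command from an attacker-controlled
# email address, vulnerable vs. quoted. Not Supermicro's real code.
import shlex

def build_cmd_vulnerable(recipient):
    # Untrusted value interpolated straight into a shell string: any shell
    # metacharacter (';', '|', '`') injects a second command.
    return f"sendmail {recipient}"

def build_cmd_safe(recipient):
    # Quote the untrusted value so metacharacters lose their meaning.
    return f"sendmail {shlex.quote(recipient)}"

payload = "admin@example.com; touch /tmp/pwned"
print(build_cmd_vulnerable(payload))  # the second command would execute
print(build_cmd_safe(payload))        # payload stays a single argument
```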

How I Exposed Instagram's Private Posts by Blocking Users - 1255

003random    Reference → Posted 2 Years Ago
  • Instagram allows for the embedding of posts. When embedding a post, it's simply a popup with embedded HTML that makes a request to https://www.instagram.com/api/v1/oembed/. This returns the post information: author, title and several other things. If the post belongs to a private account, a 403 is sent back. If the private account has also blocked the requester, the response is instead a 404 not found.
  • This is a perfect case for XS-leaks! Within the iFrame, if an error occurs, then the account has blocked the viewer, narrowing them down to a subset of users. This creates a de-anonymization primitive against the users of any website embedding Instagram, which is not great. So, a medium-severity bug.
  • As any good bug bounty hunter does, they were gathering evidence and testing things out. While making this call with Burp Repeater, Burp and Chrome were doing different things. What was going on? It turns out that the User-Agent header was being processed. After fuzzing the header, they noticed that the code being run for mobile user agents differed from the browser path.
  • So, by blocking the user and making a request to the private account, the oEmbed endpoint was returning data! According to Meta, the oEmbed endpoint had an error case for mobile agents. This error was normally triggered by region blocking, but the developers wanted all users to be able to access the items even when this error occurred. To handle it, a superuser was used to make the request instead.
  • By blocking the user, the generic error handler could be hit as well! This allowed access to the post, even though the request had been rejected from it. Overall, a super unique vulnerability that required a lot of puzzle pieces to make happen. To me, the big takeaway is that unexpected functionality should always be explored deeper.
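The original status-code oracle can be modeled in a few lines of Python (mocked responses; the real leak is observed cross-site via load/error events, which these status codes stand in for):

```python
# Hypothetical model of the XS-leak oracle: distinguishing "private" from
# "private and you are blocked" via the oEmbed response class.
def oembed_status(post_owner, viewer):
    if viewer in post_owner["blocked"]:
        return 404            # blocked viewers get "not found"
    if post_owner["private"]:
        return 403            # private but not blocked
    return 200                # public post

def viewer_is_blocked(post_owner, viewer):
    # A cross-site page can't read the body, but it can distinguish the
    # error class, which is all the oracle needs.
    return oembed_status(post_owner, viewer) == 404

alice = {"private": True, "blocked": {"mallory"}}
print(viewer_is_blocked(alice, "mallory"))  # True: identity leaked
print(viewer_is_blocked(alice, "bob"))      # False
```

Any endpoint whose error class depends on the viewer's identity is a candidate for this technique.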

Looney Tunables: Local Privilege Escalation in the glibc's ld.so (CVE-2023-4911) - 1254

Qualys    Reference → Posted 2 Years Ago
  • The GNU C dynamic loader (ld.so) finds and loads the shared object libraries needed by a program. The dynamic loader is extremely security sensitive since it runs with the privileges of the binary, including setuid binaries. Finding a vulnerability here leads to catastrophic consequences.
  • At the beginning of execution, the function __tunables_init() is called; this processes the GLIBC_TUNABLES environment variable. For each tunable that it finds, the program makes a copy of the variable, parses it, sanitizes it and edits the original in place. The goal of the parsing is to strip all dangerous tunables out of it.
  • The expected format is tunable1=aaa:tunable2=bbb. However, the validation of this format is weak: providing a value like tunable1=tunable2=AAA causes some major problems.
  • First, the program copies the entire tunable into the temporary string. Next, since the pointer is not advanced past the missing ":", it still points into the first tunable (tunable1) instead of at the next tunable in the list (tunable2). Finally, the second iteration strcpy()s into the same buffer, leading to a buffer overflow during the in-place rewrite of the variable.
  • Exploitation is super interesting. At this point in startup, regular malloc does not exist; the minimal version of malloc calls mmap() to get memory. So the authors had to find a way to exploit this by corrupting the mmap'd pages. The read-write ELF segment makes for an interesting target, but the authors could not find a way to get their allocation behind it for the overflow.
  • So, they decided to target the mmap'd pages written by the tunables_strdup() function. mmap is a top-down allocator, so by creating one tunable cleanly and then performing the overflow in a second variable, it is possible to overflow into the first. This ended up not being very fruitful though.
  • While looking at the loading of the link_map structure, they noticed that not all members are initialized to zero. Additionally, unlike the regular allocator, the minimal calloc() does not initialize memory to zero. With this, it is possible to control pointers inside the structure! This completely breaks the logic of ld.so in the attacker's favor.
  • Out of all the different data structures, the library search path was the holy grail. It could be used to force ld.so to load libraries from a directory it was never intended to use. Most importantly, it could be forced to load malicious libraries as root within the process. This target was a pointer though; so they put a ton of data (16GB) onto the stack via environment variables and brute forced the address. This takes about 30s on Debian and 5m on Ubuntu/Fedora because of Apport.
  • Overall, another amazing blog post from Qualys! Their exploit techniques and innovation are always awesome. In particular, I enjoyed the explanation of pointer usage and the various exploit paths that didn't work.
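The cursor bug is easier to see in a toy model. This Python sketch only mimics the byte accounting of the C parser (the real code is in glibc; buffer sizing here is a simplification):

```python
# Simplified model of the __tunables_init() parsing flaw: the scratch buffer
# is sized for one pass over the string, but when a value itself contains '='
# and no ':' follows, the cursor only skips past the first name, so
# overlapping data is copied again, the analog of the strcpy() overflow.

def tunables_writes(env):
    buf_cap = len(env) + 1       # scratch buffer: one full copy + NUL
    written = 0
    i = 0
    while i < len(env):
        end = env.find(":", i)
        if end == -1:
            end = len(env)
        chunk = env[i:end]
        if "=" not in chunk:
            break
        name, value = chunk.split("=", 1)
        written += len(chunk)    # strcpy of the whole "name=value" chunk
        if "=" in value:
            i += len(name) + 1   # BUG analog: cursor not moved to `end`
        else:
            i = end + 1
    return written, buf_cap

w, cap = tunables_writes("tunable1=tunable2=AAA")
print(w, cap)   # total bytes written exceed the single-pass buffer
```

With the malformed input, the overlapping second copy pushes total writes past the buffer capacity, which in the real loader is a heap overflow in a setuid context.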

Mitigations against flash-loan enabled attacks - 1253

Inrdpss - Smart Contract Research Forum    Reference → Posted 2 Years Ago
  • Flash loans are crypto loans that do not require collateral. This is possible because the borrower must either pay back the loan within the same transaction or the whole transaction reverts. This gives a user access to near-infinite liquidity for arbitrage and various other things.
  • The holder of the liquidity earns a fee for providing the flash loan service. While flash loans have legitimate use cases, they can cause lots of havoc: price distortion, pump-and-dumps, oracle manipulation, wash trading and much more are possible.
  • So, how do we protect against these attacks? The first option is probably the best: breaking the logic into two transactions. If the calls cannot be performed in the same transaction, then the manipulation is usually worthless.
  • Next, relying on robust oracles, for instance using Chainlink instead of an on-chain calculation. Finally, keeping track of values to ensure there's a limit on how much they can change, for instance by enforcing a slippage limit.
  • Besides the active measures, we can have reactive ones as well. Blacklisting suspicious actors and adding a pause capability paired with runtime monitoring work well too, though the active measures discussed first are much more important. The final reactive measure is storing funds in a vault to pay users back.
  • Personally, I think it's a combination of the active and reactive measures that should be used. With on-chain defenses, attacks can be restricted; but if something does happen, the developers should be fast to move.
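One of the active measures, rejecting actions when the pool price has been pushed too far from a trusted oracle price, can be sketched like this (names and the 2% threshold are illustrative, not from any real protocol):

```python
# Hypothetical price-deviation guard: revert when the spot price diverges
# from the oracle price beyond a tolerance, the kind of distortion a
# flash-loan-funded swap produces within a single transaction.

MAX_DEVIATION = 0.02   # allow 2% divergence between spot and oracle

def check_price(spot_price, oracle_price, max_dev=MAX_DEVIATION):
    deviation = abs(spot_price - oracle_price) / oracle_price
    if deviation > max_dev:
        raise RuntimeError("price deviates beyond tolerance, reverting")
    return True

check_price(100.5, 100.0)      # fine: 0.5% deviation
try:
    check_price(130.0, 100.0)  # flash-loan-distorted pool trips the guard
except RuntimeError as e:
    print(e)
```

In Solidity this would be a `require()` against a Chainlink feed; the revert is what makes the attacker's whole flash-loan transaction unwind.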

A tale of two bugs - 1252

Matt Luongo - Threshold    Reference → Posted 2 Years Ago
  • tBTC is a bridge that brings BTC to the Ethereum network. This is done using the Threshold protocol.
  • Redemptions can be requested when going from tBTC on Ethereum back to BTC. Then, a list of decentralized relayers using a multi-sig wallet must approve the transaction that occurred. The threshold is 51 of 100.
  • FTX got "hacked" a while ago. Well, did they? Or was this Sam just hiding money? We're not really sure. Anyway, some of this money was moving through the network and somebody noticed. A hacker noticed.
  • Somebody found a way to pause the tBTC network. This was done by manually crafting a transaction that caused the validator signing clients to stop working. In particular, the clients thought that the wallets were busy and unable to service any more requests. A 0-day was dropped!
  • There's a second bug, more of a design flaw than anything else, that made the first bug possible. There is only a single approver address nominated by the DAO, creating a single point of failure. If it were compromised, the whole thing would shatter.
  • Further, any system that requires explicit approval can have an issue like this. So, instead of using an approval-based mechanism, they decided to move to a veto-based setup: everything goes through by default, but specific addresses have the ability to veto or pause transactions. This is similar to the Guardian role in optimistic minting.
  • Overall, interesting post on a DoS bug within a popular protocol. The design decision discussion is very fascinating to see as well.
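The design trade-off can be sketched in a few lines (hypothetical names and flow, not Threshold's actual contracts): an approval-based flow halts entirely when its single approver fails, while a veto-based flow proceeds by default unless a guardian objects within a delay window.

```python
# Hypothetical contrast of the two designs discussed in the post.

def approval_based(tx, approver_online):
    # Single nominated approver: if it is down (or DoS'd), everything halts.
    return "executed" if approver_online else "stuck"

def veto_based(tx, vetoes, delay_elapsed):
    # Default-allow after a challenge window, unless a guardian vetoed.
    if tx in vetoes:
        return "blocked"
    return "executed" if delay_elapsed else "pending"

print(approval_based("mint#1", approver_online=False))              # stuck
print(veto_based("mint#1", vetoes=set(), delay_elapsed=True))       # executed
print(veto_based("mint#2", vetoes={"mint#2"}, delay_elapsed=True))  # blocked
```

The veto model trades a fixed delay on every transaction for liveness: a failed guardian can no longer freeze the whole bridge.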

Solidity Yul Return Opcode Funniness - 1251

MiloTruck    Reference → Posted 2 Years Ago
  • The tweet starts with an image of Solidity code: here. It's mostly Yul assembly with two function calls in it. The first is a function g(), which calls storeAndReturn().
  • Inside storeAndReturn() is a Yul assembly block with assembly {return(0,0)}. In most languages, return exits the current function. However, in Yul (unlike standard Solidity), this stops execution of the entire contract call at that moment instead of simply returning to the caller. I ran into this once and thought it was extremely weird.
  • The author had just learned this. While looking at an Immunefi program, they noticed a function called functionCallWithValue within a library. It was calling return in Yul, thinking that it would return back to the calling function. Instead, it was completely ending execution.
  • What if additional checks or calls need to be made afterward? This could lead to a security issue. In the case of this program, they didn't find anything directly exploitable. Instead, a user calling batchExecute() would get unexpected results because of this behavior. They got 5K for finding this bug.
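Since the post is about EVM semantics, here is a deliberately tiny Python model of the difference (a "contract" is just a list of steps; the step names mirror the functions above but are otherwise made up):

```python
# Toy model: Yul's return(p, n) halts the ENTIRE current call context; it
# does not pop back into the calling Solidity function. Here RETURN stops
# everything, so steps after it never run.

def run_contract(steps):
    executed = []
    for step in steps:
        executed.append(step)
        if step == "YUL_RETURN":
            break            # whole execution context halts here
    return executed

# g() -> storeAndReturn() -> assembly { return(0,0) } -> later checks
trace = run_contract(["g", "storeAndReturn", "YUL_RETURN", "post_checks"])
print(trace)   # 'post_checks' never executes
```

Any invariant enforced after such a call, balance checks, event emission, reentrancy-lock resets, silently never happens.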

2023 Microsoft Office XSS - 1250

@adm1nkyj and @justlikebono    Reference → Posted 2 Years Ago
  • Microsoft Office allows users to put videos into Word from external locations, such as YouTube, via the Online Videos feature. When the video is embedded in the document, Office checks that the video is from a trustworthy source via a regex.
  • If the link passes, Office makes a web request to fetch the video title and other information. While processing the title, it adds it into an iFrame tag without input validation.
  • This turns into a classic HTML injection vulnerability via the title within the iFrame. Using this, the context of the iFrame attribute can be escaped, allowing other attributes to be added. The beginning of the payload is simply " onload=..."
  • Loading arbitrary JavaScript into the iFrame is game over for Word. An attacker can make a request to an arbitrary location and then execute the returned code dynamically to get RCE.
  • The returned malicious JavaScript can be used to launch arbitrary applications; the example JS is window.open("calculator://. This does require some user interaction, but nonetheless it's interesting seeing XSS in such a weird context.
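The injection point boils down to string interpolation into an attribute context. A Python sketch (the template string is illustrative, not Office's actual markup):

```python
# Hypothetical sketch: a fetched video title interpolated into an iframe tag,
# unescaped vs. escaped. The payload's leading quote closes the title
# attribute, letting the attacker add an onload handler.
import html

def build_iframe_vulnerable(title):
    return f'<iframe title="{title}" src="https://youtube.example/embed">'

def build_iframe_safe(title):
    escaped = html.escape(title, quote=True)   # quotes become &quot;
    return f'<iframe title="{escaped}" src="https://youtube.example/embed">'

payload = '" onload="alert(1)'
print(build_iframe_vulnerable(payload))  # onload becomes a live attribute
print(build_iframe_safe(payload))        # payload stays inert text
```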

MacOS "DirtyNIB" Vulnerability - 1249

Adam Chester    Reference → Posted 2 Years Ago
  • Entitlements are privilege capabilities granted to applications on macOS. These are stored as key-value pairs embedded within the code signature of the application.
  • On macOS, apps have their UI defined in a NIB file. For whatever reason, Gatekeeper doesn't invalidate its approval of an app if the NIB file has been swapped.
  • Why is this a big deal? A modified NIB file is trivial to get code execution with. In particular, this allows the attacker to use the entitlements of the application being run. By design, this creates a pretty horrible privilege escalation. The author demonstrates how to do this using Xcode.
  • With macOS Ventura, a new mitigation called Launch Constraints made this much, much harder. An application can be constrained in what can be done to it, such as whether a copy can run with the same permissions as before. The previous PoC didn't work because of the launch constraints on the binary.
  • They found a new candidate binary that was vulnerable to the same attack as before, and then another bypass on a later version. Apparently, they tried reporting this to Apple in 2021 but things just never got fixed. This seems like such a simple vulnerability; it's crazy it hasn't been fixed yet.
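A minimal model of the trust gap (a deliberate simplification: real macOS code signing does cover resource files; the issue is that Gatekeeper's prior approval is not re-validated, which the executable-only hash below stands in for):

```python
# Hypothetical model: approval records a hash of the main executable once,
# and later launches never re-check resource files like the NIB, so a
# swapped NIB runs with the app's original entitlements.
import hashlib

def approve(bundle):
    # First-launch check: record a hash of the executable only.
    return hashlib.sha256(bundle["executable"]).hexdigest()

def still_trusted(bundle, approved_hash):
    # Subsequent launches: NIB contents are never re-validated.
    return hashlib.sha256(bundle["executable"]).hexdigest() == approved_hash

app = {"executable": b"legit-code", "nib": b"<original UI>"}
token = approve(app)
app["nib"] = b"<malicious UI that spawns attacker code>"   # DirtyNIB swap
print(still_trusted(app, token))   # True: the swap goes unnoticed
```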

Canary in the Kernel Mine: Exploiting and Defending Against Same-Type Object Reuse - 1248

grsecurity - Mathias Krause    Reference → Posted 2 Years Ago
  • grsecurity maintains a Linux kernel with a bunch of extra security protections in it. In this post, they detail a protection they created, inspired by a real bug they found within the Nitro Enclaves driver via bad error handling.
  • The bug in the kernel driver resulted in a stale file pointer remaining in the process's file descriptor table. If a reallocation of the file object happened, this dangling reference would have allowed sensitive data to be viewed, such as /etc/shadow.
  • What's interesting about this bug is that it was immune to all other mitigations in the kernel, including the ones added by grsecurity. No type confusion, ASLR leak or anything else is required. All we need is to get lucky with the file pointer and we're good to go. In essence, we have a same-type, same-address use-after-free bug.
  • The authors chose to add an extra field to the struct file type that can take one of three values. During the getting and setting of the pointer, these values are checked for validity. This isn't enough to completely kill the bug class of dangling-pointer reuse though, since a later update may make the pointer appear valid once again.
  • To fix this, they added a layer of randomness: reallocated objects now use a different memory address, so the dangling pointer no longer points to the beginning of the reallocated object. Since the magic value cannot be found, the validation fails. This only works 90% of the time though. They also found another occurrence of this vulnerability class within the vmwgfx driver; once they triggered it, the check caught the invalid file pointer. Pretty neat detection of the vulnerability.
  • This helps for the file object, but what about other types? struct cred reuse can be a horrifying vulnerability class, which they decided to mitigate as well. They added a canary to the structure but didn't want to fix all accesses of it by hand. So, they added a GCC compiler plugin to do this for them automatically! This was tested against a known vulnerability to confirm it worked.
  • Overall, this is an interesting post into the world of kernel security and mitigations. Good explanations and walkthroughs of the various mitigations.
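The canary-plus-shifted-reallocation idea can be modeled outside the kernel (the real mitigation is C in the kernel allocator, and it uses randomness; the deterministic shift below is a stand-in so the example is reproducible):

```python
# Hypothetical model: live objects carry a magic/canary value, and reuse of a
# slot places the object at a shifted offset, so a dangling pointer no longer
# lands on a valid canary and the dereference is caught.

HEAP = {}                    # address -> object
CANARY = 0xF11E
_alloc_seq = 0               # drives the shifted placement (random in reality)

def alloc_file(slot_addr):
    global _alloc_seq
    _alloc_seq += 1
    addr = slot_addr + 8 * (_alloc_seq % 4)   # shifted placement on reuse
    HEAP[addr] = {"canary": CANARY, "data": "file-object"}
    return addr

def deref_file(addr):
    obj = HEAP.get(addr)
    if obj is None or obj.get("canary") != CANARY:
        raise RuntimeError("stale file pointer detected")
    return obj["data"]

p = alloc_file(0x1000)
assert deref_file(p) == "file-object"
del HEAP[p]                   # free the object
q = alloc_file(0x1000)        # same slot reused at a different offset
try:
    deref_file(p)             # dangling pointer now misses the canary
except RuntimeError as e:
    print(e)
```

As in the post, this is probabilistic: if the shifted placement happens to collide with the old address, the stale pointer survives, hence the "works 90% of the time" caveat.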