Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Security Advisory: Clock Fault Injection on Mocor OS – Password Bypass- 1222

One Key    Reference →Posted 2 Years Ago
  • Mocor OS is a proprietary OS from UNISOC. This OS is used by various phone vendors such as Nokia, TCL, and others.
  • During the initial boot-up process, the phone enforces a user-lock password. Without knowledge of it, it should not be possible to access data on the phone.
  • The author found a weird (and not very well explained) loophole in the code: when a software reboot is triggered on the SoC via a crash, certain permission checks that run during a regular boot are skipped.
  • Glitching the chip triggers exactly this path, and it does not require fancy equipment. Simply connect GND to the CLK line for 50-100 ms during the password check and the check is bypassed.
  • This article was confusing to me, but it seems the soft reboot during the password prompt assumes the system already booted securely, so it takes a shortcut past the checks. To be honest, I'm not sure this is the full story, but with such a large timing window, this looks more like a software bug than a hardware bug.

Cookieless DuoDrop: IIS Auth Bypass & App Pool Privesc in ASP.NET Framework (CVE-2023-36899)- 1221

Soroush Dalili    Reference →Posted 2 Years Ago
  • On the web, the go-to method for maintaining state over the stateless HTTP protocol is cookies. The .NET Framework included a way of embedding the session cookie into the URL for clients that couldn't support cookies. This appeared as a segment like (S(aaaaaaaaaaaaaaaaaaaaaaaa)) in part of the URL path.
  • Historically, this has been bad news for WAFs and a source of session issues such as session fixation, session hijacking, and more. The author links to various posts about previous issues. Due to these security concerns, the feature was removed in newer versions of .NET Core.
  • From the WAF bypasses the author has posted on Twitter, it's clear that putting session segments into the middle of a URL causes weird problems. While testing new WAF bypass techniques, they noticed two anomalies.
  • The cookieless feature could circumvent protected directories and URL filters in IIS. Normally, requests to these paths would be blocked; however, by including two session segments in the URL, the validation was bypassed. Why does this occur?
  • In the cookieless-path rewrite logic of the .NET Framework, the session segment appears to be removed only once for verification. Then, during resolution, the second segment is removed as well, allowing the user to reach the protected path. At least, that is how it appears; the post does not explain it in great detail.
  • In IIS, there are application pools: some paths use one pool while others use another. By using the double-session path from above, it is possible to cause application pool confusion, which can lead to privilege escalation in the right scenario.
  • Interesting bug! It turns out that string parsing is very hard to do correctly. Double-adding a value is something I'll be testing for in the future.
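A minimal Python sketch of the suspected double-stripping behavior (the regex, directory names, and the once-vs-all split are my assumptions, not the actual ASP.NET code): if the access check strips only the first session segment while the resolver strips them all, a URL carrying two segments slips past a filter on the protected path.

```python
import re

# Cookieless session segments look like (S(<id>))/ inside the path.
SESSION = re.compile(r"\(S\([^)]*\)\)/")

def strip_once(path):
    # What the verification step appears to see (assumption: one removal).
    return SESSION.sub("", path, count=1)

def strip_all(path):
    # What the resolver ultimately serves (assumption: all removed).
    return SESSION.sub("", path)

PROTECTED = "/WebResource/"
evil = "/(S(a))/(S(b))/WebResource/page.aspx"

print(strip_once(evil).startswith(PROTECTED))  # False: the filter lets it through
print(strip_all(evil))                         # /WebResource/page.aspx: protected file served
```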

JTAG 'Hacking' the Original XBOX in 2023- 1220

Markus Gaasedelen - RET2    Reference →Posted 2 Years Ago
  • The original XBox was pwned hard very soon after its release through various methods. One idea that was ruled out early on was using JTAG, even though it would have been a gold mine: debugging capabilities that have never otherwise been possible on the console.
  • There were two reasons for this. First, the TRST# line was tied off underneath the chip, holding the JTAG chain in reset and making it difficult to access. Second, reverse engineering the JTAG interface would have been non-trivial as well. But, it's 2023! So, the author gave it a try.
  • Instead of modifying the hardware in place to get JTAG working, the author decided to remove the chip entirely. By creating a breakout PCB, they could isolate the JTAG signals from the rest of the CPU signals, which drastically helped in the reverse engineering process. The board cost them $20 USD, which is super cheap.
  • What's an interposer board? Great question! For a BGA chip, the idea is to solder the interposer board onto the original CPU location and then mount the CPU on top of it. This allows the CPU to function normally, with the ability to see and interact with the JTAG signals from breakout pads.
  • That means not one but TWO reflows, which is incredibly difficult to do correctly. From there, they purchased a Pentium III JTAG debugger to attempt to connect.
  • This did not work straight away: the System Management Controller (an MCU) on the original XBox expects the CPU to pass a set of integrity checks at the beginning of boot, and resuming the CPU after attaching the debugger was not fast enough to pass them. So, the author set up an Arduino sketch on the I2C bus to fulfill these checks instead.
  • With that, they had a JTAG-debuggable system, and extracting the secret ROM was a trivial feat. Overall, an impressive project in its own right. I enjoyed the interposer board setup and the guide to performing this. Awesome post!

TunnelCrack- 1219

Mathy Vanhoef    Reference →Posted 2 Years Ago
  • VPNs are used in order to prevent snooping or internet tracking. In this article, the authors go over widespread issues they found with VPN apps.
  • When a user joins a network, the local subnet is assigned by that network, and there is no validation that the assigned range is legitimate. If the IP address of a target domain is 1.2.3.4, then setting the subnet to 1.2.3.0/24 will cause traffic to that server to be treated as local and sent outside the tunnel.
  • This happens because the VPN app allows direct access to the local network while using the VPN. What happens? We can force the VPN to send traffic outside of the tunnel by giving it a destination that looks like a local IP. This affected all tested iOS apps, and many on macOS, Windows, and Linux.
  • The second attack abuses the fact that most VPNs do not tunnel traffic destined for the IP of the VPN server itself. That traffic should already be encrypted, so this shouldn't matter; however, it is vulnerable to a classic DNS spoofing issue, where the response for the VPN server's domain is spoofed to point at a different IP.
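The LocalNet routing decision can be sketched in a few lines of Python (the helper and its name are illustrative, not from the paper): the client exempts the local subnet from the tunnel, so an attacker-chosen subnet that covers the target IP pulls that traffic into the clear.

```python
import ipaddress

def goes_through_tunnel(dst_ip, local_subnet):
    # Simplified model: traffic to the local subnet bypasses the VPN.
    return ipaddress.ip_address(dst_ip) not in ipaddress.ip_network(local_subnet)

# Honest network: the target is remote, so it is tunneled.
print(goes_through_tunnel("1.2.3.4", "192.168.1.0/24"))  # True
# Malicious network claims 1.2.3.0/24 is local: traffic leaks.
print(goes_through_tunnel("1.2.3.4", "1.2.3.0/24"))      # False
```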

LeetSwap- 1218

BlockSec    Reference →Posted 2 Years Ago
  • LeetSwap is a decentralized token exchange. It's a fork of Solidly.
  • In Solidity, private and internal functions start with an _ (underscore) by convention, but in practice the declared visibility is what matters. In the case of this protocol, the function _transferFeesSupportingTaxToken() was declared public, even though its name begins with an underscore.
  • Although the name says tax tokens, the function takes in a token address and an amount, then sends those tokens to the fees contract owner. So, what's the big deal? The attacker does not get sent the money directly.
  • How do we exploit this? Since this is an automated market maker (AMM), prices are dictated by the amounts of the assets in the protocol. Since we can arbitrarily move assets out of the protocol, we can manipulate the trading rates. Here's a step-by-step for hitting a single pool if we were attacking a WETH-SOMETOKEN pair:
    1. Swap WETH for SOMETOKEN at the market rate.
    2. Call _transferFeesSupportingTaxToken() to transfer the SOMETOKEN out of the protocol. This will make the exchange rate for trading SOMETOKEN to WETH favorable.
    3. Call the sync() function to fix the pool amounts used for calculations.
    4. Swap back SOMETOKEN for WETH at the favorable rate to drain the protocol of most of its WETH.
  • Get audits, people! Security is hard. A junior auditor would have trivially caught this bug.
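The steps above can be simulated with a toy constant-product (x*y=k) pool in Python; the numbers and the simplified draining step are my own, not LeetSwap's actual contract code:

```python
# Toy constant-product AMM pool. sync() is implicit here because the
# reserves are the only state.
class Pool:
    def __init__(self, weth, token):
        self.weth, self.token = weth, token

    def swap_weth_for_token(self, weth_in):
        k = self.weth * self.token          # invariant before the trade
        self.weth += weth_in
        out = self.token - k / self.weth    # keep x*y=k after the trade
        self.token -= out
        return out

    def swap_token_for_weth(self, token_in):
        k = self.weth * self.token
        self.token += token_in
        out = self.weth - k / self.token
        self.weth -= out
        return out

pool = Pool(weth=100.0, token=100_000.0)
tokens = pool.swap_weth_for_token(10.0)      # step 1: ~9,091 SOMETOKEN
pool.token -= 80_000.0                       # step 2: public fee function drains reserves
weth_out = pool.swap_token_for_weth(tokens)  # step 4: swap back at the skewed rate
print(weth_out)                              # ~50 WETH for the 10 put in
```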

Curve, Vyper- 1217

Rekt    Reference →Posted 2 Years Ago
  • Curve Finance is a central protocol within the DeFi ecosystem. The protocol was written in the Vyper language because of its gas efficiency.
  • Most people assumed that the exploits were due to a known read-only reentrancy within Curve. However, the more people dove into the issues, the more they realized it ran deeper. It looked like a reentrancy issue in Curve, but how is that possible? It had been audited multiple times!
  • How did the compiler mess this up? According to the analysis, there is a mismatch in the storage slots being checked for reentrancy. This means that the protection was per function instead of per contract, which is really bad for the protocol.
  • From the commit hash, it appears that a check was missing for whether a reentrancy lock had already been allocated. This resulted in a separate lock being made per function, which makes the reentrancy protection possible to work around. BlockSec's image shows the change that made the code vulnerable.
  • Initially, it seemed crazy to me that basic tests did not catch this. However, a developer would naturally write a test that reenters the same function back-to-back, and in that case the protection would have worked. To trigger the bug, one function in the contract and then a different one would have to be called. Writing the extra test cases in a test suite can pay off!
  • A crazy bug that led to massive losses. I wonder if people will use Vyper in the future or if they will only use Solidity.
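A Python stand-in for the lock behavior described above (not Vyper, and the function names are illustrative): a lock slot per function blocks same-function reentry, which is what a naive test exercises, but not reentry through a sibling function.

```python
class PerFunctionLocks:
    """The buggy pattern: a separate lock slot for each function."""
    def __init__(self):
        self.locks = {}

    def enter(self, fn):
        if self.locks.get(fn):
            raise RuntimeError("reentrancy blocked")
        self.locks[fn] = True

    def exit(self, fn):
        self.locks[fn] = False

guard = PerFunctionLocks()
guard.enter("add_liquidity")        # a contract call is in progress
try:                                # reentering the SAME function...
    guard.enter("add_liquidity")
    same_fn_blocked = False
except RuntimeError:
    same_fn_blocked = True          # ...is caught, so naive tests pass
guard.enter("remove_liquidity")     # a sibling function is NOT caught
print(same_fn_blocked)              # True
```

With one shared slot for the whole contract, the second `enter` would raise regardless of which function is reentered.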

Use-after-freedom: MiraclePtr - 1216

Google - MiraclePtr    Reference →Posted 2 Years Ago
  • Half of the exploitable bugs in Chrome are use-after-frees (UAF). Killing this bug class with mitigations would eliminate a lot of exploitable 0-days.
  • The Chrome browser runs in a sandbox. Compromising the renderer is the easy part, and there is not much to gain from doing that alone. Escaping the sandbox into the browser process is the hard part: it has fewer ways to interact with it and is scoped down in terms of attack surface.
  • We only care if an attacker can escape into the browser process, not about the renderer compromise itself. So, the idea is to reduce the attack surface of the browser process in order to make exploitation harder. How? MiraclePtr.
  • The goal is to rewrite the code base to use MiraclePtr instead of raw pointers. Under the hood, the algorithm works like reference counting for each pointer.
  • The main difference is that when memory is freed while references to it remain, the PartitionAlloc allocator quarantines the memory region instead of releasing it. Additionally, the freed memory is poisoned with garbage so that a UAF would not be very useful.
  • The authors rewrote 15K raw pointers. Although this is not all of them, it reduces the attack surface drastically, and they hope to move this into more parts of the code base too. Overall, this is a super interesting software mitigation for memory corruption bugs.
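A hedged Python sketch of the scheme as summarized above (Chrome's real implementation lives inside PartitionAlloc and differs in detail): each allocation carries a reference count, and freeing while references remain poisons the data and quarantines the slot instead of returning it for reuse.

```python
class Slot:
    def __init__(self, data):
        self.data = data
        self.refs = 0
        self.freed = False

class Allocator:
    def __init__(self):
        self.quarantine = []

    def acquire(self, slot):    # a MiraclePtr starts pointing at slot
        slot.refs += 1

    def release(self, slot):    # a MiraclePtr stops pointing at slot
        slot.refs -= 1
        if slot.freed and slot.refs == 0:
            self.quarantine.remove(slot)  # safe to truly release now

    def free(self, slot):
        slot.data = 0xDEADBEEF  # poison: a UAF read sees garbage
        slot.freed = True
        if slot.refs > 0:
            self.quarantine.append(slot)  # still referenced: quarantine

alloc = Allocator()
s = Slot("secret")
alloc.acquire(s)         # some object still holds a pointer
alloc.free(s)            # freed while referenced: poisoned + quarantined
print(s.data)            # 0xDEADBEEF, not attacker-reusable memory
alloc.release(s)         # last reference gone: leaves quarantine
print(alloc.quarantine)  # []
```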

Finding Two 0-Days by Reading Old CVEs- 1215

Sagitz    Reference →Posted 2 Years Ago
  • Sagitz read about a Linux kernel privilege escalation, CVE-2023-0386. The vulnerability exploited OverlayFS: SUID files from a nosuid mount could be copied to outside directories, making escalation to root trivial.
  • To mitigate this problem, a check was added to verify that the owner of the modified file is mapped in the current user namespace. This stops the SUID exploit, since SUID binaries must be owned by root to be effective.
  • Where there is one bug, there may be several variants of the issue. The authors asked: is there any other way to elevate privileges? There are also file capabilities, a way to grant root-like capabilities to a file without needing it to be owned by root.
  • By using file capabilities instead of SUID binaries, the same exploit method can be used. The exploit only worked on one of the author's systems, but why? They decided to reverse their search: are there any other places where file capabilities are copied without conversion?
  • With this approach, they found a variant of the issue in another place. The vulnerability is so easy to exploit that it can be done with a bash script. For more on these bugs, read here.
  • The mindset of going from an old CVE to a new bug to another new bug was awesome to see. Really good commentary on how hackers find vulnerabilities and how to use other research to propel yourself.

Hacking Auto-GPT and escaping its docker container- 1214

Positive Security - Lukas Euler    Reference →Posted 2 Years Ago
  • Auto-GPT is a command-line application that takes a high-level description of a goal and breaks it up into sub-tasks. It works by passing the user's initial text to an LLM; based on the response, a command is executed. These range from browsing websites to writing files to executing Python code.
  • The authors took the direction of seeing whether incoming input from mediums besides the user's own text could be a security threat. So, they focused on browse_website and other functions along these lines. One idea would be to force a sponsored search result to return tainted data that could act as malicious input to the system.
  • When grabbing data from a website, it was passed back through the LLM, so any data returned to the user had to be part of the LLM's response. To get around this, they found that data included in a hyperlink was directly included in the output, and they used more prompt injection to return arbitrary data as well.
  • From there, they wanted to convince Auto-GPT to execute arbitrary code, keeping the payload as small as possible to ensure that no rewrites happened. Their plan was to use requests to fetch a script from the internet and eval it. Auto-GPT saw a security issue with this, so they used some misdirection with curl to trick the program into thinking that the usage of eval was safe in this case. This level of code execution was still within Auto-GPT itself, though.
  • Their goal was code execution within the Docker container, not just within the LLM loop. They found multiple commands that made this trivial: write_to_file and execute_shell were easy targets. There is a catch though: many of these commands require a confirmation from the user.
  • The authors found that ANSI escape sequences were rendered in the terminal. This could be used to spoof model statements, which is a pretty awesome bug. At this point, even with code execution, we are still within the container.
  • The compose file (docker-compose.yml) is mounted into the container. Because of this, an attacker can write to it in order to escape the container on the next run. There is an alternative setup where Python code is executed within a clean, separate Docker container; however, execute_python_code has a directory traversal vulnerability that allows for the modification of Python scripts outside the intended directory.
  • Overall, a super interesting post that dives into the future: multi-layer prompt injection to reach dangerous functionality, then abusing that functionality to get code execution. Pretty neat!
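The execute_python_code traversal boils down to an unchecked path join; a minimal Python sketch (the directory names are assumptions, not Auto-GPT's actual layout):

```python
import posixpath  # posixpath keeps the example platform-independent

WORKSPACE = "/app/workspace"

def resolve(basename):
    # The flaw: the attacker-supplied name is joined without checking
    # that the result stays inside WORKSPACE.
    return posixpath.normpath(posixpath.join(WORKSPACE, basename))

print(resolve("script.py"))              # /app/workspace/script.py
print(resolve("../docker-compose.yml"))  # /app/docker-compose.yml -- escaped
```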

PalmSwap Hack- 1213

Quill Audits    Reference →Posted 2 Years Ago
  • PalmSwap is a decentralized leveraged trading platform. The calculations for betting on the price going up or down must be done properly. There are two tokens at play: USD Palm (USDP) and Palm Liquidity Provider (PLP).
  • When removing liquidity, the price is calculated using the getAum() function, which multiplies the pool amount by the token's price from an external oracle to get the amount of received tokens.
  • Calling buyUSDP() increases the price of USDP and increases the pool amount; however, the removal process never decreases the price. The flaw is that the calculations are not 1-to-1 between adding and removing assets: the discrepancy gives roughly a 1-to-1.9 ratio, which is way too easy to make money from.
  • How was this attack performed?
    1. Flash loan for 3 Million USD.
    2. Purchase a large amount of PLP with purchasePLP(), about 1 million of the original amount. Under the hood, this buys USDP and mints PLP at a 1-to-1 ratio, then stakes it for the user.
    3. Purchase USDP directly by calling buyUSDP() with the rest of the funds. The problem is that the exchange rate has gone up between USDP and PLP, even though nothing has really changed.
    4. Unstake the amount from step 2 in order to get USDP at the inflated rate.
    5. Call sellUSDP() to sell all of the staked amount.
  • Another report from BlockSec can be found here as well. Overall, a bad functional bug led to a major exploit. It's weird that this was not caught in testing.
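The attack flow above can be condensed into a toy model (the accounting is my guess at the asymmetry, not PalmSwap's real code): buyUSDP() grows the pool that getAum()-style pricing reads without minting any PLP, so previously staked PLP redeems for more than it cost.

```python
class Vault:
    def __init__(self):
        self.pool = 0.0   # USD backing counted by the pricing function
        self.plp = 0.0    # PLP supply

    def purchase_plp(self, usd):  # mints and stakes PLP 1-to-1
        self.pool += usd
        self.plp += usd
        return usd

    def buy_usdp(self, usd):      # bug (assumed): grows pool, mints no PLP
        self.pool += usd
        return usd

    def plp_price(self):          # getAum()-style: backing / supply
        return self.pool / self.plp

    def redeem_plp(self, plp):    # unstake and sell at the current price
        usd = plp * self.plp_price()
        self.pool -= usd
        self.plp -= plp
        return usd

vault = Vault()
staked = vault.purchase_plp(1_000_000)  # step 2: ~1M of the loan buys PLP
vault.buy_usdp(2_000_000)               # step 3: inflates the PLP price
payout = vault.redeem_plp(staked)       # steps 4-5: unstake and sell
print(payout)                           # 3000000.0 back from 1M of PLP
```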