Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Exception(al) Failure - Breaking the STM32F1 Read-Out Protection- 1666

Marc Schink & Johannes Obermaier    Reference → Posted 9 Months Ago
  • Microcontrollers require that the firmware binary be loaded into the memory of the chip. On many production chips, it's important that this data cannot be read out; if it could, an attacker could steal industry secrets or secrets particular to the device, such as cryptographic keys. So, many chips disable reading out the chip's flash data altogether.
  • The STM32F1 series does not provide a feature to disable the debug interface (SWD), so it remains usable even with the flash read-out protection on. When initially testing how this worked, they noticed that the reset halt command gave them a Program Counter (PC) holding a valid address located in flash memory. How is the chip able to read from this protected section?
  • When an exception is taken, including reset, the processor loads the corresponding entry address from the vector table. This procedure is called a vector fetch. Upon device reset, the vector table is located in flash memory even though the flash has read-out protection enabled. The reference manual explains why: the reset vector is fetched via the ICode bus, similar to an instruction fetch, which IS allowed. Can this be abused?
  • The initial idea is simple: use the vector fetch to leak the contents of flash via the PC register. On this device, it's possible to manually set the vector table address because debug access is not disabled. By changing this address over and over, triggering a particular exception, and reading the PC once halted, we can leak the contents of the hidden flash!
  • Not all of the entries can be triggered, though, but there's a cool workaround: if the vector table is unaligned and an exception number exceeds the vector table size, the fetch wraps around! Using this, it's possible to access contents that should normally not be reachable. Of course, this assumes that the relevant exceptions can be triggered; the article explains how to trigger them. Even with this trick, not all data can be extracted.
  • Overall, a small yet important observation led to a cool vulnerability. Great write-up on the discovery and exploitation.
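The leak loop described above can be sketched as follows. The `dbg` object and its methods are hypothetical stand-ins for a real SWD debug-probe interface (e.g. something driven through OpenOCD), not the authors' actual tooling:

```python
VTOR = 0xE000ED08    # Cortex-M System Control Block: vector table offset register
HARDFAULT = 3        # vector table index of the HardFault entry

def leak_word(dbg, flash_addr):
    # Relocate the vector table so the chosen exception's entry overlaps the
    # flash word we want to read, trigger the exception, then read back PC.
    dbg.write_word(VTOR, flash_addr - 4 * HARDFAULT)
    dbg.trigger_exception(HARDFAULT)   # e.g. provoke an invalid memory access
    dbg.wait_halted()
    return dbg.read_register("pc")     # PC was just loaded from protected flash
```

Repeating this over the flash address range, one word per exception, dumps the protected contents.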

Cork Exploit Post-Mortem- 1665

Cork Protocol    Reference → Posted 9 Months Ago
  • On March 4th 2025, Cork Protocol Beta was exploited for 3,761 wstETH. This article explains the methods used in the real attack. The project had two audit contests from Sherlock and Cantina, two audits from Quantstamp and Spearbit, and formal verification work by Runtime Verification. It's interesting that this wasn't caught in earlier stages, though it's hard to say whether the vulnerable code existed during the audits. The attack chained two vulnerabilities to make it viable.
  • The first vulnerability was a rollover pricing issue. Cover Token extraction involves a pricing mechanism that determines the price of the Cover Tokens/Depeg Swaps. The risk premium on DS trades (buys and sells) is computed from the trade amount and the time to expiry. For instance, a 0.02 price with 1 year to expiry is a 2% risk premium. As the market progresses towards expiration, the token should gain in value: the shorter timeframe leaves fewer possibilities for a disaster to strike.
  • The formula used is roughly (F/Pt)^(1/T), where T is the time left until expiry. When the expiry is very close, this creates a crazy edge case: a 0.02 price one hour before expiry yields a 17520% risk premium. Premium calculations skyrocket near the end.
  • The exploiter purchased 2.5 DS just 19 minutes before expiry, which resulted in a 1779.7% risk premium. When this price rolled over into the next period, it was highly skewed. In particular, the exploiter converted 0.0000029 wstETH into 3760.8813 CT, draining the entire supply of Cover Tokens from the AMM.
  • The second vulnerability was a pretty deep access control issue. In Uniswap V4, hooks can be attached to pool operations. The Cork hook functionality on the FlashSwapRouter contract contained an access control flaw: although it performed some validation on callback origins, the check was bypassable by spoofing particular input parameters.
  • The Cork article says this was highly sophisticated; the Rekt.news article said it was pretty standard. I find it interesting that these vulnerabilities were missed. According to the rekt.news article, it was out of scope for some of the audits.
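The annualization blow-up can be reproduced numerically. This is a toy sketch, not the exact on-chain formula: I assume the quoted premium is effectively scaled by 1/T, which matches the quoted 2%-at-one-year and 17520%-at-one-hour figures.

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annualized_risk_premium(premium, years_to_expiry):
    # A fixed premium divided by time-to-expiry explodes as expiry approaches.
    return premium / years_to_expiry

two_pct_at_one_year = annualized_risk_premium(0.02, 1.0)                  # 0.02, i.e. 2%
same_price_one_hour = annualized_risk_premium(0.02, 1 / HOURS_PER_YEAR)   # 175.2, i.e. 17520%
```

The key point survives any choice of exact formula: as T approaches zero, the premium diverges, and the rollover inherits the skewed price.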

GitHub MCP Exploited: Accessing private repositories via MCP- 1664

Invariant Labs    Reference → Posted 9 Months Ago
  • The Model Context Protocol (MCP) is a standard for how AI models interact with external data sources and tools. This matters when the model needs context-specific, somewhat dynamic information from the user or their company, such as calendar data. On GitHub, there is an MCP server implementation for fetching GitHub information, such as issues, and it had a vulnerability in it.
  • The attack setup assumes two repos: one public and one private. When the GitHub MCP server gathers information from the public repo (such as issues, which anyone can open), there is the possibility of prompt injection. Once the session is poisoned, the LLM can abuse its further privileges to do malicious things.
  • The LLM can then be tricked into using its existing MCP integrations to leak information from private repos. For instance, it can create a PR or an issue on a public repository containing data from the private repo. It's pretty neat that public data can be used for prompt injection to drive dangerous actions.
  • Fixing this issue is not super straightforward, though: it's an architectural design flaw in the GitHub integration. Using more fine-grained access controls on GitHub tokens sorta works; the company also has a tool for context-aware access control that's interesting. Although I did enjoy the vulnerability, there is too much marketing in the post for my taste.

Splitting the email atom: exploiting parsers to bypass access controls- 1663

Gareth Heyes - Portswigger Labs    Reference → Posted 9 Months Ago
  • The email standard comes from a multitude of RFCs, the earliest written over 50 years ago. Emails can have quoted values, comments, escapes, encodings, and much more. Many applications use the email address for user identification. This post discusses exploiting email parsers to bypass account isolation mechanisms that rely on the perceived domain of the email.
  • While feeding weird inputs to Postfix and Sendmail, they saw an interesting error message: a DSN (delivery status notification) with an invalid host. In particular, oastify.com!collab\@example.com had caused this error. UUCP is an ancient protocol that predates the Internet and email, allowing messages to be sent between Unix systems; the exclamation mark separates the host from the user part of the address. Because of leftover UUCP support, mail for this address goes to oastify.com. We're getting somewhere!
  • Doing similar things to Postfix produced collab%psres.net(@example.com via ancient source routes, which allow you to chain servers together to send mail. This was sent to BOTH example.com and psres.net as a result. The key is that the ( comments out the domain part of the email, so Postfix uses the local-part of the source route and sends the email to an unexpected location. All of this made Gareth want to dive deeper into email parsing.
  • Encoded-word is an email scheme that allows characters beyond what headers normally permit. An encoded word starts with =?, followed by the charset and the type of encoding within question marks, then the encoded data, and finally ends with ?=. For instance, =?utf-8?q?=41=42=43?=@psres.net results in ABC@psres.net.
  • Upon testing this on several systems, they found that several websites using Ruby accepted it; the library allowed utf-7 and several other charsets. Armed with all this knowledge, they began tackling real targets, such as GitHub, to see what they could uncover. The main goal was to trick the parser about the email's domain to gain access to something they shouldn't have. One problem with exploitation is the lack of feedback, though.
  • On GitHub, they played around with double quotes, encoding, and such for a long time. Eventually, they noticed that encoded @ signs would be processed. Unfortunately, exploitation still failed because of the data that followed. So, they added an extra encoded null byte, which cancelled out the rest of the email data. This gave them an arbitrary trailing domain while the email was delivered to a user-controlled location, allowing them to bypass the IdP verification on GitHub. A similar thing was done on Zendesk.
  • On GitLab, they were able to use encoded spaces to get an email sent to more than one address! Including an encoded space (=20) splits the field into two addresses, which email standards allow for delivering to multiple recipients. The rest of the provided address was treated as a second email, even though the service only saw one address for validation purposes. They found a similar exploit using > as well.
  • They found that another email-parsing library supported Punycode within the address. They used this trick in Joomla to get XSS by crafting a domain whose Punycode-decoded form rendered in the page as an opening style tag. From there, they leaked the CSRF token via the XSS and got RCE by installing a malicious extension. Pretty neat!
  • The end of the article has several other interesting attack vectors, like using double quotes to add optional fields into an email, such as the ORCPT field. To protect against this, the article suggests blocking all encoded-word-related patterns, like =[?].+[?]=. Besides this, the domain of an email should not be fully trusted when it comes from an SSO provider. Great post on parser differentials, once again from Portswigger!
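The encoded-word mechanics above can be reproduced with a minimal decoder. This is an illustrative sketch of the Q-encoding scheme only, not any specific parser from the research:

```python
import quopri
import re

# Minimal RFC 2047 "Q" encoded-word decoder: =?charset?q?data?=
ENCODED_WORD = re.compile(r'=\?([^?]+)\?[qQ]\?([^?]*)\?=')

def decode_encoded_word(value):
    match = ENCODED_WORD.match(value)
    if not match:
        return value
    charset, data = match.groups()
    # In Q-encoding, '_' means space and =XX is a hex-encoded byte
    decoded = quopri.decodestring(data.replace('_', ' ').encode())
    return decoded.decode(charset) + value[match.end():]

decode_encoded_word('=?utf-8?q?=41=42=43?=@psres.net')  # 'ABC@psres.net'
```

The attack surface comes precisely from one component (the mail parser) decoding this while another component (the application's validation) treats the raw string as the identity.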

The Cetus AMM $200M Hack: How a Flawed “Overflow” Check Led to Catastrophic Loss- 1662

dedaub    Reference → Posted 9 Months Ago
  • Cetus, an AMM on Sui, was exploited for over $223M in losses. Token prices on Sui dropped by 80%.
  • The technical parts of the bug are pretty interesting. The AMM uses concentrated-liquidity tick math, which is notoriously complicated to do correctly. In the function get_delta_a, an integer overflow could occur when performing a trade, in the computation of the number of tokens that needed to be sent in to execute the trade the user requested.
  • This integer overflow (really a truncation) was identified and protected against, though: there was a check meant to ensure that the value being shifted could NOT exceed 192 bits. This should have been sufficient to prevent the exploit, but the check was flawed. A sane check would have been n >= (1 << 192). Instead, the bound used was 0xffffffffffffffff << 192, which is much closer to 2 ** 256 in reality. Crazy!
  • Due to the failed detection, the numerator of a later division operation was truncated where the math assumes the value fits in 192 bits. Since this is how the AMM determines the number of tokens required to trade in for the requested tokens, this is catastrophic. By skewing the pool via a flash loan and performing this trick, the attacker was able to transfer in a SINGLE token as collateral.
  • Sui can block accounts and freeze funds at the validator level. This was done to prevent funds from leaving the ecosystem, which had already occurred through other bridges. The hacker has not responded to any of Cetus' asks so far.
  • The auditing side of this is very interesting. According to Rekt.news, the exploit was actually in a third-party math library. On top of this, this vulnerability had already been found in the Aptos implementation by OtterSec; when the library was ported to Sui Move, the vulnerability appeared again. Several audits were performed, but this bug was not found.

Cryptography Principles- 1661

Filippo Valsorda    Reference → Posted 9 Months Ago
  • These are the development principles behind Go's cryptography libraries. I find it cool that they take the design of Go's cryptography this seriously. There are four design principles: secure, safe, practical, and modern.
  • Secure is obvious but important to note. This is achieved by reducing complexity, making it readable, and conducting extensive testing and code review. When a big change is made, it is only accepted in the Cryptography libraries if there are enough maintainer resources to perform an ongoing security review. They get code professionally reviewed from time to time, such as with the FIPS 140-3 module.
  • Safe is the second one. The goal is to make unsafe functionality hard to use and have very explicit documentation on it. By default, only secure versions are used. Since this is done for most use cases, this limits the opportunities for issues.
  • Practical is the third. Libraries should provide developers with mechanisms to do what they want to do easily. By supporting common patterns as first-party, the library is easy and safe to use. This is super unique compared to other libraries that just expose RSA and AES functions directly. Instead, the library has a Hash() function that defaults to the most secure and up-to-date hash function. All of this takes away the decision-making of algorithms and implementation from the developers, which is good. I love this approach!
  • Finally, the cryptography should be modern: all primitives should be up to date, and legacy functionality should be marked as deprecated. Because of the slow development process, third-party projects will implement things first, but that's considered okay. I personally don't like this a ton -- somebody is going to implement this functionality, so it should be the people who know it best. By waiting for things to settle, you prevent issues from creeping into your library, but you also leave users at risk.
  • The Practical section has an interesting quote: "Note that performance, flexibility and compatibility are only goals to the extent that they make the libraries useful, not as absolute values in themselves."
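A toy Python illustration of the "practical" principle (this mirrors the spirit of the design, not Go's actual API; the function name and default are assumptions):

```python
import hashlib

_DEFAULT_ALGORITHM = "sha256"   # chosen and upgraded by the library, not the caller

def secure_hash(data: bytes) -> bytes:
    # Callers never pick the algorithm, so they can never pick a broken one.
    return hashlib.new(_DEFAULT_ALGORITHM, data).digest()
```

The design choice is that algorithm agility lives inside the library: when the default needs to change, one constant moves, and every caller is upgraded.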

[CVE-2025-37752] Two Bytes Of Madness: Pwning The Linux Kernel With A 0x0000 Written 262636 Bytes Out-Of-Bounds- 1660

D3VIL    Reference → Posted 9 Months Ago
  • The SFQ qdisc in Linux distributes packet throughput fairly between different network data flows; if the packet count exceeds the limit, packets are dropped. Through a complex interaction of 3 packets, a type confusion eventually occurs. This type confusion leads to an integer underflow on an index, resulting in an out-of-bounds write of the value 0x0000. In practice, it writes 262636 bytes past the vulnerable qdisc object.
  • The initial vulnerability was patched by not allowing a limit of 1 on the qdisc. However, the limit can still get SET to 1 because of a min operation performed later. This was discovered through Google's syzkaller fuzzer. Is a two-byte write of an uncontrolled value at an uncontrolled location even useful as a primitive? Memory corruption is a powerful beast! The focus of the article is on the exploitation of this issue.
  • The first goal was to reduce the number of crashes. Right after the OOB write occurs, two invalid pointer accesses happen. One of them was conquered by spraying valid objects containing pointers into the proper malloc slab. The other case was a little trickier: they solved it using application-level setup to ensure that the path that led to a crash never happened. Now, the OOB is stable!
  • The offsets where this write could land were very limited. They had previously made a tool that converts the Linux structures into a Python-queryable interface and searched for all fields within these ranges that could be useful. After a lot of review, they came across the pipe_inode_info.files field in the kmalloc-192 slab. From reading the code, zeroing this counter triggers a page-level use-after-free!
  • Working from the page-level use-after-free, the author figured out how to overwrite the process credentials with zeros to get root privileges. This exploit worked about 35% of the time; to make it more reliable, there are likely side channels to work around one of the main crashes. On the mitigation front, they found that guard pages would hinder exploitation substantially.
  • Overall, a super technical and excellent blog post on exploiting the vulnerability. At first glance, this seems unexploitable but this just proves how powerful memory corruption can be. I found the section on making the aftermath not crash to be super interesting as well.
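The index underflow at the heart of the bug can be sketched in one line. This is purely illustrative (the field width and masking are assumptions, not the kernel's actual code): decrementing a 16-bit slot index past zero wraps to 0xFFFF, producing a huge out-of-bounds offset.

```python
SLOT_MASK = 0xFFFF   # assumed 16-bit index arithmetic

def prev_slot(index):
    # Underflow: 0 - 1 wraps around instead of being caught
    return (index - 1) & SLOT_MASK

prev_slot(0)   # 0xFFFF: far past the end of the slot array
```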

How Broken OTPs and Open Endpoints Turned a Dating App Into a Stalker’s Playground- 1659

Alex Schapiro    Reference → Posted 9 Months Ago
  • Another rushed app launch, and another set of horrific vulnerabilities. Writing secure code is hard and takes time and lots of effort to get right. This is a prime example of what can go wrong: the author briefly reviewed an app called Cerca and found some bad issues.
  • First, they downloaded the app and opened it in a proxy. The app uses OTP-based sign-in (a code sent to a phone number). When looking at the response for submitting this request, the OTP was simply included in the response. Obviously, this means you can access anyone's account with just their phone number. Yikes.
  • The website had an openapi.json file that described all of the endpoints on the website. The goal was to find a way to enumerate users, get their phone numbers, and compromise all accounts. The endpoint /user/{user_id} returns exactly this. Since these IDs were sequential, they could just brute force all accounts very quickly.
  • The data accessible to them was vast—sexual preferences, passport information, personal messages—all of the good stuff. This is a complete invasion of privacy. The company fixed the vulnerabilities once they were reported, but made no public announcement about it—this is likely to avoid a PR nightmare.
  • Privacy is hard to get correct and requires careful design. Should a user be easily identifiable and found with just an ID? How about a phone number? These considerations depend on the app, but it's always something to think about.

One-Click RCE in ASUS’s Preinstalled Driver Software- 1658

Mr Bruh    Reference → Posted 9 Months Ago
  • The author of this post bought an ASUS motherboard for their PC. Under the hood, it installed a bunch of software into the OS, one piece of which was Driver Hub. Its job is installing software from driverhub.asus.com via a background process.
  • The website uses RPC to talk to a background process running on the system, which hosts a service locally on 127.0.0.1 port 53000. Given that any website can interact with 127.0.0.1 on your local system, this was a pretty interesting attack surface. The ability to install arbitrary software would be pretty powerful!
  • The driver had a check to ensure the origin was set to driverhub.asus.com. However, the origin check was flimsy: it appeared to be a startsWith check, so driverhub.asus.com.mrbruh.com was also accepted as a valid origin. After a long while of reverse engineering the .exe, they found a list of callable functions, including InstallApp and UpdateApp. UpdateApp takes a URL (which, again, was poorly validated) and runs any executable signed by ASUS. The signature check would seem to rule out RCE.
  • The way UpdateApp works has some nuances though. Here's the flow:
    1. Saves the file with the name specified at the end of the URL.
    2. If the file is signed by ASUS, then it will be executed with admin permissions.
    3. If the file fails the signing check, then it does NOT get deleted.
  • The author looked into the packaging of the WiFi driver. It contained a ZIP file with an executable, a command script, and a configuration file. The AsusSetup.exe from this package is a signed installer that uses the other components inside the ZIP to install things. Based upon the information in the configuration file, it executes whatever SilentInstallRun specifies, without any signature checks. Additionally, adding the -s flag suppresses the installation pop-up entirely.
  • Here's the full exploit:
    1. Create a website with the domain driverhub.asus.com.*.
    2. Have the website make a request to download a malicious binary via UpdateApp. This is not executed right away.
    3. Call UpdateApp again with the custom AsusSetup.ini file.
    4. Call UpdateApp one final time to trigger the vulnerability.
  • Overall, a great find and a solid bug report!
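The flimsy origin check can be sketched as follows. The exact implementation is an assumption; the article only reports that it behaved like a prefix match:

```python
def origin_allowed_flawed(origin):
    # Prefix match: any host *starting with* the trusted domain passes
    return origin.startswith("driverhub.asus.com")

def origin_allowed_fixed(origin):
    # Exact comparison against the trusted host
    return origin == "driverhub.asus.com"

origin_allowed_flawed("driverhub.asus.com.mrbruh.com")  # True: the bypass
```

This is a classic pattern: validating a hostname with startsWith (or a non-anchored regex) lets an attacker register a domain that embeds the trusted name as a prefix.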

Statistical Analysis to Detect Uncommon Code- 1657

Tim Blazytko    Reference → Posted 9 Months Ago
  • Statistical analysis is used all the time in computer science for solving hard problems. In particular, machine learning has hit a big boom lately. Sometimes, simple statistical analysis can solve hard problems instead of the insanity of LLMs. In this post, we get one of those cases.
  • n-gram statistical analysis is common in linguistics. Simply put, it takes groupings of tokens, such as words, and estimates how likely each grouping is to occur. Based upon this, it's possible to predict text by picking the most likely next word.
  • The author has chosen to use this technique for binary analysis on machine code. From testing, they figured out that 3-grams work well without overfitting. I'm guessing they tried several different n-gram sizes. Previous work has shown the ability both to identify anomalies in code and to find patterns that help reverse engineer unknown ISAs.
  • To do this analysis, the author lifted the binary into a Binary Ninja intermediate language, additionally stripping registers and memory addresses to make the tokens more general. From this, they analyzed a large number of binaries to establish a ground truth. Now, they can start analyzing new binaries to look for anomalies!
  • While looking into malware, they were able to identify control-flow flattening obfuscation: every function flagged by the heuristic was either obfuscated or a helper managing the obfuscated state. In the Windows kernel, they analyzed the Warbird virtual machine. By finding an obscure pattern of code in the asm, they were able to locate VM handlers that were obfuscated inside the VM.
  • They also analyzed a mobile DRM system that plays encrypted multimedia content. Using the technique, they were able to identify areas obfuscated with Mixed Boolean Arithmetic as well as usages of hardware encryption. This was enough to demonstrate they were looking in the proper area.
  • Stats don't lie! Statistics is useful for many things, including binary analysis. Great post on using techniques from other disciplines in the realm of security.
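The core of the technique can be sketched in a few lines of Python. This is a generic n-gram anomaly scorer, not the author's Binary Ninja tooling:

```python
from collections import Counter

def ngrams(tokens, n=3):
    # All consecutive n-token windows of the stream
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train(corpus, n=3):
    # Count 3-grams over a ground-truth corpus of "normal" token streams
    counts = Counter()
    for tokens in corpus:
        counts.update(ngrams(tokens, n))
    return counts

def anomaly_score(model, tokens, n=3):
    # Fraction of n-grams never seen during training: higher == more unusual
    grams = ngrams(tokens, n)
    if not grams:
        return 0.0
    return sum(1 for g in grams if g not in model) / len(grams)
```

Training on lifted, normalized instructions (registers and addresses stripped, as the post describes) and scoring each function would surface obfuscated regions as high-score outliers.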