Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Critical SQL Injection Vulnerability in Django (CVE-2025-64459) - 1785

Endor Labs    Reference → Posted 4 Months Ago
  • Django, a Python web framework, contains an Object Relational Mapper (ORM). This is a set of APIs for performing data storage that uses SQL under the hood but doesn't actually require the writing of SQL. There is a set of QuerySet methods that interact with an underlying database. From a security perspective, this is great because it should prevent SQL injection from the beginning.
  • When interacting with the QuerySet methods, there are mandatory and optional parameters. An example QuerySet method is get(), which accepts specific named parameters. In Python, the syntax func(**var) in a function call unpacks var as key/value pairs, where each key names a parameter and each value becomes its argument.
  • After reviewing the code of the QuerySet APIs, the researchers noticed that two parameters, _connector and _negated, were not adequately filtered for SQL injection. The catch is that these aren't usually attacker-controllable values.
  • Setting internal parameters on a function call is fine on its own. But this is where the **var syntax comes into play: if an attacker can control the contents of **var passed to one of the vulnerable functions, they control the parameters vulnerable to SQL injection! The post claims this can lead to authentication bypass, data exfil, and privilege escalation, which is true but context-dependent.
  • This is labeled as critical with a CVSS score of 9.1. Personally, I find the post slightly exaggerated in terms of impact. Yes, there's a SQL injection (which is a good find), but how many applications follow the pattern above? Probably not a lot. Showing the impact of libraries is hard because there are no direct things at risk; it all depends on how people use them.
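The dangerous pattern is easiest to see outside Django. Here is a minimal sketch of the kwargs-smuggling idea; build_filter and its _connector parameter are illustrative stand-ins, not Django's actual internals:

```python
# Minimal sketch of the **var pattern: if attacker-controlled keys are
# splatted into a call, "internal" keyword parameters become reachable.
# build_filter and _connector are illustrative, not Django's real code.

def build_filter(_connector="AND", **conditions):
    # _connector is meant to be set only by internal callers
    clause = f" {_connector} ".join(f"{col} = ?" for col in conditions)
    return f"({clause})"

# Intended use: the caller controls only the column conditions.
safe = build_filter(username="alice", role="user")
# -> "(username = ? AND role = ?)"

# Dangerous pattern: splatting attacker-controlled JSON into the call
# lets the attacker set the internal parameter too.
attacker_input = {"username": "alice", "role": "user",
                  "_connector": "= ? OR 1=1 --"}
unsafe = build_filter(**attacker_input)
# the "connector" is now attacker-chosen SQL glue
```

Any application that deserializes a user-supplied dict and splats it into a QuerySet call follows this shape, which is exactly why exploitability is so context-dependent.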

Agents Rule of Two: A Practical Approach to AI Agent Security - 1784

Meta    Reference → Posted 4 Months Ago
  • The agentic vision is expected to improve our lives drastically through automation. There's a problem with this, though: prompt injection. If an agent can read untrusted inputs alongside sensitive data and then act upon them, this becomes a significant problem. A prompt injection could trick the service into disclosing sensitive information. An email bot is a good example to consider in this context.
  • If prompt injection can't be reliably prevented, how can we secure these agents? Meta created the Agents Rule of Two: an agent should satisfy no more than two of the following properties:
    • An agent can process untrustworthy inputs. To me, this one is the most sus because of potential unexpected attacker inputs.
    • An agent can have access to sensitive systems or private data.
    • An agent can change state or communicate externally.
  • If an agent possesses all three, then autonomous operation is a security risk. With any two of the three, however, the risk of data exfiltration or modification by external parties is contained. They use the email example to explain why this works. The tl;dr: if all three are required for impact, then just don't do all three ;)
  • This isn't the end-all, be-all for the security of LLM-based applications. It's a great defense-in-depth or secure-design measure, similar to sandboxing and binary protections like NX. There are other things to consider, like model-level protections against prompt injection as well. Great article and design principles!
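The rule is simple enough to encode as a lint-style design check. A toy sketch (the property names are my own, not Meta's):

```python
# Toy policy check for the Agents Rule of Two: flag any agent design
# that holds all three risky capabilities at once. Field names are
# illustrative, not from Meta's framework.
from dataclasses import dataclass

@dataclass
class AgentDesign:
    processes_untrusted_input: bool  # e.g. reads arbitrary inbound email
    touches_sensitive_data: bool     # e.g. can read the user's inbox
    has_external_effects: bool       # e.g. can send mail / change state

    def violates_rule_of_two(self) -> bool:
        risky = (self.processes_untrusted_input
                 + self.touches_sensitive_data
                 + self.has_external_effects)
        return risky > 2

# An email assistant that reads any inbound mail, sees the whole inbox,
# and can send replies holds all three: the design needs a human in the
# loop or one capability removed.
email_bot = AgentDesign(True, True, True)
```

Dropping any one flag to False brings the design back inside the rule, which mirrors the article's advice: remove a capability or gate it behind a human.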

runc container breakouts via procfs writes - 1783

Aleksa Sarai    Reference → Posted 4 Months Ago
  • The report discusses three vulnerabilities found in runc, the underlying container runtime used by Docker and Podman. All of them abuse writes involving the /proc file system to escape the container.
  • runc masks several sensitive files; in practice, this means bind-mounting /dev/null over the path inside the container. However, there is a race condition around this: during creation of the bind mount, an attacker can swap the target for a symlink pointing at a file on the host system. The ol' switcheroo! By gaining read/write to /proc/sys/kernel/core_pattern via this trick, it's possible to escape the container through the kernel's privileged core-dump upcalls.
  • There was a second variant to this issue. If /dev/null is deleted on the container, then runc would ignore the error, and the masking process becomes a no-op. In practice, this means that an attacker could read the /proc files. This was found after the first one and was also fixed.
  • The second full issue is similar to the first: a TOCTOU with /dev/console bind mounts. When creating the bind mount to /dev/pts/$n, an attacker can replace /dev/pts/$n with a symlink. Naturally, this allows writing to files on the host machine. This bug triggers after the pivot_root step, but the core_pattern trick from above can still be used.
  • The author also found some stress-inducing issues around os.Create(). Although not directly exploitable, they decided to provide fixes for them anyway, adding additional protections around race conditions on /dev/pts/$n writes. A single bug really should trigger a broader set of security improvements while you're in there.
  • The final vulnerability is a more sophisticated variant of CVE-2019-16884. Linux Security Modules (LSMs) attach labels, or metadata, to every process and file on the system. The original vulnerability tricked the LSM into writing these labels to a dummy tmpfs instead of the correct location, bypassing the protections put in place. The trick was to have the image's startup instructions mount a tmpfs over /proc.
  • The patch for the original vulnerability ensured the target was a real procfs file system before performing the LSM label write. The new variant redirects the write, via a symlink, to a different real procfs file where it is effectively a no-op: for instance, /proc/self/sched instead of the proper target. runc thinks it is writing to /proc/self/attr/exec but actually writes to another file.
  • This bug makes the write into a no-op. An attacker could also redirect the write to a malicious target on the host system. Using this file write, it's likely that a container escape is possible. The development team was concerned that other write operations might be redirected in this way. They conducted further analysis on the system to determine if this was possible. They hope to write some custom linters in the future to try to prevent this.
  • youki, LXC, and crun were found to have very similar flaws, requiring patch coordination between all of them. Interestingly enough, LXC doesn't consider these attacks in its threat model because non-user-namespaced containers are fundamentally insecure. All of these attacks require exploitation at container startup, as opposed to running from within an already-running container. Overall, a great set of bugs!
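The check-then-swap shape behind these races can be reduced to a few lines. This is a toy, single-process stand-in for the runc races (no containers or mounts involved; the file names are invented):

```python
# Toy TOCTOU illustration: a path that passes a symlink check can be
# swapped for a symlink before it is used. Single-process stand-in for
# the runc races -- the real bugs race bind-mount creation instead.
import os
import tempfile

workdir = tempfile.mkdtemp()
masked = os.path.join(workdir, "masked")        # path the runtime opens
host_file = os.path.join(workdir, "host_file")  # stand-in for a host target

with open(host_file, "w") as f:
    f.write("host data")
with open(masked, "w") as f:
    f.write("masked")

# "Check": at validation time the path is a regular file, not a symlink.
assert not os.path.islink(masked)

# Race window: the attacker swaps in a symlink to the host-side target.
os.remove(masked)
os.symlink(host_file, masked)

# "Use": the later open follows the symlink to the host file.
with open(masked) as f:
    leaked = f.read()
```

The fix in real runtimes is to close the window, e.g. by opening with O_NOFOLLOW or operating on an already-held file descriptor instead of re-resolving the path.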

In-Depth Analysis: The Balancer V2 Exploit - 1782

BlockSec    Reference → Posted 4 Months Ago
  • $125M was stolen from Balancer's V2 Composable Stable Pools, along with several forks of the project. This article is a breakdown of the incident. Composable Stable Pools hold assets expected to trade at nearly 1:1 parity, allowing large swaps with minimal price impact; for instance, USDC to USDT. The pool uses stable math with Price(BPT) = D / totalSupply, where D represents the pool's virtual value and BPT is the Balancer Pool Token.
  • If D becomes smaller, then the BPT price appears cheaper. In Balancer, batchSwap() performs multi-hop swaps, and the user can specify either the exact amount in or the exact amount out to receive.
  • To normalize calculations across different token balances, Balancer has to perform scaling. Sometimes, this involves upscaling, while other times it is downscaling.
  • When performing a batch swap and specifying the amount out, the function _swapGivenOut() must calculate the input amount required for the trade to succeed. In doing so, the upscale() function rounds down, benefiting the user. It is standard practice to always have rounding benefit the protocol; otherwise, incidents like this can occur.
  • For example, say the pool's rate for a wstETH-to-cbETH trade works out to 100/9. To receive 8 cbETH, the fair amount in is (8 * 100) / 9 = 800 / 9 = 88.888..., which should round up to 89. Instead, the upscale function rounds down to 88. In practice, this is catastrophic: it lets a user hand over slightly less of one underlying asset (wstETH) than the other (cbETH) is worth! Over time, this decreases the invariant D, since there is now less liquidity, and the corresponding BPT becomes deflated.
  • The process for manipulating the pool works as follows:
    1. Swap BPT for the underlying asset to push one of the asset balances right onto a rounding boundary, such as sitting at 9. This sets up the precision loss.
    2. Perform a swap using a crafted amount to trigger the rounding error; for instance, a computed input of 8.918 rounds down to 8. This underestimates the number of tokens the user must pay for the trade, and the BPT price becomes deflated.
    3. Reverse swap the assets back into BPT. By restoring the assets at the deflated BPT price, the attacker gains a profit.
    4. Repeat this over and over again to get large amounts of BPT at a discounted rate.
  • Rounding errors that have massive impacts have always been weird to me. How can a token with nine decimals that loses a single point of precision lead to catastrophic losses? In this case, it's because of the D value. The price manipulation of D is how the attackers profit. In a separate transaction, they purchase the BPT at a significant discount and sell it for a substantial profit. This could have been done in the same transaction, but they likely did it in a separate one to prevent frontrunning.
  • The most challenging aspect of the attack was determining the optimal parameters to maximize the effect of the precision loss. The attacker performed off-chain calculations and then on-chain simulations for the hop parameters to manipulate the pool precisely for the exploit to succeed.
  • The code received several audits by Certora and Trail of Bits. The game is hard and we can't blame anyone, though: defenders have to find every bug, or it's a very challenging game to play. The vulnerability was genuinely complex; much of Twitter initially assumed it was an access control issue. Great write-up on the nuances of this issue!
  • Another article, written by Certora (one of the auditors), covers the formal verification they performed. The primary purpose was to ensure the solvency of the BPT supply, parity of assets, and minting of BPT. The properties they tested ensured that no token could be created from nothing and that user balances always reflected the actual underlying value. So, what happened?
  • Solvency was verified at a high level, but not strongly enough to detect the rounding errors. The verified properties did not constrain the relationship between individual swaps or rounding behaviour. Iterative operations, such as token round-trip swaps, could therefore gain value due to the rounding bias. Two additional properties would have caught this: a round-trip swap invariant and a BPT share value invariant. Security is hard! I'm not the biggest fan of formal verification - I kind of wish it had a different name.
  • Another post provided additional insights into the aftermath of the exploit. The author claims the attacker wasn't very experienced. First, they didn't use Flashbots, which exposed them to frontrunners. The hacker also took 20 minutes to finish the attack across the various chains. They even left forks on the table to be exploited: the first copy-cat took the attacker's contract code, replaced the pool information, and profited heavily without much effort.
  • They also pointed out that Balancer didn't pause any of their pools. Since they aim to be decentralized, pausing is disabled after a specific deployment window. Decentralization and speed are often enemies. Another interesting aspect: Polymarket had a bet on "Crypto Hack Over $100M in 2025" that lagged about 10 minutes behind the attack. What a crazy world we live in.
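The rounding direction is easy to reproduce in integer arithmetic. Using the write-up's simplified 100/9 rate (not Balancer's actual fixed-point code):

```python
# The pool computes the input required to receive amount_out at a rate
# of 100/9 in integer math (the write-up's simplified example, not
# Balancer's real scaling code).
amount_out = 8

# Rounding down favours the trader: floor(800 / 9) = 88
required_in_down = (amount_out * 100) // 9

# Rounding up would favour the pool: ceil(800 / 9) = 89
required_in_up = -(-(amount_out * 100) // 9)

# Each trade leaks a unit of value from the pool; repeated swaps grind
# down the invariant D and deflate the BPT price.
leak_per_trade = required_in_up - required_in_down
```

One unit per trade sounds negligible, but because the leak compounds into the D invariant that prices BPT, looping the swap converts tiny per-trade losses into a pool-wide mispricing.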

A Race to the Bottom - Database Transactions Undermining Your AppSec - 1781

Viktor Chuchurski - DoyenSec    Reference → Posted 4 Months Ago
  • Web applications can handle multiple requests simultaneously. Because of this, it's important to consider what happens when your code serves multiple users at the same time - aka concurrency. In the case of databases, this is a huge deal. You can query a database to verify information, but that data can quickly become stale and render the security check irrelevant if you're not careful. This is because of the concurrency of these applications.
  • Transactions define a logical unit of work within the database context. They consist of multiple database operations that must all complete successfully, as a unit, for the transaction to succeed. Isolation defines the level at which concurrent transactions are isolated from each other. Isolation levels are used to prevent dirty reads, phantom reads, and non-repeatable reads caused by concurrent modification. All of these can wreak havoc on an application.
  • At the Read Uncommitted level, all data, including uncommitted data, can be read. At Read Committed, only fully committed data is read, preventing the dirty reads from the previous level. This is the default setting for all DBs besides MySQL.
  • The next level is Repeatable Read. At the previous level, transactions that commit mid-flight can still affect data read by an ongoing transaction. At this level, changes committed by other transactions are effectively ignored for the individual rows being operated on. The final level is Serializable. This prevents phantom reads, which occur when identical queries return different data within one transaction. It requires locking an entire index.
  • Most vulnerabilities that occur from improper database locking settings appear as Time of Check vs. Time of Use (TOCTOU) issues. They use a bank transfer as an example. The first destructive pattern they call out is Calculations Using Current Database State. This is where a query is made to the DB and validation is performed. However, the information in the query doesn't consider the other transactions being executed. In the case of a bank transfer, this could allow two transfers of $100, even though your account only has $100 total. The first update puts it to zero, while the second puts it to -$100.
  • The next pattern is Calculations Using Stale Values. This happens when the code reads the current state of an entry, performs calculations, and then calls UPDATE based on the result. In the case of a bank transfer, this makes multiple operations appear as a single one: each of two transfers should subtract $100 from the balance, but because both updates are computed from the same stale read, the deduction effectively happens only once.
  • Given the complexity of current applications, they were uncertain about the viability of this attack. So, they set up an application in AWS Fargate, written in Golang or Node, against a chosen database. After running the attack described above on a bank-transfer-like endpoint, they were able to land it on all settings except when the Serializable level was used. Pretty neat!
  • How do we mitigate this? Conceptually, critical sections should be placed at the beginning of a transaction to ensure database entry isolation. In practice, the easiest thing to do is apply the Serializable transaction isolation level to these transactions, though this would have a large performance impact on the application. Another option is to take a lock via FOR SHARE or FOR UPDATE on SELECT operations. This makes other transactions wait until the locking transaction completes before reading/editing those rows. A final approach is to add a version column to each row; by comparing the version at read time vs. write time, race conditions are prevented.
  • Overall, a great post on exploiting TOCTOU issues pertaining to databases. I particularly enjoyed the mitigations section of it, as this is a tough issue to fix.
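The stale-values pattern from above can be simulated in a few lines of sqlite3. This is a toy, single-connection stand-in for two concurrent requests (no real concurrency needed to show the lost update):

```python
# Toy demonstration of the stale-read pattern: two logical "$100
# transfers" both read balance=100 before either writes, so one debit
# is silently lost. (sqlite3 stand-in for two concurrent requests.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES (1, 100)")

def read_balance() -> int:
    return db.execute(
        "SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

# Both requests read before either writes -- the race window.
b1 = read_balance()  # 100
b2 = read_balance()  # 100

# Each computes the new balance from its stale read and writes it back.
db.execute("UPDATE accounts SET balance = ? WHERE id = 1", (b1 - 100,))
db.execute("UPDATE accounts SET balance = ? WHERE id = 1", (b2 - 100,))

# Two $100 debits should not both succeed on a $100 balance, yet the
# final balance is 0: the second debit was absorbed by the lost update.
final = read_balance()
```

A `SELECT ... FOR UPDATE` (or an atomic `UPDATE accounts SET balance = balance - 100 WHERE balance >= 100`) closes this window, which is exactly the mitigation the post recommends.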

Trivial C# Random Exploitation - 1780

Dennis Goodlett    Reference → Posted 4 Months Ago
  • Much of the time, breaking randomness requires fancy math. This post is about using the situational awareness of the random function to exploit the system. In this case, the author of the post was targeting a password reset token.
  • In C#, the default PRNG is considered "insecure," meaning it isn't truly random: it follows a fixed path, and the randomness relies entirely on the seed. If no seed is provided, then TickCount is used - the number of milliseconds since the machine was booted.
  • What's interesting is that the seed is calculated each time a new Random object is constructed! The .NET documentation notes this: "As a result, different Random objects that are created in close succession by a call to the parameterless constructor have identical default seed values and, therefore, produce identical sets of random numbers." So, if two Random objects are created within the same millisecond, they produce the same output. Request your own password reset token and another user's at the same instant, and if all goes well, the two tokens should be identical.
  • Is this even possible to reproduce? 1ms is tight! Using the single-packet attack documented by James Kettle, it is possible in Burp Suite: use Burp's Repeater groups to fire both password resets at the same time. There are still a lot of false positives while doing this, though.
  • This exact issue affects Python's UUID implementation. They have also seen similar types of things used in CTFs. The end of the post demonstrates how to break this algorithm using math, and it even reveals a bug in the C# implementation (a weird integer overflow). An excellent write-up for a bug they found!
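The core weakness carries over to any time-seeded PRNG. A Python stand-in for the C# behaviour (Python's Random is not seeded from a tick counter by default; the seed is forced here to simulate two constructions landing in the same millisecond):

```python
# Stand-in for the C# behaviour: two Random instances constructed in
# the same millisecond share a TickCount-derived seed and therefore
# produce identical "tokens". We force the seed to simulate the
# collision; the value itself is arbitrary.
import random

tick_count = 123_456_789  # pretend Environment.TickCount at call time

token_mine = random.Random(tick_count).getrandbits(128)
token_victim = random.Random(tick_count).getrandbits(128)

# Same seed, same stream: knowing your own token reveals the victim's.
```

For real tokens, the fix is a CSPRNG (`secrets` in Python, `RandomNumberGenerator` in .NET), which has no caller-visible seed to collide on.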

The minefield between syntaxes: exploiting syntax confusions in the wild - 1779

YesWeHack    Reference → Posted 4 Months Ago
  • The author discusses how different parsers handling the same syntax differently can lead to security issues. URLs, URIs, Content-Disposition headers, Unicode, etc. are great examples of this. In Python, for instance, the urlopen function can read local files. CVE-2023-24329 showed that a space at the beginning of a URL could bypass blocklists and trigger an SSRF. The point is that parser differentials can lead to horrible security issues.
  • They have several examples from their bug bounty work. One was a cache poisoning issue involving how the URL port was cached. When sent standard ports, like 80 or 443, the application removed the port; when sent a huge port number, the port was kept on the domain. The goal was to get the server-side parser to treat the port as invalid before normalization while the client/browser still saw it as valid.
  • When using leading zeros on the port, they noticed this had some weird effects. For instance, the server would use http://example.com:000123:443, parse out http://example.com:000123, and then the browser would interpret this as http://example.com:123. The difference here was between the browser and the PHP backend.
  • The next vulnerability took 3 months of work to exploit. They had control over a URL, and this would return a response from a PHP cURL request. They learned that providing the @ character and a path starting with /tmp allowed them to read files from the file system in the file upload code. However, the read was blind, since the file contents were placed in the $_FILES global variable. If sent with multipart/form-data, the contents go into $_POST instead, but with no control of the file name.
  • They messed around with the Content-Disposition header to make this possible. Since they had the source code for this application, they could see the relevant sinks. The confusion happens in the second request: by adding a double quote to the name in the request, the contents of /etc/passwd get read. Since the username parameter was the closest thing to the file contents, the file was added to that variable and returned in PHP. The rest of the data is effectively ignored, because the parser is very forgiving.
  • This would eventually return the contents of /etc/passwd to the user, demonstrating a full file read via SSRF. The key was bypassing the $_FILES variable restriction to inject the file contents directly into the $_POST parameter.
  • To mitigate these types of issues, they had a few suggestions. First, have a single consistent parser for handling input. Realistically, this is impossible to do. Some companies may use Python for one thing and NodeJS for another. Now what? The parsing will be different. Anytime there's a check and a use with different components, it's really hard to get correct.
  • Another suggestion is to just error out when parsing fails. Things should NOT fail open. If syntax is wrong, a failure should occur. A final good one is just input validation. If you have a file name, only allow for alphabetic characters and an extension - nothing else. Good post!
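The leading-zero port trick boils down to two components normalizing the same URL differently. A toy Python differential, with urlsplit standing in for the lenient side and a hypothetical strict validator for the other (neither is the PHP/browser pair from the post):

```python
# Toy parser differential on zero-padded ports: Python's urlsplit
# normalizes "000123" to 123, while a stricter (hypothetical) validator
# refuses the zero padding. A security check using one component and a
# fetcher using the other will disagree about the very same URL.
from urllib.parse import urlsplit

url = "http://example.com:000123/"

lenient_port = urlsplit(url).port  # normalized to 123

def strict_port(netloc: str):
    # Hypothetical strict component: rejects zero-padded ports outright.
    host, _, port = netloc.rpartition(":")
    if port.isdigit() and not port.startswith("0"):
        return int(port)
    return None

strict = strict_port("example.com:000123")  # None -- "invalid" port
```

If the cache keys on the strict view ("no port") while the browser connects using the lenient view (port 123), the attacker controls which entry gets poisoned.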

HackerOne 2025 Year in Review - 1778

HackerOne    Reference → Posted 4 Months Ago
  • This is a large article with trends from the HackerOne platform. Enjoy!
  • The vulnerability classes section is interesting. Access control issues have increased by 18% (improper access control) and 29% (IDOR), while authentication issues have decreased by 9% and privilege escalation by 8%. Another category that has gone up is misconfiguration issues, by 29%. SQL injection is down by 23%, code injection by 1%, and XSS by 14%. Finally, business logic flaws are up 19% but down 5% in terms of payouts. AI vulnerability reports skyrocketed this year, as expected.
  • For XSS, SQLi, SSRF, and information disclosure, they claim it's because these "commodity" bug classes are reaching a maturity point. Hackbots could have something to do with this. In terms of total reports, XSS remains the most common vulnerability report, which is particularly interesting.
  • They examined bug bounty programs that lowered payouts for similar types of bugs in the last year. Of these, 73% saw a decline in valid submissions and 50% went without a critical vulnerability in the last year. This indicates that if you pay out less, you will attract fewer people to your program. What entices researchers? Good scope documents, good triage/response times, and fair/consistent payouts. These all build trust that time spent on the program is well spent.
  • They have a table of payouts by industry, divided into severity categories. Crypto/web3 has the highest payouts for bugs, followed by internet/online services and computer software. Things like financial services, government, and retail are relatively low. The benefit of high rewards is more people looking at the programs, more often.
  • The report discusses the exploit likelihood by industry. Bugs in finance are fewer but much likelier to be exploited. Within Government and technology, validated bugs carry a fairly high chance of being exploited in the wild.
  • Overall, an interesting report on the trends of security issues on HackerOne. Thanks for the open data!

Arbitrary PUT request as victim user through Sentry error list - 1777

GitLab    Reference → Posted 4 Months Ago
  • In GitLab, you can configure a Sentry server for error tracking, and GitLab generates function buttons for the error tracking list from the data Sentry returns. By controlling the error information, you can modify the routing of subsequent requests to GitLab. This is a vulnerability known as Client-Side Path Traversal (CSPT).
  • By using ../../ sequences in the error message, we can traverse up the path used for subsequent requests. In this case, it's possible to trigger arbitrary PUT requests on GitLab as the victim. The impact of this is immense: trick users into adding admins, elevating membership, or approving membership. I assume the contents of the PUT request are controlled via JSON here as well.
  • The comments on the bug are interesting. One of them claims that an attacker could do this by tricking GitLab support with this issue. They also find other sinks that they decide to fix. They ended up adding enforce_path_traversal_check to an internal library, making this default to true. Great bug and great report!
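The traversal mechanics reduce to ordinary path normalization. A sketch of how a CSPT payload rewrites the follow-up request path (the endpoint shape is illustrative, not GitLab's actual route):

```python
# Sketch of how a CSPT payload rewrites a request path once the client
# normalizes it. The /api/v4/error_tracking endpoint shape is
# illustrative, not GitLab's real route.
import posixpath

def build_request_path(error_id: str) -> str:
    # Client-side code interpolates an attacker-influenced identifier
    # into the path of a follow-up request, then the path is normalized
    # before being sent.
    return posixpath.normpath(f"/api/v4/error_tracking/{error_id}")

benign = build_request_path("1234")
# A traversal payload walks out of the intended endpoint, so the
# follow-up PUT (with its JSON body) lands on an arbitrary route.
evil = build_request_path("../../admin/users")
```

This is why the fix was a path-traversal check on such identifiers (the enforce_path_traversal_check flag mentioned in the report) rather than anything Sentry-specific.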

Bypassing File Upload Restrictions To Exploit Client-Side Path Traversal - 1776

Maxence Schmitt - Doyensec    Reference → Posted 4 Months Ago
  • In a previous blog post, Doyensec detailed how to exploit CSPT to perform CSRF by using file uploads to supply routing data for a subsequent request. In their example, there were no restrictions on the file upload functionality, but this isn't always the case. So, this post covers ways to smuggle JSON files onto the server through restricted uploads.
  • The mmmagic library in Node.js is used for file type detection. PDFs are notorious for having a lax format. By placing %PDF anywhere in a JSON file, it'll be considered a valid PDF while remaining valid JSON. It just needs to be within the first 1024 bytes.
  • pdflib requires more than just the PDF header, but a polyglot technique can satisfy it. The trick is to replace the %0A separators between PDF objects with spaces, then open a double-quoted string containing the PDF header and other valid-looking PDF data.
  • Some detectors have strict limits on input size. By making the file too large to handle, detection may fall back to a default file type. In many cases this should trigger an error, but that apparently differs by system.
  • This isn't a vulnerability class by itself. However, it DOES help in the exploitation. Good post on CSPT exploitation!
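The mmmagic-style bypass is just a two-format polyglot. A minimal sketch (the payload keys are invented for illustration):

```python
# A minimal JSON/"PDF" polyglot: lax magic-byte sniffers that accept
# %PDF anywhere in the first 1024 bytes (as described for mmmagic) will
# call this a PDF, while it stays perfectly valid JSON for the CSPT
# gadget that later consumes it. Keys/values are illustrative.
import json

payload = {
    "comment": "%PDF-1.4",                    # satisfies the sniffer
    "target": "/upload/../interesting/route"  # data the CSPT gadget uses
}
blob = json.dumps(payload).encode()

looks_like_pdf = b"%PDF" in blob[:1024]  # what a lax sniffer checks
parsed = json.loads(blob)                # still well-formed JSON
```

The upload filter and the client-side consumer are applying different definitions of "what this file is" - the same parser-differential theme as the syntax-confusion post above.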