Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Phishing Emails Are Now Aimed at Users and AI Defenses- 1724

anurag    Reference →Posted 6 Months Ago
  • This article goes through how threat actors are attempting to phish users who use Gmail. The basic idea is common: your password is about to expire, so you must renew it now. Naturally, this sends the user to a fake Gmail login page.
  • All of the previous stuff was standard. Since many mail services are now using AI, the plain-text MIME data actually included a prompt injection payload. This is an interesting workaround for using LLMs.
  • The idea is to trick the LLM to NOT flag this email via an injected prompt. Instead of an outright don't do this at all, it's asked to do slightly different things than the original prompt. Specifically, to delay the classification process and to go insanely deep. I find this interesting because it's not THAT much different from the original prompt, but it's forced to take a long time.
  • To make the webpage harder to track, there's a captcha on it. Additionally, the JS is obfuscated. The web page appears to collect the victims' IP addresses to geolocate them and contains a fake login form. Overall, a fascinating insight into the cat-and-mouse game of defenders and attackers.

Vtenext 25.02: A three-way path to RCE- 1723

Mattia (0xbro) Brollo    Reference →Posted 6 Months Ago
  • VteNext is a CRM in Italy. Upon initially searching through the PHP codebase of the demo release with semgrep default PHP rules, they find some interesting sinks.
  • The first issue they find is an XSS vulnerability resulting from poor sanitization of user-controlled JSON input. Interestingly enough, this works because the Content-Type of the response is text/html. So, making a call directly to this endpoint leads to reflected XSS. This is in a POST request at the moment, which is unexploitable.
  • The application supports various HTTP methods. The CSRF token checks are only done on POST requests. The processing will be done on an endpoint regardless of the verb. By changing the verb to GET, it creates a CSRF token bypass.
  • If you combine one and two, an XSS can be created from a GET request by a user now! Sometimes, small things can be chained together to make exploits worse. Session cookies are secured with the HTTPOnly cookie flags, making them inaccessible via XSS. The Touch module exposes the PHPSESSID ID in the page - this appears just to be some JSON request data. In other locations, phpinfo() can be used to leak session cookies as well.
  • The final piece to the puzzle was a set of SQL injections. Although they are using adb->pquery() to execute the query, the user's input is directly inserted into the statement. It seems like they were trying to prevent SQL injection, but misunderstood it. In this case, the $_REQUEST['fieldname'] can be used to read any field from any table. They use this primitive to steal password reset tokens from the DB.
  • They found a password reset function that just didn't check the user's previous password. This is because a parameter skipOldPwdCheck is used on function calls, but it's never set. Overall, a good set of bugs!

How to Phish Users on Android Applications - Case Study on Meta Threads - 1722

remoteawesomethoughts    Reference →Posted 6 Months Ago
  • WebViews are commonly used in Android applications to display webpages inside of the app itself. To improve usability, deeplinks or custom URIs on the app, are commonly used.
  • When deeplinks are used, they can also be defined as browsable and exported in the app's manifest. This allows the activity to be interacted with from outside of the app.
  • Thus, the content being used for these links must be strictly verified. Otherwise, it can lead to phishing threats. If a webview is rendered in the app silently, then a user might trust the login page.

Cache Me If You - Sitecore Experience Platform Vulns- 1721

PIOTR BAZYDLO - WatchTowr     Reference →Posted 6 Months Ago
  • Sitecore Experience Platform is a Content Management System (CMS). There are at least 22K public Sitecore instances, making this a fairly impactful target.
  • The HTTP handler XamlPageHandlerFactory has had many issues in the past. This works by internally fetching the handler responsible for page generation. Sitecore will generate the page and initialize every component described in the XAML definition. There are several parameters that can control this dispatch - __SOURCE and __PARAMETERS. Any sort of dynamic dispatch has the potential to go wrong and must be reviewed thoroughly.
  • The gathered handlers iterate over a method and call methodFiltered.Invoke after checking to see if the function is allowed to be called. There are two somewhat similar implementations of this dispatch, but with the type XmlControl as a valid type in the filtering. This second type is only extended by the handler HtmlPage.xaml.xml! Crazily enough, this allows for nesting dispatch calls.
  • To do this, call XmlControl that passes the whitelist check. Then, create the arbitrary XAML handler and call it. So, what can this WebControl actually call? The best primitive they found was AddToCache - this leads to an arbitrary cache poisoning vulnerability that is super bad.
  • Using the first primitive effectively turns this into an authentication bypass, since we could poison any page. While going through the codebase, they found the sink Base64ToObject. After some effort, they found a mechanism to trigger this via an HTML editor API—basic sink-to-source analysis.
  • I enjoyed this cache poisoning issue a lot. This is because finding this primitive through all of the functions took a long time to think through. What's the worst thing that we can reasonably do given the impact we have? Sometimes, the bug is the simple part, and it's the impact that is harder to figure out.

All You Need Is MCP - LLMs Solving a DEF CON CTF Finals Challenge- 1720

Wil Gibbs    Reference →Posted 6 Months Ago
  • The author of this post is a member of the CTF team Shellphish. His team, a world-renowned one at that, had earned its way to compete in the DEFCON CTF this year. This is the Olympics of hacking and is home to many of the world's best CTF players. They had previously competed in the AIxCC competition, where LLMs attempted to identify bugs in code.
  • With all of this in mind, they decided to tackle a pwn challenge called ico. This was a small binary but contained over 6K functions, making this a classic reversing challenge. Throughout the event, Blue Water had solved two of the Live CTF challenges (small one-on-one challenges) using agents running in the background. So, Wil decided to spin up some LLM infrastructure to see if it could be solved this way.
  • They created a Docker container that contained the IDA MCP server and Cursor inside of it. They gave it a prompt of along the lines of "You are a great reverse engineer. Reverse the application and interact with the binary at this port when needed". After running GPT-5 for back and forth for a long time (with A LOT of tool calls), it outputted a script that did not work but had some good insights on the program. The author posts the exact prompts and output throughout the post, which is very nice to see.
  • The LLM asked them to create a better script with pwntools for interacting with the challenge based on the information for the commands given by the LLM. This helped but there was still no flag. The LLM hadn't updated the decompilation at all. The author made several changes, including to function names, to provide the LLM with more context on how it works.
  • After going back and forth a few more times with "we need the flag and not the MD5 hash of the flag", the LLM eventually figured out how to extract the flag from the challenge! Even cooler, they asked it to patch the binary and it was able to fix the challenge as well. Pretty neat!
  • According to the author, this was a perfect storm: a straightforward path to exploitation with no tricks, just reversing, a simple exploit (just 10 bytes required), and the problem was partially reverse-engineered already. They claim this could be used to solve some CTF challenges, but not most of them. In general, the process of "gather knowledge (from IDA) -> formulate hypothesis -> create exploit script -> analyze script output -> apply new findings to IDA" worked pretty well for them.

Github Secrets exposed due to RCE in Formatter Action from pull_request_target event - 1719

Anthony Weems    Reference →Posted 7 Months Ago
  • GitHub Actions permissions are really complicated to think about when secrets come into the mix. If someone makes a PR, do they have access to the secrets? There are different modes of these but it really makes a difference what code is ran on the repository.
  • In the case of a Java formatter in the typically "safe" pull_request_target, it was checking out the user's PR from the Pull Request. By placing in a malicious pom.xml file, RCE could be gained in the context of the PR. Since the action can have secrets, this is a serious security issue. Using the secrets and ACCESS_TOKEN, it may have been possible to edit the repository itself.
  • This attack is known as a "Pwn Request". To protect against it, developers should be very wary about externally facing actions on GitHub. Additionally, scope tokens down as much as possible. Good write up!

v1 Instance Metadata Service protections bypass - 1718

Anthony Weems    Reference →Posted 7 Months Ago
  • Instance providers, like GCP and AWS, have a service for getting credentials local to the server. Obviously, getting an SSRF to get this information is horrible for the client. So, some protections have been added to make this harder. One of these is the requirement of the Metadata-Flavor: Google header.
  • While on a pentest, the author of this post noticed that adding an extra slash to the instance removed the requirement of this header! But why!? Using http://169.254.169.254/computeMetadata<</>>/v1/instance/ with a single extra slash did the trick. Sometimes, fuzzing and trying weird things is the way to go! Our systems are just so complex nowadays that it's hard to understand how they work.

Cross-Site Request Forgery- 1717

Filippo Valsorda    Reference →Posted 7 Months Ago
  • Whether Cross-Site Request Forgery (CSRF) works or not is a combination of intentional security features and accidental legacy protections. CSRF is often known as the "session riding attack". When a website makes a request when you visit the page, cookies are always sent. So, what happens when malicious.com requests amazon.com? This post discusses when and why CSRF exploits work in excellent detail.
  • Cookies have been and will continue to be used on requests. So, the goal is to prevent attackers from using them via a CSRF attack. A classic mitigation is double submit protection; this places a large random value in the request body and in a cookie. Since the attacker can't read cookies cross-site, this works well. "Cookie tossing" can be done to remove this cookie if the attacker is on the same site though. So, the usage of __Host- can be used here instead.
  • The SameSite cookie flag can be used to prevent CSRF at a browser level. This has three modes: none, lax and strict. Some browsers default to none because it would break many SSO flows otherwise but others default to lax, breaking many CSRF attacks. Some browsers even default to just two minutes after the cookies were set. This is a very good protection but does have some integration issues.
  • The Origin header is a surprising safeguard as well. Since this cannot be spoofed, if the backend application knows its domain, it can reject based on the Origin very effectively. This creates some edge cases around the header being removed by Referrer-Policy and by Chrome extensions though.
  • CORS is not meant to protect against CSRF, but it sort of does! When a "non-simple request" is made, a pre-flight options request is made. Since this is coming from the wrong origin, the browser will reject the request. This is very limiting for CSRF attacks but there are clever work arounds.
  • Browsers recently introduced Fetch Metadata. On a request, the Set-Fetch-Site header will set it to cross-site, same-site, same-origin or none. Since the browser sets this, it provides excellent CSRF protection by checking this header on the backend. According to some articles, it is now the recommended way to prevent CSRF attacks.
  • Overall, a fantastic article on the state of CSRF protections in 2025. I'll be referencing this article for years to come!

Live EigenLayer Bug Discovered During Sidecar Security Review- 1716

Andy Li    Reference →Posted 7 Months Ago
  • EigenLayer introduces restaking on Ethereum. This allows staked assets to secure other applications, known as Actively Validated Services (AVS) rather than just Ethereum. EigenLayer runs alongside Ethereum, so its implementation is highly security-sensitive.
  • The EigenLayer sidecar is an off-chain worker supporting the main logic in the smart contracts. It listens for on-chain events, processes the data and performs computations on it, such as Rewards. AVSs submit reward details on the chain to the RewardsCoordinator.sol contract, where the sidecar process processes the amount and duration information.
  • The Solidity contract attempts to do input validation on the duration: it must be a divisor of CALCULATION_INTERVAL_SECONDS. This is checked by doing duration % CALCULATION_INTERVAL_SECONDS == 0. Technically, zero satisfies this requirement.
  • Within the off-chain codebase, there is a SQL query that performs division. This leads to a divide-by-zero error in the database. They found this issue by first seeing the division within the SQL query (sink) and tracing it all the way back to the source. I typically don't trace divide by zero bugs this way so that was interesting to see.
  • The impact is slightly dubious to me. A crash or exit doesn't necessarily mean a Denial of Service in all cases. Error handling and continuation need to be taken into consideration. In this case, since the SQL query failed, all AVS operators' sidecar operations for reward calculations would be halted. A good bug and an interesting trace!

Compiler Bug Causes Compiler Bug: How a 12-Year-Old G++ Bug Took Down Solidity- 1715

Kiprey - OtterSec    Reference →Posted 7 Months Ago
  • The post starts with a small amount of Solidity that crashes the compiler:
    // SPDX-License-Identifier: UNLICENSED
    pragma solidity ^0.8.25;
    
    contract A {
        function a() public pure returns (uint256) {
            return 1 ** 2;
        }
    }
    
  • Eventually, they traced it down to a line of C++ code on the G++ compiler backend that leads to infinite recursion. This turns out to be a 12 year old bug in G++, an outdated comparison operation in Boost and a small rewrite in C++20 that led to this crash.
  • In C++, there is functionality called overloading where the compiler can choose implementations for operations like equality. A member function has priority over a non-member function to overwrite this. G++ doesn't always follow this rule though. Clang would choose the member function and the G++ issue was reported 12 years ago.
  • In C++20, the spaceship operator (<=>) was introduced. This allows for comparisons to be done via overloading as before but will also do the reverse check as well. a=b and code>b=a. This rewrite becomes recursive to do the comparison over and over again if you're not careful.
  • The Boost rational class implemented both a member and a non-member function for operator==. Under C++17, this was safe because of the member vs. non member bug was fixed. However, with C++20 and G++ < 14, G++ would incorrectly choose the non-member operation first. This leads to an infinite recursion bug!
  • The Solidity codebase uses boost::rational to represent some compile-time constant expressions. Because of this, Solidity inherited the bug mentioned above. To have this happen you had to be using G++ < 14, Boost < 1.75 and C++ enabled for Solidity builds. This crash occurs with any compile-time rational comparisons.
  • Although it's not a security vulnerability, it does show how fragile modern stacks can be. A bug from 2012 led to a broken compiler. Neat!