Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Minting Fees Out of Thin Air in zkSync Lite - 1885

Ehsan    Reference →Posted 1 Month Ago
  • zkSync Lite is a zkRollup L2 blockchain. The operator submits a proof attesting to the transition from the old root to the new root via state transitions. The L1 does not re-execute every transaction; the L1 just verifies the proof. Practically, this means that any bug that allows an invalid state transition to satisfy the circuit becomes the on-chain truth once proven.
  • zkSync Lite processes operations in chunks, and the circuit iterates over these chunks for verification. The first chunk checks state mutations, such as balances and nonces. The middle chunk checks pubdata consistency. The final chunk handles fee accounting. From these separate locations in the code came two definitions of "valid": one for mutation validity (chunk 1) and another for tx validity (signatures, timestamps, etc.).
  • This discrepancy in "valid" is what causes the bug. The ChangePubKey operation sets the account's L2 signing key. On the L1, the contract verifies that the pubkey change uses the nonce from the pubdata. pub_nonce equality is NOT checked in tx validity, but IS checked in mutation validity. When handling fees, however, only tx validity was checked, not both.
  • Putting this all together, it's possible to craft a ChangePubKeyOffchain transaction that the tx-validity check accepts but the mutation-validity check rejects. In the fee accrual chunk, the fee accounting then adds more fees than it should without increasing the user's debit/nonce. In practice, this attack could be repeated with a malicious proof to mint infinite funds in fees. It appears the attack was permissioned, though, because it relies on the prover/sequencer/operator.
  • With ZK vulnerabilities, the most common issues are around missing constraints. In this case, it was a control-flow issue with a semantic meaning mismatch that led to the vulnerability. So, the next time a complicated set of operations confuses you, maybe it confused the devs, too!
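A minimal sketch of the bug class, with hypothetical names and grossly simplified state (this is not zkSync's actual circuit code, just the shape of the two-validities mismatch):

```python
# Hypothetical sketch: two notions of "valid" diverge, and the fee chunk
# consults only one of them.

def tx_valid(op):
    # Chunk-level "tx validity": signature, timestamp, etc.
    # Crucially, it does NOT compare the nonce against the pubdata nonce.
    return op["sig_ok"] and op["timestamp_ok"]

def mutation_valid(op):
    # "Mutation validity": everything above PLUS the pubdata nonce check.
    return tx_valid(op) and op["nonce"] == op["pub_nonce"]

def process_chunks(op, state):
    # Mutation chunk: state only changes if the mutation is valid.
    if mutation_valid(op):
        state["nonce"] += 1
        state["balance"] -= op["fee"]
    # Fee chunk (the bug): fees accrue based on tx validity alone.
    if tx_valid(op):
        state["collected_fees"] += op["fee"]
    return state

# A ChangePubKey-style op whose nonce disagrees with the pubdata nonce:
op = {"sig_ok": True, "timestamp_ok": True, "nonce": 5, "pub_nonce": 6, "fee": 10}
state = {"nonce": 5, "balance": 100, "collected_fees": 0}
state = process_chunks(op, state)
print(state)  # fees were minted, but the user's balance and nonce never moved
```

Run repeatedly and collected_fees grows without any matching user debit, which is exactly the infinite-mint condition described above.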

Building Agentic Infrastructure for Zero-Day Vulnerability Research - 1884

kritt.ai    Reference →Posted 1 Month Ago
  • Security research involves long hours of staring at code and is done only by a specialized group of people. With the rise of LLMs comes the ability to use AI tools to find vulnerabilities. The authors built a bot that thinks the way security engineers do:
    1. Identify suspicious behaviour
    2. Prove reachability of the code
    3. Prove controllability. Can the attacker influence the relevant data/state?
    4. Determine real world impact
  • If any of the steps above goes wrong, the bug won't be found, because this is long-form reasoning with compounding errors. Intuitive reasoning works locally but is bad globally: precision decays the longer the chains get. The key insight is that you need checkpoints to enforce correctness, not just more tokens.
  • Instead of using better prompts, they created harnesses. This is a set of constraints, scaffolding and checks to force an agent to be systematic in its approach. They do this with the following steps:
    1. Generate hypotheses explicitly.
    2. Collect evidence before escalating confidence.
    3. Use deterministic tools when possible.
    4. Fail fast and prune dead ends.
    5. Produce artifacts a reviewer can trust
  • The post includes a great graph that explains their reasoning. One curve decays exponentially with reasoning length: the longer a chain, the worse it does. The other curve is a shark tooth: at each verifiable subtask, confidence is regained. After this, they share some good insights into what has worked for them.
  • First, the usage of deterministic tools when possible. Using CodeQL to find sinks is better than asking an LLM to do so. This is because it's deterministic and only requires the LLM to use CodeQL. Another point is that native tools work better with their home model. For instance, Claude Code works best with Opus.
  • Scanners have multiple issues. From multi-step flow identification to boundary issues, they do fail. The authors claim they use static analysis tools as much as possible and then rely on agentic reasoning to bridge the gap. This uses LLMs only when necessary, keeping things deterministic.
  • When reviewing code, not all lines are equal in terms of threat. Some repos/components only need shallow checks, while others need deep investigation. By directing spend only at difficult and promising areas, costs stay lower and more bugs get found.
  • The final major benefit is testing. If the code has a bug, this should be provable. Run the simulation, execute the PoC, and check whether the expected outcome occurred. This tends to remove false positives and improve confidence in an issue. Not all tests are created equal, though; there's a major difference between an isolated unit test and a full simulation.
  • This bot recently found a critical on Immunefi with a max payout of $250K. No word on what the bug is, but it's very interesting. They have other bugs on their profile as well.
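The compounding-error argument above can be made concrete with simple arithmetic; the 95% per-step accuracy and the 5-step checkpoint interval are illustrative numbers, not figures from the post:

```python
# If each reasoning step is right 95% of the time, a long unverified chain
# is almost certainly wrong somewhere; deterministic checkpoints that verify
# intermediate results reset the accumulated error (the "shark tooth").
step_accuracy = 0.95  # illustrative per-step accuracy

# Probability an unverified n-step chain is fully correct decays as p**n.
chain_correct = {n: round(step_accuracy ** n, 3) for n in (5, 20, 50)}
print(chain_correct)

# With a verified checkpoint every 5 steps, only the steps since the last
# checkpoint can silently compound, so correctness never falls below p**5.
per_segment_floor = round(step_accuracy ** 5, 3)
print(per_segment_floor)
```

At 50 unverified steps the chain is right well under 10% of the time, which is why "more tokens" alone doesn't fix long-form reasoning but checkpoints do.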

web/framed-xss - 1883

m0z    Reference →Posted 1 Month Ago
  • The challenge uses Chromium and abuses HTTP disk cache keys to trigger a client-side cache-poisoning issue. It contains two endpoints: /view and /. /view only succeeds if the request contains a From-Fetch header, but it has an XSS sink via the html parameter. / performs a call to /view via fetch and places the contents within an iframe without script execution. This is the setup for the challenge.
  • The goal is to trigger the XSS, but there's a paradox. /view cannot be called directly because of the header check, and / places the code into an iframe, so we can't do anything. The trick of the challenge is to get the browser to add the From-Fetch header to an unintended request.
  • Modern browsers have a split cache in order to prevent cross-site leaks. The cache key is derived from the top-level site and the resource URL. Chromium added the cn_ prefix to prevent cache poisoning during main-frame navigation. In particular, this prefix is added when the top-level page has its location.href modified.
  • By using the history API, it's possible to bypass the usage of cn_ on the page. Notably, history.back() doesn't count as a cross-site main-frame navigation for whatever reason!
  • So, the following sequence of events will lead to XSS:
    1. Do a window.open() to /?html=<XSS> to populate the cache.
    2. Redirect to another page and perform history.back().
    3. Redirect to /view?html=<XSS>.
    4. Do the final history.back() to load the cached version of the page to get XSS.
  • Weird challenge with some weird browser quirks. I still don't 100% understand this, but I appreciate the trick of telling the browser to use the cache when the requests are somewhat different.
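The cache-key trick can be modeled with a toy split cache; the key derivation and cn_ behavior below are simplified from the write-up and are not Chromium's actual implementation:

```python
# Toy model of Chromium's split disk cache. Real keys are more involved;
# this only illustrates how skipping the cn_ prefix collides two entries.

cache = {}

def cache_key(top_level_site, url, cross_site_main_frame_nav):
    # Chromium prefixes keys with "cn_" for cross-site main-frame
    # navigations triggered via location.href changes.
    prefix = "cn_" if cross_site_main_frame_nav else ""
    return (top_level_site, prefix + url)

# Step 1: window.open() to /?html=<xss> populates the cache via the page's
# fetch() to /view (a subresource request, so no cn_ prefix).
cache[cache_key("chall.example", "/view?html=<xss>", False)] = "poisoned"

# Steps 2-4: history.back() is NOT treated as a cross-site main-frame
# navigation, so loading /view?html=<xss> that way reuses the un-prefixed
# key, while a location.href navigation would have used the cn_ key.
key_via_history = cache_key("chall.example", "/view?html=<xss>", False)
key_via_lochref = cache_key("chall.example", "/view?html=<xss>", True)

print(cache.get(key_via_history))  # cache hit on the poisoned entry
print(cache.get(key_via_lochref))  # miss: the cn_ key was never populated
```

The whole challenge reduces to forcing the browser down the un-prefixed key path so the cached XSS response is served as a main-frame document.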

Multiple cross-site leaks disclosing Facebook users in third-party websites - 1882

ysamm    Reference →Posted 1 Month Ago
  • Facebook is used by almost everybody. Being able to see who is logged in can allow for targeted attacks, account takeovers, and employee profiling. This article dives into several techniques they used to de-anonymize users.
  • The first issue occurs in Zoom callbacks in Facebook Workplace. When supplying the __cid and __user, an attacker can brute-force the user ID of the Workplace community. If __user is correct, an empty page with text/html is returned. If it's incorrect, the response is application/json, which triggers CORB and blocks script execution. By observing onload and onerror events, it's possible to determine the user ID of the logged-in user.
  • When embedding a Facebook plugin, such as the Like plugin, inside an iframe, the rendering differs depending on the supplied user ID. If __user is correct, everything renders as normal. If it's incorrect, X-Frame-Options: Deny is returned, preventing the iframe from loading. This distinction allows brute-forcing the active user or page ID by observing postMessage events rather than a timeout.
  • The endpoint https://www.facebook.com/signals/iwl.js?pixel_id=PIXEL_ID returns a JavaScript payload intended for internal Meta Pixel testing, including the Facebook user ID. This value is scoped inside a function. But by manipulating JavaScript prototypes before loading the script, it can still be extracted. Their PoC modifies the function prototype and prints the user ID of the object. Apparently, the script runs within the full context of your page, allowing for the reading of the data still. Neat!
  • They got $2.4K for the first two bugs and $3.6K for the third. Good work by the author!
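The first de-anonymization oracle (the Workplace __user check) can be modeled abstractly; the server behavior follows the write-up, while the user IDs and the brute-force loop are illustrative:

```python
# Toy model of the __user oracle: a correct ID yields text/html (a script
# tag's onload fires), an incorrect one yields application/json (CORB blocks
# it, so onerror fires). Each guess therefore leaks one bit.

LOGGED_IN_USER = 133742  # hypothetical victim user ID

def server_response(user_guess):
    # Per the write-up: empty text/html on a correct __user, JSON otherwise.
    return "text/html" if user_guess == LOGGED_IN_USER else "application/json"

def script_event(content_type):
    # CORB strips non-script MIME types loaded as scripts -> onerror fires.
    return "onload" if content_type == "text/html" else "onerror"

candidates = range(133740, 133750)
leaked = [u for u in candidates if script_event(server_response(u)) == "onload"]
print(leaked)  # only the logged-in user's ID survives the scan
```

The real attack iterates candidate IDs the same way, just with cross-origin script tags instead of a local loop.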

Instagram account takeover via Meta Pixel script abuse - 1881

ysamm    Reference →Posted 1 Month Ago
  • Meta's web ecosystem relies on cross-window messaging between first-party websites. The only security control is around origin checks on facebook.com or its subdomains.
  • Multiple Meta modules register window message listeners that require messages to come from a trusted domain. One of these is fbevents.js, the Meta Pixel script embedded on millions of websites. When loaded in a window, the message listener reacts to many events and forwards them to graph.facebook.com. This includes location.href and document.referrer, which can contain OAuth codes and other sensitive values.
  • The author found an endpoint that constructs an object from user-supplied parameters and forwards it via postMessage to a target Facebook domain specified by the attacker. This appears to be a classic confused deputy problem, where the data is passed through from a trusted domain without any checks.
  • The fbevents.js code accepts messages originating from facebook.com. By combining the primitive above with an arbitrary message send, and including an attacker's access_token for GraphQL, the script can be tricked into exposing OAuth codes/tokens to the attacker. From there, an account takeover may be possible.
  • Here's the flow of the attack:
    1. Trick the user into clicking a crafted link that abuses the issues above, starting an Instagram OAuth flow with a callback to developers.facebook.com.
    2. The developers.facebook.com page contains the fbevents.js file and has the message listener. To prevent the page from consuming the token, an invalid nonce must be used.
    3. Attacker redirects their website to the postMessage sink discussed before with the attacker-controlled GraphQL access token.
    4. fbevents.js will consume the message and issue a GraphQL request with the sensitive information, including the OAuth code.
    5. Attacker reviews the Graph Explorer to retrieve the Instagram OAuth authorization code.
  • There is no description of the patch. To patch this, I'd probably get rid of the postMessage sink first. Then, remove the href and referrer from the GraphQL endpoint data, if possible. The author claims that the attack surface expands beyond Meta properties and to third-party websites because of how widely deployed this is. They got $32K for this bug!
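The confused-deputy relay at the heart of the attack can be sketched with hypothetical names (this models the trust failure, not Meta's actual code):

```python
# Toy confused deputy: a trusted relay endpoint stamps attacker-supplied
# parameters with a facebook.com origin, so the listener's only check passes
# and sensitive page data flows wherever the message directs.

def fbevents_listener(message):
    # The listener trusts any message whose origin is a facebook.com domain...
    if not message["origin"].endswith(".facebook.com"):
        return None
    # ...and then issues a Graph request carrying sensitive page context
    # (href/referrer can contain OAuth codes) under the supplied token.
    return {
        "graph_token": message["payload"]["graph_token"],
        "data": {"href": "https://site.example/cb?code=OAUTH_CODE"},
    }

def relay_endpoint(attacker_params):
    # Confused deputy: a facebook.com endpoint forwards attacker params
    # via postMessage, giving them a trusted origin for free.
    return {"origin": "www.facebook.com", "payload": attacker_params}

msg = relay_endpoint({"graph_token": "ATTACKER_TOKEN"})
out = fbevents_listener(msg)
print(out["graph_token"])  # the OAuth code is sent under the attacker's token
```

An origin check alone is worthless once any endpoint on the trusted origin will relay untrusted content, which is exactly why removing the postMessage sink is the natural first patch.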

Leaking Meta FXAuth Token leading to 2 click Account Takeover - 1880

ysamm    Reference →Posted 1 Month Ago
  • FXAuth is Meta's shared authentication system used by a variety of services that they own. On the domain https://auth.meta.com/fxauth/, a signed token and blob are returned for using the website. The base_uri contains where to redirect back to.
  • Originally, base_uri had no restrictions on the value that could be set. By exploiting this, it was possible to redirect to an arbitrary domain and extract the token, making an account takeover possible. The fix was to restrict it to Meta-owned domains, on the assumption that the path could not be controlled either.
  • Legacy locations exist where attackers can execute arbitrary JavaScript under a controlled path at https://apps.facebook.com/{app_namespace}. If an attacker owns an application, they can read parameters from the URL even if they do not control the path directly.
  • Once the user is redirected to the attacker's application, their JavaScript can extract the token. Using it, it's possible to finalize sensitive flows, such as account linking, to get persistent access to the user's account. This led to two $32.5K payouts.

CodeBreach: Infiltrating the AWS Console Supply Chain and Hijacking AWS GitHub Repositories via CodeBuild - 1879

wiz    Reference →Posted 1 Month Ago
  • On AWS CodeBuild, there is functionality to trigger a build on specific GitHub repos. The main protection against this is a regex that checks the ACTOR_ID for validity when a PR is made. The validation is as follows: 16024985|755743|.... The | symbol is an OR operation in regex.
  • The regex above isn't anchored with ^ and $. Practically, this means any account whose ID merely contains one of these values as a substring would be approved by the filter. So, is it possible for a GitHub user ID to contain one of the values in the regex?
  • From their research, about 200K IDs are created per day, meaning an ID containing one of the target values comes up roughly every 5 days. Still, there's a bit of a race here, so it's necessary to create a lot of accounts at once. Standard account creation is rate limited, so that didn't work. The GitHub Enterprise API can create organizations, which share the same ID space; sadly, orgs can't create PRs, so this couldn't be used either.
  • The GitHub App manifest flow can interact with pull requests as a bot user. This allowed for the creation of hundreds of apps at once, then visiting the confirmation page to create the IDs simultaneously. This made winning the race condition much smoother. They waited until the live ID was about 100 away and then visited 200 URLs at once. They were able to obtain the ID on many of these open GitHub repos.
  • With the ability to make PRs within the context of the build process, they were able to do a classic pwn request. In particular, create a PR that, once built, extracts GitHub credentials from the environment. With a personal access token (PAT), an attacker had full admin privileges over the repository. What repo was at risk? The AWS SDK JavaScript library! Since so many environments use this package, a backdoor to it would have compromised a large percentage of the Internet.
  • A severe attack of taking a small CI/CD misconfiguration to an Internet-compromising bug. Backdoored packages feel impossible to stop right now, which is what makes this very terrifying.
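The anchoring mistake above is easy to reproduce; the two allowlisted IDs are the ones shown in the post, while the attacker ID below is hypothetical:

```python
import re

# The CodeBuild-style allowlist: an unanchored alternation of actor IDs.
unanchored = re.compile(r"16024985|755743")
# What the check should have been:
anchored = re.compile(r"^(16024985|755743)$")

attacker_id = "9916024985"  # a new account whose ID merely CONTAINS 16024985

print(bool(unanchored.search(attacker_id)))  # True  -- attacker is approved
print(bool(anchored.search(attacker_id)))    # False -- anchored check rejects
print(bool(anchored.search("16024985")))     # True  -- legit actor still passes
```

Python's re.fullmatch would be an even safer choice than manual anchors, since it can't silently degrade into a substring match.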

Defeating KASLR by Doing Nothing at All - 1878

Seth Jenkins    Reference →Posted 1 Month Ago
  • Address Space Layout Randomization (ASLR) prevents trivial exploitation by randomizing the addresses of processes; the Linux kernel also supports it for itself as KASLR. The author of this post had a vulnerability in the Pixel kernel but needed to bypass KASLR in some way.
  • Their target was the Linux linear mapping, a section of the virtual address space that directly maps physical memory. While reviewing the code, they learned that the mappings always start at 0x80000000, so KASLR is effectively useless for these addresses. But why?
  • Linux and Android theoretically support hot-plugging memory: new memory is plugged into an already-running system and must be addressable by the kernel. The kernel virtual address space is limited to 39 bits.
  • Given that the maximum amount of physical memory is much larger than the entire linear map, the kernel places the linear map at the lowest possible address so that it can handle the largest amounts of further hot-plugged memory. The feature for randomizing the memory space was removed because DRAM may appear in inaccessible locations.
  • On Pixel phones, the bootloader also places the kernel at the same physical address on every boot. Some phones, such as Samsung's, do randomize this address on each boot, but not every phone does.
  • With the randomization issue, it's possible to access the kernel's .data entries with R/W permissions. The address 0xffffff8001ff2398 will always map to modprobe_path, for instance, and 0xffffff8000010000 is effectively the kernel base.
  • According to the author, this severely weakens the kernel's security. These issues were reported to the Linux kernel and Pixel teams, but they were denied as findings. Overall, a great report on a security issue and its very real origins.
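Since the linear map never moves, useful kernel addresses reduce to fixed offsets from a constant base. The addresses are the ones quoted in the post; the arithmetic below is just an illustration of why no info leak is needed:

```python
# With the linear map unrandomized, kernel data addresses are constants
# known in advance -- an attacker hardcodes them instead of leaking them.
KERNEL_BASE   = 0xffffff8000010000  # effective kernel base per the post
MODPROBE_PATH = 0xffffff8001ff2398  # always maps to modprobe_path

offset = MODPROBE_PATH - KERNEL_BASE
print(hex(offset))  # the same fixed offset on every boot, on every device
```

An exploit can therefore target modprobe_path (a classic privilege-escalation write target) directly, with zero KASLR bypass work.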

On the Coming Industrialisation of Exploit Generation with LLMs - 1877

Sean Heelan    Reference →Posted 1 Month Ago
  • The author of this post wanted to see how capable Opus 4.5 and GPT-5.2 are at exploiting new vulnerabilities in the QuickJS JavaScript interpreter. The challenges combined various exploit mitigations and different target goals. Out of the 40 distinct exploit scenarios, GPT-5.2 solved every one and Opus solved all but 2.
  • The vulnerability itself was documented at the beginning. Very quickly, both agents turned the QuickJS vulnerability into a read/write primitive API, making exploitation easier. From there, they leveraged known public weaknesses to build an exploit chain. The hardest test included everything you could think of: fine-grained CFI, a shadow stack, a seccomp sandbox, and more. GPT-5.2 created a chain of 7 function calls through glibc's exit handlers to pop a shell on that challenge, using 50M tokens and $150.
  • The author found the vulnerability with an AI agent and then wrote an exploit using it as well. So, now what? The industrialization of exploitation. Now, the ability of an organization to complete a task will be restricted by the number of tokens it can afford, NOT by the number of people.
  • According to the author, exploit dev is perfect for industrialization. The environment is easy to construct. The tools are well understood, and verification is straightforward. The information is out there, and people know how to do this. The limitation tends to be on how many things a person can try and their hours; the computer is not limited by these.
  • This shows that LLMs can exploit new security issues because of their massive knowledge of the exploitation game. The source code for these agents is included as well.

Account Takeover in Facebook mobile app due to usage of cryptographically unsecure random number generator and XSS in Facebook JS SDK - 1876

ysamm    Reference →Posted 1 Month Ago
  • Meta provides several website plugins, such as the Like button and Customer Chat. These are hosted at www.facebook.com and designed for use in iframes. Communication between the host website and Facebook is implemented using postMessage.
  • The plugin sends messages to its parent window and the SDK on the Facebook side listens for those messages and dispatches them internally. To prevent arbitrary domains from interacting with it, the SDK enforces two checks on received messages: they must originate from Facebook, and they must include the proper callback identifier, a random string.
  • The Facebook JavaScript SDK registers a cross-window message listener for messages coming from the Facebook iframe. One of the iframe-handling functions injects an SVG directly into the DOM without sanitization, which could lead to XSS if invoked. There are two issues with this, though: 1) we need to send a postmessage, and 2) we need to have the random identifier.
  • The author of this post seems to know every quirk on Facebook. To solve problem 1, they found a URL that, when the page was visited, would send an iframe with user-controlled data. It's pretty crazy they found this primitive!
  • The random identifier was generated using Math.random(), which is not cryptographically secure and leaves a hole. The PRNG state appears to be unique per page, so we need to leak the randomness somehow. The window name is also generated using Math.random(); if it could be leaked, the state could be recovered.
  • The listener for the call init:post will reinitialize the iframe, generating a new ID. Since the name of a window can be public, it's possible to leak the name and reverse the random number generator to find the seed. From there, it's possible to calculate the callback string to trigger the DOM XSS on the website.
  • This attack has a few limitations... The XSS occurs on the user's website and NOT Facebook, and it requires lots of framing on websites to be allowed. Because this would be considered low to medium impact, they decided to review the internal use of this plugin to increase the issue's impact.
  • Most Facebook pages don't allow the framing required for this exploit, so they went looking for a generic bypass. On Android and iOS user agents, setting frame-ancestors to a specific domain caused the response to carry an X-Frame-Options: ALLOW-FROM header instead. Since ALLOW-FROM isn't supported by modern browsers, this bypassed the iframe protection, but it required frame-ancestors to be set on the page.
  • They found an endpoint that would set the frame ancestors needed to break the iframe protections. However, it required a token, which meant performing a login CSRF into the attacker's account. Since that alone was useless for the XSS, a new constraint was added: keep a valid Facebook page inside an iframe with a useful body and ensure it does not refresh after a session change. They noticed that a business endpoint embedded this page on core facebook.com. We have everything we need!
  • Here's the full exploit chain:
    1. Victim visits attacker's Facebook App where the attacker opens a Facebook App Webview.
    2. Attacker creates an iframe on their website containing sensitive values like an OAuth token. The attacker then performs a logout and a login CSRF into their own account.
    3. Attacker creates another iframe with the Facebook page containing the customer chat plugin and the known attacker token; this is why the login CSRF was required.
    4. Attacker saves the window name for later use, then forces reinitialization of the iframe to collect multiple values and defeat the randomness. This allows them to calculate the Math.random() seed.
    5. Attacker can now send the payload message to the frame from facebook.com and the callback identifier.
    6. Payload from the previous step triggers XSS on Facebook. Now, the script can read the victim's OAuth token.
  • What a crazy set of issues. It requires SOOOO many small primitives in order to exploit and then even more to increase the impact. I appreciate the patience and the gadgets it took to earn the $66K bounty payout.
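The seed-recovery idea can be sketched in Python with a deliberately tiny seed space. V8's Math.random() is xorshift128+, whose state is recoverable from a few leaked outputs; this toy uses Python's PRNG and a brute-forceable seed purely as an analogy:

```python
import random

# Toy version of the attack: the window name embeds a PRNG output, the
# attacker leaks it, recovers the seed, then predicts the next output
# (the secret callback identifier). Tiny seed space for illustration only.

def make_identifier(rng):
    return int(rng.random() * 1e9)

# Victim side: a seeded PRNG produces the leakable window name, then the
# secret callback identifier from the same stream.
victim = random.Random(4242)  # hypothetical per-page seed
leaked_window_name = make_identifier(victim)
secret_callback_id = make_identifier(victim)

# Attacker side: brute-force the seed from the single leaked output...
predicted = None
for seed in range(10000):
    candidate = random.Random(seed)
    if make_identifier(candidate) == leaked_window_name:
        predicted = make_identifier(candidate)  # ...and predict the secret
        break

print(predicted == secret_callback_id)
```

The lesson generalizes: any value derived from a non-cryptographic PRNG is a prediction oracle once one output leaks, which is why security tokens must come from a CSPRNG like crypto.getRandomValues().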