Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Can't find Criticals? The problem is either your strategy, your execution, or both. - 1865

infosec_us_team    Reference → Posted 2 Months Ago
  • The author of this post had a DM conversation with a security researcher who has proven results on multiple platforms but has been doubting their skills due to a lack of recent bounties. They adapted the DMs for a wider audience and posted it for others to read. They claim the issue is one of three things: unreasonable goals, bug hunting strategy, or execution.
  • If you have a goal of finding a critical in Aave (a million-dollar program) with only a 10-day window, then you're likely to find nothing. Another bad example is people having a goal of 6 figures per year but then joining 2-month-long contests with small rewards. Your goals need to line up with your choices.
  • The rest of the article is example plans that have a revenue goal, a strategy, and an execution plan all in place. The first has a goal of $100K per year. To do this, participate only in large contests and hunt on programs that offer $20K-$50K for criticals. On execution: 1) read all previous findings to see if there's a way to bypass fixes, 2) look for low-hanging fruit, and 3) only look at a codebase for 10 days. For this case, they say to only hunt on programs that push code updates more often than they get reviewed.
  • The second example is a goal of $200K per year. First, do contests with over $300K in prize pools. Next, hunt on bug bounty programs that offer $50K-$200K per critical, mostly DLT/blockchain protocols that haven't had much public auditing. On execution, dive into the nitty-gritty details of the codebase, looking for low-hanging fruit and then obscure edge cases; only stay on a project for 2 months. From there, move on to another codebase, but capitalize on the knowledge from this project by entering the contests it runs and monitoring its code updates.
  • Having a solid plan and reasonable goals is just as important as finding the bug itself. They gave real examples of strategies in this post, which I appreciated. If your plan isn't working then come up with a new plan and try again.

Datr cookie theft and AI leads to Facebook account takeover via trusted device recovery - 1864

ysamm    Reference → Posted 2 Months Ago
  • Facebook uses long-lived device identifiers to reduce friction for returning users and to distinguish legitimate from illegitimate activity. A device that logs in repeatedly is considered trusted by the application, which relaxes some of the security requirements. One of these identifiers is datr.
  • The page https://www.facebook.com/recover/account/ is used to verify an account via email or phone number. In cases where requests originate from a trusted device, an alternative flow can be used to recover the account via uploading a document. This process is automated and is supposed to help legitimate users regain access easily. A core invariant of this flow is that a trusted device cannot be easily impersonated.
  • The Facebook OAuth implementation, when interacting with the GraphQL API, can leak the datr value. For an application with Facebook Login, the machine_id field is the same value as this cookie. Although this field cannot be queried directly, Facebook's GraphQL allows chaining API requests. By having later requests reference earlier responses, it's possible to propagate the machine_id into attacker-viewable output.
  • Here's the full attack flow:
    1. Generate your own access code information for OAuth. This just makes the calls require fewer interactions from the user.
    2. Get user to visit your malicious website.
    3. Within an iframe, use the BATCH API to trigger the OAuth call that will return the machine_id and then post that to your own Facebook account.
    4. Initiate account recovery with the new datr value. The automated document check should be easy to pass with public information and fake documents.
  • A sick blog post on an account takeover on Facebook. I appreciate the knowledge around the importance of datr and the Batch API referencing previous values. Both of these require a lot of context, specifically on this target. They were awarded $24K for the bug, which is a solid payment. Another amazing write-up!
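The batch-chaining trick above can be sketched in miniature. This is a toy model in TypeScript (the request shape and `{result=<name>:<field>}` token syntax are assumptions for illustration, not Facebook's actual batch API): each request gets a name, and later requests may embed tokens that are substituted from earlier responses, which is how a hidden value like machine_id can be propagated into output the attacker can read.

```typescript
type BatchRequest = { name: string; query: string };
type BatchResponse = Record<string, string>;

// Runs requests in order, substituting values from earlier named responses
// into later queries before executing them.
function runBatch(
  requests: BatchRequest[],
  execute: (query: string) => BatchResponse
): Record<string, BatchResponse> {
  const responses: Record<string, BatchResponse> = {};
  for (const req of requests) {
    // Replace references like {result=oauth:machine_id} with the value
    // produced by the earlier named request.
    const query = req.query.replace(
      /\{result=(\w+):(\w+)\}/g,
      (_, name, field) => responses[name]?.[field] ?? ""
    );
    responses[req.name] = execute(query);
  }
  return responses;
}
```

In this model, a first request that yields machine_id can be followed by a second request that posts it somewhere attacker-visible, mirroring step 3 of the attack flow.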

Multiple XSS in Meta Conversion API Gateway Leading to Zero-Click Account Takeover - 1863

ysamm    Reference → Posted 2 Months Ago
  • The Meta Conversions API Gateway is a server-side mechanism for businesses to send web events to bypass browser-based tracking methods like the Facebook Pixel. Even if a user has cookies disabled, ad blockers on, or other browser restrictions, this method still works.
  • The gateway at gw.conversionsapigateway.com is Meta's own deployment of the application. The fbq JavaScript module includes the collection/processing logic in a JavaScript file named capig-events.js. Any vulnerability in this script would inherit the privileges of the site it's on, such as business.facebook.com.
  • capig-events.js only processes message events when the page has an opener window. Each event includes various pieces of data, such as the message type. If the type is IWL_BOOTSTRAP, the script checks whether the pixel_id exists in an allow-list. The event origin is never explicitly verified, meaning the handler can be triggered from any origin.
  • After some processing, the event.origin is used to dynamically load JavaScript via <origin>/sdk/<pixel_id>/iwl.js. Since an attacker can control the origin, they control the source of the loaded JavaScript, creating the opportunity for some nasty XSS. This is a classic case of using data from a postMessage call without validating its origin.
  • This isn't immediately exploitable because a CSP and a Cross-Origin-Opener-Policy (COOP) are enabled. The CSP is set up to disallow arbitrary external scripts, and most pages include a COOP of same-origin-allow-popups. On the surface, this appears to prevent the issue. However, security is not evaluated on a single page or a single policy; it is evaluated across all contexts where the code runs.
  • For a CSP bypass, some major pages allow third-party analytics providers on the page, which expands the attack surface: an XSS or subdomain takeover on one of those providers would do the job. For the COOP bypass, an attacker can regain access to an opener by reusing a window.name. They found a vulnerability in a third-party application that allowed them to hijack an iframe and interact with the page from a CSP-allowed site. Here's the full exploit chain:
    1. Load URL inside of Facebook application.
    2. Perform the opener bypass to a less strict CSP location.
    3. Hijack the iframe of the third-party site. This sends a postMessage to the parent window to trigger the exploit.
    4. Host an attacker-controlled JavaScript file on the third-party host with the malicious JavaScript. Script is executed in the context of www.meta.com.
    5. They took this to a full account takeover on Facebook by abusing CORS permissions.
  • After reporting the previous vulnerability, they decided to review how the Conversions API actually worked. When loaded on Meta, it displayed a graphical tool to the user. After experimenting with some of the rules for events and parameters, they noticed a POST request for adding a rule. Upon reviewing the source, they noticed that the rule information was used to dynamically generate JSON within the capig-events.js script.
  • The JSON keys are supplied in the request and are used to construct a JavaScript string without any escaping or validation. So, a key like "]} could be used to inject attacker-controlled JavaScript into the generated output. In practice, this creates stored XSS within the capig-events.js file. Notably, the payload is served to every user, including on Meta-owned domains. This isn't just stored XSS; this is a supply chain attack.
  • The author of the post got paid $62K for the first bug and $250K for the second bug. Absolutely insane! I really appreciate the author's intricate knowledge of Meta applications. On the first bug, the CSP and COOP issues would have been easy to move on from, since they couldn't be exploited immediately. Instead, they either A) had the gadgets ready to go or B) knew where to find them. This knowledge has served this security researcher very well!
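The root cause of the first bug is a textbook postMessage mistake. A minimal TypeScript sketch, assuming hypothetical names (the allow-list and helper functions are mine, not the real capig-events.js code): the handler checks the message type and pixel_id but never event.origin, then uses the attacker-controlled origin to build a script URL.

```typescript
type IwlEvent = { origin: string; data: { type: string; pixel_id: string } };

// Hypothetical allow-list of known pixel ids.
const pixelAllowList = new Set(["1234567890"]);

function scriptUrlUnsafe(event: IwlEvent): string | undefined {
  if (event.data.type !== "IWL_BOOTSTRAP") return undefined;
  if (!pixelAllowList.has(event.data.pixel_id)) return undefined;
  // BUG: any window can postMessage here, so event.origin -- and therefore
  // the source of the dynamically loaded script -- is attacker-controlled.
  return `${event.origin}/sdk/${event.data.pixel_id}/iwl.js`;
}

function scriptUrlSafe(event: IwlEvent): string | undefined {
  // FIX: pin the expected origin before trusting anything in the message.
  if (event.origin !== "https://gw.conversionsapigateway.com") return undefined;
  return scriptUrlUnsafe(event);
}
```

The one-line origin check is the whole difference between loading Meta's script and loading the attacker's.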
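The second bug's pattern can also be sketched. This is a hedged TypeScript illustration (the generator and key names are hypothetical, not the actual capig-events.js source): request-supplied keys concatenated into generated JSON without escaping let a key break out of its string and inject attacker-controlled structure, while proper serialization keeps the payload inert.

```typescript
function buildRuleUnsafe(key: string, value: string): string {
  // BUG: key and value are embedded verbatim, so quotes in the key escape
  // the string literal and rewrite the surrounding structure.
  return `{"rules": {"${key}": "${value}"}}`;
}

function buildRuleSafe(key: string, value: string): string {
  // FIX: JSON.stringify escapes quotes and control characters.
  return JSON.stringify({ rules: { [key]: value } });
}
```

Feeding a `"]}`-style key into the unsafe builder yields output that no longer parses as the intended object; the safe builder round-trips the hostile key as plain data.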

The V8 (Heap) Sandbox - 1862

v8    Reference → Posted 2 Months Ago
  • v8 is a JavaScript engine that compiles JavaScript code into native machine code to make execution faster. The v8 Sandbox, a lightweight sandbox, is now a stable feature in Chrome. Why is this sandbox needed? Chrome is a huge target with a difficult history of memory corruption issues, though these aren't classic memory corruption issues like UAFs and OOB reads. These are very subtle logic issues that make languages like Rust or new features like memory tagging not useful.
  • The author includes an example that could likely lead to memory corruption from side effects. It's possible this could be solved by a good compiler check, like in Rust, but that misses a fundamental issue: v8 itself is a compiler! Memory safety cannot be guaranteed if the compiler is part of the attack surface.
  • Why doesn't memory tagging work, though? A CPU side channel, which can easily be exploited from JavaScript because it's arbitrary attacker-supplied code, can be used to leak the tag values. Hence, the attacker can bypass the mitigation. Additionally, due to pointer compression, there are no spare bits left for v8 to store tags.
  • The solution to this is using a sandbox to isolate the V8 heap's memory such that memory corruption cannot spread to other parts of the process memory. This is similar to userspace and kernel space in operating systems. The idea is that a bug in v8 shouldn't affect the rest of the hosting process.
  • In practice, the sandbox replaces all data types that can access out-of-sandbox memory with sandbox-compatible alternatives. In particular, pointers and 64-bit sizes must be removed because an attacker could corrupt them. Due to constraints, the V8 heap is the only thing within the sandbox. They have a nice image that shows the security of it: v8 objects reference external memory via entries in a table outside of the sandbox, and the table entry then points to the external object. If you can only control the table index, there is not much you can do to exploit this.
  • This isn't perfect though; several invariants can still be broken from inside the sandbox. For instance, they show an example with code that assumes the number of properties stored in a JSObject is less than the total number of properties of the object. Theoretically, an attacker could corrupt one of these values to break the invariant, leading to an out-of-sandbox access.
  • According to the author, this is okay though. First, many of these are simply memory corruption issues that can be fixed via simple bounds checks or UAF checks. These sandbox bugs are preventable by many other security features, such as Chrome's libc++ hardening.
  • To create a security boundary, it must be testable and created with a specific attacker model in mind. The attacker model assumes read/write access inside the v8 sandbox, with the goal of corrupting memory outside of it. To make this testable, debug builds include a memory-corruption API that can be used to read/write within the sandbox. Finally, they have a sandbox testing mode that determines whether a write violates the invariants.
  • A fantastic post on the v8 sandbox and the more in-depth v8 heap sandbox. I appreciated the well-defined threat model around the protection the most.
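The table indirection described above can be modeled in a few lines. This is a toy TypeScript model (assumed for illustration, not V8 source): code inside the sandbox never holds raw external pointers, only small indices into a table that lives outside the sandbox, so corrupting an index can at worst select a different valid entry or trip a bounds check.

```typescript
class ExternalPointerTable<T> {
  private slots: T[] = [];

  // Stores an external object and hands back an index; the index is all
  // the sandboxed code ever sees.
  register(obj: T): number {
    this.slots.push(obj);
    return this.slots.length - 1;
  }

  // Resolves an index back to the external object with a bounds check, so
  // a corrupted index cannot be turned into an arbitrary pointer.
  resolve(handle: number): T {
    if (!Number.isInteger(handle) || handle < 0 || handle >= this.slots.length) {
      throw new RangeError("invalid external pointer handle");
    }
    return this.slots[handle];
  }
}
```

This is the same shape as userspace file descriptors: the kernel keeps the real pointers, and the process only holds small integers it cannot forge into addresses.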

From object transition to RCE in the Chrome renderer - 1861

Man Yue Mo - GitHub    Reference → Posted 2 Months Ago
  • In JavaScript interpreters, there's a map (known as a hidden class) that represents the memory layout of an object. A map holds an array of property descriptors that contain information about each property, as well as the elements and their types. These maps are shared between objects that have the same layout. If a matching map doesn't exist, then a new one is created. When this happens, the old and new maps are related by a transition from one map to the other.
  • When doing this transition, the old map and new map hold pointers to each other. A map can have multiple transitions. For instance, if property b is added and then property c is added, this creates two transition objects. If a field type is changed, such as going from an integer to a double, the map of the object is changed to reflect this via a transition.
  • In the post's example, with o1 and o2 both having a as an integer, if o1's a is set to a double then the map of o2 is marked as deprecated. This is because an SMI (internal small integer) can be represented by a more generalized value. Eventually, the o2 object will be updated to the map of o1 once one of its properties is accessed.
  • In v8, object properties can be stored in an array or a dictionary. Objects with properties stored in an array are fast objects, while objects with properties stored in dictionaries are dictionary objects. Map transitions and deprecations are specific to fast objects. Normally, when a map deprecation occurs, another fast map is created, but it's possible to make this not happen. In particular, if there are too many transitions on an object, then a new dictionary map is created instead.
  • While most uses of PrepareForDataProperty are safe, there are two locations where the type can be updated to a dictionary map instead of the original object map. In CreateDataProperty, an update may result in a dictionary map. There are multiple routes to this, but the usage of the spread syntax ended up being the most interesting.
  • When using the spread syntax (...obj1) together with a property accessor, the function CreateDataProperty is called while the object is being cloned. While this cloning is happening, it's possible to deprecate the map being used for the clone. This allows the updated map to be a fast map instead of a dictionary map! The result is a type confusion in the JavaScript engine that leads to memory corruption.
  • To exploit this, they used the type confusion to overwrite the elements field of the underlying data structure for a NameDictionary with a large value. By doing this, they get an OOB read for property values that leads to improper object access. Creating a "fake object" primitive is one of the best primitives in JavaScript engine exploitation, so the next step was to arrange the heap in a nice way to create a fake object.
  • Once there, an arbitrary read/write is easy to gain. First, place an object into an array and use the OOB read to read the addresses of the objects stored within the array. For a write, do the same thing as a read but write to these objects instead.
  • Chrome recently implemented the V8 heap sandbox to isolate the V8 heap from other process memory, such as code, so that corruption in the V8 heap cannot reach other memory. So, to get code execution, it's necessary to escape this sandbox. To do so, they modified DOM objects implemented in Blink. These are objects allocated outside of the v8 heap but represented as API objects in v8. By causing a type confusion in the API calls, it's possible to obtain a read/write primitive over the entire memory space.
  • Overall, a good post on exploitation and how to bypass a new defense-in-depth measure. Great stuff! If I had to guess how this bug was found, the author found a side effect that was not accounted for in some paths.
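The map/transition machinery from the first bullets can be illustrated at the script level. This is plain object code run as TypeScript; the engine internals aren't observable from script, so the comments describe what V8 is assumed to do behind the scenes per the write-up.

```typescript
const o1: Record<string, number> = { a: 1 }; // o1 gets map M1 {a: SMI}
const o2: Record<string, number> = { a: 1 }; // same layout, so o2 shares M1

o1.b = 2; // transition M1 -> M2 {a, b}; a transition object links the maps
o1.c = 3; // second transition M2 -> M3; M1 now roots a transition chain

o1.a = 1.5; // field type widens SMI -> double; maps assuming SMI deprecate

// o2 still points at a now-deprecated map; its next property access
// migrates it to the up-to-date map, as described in the post.
const migrated = o2.a;
```

None of this is visible to the script (values behave identically), which is exactly why a mid-clone deprecation, where one code path still assumes the old layout, is such a subtle bug class.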

Why Anchor Accounts Go Stale After CPI (and When to Reload) - 1860

Taichi Audits    Reference → Posted 2 Months Ago
  • When making a Cross Program Invocation (CPI) in Solana via invoke or invoke_signed, you provide a set of accounts to be used. In raw Solana, you pass in AccountInfo directly, which is a handle to the in-memory runtime state. In Anchor, you pass in Account<'info, T>, which is a deserialized version of T and acts as a cached value.
  • Native Solana programs do not operate on the ledger directly. Instead, accounts are loaded into the runtime as a working set. Instructions mutate this in-memory state. Many things, like lamports, are read directly from the runtime state every time. If you reborrow the data, then the underlying bytes will also be updated.
  • In Anchor, the T in Account<'info, T> is a deserialized snapshot of the account's data bytes. At the start of the instruction, Anchor constructs the accounts in a generated handler by deserializing info.data on the account. This means the data is copied onto the stack/heap as a Rust value and is NOT a live reference to the runtime bytes. At the end of the instruction, Anchor serializes the data structure and writes it back to the runtime.
  • In practice, this has a strange consequence: if a CPI modifies an account, the cached version will have stale data. For instance, for balance on a token account, a token transfer would show the same balance before and after the CPI, regardless of whether the account balance changed.
  • To solve this problem, Anchor accounts have reload(). This will reload the data from storage via re-reading and deserializing the data within AccountInfo.data. The account data is now no longer stale.
  • The author gives some tips on when to call reload(). It's required when A) a CPI can mutate account data, B) the account needs to be read/validated later, and C) you are reading a cached struct. If lamports or native runtime fields are being read, then reloading isn't necessary.
  • Overall, a great post on Solana CPI reloading and why it must be done. I had always wondered why lamports didn't need to be reloaded but the data did; now I know!
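The snapshot-vs-live distinction can be captured in a small analogy. This is TypeScript, not Anchor/Rust, and the names are mine: `runtime` stands in for the live account bytes, and `Account` holds the deserialized snapshot that goes stale once a CPI-like mutation touches those bytes.

```typescript
// The "runtime" owns the live serialized bytes, like AccountInfo.data.
const runtime = { bytes: JSON.stringify({ balance: 100 }) };

class Account {
  data: { balance: number };
  constructor() {
    // Deserialized once at instruction start: a copy, not a live view.
    this.data = JSON.parse(runtime.bytes);
  }
  reload(): void {
    // Re-read and re-deserialize the bytes, like Anchor's reload().
    this.data = JSON.parse(runtime.bytes);
  }
}

const acct = new Account();
runtime.bytes = JSON.stringify({ balance: 40 }); // a CPI mutates the bytes
const stale = acct.data.balance; // still 100: the snapshot went stale
acct.reload();
const fresh = acct.data.balance; // 40 after reloading
```

The analogy also shows why lamports don't need this: anything read directly off `runtime` is always current; only the deserialized copy can drift.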

Insecure by Design: Default Configurations in Embedded Systems - 1859

Kevin Chen    Reference → Posted 2 Months Ago
  • The IoT OWASP top 10 includes Insecure Default Settings. To the author, this means a configuration that is insecure by default, a setting that the user must explicitly change, or a setting that is bad and unchangeable. They have several examples of this in the article.
  • The first target is the Kobo eReader, an alternative to Amazon Kindles. On the debug shell, the default credentials are admin:admin, so with access to a device, it's possible to log in to it. Additionally, there is no signature checking on firmware, so it's trivial to reflash the device with arbitrary code.
  • The next thing they looked at was a Bitcoin ATM kiosk. After clicking around for a while, they were able to access the Windows control panel. With access to the system logged in as an administrator, it would have been possible to backdoor the entire thing. To demonstrate this, they used Mimikatz to extract creds and ran Doom on it.
  • A good post on some real-world issues. Insecure defaults have existed for years and will likely continue to do so. Good finds!

The economic failures of penetration testing - 1858

Zeyu    Reference → Posted 2 Months Ago
  • The failure of the penetration testing market is usually framed as a technical problem; this author argues that it's an economic incentives problem. The market rewards the appearance of security over the actual reduction of risk at the company. Because of this, "it is not a market for outcomes, it is a market for signals."
  • The author compares the market to used car sales. The seller knows more about the car's quality than the buyer, so the price averages out to an expected quality, driving the higher-quality sellers out of business. In pentesting, it's much the same: the buyer doesn't know where the quality stands, so they buy certifications and compliance rather than actual security. This leaves us at an equilibrium where an "acceptable" pentest is all that gets bought.
  • The next issue is bad incentives. Security teams are evaluated on audit success rather than on security posture. This incentivizes them to commission work that passes compliance checks with minimal friction. If a pentest uncovers real issues, that's too much work to deal with and looks bad on them. Because of the friction of fixing issues, insecurity becomes a form of organizational equilibrium.
  • Compliance distorts the market by acting as a demand proxy for security. Pentests are bought not to find issues but to satisfy a checklist. Success is often defined by the existence of a report, not the absence of exploitation paths.
  • Flat fees/hourly rates in pentesting make this all a race to the bottom on price. This creates a market where firms reduce costs through checklists and junior staffing. Why is price competed on? Because the quality of a pentest is largely unobservable. The market prices not risk reduction but plausible deniability.
  • They have a few recommendations on how to fix this in the future, and it's all about aligning incentives. For the pentesters, we should move away from one-off pentests to long-term engagements with continuous outcomes from the seller. Right now, compliance is treated as security, which is bad; compliance is a lagging indicator of security. Certifications should be the byproduct of a secure system, not the objective in itself.
  • In general, the market doesn't value high-signal work because it costs more money and creates unwanted work. They have a great quote at the end that sums everything up: "They mirror the broader economics of prevention: costs are immediate, benefits are invisible, and success is defined by the absence of events that cannot be proven to have been avoided."

Solana Forking - 1857

surfpool    Reference → Posted 2 Months Ago
  • Local forking of Solana state hasn't really existed until now; surfpool changes that. This is an amazing innovation for writing proofs of concept locally.

Ethereum Tools by Recon (Free) - 1856

Recon    Reference → Posted 2 Months Ago
  • There are many great free tools on this website for many things. EVM Bytecode analysis, storage slot preimages, invariants sandbox... lots of good stuff!