Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Apps shouldn’t let users enter OpenSSL cipher-suite strings - 1704

Frank DENIS    Reference →Posted 7 Months Ago
  • TLS allows for a lot of configuration: which encryption algorithms, key exchanges, hashing algorithms, and more can be used. The author of this post asks whether this is the proper user experience. Their claim is that many admins "fix" (notice the double quotes) problems by changing the ciphers, only to make the situation worse. For instance, when the BEAST and POODLE attacks were in the news, people switched to RC4. Sadly, RC4 had its own issues that nobody really knew about at the time.
  • The author claims that checkboxes are better than cryptic strings. These checkboxes could contain items like "FIPS 140-3 approved" or "post-quantum", or negative options like "disable TLS 1.0". Each checkbox maps to a union of settings to apply. A set of simple presets would be very useful too.
  • Why is this nice? Compared to the cryptic strings, this approach gives future-proof algorithms, easy-to-understand options, and much easier compliance. To do this correctly, the creators of the checkboxes would need to be very careful about the mappings.
  • Sometimes, the real strings are necessary. FIPS 140-3 requires NIST-approved algorithms, which aren't always possible. Forward secrecy may be a requirement that the checkbox approach can't express. There are likely other edge cases as well. Overall, it's a great post on making defaults more secure.
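To make the idea concrete, here's a minimal Python sketch of mapping checkboxes to a cipher string. The preset names and cipher lists below are illustrative assumptions of mine, not something from the post.

```python
# Minimal sketch of the checkbox idea (preset names and cipher lists
# are illustrative assumptions, not taken from the post).
PRESETS = {
    "modern": ["TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256"],
    "fips-140-3": ["ECDHE-RSA-AES256-GCM-SHA384", "ECDHE-RSA-AES128-GCM-SHA256"],
}

def build_cipher_string(checked):
    """Union the cipher lists behind each checked box, keeping order."""
    suites = []
    for box in checked:
        for suite in PRESETS[box]:
            if suite not in suites:  # drop duplicates across presets
                suites.append(suite)
    return ":".join(suites)
```

The point is that users only ever see the checkbox names; the cryptic string stays an internal detail that the vendor can keep up to date.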

Corruption via MathSpace on Firefox Browser - 1703

Manfred Paul - ZDI post by Hossein Lotfi    Reference →Posted 7 Months Ago
  • Browsers need to be fast - I mean, really fast. So, interpreting JavaScript isn't always fast enough. Modern browsers perform Just-in-Time (JIT) compilation of JavaScript to native code, making it faster. This introduces an interesting yet incredibly complicated class of vulnerabilities to consider. This post covers a Firefox JIT bug used at the Pwn2Own competition.
  • The Ion JIT compiler uses a function called ExtractLinearSum to convert a value into a linear sum expression. For instance, (x+(2+3)) - (-3) can be transformed into x+8. The function takes three parameters:
    1. Value node
    2. MathSpace - an enum with three values: Modulo (wraps around the integer space), Infinite (bails if wrapping is needed), and Unknown (the default)
    3. A recursion counter to guard against stack-depth exhaustion
  • The function ExtractLinearSum is used in multiple places in the Ion compiler, one of which is folding, or simplifying, linear expressions. The function TryEliminateBoundsCheck tries to merge bounds checks on the same object to simplify things. For instance, array[i+4]; array[i+7] will generate two bounds checks. To merge them, it creates a bounds check object that keeps track of what's going on, eventually leading to a single check of the value 7 against the length.
  • Although the MathSpace parameter is useful, it's not rigorously verified. In the case of bounds checks, this seems pretty important! Modulo makes sense in some math cases but doesn't make sense for bounds checks - Infinite does. So, what if we can find a way to make the numbers used in this operation have type Modulo on a bounds check?
  • The following code triggers the bug when i is slightly less than 2^32: array[(i+5)|0]; array[(i+10)|0]. The |0 is used to force this to be 32 bits. The check will overflow because of the MathSpace being set to Modulo, leading to a faulty bounds check. This is only possible with really large arrays, requiring typed arrays to be practically feasible.
  • Getting the write to happen in the proper location only requires fiddling with the minimum and maximum sizes in funky ways to trick the minimum/maximum counting for the bounds. To turn this into a useful OOB read or OOB write, a useful object must be found in the huge address space. They found that Map objects were nice for building addrOf and fakeObj primitives. Once there, exploitation is trivial.
  • It appears that this bug was found via manual source code review. Even though JavaScript engines are heavily fuzzed and reviewed, there are still great bugs lurking in unusual places. Overall, a great write-up, even for somebody who knows nothing about browser engines!
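To see why an index near 2^32 is dangerous with |0, here is a toy Python model of JavaScript's 32-bit truncation (my own illustration, not Ion's actual folding code): the two offsets land on opposite sides of the wrap, so a single merged check on the larger constant offset says nothing about the other access.

```python
def js_or0(x):
    """Emulate JavaScript's (x|0): truncate to a signed 32-bit integer."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

i = (1 << 32) - 8        # "slightly less than 2^32"
idx_a = js_or0(i + 5)    # wraps to -3
idx_b = js_or0(i + 10)   # wraps past zero to the small index 2
```

With Modulo semantics, the folded constant comparison can be satisfied even though the two real indices are nowhere near each other.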

Boredom Over Beauty: Why Code Quality is Code Security - 1701

John Saigle - Asymmetric Research     Reference →Posted 7 Months Ago
  • The Web3 space is innovative yet financially risky, due to attackers' ability to directly steal money. That innovation means many hard-won security lessons need to be relearned in Web3. This post is about one of them: overall code quality. Code quality is code security.
  • NASA famously implemented its Power of Ten rules to provide clear coding guidelines, specifically because projects with extreme consequences for failure require rigorous code quality standards. curl has very serious coding guidelines as well.
  • When code is well-structured and adheres to clear patterns, security vulnerabilities become easier to identify and harder to introduce. Codebases characterized by inconsistency, complexity, and poor organization create fertile ground for security flaws.
  • Now comes the reason for the name: chase boredom instead of beauty. Most secure code is boring and simple - the JC of our company has talked about this extensively as well. Security thrives in predictability and not novelty. Besides the code, this includes docs, standards, linting, and review processes.
  • Why should we take code quality so seriously? Problems cost more to fix later. Whether it's re-architecting something, a major hack, or something else, it just costs much more later. Additionally, when developers trust their foundation and execute without fear, they can build systems that will last forever. Good read!

Uncovering the Query Collision Bug in Halo2: How a Single Extra Query Breaks Soundness - 1700

Suneal Gong - ZK Security     Reference →Posted 7 Months Ago
  • Halo2 is a zero-knowledge (ZK) proof framework based on the PLONK protocol that was originally used for Zcash. Circuits, the flow of operations and verification in a ZK proof, are structured as tables. In these tables, each column holds a sequence of values and each row represents a step in the computation. Constraints, or the limits of the circuit, are defined by querying values in these columns at specific offsets, known as Rotations.
  • Each column is encoded as a polynomial over a finite field. Querying a column at a certain Rotation corresponds to evaluating the polynomial at a specific point. Constraints among columns are enforced using gates. The prover commits to the columns using polynomial commitment schemes like KZG. The verifier receives these commitments and verifies them for correctness via black magic math.
  • Circuits have multiple columns and gates, resulting in the evaluation of polynomials at multiple commitment openings. To make this efficient, Halo2 uses a multi-point opening technique, allowing for the verifier to batch many queries into a single proof. In practice, they batch the evaluations, compute a linear combination of all values and check a single equation to ensure it's been satisfied.
  • Alright, enough of the math! What's the vulnerability!? The multi-point opening system keys each evaluation by (Commitment, QueryPoint) and maps it to a Value. This key isn't unique enough! It's possible for a "query collision" to occur, where two independent queries share the same key even though their values are expected to be different. In the context of Halo2, the consequence is horrible: one evaluation can silently overwrite the other. This means that it's possible to forge proofs in many situations.
  • From what I can gather, this vulnerability appears to be somewhat theoretical as no live protocols could have been exploited. Regardless, the bug was super cool and entertaining to look at, even though I don't fully understand ZK math.
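A toy sketch of the collision (my own model, not Halo2's actual data structures): batching keyed only on (commitment, point) lets one claimed evaluation silently overwrite another.

```python
# Two independent queries that happen to share (commitment, point).
# The prover claims different values for each, but the batching map
# keeps only one of them.
queries = [
    ("C1", 5, 42),  # honest evaluation of commitment C1 at point 5
    ("C1", 5, 99),  # forged evaluation with the same key
]
batched = {}
for commitment, point, value in queries:
    batched[(commitment, point)] = value  # later entry overwrites earlier

# Only one (key, value) pair survives, so the value 42 is never checked.
```

The fix direction is the obvious one: make the key unique per query rather than per (commitment, point) pair.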

Inside the GMX Hack: $42 Million Vanishes in an Instant - 1698

SlowMist     Reference →Posted 7 Months Ago
  • GMX is a very large decentralized trading platform. Although it has a $5M bug bounty, it was exploited for $42M after over 2 years of being live and multiple audits. There are several reasons this likely wasn't found, such as requiring multiple vulnerabilities to be exploited.
  • There were two design flaws. The first is a financial-manipulation vulnerability in the GLP token: opening a short position instantly increases Assets Under Management (AUM), which raises the price of GLP in a controllable fashion that can reasonably be undone. This is pretty straightforward.
  • The second issue is less simple. When creating short positions, it was possible to call increasePosition, which did NOT update globalShortAveragePrices in the ShortsTracker contract. When a position was later decreased, though, the value WAS updated. Exits update it; entries don't. This is not really a vulnerability by itself but a quirk of the protocol.
  • The real vulnerability is very subtle. GMX had a PositionManager contract that controlled a lot of settings and was only callable via a GMX-controlled key. One of its functions called enableLeverage on the core code before performing any trades. A backend off-chain service (the Keeper) would trigger this functionality. While the Keeper made this call, it was possible to redirect execution and call the GMX contract while leverage was still enabled. This is the vulnerability that makes the attack possible.
  • With all of that in mind, the attack can be broken down into preparation and triggering. First, the attacker creates a long position via a smart contract (used for reentrancy later) and a reduce-order that the Keeper would later execute. When the Keeper received the reduce-order, it would call the PositionManager to enable leverage. The Orderbook would then execute executeDecreaseOrder(), update the attacker's position, and pass execution to the attacker's contract because the collateral token was WETH.
  • The attacker's smart contract, triggered by ETH being sent to its fallback function, would transfer 3000 USDC to the vault and open a 30x-leverage short against WBTC using increasePosition. Because of the second design flaw, globalShortAveragePrices was not updated. During a future call to the ShortsTracker contract, globalShortAveragePrices would be updated. This dropped the price of WBTC to about 57x less than it should have been.
  • To exploit this price discrepancy, they used the GLP token. They first took a large flash loan of USDC and called mintAndStakeGlp to mint a lot of GLP. Next, the attacker called increasePosition to deposit a large amount of USDC on WBTC. This updated globalShortSizes, resulting in AUM increasing dramatically. Finally, the attacker called unstakeAndRedeemGlp to redeem way more tokens than they were entitled to. But why?
  • The AUM was updated but globalShortAveragePrices was not. When performing calculations on the trades, the manipulated value of the trade was far above the market price, making the trade appear deeply unprofitable. Naturally, this increases AUM by a lot. By doing this over and over, they got more funds from the trade of GLP than they actually should have.
  • This is a pretty crazy exploit in a popular protocol - it makes me wonder what other big protocols are hiding huge bugs. Exploiting vulnerabilities such as the manipulation of financial instruments is pretty complicated. I'm guessing that the attacker found the financial manipulation first and then needed to find a way to turn on leverage.
  • Eventually, all of the funds were returned to the protocol. So, why didn't they just claim the bug bounty? Since the keeper functionality was "privileged" and the offchain infra is blackbox, there's a major risk of getting rugged. SlowMist recommends better reentrancy locks be added. In reality, I feel like these were reentrancy issues across contracts (in the case of enableLeverage), making this not a great solution. In the case of the discrepancy in the price updates, I do agree, though. Great write-up explaining this super complex set of issues!
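Here's a toy model of the reentrancy window (names and mechanics are simplified from my reading of the post, not GMX's real contracts): the Keeper enables leverage, execution reaches the attacker's fallback mid-flight, and the attacker opens a position while leverage is still on.

```python
class Vault:
    def __init__(self):
        self.leverage_enabled = False
        self.positions = []

    def increase_position(self, who, leverage):
        # Only allowed while the Keeper's window is open
        assert self.leverage_enabled, "leverage disabled"
        self.positions.append((who, leverage))

def execute_decrease_order(vault, attacker_fallback):
    vault.leverage_enabled = True     # Keeper calls enableLeverage
    attacker_fallback(vault)          # collateral transfer hits the fallback
    vault.leverage_enabled = False    # Keeper disables leverage again

def fallback(vault):
    # Reentrant call while leverage is still enabled
    vault.increase_position("attacker", 30)
```

A cross-contract reentrancy lock around the enable/disable window is exactly what this toy is missing.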

The CPIMP Attack: an insanely far-reaching vulnerability, successfully mitigated - 1692

YANNIS SMARAGDAKIS - Dedaub    Reference →Posted 7 Months Ago
  • This report is an in-the-wild story of attackers compromising many contracts in a subtle way. The name says it all: Clandestine Proxy In the Middle of Proxy (CPIMP).
  • Smart contract deployment of upgradeable contracts typically works in two steps: deploy the code, then call an initialization function. Unless specifically checked, attackers can call the initialize function before the real owner does and set malicious settings. In reality, if this happened, a legitimate developer should recognize the failure and just try again. At least, that's the argument I've been hearing for a long time. So, what's different here?
  • Attackers were able to backdoor the contracts without being noticed - real value was being accrued in these contracts for several weeks as well. The malicious actors were monitoring the intended implementation and deployment procedures. Instead of the normal flow of going from the proxy to the proxy implementation, a contract was added in the middle, similar to a MitM attack.
  • To make matters even scarier, most blockchain explorers could not tell the difference! The implementation was shown as the correct one in the explorer. Events and storage slot contents even looked correct. Even the deployment emitted misleading events. Developers just weren't being careful enough upon review.
  • Many project contracts were backdoored, such as EtherFi and Pendle. The malicious actors were waiting for the right moment to profit, but it was caught first. The authors of the post contacted SEAL 911 to start a war room. To not freak out the attackers and get them to exploit things now, it had to be coordinated. This meant getting all affected protocols into a war room at once. Although every remediation was custom, most of the funds were recovered!
  • So, how did the backdoor work? It was sophisticated, with persistence, detection evasion, and more. First, it added functionality to become the "Super-Admin", overriding ownership for upgrades, drains, and executions. This allowed the malicious owner to do whatever they wanted.
  • To make it more persistent, it restored itself in the implementation slot - this meant that not even upgrades could remove it. On L2s, if the Super Admin account had been denylisted, they had signed executions that still worked. Even crazier, they added batched direct storage writes in calls as well.
  • Some implementations contained anti-recovery protection: by checking balances before and after a call, the backdoor would prevent 90% of the funds from being taken at once. That's pretty devious!
  • The coolest part by far is the reason Etherscan finds the wrong implementation contract - the main reason most developers were tricked. Etherscan's detection consults multiple storage slots, defined by various proxy standards, IN ORDER. By placing the legitimate implementation address at the old proxy implementation slot (defined by a standard), the attackers made Etherscan report the wrong address! Amazing work, and it makes me think that Etherscan should have a fat bug bounty program.
  • Overall, great research in detecting, documenting, and mitigating this vulnerability. In the future, I will be more hesitant about allowing initialization functions to be front-run. Neat!
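The explorer trick can be sketched like this (slot names and addresses are stand-ins of mine; the real slots come from proxy standards such as EIP-1967):

```python
# The attacker plants the legitimate address in the slot the explorer
# checks first, while the runtime actually delegates via another slot.
storage = {
    "legacy_impl_slot": "0xLegitImplementation",  # planted decoy
    "eip1967_impl_slot": "0xClandestineProxy",    # what really runs
}

def explorer_detect(storage):
    # Explorers consult well-known slots IN ORDER and trust the first hit
    for slot in ("legacy_impl_slot", "eip1967_impl_slot"):
        if storage.get(slot):
            return storage[slot]

def runtime_delegate(storage):
    # The proxy's delegatecall target is read from a different slot
    return storage["eip1967_impl_slot"]
```

Anyone reading the explorer sees the legitimate implementation while every call actually flows through the clandestine proxy.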

Break into any Microsoft building: Leaking PII in Microsoft Guest Check-In - 1691

Bribes    Reference →Posted 7 Months Ago
  • While browsing Shodan one day, they noticed a subdomain associated with Microsoft - guest.microsoft.com. Once logged in via a phone number, no information was given. This seemed like it wasn't meant to be publicly accessible.
  • Looking at the Burp Suite logs, they found an interesting API relating to their previous stays: /api/v1/config/ with a JSON parameter called buildingIds. Since they had not visited any buildings, no information was provided: the array of buildings was empty. By providing an ID of 1, they were able to see some building information.
  • Surprisingly, a lot of building information was provided: access codes in some of them, address/building name, parking info, GPS coordinates, QR code data, Microsoft employee emails, etc. After iterating over more IDs, they found buildings from Israel to the United States.
  • They wanted to increase the impact some more. After some more effort reversing the JavaScript, they found the API /api/v1/host. By providing an email, PII about the employee, such as phone number, office location, mailing address, and more, was returned. The same issue existed for guests based upon their email as well.
  • They couldn't find any exposed APIs around explicit visits, so they tried digging further. They tried for path traversals via secondary-context vulnerabilities. After using ..%2f..%2f..%2f (../../../ URL-encoded), they were able to get an Azure Functions page. But why!? The proxy was decoding the URL-encoded /, and the decoded path was being used by the actual Azure function. Neat!
  • After some directory brute forcing, they got a 500 error at /api/visits/visit/test. Eventually, they managed to get this working to retrieve a wide range of invitation and meeting information. Sadly, they got nothing for the vulnerability: it was moved to review/repo, fixed, and no payment was ever made. Regardless, it was a good set of vulns!
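The double-decoding behind the traversal can be shown in a few lines (the path below is illustrative, not the real endpoint):

```python
from urllib.parse import unquote

# The client sends %2f, so the front proxy's route matching doesn't see
# any extra slashes in the path...
client_path = "/api/v1/config/..%2f..%2f..%2fsecret"

# ...but the proxy decodes the path once more before forwarding it, so
# the backend Azure function receives a genuine traversal.
backend_path = unquote(client_path)
```

Any time two components decode the same path independently, `%2f` sequences like this are worth a try.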

Would you like an IDOR with that? Leaking 64 million McDonald’s job applications - 1690

Ian Carroll & Sam Curry    Reference →Posted 7 Months Ago
  • McHire is a chatbot recruitment platform, owned by Paradox, that is used by most of McDonald's franchisees. Applicants chat with a bot named Olivia, which collects information, conducts personality tests, and more.
  • While going through the interview process, they got some disturbing pro-company questions but didn't see anything interesting. Of note, it seemed like Olivia had a solid set of predefined inputs and wouldn't use anything else.
  • On the sign-in page, they noticed a small icon for Paradox Team Members. They tried 123456 as both the username and password, and this logged them in as an admin on a test restaurant. Crazy, but no real impact.
  • Doing authorization without authentication is super error-prone; think of how people can check in for a flight on an airline without ever creating an account, as an example. Sam and Ian attempted to apply for a job when they noticed the API PUT /api/lead/cem-xhr that fetched data. This was likely proxying information to a Candidate Experience Manager (CEM) via an XHR request. This contained a lead_id parameter.
  • They simply tried decrementing the ID and got another applicant's data. This contained previous chat conversations, names, emails, addresses, phone numbers, etc. Probably craziest of all, an auth token for the consumer UI was also sent back, allowing you to effectively become the user.
  • With no bug bounty contact, they reached out to people at Paradox.ai, who promptly remediated the vulnerability. Sam does a lot of great research on things without bug bounty programs. Although security is getting better in some places, it's clearly getting worse in others.
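The IDOR boils down to a lookup with no ownership check. A toy sketch (the data and handler shape are my assumptions, not Paradox's code):

```python
# Fake applicant table keyed by lead_id
LEADS = {
    1001: {"name": "Alice", "chat": ["hi Olivia"], "token": "tok-1001"},
    1002: {"name": "Bob", "chat": ["hello"], "token": "tok-1002"},
}

def cem_xhr(lead_id, session_user=None):
    # BUG: returns whatever lead_id the client asked for, never checking
    # that the lead actually belongs to session_user
    return LEADS.get(lead_id)
```

Decrementing the ID walks the whole table, auth token included, which is exactly why sequential identifiers plus missing authorization is such a devastating combination.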

GitHub Source Code Data Ingester - 1689

gitingest    Reference →Posted 7 Months Ago
  • When using LLMs, quickly grabbing the code you want from the repository is important. Notably, it needs to be delimited, have a file structure and only get the requested files. gitingest does this very well and very quickly. I use this a lot when using LLMs.

Story Network Postmortem - 1688

Story Foundation    Reference →Posted 8 Months Ago
  • Story Protocol received two denial-of-service reports that would take down the chain via panics. Both of these slipped through the cracks of audit competitions.
  • The first vulnerability was caused by a faulty patch of a previously known issue. During the Cantina competition, a vulnerability was reported in the upstream fork of the Omni Network codebase. Execution payloads given to the execution client would be processed by Story but not by geth due to some weird unmarshalling issues. For instance, adding the same field multiple times in JSON could bloat the payload and still be considered valid.
  • The goal was to refactor the JSON into a stricter format like protobuf, but it was too late to make such a big change before launch. To fix this vulnerability, Story decided to put a hard limit of 4MB on block size. If a block was bigger than this, the code would panic. By sending 128KB messages over and over again, an attacker could build a consensus-valid block over 4MB and crash nodes.
  • There was a quick patch and a long-term patch. For the quick patch, they edited CometBFT to limit the block size to 20MB and edited the prepare-proposal code not to propose blocks larger than a threshold. From rigorous dynamic testing, they determined that block sizes larger than 20MB could not be created. In the long term, they're moving to protobuf and restricting extra fields.
  • The second vulnerability was a logic bug in handling multiple delegation withdrawals that probably requires more digging into the codebase to fully understand. When unstaking from a validator or rewarding a delegator, the tokens are burnt from the consensus layer's balances. This is to prevent double accounting. If there are unclaimed rewards, they are automatically sent to the delegator.
  • The function ProcessUnstakeWithdrawals iterates over a list of unbonded entries. This loop fails to deal with the situation of multiple withdrawal requests coming from the same delegator. Via some funky state handling, this led to a panic from too many coins attempting to be burned.
  • They decided that the loop was too complicated: it was handling too many cases at once in order to be computationally cheaper. They changed the code to use two loops to simplify it.
  • The takeaways are interesting to me from the development team:
    • Have more time between audits and launches. This is pretty obvious but hard to do in practice.
    • Increase test coverage. Another classic thing.
    • Try to handle more panics within the codebase to not fail. Sometimes, you want stuff to fail but not all the time.
    • Reduce code complexity and maximize readability. A little bit of performance gain is probably not worth a big hack.
  • My personal takeaway as a bug bounty hunter is that DoS bugs are way easier to find than most things. If these are paid out as criticals, then it seems like the best bang for your buck.
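Circling back to the first vulnerability: the panic path can be sketched as a toy (the 4MB and 128KB figures come from the post; everything else is my simplification).

```python
HARD_LIMIT = 4 * 1024 * 1024   # Story's execution-side panic threshold
MSG_SIZE = 128 * 1024          # per-message payload the attacker sends

def process_block(size):
    # The "fix" turned an oversized-but-valid block into a panic that
    # takes down every node processing it.
    if size > HARD_LIMIT:
        raise RuntimeError("panic: block exceeds 4MB")

# 33 messages of 128KB make a block just over the limit
block_size = 33 * MSG_SIZE
```

This is the classic patch-introduced-bug shape: the limit was enforced at a layer where violating it is fatal, rather than at the layer that decides what goes into a block.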