Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Android-based PAX POS vulnerabilities (Part 1) - 1395

stmcyber    Reference →Posted 1 Year Ago
  • Many point-of-sale (POS) devices are moving toward Android-based systems instead of obscure custom-made platforms. The authors of this post decided to review the PAX POS system for vulnerabilities. In part 1 of this post, they go through mechanisms that attackers with local access can use to backdoor the device.
  • In fastboot, the hidden custom command oem paxassert can be used to overwrite the pax1 partition. This is a special partition that doesn't contain a filesystem but is a configuration map. Some values from this map are used in kernel parameters. From this, it is possible to inject our own kernel parameters to get root with a custom rootfs. For more information on the technique, they linked alphsecurity.
  • The unsigned partition exsn also had information concatenated to the kernel parameters. So, by flashing this partition, it's possible to get code execution using the same technique as before. In practice, injected spaces easily escape the current parameter's context, allowing arbitrary parameters to be added.
  • Within one of the Android apps, there is a command injection issue. The app checks that the command starts with dumpsysx. However, simply appending a semicolon after this prefix allows arbitrary commands to be executed afterwards. The PoC is done via ADB, so I don't know how exploitable this actually is.
  • systool_server is a daemon exposed via Android binder with root privileges. It exposes miniunz, to which an attacker can pass an arbitrary number of flags along with the input/output directories. Using this and symbolic links, it is possible to get an arbitrary file write primitive, since it's running as root.
  • The systool_server tool performs multiple checks on the caller's UID to ensure only specific users can execute this API. However, these can be bypassed with LD_PRELOAD. Honestly, I don't understand HOW this bypass works, but that's what they claim.
  • The final issue is a downgrade attack to an older signed (and vulnerable) version. To be honest, being able to downgrade is very common functionality. For instance, what if the version you have doesn't work and you want to go backwards? Not a trivial thing to fix.
  • Overall, many of these attacks were interesting! Backdooring a device like this could be used to steal sensitive card information. Additionally, they have one CVE that is undisclosed that I'm curious to see what it is later!
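The prefix-check-then-shell pattern behind the command injection above is worth seeing concretely. Here's a minimal Python sketch of that class of bug; the function names and the safe variant are my own, not from the post:

```python
import shlex
import subprocess

def run_privileged(cmd: str) -> str:
    # Hypothetical recreation of the vulnerable pattern: the check only
    # verifies that the string STARTS with the allow-listed word...
    if not cmd.startswith("dumpsysx"):
        raise PermissionError("command not allowed")
    # ...but the whole string then reaches a shell, where ';' separates
    # commands, so "dumpsysx; <anything>" passes the check.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def run_safe(cmd: str) -> str:
    # Safer sketch: tokenize first and validate the actual program name,
    # then execute without a shell so metacharacters are inert.
    argv = shlex.split(cmd)
    if not argv or argv[0] != "dumpsysx":
        raise PermissionError("command not allowed")
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

With the vulnerable version, `run_privileged("dumpsysx; echo pwned")` happily runs the injected `echo`, while the tokenizing version rejects the same input because `dumpsysx;` is not an exact match.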

PuTTY Private Key Recovery via Biased Nonce - 1394

Marcus Brinkmann    Reference →Posted 1 Year Ago
  • The digital signature algorithm (DSA) requires a number used once (nonce). If this number isn't random, then it's trivial to recover the private key. This is how George Hotz hacked the PlayStation 3 back in the day.
  • Apparently, the nonce doesn't have to be completely non-random for this to fail. If there is merely some missing randomness (a bias), then it's also possible to recover the private key. It's even one of the final challenges on Cryptopals.
  • Many programs use random nonces. However, some generate them deterministically by hashing and reducing modulo the ECDSA group order, which is effectively random. For the P-521 curve, the 521-bit group order is larger than the 512-bit hash output, so the upper 9 bits of the nonce are guaranteed to be 0. Using the biased-nonce attack, as seen in Cryptopals, it's possible to recover the private key from about 521/9 ≈ 58 signatures with over 90% probability.
  • I don't understand the math on this, but it's still interesting. Crazy to find this in PuTTY, such a popular product. Many cryptographic constructions have unexpected footguns and should always be reviewed by professionals.
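The simplest version of this failure, the fully reused nonce from the PlayStation 3 case mentioned above, is easy to demonstrate. Here's a toy DSA sketch with tiny, insecure parameters of my own choosing (real DSA uses huge primes; the biased-nonce lattice attack in the post is considerably more involved):

```python
# Toy DSA repeated-nonce key recovery. Parameters are tiny and insecure,
# chosen purely for illustration.
p, q, g = 607, 101, 64   # q divides p - 1; g has order q mod p
x = 57                   # victim's private key (what the attacker recovers)
k = 33                   # the nonce -- fatally reused for two signatures

def sign(h):
    # Textbook DSA: r = (g^k mod p) mod q, s = k^-1 (h + x*r) mod q
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q
    return r, s

h1, h2 = 42, 77          # hashes of two different messages
r, s1 = sign(h1)
_, s2 = sign(h2)         # same nonce => same r, the telltale sign

# Attacker: s1 - s2 = k^-1 * (h1 - h2) mod q, so k falls right out,
# and the private key then follows from either signature equation.
k_rec = ((h1 - h2) * pow(s1 - s2, -1, q)) % q
x_rec = ((s1 * k_rec - h1) * pow(r, -1, q)) % q
```

Two signatures with the same `r` are all it takes; the biased-nonce case is the same idea stretched across ~58 signatures with lattice reduction doing the heavy lifting.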

Mandrake (PFM) Vulnerability - 1393

Justin Tieri - Strange Love    Reference →Posted 1 Year Ago
  • In the Cosmos ecosystem, there is a cross-chain communication framework called Interblockchain Communication, or IBC for short. On top of IBC, there is a middleware called the Packet Forwarding Module (PFM). PFM will take an incoming IBC tx and forward it to the next chain in the list, allowing for multi-hop calls.
  • There are several parties involved with this:
    • Source chain: The blockchain that initiates the original IBC message.
    • Intermediary chain: The blockchain(s) that the PFM packet goes through in order to get to the destination.
    • Destination chain: The location in which the original packet was meant to be routed to.
  • When using ICS20 (which PFM uses) for token transfers, the memo stores the routing. Within ICS20, there is some magic that happens for handling assets from other chains. When going from the source to the destination, the tokens are escrowed in the source chain then a representation is minted on the destination. When going backwards, the minted token is burned and the escrowed token is unlocked. Because PFM is doing magic to route multiple ICS20 calls, there is a chance for error here.
  • PFM handles the responses from the destination chain back to the source chain for successes, errors and timeouts. However, some users were attempting to perform another PFM hop after their interactions on the destination chain, back through the intermediary and source chain. When doing this, the internal accounting of funds got messed up when handling the error path.
  • In particular, the escrow account on the intermediary chain was not properly updating the total supply. Since the escrow account only has so many funds, this could result in funds being inaccessible after the errors. According to the post, this bug was discovered while trying to debug an IBC client on a real network. Yikes! Luckily, it wasn't possible to steal funds using this issue.
  • The developers said that this wasn't caught because of missing test cases in their end-to-end test setup. They urge devs to write good unit, integration and e2e tests whenever possible. Another interesting bit is that testing IBC applications is hard to do - you need to set up multiple blockchains for multiple situations, which is difficult.
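My mental model of the escrow/mint bookkeeping described above, reduced to a sketch (all names are mine; the real logic is Go inside the Cosmos SDK and PFM):

```python
# Simplified model of ICS20 escrow accounting across a forward + refund.
class Chain:
    def __init__(self, name):
        self.name = name
        self.escrow = 0        # tokens locked for in-flight transfers
        self.total_supply = 0  # minted voucher tokens in circulation

def forward(src, dst, amount):
    # Source -> destination: lock on the sending side, mint a
    # representation (voucher) on the receiving side.
    src.escrow += amount
    dst.total_supply += amount

def refund_on_error(src, dst, amount):
    # Error/timeout path: burn the voucher AND unlock the escrow.
    dst.total_supply -= amount
    src.escrow -= amount  # forgetting this step on an error path is
                          # (roughly) the class of bug in the post:
                          # escrow drifts and funds become inaccessible

source, intermediary = Chain("source"), Chain("intermediary")
forward(source, intermediary, 100)
refund_on_error(source, intermediary, 100)
```

The invariant to test end to end is that every error/timeout leg restores both sides to their pre-transfer balances; that's exactly the case the missing e2e tests didn't cover.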

Nonce Upon a Time, or a Total Loss of Funds - Exploring Solana Core Part 3 - 1392

Neodyme    Reference →Posted 1 Year Ago
  • Preventing the replay of previous transactions is important for the security of Solana and most blockchain systems. The obvious way would be to check if a signature had already been seen. However, this runs into scaling issues with over 150B transactions and signature malleability issues. So, something else needs to be done.
  • The initial solution was to just not allow transactions that are too old. In particular, if the transaction references an out-of-date blockhash, then it can be safely ignored. This strategy doesn't work with offline signing though. If the transaction is signed offline, then the blockhash may have expired by the time it's broadcast. Since some users want to do things offline with their key, there needs to be another way.
  • Durable Transaction Nonces are a number used once (nonce) stored on chain ahead of time. Instead of putting the blockhash, the nonce is used. After the nonce is used, a new value is generated and stored on chain for the account. Of course, this must be done in both failed and successful calls in order to prevent unintentional execution of the transaction later. This functionality is complicated and very nuanced.
  • Most of the time, the Solana core expects all state to roll back on failure. For instance, writing to an account that you don't own will result in failure. The author points out that "Special cases lead to complexity, and complexity leads to bugs." which I couldn't agree with more! This is a little thing that, if not done correctly, could cause major havoc.
  • There is a match expression written in Rust that checks three cases - tx succeeded with nonce, tx succeeded with hash, and whether an error occurred. The nonce success arm actually accepts both succeeded and failed transactions with state writes! What does this mean? Even illegal state writes, such as cross-account ones, can persist. It seems like "illegal" is different from a regular failure in this context - so errors get funky, leading to the bug.
  • This completely breaks the entire security model of the system. One account can write to another account with arbitrary value. This is an absolute 100% game over, as far as Solana bugs go. At the point this bug was found, $10B was on Solana. I hope they got a huge bug bounty for this find!
  • This is a crazy bug that destroys the entire runtime. I think the authors make a really good point that sticks with me - "As a rule of thumb, we recommend that you double-check special cases and complex code". If there is interwoven logic with weird case statements, it's a great place to look for bugs. Subtle calling patterns and unexpected errors can break this code very quickly.
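My reading of the problematic commit decision, re-sketched in Python (the real code is a Rust match in the Solana runtime; the function and arm structure here are my simplification):

```python
def should_commit_state(used_durable_nonce: bool, tx_failed: bool) -> bool:
    # Toy model of the three-way commit decision described above.
    if used_durable_nonce:
        # Nonce arm: state MUST be committed even on failure, so the
        # nonce advances and the tx can't be replayed later. But as
        # written, this arm also persisted ILLEGAL state writes (e.g.
        # cross-account) from failed transactions.
        return True
    if not tx_failed:
        return True    # ordinary successful tx: commit everything
    return False       # ordinary failed tx: roll everything back
```

The special case exists for a good reason (replay protection), which is exactly why its interaction with the "failed tx" path went unnoticed.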

How To Cheat The Staking Mechanism - Exploring Solana Core Part 2 - 1391

Neodyme    Reference →Posted 1 Year Ago
  • Solana is a proof-of-stake network. So, the more value you stake in Solana, the more power you have in the voting process. With 2/3 of the voting power, changes to the state can be made. Clearly, ensuring that staking and voting power are handled properly is important.
  • To stake funds, a user 1) creates an account, 2) delegates the account and 3) becomes activated. However, parsing all of the staked chain state every block would be incredibly inefficient. So, a cache (a running total) is kept instead. If something relevant to the cache has changed, then an update is made to the cache.
  • Solana allows active stake accounts to be merged. This will close one account and add its stake to the other account without a cooldown. When doing this, the detection works by checking whether the closed account has zero funds in it. Normally, this is the case, since the staking program drains the account.
  • However, there is a logic bug here - it's possible to add funds to the old staking account so that it's not properly reaped. If this is done, then the key isn't removed from the cache! So, we can reuse the same staked values in multiple accounts by exploiting this logic flaw.
  • To exploit, here are the steps:
    1. Create two staking accounts.
    2. Consolidate one account into the other.
    3. Add one lamport into the closed account.
    4. Solana core doesn't update the cache for the closed account because it still has value.
    5. Recreate the vote account. The delegation is still there and the cache still doesn't get updated properly.
  • To fix the vulnerability, the account is now deserialized instead of relying on a zero-funds check. Overall, a super interesting post on the desync between reality and the understanding of reality.
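The flawed reap check versus the fix can be sketched as follows (all names and data shapes are mine, not Solana's actual structures):

```python
stake_cache = {}  # pubkey -> delegated stake (the running total)

def maybe_reap_flawed(pubkey, accounts):
    # Flawed: "closed" is inferred purely from a zero balance. An
    # attacker who donates 1 lamport to the merged-away account keeps
    # its stale delegation alive in the cache.
    if accounts[pubkey]["lamports"] == 0:
        stake_cache.pop(pubkey, None)

def is_valid_stake_state(data):
    # Stand-in for real deserialization of the account's stake state.
    return len(data) > 0

def maybe_reap_fixed(pubkey, accounts):
    # Fix (roughly): try to deserialize the account's data instead of
    # trusting the lamport balance.
    if not is_valid_stake_state(accounts[pubkey]["data"]):
        stake_cache.pop(pubkey, None)

# Attacker merged this account away, then added 1 lamport back:
stake_cache["merged_away"] = 50
accounts = {"merged_away": {"lamports": 1, "data": b""}}
maybe_reap_flawed("merged_away", accounts)   # stale entry survives!
```

With the flawed check the stale 50-stake entry stays cached and can be double-counted; the deserialize-based check removes it regardless of the donated lamport.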

How a Little-Known Solana Feature Made Program Vaults Unsafe - Exploring Solana Core Part 1 - 1390

Neodyme    Reference →Posted 1 Year Ago
  • Solana is a blockchain that allows for the execution of arbitrary Rust code. The main difference is that information is stored in accounts - both code and data.
  • Program Derived Addresses (PDAs) are public keys that are derived from the address of the program itself. By using a specific seed, the address can be bumped off of the elliptic curve to ensure there is no valid private key for it. To generate the PDA, the following values are concatenated then hashed: hash(seed + program_id + "ProgramDerivedAddress"). Without PDAs, things are cumbersome because a keypair must be created for the account and used to sign the transaction.
  • As an alternative, create_with_seed was made. This is a feature of the system program: it can create an account and assign ownership of it. The address of this account is calculated by hash(base + seed + owner).
  • These two methods are pretty similar in how they generate addresses, right? Since there are no separators or unique prefixes for these in Solana, there is the potential for a hash collision! There are some constraints though, such as the account being system-owned and the first 21 bytes of the program_id being valid UTF-8 (about 1 out of 180K).
  • How would this be useful? A collision like this could have allowed for an awesome rug-pull mechanism. There is no way an audit would have caught this either. This was fixed by ensuring that the owner of a seeded account cannot end with ProgramDerivedAddress.
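The collision is easy to see once you write both derivations as plain concatenation. A sketch (I'm using sha256 and treating the exact field layout as an assumption; the byte-split below is only for illustration):

```python
import hashlib

MARKER = b"ProgramDerivedAddress"

def pda(seed, program_id):
    # hash(seed + program_id + "ProgramDerivedAddress")
    return hashlib.sha256(seed + program_id + MARKER).digest()

def create_with_seed_addr(base, seed, owner):
    # hash(base + seed + owner)
    return hashlib.sha256(base + seed + owner).digest()

# With no separators, an owner that ENDS in the marker string lets the
# two derivations hash identical byte strings:
program_id = b"A" * 32
a = pda(b"my_seed", program_id)
b = create_with_seed_addr(base=b"my_seed",
                          seed=program_id[:11],
                          owner=program_id[11:] + MARKER)
```

Both calls hash the byte string `my_seed + program_id + MARKER`, so the addresses collide; forbidding owners that end with the marker (the actual fix) makes the two input spaces disjoint.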

How to freely borrow all the TVL from the Jet Protocol - 1389

Jayne    Reference →Posted 1 Year Ago
  • Jet Protocol was a lending and borrowing protocol built on Solana. The function _market_value() is used to determine the total market value of the loans that had been taken out. So, if this function was broken in some way, you would be able to bypass the protection to take out arbitrary loans.
  • Recently, the protocol had implemented the capability to close a Solana account. Upon doing this, the account is set back to the Pubkey::default value and gives back some of the rent costs.
  • However, the collateral-to-loan ratio computed by _market_value() has a fatal control flow flaw with this new functionality. It uses Pubkey::default as the indicator to exit the list. So, if an account is closed and then this function is called, the loop will exit early!
  • Overall, a fairly simple issue with default values leads to a complete rug pull. To me, checking against a default value is a red flag and should be avoided because of things like this. Good find!
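The sentinel-exit pattern behind this can be sketched in a few lines (Python stand-in for the Rust; names and shapes are mine):

```python
DEFAULT_PUBKEY = "1" * 32  # stand-in for Pubkey::default()

def market_value(positions):
    # Flaw: the default key is treated as "end of list". A CLOSED
    # account in the middle of the list (reset to the default key)
    # makes the loop exit early, undercounting the borrower's loans.
    total = 0
    for pos in positions:
        if pos["key"] == DEFAULT_PUBKEY:
            break
        total += pos["value"]
    return total

positions = [
    {"key": "loan_a", "value": 100},
    {"key": DEFAULT_PUBKEY, "value": 0},   # closed account
    {"key": "loan_b", "value": 900},       # never counted!
]
```

Here `market_value(positions)` reports 100 instead of 1000, so the 900 loan is invisible to the collateral check; skipping sentinel entries with `continue` (or not using a sentinel at all) avoids this.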

Attacking Secondary Contexts in Web Applications - 1388

Sam Curry    Reference →Posted 1 Year Ago
  • Web servers no longer expose files on a server in a simple way. Instead, they use proxies and load balancers, and fetch responses from other servers internally. Weird application routing can be used to cause some major havoc.
  • How do we identify these types of routing when we're blind? Using directory traversal and fuzzing for special characters (#, ?, &, /, ., @) is a good way to find this. Another detection is changes in the response for certain directories, such as the headers of a response changing. Finally, stack traces or wrapped responses can be good indicators here as well.
  • What kinds of security issues can we find with this? Data being served across extra layers causes weird issues. HTTP smuggling and CRLF injection can be found in some weird places. Second, since developers don't expect users to be able to control parameters and paths here, it causes uber havoc on the endpoint. Adding debug flags or traversing up the directory can access unintended functionality.
  • Information disclosure is a bad one here as well. Internal HTTP headers and access tokens come to mind. SSRF from here is especially dangerous when it returns data, rather than just blindly poking the internal network.
  • What types of issues will we run into as a hacker? Directory traversal may not work - not everything will handle these sequences. Another thing is that some servers will still be authed with the same headers or cookies as the original request, making nothing exploitable. A difficult part is guessing the paths, mostly because this is blind. To get around this, we need good context on the rest of the application, brute forcing and a bunch of guesswork.
  • Sam has a ton of case studies of this. One interesting case was with the Authy (2FA) integration with Pinterest. The application was only checking that the request returned a 200 and that the response was {"success":true}. When taking the code from the user and verifying it within Authy, there was a directory traversal on this path. To exploit it, simply using ../sms as the 2FA code would return success and bypass the 2FA!
  • A classic case was a directory traversal in invoice routing. If you knew somebody's email on this back-end service, you could traverse back up twice, supply the email and an ID, and fetch invoices cross-account.
  • A few takeaways for me. First, these types of bugs are out there but are difficult to triage what to do next. Innovations on the blind discovery of things would be amazing for bug hunting. Next, sanitization is hard for URLs in these cases with extremely complicated bugs. Overall, great find!
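The Authy/Pinterest case boils down to path normalization across a trust boundary. A rough stand-in (the host and endpoint paths here are illustrative, not the real API):

```python
import posixpath

def internal_verify_url(user_code):
    # Hypothetical recreation: the backend splices the user-supplied
    # 2FA code straight into an internal API path. "../" in the code
    # walks the path up to a sibling endpoint after normalization.
    path = posixpath.normpath("/protected/json/verify/" + user_code)
    return "https://internal-2fa.example" + path

normal = internal_verify_url("123456")   # .../json/verify/123456
evil = internal_verify_url("../sms")     # .../json/sms -- different endpoint!
```

If the sibling endpoint happens to answer with a 200 and `{"success":true}` (as the SMS-sending endpoint did), the naive success check is satisfied and the 2FA is bypassed; URL-encoding or rejecting path separators in the user segment closes this.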

Cross Platform 0-Day RCE Vulnerability Discovered in Opera’s Browser - 1387

Oleg Zaytsev - Guardio Labs    Reference →Posted 1 Year Ago
  • Chromium, the underlying browser engine that powers Opera, is highly customizable. The developers of the user-facing browser can add custom APIs to it. Additionally, browsers ship built-in extensions with special privileges that cannot be uninstalled even if you want to.
  • The My Flow functionality of the Opera browser is implemented using the Opera Touch Background extension. Only web resources from My Flow-specific Opera domains are allowed to interact with this extension via the chrome.runtime.connect API. The event listener for My Flow has the ability to open, send, and download files. The process of opening a file could result in executables being run, which is a major issue.
  • Since these were domain-restricted, there are two ways we could go about exploiting this. First, an XSS vulnerability. However, the authors of this post chose a different route - a malicious low-privilege extension injecting JavaScript on the domain to execute the payload. Since it's easy to trick users into installing an extension, or to compromise an existing one, it's reasonable to assume that this is a viable attack vector.
  • In both of these cases, there is a problem though - the Content Security Policy (CSP). This allows for fine-tuning of what content can be loaded on a page - from images to JavaScript. Additionally, the page contained a sub-resource integrity (SRI) tag. With this feature in place, they were unable to change in-flight requests, since the modified content wouldn't match the expected hash. To be honest, I thought you could just add arbitrary tags to the page from an extension, but maybe I'm wrong about that.
  • To bypass the SRI, they went back to previous versions of the content. To their surprise, they found an older yet live version of the page that didn't contain an SRI! Using this page, they could now inject the JS with arbitrary content and cause havoc.
  • By calling the SEND_FILE of the private API, we can upload any file to the system, including executables. Then, calling OPEN_FILE will run the executable, giving us code execution. Game over via a malicious extension and a single click of approval.
  • The actual remediation was not directly stated, which is weird. If I was remediating this I'd put limitations on the file types that can be opened, remove the old HTML pages and then try to come up with further protections for code execution to be impossible. Otherwise, a super interesting finding that can probably be ported to other browsers.
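SRI is just a hash check over the fetched resource, which makes it clear both why in-flight modification fails and why the old page without an integrity attribute was fair game. A minimal sketch (the verification function is my own; browsers implement this per the SRI spec):

```python
import base64
import hashlib

def sri_ok(content, integrity):
    # No integrity attribute => nothing to verify. The older live page
    # the researchers found was in exactly this state.
    if integrity is None:
        return True
    # integrity looks like "sha384-<base64 digest>"
    algo, _, expected = integrity.partition("-")
    digest = base64.b64encode(getattr(hashlib, algo)(content).digest()).decode()
    return digest == expected

script = b"console.log('legit');"
tag = "sha384-" + base64.b64encode(hashlib.sha384(script).digest()).decode()
```

Any byte changed in transit changes the digest and the script is refused, so the practical bypass was not beating the hash but finding a page that never asked for one.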

Microsoft Edge’s Marketing API Exploited for Covert Extension Installation - 1386

Oleg Zaytsev - Guardio Labs    Reference →Posted 1 Year Ago
  • Chromium, the underlying browser engine that powers Edge, is highly customizable. For instance, there is a file called _api_features.json that stores permissions for vendor-specific APIs. Additionally, resources.pak contains resources for the vendor-specific APIs as well. By comparing these files with standard Chrome, they found various custom-added APIs.
  • While browsing through these, they found the edgeMarketingPagePrivate API. This API was only accessible from a list of websites belonging to Microsoft, according to the permission model. It was designed to integrate marketing campaigns. How did it do this? It adds in a custom hidden theme, which is similar to an extension. However, they found that, by chance, it also accepted extensions!
  • To add the theme or extension, the private API had to be called with a specific ID. Since the extension is hidden and permissioned, there is no explicit check from the user that this change is okay.
  • To exploit this, the authors give a few hypotheticals. First, an XSS on any of these domains would lead to the installation of an arbitrary extension that was very, very highly permissioned. The other method was that another extension could inject the JS snippet into one of the domains to trigger the update. This would go from a low- to a high-privilege extension, just with a little JavaScript.
  • To fix the issue, only theme IDs (and not arbitrary extension IDs) are allowed to be provided. To me, this feels like the underlying Chrome API shares functionality between themes and extensions, but that's just an assumption. The authors mention that a simple domain-based restriction on sensitive functionality is not enough to stop bad things from happening, which I tend to agree with. Good find!