Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Stealing passwords from infosec Mastodon - without bypassing CSP- 1002

Gareth Heyes    Reference →Posted 3 Years Ago
  • Mastodon is an open source alternative to Twitter. With Elon Musk taking over Twitter, many people have flocked to this instead. Gareth decided to take a look at the security of the platform.
  • The social network allows you to enable HTML! Sounds like XSS by default - but the Markdown/HTML allowed was pretty limited. Bold tags and others were allowed but not much else. The author found the source code for parsing the HTML elements and started to look for bugs.
  • The application allowed the title attribute to be put into a tag. While playing around with double quotes, single quotes and quote-less attributes, they were unable to escape. Combining find and replace with HTML parsing is where things go bad. Gareth learned that the text :verified: would be replaced by the verified icon (blue checkmark). What happens if we put this into the middle of the title attribute?
  • The code below would magically get transformed and break out of the HTML when replaced! Before:
    <abbr title="<a href='https://blah'>:verified:</a> 
       <iframe src=//garethheyes.co.uk/>"
    > 
    
    After:
    <abbr title="<a href='https://blah</a>'><img draggable=" false" ... ><
    iframe src=//garethheyes.co.uk/>
    
  • If you look closely, the double quote from the swapped-in text will finish our HTML! Now, our iframe will get rendered. What can we do with an HTML injection bug and a strict CSP?
  • The final step of the attack was creating a legitimate-looking login page via an injected iframe. What's interesting is that the form would autofill in Chrome! Additionally, we can make the input invisible and give the victim a button to click, which would send the form with the credentials. Pretty neat phishing attack!
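The root cause lends itself to a tiny sketch. Below is a hypothetical Python model of the flaw (not Mastodon's actual code; the markup and function names are illustrative): shortcode replacement runs on the already-sanitized HTML string, so the double quote inside the swapped-in image tag terminates the title attribute.

```python
# Hypothetical model of the bug: the emoji shortcode is replaced AFTER the
# HTML has been sanitized and serialized, so the replacement's own quotes
# interact with the surrounding markup.
VERIFIED_IMG = '<img draggable="false" alt=":verified:" src="/emoji/verified.svg">'

def render_status(text: str) -> str:
    # Sanitization happened earlier; the replacement blindly edits the string.
    return text.replace(":verified:", VERIFIED_IMG)

payload = '<abbr title="x :verified: <iframe src=//attacker.example/>">hover</abbr>'
out = render_status(payload)
print(out)
# The " in draggable="false" closes the title attribute, so the trailing
# <iframe src=//attacker.example/> becomes live markup instead of attribute text.
```

The same pattern applies to any find-and-replace performed after sanitization rather than before it.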

The DFX Finance Hack Explained- 1001

Solidity Scan - Shashank    Reference →Posted 3 Years Ago
  • DFX Finance is a decentralized exchange for stablecoins. The exchange had flash loan functionality as well. A flash loan is where a large amount of money can be borrowed by a user, as long as the funds can be sent back with a fee.
  • The flash loan functionality completely lacked reentrancy protection. A nice visual from Peckshield shows a call to the flash() function making a nested call to the deposit() function before flash() returns.
  • To prevent the stealing of funds on the flash loan, the balance after the flash loan MUST equal or exceed the original balance. Here's where the vulnerability lies: this checks the overall balance!
  • An attacker can take out a flash loan then deposit the funds into their account! By doing this, the balance is the same and the attacker has tricked the contract into thinking they have a very large sum of money.
  • A few interesting takeaways. First, reentrancy protection should be added everywhere, even when it seems unnecessary. In this case, the flash loan didn't have the reentrancy protection but the deposit did. However, the deed was done!
  • Second, V1 was audited by Trail of Bits and the flash loan functionality was audited by PickAx for V1, but the reentrancy bug was not caught in the V2 audit. Overall, an interesting bug arising from different function calls to the contract. Further discussion can be found on Twitter and at Halborn.
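A toy Python model makes the balance-check flaw concrete (this is a sketch of the pattern, not the real DFX contracts; all names and numbers are illustrative): the flash loan only checks that the pool's overall balance is restored, so a reentrant deposit() of the borrowed funds satisfies the check while crediting the attacker.

```python
# Toy model of the reentrancy pattern: flash() validates only the pool's
# overall balance after the callback, so depositing the loan back "repays" it.
class ToyPool:
    def __init__(self, reserves: int):
        self.reserves = reserves          # tokens held by the pool
        self.deposits = {}                # per-user credited balances

    def deposit(self, user: str, amount: int):
        # Credits the user and returns tokens to the pool's reserves.
        self.reserves += amount
        self.deposits[user] = self.deposits.get(user, 0) + amount

    def flash(self, user: str, amount: int, callback):
        before = self.reserves
        self.reserves -= amount           # send the loan out
        callback(amount)                  # attacker code runs here (reentrancy)
        # Vulnerable check: only looks at the OVERALL balance, which the
        # attacker restored via deposit() -- crediting themselves for free.
        assert self.reserves >= before, "flash loan not repaid"

pool = ToyPool(reserves=1_000_000)

def attack(loan: int):
    # Instead of repaying, deposit the borrowed funds into our own account.
    pool.deposit("attacker", loan)

pool.flash("attacker", 500_000, attack)
print(pool.deposits["attacker"])  # → 500000 credited without spending anything
```

A reentrancy guard shared between flash() and deposit(), or tracking the repayment explicitly rather than the overall balance, breaks this pattern.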

SSRF and RCE Through Remote Class Loading in Batik- 1000

Piotr Bazydlo - Zero Day Initiative (ZDI)    Reference →Posted 3 Years Ago
  • Apache Batik is a library used for parsing Scalable Vector Graphics (SVG) and transforming them into other formats. Even crazier, the documentation mentioned executing JavaScript, loading and executing Java classes and many other things. This felt like an SSRF goldmine to the author, with several previous vulnerabilities indicating this.
  • A common use case is taking in an image, or a URL to an image, and transforming it using Apache Batik. The tool has many built-in protections in place for scripting and other things. For ScriptSecurity, there are several different settings, from no scripts at all to allowing scripts to load remotely.
  • In the security controls, there is the concept of origin within a URI. In particular, local SVG files can load scripts but not remote scripts. If we can bypass this control, we can do some horrible things!
  • The parsing to enforce this had a bug in it though. First, the host is extracted from both the script URL and the document URL. Next, there is a check to see if the two hosts are the same. The host extraction uses the standard Java getHost function, which is known to behave strangely with non-HTTP protocols.
  • The host for a local file (file:///some_file.txt) will always return NULL. Things like an external file or HTTP URL will properly return the host, making the check behave as intended. However, jar URLs (Java Archives) will also return NULL! Since the hosts are now "the same", the security protections no longer work as intended.
  • The obvious attack vector is SSRF, but we can do more. Apache Batik supports remote class loading via Java bindings. By referencing a remote class from a JAR file in our SVG, we can execute some Java code. Using this, it is pretty trivial to execute arbitrary code on the system.
  • An additional way, if remote JAR loading is not allowed but scripts are, is abusing the ECMAScript engine. In particular, accessing the Java runtime from ECMAScript gives trivial code execution, by design. The official security guidance for the ECMAScript engine is to secure the application with a Java SecurityManager, which is probably never done in practice.
  • Overall, parsing differentials are absolutely fascinating! It's super interesting seeing how this default and unexpected mechanism in Java caused such a big problem. However, the capabilities of these Apache products are just too powerful.
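The parsing differential is easy to reproduce in miniature. The snippet below uses Python's urllib.parse as a stand-in for Java's URL.getHost() (an analogy, not Batik's actual code): both file: and jar: URLs yield an empty host, so a naive host-equality "same origin" check treats them as matching.

```python
from urllib.parse import urlparse

# Analogy for Java's URL.getHost(): for non-hierarchical schemes the host
# comes back empty, so a host-equality "same origin" check passes.
doc    = urlparse("file:///tmp/evil.svg")
script = urlparse("jar:http://attacker.example/evil.jar!/Payload.class")

print(doc.hostname, script.hostname)   # None None
# The flawed check: "hosts match, so the script must be local."
assert doc.hostname == script.hostname
```

The safer comparison is on the full scheme-plus-authority origin, with unknown or non-hierarchical schemes treated as untrusted by default.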

Contact Point Deanonymization Vulnerability in Meta- 999

Lokesh Kumar    Reference →Posted 3 Years Ago
  • In April of 2022, Meta announced a Contact Point Deanonymization category. These guidelines cover bugs that enable matching of Uniquely Identifiable Information (UII) to user IDs. This ranges from finding email addresses and linking them to a profile to many other things.
  • Naturally, emails and other things are important for the login/signup process. So, with this new program, the author decided to take a look here.
  • When passing an email to the password reset functionality, a masked email address is shown. While playing around with the older domain of the enterprise version of Facebook (Workplace), the author noticed some slightly different functionality on it. In particular, ONLY the email address and username were supported (not the phone number).
  • On the old workplace domain, they tried passing valid Meta accounts but nothing worked. But, it was still using some of Facebook's account cookies, indicating that the two domains were somewhat linked. They started the flow on Facebook then used the cookies on the workplace domain.
  • When they visited the page for entering the OTP, the email was shown in an unmasked state. This is a perfect example of an information disclosure bug that this new program is trying to fix! The actual fix was to mask the email address on the reset page and only allow OTP validation to happen on the respective domain.
  • Overall, it's a pretty neat bug! With these extremely large systems, the intermingling of services can cause problems. This is where the recon is incredibly important.

Facebook SMS Captcha Was Vulnerable to CSRF Attack- 998

Lokesh Kumar    Reference →Posted 3 Years Ago
  • Recently, the author of this post found an issue with the account recovery flow. While trying to send multiple OTP codes, they hit an SMS captcha flow. Most people would stop here, but the author decided to check out the format of the captcha.
  • The captcha URL had a parameter called next. This parameter could be pointed to sensitive GraphQL operations, such as posting to the timeline or changing email privacy settings.
  • What this turns into is a CSRF attack: because the POST request is made from the page itself, it will include the CSRF token. The CSRF is triggered if a user clicks the continue button on the captcha with the malicious URL.
  • I'd personally never seen a bug like this! Seeing a URL control all of the content of a request is pretty interesting. The fix for this was adding a message authentication code (MAC) to ensure the URL couldn't be tampered with. Additionally, only a proper OTP code can trigger the action URL now.
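The fix described above can be sketched as follows (a minimal illustration; SECRET, the paths and the parameter names are hypothetical, not Facebook's implementation): the server attaches an HMAC over the next URL, so any tampering by an attacker invalidates the signature.

```python
import hmac, hashlib

# SECRET, paths and parameter names are hypothetical, for illustration only.
SECRET = b"server-side-secret"

def sign_next(url: str) -> str:
    mac = hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()
    return f"/captcha?next={url}&mac={mac}"

def verify_next(url: str, mac: str) -> bool:
    expected = hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)   # constant-time comparison

signed = sign_next("/settings/email")
mac = signed.split("mac=")[1]
print(verify_next("/settings/email", mac))            # True
print(verify_next("/graphql/post_to_timeline", mac))  # False: tampered next
```

Since only the server knows SECRET, an attacker cannot forge a valid mac for a URL of their choosing.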

Code Injection in WebUI page leading to sandbox escape- 997

bugs.chromium    Reference →Posted 3 Years Ago
  • Extensions within the Chrome browser are immensely important for building out the correct functionality. However, these extensions have incredible capabilities compared to the standard web page. These APIs for the extensions must be secure to ensure that no privilege escalation can occur.
  • The extension debugger API allows for debugging an extension during the development phase. When an attached tab navigates to a new URL, a check runs to see if the debugger is allowed to stay attached to that URL. Of course, this needs to have the proper permissions to do so.
  • If you try to attach to WebUI, then the debugging session should be terminated. Once this happens, the onDetach event triggers. WebUI refers to Chrome's internal pages (such as chrome:// pages), some of which are more privileged than others.
  • The bug is that while the onDetach event is being triggered on the termination of the API, a re-attach can occur on the tab. The author believes this happens because the URL change on the tab has not been committed yet, which causes the permission check to pass when it should not. Instead of looking at the WebUI URL on the tab, it looks at the original one, which has different permissions.
  • Why is this bad? If you can hit the debugger API, then you can add code into the page. By doing this on a privileged page, a serious privilege escalation could occur. This could even be used to execute commands on the device.
  • Overall, this is an interesting bug that comes down to a subtle logic issue. Sometimes, dynamic testing and trying out random things is the only way to find issues.
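The stale-state check can be modeled in a few lines (a toy sketch of the logic bug, not Chromium's actual code; all names are illustrative): the permission check consults the tab's last committed URL while a navigation to a privileged WebUI page is still pending.

```python
# Toy model: the check reads the *committed* URL, but re-attach happens
# while a navigation to a privileged WebUI page is still pending.
class Tab:
    def __init__(self):
        self.committed_url = "https://example.com"  # what the check sees
        self.pending_url = None                     # what will actually load

    def navigate(self, url: str):
        self.pending_url = url                      # not committed yet

    def commit(self):
        self.committed_url, self.pending_url = self.pending_url, None

def may_attach_debugger(tab: Tab) -> bool:
    # BUG: consults the old committed URL instead of the pending one.
    return not tab.committed_url.startswith("chrome://")

tab = Tab()
tab.navigate("chrome://settings")       # privileged WebUI page
allowed = may_attach_debugger(tab)      # True: check uses the stale URL
tab.commit()
print(allowed, tab.committed_url)       # debugger attached to chrome://settings
```

Checking the pending URL as well (or re-checking on commit) closes the window.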

The Team Finance Hack- 996

Halborn    Reference →Posted 3 Years Ago
  • Team Finance, a crypto token launchpad, was hacked. They were attempting to migrate from Uniswap v2 to v3. This whole project was a safekeeping for funds while some sort of migration was happening.
  • The migrate function for the smart contract had a faulty locking mechanism. The validation checked to see if the address belonged to an ERC20 token. Since this can be controlled by an attacker, they were able to lock their own ERC20 token in it and make the call themselves.
  • Once the bypass on the call was found, they could perform a liquidity transfer to a new attacker-controlled Uniswap v3 pair. Then, the leftover liquidity that wasn't transferred was considered the profit of the swap. Alongside the bypass of the caller verification, an attacker could set the initial price of the token in the pool.
  • Now, the transfer was performed with a skewed price, giving the attacker a massive refund as profit. This finding was missed by Zokyo Security. A good reference for this issue is on Rekt.news as well.
  • Defense in depth, such as not letting arbitrary tokens be used in the contract, would have solved this problem. The security of the migration function relied upon this. Further analysis was done by Slowmist.
  • Overall, good audits don't solve everything (unfortunately) and migration code should be considered part of the security of an ecosystem. An interesting bug that allowed the hacker to get a major payout by specifying a bad initial price.

RC4 Is Still Considered Harmful - 995

James Forshaw - Project Zero    Reference →Posted 3 Years Ago
  • Kerberos is an old authentication protocol that is still used all over the place. The core security concept is using encryption to prove knowledge of user credentials. In the handshake process for this, the user can specify what encryption protocols they support. This is a blessing to make it usable everywhere but a curse because of security.
  • Modern Windows versions disable DES and prefer AES. However, lurking below the surface is another symmetric algorithm - RC4. The stream cipher is known to have many security issues, such as known plaintext attacks. In the RFC for Kerberos in 2006, a modification of RC4 is used in order to work around the known weaknesses of the algorithm. What are the changes?
    • The encrypted data is protected by a keyed MD5 hash, which is used to prevent tampering. A random 8 byte value (the confounder) is included with the message.
    • The RC4 encryption key is derived from that hash and the base key used in the hash. The derived key and the confounder make sure that the same key is never reused for encryption.
    • The base key is NOT the user key but is derived from an MD5 HMAC keyed with the user's key.
  • The generation of the user key for RC4 Kerberos has historical baggage that makes it the worst part of the algorithm. When migrating from NTLM authentication (which transfers MD4-hashed passwords), they decided the best course of action was simply to use this hash as the RC4-HMAC Kerberos key. This creates a few problems:
    • If an attacker has access to the ciphertext of the RC4-HMAC key, they can attempt to brute force the password. The key isn't random - it's the output of MD4.
    • Kerberoasting can be used here as well. This is the process of requesting a ticket from the Ticket Granting Service (TGS), which is encrypted with this key. Using a known plaintext attack, we can attempt to brute force the key as well. There is a variant of this called AS-REP Roasting as well.
  • Everything prior to this was background. The author was simply trying to understand how Windows Defender Credential Guard (CG) implements the various Kerberos encryption schemes. They started reverse engineering the DLL CRYPTDLL.DLL. Although this interface is undocumented, the DLL had to export functions and they were easy to work with. While doing this, they noticed there were several private encryption types they wanted to dive into.
  • One of these algorithms was RSADSI RC4-MD4. This stood out to them for a few reasons:
    • The key is 16 bytes but only the first 8 are used.
    • The derived session key is only 5 bytes of randomness. The rest of it is 0xAB repeated.
    • There's no key blinding. Practically, this means that ciphertext can be duplicated between two messages.
    • No cryptographic hash to protect tampering with the ciphertext. With RC4 being a stream cipher, this is particularly bad.
  • The first thought on exploitation was brute forcing the 40 bit key - but we can do better.
  • A message to the Key Distribution Center (KDC) can be forced to return an error message with an encrypted timestamp. Since the timestamp is known, a plaintext attack can be used to recover the keystream! In particular, ciphertext byte XOR timestamp byte = keystream byte. Additionally, the AS-REP is encrypted with the same session key. Since we know the bytes of the RC4 keystream from the timestamp, this can be used on the AS-REP message as well.
  • Unfortunately, the overlap of the timestamp and other known values is only 4 bytes of the 5 byte key in the AS-REP structure. Of course, we can brute force the final byte by requesting encrypted data and attempting to decrypt it. This attack does require a proxy in order to MitM the request being made to the KDC. Can we do this without a privileged network position?
  • If pre-authentication is disabled, then anybody can make a request to get the TGT for a user and specify the RC4-MD4 encryption algorithm. With this, the KDC sends back the encrypted AS-REP structure. The very beginning of this structure is DER-encoded data, which should be consistent. So, we can use a plaintext attack once again to derive part of the RC4 keystream. At this point, we still don't know the 40 bit key that we're trying to obtain or the keystream beyond those bytes.
  • With this much information, we are able to successfully encrypt a timestamp to send back to the KDC though. Now, it's time for the magic and hacker mindset... the timestamp is a null-terminated ASCII string. If the string is NOT null terminated, then an error message is sent back. If it is, then a success message is sent back. Using this information, we can determine when the plaintext is null. Since we have the encrypted byte we created and know it decrypts to a null byte, we can recover the keystream byte for that position.
  • The trick above can be used for each byte in the timestamp in order to recover each keystream byte needed to decrypt the AS-REP structure! There's a second trick that must be used in order to keep writing null bytes though... A quirk of the DER format's long form can be used to continue reading bytes past the end of the actual timestamp. Since we have 2^8 = 256 possibilities for each byte and 5 bytes to get, a total of 1280 requests are required. Not bad compared to the 2^40 from before.
  • Overall, an absolutely amazing post for Project Zero. A few lessons learned from this:
    • RC4 is bad and stream ciphers are hard to use properly.
    • Don't compromise when writing a protocol for cryptography. Everything should be designed with as much consideration as possible; from strong algorithms to large keys.
    • Tricks may require the addition of another trick! This is seen with the nullbyte padding attack requiring the DER format attack. I commonly think of a trick and if the trick doesn't work, I move on; I should consider this attack in future work.
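The core keystream-reuse attack is simple to demonstrate (a self-contained sketch, not the real Kerberos message formats; the key layout loosely mirrors the 40-bit RC4-MD4 session key described above): XORing a ciphertext with its known plaintext yields the keystream, which then decrypts any other message encrypted under the same key.

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))                       # KSA: key scheduling
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0                       # PRGA: keystream generation
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Illustrative key: 40 bits of entropy padded with 0xAB, as described above.
key = b"\x13\x37\x42\x99\x01" + b"\xab" * 3

timestamp = b"20221215120000Z"   # known plaintext (the encrypted timestamp)
secret    = b"AS-REP secret"     # second message under the SAME keystream

ct_ts  = xor(timestamp, rc4_keystream(key, len(timestamp)))
ct_rep = xor(secret,    rc4_keystream(key, len(secret)))

keystream = xor(ct_ts, timestamp)   # ciphertext XOR plaintext = keystream
print(xor(ct_rep, keystream))       # → b'AS-REP secret'
```

In the RC4-HMAC construction, the per-message confounder and derived key prevent exactly this reuse; RC4-MD4 lacks both.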

Bypassing Android Permissions From All Protection Levels- 994

Nikita Kurtin - DEFCON 30    Reference →Posted 3 Years Ago
  • The goal of the talk was to figure out what a user could do with no permissions. Android has three types of permissions for actions:
    • Application defined. These are permissions and capabilities defined in the app manifest for an application.
    • Runtime Operations (Dangerous). Things like location, phone calls and other things must be verified every time they happen.
    • System Permissions. These are things that require leaving the application, going to the Android settings and explicitly allowing the application to do.
  • There are default permissions within Android for each application. The author of the post dove into what these were in order to analyze the attack surface. Some of these allowed for interactions with other applications, such as Chrome. If we don't have a permission, can we convince another app to do something for us?
  • The first permission bypass was the INTERNET permission. If an application doesn't have this permission, then it cannot make web requests. However, a Chrome deeplink (implicit intent) allows sending an arbitrary request through the browser. Then, the response can specify the app to send the data back to. With this, the author was able to successfully make a web request without the INTERNET permission!
  • There is a special permission called SYSTEM_ALERT_WINDOW, which allows drawing windows on top of other apps and can only be granted from extremely privileged locations. Of course, user applications cannot trigger this directly. But, can anything mimic this functionality?
  • TOAST is used for displaying brief messages to the user. However, it can be used to display almost anything: from webviews to pictures to videos which cover the entire screen. Theoretically, it has a time limit of 4 seconds. Naturally, the TOAST can be shown over and over again to bypass this limitation though.
  • The fake system message can be used for clickjacking and ransomware. The clickjacking is particularly bad since we can hijack a click to approve location permissions or start a phone call. Overall, interesting research into bypassing the permission boundaries of the Android ecosystem.

Gregor Samsa: Exploiting Java's XML Signature Verification- 993

Felix Wilhelm - Project Zero    Reference →Posted 3 Years Ago
  • While reviewing the Java standard library, the author came across a strange attack surface: a custom JIT compiler for XSLT programs. The reason this looked so juicy was that this was exposed to remote attackers during XML signature verification with things like SAML used for SSO.
  • While most signature schemes work on a raw byte stream, XML signatures operate on a standard known as XMLDsig. This protocol attempts to be robust against small changes to the signed document, such as white space changes and other things.
  • The signature appears in its own XML tag within the document with many special fields including the information that is signed, information about the key and many other things. The verification is done in two steps: reference validation (transforms on the document itself) and signature validation.
  • Among the transforms supported by XMLDsig, there is a format called Extensible Stylesheet Language Transformations (XSLT). This is a programming language with the purpose of transforming an existing XML document. It can do things like request remote data, edit the document itself and do other things.
  • In Java, the signature verification is done, then the transformations occur. Java will iterate through each of the transformations and perform them on the XML document. This calls a module called XSLTC, which is an XSLT compiler from the Apache Xalan project.
  • The compilation takes in the XSLT stylesheet as input and returns a JIT'ed Java class. The JVM loads this class, constructs it and runs it as code. The library depends on the Apache Byte Code Engineering Library (BCEL) to dynamically create the Java classes. Large constants get stored in the constant pool and smaller integers get stored inline.
  • The constant pool count field is only 2 bytes in size. However, neither XSLTC nor BCEL considers this constraint, leading to an integer overflow past 65535 entries. When JIT'ing the program, BCEL writes the internal class representation with all of the constants, but the length is truncated.
  • Practically, an attacker can force constant_pool_count to a small value, meaning that the rest of the pool will be interpreted as method and attribute definitions. Each pool entry starts with a 1 byte tag describing the type of constant, which is followed by the actual data. How do we exploit this though!?
  • There's no dynamically sized value with completely controlled content. Although there are strings, they are in a modified UTF-8 without null bytes. The field CONSTANT_DOUBLE can be used to create floats with nearly arbitrary content. This gives quite a bit of control, but the 0x6 tag of each following constant still interrupts the controlled bytes.
  • To make this work, we need to properly spoof the metadata fields after the pool. With deep knowledge about the fields (and much trial and error), Felix found a good way to spoof them using a combination of UTF-8 entries and doubles. With the initial headers made, we can get to the interesting part of the class file definition: the methods table - the methods and bytecode definitions.
  • To add the proper code, we need to align ourselves properly to create the code for a constructor. With this, we can set the bytecode of a function that will be executed. Additionally, to reference classes in Java, we can include an XSLT snippet. Boom, code execution!
  • The Java SDK will verify the signature and then run the transforms. So, what's the big deal? If an attacker can set their own key, which is particularly common with multi-tenant SAML, then this code path can still be hit. Additionally, secureValidation forbids the usage of XSLT transformations, but it is turned off by default.
  • Overall, an amazing post. A few good things I learned from this:
    • Studying technology for unknown but powerful attack surfaces is worth the effort.
    • Error tolerant systems are much easier to exploit. Things with strict exits on errors are harder to attack with limited primitives.
    • Memory safe languages still suffer from many problems, including issues with integers, as shown in this post.
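The truncation at the heart of the bug can be shown directly (illustrative numbers, not the actual exploit): the class file's constant pool count is a 2-byte (u2) field, so writing an oversized pool silently wraps the count.

```python
import struct

# Sketch of the overflow: the count is serialized as an unsigned 16-bit
# big-endian value; masking with 0xFFFF mimics the silent truncation that
# happens when neither XSLTC nor BCEL checks the bound.
def write_pool_count(n_entries: int) -> bytes:
    return struct.pack("!H", n_entries & 0xFFFF)

real_entries = 65536 + 100            # attacker-supplied oversized pool
count_field  = write_pool_count(real_entries)
parsed = struct.unpack("!H", count_field)[0]

print(parsed)  # → 100: the JVM reads a tiny pool, and the remaining
               # attacker-controlled entries are parsed as method and
               # attribute definitions instead.
```

Validating n_entries against the u2 range before serialization (and failing loudly) is the one-line fix this class of bug calls for.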