Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Accellion Kiteworks Vulnerabilities - 591

Adam Boileau - Insomnia Sec    Reference → Posted 4 Years Ago
  • Accellion has a large collection of products meant to secure the ecosystem against outside attackers, such as encrypted email, secure file sharing and many other features. Kiteworks is a content firewall product.
  • The application wrote its own SQL query builder library. However, there is no safe or standard way to parameterize ORDER BY or LIMIT clauses, and two SQL injections resulted from this. Because the application allows stacked queries, these injections can be used to run UPDATE statements or exfiltrate arbitrary data.
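Because placeholders only bind values, not identifiers, ORDER BY clauses tend to get string-concatenated in home-grown builders. A minimal sketch of the vulnerable pattern plus the usual allow-list fix (table and column names are made up, not Kiteworks' actual schema):

```python
def build_query(table, order_by):
    # vulnerable: ORDER BY cannot be parameterized, so the attacker-supplied
    # column name is concatenated straight into the SQL text
    return f"SELECT * FROM {table} ORDER BY {order_by}"

def build_query_safe(table, order_by):
    # common fix: validate the identifier against an allow-list of columns
    if order_by not in {"id", "name", "created"}:
        raise ValueError("unexpected ORDER BY column")
    return f"SELECT * FROM {table} ORDER BY {order_by}"

# with stacked queries enabled, the injection can carry an UPDATE
payload = "id; UPDATE users SET role='admin' WHERE name='attacker'--"
```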
  • Using the UPDATE to change or create a user with admin privileges was enough to compromise the application. However, the author wanted to go from user to shell!
  • The product supports several SMS backends, including a generic option that sends arbitrary HTTP requests from the Kiteworks host. Additionally, there is a test method to check that the SMS service is working: a message sent through the API is delivered to the specified phone number. A generic HTTP backend, response data sent back and a test endpoint just scream exploitable SSRF!
  • Using the SSRF, JWT tokens could be requested from the internal web server. Additionally, the backend runs the Apache Solr search engine, and having remote streaming enabled is a known misconfiguration that allows arbitrary file reads from the operating system. Boom!
  • The arbitrary file read let the author steal the HMAC key used for intra-cluster admin calls. This becomes particularly bad because an attacker who can obtain a valid JWT and the HMAC key material can simply call an endpoint like /dbapi/cli_exec/execute via the internet-facing front-end webserver and have arbitrary commands run via the shell.
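To see why the leaked key is fatal: whoever holds the HMAC key can mint signatures indistinguishable from legitimate intra-cluster traffic. A hedged sketch (the message format and key are hypothetical; only the endpoint name comes from the write-up):

```python
import hashlib
import hmac

def sign(key: bytes, path: str, body: bytes) -> str:
    # hypothetical intra-cluster signing scheme: HMAC-SHA256 over path + body
    return hmac.new(key, path.encode() + b"\n" + body, hashlib.sha256).hexdigest()

stolen_key = b"hmac-key-recovered-via-file-read"   # obtained via the Solr file read
attacker_sig = sign(stolen_key, "/dbapi/cli_exec/execute", b"id")

# the server-side check accepts the forgery, because the math is identical
server_sig = sign(stolen_key, "/dbapi/cli_exec/execute", b"id")
valid = hmac.compare_digest(attacker_sig, server_sig)
```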
  • The resulting shell runs as uid=500, without root permissions. There is a script that attempts to protect against root-level access to particular binaries and scripts, and it performs an insane amount of validation. However, after patiently reading through the operating system, the author noticed a permissions issue on the directory holding some of those binaries. As a result, one of the protected binaries could simply be swapped out with another script to become root.
  • To make this attack even more spicy, there is a reflected XSS vulnerability in one of the APIs. Using it, the rest of the vulnerability chain above can be triggered: from unauthenticated to popping a shell!
  • The article ends with a failed attempt to get code execution via parsing of read-only documents. The author touches on several avenues that were only partially explored, including components with known vulnerabilities. Additionally, the app runs within a sandbox called Firejail.
  • The sandbox itself is known to be solid; it limits access to the file system, network resources and many other things. Here, the network was restricted by denying access to all inet sockets. However, the application uses Nginx as a frontend to forward requests to the backend via domain sockets. PHP is deployed using the “fastcgi” mechanism, where a long-running PHP server receives requests to invoke scripts, avoiding the cost of process start-up.
  • The socket restrictions only cover inet sockets, not Unix sockets. As a result, a Unix stream socket connection can be made to the PHP FastCGI server to execute a PHP script within the directory. With this, we have code execution outside of the sandbox.
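A sketch of that escape path: AF_UNIX sockets are still allowed, and FastCGI framing is simple enough to speak by hand. The record layout below follows the FastCGI spec; the socket path in the comment is a guess, not the product's real path.

```python
import struct

FCGI_VERSION = 1
FCGI_BEGIN_REQUEST = 1
FCGI_RESPONDER = 1

def fcgi_record(rec_type: int, request_id: int, content: bytes) -> bytes:
    # FastCGI record header: version, type, request id,
    # content length, padding length, reserved byte
    return struct.pack(">BBHHBB", FCGI_VERSION, rec_type, request_id,
                       len(content), 0, 0) + content

# BEGIN_REQUEST body: role (responder), flags, 5 reserved bytes
begin = fcgi_record(FCGI_BEGIN_REQUEST, 1,
                    struct.pack(">HB5x", FCGI_RESPONDER, 0))

# inside the jail, this connection still succeeds because only inet
# sockets are blocked (socket path is hypothetical):
#   s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
#   s.connect("/run/php-fcgi.sock")
#   s.sendall(begin)
```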
  • Overall, the SSRF exploitation, privilege escalation and sandbox escape were unique and enjoyable to read. Seeing the full attack and the scraps of notes at the end was awesome.

The Complete Guide to Prototype Pollution Vulnerabilities - 590

Daniel Elkabes - WhiteSource    Reference → Posted 4 Years Ago
  • Prototype pollution is a vulnerability specific to JavaScript (JS), and it requires a solid understanding of the language. In JS, there are objects, which are collections of key-value pairs (similar to a dictionary in Python). A prototype is an attribute of an object that allows objects to inherit features from one another. A prototype can itself have a prototype; this is called the prototype chain.
  • The __proto__ attribute of an object has some unique and interesting traits:
    • It is a special attribute that refers to the prototype of an object
    • All objects have a __proto__ attribute (their prototype)
    • __proto__ is also an Object
    • __proto__ was meant to be a feature, to support processes like inheritance of all attributes
  • What if we could alter the root prototype object? If we could, then all objects would inherit from it! In the context of JavaScript, this would allow us to change the object information for every other object sharing that prototype.
  • On the frontend, this commonly leads to XSS. On the backend, it could even lead to RCE. The whole point is that we are altering or pre-setting fields that can change the flow of the program.
  • How do you find this vulnerability? Deserialization of a string into a JSON object and recursive merge operations are good places to look. Here's an additional video by Intigriti.
  • Prototype pollution is interesting by itself but difficult to find. Keep an eye out for it in future testing.
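Prototype pollution itself is JS-only (the classic payload shape is `{"__proto__": {"isAdmin": true}}`), but the vulnerable recursive-merge pattern translates. A hedged Python analog, polluting the class instead of a prototype since Python has no prototype chain, shows how one crafted payload changes state shared by every object:

```python
import json

class Settings:
    is_admin = False        # a default shared by every instance

def naive_merge(obj, data):
    # vulnerable pattern: recursively follow attribute names taken from
    # untrusted input, including special names like __class__
    for key, value in data.items():
        if isinstance(value, dict) and hasattr(obj, key):
            naive_merge(getattr(obj, key), value)
        else:
            setattr(obj, key, value)

user = Settings()
payload = json.loads('{"__class__": {"is_admin": true}}')
naive_merge(user, payload)

# a completely different object now sees the polluted default
polluted = Settings().is_admin
```

Just like the JS version, the fix is to reject or specially handle dangerous keys before walking them.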

Wodify Security Advisory - 589

Bishop Fox    Reference → Posted 4 Years Ago
  • The Wodify gym management web application is designed to help gyms grow. It is heavily used among CrossFit boxes, mainly in the US, but also across other continents and countries.
  • The application had three vulnerabilities. The first two are fairly standard and not particularly unique: stored XSS (four instances) and insufficient access controls via an IDOR.
  • The final bug was a bit more interesting though! A specific page exposes the user's hashed password and JWT, but only to that user. At first glance, this does not seem like a terrible security problem, as only the user can see their own data.
  • However, one of the stored XSS vulnerabilities mentioned above could be used to exfiltrate this information. Now this is definitely an issue, and it should be fixed as a defense-in-depth finding.
  • Just because authorization works properly does not mean that an information disclosure is not valid. To me, anything that allows persistent access to an account from a single vulnerability or a single view should be cause for concern. For instance, the ability to change a password without knowing the current password would be an issue. Interesting callout!

You're Doing IoT RNG - 588

Dan Petro - Bishop Fox    Reference → Posted 4 Years Ago
  • Random numbers are very important to security. For instance, they are used for encryption keys, authentication tokens and business logic. Even though random numbers are important, computers are terrible at generating them. By design, computers are deterministic; 1 + 1 should always equal 2.
  • There are two types of random number generators: hardware and software. Hardware generators are the focus of this article/video, and they have two common implementations: analog circuits and clock timings. An analog circuit samples a bit that fluctuates between the two values 0 and 1, which is mostly random. The latter approach derives randomness from the difference between two clock timings.
  • Hardware random number generators have their own issues. With the analog circuit method, you must give the circuit time to move on to the next cycle; otherwise, the same number will be returned twice. With the clock method, it is possible for the clocks to sync up. So, if you're not calling the function too often, you are likely okay.
  • Lots of IoT devices do not run an operating system. As a result, a call is made through a HAL (hardware abstraction layer) to the hardware RNG. This function returns two values: an output variable (the random number) and a return code. It turns out that nobody checks the return code. Ignoring it can result in all zeros being returned or only partial entropy throughout.
  • Instead, use a cryptographically secure pseudorandom number generator (CSPRNG). It never blocks execution, the API never fails, and it draws from many sources. Most operating systems gather hardware randomness, timing, network activity and many other inputs, then use that entropy to seed a CSPRNG. How do we fix this problem? Instead of calling the hardware RNG directly, as most IoT devices do, there needs to be a built-in CSPRNG subsystem.
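On a device with an OS, this is exactly what the standard interfaces already give you. A quick illustration of drawing key material from the OS-seeded CSPRNG instead of a raw hardware source (the use cases in the comments are examples, not from the article):

```python
import secrets

# all of these draw from the OS CSPRNG (e.g. /dev/urandom), which is
# seeded from many entropy sources and has no error code to ignore
aes_key  = secrets.token_bytes(32)   # 256-bit key material
session  = secrets.token_hex(16)     # 32 hex characters
sms_code = secrets.randbelow(10**6)  # e.g. a 6-digit one-time code
```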
  • How can this actually be exploited? It really depends on the device and business logic! One common trigger with hardware random number generators is generating a key for asymmetric encryption, which will likely eat up all of the entropy and cause non-random numbers to be returned.
  • In general, there are two ways for blackbox approaches:
    • View the output of the RNG from the application. For instance, the RSA keys or certs mentioned above.
    • Tax or constantly call the RNG. This will likely cause the RNG to be lower entropy or return 0's.
  • For whitebox approaches, look for return codes not being validated and other common security issues.
  • The authors claim that code interfacing with the hardware RNG is often also vulnerable. For instance, one vendor's documentation explains (on pages 1006 and 1052) how to properly use the RNG output for security-related events: after reading a 32-bit number, the next 32 calls to the RNG had to be thrown out. Otherwise, the numbers would not be properly random.
  • The authors took a look at a few chips to examine their random number generation. When looking at the MediaTek 7687, they noticed that the statistical distribution of the output was nowhere near uniform; some numbers occurred much more frequently than others. The Nordic nRF52840 had a problem where the 0x50th (or slightly later) byte was always 0x0.
  • There are a lot of good tools for doing statistical analysis. For instance, dieharder, number circle and many other tools.
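The gist of those statistical checks can be reproduced in a few lines: compare observed byte frequencies against the uniform expectation. This toy check is far weaker than dieharder, but it immediately flags the gross failure modes above (stuck outputs, all zeros):

```python
import random
from collections import Counter

def max_byte_bias(data: bytes) -> float:
    # ratio of the most common byte's count to the uniform expectation;
    # ~1.0 means roughly uniform, large values mean a stuck or biased output
    counts = Counter(data)
    expected = len(data) / 256
    return max(counts.values()) / expected

healthy = random.randbytes(1 << 16)   # stand-in for a good RNG stream
stuck = bytes(1 << 16)                # the "all zeros" failure mode
```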

Breaking Secure Bootloaders Part 2 - DEFCON 2021 - 587

Christopher Wade    Reference → Posted 4 Years Ago
  • The NXP PN533 is an NFC chip used in mobile phones. These chips use the ARM Cortex-M architecture. The chip communicates over the I2C interface (/dev/nq-nci) and uses a custom protocol for updates. The update mechanism sounds nice, but how do we trigger an update?
  • The Android phone had two firmware files on it. After renaming these files, the firmware updater notices that the version numbers differ from the current one. As a result, the update occurs, which can be snooped via logcat.
  • The firmware update uses a hash chaining process. The write command can target any location in memory that we would like to use, but each write needs to carry a valid hash.
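A minimal model of hash-chained writes (the algorithm and initial value are assumptions, not the chip's actual format): each write must present the running hash of everything accepted so far, so a block only lands if the whole chain checks out.

```python
import hashlib

SEED = hashlib.sha256(b"firmware-update").digest()  # hypothetical initial link

def chain(blocks):
    # producer side: the link hash each write command must present
    links, prev = [], SEED
    for block in blocks:
        links.append(prev)
        prev = hashlib.sha256(prev + block).digest()
    return links

def accept(blocks, links):
    # bootloader side: reject any write whose link hash is wrong
    prev = SEED
    for block, link in zip(blocks, links):
        if link != prev:
            return False
        prev = hashlib.sha256(prev + block).digest()
    return True

blocks = [b"block-0", b"block-1", b"block-2"]
links = chain(blocks)
```

The overflow described below let the author clobber the stored "previous hash" value itself, which is exactly why the chain's security collapsed.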
  • After reversing the file format, the update process and much more, the author wrote a targeted fuzzer for the firmware update and NCI interfaces. From fuzzing, the author discovered vendor-specific NCI commands, one of which was an NCI Config Write command. Although something useful may have been possible here, the author bricked the chip's firmware by corrupting its configuration.
  • While fuzzing, the author noticed that the last block of the firmware update could be written multiple times. This implied that the hash of the previous block was still in memory, global in some sense. Because of this, the author went looking for a potential buffer overflow to corrupt parts of the firmware. When an invalid command with the same size as a firmware update block was sent, the update would fail, implying a buffer overflow in static RAM.
  • What does this mean? The author could craft a modified hash to write to a chosen portion of memory. Because this is a hash chain and its links could be overwritten, the security of the chain was now broken. By performing this overwrite repeatedly, any memory block on the chip could be written. With this overflow, the author could overwrite parts of memory that were in use by the firmware.
  • Now it was time to patch in new features! First, the author changed the NCI version command to read from an arbitrary location in memory and send the data out. The author found that the global pointer pointed to 0x100007, which could be used to dump the bootloader directly.
  • The entire bootloader was dumped using the read commands from above. With this in hand, the author noted that the firmware could be overwritten in arbitrary ways (on the chip) for a persistent backdoor or just extended NFC functionality. The PN5180 had the same exact vulnerability, and it was likely present on all similar chipsets.
  • The reverse engineering and blackbox testing are incredible to see in action. Without access to GDB, very subtle assumptions need to be made for this to work. Even though the vulnerabilities were fairly straightforward, the hard part lies in actually finding them and figuring out how to exploit them in a blackbox setting. Great research!

Breaking Secure Bootloaders Part 1 - DEFCON 2021 - 586

Christopher Wade    Reference → Posted 4 Years Ago
  • Smartphone manufacturers often use signature verification to protect their firmware. In order to get root access, the signature verification mechanism needs to be disabled, which requires contacting the manufacturer to get the phone unlocked. Beyond this, custom tooling is required to unlock the bootloader on the device. If you own it, you should be able to pwn it!
  • On the author's Android device, firmware signatures are verified by the bootloader. When updating the device over USB, most Android bootloaders speak fastboot, a basic USB interface with a myriad of commands for flashing, updating and gathering information. Since most bootloaders are open source and then modified, it is important to analyze the firmware directly with a disassembler.
  • Since custom modifications are a great place to find bugs, the author looked there first. They noticed that the flash command had been modified to allow flashing of specific custom partitions, even when the bootloader was locked. While building a custom fastboot binary, the author accidentally caused a crash with an improper ordering of commands. This appeared to be a buffer overflow in some parsing functionality.
  • But how do you find the cause of a crash without a debuggable setup? You cannot just attach GDB to this! In addition, a hard reset is required to get the phone working again, and there is no way to dump the phone's memory to learn about its current state. So, the author wrote an automated script that would overflow by a SINGLE byte and then check whether a crash occurred. If not, it checked the next byte; if the phone crashed, it tried another value. Although this is not perfect, it is good enough for identification.
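That probing loop is simple to sketch. `probe` below is a hypothetical stand-in for "send a fastboot payload of this length and report whether the phone crashed"; the buffer size in the toy oracle is invented.

```python
def find_crash_length(probe, max_len=4096):
    # grow the payload one byte at a time; the first length that crashes
    # the device marks the edge of the vulnerable buffer
    for length in range(1, max_len + 1):
        if probe(b"A" * length):
            return length
    return None

# toy oracle: pretend the parser's buffer holds 259 bytes, so 260 crashes
crash_len = find_crash_length(lambda payload: len(payload) >= 260)
```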
  • The author viewed the data from the crash and determined that it consisted of opcodes. From there, they searched for similar patterns and values in the disassembled bootloader and found that it was part of the bootloader itself! The buffer overflow was overwriting the bootloader's own code in RAM.
  • The author tested the same vulnerability on a different phone and found the same issue, but with a different number of bytes required until the crash. This implies that the vulnerability is present and the phones simply use different memory layouts. The issue affected the SDM66 chip from Qualcomm.
  • The Qualcomm chip encrypts the userdata partition, which prevents chip-off analysis using an internal security mechanism on the chip. If an unlocked bootloader tries to access the partition, it is identified as corrupted. The keys are inaccessible (even with code execution), and the EFI API that decrypts the partition is not modifiable. The API verifies whether the bootloader is unlocked and whether the firmware is signed before allowing access to the keys. The new goal became bypassing this to decrypt the partition.
  • The author looked at the flow of execution and noticed a large gap between where verification happens and where execution happens; this is a classic bug known as time of check - time of use (TOCTOU). The author had to modify the bootloader in very particular ways to exploit this:
    • Verify with one image, then actually use another, malicious one.
    • Change the boot command to be accessible. Since the bootloader still appears locked, the Android image can access the keys. Game over!
  • This video is really long and covers two different exploits on two different chips, so this entry is part 1 of my analysis.

OTA remote code execution on the DEF CON 27 badge via NFMI - 585

Seth Kintigh    Reference → Posted 4 Years Ago
  • Near Field Magnetic Induction (NFMI) is a short-range physical layer that communicates by coupling a tight, non-propagating magnetic field between devices. It is similar to radio waves but only works over short distances, and it uses two coils to communicate.
  • The DEFCON 27 badges communicated over NFMI. The MCU does the bulk of the work, while the NFMI chip, connected over UART, handles communication with the other badges.
  • For debugging, the badge exposes JTAG, serial and SWD interfaces that allow full control of the device: you can dump the firmware, read registers and more.
  • While reverse engineering the firmware with IDA Pro, the author found a horrible buffer overflow. The code writes bytes into a static buffer until it finds the character E; however, there is no limit on the number of bytes that can be written!
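The bug boils down to a copy loop with a delimiter check but no length check. A Python model of that pattern (the buffer size is made up; Python raises an exception where C would silently scribble past the buffer):

```python
BUF_SIZE = 16   # hypothetical size of the static buffer

def copy_until_delimiter(packet: bytes) -> bytes:
    buf = bytearray(BUF_SIZE)
    i = 0
    for byte in packet:
        if byte == ord("E"):            # stop only on the delimiter...
            break
        buf[i] = byte                   # ...never on the buffer's end;
        i += 1                          # IndexError here models the overflow
    return bytes(buf[:i])

ok = copy_until_delimiter(b"helloE")
```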
  • For a quick PoC, the author connected to the badge over SWD and added a large packet to the transmission ring buffer. Of course, this caused a crash!
  • But how do we deliver custom code over NFMI to exploit this? First, the center frequency is somewhere between 10.579MHz and 10.56MHz. A few scattered sources provided hints, but a bunch of reverse engineering of the signals was still required. The author goes into how the signal works, but those notes are not included here.
  • Eventually, they took to reversing the firmware of the NFMI chip itself. This required SWD access to the NFMI chip, but the traces were in the middle of the board. After finding the reset line, the author scratched off a layer of the board, cut the line to the MCU and soldered a wire onto the line.
  • After connecting over SWD, the author tried a bunch of different configurations until one of them eventually worked. Reversing the firmware revealed code that drops all packets over 11 bytes. So, what happened? The badge had more buggy firmware!
  • When receiving the packet over UART from the NFMI chip, a few weird things are done:
    • The badge copies the data from the UART byte by byte. If it runs out of space, a partial packet is used without the proper delimiter.
    • There is an off-by-one error: the code checks that two bytes are free when only one is being copied. This allows for odd-length packets.
  • By truncating data in just the right way, we can convince the firmware that a packet is MUCH larger than it actually is! Even though a packet cannot be larger than 11 bytes, we can make one look larger than that. Now the buffer overflow we originally saw is exploitable!
  • Since the data being sent is limited to certain characters (because of the encoding), it is possible to crash the badge but not to get code execution. Still, this is super interesting!

Response Smuggling: Pwning HTTP/1.1 Connections - 584

Martin Doyhenard    Reference → Posted 4 Years Ago
  • Requests do not simply travel straight from client to server nowadays. There are proxies in between, redirects and many other things going on. What happens if two different hops understand a request or response differently?
  • This attack was originally used on requests, tricking a service about which request was actually being sent. That can be done by sending multiple Content-Length headers, combining Transfer-Encoding and Content-Length, or anything else that makes two parsers see different requests. This article discusses a new way to cause a desync, via the response pipeline.
  • The Connection header specifies connection information in a request; in particular, it tells how persistent a connection should be. It is a hop-by-hop header, which means it is dropped between proxies.
  • The Connection header can also name other headers that are specific to the connection. These connection-specific headers are then removed from the request when it is forwarded to the next part of the pipeline. What if we named the Content-Length header there instead?
  • With the Content-Length header removed from the request, the body of the original request is interpreted as the start of the next request, while the original request is seen as having an empty body; this is a vulnerability in the RFC itself! Can it be exploited?
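Concretely, the desync request can look like this (host and paths are invented; the Content-Length value must match the embedded request byte-for-byte):

```python
# the front-end sees a 35-byte body; a back-end that strips the now
# hop-by-hop Content-Length sees an empty body followed by a second request
desync = (
    b"POST /anything HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Connection: Content-Length\r\n"   # marks Content-Length as hop-by-hop
    b"Content-Length: 35\r\n"
    b"\r\n"
    b"GET /smuggled HTTP/1.1\r\n"
    b"Host: a\r\n"
    b"\r\n"
)
body = desync.split(b"\r\n\r\n", 1)[1]
```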
  • A few ideas from the original request smuggling:
    • Bypass FrontEnd controls with the new smuggled request.
    • Change the response of a different user with the desynced queue.
    • Web Cache Attacks.
    • Make an existing vulnerability, such as reflected XSS, much more impactful.
  • We can do something even better though! By smuggling two requests within a single request, we can desynchronize the response queue: the response to the attacker's second request goes back to the victim, leaving the victim's response in the queue! By making a final request, we receive the response to the victim's request instead of our own. Damn!
  • There is an issue with getting the correct response back though. If a response arrives but no connection is waiting in the queue, the response is dropped. As a result, the smuggled request should be a time-consuming operation so that its response is sent back to our victim.
  • It is also possible to concatenate responses back to the victim from a user's request. This is done by smuggling in a HEAD request: its response carries a Content-Length header but no body (against the RFC, but very common), so subsequent response bytes are read as that body. Then, if the second smuggled request hits a reflected endpoint, we can send arbitrary data back to the victim.
  • The article contains a few other techniques that work as denial of services as well. Overall, this is amazing research that will help many researchers find bugs in the future!

Timeless Timing Attacks - 583

Tom Van Goethem & Mathy Vanhoef    Reference → Posted 4 Years Ago
  • Timing attacks are used all over the place to implicitly figure out data; they are common against cryptosystems, where they leak information about the key. A timing attack is a specific kind of side-channel attack.
  • On the web, this attack is significantly harder because of network jitter: the higher the jitter, the lower the success rate of the timing attack. By moving closer to the target, sending more requests and a few other tricks, it is possible to statistically analyze the results and figure out the timing of some action. Can this be improved?
  • At this point, absolute response timing is inconsistent because of network jitter. Let's remove this! This can be done by exploiting concurrency to force all of the requests to have the same network jitter on the response.
  • Instead of looking at response times, we only care about response order. To make this possible, requests need to meet the following requirements:
    1. Requests need to arrive at the same time
    2. The server needs to process the requests concurrently.
    3. The response order needs to reflect the difference in execution time.
  • For item #1, there are a few ways to do this. With HTTP/2 or HTTP/3, multiplexing allows multiple requests to be processed at the same time. With HTTP/1.1, we can use network encapsulation via Tor or a VPN to achieve this.
  • Item #2 is application-dependent. For item #3, the ordering SHOULD be preserved, but it may require inspecting TCP ordering fields to validate. Both of these are doable though.
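The core idea can be simulated locally: stop measuring *when* responses arrive and only record *in what order* they finish. Any shared network delay shifts both requests equally and cancels out of the ordering. (The sleeps stand in for differing server-side processing times; this is an illustration, not the paper's tooling.)

```python
import threading
import time

def finish_order(tasks):
    # record only the ORDER in which tasks complete, never their timings
    order, lock = [], threading.Lock()
    def run(name, fn):
        fn()
        with lock:
            order.append(name)
    threads = [threading.Thread(target=run, args=(n, f)) for n, f in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order

# two "concurrent requests" whose server-side work differs by 40ms
order = finish_order([("slow", lambda: time.sleep(0.05)),
                      ("fast", lambda: time.sleep(0.01))])
```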
  • This new technique blows the old way out of the water! The traditional attack depends on the server's location and the number of requests that can be made, reaching a precision of roughly 10 microseconds at best. The new technique allows nanosecond-level measurements, with 5 microseconds of precision within only 50 requests! Damn, this is a game changer.
  • The authors took this new knowledge and applied it in a few places: a cross-site search attack on HackerOne and the EAP-pwd handshake of the WPA3 WiFi protocol. Exploiting timing attacks just became far more practical!

Snapcraft Packages Come with Extra Baggage - 582

Amy Burnett - Ret2    Reference → Posted 4 Years Ago
  • Snapcraft is a newer Ubuntu package management system, similar to apt-get.
  • The bug was initially discovered during a CTF while working on a pwnable challenge. While the author was building a CTF challenge with Docker, Docker segfaulted. Since Docker NEVER segfaults, they explored the issue further. Looking at strace, they noticed that the crash happened while loading a local copy of libc!
  • This bug looked similar to DLL hijacking on Windows machines. That technique exploits the search path used when looking for libraries: if a library is not found in one location, the loader moves on to the next. The idea is that if we control one of those locations for a privileged process, we can get our own code to run within it.
  • The PID of the crashing Docker process was associated with snap. Snap preaches security through containerization, but most applications include the home plug interface, which makes the home directory accessible inside the container. This is why the local libc was loaded!
  • Snap packages require a wrapper to launch the container around the application, so this was likely a case of a bad LD_LIBRARY_PATH environment variable. The path has a small bug in it: ::. Although this does not seem like an issue at first, the empty entry is parsed as the current directory! Damn, that's horrible.
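Why `::` is dangerous in a few lines: splitting a colon-delimited search path yields an empty entry, and the loader historically treats an empty entry as the current working directory. A quick model of that parsing (the example paths are invented):

```python
def resolve_search_path(raw: str):
    # model of the loader's behavior: an empty entry in a colon-separated
    # search path is treated as the current working directory
    return ["." if entry == "" else entry for entry in raw.split(":")]

# the "::" typo silently adds the CWD to the library search path
entries = resolve_search_path("/snap/lib::/usr/lib")
```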
  • This bug allows arbitrary code to be loaded into the bulk of applications wrapped with snap, including Docker, VLC and many others. The application is sandboxed though; is there anything we can do? Can we escape the container?
  • A large number of Snap applications are GUIs, which utilize the x11 plug. This exposes the /tmp/.X11-unix/X0 domain socket to the container, which lets us send the same commands that other windows can, including keystrokes and mouse input. For instance, we can send keystrokes to a terminal in order to pop a shell :)
  • A few takeaways for me:
    • Be observant of strange or unexpected behaviors. There may be a bug lurking close by.
    • Containerized does not necessarily mean secure! Even within a containerized environment, the author was able to escalate privileges some of the time.
    • Any application setting LD_LIBRARY_PATH should be diligent in ensuring it does not introduce sideloading of libraries from unintended (i.e. relative) directories.