Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

This shouldn't have happened: A vulnerability postmortem - 688

Tavis Ormandy - Project Zero (P0)    Reference →Posted 4 Years Ago
  • The author of this post found an extremely straightforward bug that had been around for quite some time! The first part of the article explains the bug; then the author dives into why it wasn't discovered sooner and how we can find these types of bugs in the future.
  • Network Security Services (NSS) is a cryptographic library that is maintained by Mozilla. When you verify an ASN.1 encoded digital signature, NSS will create a VFYContext structure to store the necessary data. This includes things like the public key, the hash algorithm, and the digital signature.
  • In this implementation, the RSA signature has a maximum size of 2048 bytes, which is 16384 bits. What happens if you use something bigger than this? Memory corruption! An attacker controls the data and the size that gets put into a memcpy into a fixed-size buffer.
  • The bug is fairly straightforward: copying data into a fixed-size buffer without any sanity checks. The author then asks, "How did this slip through the cracks?" Mozilla does nightly fuzzing of this library, ASAN could have detected this easily, and lots of people look over the code... so, what went wrong? The author has three points on this.
  • The author actually found the bug through fuzzing, while experimenting with coverage methods other than block coverage. One of these was stack coverage, which monitors the stack during execution to find different paths. The other was object isolation, a way to randomly permute templated data.
  • First, the library is missing end-to-end testing. NSS is a modular library, meaning that each component is fuzzed individually. For instance, QuickDER is tested ONLY by creating and deleting objects, but never actually uses them. Since the buggy code only happened when verifying a signature, the fuzzer would have never caught it.
  • Another issue is that fuzzers typically cap themselves on the size of inputs in order to be faster and get coverage quicker. In the case of this library, the size cap was 10,000 bytes. However, these limits are arbitrary and may lead to missed findings, as lots of vulnerabilities occur at the extremes.
  • All of the NSS fuzzers are represented in combined coverage metrics by oss-fuzz instead of individual coverage. This data was misleading: the vulnerable code appeared to be fuzzed extensively, but only by fuzzers that could not possibly generate a relevant input, since their testing uses hardcoded certificates.
  • Overall, the bug was really simple. The analysis of why such a simple bug lived so long in the code base was fascinating. For a team, the lesson is to get as much real end-to-end coverage as possible and to steer the fuzzer's randomization toward realistic inputs. Consider all of the possible ways a component could be used and ensure that the fuzzer can exercise them.
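The vulnerable pattern and the missing check can be sketched in a few lines. This is a minimal Python illustration, not NSS code: the constant and function names are made up, and the slice assignment stands in for the memcpy into the fixed-size VFYContext buffer.

```python
MAX_RSA_BYTES = 2048  # fixed buffer size; illustrative, mirrors the 2048-byte limit above

def store_signature(ctx: bytearray, sig: bytes) -> None:
    # The sanity check NSS was missing: reject anything larger than the buffer.
    if len(sig) > MAX_RSA_BYTES:
        raise ValueError("signature exceeds %d bytes" % MAX_RSA_BYTES)
    # The memcpy; in C, an oversized sig here would write past the buffer.
    ctx[:len(sig)] = sig

ctx = bytearray(MAX_RSA_BYTES)
store_signature(ctx, b"A" * MAX_RSA_BYTES)        # fits exactly: accepted
try:
    store_signature(ctx, b"A" * (MAX_RSA_BYTES + 1))  # one byte too big
except ValueError as err:
    print("rejected:", err)
```

Without the length check, the attacker-controlled `len(sig)` flows straight into the copy, which is exactly the bug class described above.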

Discovering Full Read SSRF in Jamf - 687

Shubham Shah - AssetNote    Reference →Posted 4 Years Ago
  • Jamf is an application used by system administrators to configure and automate IT tasks. There are cloud (SaaS) and on-premise variations of the product. It is a popular MDM (mobile device management) solution for Apple products.
  • The authors were curious whether they could find any server side request forgery (SSRF). To do this, they looked for HTTP clients used within Jamf. When building software, it is not uncommon to find an HTTP client wrapper that is used by the rest of the code base, which was the case here. Searching for this wrapper turned up a bunch of occurrences across the code base.
  • By going from source to sink, they found all locations where user-controllable URLs were used to make HTTP calls; this led them to 6 usages in the code. One of them was functionality to test viewing an image from a distribution point, which takes a user-controlled URL and then displays the content back to the user.
  • The result of this request is an XML page that has the image base64 encoded. However, the data does not have to be an image; it will base64 encode anything that you request! This gives the attackers the ability to make a request on the internal network and view the result of it.
  • The cloud version of this software is hosted on AWS. SSRF in an application hosted on AWS can lead to the compromise of the instance by making an HTTP call to the metadata service. A simple GET request to an internal IP will return the temporary security credentials of the environment, assuming that a role has been attached to the instance. Using this, it may have been possible to escalate deeper into the account, but the team stopped investigating and reported the bug.
  • Jamf's monitoring noticed this strange behavior, causing alarms to sound. They banned the IP address making the requests and disabled the instance where the exploit had been performed. As a temporary fix, they added a WAF rule to all instances that blocked this type of request from being made.
  • Overall, good article on an impactful SSRF vulnerability and exploitation.
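The oracle described above can be sketched conceptually. This is not Jamf's actual API: the XML shape is paraphrased from the write-up, and `fetch()` is a stand-in for the server-side HTTP client (no network calls are made here). The AWS metadata URL is the standard one for instance role credentials.

```python
import base64

AWS_METADATA_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch(url: str) -> bytes:
    # Stand-in for the server-side request; on AWS, this URL would return
    # the name of the IAM role attached to the instance.
    return b"jamf-instance-role"

def preview_image(url: str) -> str:
    # The server base64-encodes whatever it fetched -- image or not --
    # and reflects it back to the user inside an XML document.
    body = base64.b64encode(fetch(url)).decode()
    return f"<image><data>{body}</data></image>"

xml = preview_image(AWS_METADATA_URL)
# The attacker simply base64-decodes the reflected data to read the response.
inner = xml.split("<data>")[1].split("</data>")[0]
print(base64.b64decode(inner).decode())  # -> jamf-instance-role
```

The key point is that the endpoint never validates that the response is an image, turning an image preview into a full-read SSRF.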

Fall of the machines: Exploiting the Qualcomm NPU (neural processing unit) kernel driver - 686

Man Yue Mo - GitHub Security Lab    Reference →Posted 4 Years Ago
  • The NPU (neural processing unit) is a co-processor on Qualcomm chips that is designed for AI and machine learning tasks. The NPU has a kernel driver that can be interacted with from user space on Samsung devices. Since this is Linux, the code is open source!
  • To interact with the driver, the file /dev/msm_npu is used. This driver has many IOCTL calls, such as allocating/unmapping a DMA buffer, loading/unloading a neural network model, and several other operations. Most of the commands are synchronous, with a few being asynchronous.
  • When loading an NPU model, there is a statically sized global array of contexts representing the different jobs taking place. When calling npu_close, the client pointer is removed from the network structure.
  • Since this information is global, all information associated with old clients needs to be removed. By calling npu_close and an async npu_exec_network at close to the same time, the client is used but the NPU state is never cleaned up! This leads to a use after free on a pointer in the global buffer. By replacing the client object with a fake object, arbitrary kernel functions can be called with control of one parameter.
  • The next bug is very strange; it is as if the code was never tested for functionality. When calling npu_exec_network_v2, stats_buf can be specified to collect some debugging information. But this never worked: instead of passing the stored buffer pointer, the address of the field holding it was passed! &kevt->reserved[0] should have been kevt->reserved[0].
  • The bug above led to leaking the address of stats_buf rather than copying its contents. This allowed the attacker to learn where this buffer was in memory and partially defeat KASLR. What a stupid bug that leads to another step in the chain.
  • The author noticed that an object was never being initialized, and some of its values were not guaranteed to be set either. By itself, this may not lead to any interesting bugs. However, diving further into the code, this object was being copied back out to user memory, making it a good option for an information leak.
  • struct npu_kevent contained a UNION with four potential elements. In C, a union is as large as its largest element. The largest element (uint8_t data[128]) is an auxiliary buffer of size 128. When the copy happens while a smaller UNION field is in use, such as struct msm_npu_event_execute_v2_done exec_v2_done, the rest of the data is never initialized.
  • Now, here is the best part: all of the bytes unused by the active field in the UNION get copied over anyway! This is because the code uses sizeof(struct msm_npu_event), which is the size of the struct with the largest UNION field. So, even though the used parts of the UNION were initialized, the rest of the buffer was not. Damn, this is an awesome bug!
  • To bring this all together, the third vulnerability can be used to defeat KASLR and all other randomness. The second bug can be used to determine the address of stats_buf, which is important for creating a fake object. The first vulnerability can then have a fake object, on the use after free, that calls a function pointer to get code execution.
  • Once code execution was achieved, the author needed to bypass control flow integrity (CFI). The goal was to call __bpf_prog_run32 with a pointer to bytecode that should be executed in the kernel. Since the parameters were not set up properly, they needed to find a function to control the second parameter. Moving a value from parameter 1 to parameter 2 was easy because of the large number of small wrapper functions in Linux.
  • Overall, these were difficult-to-spot bugs, found through a mix of careful review and accident by somebody intentionally hunting for them. For me, if I see a UNION or global variables being shared, I'll make sure to check out this flow. Great article!
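The union sizing behavior behind the info leak can be demonstrated with ctypes. The field names below loosely follow the write-up's struct npu_kevent; the exact layout of the small member is illustrative, since the point is the sizes, not the fields.

```python
import ctypes

class ExecV2Done(ctypes.Structure):
    # A small member of the union (illustrative fields, 8 bytes total).
    _fields_ = [("status", ctypes.c_uint32), ("flags", ctypes.c_uint32)]

class KeventUnion(ctypes.Union):
    _fields_ = [
        ("exec_v2_done", ExecV2Done),
        ("data", ctypes.c_uint8 * 128),  # the largest member dictates the size
    ]

# The union is as big as its largest member...
print(ctypes.sizeof(KeventUnion))   # -> 128

# ...so a copy of sizeof(union) bytes moves all 128 bytes to user space,
# even when only the 8-byte exec_v2_done member was ever initialized.
# The remaining bytes are whatever was left in (kernel) memory.
leak = ctypes.sizeof(KeventUnion) - ctypes.sizeof(ExecV2Done)
print(leak)                         # -> 120 potentially uninitialized bytes
```

This is why copying `sizeof(struct msm_npu_event)` bytes leaks memory whenever the active union member is smaller than 128 bytes.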

Linux: UAF read: SO_PEERCRED and SO_PEERGROUPS race with listen() (and connect()) - 685

Jann Horn    Reference →Posted 4 Years Ago
  • In Linux programming, sockets are how network connections are made and data is sent. Finding vulnerabilities in the network stack can be catastrophic, since they can potentially be triggered remotely with no user interaction.
  • When sock_getsockopt handles the option SO_PEERCRED, no lock is held while copying the data to userspace. Why is this bad? Because of the missing lock, the credentials object could be deleted while still being read: a use after free.
  • This race can be triggered by calls that update sk->sk_peer_cred, because the creds are replaced and then freed. If another process/thread is accessing the structure at that moment, a use after free can occur.
  • The proof of concept reads the peer credentials of a listening socket over and over again in one thread. Then, in another thread, it destroys the peer credentials object. If this is run with ASAN, a use-after-free crash occurs.
  • This vulnerability could only be used for information disclosure, not directly for privilege escalation; no useful writes could occur. Overall, this is a straightforward missing lock on access to a variable that leads to a real bad bug.

Full key extraction of NVIDIA TSEC - 684

plutooo    Reference →Posted 4 Years Ago
  • In 2018, the Nintendo Switch's security was in a bad place. The bootrom was vulnerable to an easy to exploit buffer overflow in the USB stack. Because of this, control flow could be hijacked and the DRM checks completely bypassed; since this was in the bootrom, the security of the Switch was completely compromised.
  • How does one fix this? The AES root keys were stolen, meaning that all previous consoles were going to be compromised forever. The T210 chip (main CPU) has a security processor that was not in use at the time. By using this chip, Nintendo fixed their secure boot and added new key material!
  • A CMOS transistor has an activation voltage of 0.6-0.7V. When the chip does not have the proper voltage, the transistors act in very funny ways. The main CPU communicates with the PMIC (power management chip) to set the voltage via i2c.
  • When dropping the voltage below a certain point, the CPU starts to act in strange ways. The USB bootrom can be used to compromise the main CPU. Using this, the messages can be sent over i2c in order to set the voltage.
  • This is the perfect setup for a differential fault attack (DFA), which involves causing glitches at exactly the right time in order to leak data from the system. In this case, AES-128 has 10 rounds. The idea with DFA is to ignore the first 8-9 rounds and only focus on the last 2. If you can get 1-2 bit flips in the last two rounds, you can solve for the key, which is pretty awesome!

Extracting BitLocker keys from a TPM - 683

Pulse Security    Reference →Posted 4 Years Ago
  • Bitlocker is a Full Disk Encryption (FDE) solution offered by Microsoft for the Windows Operating system. There are many different configurations for how this is setup practically.
  • One of the setups is having the Volume Master Key (VMK) within the Trusted Platform Module (TPM). When the decryption process needs to take place, the key is sent from the TPM to the computer. The problem with this is that if you can sniff the TPM bus, you can capture the data, including the key, and eventually decrypt the entire drive.
  • The first part of the article is dedicated to finding the TPM (and the pins on it) on a Lenovo Thinkpad. After doing some digging, they figured out that this was using the LPC protocol in order to send data, which requires 6 pins.
  • Even though the pins could be directly connected to via direct soldering, they ended up finding a series of pads to connect to. From doing continuity tests, they figured out what each pad did, eventually getting a clean connection!
  • To originally observe the traffic, they used an MSO 19.2 logic analyzer. This worked for basic analysis, but its sample rate was not fast enough. So, they set up an FPGA with the LPC Sniffer tool to do the sniffing instead.
  • At this point, the bus could be sniffed to see all data coming and leaving the TPM. Using this, the VMK header can be identified, with the encryption key being viewable directly afterwards. At this point, it is real easy to view the key!
  • They had a consistency problem with the output, so they ran the capture a few times and compared the results. By doing this, they were able to easily figure out the content. Damn, the key had been taken the moment it was sent out of the TPM.
  • After this, it was possible to decrypt the disk via the dislocker utility, which allows you to specify the VMK directly without rebuilding the FVEK (the Full Volume Encryption Key). Now, Bitlocker was completely circumvented simply because data was being sent out of the TPM in the clear. The original article comes from Pulse Security.
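The key-recovery step over the sniffed stream amounts to a pattern search: find the VMK header in the captured bytes and take what follows. A sketch of that idea, where MARKER is a placeholder rather than the real VMK header bytes from the write-up, and the capture is synthetic:

```python
MARKER = bytes.fromhex("deadbeef")  # hypothetical header preceding the key
KEY_LEN = 32                        # a 256-bit VMK

def extract_key(stream):
    # Locate the marker in the sniffed byte stream; the key follows it.
    i = stream.find(MARKER)
    if i < 0:
        return None
    start = i + len(MARKER)
    return stream[start:start + KEY_LEN]

# Synthetic capture: bus noise, then the marker, then the key, then more noise.
capture = b"\x00" * 40 + MARKER + bytes(range(KEY_LEN)) + b"\xff" * 40
key = extract_key(capture)
print(key.hex())
```

In practice, as the article notes, several captures were compared to rule out sampling glitches before trusting the extracted bytes.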

Sigint for the rest of us - 682

Matt Blaze - WiFi Village DEFCON 25    Reference →Posted 4 Years Ago
  • Matt Blaze was given a grant at the University of Pennsylvania to look at the security of various wireless networking solutions. The goal was to improve the two-way public safety radio standard known as APCO 25. This is the standard for digital radio used by the DoD, police departments and many others. It has some encryption primitives, but they were not great at the time.
  • These radios are made by several vendors, though at the time Motorola was the only vendor offering the encryption add-on. These could be dropped in for conventional or trunked radio systems. State and local agencies tend to use trunked systems, while federal agencies use more conventional ones.
  • The P25 Voice protocol looks as follows:
    • 9600 bits per second, with 2 bits carried per symbol.
    • 12.5KHz bandwidth, to co-exist with existing analog FM radio.
    • IMBE vocoder. Does a good job at encoding voices digitally! The packets hold 180ms of audio plus some metadata.
    • All transmissions follow a one-way model with no ACKs or sessions. This makes security complicated because handshakes are no longer possible in this system.
  • Since this is a one-way protocol, the entire system uses only symmetric encryption: AES, DES and several proprietary variations of RC4. The keys used for decryption must be loaded onto the radios in advance. Additionally, there is over the air rekeying (OTAR) to update the keys on a radio; these keys do expire.
  • The radios err on the side of demodulation: if a radio has encryption enabled but the sender is not encrypting the data, the voice is demodulated anyway. There is a button to turn secure mode on and off, as well.
  • The first issue is that the voice traffic is NOT authenticated whatsoever. This means that encrypted traffic can be replayed at will. Even though we do not know the exact content of the message, this could be used to spoof the sending user. If nonces (numbers used once) were included, this attack would not be possible.
  • The next attack allows for the discovery of all radios in the area. When using the radio, a Unit ID, TalkGroup ID and NAC are sent with every transmission. The Unit ID is supposed to be encrypted, but on the ping message, which can check the encryption ID, the Unit ID is sent back in the clear. Combined with the replay issue mentioned above, this helps to discover all idle radios in the area. The author calls this the Marauder's Map of police cars and such.
  • Another interesting attack was the encryption DoS attack. By selectively jamming a specific part of the frame (the 64-bit NID), the entire rest of the frame would be thrown out. For 864 symbols of data, only 32 symbols needed to be jammed, which requires substantially less energy than a jamming attack normally would. Using this, if somebody was using the radios in encrypted mode, they would get frustrated and switch back to unencrypted mode, allowing you to hear the traffic in the clear. Kevin Mitnick used to do this back in the day!
  • Even though over the air rekeying was added, there is a major problem with it. When a rekey is attempted, any radio that is NOT listening does NOT get the key, and while in the field there is no way to rekey a radio, making it completely useless. Practically, this means that NOBODY will be using the encryption.
  • To make matters worse, the authors noticed that many of the stations using P25 sent data in the clear, with no encryption! "The first rule of cryptanalysis: look for plaintext." They set up a large number of radios to sit on the federal P25 spectrum. By looking at the encrypted lines, they could find the bands sending sensitive information. It turned out that some of the radios did not have encryption on, leaking tactical information, the identities of confidential informants and much more. The only agency they never saw data from was the postal inspectors (lolz).
  • The mitigations proposed for P25 are not super helpful. Instead of giving concrete fixes, the author simply says that the protocol needs to be completely rewritten in order to be secure. Currently, the usability is poor, making it easy to accidentally send data in the clear.
  • Overall, the talk sheds light on issues with the P25 protocol used in lots of radios today. There is a false sense of security in the radios because of the fail-open behavior and usability problems that make it trivial to accidentally send data in cleartext. The talk has many great stories of interactions with law enforcement and practical things that happened during the research.
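The selective-jamming numbers above work out to a large energy advantage; a back-of-the-envelope version of the math (frame and NID sizes from the talk, the arithmetic is just an illustration):

```python
FRAME_SYMBOLS = 864   # symbols in a voice frame, per the talk
NID_BITS = 64         # the Network ID field the jammer targets
BITS_PER_SYMBOL = 2   # P25's modulation carries 2 bits per symbol

nid_symbols = NID_BITS // BITS_PER_SYMBOL   # symbols that must be corrupted
duty_cycle = nid_symbols / FRAME_SYMBOLS    # fraction of the frame to jam
print(nid_symbols)                          # -> 32
print(round(1 / duty_cycle))                # -> 27x less energy than jamming it all
```

Jamming 32 of 864 symbols means transmitting for under 4% of the frame, which is why such a low-power attacker can deny service.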

Blog posts atom feed of a store with password protection can be accessed by anyone - 679

Xenx - HackerOne    Reference →Posted 4 Years Ago
  • Shopify is a complete commerce platform that lets you start, grow, and manage a business. It has a protected atom feed, which is similar to RSS. These can be protected by a password.
  • The atom feed was password protected, but by finding the preview version of it, the title and some of the content could be seen. Viewing this resulted in a leak of private information.
  • Overall, this was a really simple bug: just a lack of auth on an endpoint. Sometimes, accessing data in unexpected ways reveals access control problems. This reminds me of a Dropbox vulnerability from 2020.