Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Defeating Spread Spectrum Communication with SDR- 666

Michael Ossmann - ToorCon 2013    Reference →Posted 4 Years Ago
  • Spread spectrum is the use of more bandwidth on a radio transmission than is strictly necessary. In the land of radios, bandwidth is the width (in Hz) of the range of frequencies that make up the signal. Some AM protocols use a narrow band to send their information, on the order of 10kHz.
  • Bluetooth uses 100MHz, but only 1MHz at a time. Although this seems like a waste, hopping around makes it less susceptible to interference. Additionally, multiple Bluetooth connections can run at once without interfering with each other. Spending the extra spectrum for these benefits is what is known as spread spectrum.
  • There are two main flavors of spread spectrum: Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). Are these secure? This talk tackles the security of FHSS and DSSS, as claims about their security have been made for over a century.
  • FHSS changes frequencies constantly (like Bluetooth). The first question: can we receive or view data sent with FHSS? According to the speaker, this can be done with any SDR as long as we know the hopping pattern. An example of this is the Ubertooth, which is used to hack Bluetooth and BLE. An additional implementation of this was done at ShmooCon in 2011, with code on Github. This can be used to jam or listen in on signals.
  • An attack scenario: SDRs can operate on many channels at once. By receiving or transmitting on all of the channels at once, we do not even need to know the hopping sequence! We can simply listen, then compute later if we want to steal the information. Additionally, if we want to transmit, we send the data on all of the frequencies, and one of them will work. Damn, that seems obvious but is quite clever!
  • When digitizing an analog wave, we represent the data with 0s and 1s of a finite size and a finite number of samples. When multiple bands end up sitting on top of each other, this is called aliasing, which causes interference. Most of the time, we avoid this by filtering out everything except the frequencies we want.
  • Most of the time, aliasing is bad. However, with FHSS, we know that the target is only using ONE frequency at a time. As a result, we can turn off the anti-aliasing filter to intentionally make the data across multiple frequencies overlap. The author calls this intentional aliasing. In the talk, this is used to pull data from multiple Bluetooth channels at once, which is really awesome! On the HackRF, this required turning off the standard anti-aliasing and adding a bandpass filter to the antenna.
  • But there is still a problem... the channels are overlapped with each other. The author used a trick to make the data somewhat offset: by using a sample rate that is not an integer multiple of the Bluetooth channel bandwidth, the frequencies end up slightly offset, even though they are layered on top of each other. Wow, another amazing trick!
  • DSSS artificially inflates the number of bits being sent for redundancy. The term chip is used to represent a transmitted value: a 1 is turned into, say, 12 chips to make it easier for the device to decode down the road, and a 0 is the inverse of the chip sequence for a 1. By using a correlation technique with the chip sequence, we can find the actual data being sent by looking for spikes where the expected signal lines up.
  • This modulation technique is used in 802.11b/g, 802.15.4/Zigbee, satellite communication and GPS. When looking at a waterfall graph, the pattern is quite distinct! It is a uniform, repeating pattern across a large bandwidth. How is its security?
  • DSSS is not vulnerable to narrowband jamming. However, it is vulnerable to wideband jamming. Additionally, if we know the chipping code, we can send random codes to cause corruption. If a signal is really low power, can we even find it? You can always detect signals!
  • Directional antennas make it easier to find a signal, but that is a given. More interestingly, by multiplying the signal by itself, the signal becomes really easy to see. Better yet, the chip rate is obvious in that visualization, showing up as three spikes (the two ends of the chip rate and the middle). An autocorrelation feature in GNU Radio can be used for this as well.
  • I am still a little confused about how DSSS works. To me, bandwidth is the size of the span between two frequencies. Unintuitively, increasing the data rate (the number of changes in the signal per second) artificially increases the bandwidth. This concept is the basis of the protocol, but it does not make intuitive sense to me.
  • At the end of the presentation, the author reverses the packet structure of a device they have. By looking at a data sheet, they figured out the beginning of a packet, the number of chips per bit and much more. By the end, the author could decode the information from the signal!
  • At the end of the talk, the author claims that spread spectrum is not a security feature, even though many products still claim it is. Sounds like an interesting thing to attack and go after!
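  • To make the chipping and correlation idea concrete, here is a minimal sketch in C. The 11-chip Barker code and the +/-1 encoding are my own illustrative choices, not details from the talk:

```c
#include <assert.h>

/* Minimal DSSS sketch: each data bit is spread into CHIPS chips by
 * multiplying a pseudo-noise (PN) sequence.  The receiver recovers
 * the bit by correlating the received chips against the same PN
 * sequence and looking at the sign of the resulting spike. */
#define CHIPS 11
static const int pn[CHIPS] = {1,-1,1,1,-1,1,1,1,-1,-1,-1}; /* Barker-11 */

/* Spread one bit (0 or 1) into CHIPS chips encoded as +1/-1. */
void dsss_spread(int bit, int out[CHIPS]) {
    int symbol = bit ? 1 : -1;
    for (int i = 0; i < CHIPS; i++)
        out[i] = symbol * pn[i];
}

/* Despread: correlate received chips with the PN sequence.  A large
 * positive sum means bit 1, a large negative sum means bit 0. */
int dsss_despread(const int chips[CHIPS]) {
    int corr = 0;
    for (int i = 0; i < CHIPS; i++)
        corr += chips[i] * pn[i];
    return corr > 0;  /* decide by the sign of the correlation spike */
}
```

  • Because the receiver decides by the sign of the correlation sum, a few flipped chips still decode correctly, which is exactly the redundancy DSSS buys.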

This bug doesn’t exist on x86: Exploiting an ARM-only race condition- 665

Stong - Perfect Blue    Reference →Posted 4 Years Ago
  • The CTF challenge runs on a Raspberry Pi. The binary is a multithreaded C++ application for taking the length of a string. You can view or delete the results of this operation as well. The two main data structures are the request struct, with create+delete operations, and a thread-safe ring buffer/queue that is written and read for the strlen operation.
  • How is it thread safe? Instead of using locks, while loops are used to ensure that the queue is used properly. For instance, if the ring buffer is full, the producer spins in a loop until the ring buffer is no longer full. The same type of pausing is used on the consumer end when the queue is empty, by checking whether the head and the tail are the same.
  • CPUs are insane with all of the operations that they do. CPUs will commonly rearrange instructions, as long as the operations don't affect each other, in order to perform actions in parallel. On x86, the memory ordering is strict enough that nothing bad happens here. On ARM, this is not the case.
  • Memory loads and stores can be reordered, assuming that values down the line are not affected. In the example shown below, thread 1 sets value and then ready. However, since these stores could get reordered, the read at (4) may happen before the initialization at (1) has even happened! Wow, that's wild.
    volatile int value;
    volatile int ready;
    
    // Thread 1
    value = 123; // (1)
    ready = 1; // (2)
    
    // Thread 2
    while (!ready); // (3)
    print(value); // (4)
    
  • Now, back to the program! In the ring buffer, the consumer waits in a while loop for the head pointer to NOT equal the tail. On the other side, the producer writes the entry into the buffer and then sets the head to the next element. The order seems fine, so what's the problem? CPU reordering!
  • If the CPU reorders the write of the head pointer to happen PRIOR to writing the pointer into the ring buffer, then our consumer thread can break out of the loop to consume the value. Since the new pointer was never written into the queue (thanks to the reordering), we get a stale pointer from the previous usage of that slot in the ring buffer. There is a really good visual for how this works in the article as well.
  • This attack requires around 20K requests to work. To detect a win, an identifier on the string gets updated, making it obvious when something has been corrupted. At this point, we have two references pointing at the same object!
  • The binary is compiled with partial RELRO (corruptible GOT entries) and no PIE, meaning the GOT is a good target for overwriting a function pointer. Overwriting an entry such as free to point to system is a wonderful way to exploit this.
  • To get an information leak, we can simply free one of the pointers we have access to and read the heap metadata left in the chunk. To turn this into an arbitrary read, a pointer in the chunk can simply be overwritten to point at an arbitrary location. Simple enough!
  • To turn this into a write primitive, we can abuse the second free! By overwriting the 'fd' pointer of the freed chunk, we can trivially point it at any location and get a chunk there back out later. glibc malloc does have quite a few mitigations, though.
  • The double-free protection can be bypassed by putting the chunk into the fastbin first, as the double-free protections between the bins are not synced up. I love this trick! Overall, great write up! The bug was wild and I loved seeing into the mind of the exploit developer. Good work!
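  • As a sketch of the standard fix for the flag example above: C11 atomics with release/acquire ordering pin down exactly the ordering the original code silently relied on (this is my illustration of the general technique, not the patch from the writeup):

```c
#include <assert.h>
#include <stdatomic.h>

/* The release store on `ready` guarantees the store to `value` is
 * visible before `ready` becomes 1; the matching acquire load on the
 * consumer side guarantees the read of `value` happens after it sees
 * ready == 1.  Run single-threaded here just to show the API; the
 * ordering guarantee is what matters when the two halves run on
 * different ARM cores. */
static int value;
static atomic_int ready;

void producer(void) {
    value = 123;                                            /* (1) */
    atomic_store_explicit(&ready, 1, memory_order_release); /* (2) */
}

int consumer(void) {
    while (!atomic_load_explicit(&ready, memory_order_acquire)) /* (3) */
        ;
    return value;                                           /* (4) */
}
```

  • On ARM these orderings force the compiler and CPU to emit the barriers that plain volatile accesses never guaranteed; on x86 they are nearly free, which is why the bug "doesn't exist" there.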

Type confusion due to race condition during tag type change in Android NFC- 664

Ned Williamson - Google    Reference →Posted 4 Years Ago
  • Android's NFC stack uses TCBs (Task Control Blocks). These are used to keep track of the incoming tasks from the NFC controller.
  • Each TCB has an auxiliary buffer for its timer data. This appears to set a time limit on an action, with a callback firing if the functionality times out.
  • Asynchronous actions are notoriously hard to secure! While fuzzing this functionality, the author discovered that when cancelling one of the TCB tasks, the timer was never cancelled. As a result, the callback could use information from the removed task.
  • The author demonstrates the vulnerability by transitioning between two different tag types. Using the race condition above, it is possible to swap in your own data during the transition, which results in a type confusion.
  • The attached proof of concept leads to a segfault, but no mention is made of the exact reason. Considering the type confusion, this looks like a fairly exploitable bug! You may need a memory leak to do anything meaningful with it, though.
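  • Here is a toy C model of the bug pattern (the struct and function names are mine, not the real Android NFC code): cancelling the task without disarming its timer leaves a window where the expiry callback reaches freed state.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a task (TCB) owns a timer whose expiry callback reads
 * task state.  The `freed` flag stands in for the memory actually
 * being released. */
typedef struct {
    bool timer_armed;
    bool freed;
} tcb_t;

/* Buggy teardown: frees the TCB but forgets to disarm its timer. */
void cancel_task_buggy(tcb_t *t) { t->freed = true; }

/* Fixed teardown: disarm the timer first, then free. */
void cancel_task_fixed(tcb_t *t) { t->timer_armed = false; t->freed = true; }

/* Returns true if a later timer expiry would dereference freed TCB
 * state, i.e. the use-after-free window exists. */
bool timer_fires_on_freed(const tcb_t *t) {
    return t->timer_armed && t->freed;
}
```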

Race condition leads to Inflation of coins on Reddit- 663

yashrs - HackerOne    Reference →Posted 4 Years Ago
  • Reddit has coins that can be purchased. These coins can be used to give awards and do other things on Reddit. The API used depends on the application, since some purchases go through Paypal, some through the Apple App Store and some through the Google Play Store.
  • When calling the verify_purchase endpoint (which contains information from the payment in Google), there exists a Time of Check vs. Time of Use (TOCTOU) vulnerability. Verification is being done; however, by making the same request several times concurrently, the coins get added multiple times.
  • In the report, the developers at Reddit mention that they guard against this type of issue with a DB lock. But the bug appears to be in the memcache lock admitting multiple entries because of the concurrent requests. Actually verifying a fix through testing is important, as complicated ecosystems produce unexpected outcomes.
  • Overall, a great and impactful bug in the Reddit coin handling. Damn, race conditions are so fun!
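  • The usual fix for this class of bug is to make crediting idempotent: record the receipt id before adding coins, so replays of the same receipt become no-ops. A minimal single-threaded C sketch (the names and fixed-size table are mine; a real fix also needs a DB unique constraint or lock to make the check-and-insert atomic across concurrent requests):

```c
#include <assert.h>
#include <string.h>

/* Idempotent crediting sketch: every receipt id is remembered, and a
 * replayed receipt credits nothing. */
#define MAX_RECEIPTS 64
static char seen[MAX_RECEIPTS][32];
static int  seen_count;
static int  balance;

/* Returns 1 if the receipt was new and coins were credited, 0 on replay. */
int credit_once(const char *receipt_id, int coins) {
    for (int i = 0; i < seen_count; i++)
        if (strcmp(seen[i], receipt_id) == 0)
            return 0;                  /* replay: already credited */
    strncpy(seen[seen_count], receipt_id, 31);
    seen[seen_count][31] = '\0';
    seen_count++;                      /* mark BEFORE crediting */
    balance += coins;
    return 1;
}

int get_balance(void) { return balance; }
```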

Stored XSS in Mermaid when viewing Markdown files- 662

SaleeMrAshid - HackerOne    Reference →Posted 4 Years Ago
  • Gitlab does some crazy shenanigans in their Markdown engine. One of these additions is the ability to inline Mermaid, a chart renderer, in Markdown.
  • Mermaid supports HTML labels when using flowcharts. However, this is only possible with specific configurations that Gitlab does not use. Namely, the securityLevel configuration cannot be strict. If we could get HTML into these labels, we could likely take this to XSS.
  • Mermaid supports directives, which can change the configuration from inside a chart. For obvious security reasons, several options cannot be changed this way: secure and securityLevel are the two important ones to note here. By passing in flowchart.htmlLabels as the string "false" (not the boolean), we can get it through the filter, since the value is later evaluated for existence instead of as a boolean.
  • Since flowchart.htmlLabels is set to some value, the variable controlling it ends up true. With this, the labels will now render the HTML directly, resulting in HTML injection. But what about JavaScript?
  • The page has a fairly strict CSP. Because the page uses nonces for inline scripts, injecting script directly is not possible. To bypass this, the author serves the payload through Workhorse (which serves pipeline artifacts) with an auto-detected Content-Type. Since the JS is now on the Gitlab domain, the browser believes that this JavaScript code is coming from the same domain as the page, which satisfies the CSP.
  • With the JavaScript code on the Gitlab domain, we can insert whatever we want directly into the DOM. innerHTML does not execute <script> tags, so instead we pass the script into an iframe's srcdoc to get XSS on the page.
  • The triage discussion for this is super interesting. The author makes a few notes on how Gitlab should remediate this. First, they mention that Gitlab should add the flowchart.htmlLabels directive to the denylist, which would prevent this attack. Second, they should not allow potentially malicious Content-Types from Workhorse. Finally, they mention that htmlLabels should not be possible anyway.
  • The bug finder mentions that a lot of the security-related code in Mermaid is quite broken. For instance, the anti-script settings should block all script execution, but the author quickly found multiple ways around them, not even counting the bug mentioned above. In reality, the project could use an upgrade in code quality.
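  • The core mistake, a value being checked for existence rather than parsed as a boolean, translates to any language. A loose C analogy of that logic (the function names and exact filter behavior are my own illustration, not Mermaid's code):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Buggy filter: only rejects the directive when the dangerous value
 * is literally "true", so the string "false" sails through. */
bool directive_blocked_buggy(const char *key, const char *value) {
    return strcmp(key, "flowchart.htmlLabels") == 0
        && strcmp(value, "true") == 0;
}

/* Downstream check: any present (non-NULL) value counts as enabled,
 * an existence check rather than a boolean parse. */
bool html_labels_enabled(const char *value) {
    return value != NULL;
}

/* Fixed downstream check: actually parse the boolean. */
bool html_labels_enabled_fixed(const char *value) {
    return value != NULL && strcmp(value, "true") == 0;
}
```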

Blackswan - 7 Microsoft 0 Days- 661

Erik Egsgard - Field Effect    Reference →Posted 4 Years Ago
  • When sending I/O control requests on sockets, the request codes are verified to ensure that internal functions cannot be reached. However, the TdxIssueIoControlRequest function accepts codes without doing this validation. This is labeled as the first vulnerability.
  • With the ability to call internal functions unexpectedly, many other bugs fell out of this. From this research, 4 exploitation paths surfaced: an arbitrary increment, an arbitrary read/write via getting access to a pointer, a TOCTOU on a buffer and an infoleak.
  • The other two bugs are TOCTOU bugs. Windows IOCTLs have three different transfer modes: buffered (the user buffer is copied into the kernel), direct I/O (the buffer is mapped to a kernel address) and neither (the kernel operates directly on a shared user mapping).
  • The two other TOCTOU bugs were in the neither category. Because a user can easily write to this type of memory at any time, validation is hard to do, and it leads to many TOCTOU bugs.
  • To me, there are 3 bugs: the 2 TOCTOU bugs and the code-validation bypass. Simply fixing the validation bypass also makes those bugs completely unexploitable. But I suppose the more CVEs the better!
  • The article has a bunch of background on Windows OS internals, which was good to see. Overall, a good article with unique and hard-to-find bugs.
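  • The classic shape of a "neither"-mode bug is the double fetch: the kernel reads a value from user memory once to validate it and again to use it. A C sketch with the race simulated by a callback standing in for the attacker thread (all names are mine; the real bugs live in Windows drivers):

```c
#include <assert.h>
#include <string.h>

/* The request lives in "user" memory the caller can mutate at any
 * time.  The attacker callback fires inside the check/use window. */
typedef struct { unsigned len; char data[64]; } user_req_t;

#define KBUF 16

/* Buggy handler: fetches req->len once for the check and again for
 * the copy, so the check can go stale.  Returns bytes copied. */
int handle_buggy(user_req_t *req, char *kbuf,
                 void (*race)(user_req_t *)) {
    if (req->len > KBUF) return -1;     /* check: first fetch  */
    if (race) race(req);                /* attacker wins here  */
    memcpy(kbuf, req->data, req->len);  /* use: second fetch   */
    return (int)req->len;
}

/* Fixed handler: capture the length once into kernel-owned storage. */
int handle_fixed(user_req_t *req, char *kbuf) {
    unsigned len = req->len;            /* single fetch */
    if (len > KBUF) return -1;
    memcpy(kbuf, req->data, len);
    return (int)len;
}

/* Simulated attacker thread: enlarges len inside the race window. */
void attacker_grow(user_req_t *req) { req->len = 64; }
```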

Squirrel Sandbox Escape allows Code Execution in Games and Cloud Services- 660

SIMON SCANNELL & NIKLAS BREITFELD - Sonar Source     Reference →Posted 4 Years Ago
  • Squirrel is an interpreted language used by video games and cloud services to allow for custom programming. In CS:GO it is used to enable custom game modes and maps. This language runs in a sandbox in order to prevent the exploitation of the machines hosting games.
  • The main Squirrel implementation is written in C. As a result, a series of memory corruption vulnerabilities could be used to break out of the sandbox. It is also an object-oriented programming (OOP) language and looks similar to PHP.
  • When creating a class, there are two dynamic arrays: default values and methods. Additionally, a _members field maps the name of each attribute to its index in one of these arrays.
  • To know which array an index points into, a bitflag within the index in the _members field is used. This bitflag is 0x02000000. Holding a bitflag inside a live value is similar to the size field of chunks in glibc malloc. Is the bitflag used securely?
  • Since the bitflag is at 0x02000000, could we create a class definition with 0x02000000 methods or variables? If we add 0x02000000 methods, then try to access one as a variable, the program immediately crashes! We have got a type confusion vulnerability.
  • Here's an example flow:
    1. Create 0x02000005 methods and 1 field.
    2. The attacker accesses the method with the corresponding index 0x02000005.
    3. The _isfield() macro returns true for this index, as the bitflag 0x02000000 is set.
    4. The _defaultvalues array is accessed with index 0x5. However, it only contains 0x1 entries, so the attacker has an out-of-bounds access.
  • Using the type confusion vulnerability, we can use the value accessor to read and write values IF we can create a proper fake object (lots of misdirection).
  • A good use of this misdirection was setting _value so that an array type is retrieved. Via the OOB access, we could control the base address and the number of entries in the array. By reading or writing through this array, we have a beautiful arbitrary read/arbitrary write primitive.
  • Mixing real values and metadata bits can be very dangerous. In this case, the lack of validation for overflowing into the flag allowed for a bad type confusion, eventually leading to code execution.
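  • The flow above boils down to two macros. A simplified C sketch of the index scheme (the layout is simplified; the macro names mirror the writeup): once a class holds 0x02000000+ methods, a legitimate method index also has the field bit set, so the much smaller values array is indexed out of bounds.

```c
#include <assert.h>

/* Bit 0x02000000 in the stored member index marks which array the
 * member lives in; the low bits are the slot within that array. */
#define MEMBER_TYPE_FIELD 0x02000000u
#define _isfield(idx)    (((idx) & MEMBER_TYPE_FIELD) != 0)
#define _member_idx(idx) ((idx) & ~MEMBER_TYPE_FIELD)
```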

nt!ObpCreateSymbolicLinkName Race Condition Write-Beyond-Boundary- 659

WALIEDASSAR    Reference →Posted 4 Years Ago
  • In the Windows operating system, you can create symbolic links using kernel syscalls. Once the object has been created, a handle is passed back to the user to use. Symlinks can also be deleted.
  • When a symlink is being created, the valid handle is created quite early in the process. Why is this a problem? An attacker can predict this handle value and access it from another thread! Because no lock is held (I mean, the object is not even finished being created), the rest of the creation process can operate on an unexpectedly changed symbolic link.
  • In the proof of concept, the author has one thread continually closing (removing) symlink handles and the other one creating them. Eventually, the race is won, resulting in a crash in the symbolic link creation handler.
  • I had never considered this before! In the future, I will remember the creation process as an interesting place to validate that locking is done properly.
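  • The lesson generalizes to any object lifecycle: never publish a handle before the object behind it is complete. A toy C model of the two orderings (names and fields are mine, not the NT kernel's), with the buggy create split into its two steps so the race window is visible:

```c
#include <assert.h>
#include <stdbool.h>

/* A "handle table" slot is what other threads can reach. */
typedef struct { bool initialized; } symlink_t;
static symlink_t *handle_table[1];

/* Buggy ordering: the handle becomes reachable in step 1, while the
 * object is only completed in step 2.  Any thread scheduled between
 * the two steps sees a half-built symlink. */
void buggy_step1_publish(symlink_t *s) { handle_table[0] = s; }
void buggy_step2_init(symlink_t *s)    { s->initialized = true; }

/* Fixed ordering: initialize fully, then publish. */
void fixed_create(symlink_t *s) {
    s->initialized = true;
    handle_table[0] = s;
}
```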

BYPASSING LOCKS IN ADOBE READER- 658

Mark Vincent Yason - Zero Day Initiative (ZDI)     Reference →Posted 4 Years Ago
  • The author was running a fuzzer on Adobe Reader, with both a JavaScript portion and a PDF portion. Let's triage!
  • CPDField objects are internal AcroForm.api C++ objects used to represent the text fields, buttons and many other things in a PDF. In the POC, there is a CPDField object that is a child of another object. When calling JavaScript on the parent with a callback that performs state-changing actions on the child, we crash. But why?
  • CPDField has an internal property called LockFieldProp to prevent concurrent access issues. This lock is checked every time a change happens on the object. However, when using a custom callback (like the one mentioned above), a recursive call can be made that frees the child object, since the child was never locked.
  • When the recursive call goes back up the call stack, the object pointer is now freed, resulting in a use-after-free vulnerability. The initial patch ONLY locked the direct child of an object. Hence, the author wrote a POC that modified the grandchild of a field, which triggered the same vulnerability as before.
  • This bug appears to be extremely exploitable! From JavaScript, the freed CPDField is easy to reclaim via a heap spray of similarly sized objects. Once the freed CPDField has been swapped out with an object that we control, it is now gameover! The POC submitted to ZDI, once the pointer was dereferenced, demonstrated control of a virtual function pointer.
  • Overall, an interesting find that appears to be extremely exploitable. Seeing deep into the crash analysis of a real bug that the author found while fuzzing was quite the insight.
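  • The patch bypass is easy to model: locking only the direct child leaves every deeper descendant unprotected. A toy C sketch of the two locking strategies (structures and names are mine, not Adobe's):

```c
#include <assert.h>
#include <stdbool.h>

/* A chain of fields: root -> child -> grandchild -> ... */
typedef struct field { bool locked; struct field *child; } field_t;

/* First-patch behavior: lock just the immediate child before running
 * user callbacks. */
void lock_child_only(field_t *f) {
    if (f->child) f->child->locked = true;
}

/* Full fix: lock every descendant before running callbacks. */
void lock_subtree(field_t *f) {
    for (field_t *c = f->child; c; c = c->child)
        c->locked = true;
}

/* A free attempt made from inside a callback must respect the lock;
 * returns true if the object could still be freed mid-operation. */
bool try_free(const field_t *f) { return !f->locked; }
```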

Discourse SNS webhook RCE- 657

Joern Chen    Reference →Posted 4 Years Ago
  • Discourse, the project this SNS webhook belongs to, is used for mailing lists, discussion forums and chat rooms. They have a very nice Security Guide as well, if you are looking for something to look at.
  • While staring at the code in this project, the author saw an interesting piece of code: open(subscribe_url). Ruby's open function can be abused for OS command injection, since a string starting with a pipe character is executed as a command.
  • The problem is that this code path has a ton of verification, including requiring a proper AWS PEM file from SNS: the certificate URL must be within the SNS service and end with a .pem extension. Since we do not control the PEM file coming from SNS, this causes us issues.
  • The code itself is intended to send push notifications to registered endpoints, and the code snippet in question grabs the .pem file. Could this verification be bypassed?
  • The regex verification allows any SNS endpoint, which means any SNS operation can be used. The first option was crafting an X509 certificate error by sending a strange-looking URL. But we need a 200 response for this to work, darn.
  • The SNS operation GetEndpointAttributes has a field called CustomUserData. By using this endpoint, it was possible to have the API return a valid X509 certificate.
  • With this out of the way, the SubscribeURL on the message sent with that certificate could be used for command injection. At this point, we could pop a shell on the Discourse instance, even though we clearly should not be able to!
  • Overall, a great writeup on how to read the docs and source code in order to find impactful exploits. Building your service on cloud services is a complicated affair, which really reminds me of a HashiCorp Vault vulnerability.