Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

FreeBSD 11.0-13.0 LPE via aio_aqueue Kernel Refcount Bug- 940

Chris - Access Vector · Reference · Posted 3 Years Ago
  • FreeBSD supports asynchronous I/O (AIO) with POSIX syscalls. Naturally, with asynchronous actions, reference counts are important to make sure objects aren't deleted too early.
  • The code path used by AIO takes a reference to the ucred. However, this reference is NOT released when the error path is taken. Although this looks like a simple memory leak, arbitrary increments of a 32-bit reference counter can lead to an invalid release of the object! To hit this error path, simply submit a file that is not a regular file or directory.
  • At this point, there is a theoretical issue: the reference count is overflowable. But is it practical? The author lists 5 things to check:
    • The reference count is NOT checked for an overflow.
    • It is feasible to do this in a reasonable time.
    • The overflow can cause a premature free to give a double free or use-after-free vulnerability.
    • Flexibility on the heap to allocate a useful object in this space.
    • The object itself is useful with the allocation.
  • The overflow is only checked with KASSERT, which is NOT enabled on production builds. Additionally, this is a 32-bit integer, making the wrap feasible in a reasonable amount of time. Can we trigger the free? In the normal use case, a job cleans up after itself, so once the counter has wrapped, an ordinary release frees the object too soon.
  • After this, the author goes through the internals of the FreeBSD kernel heap allocator. There are specialized zones and general purpose heaps. The object being exploited belongs to the general purpose heap, allowing for many standard gadgets. For exploitation, this object is literally security related! Shouldn't be too bad to swap in a fake object to control the ucred for a different uid.
  • The object has many pointers in it, so a simple swap doesn't work; since FreeBSD doesn't have KASLR, it could still be exploited that way though. One alternative plan: keep the pointers the same and use a partial overwrite to cause havoc, which is doable since FreeBSD doesn't zero memory on free. The author didn't go this route though.
  • Originally, using crcopysafe was the way to free the object. Instead of doing this in one go, they had a different idea: use the free to get a kernel info leak first. Then, with the leak in hand, recreate an identical credential back in the hole left by a legitimate ucred. Now we have escalated privileges. They found that cap_ioctls_limit can be used to write lots of custom data and cap_ioctls_get can be used to read it back.
  • The flow for the leak is as follows:
    1. Allocate the object and trigger the vulnerability to create the hole.
    2. Create the fake credential using cap_ioctls_limit over the UAF object.
    3. Collect the AIO job in order to free the buffer.
    4. Allocate another fresh set of credentials. This will go into the UAF file buffer, giving us an amazing memory leak.
  • To put the ucred back in, we need to abuse this UAF. This can be done with the following steps:
    1. Free the ucred that the file points to.
    2. The ucred is now free. Create a new file with cap_ioctls_limit to swap in a fake ucred. This is possible because we know most of the pointers from the info leak above.
    3. You are now root!
  • Overall, an interesting vulnerability! The exploitation was straightforward once the author realized the ref count was wrappable and the data within the object controllable. The control around the allocations makes this very nice. Great writeup!
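As a sanity check on the "reasonable time" claim above, here is some quick back-of-envelope arithmetic; the syscall rate is my own assumption, not a number from the post.

```python
# Back-of-envelope feasibility check for wrapping the 32-bit refcount.
# SYSCALLS_PER_SEC is an assumed rate for the failing AIO syscall, not
# a measured figure from the writeup.
INCREMENTS_NEEDED = 2 ** 32        # leaked references required to wrap
SYSCALLS_PER_SEC = 100_000

seconds = INCREMENTS_NEEDED / SYSCALLS_PER_SEC
hours = seconds / 3600
print(f"~{hours:.1f} hours of leaking at {SYSCALLS_PER_SEC}/s")
```

Even at a modest rate, a 32-bit counter wraps in well under a day, which is why the author judged the bug practical.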

RCE via the DecompressedArchiveSizeValidator and Project BulkImports- 939

Vakzz · Reference · Posted 3 Years Ago
  • GitLab is a version control platform with many other services, such as CI/CD.
  • DecompressedArchiveSizeValidator is a function used to check the size of an archive before extracting it. This is done by using popen3 with gzip. Since the path is potentially user-controlled input, the author tried to find a path to exploit this.
  • One place this is used is ImportExport::Importer, which gets the path from project.import_source. Most of the time, this variable is nil. In the case of bulk imports, though, it is set with user-controllable data.
  • For bulk imports, there is a tiny amount of verification done in a regex to remove prohibited values. Shell metacharacters are not filtered by this regex though. A trivial command injection payload can now be used to write to the file system or do anything else!
  • There are a few complications with this. First, the bulk projects feature flag needs to be enabled. However, the author found a bypass to enable this flag, which allowed them to trigger the vulnerability on GitLab.com as well. Secondly, the bulk import size check only happens after 5+ minutes, once the file hits its max retries.
  • Overall, awesome bug! The author found a dangerous sink, then worked backwards to find the source.
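The class of bug above can be sketched in a few lines; the path and payload here are my own illustration of shell-metacharacter injection, not GitLab's actual code.

```python
import subprocess

# Hypothetical attacker-controlled import path; the semicolon payload
# mirrors the metacharacter bypass described in the summary above.
path = "archive.tar.gz; echo INJECTED"

# Unsafe: interpolating the path into a shell command line, as a naive
# popen3("gzip -l #{path}")-style call would. (echo stands in for gzip
# so this demo is harmless.) The second command actually executes.
out = subprocess.run(f"echo gzip -l {path}", shell=True,
                     capture_output=True, text=True).stdout

# Safe: passing an argv list means metacharacters stay literal bytes.
out2 = subprocess.run(["echo", "gzip", "-l", path],
                      capture_output=True, text=True).stdout
print(out, out2)
```

In the unsafe variant "INJECTED" appears on its own line because the shell ran a second command; in the safe variant the payload survives only as an inert argument string.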

Browser-Powered Desync Attacks- 938

Portswigger - James Kettle (albinowax) · Reference · Posted 3 Years Ago
  • HTTP request smuggling is an attack that smuggles extra HTTP requests past security controls. Most of the time, this comes down to a misunderstanding of the HTTP protocol between two endpoints. However, it can also come down to abnormalities in a single implementation. This post is going to be a plethora of things that don't feel like they go together lolz.
  • A connection can be reused in HTTP. The author found that the Host header was only validated for the first request on a connection but not subsequent ones.
  • Another trick was that the Amazon Application Load Balancer automatically forwarded the Transfer-Encoding: chunked header when the CL header wasn't there. Since browsers automatically add this header with HTTP/2 anyway (even if it's not required), this caused a trivial desync. The surprise on this is unreal!
  • In some cases, the server completely ignores some headers. While testing a site for this (CL.0 smuggling), the front-end would use the CL but the back-end wouldn't care. This led to a very simple attack where the back-end saw the second request on a connection as the body of the first. What's freaky is that this could be done via a browser!
  • The above attack was possible because the service wasn't expecting a POST request at all. The author decided to check in other locations where this desync could be possible. They found that amazon.com/b was vulnerable to this attack, leading to the response queue getting poisoned. This allowed them to get authentication tokens by receiving the wrong response.
  • Client-side desync (CSD) had been born: a request to a single server (with nothing else running) that makes something bad happen. Even worse, some of these can be triggered from the browser, making a drive-by CSRF attack possible with this.
  • A CSD vector is an HTTP request with two key properties. First, the CL must be ignored. This can happen with errors (long URLs, weird encoding, etc.) or a POST request that wasn't expected. Secondly, a web browser must be able to trigger it cross-domain, which is limited by what we have access to in the browser.
  • The trick for determining whether the CL is being ignored is to use an overlong Content-Length. If this returns fine, then you may have an issue. To confirm the bug, send multiple HTTP requests down the same connection to queue several requests at once. The article includes a nice script for the Chrome dev tools to see if you have found a problem via connection ID tracking.
  • With the connection pool poisoned on a cross-origin request, what can we do? If the website has user storage, then using the end of one request to save a victim's request allows for the saving of cookies and other things. Because we can poison the response queue, we can launch things like header-based XSS and more.
  • A real-world case study: Akamai ignores the CL header on a redirect. By making a POST request to such an endpoint, the queue is poisoned, since the body of the message is ignored. The first complication is the redirect getting followed; this can be stopped by disabling CORS to resolve an error. The second problem is that browsers will discard the connection if too much data is received. To get around this, make the request take more time by adding a cache buster to it.
  • Another case study shows client-side cache poisoning of JavaScript files using the smuggling method. By sending back the poisoned file once and then navigating to the page, we can add our own JavaScript to it.
  • The author also found another form of desync by pausing connections, abusing misguided request-timeout implementations.
  • There are tons of ways to desync HTTP servers, as explored in this and previous posts. There are many other ways to cause and exploit these issues in the future. Absolutely amazing research!
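The overlong Content-Length probe described above can be sketched as a request builder; this is my own illustration, not PortSwigger's tooling, and the paths are invented.

```python
# If the back-end honours Content-Length it will hang waiting for the
# missing body bytes; a CL.0 back-end responds immediately and treats the
# "body" as the start of the next request on the reused connection.
def build_cl0_probe(host: str) -> bytes:
    # Prefix of a second, smuggled request (path is hypothetical).
    smuggled = b"GET /hopefully404 HTTP/1.1\r\nFoo: bar"
    return (
        b"POST / HTTP/1.1\r\n"
        b"Host: " + host.encode() + b"\r\n"
        # Deliberately larger than the body we actually send:
        b"Content-Length: " + str(len(smuggled) + 512).encode() + b"\r\n"
        b"\r\n" + smuggled
    )

probe = build_cl0_probe("example.com")
print(probe.decode())
```

Sending this over a raw socket and getting an immediate response (rather than a timeout) is the tell that the CL header is being ignored.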

Exploiting a Seagate service to create a SYSTEM shell (CVE-2022-40286)- 937

x86Matthew · Reference · Posted 3 Years Ago
  • Seagate is a storage company known for its network-attached storage devices. The author found a Windows desktop client called Seagate Media Sync, a tool for copying media files to Seagate disks.
  • The author chose to review the internal communications between the low-privileged UI and the high-privileged service. While watching it in Process Explorer, the author noticed that it created a named pipe, which was used for the communication.
  • The pipe communication was literally writing bytes to a file; this appears to be a custom protocol with two writes occurring. After analyzing the hex data and reverse engineering the code, the author figured out the format. The first write was a 4-byte length field indicating the size of the message body. The message itself has a signature (0x4B5C), a major command ID and a minor command ID. After this, the information for the specific command is sent.
  • The author mapped out all of the major and minor command IDs to see what they do. One of the more interesting ones was major command 0x10 and minor 0x400. This command was writing a user controlled registry key with an arbitrary value. Damn, that's a super powerful primitive!
  • The author wrote their own client to test this out and it worked! To become SYSTEM, an attacker can register their own service by writing a path to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services. Once the computer is rebooted, the attacker's code would run.
  • Overall, the reverse engineering of the custom protocol and the vulnerability were quite unique in my mind. It was neat to see the author take this to full privilege escalation to SYSTEM on Windows.
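The framing described above can be sketched with struct; the 4-byte length prefix, 0x4B5C signature and the 0x10/0x400 command IDs come from the post, but the exact field widths, byte order and payload are my assumptions for illustration.

```python
import struct

SIGNATURE = 0x4B5C        # message signature from the post
MAJOR_REGISTRY = 0x10     # major command ID (registry operations)
MINOR_WRITE_KEY = 0x400   # minor command ID (write key/value)

def build_message(major: int, minor: int, data: bytes) -> bytes:
    # Assumed layout: signature, major, minor as little-endian u16s,
    # followed by command-specific data; length prefix sent first.
    body = struct.pack("<HHH", SIGNATURE, major, minor) + data
    return struct.pack("<I", len(body)) + body

msg = build_message(MAJOR_REGISTRY, MINOR_WRITE_KEY, b"some-key=some-value")
```

A real client would then perform the two writes over the named pipe: the 4-byte prefix, then the body.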

Vulnerability in Linux containers – investigation and mitigation- 936

Bentham’s Gaze · Reference · Posted 3 Years Ago
  • The author found a vulnerability with the usage of Linux containers and permissions.
  • The standard Linux permissions are read (r), write (w) and execute (x). These permissions apply to the owner, group and others, in that order. A user can be in several groups. Finally, there are ways to allow somebody to run code as another user or group - setuid and setgid.
  • In Linux, there is the concept of negative group permissions. By giving the group no permissions on a file while others have some, every user in the file's group is denied actions that everyone else can perform. This allows for the building of a denylist for a particular object.
  • Can you drop a group to get access to such a file though? By default, this is not possible, because the checks happen on the supplementary (additional) groups of the user, and the primary group is added to these as well.
  • In containers, however, the primary group is NOT copied into the supplementary groups. As a result, running a setgid program switches the effective group, allowing the primary group to be dropped entirely. In this state, they could perform actions on a file with negative permissions.
  • The vulnerability was found in Podman, Buildah, cri-o and the Docker Engine. A fix should land in the specification and the actual implementations as well. The author offers a few mitigations, including using su -l for the user and duplicating the primary group into the supplementary groups manually. Overall, wonderful post!
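The negative-permission setup described above is easy to reproduce; this is a minimal sketch using a throwaway temp file, not the paper's own test case.

```python
import os
import stat
import tempfile

# Mode 0604: owner and world can read, but the file's group gets nothing.
# Any user whose (effective or supplementary) group matches the file's
# group is denied - a per-object denylist.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o604)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                      # 0o604
group_bits = mode & stat.S_IRWXG      # empty group triad
other_read = bool(mode & stat.S_IROTH)
os.remove(path)
```

The container bug above matters precisely because the group check fires before the "other" check: dropping the denylisted group lets the "other" read bit apply instead.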

How to turn security research into profit: a CL.0 case study- 935

Portswigger - James Kettle · Reference · Posted 3 Years Ago
  • James Kettle has reopened the desync/smuggling vulnerability class in the last two years. While doing so, he collected an interesting set of notes on finding bugs.
  • The number of bugs available is based on a few things:
    • How well understood or explained is a bug?
    • How complex is the system or bug?
    • How obscure or known is the bug class?
  • The bugs exist where issues are complex, obscure and poorly explained. If you're looking at SQL injection on a popular website, there will be heavy competition. If you're looking for Web Cache Deception, a bug class from a few years ago, you'll have less competition.
  • Where are the good opportunities in research? Most of my own work builds upon others', as James alludes to. He has a few points:
    • Did the researcher miss anything?
    • Did they release a scanning tool? If not, can I make one?
    • If they did release a scanner, does it detect every vulnerability mentioned in the paper?
    • Can I see any blind spots in their scanner's design or code?
  • The example James gives is his own research on desync attacks. CL.0-style vulnerabilities were mentioned in the presentation, with several obfuscated payloads to make them possible. However, these were never added to the scanning tool.
  • James added these to the scanning tool and instantly found 15 unique bug bounty programs vulnerable to this type of desync. He claims that adding new permutations is the easiest way to find fresh desync vulnerabilities.
  • Overlooked research is an easy way to find new bugs, as most people simply reuse current techniques. Overall, good write-up!

Securing Developer Tools: Argument Injection in Visual Studio Code- 934

Thomas Chauchefoin - Sonar Source · Reference · Posted 3 Years Ago
  • Visual Studio Code is a text editor from Microsoft with many awesome plugins. The authors decided to audit the Git plugins.
  • Visual Studio Code has two URI handlers for deep links: vscode:// and vscode-insiders://. For these to work, a simple handleUri() interface needs to be implemented. If a vulnerability is found in such a handler, it is a major security issue, because it can be exploited with one click on a link.
  • One of the implementations put the URL from a clone call directly into an exec() system call. If this URL starts with a dash, it will be understood as an option rather than a positional argument. Neat!
  • Command injection is trivial to exploit. Argument injection, however, depends on which arguments the targeted tool supports. In this case, we control two inputs for the injection but cannot use spaces, since they will be URL-encoded.
  • In the URL, the authors decided to use the flag --upload-pack. Normally, this specifies the command used to learn which objects the remote side is missing and send them after packing. However, it can be abused to execute a specific command while git communicates with the remote end. As the URL, an attacker would put -u$({open,-a,calculator})
  • The final trick is putting a :x at the end of the URL. This ensures the PROTO_LOCAL transport is used, so that the upload-pack command mentioned above actually runs. Not much information is provided on this requirement beyond that.
  • Overall, interesting post on URI handling and argument injection.
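The dash-to-option confusion above, and the classic "--" guard against it, can be sketched as follows; the payload string is from the post, but build_clone_argv() is my own illustration, not VS Code's actual fix.

```python
# A leading dash makes user input parse as an option; "git clone -u ..."
# is short for --upload-pack. Placing "--" before user input tells git to
# stop option parsing, so the same bytes become a plain repository URL.
def build_clone_argv(user_url: str, guard: bool) -> list[str]:
    argv = ["git", "clone"]
    if guard:
        argv.append("--")   # everything after this is positional
    argv.append(user_url)
    return argv

payload = "-u$({open,-a,calculator})"

unsafe = build_clone_argv(payload, guard=False)  # parsed as --upload-pack
safe = build_clone_argv(payload, guard=True)     # inert URL argument
print(unsafe, safe)
```

The argv lists are only built here, never executed; the point is the position of "--" relative to attacker-controlled input.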
  • SSO providers are the main authentication scheme for logging in to platforms, such as Google. Besides these, there are many corporate products, such as Cisco Identity Services Engine, Oracle Access Manager (OAM) and VMware Workspace ONE Access. This post is aimed at finding a vulnerability in the SSO provider VMware Workspace ONE Access.
  • To start with, there is a minimal attack surface for unauthenticated users; as a result, an auth bypass is required. The API generateActivationToken will generate an activation code for an existing OAuth2 client. Calling activate will return the client ID and client secret for this user.
  • Instead of finding a bad code path, they abused functionality in the app itself! By calling generateActivationToken to get a code and sending this code to activate for a default client in the system, we can now act as an application calling the provider. This gives us much more attack surface to work with.
  • The second vulnerability is a Java Database Connectivity (JDBC) Injection vulnerability. The function dbCheck accepts a JDBC URI in order to make a database connection remotely. However, this is a known vulnerable sink that can be taken to code execution.
  • One method is sending back an arbitrary serialized object that will be deserialized into any object we want. Using the CommonsBeanutils1 gadget (from the ysoserial tool), a shell can easily be gained. A second way is abusing the local gadget socketFactory. By instantiating this object, an attacker can trigger the execution of a constructor defined in an arbitrary Java class with a controlled string argument.
  • Is code execution enough? Nope! The author wanted to escalate the privileges on the box to become root. While reviewing the permissions of the horizon user on the box via sudo -l, they reviewed the scripts that could be run as root. First, the script publishCaCert.hzn will copy a file into a specified location then make it read/writable by the executor of the script. By doing this, sensitive files can be leaked.
  • The script gatherConfig.hzn will take a DEBUG file and change its permissions to the TOMCAT user/group. Using this script, we can point a symbolic link called debugConfig.txt at a root-owned file to change its permissions. To get persistent access via either of these methods, the script certproxyService.sh can be made modifiable and then run as root.
  • Overall, really awesome post on finding vulnerabilities in the logic of an application and code execution bugs via non-command injection/memory corruption fashion. The post is extremely detailed with many extra routes on top of everything else.

Replicant: Reproducing a Fault Injection Attack on the Trezor One- 931

VoidStar · Reference · Posted 3 Years Ago
  • Joe Grand demonstrated a fault injection attack on the Trezor One hardware wallet in order to recover the key from the device. The original post is very dramatic but shies away from some technical details. In this post, the author publishes their research on how this attack was performed.
  • Fault injection is the intentional glitching of a system to produce undefined behavior. Hopefully, this benefits us, the hacker, by skipping an instruction, changing a register value or something else. There are three main types of glitching with various benefits: clock, voltage and electromagnetic.
  • The target of this attack is the Trezor One hardware wallet. This is a low-cost, open source wallet built around the STM32F2. The readout protection is set to RDP2, meaning that flash is locked, RAM reads are locked and the debug interface is disabled. If the instruction verifying this configuration were glitched, we could re-enable these interfaces.
  • The goal is to glitch the voltage/power of the wallet. To do this, the author decided to attack the internal voltage regulator. Since this is the internal regulator, modifications to its lines (VCAP_1 and VCAP_2) directly affect the core logic, flash memory and IO logic. These lines are accessible from the outer layer of the PCB as well.
  • The vulnerability being exploited was an issue with the BootROM of the STM32F2, commonly referred to as Chip.Fail. This bug allows an attacker to inject a fault around 170 microseconds after BootROM execution begins in order to glitch the RDP security check. With RDP2 no longer in effect, a debugger can be attached to read out the SRAM of the device and recover the wallet's contents.
  • How do we practically set this up though? First, we remove the capacitors connected to the VCAP lines; the capacitors would try to keep the voltage stable, which we do not want. Hooking up an oscilloscope is required, since we need to know when the device turns on in order to time the attack. Additionally, we need to solder to the RESET line in order to reset the board after an unsuccessful glitch. Finally, we need to solder to the SWD port for the debugging interface.
  • Finally, with everything set up, we can replicate the attack! The author used the fancy ChipWhisperer to power the wallet, trigger the reset line and glitch the VCAP line. They used an STLink for the debugging portion of the project. If a glitch was unsuccessful, they simply changed the parameters and tried again. They knew whether a glitch was successful by using the STLink SWD enumeration code.
  • Prior to using this on the real board, they practiced on an STM32 developer board. With the ChipWhisperer, they had to perfect the width of the glitch, its timing and the number of clock cycles to repeat the glitch for. After playing with these settings and letting the setup run for an entire weekend, they successfully got the SWD debugger enabled!
  • Overall, I love the post. One of my big hurdles for fault injection was knowing how to wire everything up on the board itself. This gives me a good idea of how to do that, which I really appreciate.
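The parameter search described above boils down to sweeping a small grid until the debugger comes up. This is a generic sketch: check_debugger() is a placeholder for the ChipWhisperer glitch attempt plus the STLink SWD enumeration check, and the ranges are invented, not the author's values.

```python
import itertools

def check_debugger(width: int, offset_us: int, repeat: int) -> bool:
    # Placeholder: pretend exactly one combination unlocks SWD. In the
    # real attack this is "glitch, then try SWD enumeration".
    return (width, offset_us, repeat) == (4, 170, 2)

widths = range(1, 10)        # glitch width in clock cycles (assumed)
offsets = range(160, 181)    # microseconds after reset, near the 170 us window
repeats = range(1, 4)        # how many cycles to repeat the glitch (assumed)

found = None
for width, offset_us, repeat in itertools.product(widths, offsets, repeats):
    if check_debugger(width, offset_us, repeat):
        found = (width, offset_us, repeat)
        break
print("SWD enabled with parameters:", found)
```

A weekend-long run, as in the post, is just this loop with real hardware in place of the stub and retries per parameter set.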