Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

CheckMk: Remote Code Execution by Chaining Multiple Bugs (1/3)- 1033

Stefan Schiller - SonarSource    Reference →Posted 3 Years Ago
  • CheckMk is an IT infrastructure monitoring solution written in Python and C++, similar to Zabbix and Icinga. The architecture has an Apache reverse proxy which directs requests to several web servers.
  • Behind the proxy sit the CheckMk GUI, a Python WSGI application, and a PHP wrapper that integrates NagVis, the open source visualization component. These make up the core monitoring services. Through them, the Livestatus Query Language (LQL) can be used to query information about the devices being monitored. Additionally, another service (agent-receiver) handles registering agents and collecting info on them.
  • First, they found a server-side request forgery in the agent-receiver. This allows services only accessible on localhost to be hit. Although the endpoint requires an auth header, it only checks that one is present before forwarding the request. The user-supplied host name is appended to the target URL without any sanitization, giving us an SSRF bug. This is limited to GET requests against the CheckMk GUI only, though.
  • The CheckMk GUI has a few endpoints that are unauthenticated when reached via a proxied request. One of these handles AJAX graph images. The query is performed using the LQL interface mentioned before, and an attacker-controlled parameter gets put into the query. Since the query language is newline-delimited, an attacker can inject newlines to change the query being made! Queries can even be stacked.
  • Security mostly assumes that you didn't get through the first door. Once you are inside, the boundaries become blurred and things get easier. In the next post, the authors dive into HOW this arbitrary LQL query capability leads to more issues.
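The newline-injection class can be sketched in a few lines. This is a minimal illustration with a hypothetical query template, not CheckMk's actual code: the bug pattern is an attacker-controlled value dropped into a newline-delimited query language without stripping newlines.

```python
def build_lql_query(host_name: str) -> str:
    # Vulnerable pattern: user input interpolated into a newline-delimited
    # query language with no filtering of newline characters.
    return f"GET hosts\nFilter: name = {host_name}\n"

# Benign use: one query, one filter.
benign = build_lql_query("webserver01")

# Injecting a newline lets the attacker append headers or stack a second query.
payload = "webserver01\nGET services\nColumns: plugin_output"
injected = build_lql_query(payload)
print(injected)
```

Here the single `host_name` value smuggles in a whole second `GET services` query, which is exactly the "stacked queries" behavior described above.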

NXP i.MX SDP_READ_DISABLE Fuse Bypass (CVE-2022-45163)- 1032

Jon Szymaniak - NCC Group    Reference →Posted 3 Years Ago
  • NXP i.MX SoCs have various fuses for security-sensitive configuration. Once a fuse has been blown, the functionality is forever disabled. The SDP_READ_DISABLE fuse is meant to prevent using the UART interface to read out arbitrary memory, such as crypto keys, auth tokens and many other things.
  • The boot image on NXP devices supports Device Configuration Data (DCD) sequences. These operations are commonly used for setting up I/O interfaces for boot loaders, clock initialization and more. When the device fails to boot, it falls into a Serial Download Protocol (SDP) boot mode, which opens up the WRITE_DCD command.
  • The DCD CHECK_DATA command instructs the boot ROM to read a 32-bit value at a specified address and evaluate an expression on it. This is done via a mask, and the check polls continually until the expected value appears, unless a poll count is specified.
  • The catch is that this happens regardless of whether the SDP_READ_DISABLE fuse is blown! This is a violation of the intended security policy. An interesting note: DDR memory is volatile, so this shouldn't be a big deal, right? It turns out that DDR memory decays very slowly when refresh cycles aren't running.
  • The goal is to use CHECK_DATA as a DDR read primitive. This was done by collecting several timing samples for a sweep of reads. The execution time of the command can be directly correlated with the bits being compared, meaning a timing side channel can be used to figure out the bits in memory.
  • The high level of this attack is shown below:
    1. Induce the loading of the sensitive data into memory. This is application specific though.
    2. Force the Device into SDP mode. This can be done by grounding a few pins on the chip or forcing a failure in the boot process using some other means.
    3. Initialize the DDR controller via DCD. Since we need to read from the DDR controller for the attack, it must be ON to do this.
    4. Execute the side channel reading attack in DDR.
    5. Analyze the content to find data and look for errors.
  • Overall, awesome blog post on a logic issue that led to a bypass of the security of the chip. A very well written article as well.
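The timing side channel can be modeled as a toy simulation. Everything below is invented for illustration (the real attack measures round-trip times of SDP commands over USB/UART), but it captures the idea: CHECK_DATA returns fast when the masked comparison already matches and slow when it has to poll until the count runs out, and that difference leaks one bit per query.

```python
SECRET = 0b1011_0010_1100_0101  # pretend this 16-bit word sits in decaying DDR

def check_data(value_at_address: int, mask: int, expected: int, max_polls: int = 100) -> int:
    """Model CHECK_DATA: poll until (value & mask) == expected or the count runs out.
    The returned poll count stands in for measured execution time."""
    for polls in range(1, max_polls + 1):
        if value_at_address & mask == expected:
            return polls           # condition already true: returns fast
    return max_polls               # condition never true: polls until the limit

recovered = 0
for bit in range(16):
    mask = 1 << bit
    # Ask "is this bit set?": a fast response leaks a 1, a slow one leaks a 0.
    if check_data(SECRET, mask, mask) == 1:
        recovered |= mask

assert recovered == SECRET
```

The real attack sweeps addresses the same way, one masked comparison at a time, and reconstructs memory contents from the timing samples.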

XSS on account.leagueoflegends.com via easyXDM- 1031

Luke Young    Reference →Posted 3 Years Ago
  • Riot Games is a video game creator with many different websites. Because of this, there are many different endpoints that need access to metadata associated with the user. In order to do this, information needs to be shared cross-origin. Back in 2016 (and before) CORS and window.postMessage were spotty at best.
  • What's the solution to this? easyXDM was a JavaScript library that allows for cross-origin calls while using anything that was available - postMessage, Flash LocalConnection or anything else. easyXDM had a producer and consumer model - a producer would export some JavaScript functions that another site could request.
  • In the context of Riot Games, this was pm.html on account.leagueoflegends.com. The exported functions were get-cookies, set-cookies and send, used for cross-origin requests/responses.
  • easyXDM webpages obtain context from a series of query parameters. It should be noted that these are ALL attacker controlled though:
    • xdm_e: The URL to load the page if it's the consumer or the URL of the parent page if it's a producer.
    • xdm_c: The channel to send the messages on.
    • xdm_s: The secret to use to validate between the two parties.
    • xdm_p: The ID of the protocol.
  • This is incredibly scary functionality, so how is it protected? Both the referrer header on the request and the protocol's message origin have to match an allowlist. Exploitation requires bypassing both of these protections.
  • To bypass the referrer check, they thought about using a link on the League of Legends forums. However, they wanted this to be more portable. So, they hunted for open redirects on allowlisted sites that were performed via JavaScript (since server-side 301/302s don't update the referrer header). Conveniently, when the xdm_e parameter doesn't match the referrer, easyXDM itself redirects, and that behavior can be abused as an open redirect too! They found an additional open redirect on another page as well.
  • The origin check validates that the message is coming from an allowlisted origin. However, when auditing the code of the library, the author noticed that HashTransport works by passing data from a child iframe to a parent window via window.location.hash. But there's a problem here: how does it know which child or subdomain sent it? It simply uses xdm_e, which is exactly the bug we need!
  • There's a catch though: if we set the domain to an attacker domain, our page will load, but NOT from an allowlisted origin. If we use an allowlisted origin, OUR page won't load. The solution? Callback hell with iframes.
  • At this point, we can make requests but cannot receive responses. This leaves us with setting cookies and making arbitrary requests using those cookies. They used this to perform XSS on the site.
  • Overall, an interesting bug in a piece of technology that became obsolete. The library maintainers didn't fix the vulnerability because it was considered legacy at the time of reporting.
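The core trust failure can be boiled down to a tiny sketch (names and logic are illustrative, not easyXDM's actual code): the "origin check" is derived from xdm_e, a query parameter the attacker fully controls, instead of from the real sender of the message.

```python
from urllib.parse import urlparse

ALLOWLIST = {"account.leagueoflegends.com"}

def origin_of(url: str) -> str:
    return urlparse(url).hostname or ""

def accept_message(xdm_e: str, real_sender_origin: str) -> bool:
    # BUG: validates the attacker-supplied claim (xdm_e) and never
    # consults real_sender_origin, the origin that actually delivered it.
    return origin_of(xdm_e) in ALLOWLIST

# A message actually sent from evil.com is accepted because the attacker
# simply claims an allowlisted xdm_e in the query string.
assert accept_message("https://account.leagueoflegends.com/pm.html",
                      "https://evil.com")
```

Any validation that compares a client-supplied claim against an allowlist, rather than an authenticated property of the channel, has this shape.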

Proxy Upgrade Pattern- 1030

OpenZeppelin    Reference →Posted 3 Years Ago
  • Smart contracts benefit from being mostly immutable but still need a way to ship software patches. OpenZeppelin maintains a standard for this called the proxy pattern.
  • There is a wrapper (proxy) around the implementation contract. So, the wrapper stays at the same address but the implementation can be changed. This allows consistency while maintaining the ability to update.
  • Typically, this is done by putting code in a fallback function that calls the main implementation. This copies the incoming call data, forwards the call to the implementation via delegatecall and handles the return value.
  • The delegatecall is of particular importance: it means the proxy contract holds the state of the implementation contract. Does this have any security implications?
  • The new implementation contract needs to ensure that the storage layout of the previous contract is extended, not overwritten. Maintaining the order of variables is very complicated but has catastrophic consequences when done incorrectly. For instance, what if the contract had a variable named owner, and after the update the first slot became lastContributor? A collision has occurred!
  • Collisions can overwrite unintended data in crazy ways. In the OpenZeppelin contracts, this is prevented for the proxy's own variables (like the implementation address) by hashing a known string (keccak-256, per EIP-1967) and using the result as the storage slot.
  • Traditional constructors don't work with this pattern, since the proxy won't have the proper state at the time the implementation is deployed. Instead, an initialize function should be used on the new contract, which can be called through the proxy with the proper state.
  • An additional caveat is function clashing. If the proxy and the implementation have functions with the same name, which one is called? For OpenZeppelin, it depends on who is calling. If the admin calls, the request is NOT forwarded to the implementation. For everyone else, it is forwarded.
  • Overall, interesting concept - updating on the blockchain. Function clashing, lack of constructor and storage collisions are all issues that have been found in the wild.
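The storage-collision hazard is easy to model outside the EVM. This toy sketch (all names and layouts invented) captures the delegatecall semantics: the proxy owns the storage slots, and each implementation version only supplies a mapping from slot index to variable name, so reordering variables between versions reinterprets old data.

```python
proxy_storage = {}  # slot index -> value; lives in the proxy, not the implementation

LAYOUT_V1 = ["owner", "balance"]                     # slot 0 = owner
LAYOUT_V2 = ["lastContributor", "owner", "balance"]  # slot 0 repurposed!

def write(layout, name, value):
    proxy_storage[layout.index(name)] = value

def read(layout, name):
    return proxy_storage.get(layout.index(name))

# V1 sets the owner: that value lands in slot 0.
write(LAYOUT_V1, "owner", "0xAdmin")

# After "upgrading" to V2, slot 0 is now read back as lastContributor...
assert read(LAYOUT_V2, "lastContributor") == "0xAdmin"

# ...and any user-controlled write to lastContributor clobbers V1's owner.
write(LAYOUT_V2, "lastContributor", "0xAttacker")
assert read(LAYOUT_V1, "owner") == "0xAttacker"
```

This is why upgradeable contracts only ever append new variables to the end of the layout.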

Polkadot Frontier EVM Integer Truncation- 1029

pwning.eth    Reference →Posted 3 Years Ago
  • There are two ways things can be hacked in blockchain-land: attacking code running on the blockchain, or attacking the blockchain itself. This post is the latter: a bug found while auditing Frontier, the EVM implementation for Polkadot.
  • Frontier executes Ethereum smart contracts but uses the Polkadot substrate as the ledger; differences between these can cause major problems. Ethereum stores integers as 256 bits, but Polkadot stores them on the ledger as 128 bits. This is done by truncating the number in Rust.
  • The balance can never be larger than 128 bits, so what's the problem? The msg.value of a transaction is a full 256-bit value controlled by the user, even if it is an invalid amount to send. An oversized value passes the ledger's verification of the funds being spent, but after truncation no funds actually move on the ledger.
  • What if a contract used the full msg.value though? This is the key to the bug. Code written in Solidity sees the full msg.value while the ledger only uses the low 128 bits. So, we can call something that uses native ETH, like WETH, and trick it into crediting us something we shouldn't own.
  • The exploit payload is awesomely simple: weth.deposit{value: 1 << 128}(). This deposits an insane amount of WETH into our account without spending any actual ETH. From the author's estimates, over $150M was at risk.
  • Even though Moonbeam, Astar and Polkadot each had $1M bug bounty programs, they decided to reward a total of $1M split across them. Kind of a bummer for the author of the post, but a million is an insane amount of money. Overall, amazing bug discovery and exploitation of the issue.
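The 256-bit versus 128-bit mismatch is plain integer arithmetic, sketched below (the variable names are mine; the real logic lives in Frontier's Rust balance conversion): the ledger effectively keeps only the low 128 bits of msg.value, while the Solidity side sees all 256.

```python
MASK_128 = (1 << 128) - 1

msg_value = 1 << 128              # the exploit value from the post

# What the ledger's truncation effectively debits from the attacker:
debited_from_attacker = msg_value & MASK_128
# What a Solidity contract like WETH sees and credits:
credited_as_weth = msg_value

assert debited_from_attacker == 0          # nothing actually leaves our account
assert credited_as_weth == 1 << 128        # but the contract credits 2**128 wei
```

Choosing exactly 2**128 makes the low 128 bits all zero, so the ledger sees a transfer of nothing while the contract sees an astronomically large deposit.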

F5 BIG-IP and iControl REST Vulnerabilities and Exposures- 1028

Ron Bowes - Rapid7    Reference →Posted 3 Years Ago
  • BIG-IP is a family of application delivery products from F5. There is a suite of internal, admin-only APIs that tends to only be exposed on the LAN the device sits on. This article is a dive into those.
  • The first CVE is a large chain of security issues. They first found a trivial command injection in a binary called f5_update_checker. This happens via a file called f5_update_action; simply adding a command injection payload to this file gives code execution.
  • But this really isn't a huge problem on its own, since you need to be able to write a file to the system first. While playing around with the admin SOAP API, they obtained the ability to write a file to an arbitrary location with arbitrary content. See where this is going!?
  • Additionally, the SOAP API was vulnerable to CSRF, since it lacked proper cookie flags and other protections. The catch is that sending an XML request from the browser triggers a pre-flight request, making this seemingly impossible. The author instead put the XML into a form with a plaintext content-type.
  • From there, they hit another problem: a form submission uses a key=value format! This would corrupt the XML payload being sent. However, XML allows for comments! So, the key became <!-- and the value became --> REGULAR XML.... This comments out the equals sign (=) from the form submission, making it valid XML. Amazing.
  • The SOAP API runs as root, but BIG-IP has SELinux as well. This means that obvious areas of attack like /etc/profile.d cannot be written to. They noticed a symbolic link within that directory for a bash script pointing to /var/run/config/timeout.sh. Since that location isn't protected by SELinux, it ended up being the bypass, paired with the code execution method mentioned above.
  • With the CSRF, arbitrary file write and command injection/SELinux bypass, we've got code execution on BIG-IP. The second RCE method was a newline injection into rpmspec files via another administrative API. Since these files are used to create RPM packages, adding new parameters/fields leads to execution of arbitrary shell commands.
  • Overall, awesome post and I was happy to talk to the author at Hushcon this year to get more information about the CSRF issue.
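The XML-comment trick is worth seeing concretely. A text/plain form serializes a single field as name=value; making the name <!-- and the value --> followed by the real payload hides the stray equals sign inside a comment. The sketch below checks that the result really parses as XML (the envelope content is a placeholder, not F5's actual SOAP body):

```python
import xml.etree.ElementTree as ET

field_name = "<!--"
field_value = "--><soap><runCommand>id</runCommand></soap>"

# What the browser sends for the one form field: "name=value".
body = f"{field_name}={field_value}"
assert body == "<!--=--><soap><runCommand>id</runCommand></soap>"

# The equals sign is swallowed by the comment, leaving well-formed XML.
root = ET.fromstring(body)
assert root.tag == "soap"
```

A leading comment before the root element is legal XML, so a server-side parser happily accepts the payload.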

Till REcollapse- 1027

0xacb    Reference →Posted 3 Years Ago
  • Input validation is a crucial part of web application security. However, with all of the data parsing there are a multitude of ways this could go wrong. Finding a different endpoint, bypassing the regex... lots of different ways.
  • In this post, the author digs into normalization: the process of translating data into a more canonical, understandable format. For instance, changing capitalization is one form of such translation.
  • Some normalization steps are intentional, but others happen as a side effect of general string handling. For instance, calling unidecode in Python on a string can change it in unexpected ways.
  • When dealing with regex parsing, string parsing and everything else, different representations slip through the cracks. For instance, take the regex ^(?:https?:\/\/)?(?:[^\/]+\.)?example\.com(?:\.*)?$. This is meant to accept URLs for example.com and its subdomains.
  • The text https://example՟com will be accepted by the regex as a domain argument, then translated to something entirely different by punycode, causing a crazy bypass. How did they find this out? Using their new tool REcollapse, a blackbox regex fuzzer!
  • This tool seems pretty rad for finding regex parsing issues. To use it, choose separator points and normalization points. Then, mess with the input until something goes through. They show some real world examples from a talk.
  • The first interesting one was a redirect URI for OAuth. Using anything besides the standard URL caused issues. However, by fuzzing away at the API, they found that %3b%40 (;@) was able to bypass the redirect-link parsing but STILL point to our endpoint.
  • They used this to cause cache confusion, a Shopify account takeover and many other bugs. The tool looks pretty easy to use as well, which is awesome. Parsing differences between two different systems will always be a problem!
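The bug class generalizes beyond the exact payloads in the post. Here is a minimal sketch of the pattern (my own example, not one of theirs): a check runs on the raw input, and a later normalization step quietly changes the value after the check has already passed.

```python
import unicodedata

# 'ｌocalhost' with a FULLWIDTH LATIN SMALL LETTER L (U+FF4C) up front.
user_host = "\uff4cocalhost"

# Naive blocklist check on the raw string: looks safe.
assert user_host != "localhost"

# ...but a later NFKC normalization (or a unidecode call) collapses the
# fullwidth character back to ASCII, defeating the earlier check.
normalized = unicodedata.normalize("NFKC", user_host)
assert normalized == "localhost"
```

Validating before normalizing (or normalizing in one component but not another) is exactly the differential REcollapse is built to fuzz for.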

Hyundai Car Takeover via Mobile Interface- 1025

Sam Curry    Reference →Posted 3 Years Ago
  • Most people focus on key fob hacking and similar techniques to break into a car. But what about mobile and web issues? If you can open a door via the web, that's still a major problem. Because Hyundai allows for this, they started proxying the actions of the mobile application.
  • Below is the simple HTTP request to unlock a car:
    POST /ac/v2/rcs/rdo/unlock HTTP/1.1
    Access_token: token
    
    {"userName":"EMAIL","vin":"VIN"}
    
  • The access token was a JWT with our email inside of it. So, why would they require an email in the request if they already had it in the JWT? Any time you attempt to use a different email than your own, the request is rejected. Sam's thought was: what if we could trick the server into parsing a victim email from both inputs?
  • Let's fuzz! They started by fuzzing the registration page and found two things: the character allowance was generous and no email verification was required. From fuzzing, they noticed that adding a CRLF to the account email made it register as a distinct, valid email that still functioned as the original one!
  • For instance, victim@gmail.com%0d and victim@gmail.com worked as the same email, even though they were textually different. Using this, they had a complete authorization bypass, leading to the ability to unlock arbitrary cars given the owner's email. Pretty neat!
  • Overall, an amazing find! Input validation is extremely important and fuzzing is a great way to find strange bugs.
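The class of bug can be sketched as follows. The actual server logic is unknown, so this is an illustration of the pattern, not Hyundai's code: a permissive validator accepts a trailing control character that a later processing step silently strips, so two "different" emails collapse into one account.

```python
import re
from urllib.parse import unquote

def permissive_is_valid(email: str) -> bool:
    # Hypothetical, overly loose registration check: anything@anything.anything,
    # which happily admits control characters like \r.
    return re.fullmatch(r"[^@]+@[^@]+\.[^@]+", email) is not None

attacker_email = unquote("victim@gmail.com%0d")   # "victim@gmail.com\r"
victim_email = "victim@gmail.com"

assert permissive_is_valid(attacker_email)        # registration accepts it
assert attacker_email != victim_email             # stored as a distinct account

# ...but a later step (e.g. trimming whitespace before an authorization
# compare) collapses the two into the same identity.
assert attacker_email.strip() == victim_email
```

Any pipeline where one component stores the raw value and another compares a cleaned-up version is vulnerable to this split-identity trick.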

Exception(al) Failure - Breaking the STM32F1 Read-Out Protection- 1024

Marc Schink & Johannes Obermaier    Reference →Posted 3 Years Ago
  • The debug interface of the STM32F1 chip cannot be disabled. Instead, there is Flash Memory Read-Out Protection (RDP), which blocks all data access to flash via the debug interface. This article is about bypassing RDP.
  • While playing around with a development board with RDP turned on, the authors ran the reset halt command. When doing this, they got the following output: xPSR: 0x01000000 pc: 0x08000268 msp: 0x20005000. Why is this interesting? Raw register values are sent back, values we shouldn't have access to.
  • Why does this happen? A reset is a special kind of exception. When an exception occurs, the processor loads the exception entry address from the vector table to know what to do, a so-called vector fetch. Since the table is stored in flash memory, how can that vector be accessed?
  • The reset vector is fetched via the ICode bus, so the fetch happens over the instruction line instead of the standard data line. The bus being used is the reason why the read-out protection doesn't apply in this case!
  • In ARMv7-M there is a Vector Table Offset Register (VTOR) that determines the location of the vector table in the address space. This is normally used to relocate the vector table when switching between applications, but it can be abused. By changing the VTOR, we can point the vector table anywhere within the flash memory region!
  • Since a vector fetch reads an address out of flash and loads it into the PC, we can, by triggering interrupts, abuse this trust to slowly read out information we shouldn't have access to. Remember, we control everything besides flash via the debug interface.
  • Several entries in the vector table are inaccessible for functional reasons. However, we can wrap the index around (the table has 32 entries) to still reach those vectors! For instance, the normally inaccessible entry 1 can be accessed via interrupt 33, though with some limitations.
  • Now, for the moment of truth - extracting the information. This is done by doing the following steps:
    1. Perform a device reset to put the microcontroller into a well-defined state.
    2. Configure the microcontroller to trigger an exception of our choosing. Only a handful can be truly triggered via this method though.
    3. Single step in order to make the exception active. At this point, we can extract the data we would like.
  • Overall, this method, even with its shortcomings, was able to extract around 90% of the code in less than an hour on all of the chips. A pretty incredible feat, enabled by a small oversight on the part of the developers. Amazing blog post!
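The wraparound trick above boils down to simple modular indexing. This toy sketch (the real access checks and table layout live in the boot ROM) just shows how an out-of-range exception number aliases back onto a blocked entry:

```python
TABLE_ENTRIES = 32  # per the post, the table has 32 usable entry slots

def effective_vector(requested: int) -> int:
    # An index past the end of the table wraps around to the start.
    return requested % TABLE_ENTRIES

# The normally inaccessible entry 1 is reached by asking for "interrupt 33".
assert effective_vector(33) == 1
```

The restrictions only apply to the nominal index, so the aliased access slips past them.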

Nereus Finance Flashloan Attack Analysed and Exploited- 1023

Faith    Reference →Posted 3 Years Ago
  • Nereus Finance is a lending / borrowing protocol. This allows users to deposit their tokens to earn interest on them and borrow funds from this protocol.
  • Why would somebody want to borrow assets if they can't be under-collateralized? In the case of this protocol, the NXUSD token is a stablecoin alternative to USDC, and the main way to obtain NXUSD is by borrowing it. The main purpose of NXUSD is staking: handing a token to somebody else to use in order to earn rewards on it.
  • A liquidity pool (LP) is the main method for exchanging one token for another. An LP holds two or more tokens. When you put your own funds into the pool, you receive an LP token in return. This can be used for staking, as collateral, or many other things. In the case of this project and hack, the pool is USDC-WAVAX and deposits give back the JLP token.
  • When calculating the price of a liquidity pool, the price is strictly dependent on the ratio between the tokens in the pool. For instance, if there was a 1:1 ratio, trading for one token would give you an equal amount of the other one. If it was 2:1, then trading for two of one token gets you a single token of the other.
  • The contract JLPWAVAXUSDCOracle is used to calculate the price of JLP. This is done via the following steps:
    1. Get USDC price from an external oracle.
    2. Get Avax (Avalanche) price via an external oracle.
    3. Get the reserves of each token within the contract.
    4. Price is the following formula: JLP = (AvaxReserve * AvaxPrice + USDCReserve * USDCPrice) / totalSupplyJLP
  • This isn't some weird injection issue or anything... it's a math issue with a forgotten case: an attacker can obtain an insane amount of money quickly via a flash loan. The variables AvaxReserve and USDCReserve are somewhat controllable, since we can swap in and out of the pool. These variables also feed directly into the price of the JLP token, as mentioned above.
  • If an attacker swaps a ^*&@ ton of one token for another, the price of the JLP can be drastically skewed in either direction. To drop it low, we exchange in a large amount of USDC, since it is cheaper. At this point, the skewed exchange rate lets us borrow WAYYY more NXUSD (which is borrowed against JLP) than we should be able to.
  • The author lays out the steps:
    1. Use a flash loan to obtain a large amount of USDC and another currency to acquire JLP.
    2. Acquire JLP tokens at the normal price.
    3. Lower the exchange rate of the JLP token by swapping in a ton of USDC for wrapped AVAX.
    4. Using the JLP from before, use this to borrow the NXUSD. Remember, the exchange rate has been dropped, so we get more tokens than anticipated.
    5. Swap back the WAVAX for USDC to bring the exchange rate back.
    6. Pay back the flash loan after pocketing a huge profit in NXUSD.
  • An interesting note the author makes... we simply leave the JLP in the contract, since we profit from the NXUSD. Unlike the bank coming after you in the real world, the only thing forcing you to return the loan is the collateral deposited. Since we made more money than the JLP is worth, we simply leave it behind.
  • The author includes a very detailed proof of concept, explained well with a Hardhat setup. So, how does one fix this problem with the JLP token? Using a Time Weighted Average Price (TWAP) or forcing these steps across multiple blocks would solve the problem. An absolutely amazing post and I look forward to more of these in the future!
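The oracle math above can be sketched with a toy constant-product pool. Every number here is invented (the real pool sizes, swap amounts and direction differ), but it shows the core problem: with external token prices fixed, a single flash-loan-sized swap moves the spot-reserve JLP price far from its pre-swap value within one transaction.

```python
avax_price, usdc_price = 20.0, 1.0          # external oracle prices (fixed)
avax_reserve, usdc_reserve = 1_000.0, 20_000.0
total_supply_jlp = 1_000.0
k = avax_reserve * usdc_reserve             # constant-product invariant x*y = k

def jlp_price(avax_r: float, usdc_r: float) -> float:
    # The formula from the post: total pool value divided by LP supply.
    return (avax_r * avax_price + usdc_r * usdc_price) / total_supply_jlp

price_before = jlp_price(avax_reserve, usdc_reserve)

# Flash-loan-sized swap: dump 80,000 USDC in; AVAX drains out to keep x*y = k.
usdc_reserve += 80_000.0
avax_reserve = k / usdc_reserve

price_after = jlp_price(avax_reserve, usdc_reserve)

# One swap moved the oracle's answer by more than 2x in a single transaction.
assert price_after / price_before > 2
```

A TWAP resists this because the manipulated reserves only exist for one block's worth of the averaging window.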