Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Juniper SSLVPN / JunOS RCE and Multiple Vulnerabilities- 992

Octagon    Reference → Posted 3 Years Ago
  • JunOS is a service to automate network operations and many other things. For this, there is a client application, called SSLVPN, that allows for securely connecting to it. This is what the author looked at.
  • The first vulnerability was a phar deserialization issue. Phar is a PHP archive format that contains metadata in a serialized form. Many PHP functions handle this format by default, such as the file handling functions. Using the phar:// stream wrapper, it is often trivial to gain code execution on servers. In this case, the file needs to be on the server first, which can be arranged via an unauthenticated file upload page.
  • The second issue is a reflected XSS payload via the error page's server name parameter. Putting <script>alert(0)</script> into the parameter gave them a simple XSS payload. If somebody clicked on such a link, it could be used to steal session information.
  • XPath is a language for querying information from an XML document. Since it builds dynamic queries, it suffers from the same injection issues as SQL. Using an XPath injection vulnerability, it is possible to manipulate JunOS admin sessions or manipulate future XPath queries. This was an authenticated bug, though it could be exploited via CSRF.
  • On the upload functionality, the file is written to /var/tmp/$filename. There is code that attempts to prevent directory traversal by looking for / characters on Linux. However, this can be bypassed because Apache normalizes backslashes into forward slashes.
  • I don't know exactly where this conversion happens, but it must come after the verification done by the application. Once we can control the location of a file (and its type), we can upload a PHP file to /www/dir/ to execute it. Regardless, it's a pretty neat bypass!
  • The final vulnerability is another RCE bug via local file inclusion. The user controls a parameter for loading a PHP file. However, this parameter is ALSO vulnerable to directory traversal, and the included file will be executed. As a result, any uploaded file can be executed once it is on the server.
  • Overall, this is an amazing example of why RCE bugs are so much easier to find in PHP. There are many gotchas on full display in this post: phar deserialization, file upload issues, local file inclusion... all of the big ones show up here.
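The filter-then-normalize ordering behind the traversal bypass can be sketched in a few lines of Python. The function names and the normalization layer are assumptions for illustration, not Juniper's actual code:

```python
# Hypothetical sketch of the traversal filter bypass described above.
# check_filename/save_upload are illustrative names, not the real code.

def check_filename(name: str) -> bool:
    """Naive filter: reject forward slashes to 'prevent' traversal."""
    return "/" not in name

def normalize(path: str) -> str:
    """A later layer (here standing in for Apache) treats backslashes
    as forward slashes."""
    return path.replace("\\", "/")

def save_upload(name: str) -> str:
    if not check_filename(name):
        raise ValueError("traversal detected")
    # The check already passed, but normalization happens afterwards.
    return normalize("/var/tmp/" + name)

# "..\..\www\dir\shell.php" contains no '/', so the filter passes,
# yet after normalization the path escapes /var/tmp/.
print(save_upload("..\\..\\www\\dir\\shell.php"))
```

The key point is the ordering: any check performed before a normalization step is checking a different string than the one that actually gets used.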

Exploiting Static Site Generators: When Static Is Not Actually Static- 991

Assetnote    Reference → Posted 3 Years Ago
  • Static site generators, such as Jekyll, Hugo, Next.js and others, were meant to be so bare bones that security risks were eliminated. This was because, in the past, people were getting pwned through the million plugins they would add to WordPress. Static site generators have become more dynamic with time though, leading us to this post.
  • As static site generators became so popular, many CDN/CI platforms for hosting them appeared; Netlify and Vercel are a few popular ones. Sam Curry, a different security researcher, was testing a website that used Netlify for hosting and Next.js for site generation. Sam sent the author of the post the following request:
    https://www.gemini.com/_ipx/w_12812,q_122/https%2f%2flocalhost%2f
    
    and response
    Hostname is missing: localhost
    
  • The hostname appears to be coming from the URL for some reason. Netlify (similar to Next.js in this way) can build and pull images from remote sources but uses an allowlist of permitted domains. The response above is an error saying that localhost is NOT in the allowlist. If this allowlist could be bypassed somehow, we might have an SSRF vulnerability.
  • Luckily, the Netlify code for this component is open source. While auditing the code, they found another bug. The protocol of the request can be derived from the x-forwarded-proto header, and the code concatenates the entire header value into the URL without validating it. For instance, supplying https://evil.com/? as the protocol makes evil.com the effective domain, with the legitimate host relegated to the query string. This allows for pulling arbitrary images.
  • Why is this bad though? It turns out that SVGs are supported with a specific format. Since SVGs are known to be able to execute JavaScript, this gives us XSS on the site. The post claims this is persistent but doesn't really go into details about why. My understanding is that this is a cache poisoning attack on top of the XSS that was found because the X-Forwarded-Proto wasn't in the list of cache keys.
  • They then looked for a variant in GatsbyJS, since they had focused on other targets prior. While reviewing the code, they found two instances of proxying code - one for data with any content type and another for any image extension besides SVG. If the development server was running instead of a production build (I used to host my site like this lolz), then an SSRF bug can be used here. One was a full read and the other was blind.
  • Overall, great research! I appreciate the work that Sam and Assetnote are doing to protect the web ecosystem at large for us. Keep up the great work and bug finding!
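The header-concatenation flaw can be modeled in a few lines. This is a toy sketch, not Netlify's actual code; the function names and the allowlist are assumptions:

```python
# Toy model of the x-forwarded-proto concatenation bug described above.
from urllib.parse import urlparse

ALLOWLIST = {"www.gemini.com"}

def build_image_url(proto: str, host: str, path: str) -> str:
    # The host is validated against the allowlist...
    assert host in ALLOWLIST
    # ...but the proto header is concatenated without any validation.
    return f"{proto}://{host}{path}"

def fetch_host(url: str) -> str:
    """The host the image fetcher actually connects to."""
    return urlparse(url).hostname

# Normal request: the allowlisted host is the real destination.
normal = build_image_url("https", "www.gemini.com", "/img.png")

# Attacker sets x-forwarded-proto to "https://evil.com/?" so that the
# allowlisted host ends up inside the query string instead.
evil = build_image_url("https://evil.com/?", "www.gemini.com", "/img.png")
print(evil)              # https://evil.com/?://www.gemini.com/img.png
print(fetch_host(evil))  # evil.com
```

The allowlist check and the URL construction disagree about which part of the string is the host, which is the whole bug.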

Exploiting Xbox Game Frogger Beyond to Execute Arbitrary Unsigned Code- 990

Artem Garmash    Reference → Posted 3 Years Ago
  • The original Xbox had many security problems, resulting in two kinds of mods that bypass all of the security checks to run unsigned code. First, the hardware mod required soldering a modchip onto the main board with a modified BIOS that disabled all security checks. Second, many video games had save file issues that allowed for code execution within the context of the game. There are several known exploits, such as 007, Tony Hawk 4 and a few others.
  • On the software security side, there are a few things to note. Most of the Xbox code runs within the context of the kernel in order to run faster. Additionally, there is no NX bit, making arbitrary code execution easy once a vulnerability has been found. The goal of this article was to write a save file exploit for the game Frogger Beyond in order to achieve arbitrary code execution (ACE), and on top of this, to take the ACE further and develop and run our own game.
  • The first step was turning the Xbox hardware into a debug kit. Using the SuperIO board, it is possible to get serial port connectivity and use a kernel debugger. Once this is set up, they disassemble the game's code to look for issues. They extend the player name in the save file so that it does not contain a null terminator. Sure enough, editing the save file this way triggers a segmentation fault.
  • The segmentation fault came from writing the player name into a stack buffer without any sanity check on its length. As a result, part of the player name was overwriting a pointer, and the code was attempting to write to that location. However, this has even worse consequences: if this address is valid, we ACTUALLY control the return address on the stack, allowing for hijacking the control flow of the program.
  • With the ability to execute arbitrary code, we want to build our own games. To recover the program and do this, a few things are done via shellcode. First, the author disables memory write protection by clearing the WP bit within the CR0 register. Once this is done, all memory (even read-only pages) is writable, making many more things possible. After this, the RSA public key is patched to allow for running unsigned games. This can be done by searching for the key in memory and changing it, or by referencing a kernel structure and editing that.
  • Once we have taken care of business, we can launch an executable by calling kernel methods directly. Since this is more portable and will work for a multitude of games, this is the route they chose to go. Alternatively, the XAPI methods can be called directly, but this comes with some restrictions. A few other things could be done to make the environment more friendly, such as invalidating the TLB and flushing the CPU cache.
  • The first attempt at the shellcode failed because of an extra question mark in the sprintf call; null bytes were becoming a problem here. Instead of writing the loader AFTER the copy, they wrote a smaller version BEFORE it. By jumping to the smaller version (shellcode without null bytes, of course), they could reconstruct an arbitrary address to jump to the full shellcode instead. This time, the small shellcode worked! Still, limitations on virtual addresses and null bytes led them to try something else.
  • The author went hunting for a second bug a few days later and found a relative array write and another bad strcpy issue. By using the existing overflow from the previous bug (overwriting the this pointer of the class), we can control where we want to write. Of course, the string we want to write is also controlled by us, giving us a write-what-where primitive.
  • Using this primitive, they wrote the bootstrap loader to an arbitrary address in physical memory (0x01010101). Since this write lands in physical memory and we have the previous vulnerability to jump to any location, chaining these two issues gives us a great primitive for getting things going. Overall, a great post with many interesting insights.
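Crafting a malformed save like the one described above might look something like this sketch. The field size, layout and address are invented for illustration - the post's real save format isn't reproduced here:

```python
# Illustrative sketch of building a malformed save file: the 16-byte
# name field and the target address are assumptions, not Frogger
# Beyond's actual layout.
import struct

NAME_FIELD_LEN = 16             # assumed fixed-size stack buffer
FAKE_RETURN_ADDR = 0x00014B60   # hypothetical shellcode address

def make_save(player_name: bytes) -> bytes:
    # A legitimate save would null-terminate the name; instead we fill
    # the whole field and keep writing past it so that the bytes after
    # the buffer land on the saved return address.
    padding = b"A" * NAME_FIELD_LEN
    overflow = struct.pack("<I", FAKE_RETURN_ADDR)  # little-endian x86
    return padding + player_name + overflow

blob = make_save(b"BBBB")
assert b"\x00" not in blob[:NAME_FIELD_LEN]  # no terminator in the field
print(blob.hex())
```

The real exploit also has to survive the game's parsing (no null bytes, valid checksum fields, etc.), which is exactly the kind of constraint the post wrestles with.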

Curve LP Oracle Manipulation: Post Mortem- 989

ChainSecurity    Reference → Posted 3 Years Ago
  • Curve is a popular Automated Market Maker (AMM) that sources its funds from Liquidity Pools (LPs). Many contracts interact with Curve to find out the going rate of a token. The get_virtual_price() function implements the pricing logic for the stable swap mechanism within Curve. LP tokens are commonly priced by computing the underlying tokens per share - essentially, total asset value / amount of LP tokens.
  • Reentrancy is an attack where a contract can be left and then reentered with only SOME state having been changed. With a partial change of state, it may be possible to recover funds or push other state into an exploitable position. To prevent attacks like this, a reentrancy modifier is commonly used on a single contract to prevent going back into it. However, this is typically only applied to the main state-changing functions on the contract.
  • So, what if we entered a contract, left it via an external call, then made a read-only call to the service from another contract? Since that is not a major state-changing function, it likely does not have the reentrancy modifier on it. Additionally, the code may be in an unintended state, creating an opportunity for financial manipulation.
  • When removing liquidity from Curve, there is a reentrancy modifier/decorator (written in Vyper) on the function remove_liquidity(). When calculating the price that the LP token should be swapped for, the code gets the balances of the contract, the balances of the user and the total supply. Once it does this, the LP token is burned (removing the tokens from circulation) and all of the funds are returned to the original caller.
  • In the flow above, total_supply has been updated but NOT the individual amount of each token. This leaves the contract in a very strange place for new calls being made. Since the price of the LP token is based upon total asset value / amount of LP tokens, we can make the amount of LP tokens very small while still keeping the assets very high. While in this state, calling get_virtual_price() returns an inflated value as a result. Any pool using this function from Curve would have been severely open to oracle manipulation at this point.
  • This ended up being exploited in the wild on Market.xyz. There's a YouTube video that demonstrates this as well.
  • Overall, reentrancy attacks are extremely hard to mitigate. Reentrancy guards and integer overflow protection are simply not enough; updating all of the state properly prior to any external call is crucial for security. It is sickening for defensive people but awesome for offensive folks. To me, it's like memory safety bugs in C - it is too easy to mess up, and the ecosystem should make doing the right thing easy.
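The manipulation window can be shown with a toy model of the pricing math (all numbers invented; this is a sketch of the stale-state window, not Curve's Vyper code):

```python
# Toy model of the read-only reentrancy window: total_supply is
# decremented before the per-token balances are paid out.

balances = {"tokenA": 1_000_000, "tokenB": 1_000_000}  # pool assets ($)
total_supply = 2_000_000                               # LP tokens

def get_virtual_price() -> float:
    # price = total asset value / amount of LP tokens
    return sum(balances.values()) / total_supply

price_before = get_virtual_price()   # 2M / 2M = 1.0

# remove_liquidity(): the LP tokens are burned first...
total_supply -= 1_500_000

# ...and at this point an external call hands control to the attacker,
# whose contract makes a read-only reentrant call while the balances
# are still stale:
price_during = get_virtual_price()   # 2M / 0.5M = 4.0, inflated 4x

# Only afterwards are the underlying assets actually paid out.
balances["tokenA"] -= 750_000
balances["tokenB"] -= 750_000

print(price_before, price_during, get_virtual_price())
```

Any lending protocol that reads get_virtual_price() inside that window values the attacker's LP collateral at four times its real worth.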

Towards the next generation of XNU memory safety: kalloc_type- 987

Apple Security    Reference → Posted 3 Years Ago
  • Memory corruption vulnerabilities are absolutely everywhere in modern exploit development - something like 80% of exploited bugs are memory corruption. This post goes into making the memory allocator of XNU (the OS kernel backing iOS and macOS) much harder to exploit via these types of vulnerabilities.
  • XNU has several memory allocator APIs but there are two main subsystems: the zone allocator for small chunks and the VM allocator which has page granularity and permission tracking; this post is focused on the zone allocator hardening. This allocator is a generic slab allocator, where a collection of pages are divided into equal chunks of data. There can be a special zone for a given use case, like ipc ports, which is a collection of chunks that can even be subdivided. When under memory pressure, chunks from a given zone may be reclaimed.
  • First, let's define memory safety in a few classes. Temporal safety means only using objects during their allocated lifetime (UAF, double free). Spatial safety means only accessing memory that belongs to the particular chunk (buffer overflow, OOB read). Type safety means only using an object as its intended type (type confusion). Definite initialization asserts that ALL allocations are properly initialized (info disclosures). Finally, thread safety ensures concurrent access is done safely (race conditions).
  • In most exploits, a tiny (constrained) vulnerability is used to build a stronger primitive and eventually hijack the control flow. The first goal in preventing exploitation is type isolation, which things like GigaCage in WebKit pioneered; preventing access to specific data structures makes exploitation much more difficult. The second goal is to prevent the trivial overwriting of pointers with data. By isolating each of these as much as possible from each other, a buffer overflow with a user-controlled string can no longer cause too much havoc. The rest of the article explains how these two systems are implemented.
  • An exploit UAF on iOS had the following flow:
    • Allocate a bunch of objects then trigger a UAF on one of these to create a dangling reference.
    • Free all of the objects for the target in step 1. This makes everything in the zone completely free.
    • Create memory pressure so that the zone containing the memory we want gets reclaimed and reused.
    • Allocate a large number of objects, hopefully allocating over the memory from before. This creates a type confusion that can be easily exploited.
  • Since the path above was so reliable, it was time to mess it up. The first step is to make virtual address space reuse across zones (step 3, the memory pressure) impossible. This was done by allowing the reuse of physical memory but NOT the virtual memory for single-type zones. Now, simple overwrites of objects like thread and proc can no longer be exploited in this way.
  • The next step is to isolate data from pointers. The first remediation is introducing an allocation type that only contains data, called KHEAP_DATA_BUFFERS, which lives in its own section of memory. The second is a size-based collection of zones, each with a particular namespace of allocations.
  • The final step in making exploitation harder is a non-deterministic allocator. Of course, every type of object cannot have its own zone because of memory constraints. However, the group of objects put into each zone can be randomly selected at boot time, making exploitation inconsistent. They chose 200 zones to divide into different groups depending on the size.
  • For dynamically sized allocations, they disallowed the usage of a non-data header followed by a data-only type; this was to prevent trivially moving to the right zone for exploitation. Additionally, a number of heaps were exclusively created for these variable sized allocations. Finally, arrays of pointers had their own heap section as well.
  • So, we've done a ton of work to harden the allocator. How does this stack up against other allocators? For type isolation, IsoHeap is perfect (no reuse of zones), while kalloc_type has a large number of buckets that are randomized for each size. Additionally, the heap metadata for kalloc_type is kept in a completely different section of memory, unlike others that store freelist pointers inline.
  • Overall, this is a great step forward for protecting the XNU kernel. With these mitigations making exploitation of heap issues harder and the presence of pointer authentication, XNU attacks will require extremely strong primitives from the beginning or a very sophisticated attacker.
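The boot-time randomized grouping idea can be sketched conceptually (the counts are from the post, but the grouping logic itself is a simplification; real kalloc_type groups by type signature, not by name):

```python
# Conceptual sketch of boot-time randomized zone grouping.
import random

KERNEL_TYPES = [f"type_{i}" for i in range(1000)]  # stand-ins for C types
NUM_ZONES = 200                                    # as described in the post

def assign_zones(boot_seed: int) -> dict:
    """Assign every type to one of NUM_ZONES zones, reseeded each 'boot'."""
    rng = random.Random(boot_seed)
    return {t: rng.randrange(NUM_ZONES) for t in KERNEL_TYPES}

boot_a = assign_zones(boot_seed=1)
boot_b = assign_zones(boot_seed=2)

# Two types that share a zone on one boot will usually not share one on
# the next, so an exploit cannot rely on a fixed victim/replacement
# co-location the way the UAF flow above does.
print(boot_a["type_1"] == boot_a["type_2"],
      boot_b["type_1"] == boot_b["type_2"])
```

The exploit-reliability cost is the point: a type confusion that needs two specific types in one zone only works on the fraction of devices where the boot-time shuffle happened to co-locate them.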

Visual Studio Code Jupyter Notebook RCE- 986

Luca Carettoni - DoyenSec    Reference → Posted 3 Years Ago
  • Jupyter Notebook is an interactive computing platform, while VS Code is a text editor, and somebody wrote an extension to make them work together. In the past, there was an XSS vulnerability in the handling of notebooks, but it wasn't used for anything too impactful. The goal of this post was taking that XSS to RCE.
  • VS Code uses the Electron framework, which embeds the Chromium browser, to run as a desktop application. Using the tool ElectroNG to audit the basic configuration, they found that nodeIntegration was turned on for VS Code. This gives JavaScript access to the Node.js runtime, yielding code execution on the device from any XSS issue.
  • The Jupyter Notebook integration loads content in a doubly nested iframe where nodeIntegration is turned off; of course, this is where the XSS occurred. This iframe sandboxes the user substantially but has the allow-same-origin flag on it. What does this mean? Files hosted on the same file system are considered the same origin.
  • Because the iframe allows access to the window as long as it's on the same origin (file system), we can access the top window. Since the top window has nodeIntegration turned on, accessing it allows us to get code execution.
  • So the next question is: "how do we put something into this folder to bypass SOP on the iframe?" It turns out that there is a parsing bug in determining the location of the file. Directory traversal can be used to trick the vscode-file handler into serving a file that really shouldn't be on that origin. Combining this with the XSS allows for calling into the top-level window, giving us code execution.
  • The final hurdle is that we do not know the user's local directory. However, this can be circumvented by using the postMessage API provided by VS Code to leak it. Additionally, this extension opts out of the Workspace Trust feature, so no further user interaction is required.
  • Even though the XSS bug had already been fixed, it was interesting to see how that old bug could be taken to code execution. Electron can be very dangerous with complex systems like this one.
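The path-confusion pattern behind the vscode-file trick can be modeled generically. This is a sketch of the vulnerable concatenate-then-trust pattern, with made-up paths; it is not VS Code's actual handler code:

```python
# Simplified model of the "trusted prefix" path confusion described above.
import posixpath

TRUSTED_ROOT = "/vscode/resources"  # assumed privileged origin directory

def resolve_naive(requested: str) -> str:
    # Vulnerable pattern: concatenate first, decide trust from the raw
    # string, and only normalize later (or in a different layer).
    return TRUSTED_ROOT + "/" + requested

def is_trusted(raw_path: str) -> bool:
    return raw_path.startswith(TRUSTED_ROOT)

raw = resolve_naive("../../home/user/attacker.html")
print(is_trusted(raw))          # the prefix check passes...
print(posixpath.normpath(raw))  # ...but the file served lies elsewhere
```

The fix is the usual one: normalize/canonicalize first, then compare the resolved path against the trusted root.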

Cisco Jabber: XMPP Stanza Smuggling with stream:stream tag- 984

Ivan Fratric - Google Project Zero    Reference → Posted 3 Years Ago
  • Cisco Jabber is a video conferencing service, similar to Zoom. For instant messaging, it uses an XML-based protocol called XMPP. Within XMPP, short snippets of XML called stanzas are sent over a stream connection, handled here by the Gloox XMPP library. Both control and message requests go over the same stream.
  • Building on similar research in Zoom: what if a message could be smuggled into the control portion of the stream? In Zoom, this was done using a unicode decoding difference between the client and the server.
  • Cisco modified the Gloox XMPP library in a few places. While parsing XMPP stanzas, the original library will exit upon seeing a new stream:stream tag, effectively ignoring the attempted escape. However, in the Cisco version, the function cleanup() is called within this code block instead. This resets the parser state, and any XML tag seen after this point becomes the new root tag.
  • Now, an attacker can arbitrarily control the data in the stream, allowing control messages and other things to be injected server-side; a simple reset of the parser makes this possible. In terms of what can be done with this exploit, the post doesn't say, though in Zoom it was possible to get code execution this way. Overall, an interesting bug and a nice variant of the Zoom research!
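The behavioral difference can be modeled with a toy stanza parser. This is purely illustrative - the real code is Cisco's modified Gloox C++, and the tag names below are stand-ins:

```python
# Tiny model of the parser-state bug: upstream Gloox bails out on a
# nested <stream:stream>, but the patched version resets the parser so
# the NEXT tag becomes a new root.

class StanzaParser:
    def __init__(self, resets_on_stream: bool):
        self.resets_on_stream = resets_on_stream
        self.root = None
        self.stanzas = []

    def feed_tag(self, tag: str):
        if tag == "stream:stream" and self.root is not None:
            if not self.resets_on_stream:
                # upstream behavior: refuse the stream restart
                raise ValueError("unexpected stream restart")
            self.root = None  # Cisco's cleanup(): wipe parser state
            return
        if self.root is None:
            self.root = tag   # first tag after a reset is the new root
        else:
            self.stanzas.append(tag)

cisco = StanzaParser(resets_on_stream=True)
for tag in ["stream:stream", "message", "stream:stream", "iq-control"]:
    cisco.feed_tag(tag)
print(cisco.root)  # attacker-supplied tag became the new root
```

With the reset, content smuggled inside a message body gets re-parsed as a top-level stanza, which is exactly the "stanza smuggling" in the title.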

A Journey To The Dawn - CVE-2022-1786 - 983

KyleBot    Reference → Posted 3 Years Ago
  • io_uring is a new subsystem in the Linux kernel used for speedy IO operations. Normally a program may need to make many privilege transitions via syscalls; with io_uring, a series of IO operations can instead be submitted together and performed asynchronously.
  • Rapid development == more bugs though, and complex code with a ton of asynchronous operations tends to have security bugs as well. Additionally, many bugs within io_uring have been used to break out of the Container-Optimized OS run by kCTF, making this a good attack surface for them.
  • When the function io_req_init_async is called, it assigns the current thread's identity as the worker of the IO request. However, if two threads submit an IO request to the same io_uring at the same time, they will be attached to the same work queue but with different identities. The fact that the same identity ends up used for two different requests is what causes the very subtle security issue.
  • If one of the threads exits, the IO events are all reaped. In this process, the exiting thread's identity gets used instead of the request submitter's. Why does this matter? One part of the code treats this as a heap object while the other treats it as a pointer to the middle of a structure. Aka, we have a type confusion creating an invalid free.
  • How exploitable is this? Because of CONFIG_HARDENED_USERCOPY (which is enabled on the Container-Optimized OS), the function used to copy data from userland (copy_from_user) cannot write across slot boundaries. So, the typical method of spraying msg_msg objects and corrupting them will not work. It's possible to spray this area with objects we don't own, but it's not trivial.
  • What's the strategy then? Allocate the victim object at an invalid offset (straddling two slots), then use the other parts of the slot (upper and lower) to corrupt it. The object timerfd_ctx lives in the kmalloc-256 slab and has plenty of pointers, making it a prime target for exploitation within our fake slot. For the upper and lower slots, the author decided to use the msg_msgseg object, which holds mostly user-controlled data.
  • Once the heap feng shui is done, we can get an information leak from the object. First, the linked list within timerfd_ctx points back to itself, leading to a nice heap leak via the msg_msgseg object. For breaking KASLR, arming the timer sets a function pointer which points into the .text section.
  • Hijacking code execution is easy via the function pointer within the timer, but this leads to a ton of issues. So, they decided to free the timer and attack the allocator's freelist instead. The CONFIG_SLAB_FREELIST_HARDENED flag is turned on, a type of pointer encoding that requires us to know the storage address of the pointer, a random value and the new pointer itself. By filling up the entire slab, we can force the next pointer to be NULL, leak the encoded value and calculate the random value needed to forge pointers ourselves.
  • By hijacking the freelist, we now have a completely functional arbitrary write primitive. Since they wanted a container escape (and more money), they targeted the way Linux loads executables via binfmt. The structures used for loading executables are writable! Using the primitive from above, the load_binary callback function can be abused to get PC control and ROP.
  • Game over, right? This worked on the author's machine but not the kCTF machine - the only writable part of the system was tmpfs, which was not compatible with the exploit, and the O_DIRECT file flag was needed to make it work. Only a few files could be opened with this flag in the container, and they were all very small, making the exploit unreliable.
  • After more experiments with the heap feng shui and the freelist, they decided to go with a different strategy: using the timerfd_ctx to ROP instead. From there, the same controlled binfmt overwrite could be used to get code execution. Another novel technique was calling msleep to gracefully end the ROP chain in the interrupt context so the kernel would not crash.
  • Amazing article! Great background, nice references, and I love the ups & downs included in the article. The thought process behind every decision is very clear, regardless of whether the thing worked or not. Great exploit and definitely worth the $90K from Google.
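The freelist-pointer recovery trick can be sketched numerically. This models the hardened encoding (stored value = next pointer XOR a per-cache random value XOR the byte-swapped storage address); treat the exact formula and all addresses as illustrative assumptions:

```python
# Model of hardened SLUB freelist pointer encoding and the NULL-entry
# key-recovery trick described above. All values are made up.
import struct

def swab64(x: int) -> int:
    """Byte-swap a 64-bit value."""
    return int.from_bytes(struct.pack("<Q", x), "big")

CACHE_RANDOM = 0x1337DEADBEEF4242  # per-cache secret, normally unknown

def encode(next_ptr: int, slot_addr: int) -> int:
    return next_ptr ^ CACHE_RANDOM ^ swab64(slot_addr)

slot_addr = 0xFFFF888000123400        # known from the earlier heap leak
leaked = encode(0, slot_addr)         # end of freelist: next_ptr == NULL

# Since next_ptr is 0, the leak is just CACHE_RANDOM ^ swab64(slot_addr),
# and we already know slot_addr:
recovered = leaked ^ swab64(slot_addr)
assert recovered == CACHE_RANDOM

# Now we can forge an encoded entry pointing wherever we like:
target = 0xFFFF888000AB0000
forged = encode(target, slot_addr)    # value to write into the slot
print(hex(forged))
```

This is why filling the slab (to force a NULL next pointer into the leaked slot) matters: a NULL plaintext turns the leak into a direct key disclosure.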

Bypassing vtable Check in glibc File Structures- 982

KyleBot    Reference → Posted 3 Years Ago
  • In glibc 2.34, the hooks used for debugging malloc were completely removed from the runtime configuration. Since these were commonly used for getting code execution, the author of the post wanted to find a new way to hijack the control flow. The author also runs the how2heap repo.
  • The FILE data structure is what programmers use for buffered IO. Within glibc, there is a vtable appended to the structure _IO_FILE_plus. In glibc 2.24, a restriction was added on vtable pointers, ensuring that they point within a special section of libc called __libc_IO_vtables. Additionally, some function pointers are encrypted (with a key stored in thread-local storage) to prevent modification.
  • A bypass for this was found though. First, the _IO_str_overflow path used function pointers outside of the vtable, so the same attack could be used as before. Additionally, the vtable could be misaligned to invoke the wrong functions. Again, this was patched in 2.28 by removing those function pointers. So, where are we now?
  • While manually auditing, the author found 81 unique function pointers within the special section. They checked all of them and their corresponding call sites to try to find any missing checks. Sadly, all of them are either validated via the special vtables or encrypted.
  • The encryption aspect is interesting - modifications can still be made IF we know the key. So, can we overwrite the key or leak the key stored in thread-local storage? The goal is to use the misalignments to eventually do this.
  • The FILE structure is very complicated. Instead of manually auditing, the author decided to use the symbolic execution tool angr. Since this is a bounded model checking problem, angr is the perfect tool for it. They configured angr to run and let it go to town!
  • The script found 10+ techniques, one of which is known as the House of Emma. The tool had found a list of calls through the function tables, all of which passed validation, that would eventually give control over RIP.
  • It turns out that a list of function pointers in _wide_vtable was not being validated by the vtable checker. Three of the discovered techniques were already known as the House of Apple, but the others were brand new. Overall, a good article with fun memes in it!
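Why a pure range check isn't enough can be shown with a tiny model. The addresses, section bounds and error message below are invented; this only illustrates the misalignment idea, not glibc's actual validator:

```python
# Sketch of why a bounds-only vtable check permits misalignment: a
# pointer anywhere inside the section passes, so the "wrong" function
# slot gets invoked. All values are made up for illustration.

VTABLE_SECTION = (0x1000, 0x2000)  # pretend __libc_IO_vtables bounds
SLOT_SIZE = 8                      # one function pointer per slot

def io_validate_vtable(ptr: int) -> int:
    lo, hi = VTABLE_SECTION
    if not (lo <= ptr < hi):
        raise RuntimeError("invalid stdio handle")  # illustrative message
    return ptr  # NOTE: no alignment requirement

legit_vtable = 0x1100
# Shift the vtable pointer by three slots: still inside the section, so
# the check passes, but every virtual call now lands on a different
# (attacker-chosen) entry of some other vtable.
misaligned = legit_vtable + 3 * SLOT_SIZE
print(hex(io_validate_vtable(misaligned)))
```

This is the same class of trick the 2.28-era bypasses used, and the reason checks on individual function pointers (or encryption) were added on top of the section bounds.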

A “Hat Trick” of DeFi Hacks Underscores the Importance of DeFi Security- 981

Halborn    Reference → Posted 3 Years Ago
  • Three major hacks took place in a single day, resulting in millions of dollars being stolen.
  • The first vulnerability was in Rabby Swap. The contract's router function passed arbitrary, user-supplied parameters to functionCallWithValue. This allowed a user to have the router call an arbitrary function with an arbitrary set of arguments.
  • Using this vulnerability, they were able to call swap from the context of the router contract. With this, previous approvals from other users could be abused to steal all of the money from their wallets. Apparently an audit had taken place but completely missed this issue.
  • The TempleDAO hack was really simple. The function migrateStake had no access controls, and it did not verify the source address or stake value of the old staking contract. As a result, an attacker could call the contract with a fake old address and stake value, mint themselves tokens and drain the entire contract.
  • Finally, Mango Markets, a trading platform, was hacked. A flash loan was used to inflate the price oracle of the Mango token from 30 cents to 91 cents.
  • Since this increased the value of the attacker's collateral, they could borrow even more funds from the protocol. Why is this price increase so bad? By taking out a massive loan against the inflated collateral, the attacker could then let the price of the token drop back down, abandon the collateral and keep the loan.
  • For Mango Markets, the crazy part is that the hacker came out, said they would keep some of the funds as a bug bounty payment, and claimed they were simply using the protocol as intended. Even though this is obviously not true, how do you define expected vs. unexpected functionality in a finance market? The guy kept $45 million, and his identity is public knowledge.
  • Overall, three interesting hacks that led to $100 million being stolen. Super interesting!
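The Mango-style borrowing math is worth seeing in numbers. This is a back-of-the-envelope model: the oracle prices come from the post, but the collateral amount and loan-to-value ratio are assumptions for illustration:

```python
# Toy model of oracle-manipulation borrowing power. The 0.30 -> 0.91
# price move is from the post; the token count and LTV are invented.

COLLATERAL_TOKENS = 100_000_000  # attacker's MNGO position (assumed)
LTV = 0.8                        # assumed loan-to-value ratio

def borrow_limit(oracle_price: float) -> float:
    """Maximum borrowable value against the collateral at a given price."""
    return COLLATERAL_TOKENS * oracle_price * LTV

honest = borrow_limit(0.30)  # what the position honestly supports
pumped = borrow_limit(0.91)  # after the flash-loan price pump

# The extra borrowing power exists only while the oracle is inflated;
# once the price falls back, the attacker walks away with the loan.
print(honest, pumped, pumped - honest)
```

The attack is profitable whenever the extra loan extracted exceeds the cost of pumping the (thinly traded) oracle market, which is why oracle liquidity matters as much as contract correctness.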