People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!
Through the phar:// stream wrapper, it is trivial to gain code execution on servers: PHP deserializes a phar archive's metadata whenever a filesystem function touches a phar:// path. In this case, the file needs to already be on the server, which can be arranged via an unauthenticated file upload page.

Injecting <script>alert(0)</script> into the parameter gave them a simple XSS payload. If somebody clicked on this link, it could be used to steal session information.

Files land at /var/tmp/$filename. There is code that attempts to prevent directory traversal by looking for / on Linux. However, this can be bypassed because Apache normalizes backslashes into forward slashes, letting the payload be dropped under /www/dir/ to execute it.
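As a rough sketch of the traversal bypass (variable names and paths are illustrative, not the vendor's actual code), the forward-slash-only filter and the normalization that defeats it look something like this:

```python
import posixpath

def sanitize(filename):
    # Model of the flawed check: it only rejects forward slashes.
    if "/" in filename:
        raise ValueError("directory traversal detected")
    return filename

# A backslash payload sails straight through the filter...
payload = r"..\..\www\dir\shell.php"
safe = sanitize(payload)  # no exception raised

# ...but the server later normalizes backslashes into forward slashes,
# so the path the filesystem actually sees escapes /var/tmp entirely.
normalized = safe.replace("\\", "/")
print(posixpath.normpath("/var/tmp/" + normalized))
# -> /www/dir/shell.php
```

The filter and the path consumer disagree on what a separator is, which is the whole bug.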
Regardless, it's a pretty neat bypass!

Requesting https://www.gemini.com/_ipx/w_12812,q_122/https%2f%2flocalhost%2f produced the response:

Hostname is missing: localhost
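A quick sketch of why the proxy ends up with no hostname, assuming the service URL-decodes the path segment and then parses the result as a URL (the decoding step is an assumption about the internals):

```python
from urllib.parse import unquote, urlparse

# The image proxy receives the percent-encoded target as a path segment.
decoded = unquote("https%2f%2flocalhost%2f")
print(decoded)  # https//localhost/

# With no ":" present, nothing is recognized as a scheme or authority,
# so the whole string parses as a bare path and no hostname is found.
parsed = urlparse(decoded)
print(parsed.hostname)  # None
```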
The protocol of the request is derived from the x-forwarded-proto header. When building the target URL, the code concatenates the entire string from the header without validating it. For instance, a proto of https://evil.com/? is accepted, and evil.com becomes the new domain to be used. This allows for the pulling of arbitrary images. Worse still, X-Forwarded-Proto wasn't in the list of cache keys.

The write went through a sprintf call, so nullbytes were becoming a problem. Limitations on virtual addresses and nullbytes led them to try something else: instead of writing the loader AFTER the copy, they wrote a smaller version BEFORE it. By jumping to the smaller version (shellcode without nullbytes, of course), they could reconstruct an arbitrary address and jump to the full shellcode instead. This time, the small shellcode worked!

Next is a strcpy issue. By using an existing overflow from the previous bug (overwriting the this pointer of the class), we can control where we want to write. Of course, the string being copied is controlled by us, giving us a WRITE-WHERE primitive (e.g. 0x01010101). Since this write sits in physical memory and we have the previous vulnerability to jump to any location, chaining these two issues gives us a great primitive for getting things going. Overall, a great post with many interesting insights.

The get_virtual_price() function implements the logic for the stable swap mechanism within Curve. LP tokens are commonly priced by computing the underlying tokens per share: essentially, total asset value / amount of LP tokens. The interesting function is remove_liquidity(). When calculating the price that the LP token should be swapped for, the code gets the balances of the contract, the balances of the user and the total supply. Once it does this, the LP token is burned (removing the tokens from circulation) and all of the funds are returned to the original caller. At this point, total_supply has been updated but NOT the individual amount of each token. This leaves the contract in a very strange state for any new calls being made.
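That in-between state can be sketched with a toy model (a simplified two-token pool with made-up numbers, not Curve's actual Vyper code):

```python
# Toy two-token pool; all numbers are hypothetical.
balances = [1_000_000, 1_000_000]   # underlying token balances
total_supply = 2_000_000            # LP tokens in circulation

def virtual_price():
    # Simplified stand-in for get_virtual_price():
    # total asset value per LP token.
    return sum(balances) / total_supply

print(virtual_price())  # 1.0

# remove_liquidity() burns the caller's LP tokens first...
total_supply -= 1_000_000
# ...and only afterwards pays out and updates the balances. Code that
# runs in between (e.g. a reentrant callback) sees burned supply but
# untouched balances:
print(virtual_price())  # 2.0 -- an inflated price
```

Any oracle reading the price during this window sees the inflated value.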
Since the price of the LP token is based upon total asset value / amount of LP tokens, we can make the amount of LP tokens very small while still keeping the assets very high. While in this state, calling get_virtual_price() returns an inflated value as a result. Any pool using this function from Curve as an oracle would have been severely open to manipulation at this point.

First comes KHEAP_DATA_BUFFERS, which lives in its own section of memory. Secondly, a size-based collection of zones with a particular namespace of allocations. kalloc_type has a large number of buckets that are randomized for each size. Additionally, the heap metadata for kalloc_type sits in a completely different section of memory, unlike other allocators that store freelist pointers inline.

nodeIntegration for VSCode was turned on. This gives JavaScript access to the Node runtime, turning any XSS issue into code execution on the device. Inside the iFrame, nodeIntegration is turned off; of course, this is where the XSS occurred. The iFrame sandboxes the user substantially but has the allow-same-origin flag on it. What does this mean? Files hosted on the same file system are considered the same origin. The target is the top window: since the top window has nodeIntegration turned on, accessing this window allows us to get code execution. They abused the vscode-file handler to load a file on the domain that really shouldn't be used. Combining this with the XSS allows for calling into the top-level domain, giving us code execution.

The upstream parser rejects a second stream:stream tag, effectively ignoring the attempted escape. However, in the Cisco version, the function cleanup() is called within this code block instead. This resets the parser state, and any XML tag seen after this point becomes the new root tag.

io_uring is a new subsystem in the Linux kernel used for speedy IO operations. Without it, a program may need to do privilege transitions many times via syscalls.
Instead, a series of IO operations can be submitted at once and performed in parallel. When io_req_init_async is called, it assigns its own identity to be the worker of the IO request. However, if two threads submit an IO request to the same io_uring at the same time, they will be attached to the same work queue but with different IDs. The fact that the same identity is used for two different requests is what causes the very subtle security issue.

With CONFIG_HARDENED_USERCOPY (which is enabled on the Container-Optimized OS), the function used to copy data from userland (copy_from_user) cannot be used across slot boundaries. So the typical method of placing a msg_msg here and corrupting it will not work. It's possible to spray this area with objects we don't own, but it's not trivial.

timerfd_ctx sits within the kmalloc-256 slot and has plenty of pointers, making it a prime target for exploitation within our fake slot. Around the fake slot, the author decided to fill the upper and lower slots with the msg_msgseg object, which holds mostly user-controlled data. One pointer in timerfd_ctx points back to itself (into the heap), leading to a nice leak through the msg_msgseg object. For breaking KASLR, arming the timer sets a function pointer into the .text section.

The CONFIG_SLAB_FREELIST_HARDENED flag is turned on, a type of pointer encoding that requires an attacker to know the storage address of the pointer, a random value and the new pointer itself. By filling up the entire slab, we can force the next pointer to be NULL, leak the encoded value and calculate the random value needed to write the pointer ourselves.

The next target is binfmt. The structures used for loading executables are writable! Using the primitive from above, the load_binary callback function can be abused to get PC control and ROP.

The filesystem was tmpfs, which was not compatible with the exploit: the O_DIRECT file flag was needed to make it work. Only a few files could be opened with this flag in the container and they were all very small, making the exploit unreliable. The author pivoted to using timerfd_ctx to ROP instead.
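The NULL trick can be sketched like this (a simplified model of the CONFIG_SLAB_FREELIST_HARDENED XOR encoding; recent kernels also byte-swap the storage address, which is omitted here, and the addresses below are made up):

```python
import secrets

# Per-cache secret, unknown to the attacker (models s->random).
RANDOM = secrets.randbits(64)

def encode(next_ptr, ptr_addr):
    # Simplified freelist pointer encoding: the stored value mixes the
    # next-free pointer, the per-cache secret and the storage address.
    return next_ptr ^ RANDOM ^ ptr_addr

PTR_ADDR = 0xffff888011112222  # where the freelist pointer is stored

# If the slab is completely full, the next-free pointer is NULL, so the
# stored value collapses to RANDOM ^ PTR_ADDR...
leak = encode(0, PTR_ADDR)

# ...and knowing the storage address recovers the per-cache secret:
recovered = leak ^ PTR_ADDR
assert recovered == RANDOM

# Arbitrary fake freelist pointers can now be forged:
forged = encode(0xdeadbeef, PTR_ADDR)
assert forged ^ recovered ^ PTR_ADDR == 0xdeadbeef
```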
Using this, the same controlled binfmt overwrite could be used to get code execution. Another novel technique was calling msleep to gracefully end the ROP chain in the interrupt context so that the program would not crash.

The FILE data structure is used by programmers everywhere. Within glibc, there is a vtable added to the structure, _IO_FILE_plus. In glibc 2.24, a restriction was added to the vtable pointers by ensuring that they fall within a very special section of libc called __libc_IO_vtables. Additionally, some pointers are encrypted (with the key stored in thread-local storage) to prevent modification. The _IO_str_overflow pointers, however, lived outside of the vtable, so the same attack could be used as before. Additionally, the vtable could be misaligned to invoke the wrong functions. Again, this was patched in 2.28 by removing the function pointers. So, where are we now?

Enter angr. Since this is a bounded model checking problem, angr is the perfect tool for it. They configured angr to run and let it go to town! Among the findings: _wide_vtable was not being validated by the vtable checker. Three of these techniques were already known as the House of Apple; the others discovered were brand new. Overall, a good article with fun memes in it!

The router exposed functionCallWithValue with arbitrary parameters passed to it. This allowed a user to pass in an arbitrary set of arguments and an arbitrary function to be executed as the router, for instance calling swap from the context of the router contract. Using this, previous approvals from other users could be abused to steal all of the money from their wallets. Apparently an audit took place but completely missed this issue.

migrateStake had no access controls. Additionally, the function did not verify the source address or stake value of the old address. As a result, an attacker could call the contract with a fake old address and stake value, mint themselves tokens and drain the entire contract.
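A toy model of that broken migration flow (class, method and parameter names are hypothetical, not the real contract's code):

```python
# Toy staking contract modeling the two missing checks.
class StakingContract:
    def __init__(self, funds):
        self.funds = funds
        self.balances = {}

    def migrate_stake(self, caller, old_address, stake):
        # BUG 1: no access control -- anyone can call this.
        # BUG 2: old_address and stake are trusted as-is; nothing
        # verifies that old_address ever held that stake.
        self.balances[caller] = self.balances.get(caller, 0) + stake

    def withdraw(self, caller):
        amount = min(self.balances.get(caller, 0), self.funds)
        self.funds -= amount
        self.balances[caller] = 0
        return amount

pool = StakingContract(funds=1_000_000)
# Fake old address, fabricated stake equal to the whole pool:
pool.migrate_stake("attacker", old_address="0xdead", stake=1_000_000)
print(pool.withdraw("attacker"))  # 1000000 -- the entire contract drained
```

Either a caller check or a verification of the claimed old stake would have stopped this.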