Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

User can pay using archived price by manipulating the request sent to `POST /v1/payment_pages/for_plink` - 749

Gregxsunday - HackerOne    Reference →Posted 4 Years Ago
  • Stripe is an application for online payments. This researcher noticed that archived prices could be used on a payment link. As a result, an attacker could pay a different price than the current one.
  • This was due to a missing check that the price attached to the payment link was still active. It appears that the price's existence was validated, but not whether it was still enabled. Good find!
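The fix amounts to one extra condition server-side. A minimal sketch in Python (the dict shape and `active` field are assumptions modeled on Stripe's public price objects, not the actual for_plink handler):

```python
def validate_checkout_price(price):
    """Accept a price for checkout only if it exists AND is still active.

    Hypothetical sketch of the missing check; `price` is a dict shaped
    like the objects Stripe's API returns, e.g. {"id": "...", "active": False}.
    """
    if price is None:
        raise ValueError("unknown price")
    if not price.get("active", False):
        # The reported bug: existence was validated, but an archived
        # (deactivated) price was still accepted on the payment link.
        raise ValueError("price is archived")
    return price
```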

Security advisory for the standard library in Rust (CVE-2022-21658) - 748

The Rust Security Response WG    Reference →Posted 4 Years Ago
  • Most programming languages, including Rust, have wrappers around file system operations: creating, editing and deleting files. A common problem to worry about in these cases is symbolic links. If an attacker can convince a privileged program to operate on a path that is actually a symbolic link pointing somewhere else, this can create major security problems.
  • As a result, Rust has built-in protections for this type of attack. The standard library will NOT follow symbolic links unless it is explicitly allowed in the function call. For `remove_dir_all`, there is explicit documentation that says "This function does not follow symbolic links and it will simply remove the symbolic link itself." Awesome, so everything should be good!
  • Validating whether a file is a symbolic link or a regular file can go wrong in many ways though. These types of exploits commonly rely on Time of Check vs. Time of Use (TOCTOU) vulnerabilities, where a security check is performed first and the corresponding action happens later. If something changes between the check and the use, the original validation can be bypassed.
  • In the case of files and symbolic links, a check is done to see whether the path is a symbolic link. If the check passes, the standard library deletes the directory, believing no symlink is involved. However, this suffers from a TOCTOU problem: if the check sees a real directory that is then swapped for a symlink, the validation means nothing.
  • This race condition had an extremely tight window. But, with enough tries, an attacker would be able to exploit it. It is interesting that this was a known issue, yet the symlink validation suffered from the same bait-and-switch it was trying to prevent. Good find!
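The check-then-use gap can be sketched in a few lines (Python standing in for the standard library internals; the second function shows one way to close the window by making the no-symlink requirement part of the same syscall that grabs the directory):

```python
import os

def remove_dir_checked(path):
    # The vulnerable shape: check first, act later. An attacker who
    # swaps `path` for a symlink inside this gap defeats the check.
    if os.path.islink(path):          # time of check
        raise OSError("refusing to follow symlink")
    # ... race window: `path` can be replaced right here ...
    os.rmdir(path)                    # time of use

def open_dir_nofollow(path):
    # Closing the window: O_NOFOLLOW makes the open itself fail on a
    # symlink (ELOOP), so the object we validated is the object we got.
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW | os.O_DIRECTORY)
```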

Telenot Complex: Insecure AES Key Generation - 747

X41 D-Sec    Reference →Posted 4 Years Ago
  • The Telenot Complex alarm system uses Mifare DESFire EV1 and EV2 NFC tags to authorize users. Using software called compasX, it is possible to remotely access these alarms via the VdS 2465 protocol. The software communicates via TCP/IP.
  • While looking to automate pulling logs off the device with compasX, the authors decided to open the client in a disassembler. While learning all about the system, they noticed that srand(time()) was being used to generate AES key material. Can this be exploited?
  • The authors wrote a program that sets the PRNG seed with srand(time()), then pulls 16 bytes via rand(). They compare those bytes against a real key; a match proves the key was generated insecurely. And, unsurprisingly, they got a hit.
  • The protocol is as follows for this NFC card authentication process:
    1. The reader signals the tag to start the authentication process, using an AES key known to both the reader and the card.
    2. The tag generates a 16-byte random number and sends it encrypted with the AES key.
    3. The reader decrypts the number. This is then rotated one step to the left.
    4. The reader generates its own random number and appends the rotated value to it. The combined value is encrypted and sent back to the tag.
    5. The tag decrypts the data and verifies that its random number was rotated properly. The reader's random number is then rotated, encrypted and sent back to the reader.
  • If an attacker has an emulated tag, such as a Proxmark, a complex attack can be run on this. In step 3, an arbitrary piece of data can be sent. Then, when this is rotated and encrypted on the way back, we have encrypted data whose plaintext we KNOW. Using the script from above, a large number of AES keys can be tried until the expected value is found!
  • What makes this attack so interesting is that quirks of the DESFire protocol make it possible to brute force the key offline! To me, this is a design flaw in the protocol. The authors wrote actual code to do this with a Proxmark and were able to bypass the security of this system!
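The seed brute force is simple to sketch. Here a stand-in LCG replaces the C library's actual rand() (glibc's generator differs, but the attack only needs the key material to be fully determined by the srand() seed):

```python
def weak_rand_bytes(seed, n=16):
    # Stand-in PRNG: a classic LCG. Not glibc's rand(), but like it,
    # the entire output stream is fixed once the seed is known.
    state = seed & 0xFFFFFFFF
    out = bytearray()
    for _ in range(n):
        state = (1103515245 * state + 12345) & 0xFFFFFFFF
        out.append((state >> 16) & 0xFF)
    return bytes(out)

def recover_seed(observed_key, candidate_seeds):
    # Try every plausible timestamp as the srand() seed until the
    # derived key matches the one recovered from the device.
    for t in candidate_seeds:
        if weak_rand_bytes(t) == observed_key:
            return t
    return None
```

Since the seed is a Unix timestamp, the search space is tiny: a one-hour window either side of the suspected installation time is only a few thousand candidates.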

GSOh No! Hunting for Vulnerabilities in VirtualBox Network Offloads - 746

Max Van Amerongen - Sentinel Labs    Reference →Posted 4 Years Ago
  • Pwn2Own is a contest where contestants compete for the title of Master of Pwn. The author decided to tackle the very hardened target VirtualBox.
  • While searching through the code for interesting attack vectors, they noticed a memcpy in the Generic Segmentation Offload (GSO) handling used in NAT emulation. After analyzing various code paths and using an SMT solver, they discovered that they could control a fair amount of the data reaching this memcpy. A good attack surface to start with!
  • The specific code path with a lot of control was via paravirtualized networking. Paravirtualization has the guest install drivers that are aware they are running in a virtual machine, in order to work with the host to transfer data. One of these drivers is the virtio-net driver, which comes with the Linux source as a network adapter.
  • Generic Segmentation Offload (GSO) offloads the heavy lifting of checksumming network traffic and segmenting Ethernet packets. GSO is implemented via VirtIO to speed up the process of generating this information.
  • When the NAT code receives the GSO frame, it gets the full Ethernet packet to pass to a library for TCP/IP emulation called Slirp. A buffer is allocated for this packet, along with a size for the allocation. There is an assertion if the size is too big. However, the asserts are NOT compiled into release builds. Since the default allocation is the smallest size bucket, this leads to problems.
  • An additional vulnerability occurs in the validation of the guest GSO parameters. Even though this validation exists, the same assertion bug as above applies. As a result, a heap overflow can occur.
  • In the checksum offload code, a size parameter is blindly trusted without any validation. This bug leads to an integer underflow and an out-of-bounds read. Checksumming too much data does not seem interesting at first glance; however, by doing this multiple times, it turns into a weirdly complicated out-of-bounds read vulnerability.
  • Sadly, no exploitation details were given. However, it is assumed that the vulnerabilities can be used to escape the virtual machine. To me, the most interesting part is that there are multiple types of asserts in the library. One of them is compiled into the system while the other is only in some builds, but NOT the production builds. This seems to be a common problem and is worth checking for in other places.
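The release-build assert problem reproduces in miniature. A hedged Python sketch (the bucket size is invented; `python -O` strips `assert` just like NDEBUG strips C asserts in release builds):

```python
FRAME_BUCKET = 4096  # invented smallest-bucket size for illustration

def copy_frame_assert_only(frame):
    # Mirrors the vulnerable pattern: the size check is an assert, and
    # release builds strip asserts (python -O does the same here). In C
    # the oversized copy is a heap overflow; Python's bytearray just
    # grows, so this only models the missing check, not the corruption.
    assert len(frame) <= FRAME_BUCKET, "frame too large"
    buf = bytearray(FRAME_BUCKET)
    buf[:len(frame)] = frame
    return buf

def copy_frame_checked(frame):
    # The fix: validate unconditionally and fail closed in every build.
    if len(frame) > FRAME_BUCKET:
        raise ValueError("frame too large")
    buf = bytearray(FRAME_BUCKET)
    buf[:len(frame)] = frame
    return buf
```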

Knock Knock! Who's There? - An NSA VM - 745

fG    Reference →Posted 4 Years Ago
  • The NSA created a virtual machine within BPF to backdoor machines. The binary, dewdrop, uses a technique known as Port Knocking for communication. Instead of having a listening port, which is easily spotted by netstat and other commands, it uses libpcap to look for magic packets.
  • The tool is extremely quiet. Output is redirected to /dev/null, signal handlers are removed, core files are disabled... To make reversing harder, strings are XOR obfuscated, though with an off-the-shelf tool they are easy to decode.
  • The author dives into the weeds of the binary. They find that BPF is being used both to sniff the traffic and for the VM. To view the BPF bytecode, the Cloudflare tool bpf_tools can be used, and there is even a BPF debugger to be found as well.
  • The bytecode is completely reverse engineered to figure out what is going on. From looking at the instruction set (similar to assembly), we can figure it out. The BPF is mainly used for port knocking but supports DNS, HTTP, TCP, ICMP and many other types of information being sent to it.
  • Interesting post on how a highly sophisticated attacker receives commands on a compromised system. The author is also looking for a Linux developer with a great understanding of the kernel; could be a fun opportunity!
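A port-knocking trigger check might look roughly like this (the magic constant and packet layout are invented for illustration and are not dewdrop's real format):

```python
import struct

MAGIC = 0x1BADB002  # hypothetical trigger value, not the real one

def is_magic_packet(payload):
    # No listening socket: in the real implant, payloads are sniffed
    # off the wire via libpcap/BPF and inspected for a magic value and
    # a sane length at fixed offsets.
    if len(payload) < 8:
        return False
    magic, body_len = struct.unpack_from(">II", payload, 0)
    return magic == MAGIC and len(payload) >= 8 + body_len
```

Because nothing is bound to a port, scanners and netstat see nothing; only traffic matching the filter wakes the implant.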

Cache Poisoning at Scale - 744

Youstin    Reference →Posted 4 Years Ago
  • Caches are used on the web to make the internet faster. However, what if there is a desync between the cache and the website? This attack, known as web cache poisoning, is complicated to find but can cause huge damage when found.
  • Apache Traffic Server (ATS) is a widely used caching HTTP proxy. When a request is sent with a URL fragment, ATS forwards the request without stripping the fragment. Since the cache strips the fragment for the cache key but forwards it along, this may lead to a desync between the request being made and the cache.
  • If the proxies behind ATS encode %23 to #, a completely different cache key may be computed than the actual request made. If the backend normalizes ../, then XSS or Open Redirects may even be possible to change the page in action.
  • To test this at scale, the author wrote a tool to detect unkeyed headers for cache poisoning. While testing GitHub, they noticed that the Content-Type header was vulnerable when given an invalid value. By sending an invalid Content-Type header, the cached response would not work properly, causing a DoS for other users.
  • GitLab uses GCP and Fastly to host static files. Since GCP allows x-http-method-override by default, setting this header to a different method would cause issues. Even though a 405 error message for POST would not be cached, HEAD and PURGE would get cached, causing some major issues. This technique worked on targets besides GitLab as well.
  • Ruby on Rails is commonly deployed with the Rack middleware, where the x-forwarded-scheme header changes the scheme of the request. By sending http as the value, a 301 redirect would occur to the same location. If this was cached by the CDN, a redirect loop would occur, denying access to the file. This was exploited on HackerOne and Shopify.
  • The X-Forwarded-Host header additionally caused some issues. Using it, a 301 redirect could be served for JavaScript files, with the redirect then being cached. Since the JavaScript was being loaded into the user's page, this turned into a very serious XSS vulnerability.
  • Another attack involved URL parameters. The author noticed that for images, only the size parameter was cached. If two size parameters were passed in, both were included in the cache key, but the server only used the last one. This led to another DoS.
  • For identifying caching issues, the Param Miner tool for Burp Suite is fairly awesome. When checking results, the Age, X-Cache and several other headers can be useful for learning how the caching for the system works.
  • The author has a list of headers that were exploited in this research as well. Identifying cache poisoning vulnerabilities seems so hard in practice, since it requires weird quirks of the system. However, scans may sometimes be enough :)
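The fragment desync from the ATS bullets above can be sketched directly (a toy model of key derivation, not ATS's actual code):

```python
from urllib.parse import urldefrag

def cache_key(url):
    # ATS-style toy model: the fragment is dropped when building the
    # cache key...
    return urldefrag(url)[0]

def forwarded_url(url):
    # ...but the request is forwarded to the origin with the fragment
    # intact, so two different origin responses can share one entry.
    return url
```

Two URLs that differ only in the fragment collide to one cache entry, while the origin may treat them differently once intermediate proxies re-encode the fragment.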

Flickr Account Takeover - 743

Lauritz Holtmann    Reference →Posted 4 Years Ago
  • Flickr is a site for storing photos, videos and other media. The authentication flow is integrated with AWS Cognito using the OpenID Connect flow, which is quite similar to the standard OAuth2 flow. Cognito has an interesting quirk though: the token passed back to a user can be used with the AWS CLI.
  • By default, the token has some permissions for the Cognito AWS service. To start, the author simply ran get-user directly against AWS. To their surprise, this returned information about the user, including internal statistics. Besides being readable, the attributes can be written to as well.
  • Since these attributes control the flow of authentication and are assumed to ONLY be written by the backend of the application, this bypasses many of the verification steps. At this point, assumptions about the system have been broken, which is exactly where deep bugs tend to live.
  • OpenID Connect has a unique identifier for each user, called a sub, which according to the specification should be an unchangeable ID. While looking at the user attributes, the author noticed that the email was used instead. If a changeable value is used for the sub, then third parties relying on it may run into permission problems.
  • During the login flow, Flickr normalizes the email address to all lowercase letters on the backend. However, the user attributes being set do not get the same normalization. As a result, a user can set an email with uppercase letters and then assume the account of somebody who legitimately signed up with that email. Account takeover!
  • The author adds a few hints at the end for developers. First, be careful with the sub claim when authenticating users. Second, ensure that the Cognito attributes are locked down properly once the token is returned. Third, verify the email in the login flow for Cognito.
  • From the security researcher side, the more parties that are involved in authentication, the harder it is! Taking the time to understand the authentication flow will lead to DEEP bugs that are extremely impactful. This author has many other OpenID Connect bugs on their blog as well.
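The normalization mismatch boils down to two code paths disagreeing. A toy model (not Flickr's or Cognito's actual code):

```python
def login_identity(email):
    # Login flow: the backend lowercases the email before lookup...
    return email.lower()

def stored_attribute_email(email):
    # ...but the user-writable Cognito attribute was stored verbatim,
    # so an attacker can register a cased variant of a victim's email
    # that collides with the victim's identity only at login time.
    return email
```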

XNU: heap-use-after-free in inm_merge - 742

Sergei Glazunov - P0    Reference →Posted 4 Years Ago
  • When programming in multi-threaded applications, concurrent access can cause many security bugs. A protection for this is to add a lock to the data to ensure that only one thread can use this variable at a time. The other thread must wait for this to occur.
  • Mutexes and locks do come with the chance of deadlocks, where a thread is completely stuck in its state. Additionally, where should the lock go on a variable? In functions deeper down the stack? Up top? There is no consensus on this.
  • As a result, a thread may want to call a function that locks a variable it has already locked. To avoid deadlocking, the thread will temporarily drop the lock so the called function can take it itself. However, this leaves a very small window of opportunity for something else to claim this memory or do something malicious to it.
  • Considering this is an obvious code smell, this is not the first or last time you will see this bug. In this case, when inp_join_group needs to create a new membership entry, it briefly releases the socket's lock. Since the pointer was copied into a local variable, when the lock gets dropped, a concurrent call to this function can make the pointer in the local variable invalid.
  • The race condition can lead to a use after free vulnerability that is likely exploitable. This same bug exists in the IPv6 stack as well.
  • Overall, this is an interesting bug that was found via code review. Going forward, this is a code smell that I will look for on multi-threaded applications.
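The drop-and-reacquire pattern can be modeled in a few lines (Python stands in for the kernel code; the point is the stale local that survives the released lock):

```python
import threading

class InpSocket:
    # Toy model of the inp_join_group pattern: a shared entry guarded
    # by a lock, and a helper that needs to take the lock itself.
    def __init__(self):
        self.lock = threading.Lock()
        self.entry = object()

    def join_group(self, helper):
        with self.lock:
            entry = self.entry        # pointer copied into a local
        # Lock deliberately dropped so `helper` can take it; in this
        # window another thread can free/replace self.entry, leaving
        # our local dangling (a use-after-free in the real C code).
        helper(self)
        return entry                  # possibly stale

def racing_thread(sock):
    # Stands in for the concurrent call that wins the dropped-lock window.
    with sock.lock:
        sock.entry = object()         # the old entry is "freed"
```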

PS4 CCP Crypto Bug - 741

Flatz    Reference →Posted 4 Years Ago
  • The Crypto Coprocessor (CCP) is a separate processor that takes in requests to perform cryptographic operations. However, to make such a request, it must be cryptographically signed via HMAC. These are the keys to the kingdom.
  • In the request, you can specify the size of the key being used. Because of this, each byte of the key can be brute forced with a 1/256 chance of being correct per guess. By doing this byte by byte, the entire key can be recovered.
  • Cryptography is hard! Not just the math but the logic must be perfect as well. Such a meme of a bug!
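The byte-at-a-time recovery is worth spelling out. A sketch against a mock signing oracle (the real CCP interface differs; what matters is that the request chooses how many key bytes participate in the HMAC):

```python
import hmac
import hashlib

SECRET = b"0123456789abcdef"  # stand-in for the device's HMAC key

def ccp_sign(message, key_len):
    # Mock of the flawed interface: the *request* selects how many
    # bytes of the secret key are used for the HMAC.
    return hmac.new(SECRET[:key_len], message, hashlib.sha256).digest()

def recover_key(key_size=16):
    # Byte by byte: ask for a signature under a 1-byte key, brute force
    # that byte (256 guesses max), then extend to 2 bytes, and so on.
    msg = b"probe"
    known = b""
    for i in range(key_size):
        target = ccp_sign(msg, i + 1)
        for b in range(256):
            guess = known + bytes([b])
            if hmac.new(guess, msg, hashlib.sha256).digest() == target:
                known = guess
                break
    return known
```

This turns a 2^128 key search into at most 16 × 256 = 4096 guesses.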

Getting root on Ubuntu through wishful thinking - 740

Kevin Backhouse    Reference →Posted 4 Years Ago
  • This is the story of how the author successfully exploited CVE-2021-3939 in Ubuntu's accountsservice, then spent the next two weeks trying to figure out how his own exploit worked. It seemed like magic, even to him! The original bug was accidentally discovered while writing an exploit for another vulnerability.
  • There is a static variable (shared between threads) that is allocated once. In the function user_get_fallback_value, a pointer to this static variable is returned. However, in some code paths, this variable can be freed. Since the variable is only created the one time, this results in a double free on the string, which can be triggered an unlimited number of times.
  • When exploiting a double free vulnerability, the basic idea is to turn this into a use after free. The idea is to free the object once, get something useful allocated into this location, use the bug to free it again to create the UAF. Once it is in this state, it is MUCH easier to exploit, even if the original bug was a double free. Most of the time, we want multiple “owners” to both believe that they own the same chunk of memory.
  • When searching for primitives with this bug, the author came across many issues. The freed chunk is only 0x20 bytes in size, which hurts the stability of the exploit since such small chunks are reused frequently. The author also noticed the vulnerability could be used as an information disclosure via the user_new functionality; however, this only worked if the leaked address formed a valid UTF-8 string.
  • To successfully exploit this vulnerability, the author needed a useful 0x20-sized memory allocation, or to get the chunk to consolidate into a larger one. After hitting several dead ends (such as rewriting bus names), with no good targets in the 0x20 size, the author ran into some magic once they stepped away from their seat!
  • After attempting one exploit strategy with a wrong sized chunk, their exploit magically worked after several hours of a script running. After spending weeks trying to figure it out, they came to a crazy conclusion: two different threads were overwriting a function pointer! This happens because of the double free bug on an object with a function pointer inside of it.
  • When SetEmail or any call is made to Polkit, a struct called CheckAuthData is used. This struct has a function pointer that determines what call to make during a callback. This struct is also 0x20 in size.
  • As a result, if we trigger the double free on the 0x20-sized chunk, it MAY be the CheckAuthData struct getting freed. Then another request, such as SetPassword, is made, creating its CheckAuthData in this chunk's place. When the original request uses the callback (and it's authorized to), it will hit the wrong callback, resulting in SetPassword being called instead. Wow, that is wild!
  • The author really leaned into the madness on this one. Most of the time, exploits do not fall out of a tree like this. However, it is interesting to see how a simple double free, with no other primitives, leads to the ability to change a password. These data-driven attacks are incredibly hard to stop.
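The two-owners outcome is the crux, and a toy freelist shows it (this bears no relation to glibc malloc's real internals):

```python
class ToyHeap:
    # Minimal LIFO freelist with no double-free detection, enough to
    # show why freeing one chunk twice yields two "owners" later.
    def __init__(self):
        self.freelist = []
        self.chunks = {}
        self.next_id = 0

    def alloc(self, data):
        if self.freelist:
            cid = self.freelist.pop()
        else:
            self.next_id += 1
            cid = self.next_id
        self.chunks[cid] = data
        return cid

    def free(self, cid):
        self.freelist.append(cid)     # no check: a double free puts
                                      # the same chunk on the list twice

heap = ToyHeap()
a = heap.alloc({"callback": "check_auth"})    # like CheckAuthData
heap.free(a)
heap.free(a)                                  # the accountsservice bug
b = heap.alloc({"callback": "set_password"})  # first reuse...
c = heap.alloc({"callback": "other"})         # ...second reuse: same chunk
```

After the double free, `b` and `c` are the same chunk, so whichever request writes last silently overwrites the other's "function pointer", just like the SetEmail/SetPassword callback swap in the write-up.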