Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Kernel Vmalloc Use-After-Free in the ION Allocator- 644

Gyorgy Miru    Reference → Posted 4 Years Ago
  • ION is an allocator used by the Android kernel: an extensible memory-management framework for DMA buffers. These buffers are represented by file descriptors that can be shared between userspace and the kernel.
  • The lowest-level call is ion_buffer_kmap_get(), which increments a buffer's reference counter and calls a heap-specific memory-map function.
  • Several operations can be performed on these file descriptors. The DMA_BUF_IOCTL_SYNC ioctl can arbitrarily increment or decrement the shared buffer's reference counter. This reference-counting issue lets a malicious user trigger a use-after-free.
  • The exploitation of this bug would heavily depend on the usage of the allocator in the kernel. Interesting find regardless!
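The refcount imbalance above can be sketched as a toy model (all names and the class shape are hypothetical; this is not the actual ION code): an ioctl that lets userspace drive the decrement path frees the buffer while a kernel mapping still holds it.

```javascript
// Toy model of an unbalanced-refcount use-after-free (hypothetical, not ION code).
class DmaBuffer {
  constructor() { this.refs = 1; this.freed = false; } // creator's fd holds one ref
  get() { this.refs += 1; }
  put() {
    this.refs -= 1;
    if (this.refs === 0) this.freed = true; // buffer memory released
  }
}

// A DMA_BUF_IOCTL_SYNC-style handler that lets userspace call put() arbitrarily:
function ioctlSync(buf, times) {
  for (let i = 0; i < times; i++) buf.put(); // no check against outstanding mappings
}

const buf = new DmaBuffer();
buf.get();          // kernel mapping takes a reference (refs = 2)
ioctlSync(buf, 2);  // userspace decrements twice -> refs = 0, buffer freed
// the kernel mapping now points at freed memory: use-after-free
```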

Parallels Desktop Guest to Host Escape- 643

Ben McBride - Trenchant    Reference → Posted 4 Years Ago
  • This post documents the research behind a VM escape of Parallels Desktop for Pwn2Own. It is interesting to see both the start and the end of the research, as we usually only see the final result.
  • The author did a solid amount of reading on VM escapes. After a while, they noticed that emulated hardware components had been gotten wrong on several occasions in VirtualBox and VMware. As a result, they chose to target the virtual hardware.
  • The author wrote a simple port I/O fuzzer that wrote random data to random I/O port addresses from a Linux guest. From there, they built a more precise fuzzer that hit MMIO (memory-mapped I/O) with random bytes. Both fuzzers produced crashes, some tripping ASSERTs and some genuine memory-safety failures.
  • Parallels dumps logs quite nicely for us! Each crash log included a memory dump of what had happened. Reverse engineering one of the crashes showed that something had occurred in the hypervisor that should not have been possible; the code handled the virtio VGA device settings.
  • More reverse engineering, guided by the virtio specification, traced the functionality to the PCIVirtIOWriteMM function.
  • All of the real crashes came down to one bug: the driver feature select was user-controllable and used as an index into an array without ever being validated. This led to a straightforward relative out-of-bounds read and write.
  • To get code execution, they found a table of function pointers: the port I/O handlers. By writing their own pointer into an unused handler slot, they could redirect hypervisor execution by issuing the corresponding port I/O from the guest.
  • Turning this into shellcode execution was trivial, as the __data and __bss sections do not have the NX bit set. Shellcode could be written into user-controlled data in those sections, then executed via the fake port handler. So, what is next?
  • Arbitrary code execution on a test platform is the hard part, but there was still more to do. They needed to move from their test MacBook to the most recent version of macOS Big Sur. Additionally, they needed to demonstrate complete control of the system, not simply code execution.
  • To demonstrate control over the host device, they decided to pop a calculator on the host machine. This was done by mapping a physical memory page in the hypervisor as readable and writable, then scanning for a target page on the host. Once a target page was found, they would modify the process to pop a calc. Not hard, just annoying to do.
  • The change to Big Sur ended up being a big deal. The exploit code now had to account for several exploit mitigations and several changes to the architecture of the hypervisor.
  • Bypassing ASLR was quite easy with the relative read. To get more powerful arbitrary read/write primitives, they overwrote the port I/O handler parameters: with these values under their control, AhciIdpIndexInPortFunc gave an absolute read and AhciIdpIndexOutPortFunc an absolute write.
  • Here are some tips from the author at the end:
    • For Pwn2Own, just try! Not everything in this competition is novel, crazy work. Sometimes simple things work well. You find vulns by DOING, not READING.
    • Reading previous work and specifications helps a lot with ideas.
    • When first on a target, do something really simple to get familiar. Reverse something easy or set up a simple fuzzer to build some momentum.
    • Post-mortem analysis in security research is really important: see what you missed and why you missed it.
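The core index bug can be sketched in a few lines (a toy memory layout, not Parallels' actual code): a guest-controlled selector indexes a feature table with no bounds check, so out-of-range values reach adjacent hypervisor state.

```javascript
// Toy model of the unvalidated feature-select bug (hypothetical, not Parallels' code).
// The feature table sits inside a larger block of hypervisor state; indexing past it
// reaches neighbouring fields -- a relative out-of-bounds read/write.
const state = {
  features: [0x1, 0x2],       // legitimate entries: feature_select 0 and 1
  secretPointer: 0xdeadbeef,  // adjacent "hypervisor state"
};
const flat = [...state.features, state.secretPointer]; // contiguous memory, flattened

function readFeatureVulnerable(featureSelect) {
  return flat[featureSelect];  // no bounds check: index 2 leaks secretPointer
}

function readFeatureFixed(featureSelect) {
  if (featureSelect >= state.features.length) throw new RangeError('bad feature_select');
  return state.features[featureSelect];
}
```

The same missing check turns the write path into a relative out-of-bounds write, which is what made the handler-table overwrite possible.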

A tale of making internet pollution free - Prototype Pollution Bugs- 642

s1r1us    Reference → Posted 4 Years Ago
  • JavaScript is a prototype-based language. Practically, this means that newly created objects carry over the properties and methods of their prototype (__proto__) object. This prototype-based inheritance gives JavaScript flexibility and power, but with great power comes great responsibility!
  • The attack only works when a merge operation can overwrite the shared prototype. From that point forward, objects inherit the attacker-chosen value, which can lead to XSS on the client side or RCE on the server side.
  • The bulk of the client-side testing was done by setting __proto__[field] = value and then checking whether the value had been inherited by the application. They wrote a Chrome extension that does this automatically when visiting a page, which is pretty rad!
  • While testing, the authors found that 80% of the nested parsers in JS they examined were vulnerable to this attack. All they had to do was find sites using this software and exploit it for further gain. Exploitation required finding an XSS gadget that existed within the context of the application by poisoning some important field. They have several interesting case studies on this.
  • The first case was Jira. They found that Jira Service Management used Backbone for query-parameter handling. By passing in __proto__.x=11111 as a parameter, the prototype could be poisoned. Using the Untrusted Types extension, a few interesting sinks could be found to inject directly into the DOM.
  • The authors repeatedly told developers about the issue but did not tell them the proper way to fix it. Besides __proto__, pollution can also go through [constructor][prototype]. If you're a bug bounty hunter, this is a really good way to make extra money from the same bug.
  • While browsing with the extension enabled on the Apple Watch application, they noticed a pollution issue. To exploit it, they poisoned the src attribute and set the onerror handler to JavaScript they controlled; when the image failed to load, that JavaScript executed in the context of the page. The URL query parameter is ?__proto__[src]=image&__proto__[onerror]=alert(1).
  • Segment's analytics library used querystring, which is vulnerable to prototype pollution. They found that pollution was only possible if the property was a number. They found a really crazy payload if knockout.js was being used, but nothing else.
  • If you want to see a real example of this vulnerability in action, check out Trello. If you go to the developer console and type in __proto__[123], you will notice that the value is set to 'xx'. To me, this was really helpful for visualizing the vulnerability class.
  • This post was awesome for learning how prototype pollution works. I'm excited to add the Chrome extension they pointed out and add my own payloads as well. Thanks for the real-world examples, friends!
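The vulnerable merge pattern behind all of this can be shown in a few lines (a generic sketch of the common pattern, not any specific library's implementation):

```javascript
// Minimal vulnerable recursive merge (a common pattern, not a specific library's code).
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (source[key] && typeof source[key] === 'object' &&
        target[key] && typeof target[key] === 'object') {
      merge(target[key], source[key]); // recursing into "__proto__" reaches Object.prototype
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON: JSON.parse creates an own "__proto__" property,
// so Object.keys() sees it and merge() walks into the real prototype.
merge({}, JSON.parse('{"__proto__": {"polluted": "xx"}}'));

const victim = {};          // any object created afterwards...
// victim.polluted === "xx" // ...inherits the attacker's value
```

A merge hardened against this skips `__proto__` and `constructor` keys (or builds objects with `Object.create(null)`), which is why fixes that only filter `__proto__` remain exploitable via [constructor][prototype].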

Multiple bugs allowed malicious Android Applications to take over Facebook/Workplace accounts- 641

Youssef Sammouda    Reference → Posted 4 Years Ago
  • The author attacked the Android application for Facebook Workplace (Facebook's enterprise-internal product). In particular, they hit the OAuth flow.
  • The OAuth flow for mobile applications with Facebook is quite weird because of the usage of deep links, or custom URI schemes. The challenge is that an attacker can register a handler for the same scheme and receive the callback containing the access token. Facebook did a few things in order to prevent this from happening:
    • When using a Facebook WebView, redirects can only go to specific URL schemes, and some of the generic ones (fb{APP_ID}://) are blocked.
    • If in a mobile browser, the user is asked to confirm that they actually want to redirect.
    • Checks for specific headers from Facebook servers.
    • Checks for unique values associated with the application in the Facebook SDK.
  • Some of the defenses were not universal, though; each was expected to be enforced for a specific app or URI. The second option above (confirmation of the mobile-browser redirect) was not implemented in the Workplace OAuth flow. As a result, an attacker could register the fbWP_APP_id:// scheme for the application and then steal the access token at the end of the OAuth flow.
  • The fourth (final) protection mentioned above was the required usage of an Android-specific key in the SDK. By providing an empty key in the header for Workplace, the dialog would not appear (as it does on Facebook) and the access token would be sent back anyway. Another terrible bug!
  • The first protection above blocks redirects to non-allowlisted URI schemes, such as fb and fb-work. The author noticed that by specifying xd_arbiter in the redirect_uri parameter and changing the URI scheme to fb://, the redirect would succeed.
  • For whatever reason, this specific combination of inputs bypassed the allowlist protections for fb:// and several other URI schemes. Then, if an attacker had control over a specific URI handler on the phone, they could force the redirect to occur with the access token, resulting in a major compromise.
  • Overall, the threat of a malicious URI handler is really interesting and not something I had considered. It was cool to see not only how Facebook tried to protect against this abuse in the authorization process, but also how those protections were bypassed. Good finds!
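The deep-link interception at the heart of this can be modeled with a toy dispatcher (hypothetical scheme and app names; this is not Android's real intent-resolution logic): any app may register a handler for a scheme, so whichever app ends up owning it receives the OAuth redirect, token and all.

```javascript
// Toy model of custom-URI-scheme interception (hypothetical, not Android's real
// resolution logic): the last app to register a scheme receives URIs for it.
const handlers = new Map();

function registerScheme(scheme, appName) {
  handlers.set(scheme, appName); // nothing stops a second app claiming the scheme
}

function deliverRedirect(uri) {
  const scheme = uri.split('://')[0];
  return handlers.get(scheme); // this app gets the full URI, access token included
}

registerScheme('fb123', 'Workplace');    // legitimate app claims its scheme
registerScheme('fb123', 'MaliciousApp'); // attacker app claims the same scheme
```

This is exactly why the server-side checks (redirect confirmation, SDK keys, scheme allowlists) matter: the client platform alone cannot guarantee who receives the callback.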

Remote Code Execution in SharePoint via Workflow Compilation - CVE-2021-26420- 640

Zero Day Initiative (ZDI)    Reference → Posted 4 Years Ago
  • SharePoint workflows are mini-applications that can be used to streamline and automate several business processes. An organization can use workflows to attach business logic to documents or items in SharePoint.
  • The Workflow Foundation runs workflows only when all of their types appear in the authorizedTypes list, found in the ASP.NET web.config file. Alongside this allowlist is a list of denied types from otherwise-allowed namespaces that are known to be dangerous, such as System.Workflow.ComponentModel.Compiler. The post is about finding a way to circumvent this allowlist and denylist.
  • The WorkflowCompiler offers a Compile() function for building applications based on specific parameters. But this allowed namespace is heavily locked down via the denylist on the types that can be used within it. Since denylists are hard to get right, is there a way around this?
  • The bulk of the WorkflowCompiler functionality is implemented within WorkflowCompilerInternal. The WorkflowCompilerInternal code is not covered by the same denylist and is implicitly allowed via another entry in the list. This means we can access the bulk of the WorkflowCompiler functionality through the internal version!
  • The Compile operation can be called directly via WorkflowCompilerInternal. Using this, code can be compiled without the usual restrictions, allowing arbitrary code to run on the server without a workflow. The full PoC and walkthrough of the exploit are in the article.
  • Denylisting is really hard to do! If you do not think of every possible entry that could be abused, then you are still vulnerable to attack. I bet we will see some bypasses of the fix in the future!
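The allowlist/denylist gap can be sketched like this (a hypothetical re-creation of the check, not SharePoint's actual code): the dangerous public type is denied by name, but the internal type implementing the same functionality is not on the denylist.

```javascript
// Hypothetical sketch of the allowlist/denylist gap (not SharePoint's actual check).
const allowedNamespaces = ['System.Workflow.ComponentModel.Compiler'];
const deniedTypes = ['System.Workflow.ComponentModel.Compiler.WorkflowCompiler'];

function isTypeAuthorized(fullTypeName) {
  // the namespace is allowlisted as a whole...
  const inAllowedNamespace = allowedNamespaces.some((ns) => fullTypeName.startsWith(ns + '.'));
  // ...and only specific known-dangerous types are denied by exact name
  return inAllowedNamespace && !deniedTypes.includes(fullTypeName);
}
```

The exact-name denylist blocks WorkflowCompiler but lets WorkflowCompilerInternal straight through, which is the shape of the bypass the post describes.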

Path traversal and file disclosure vulnerability in Apache HTTP Server- 639

Apache    Reference → Posted 4 Years Ago
  • Apache is an open-source HTTP server used all around the world. Finding vulnerabilities in it that lead to RCE or file disclosure is a huge deal.
  • A flaw was found in the path normalization code that allowed a path traversal to map URLs to files outside of the document root.
  • The payload was .%2e/. Normalization ran without treating it as a traversal sequence, but the path was URL-decoded later down the road (turning it into ../) without being properly re-sanitized.
  • A path of .%2e/.%2e/.%2e/.%2e/.%2e/.%2e/.%2e/.%2e/.%2e/etc/passwd could be used to escape the web server's document root and bring back the password file. A directory traversal in Apache in 2021; that is insane! Here is a proof of concept. URL parsing is extremely difficult!
  • Some further information about this can be found at SANS. The particular code had not been changed in 20+ years! The author of the change was trying to simplify the parsing for normalization and validation, but did not understand the security implications of the changes. The URL-decoding check for the . (period) appears to have been removed.

The discovery of Gatekeeper bypass CVE-2021-1810- 638

Rasmus Sten - FSecure    Reference → Posted 4 Years Ago
  • The Gatekeeper feature enforces signatures from legitimate developers in order for code to run. This prevents malware from being spread under a legit name with simply modified code, and ensures that only certified developers can run code on the system.
  • A second, recently added feature is notarization. This validates that the code does not contain any known malware; this review process is separate from the app review.
  • The final piece of the puzzle is macOS quarantine. This extended file attribute determines whether an extra pop-up should open prior to executing the file. Many web browsers voluntarily set this flag when downloading things from the internet.
  • While testing the length of file names in an application they were building, they found an interesting bug! If an archive had a VERY deep hierarchy (paths longer than the PATH_MAX value), the extraction process in the archive utility would get confused.
  • While messing around with zipping and unzipping archives with overly long paths, they noticed that the quarantine attribute was missing on some of the files. This seems bad!
  • The author needed an archive structure that would be opened by the un-archive utility but NOT by Safari: long enough for the quarantine attribute to fail to be set, short enough for the binary to execute, and short enough to be browsable in Finder.
  • The last requirement (browsable) was met via a symbolic link. Everything else was done by creating a structure that was barely longer than PATH_MAX.
  • There is a second article that does a complete breakdown of this bug. The tl;dr is that the parser exits early when a path longer than PATH_MAX is found, skipping the attribute setting.
  • This was a really interesting bug found by complete accident, and I had two main takeaways. Be observant: you will find many things just by noticing odd behavior when giving weird inputs. And error handling is hard, especially at the threshold of what is allowed.
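The early-exit behavior can be modeled in a few lines (a toy re-creation with hypothetical names, not Apple's actual code): the quarantine step bails out on over-long paths, but extraction itself still succeeds, so the file lands on disk unquarantined.

```javascript
// Toy model of the PATH_MAX early-exit bug (hypothetical, not Apple's code).
const PATH_MAX = 1024;

function setQuarantine(entry) {
  if (entry.path.length > PATH_MAX) return; // early exit: attribute silently skipped
  entry.quarantined = true;
}

function extract(paths) {
  return paths.map((p) => {
    const entry = { path: p, quarantined: false }; // file is written regardless
    setQuarantine(entry);
    return entry;
  });
}

const out = extract([
  'docs/readme.txt',                 // normal path: quarantined as expected
  'a/'.repeat(600) + 'payload.bin',  // > PATH_MAX: escapes quarantine
]);
```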

Stealing weapons from the Armoury- 637

AP Tortellini    Reference → Posted 4 Years Ago
  • Armoury Crate is an app that can be used to configure, connect, and control ROG gaming products. It can customize RGB lighting and many other parts of the system.
  • The author knew this program ran with very high privileges, so they wanted to see if they could find a privilege-escalation vulnerability. In particular, they started looking for DLL hijacking vulnerabilities.
  • When looking for DLL hijacking bugs, Procmon is a great tool, as it can filter different calls and results. A CreateFile call with a "NO SUCH FILE" or "PATH NOT FOUND" result could potentially be a bug.
  • If the process is running with high privileges and cannot find the DLL, then we can add our own to the search path. It will then be loaded into the binary, resulting in code execution in the context of the application.
  • This DLL hijacking vulnerability exists in the application for two reasons. The first is that the DLLs were not cryptographically signed and their signatures were not validated.
  • The second is that the ACLs (access control lists) on the directory C:\ProgramData\ASUS\GamingCenterLib\ were not properly configured. The DLL search goes through multiple directories (including the ProgramData one) once it cannot find the original DLL.
  • From both of these bugs, it was possible to add a malicious DLL to the C:\ProgramData\ASUS\GamingCenterLib\ directory and have it loaded.
  • An interesting note: the author enabled Options → Enable Boot Logging in Procmon to see the binary's loading process; otherwise, they may not have found the bug.
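The search-order issue can be sketched as follows (hypothetical paths and helper names; a toy resolver, not the real Windows loader): the loader takes the first match along the search path, so a missing DLL plus a user-writable directory on that path means the attacker's copy is what the privileged process loads.

```javascript
// Toy model of DLL search-order hijacking (hypothetical paths, not the real loader).
function resolveDll(name, searchPath, filesystem) {
  for (const dir of searchPath) {
    const candidate = `${dir}\\${name}`;
    if (filesystem.has(candidate)) return candidate; // first hit wins
  }
  return null; // misses show up in Procmon as NO SUCH FILE / PATH NOT FOUND
}

const searchPath = [
  'C:\\Program Files\\App',                  // expected location (DLL missing)
  'C:\\ProgramData\\ASUS\\GamingCenterLib',  // weak ACLs: any user can write here
];
const filesystem = new Set([
  'C:\\ProgramData\\ASUS\\GamingCenterLib\\target.dll', // attacker-planted DLL
]);
```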

Crucial’s MOD Utility LPE – CVE-2021-41285- 636

VoidSec    Reference → Posted 4 Years Ago
  • Crucial Ballistix MOD Utility is a software product for customizing and controlling gaming systems: LED colors, patterns, memory temperature, and several other things. Since it makes such low-level changes, its driver on Windows has very high privileges. The service uses the MODAPI.sys driver, which is open source.
  • All of the vulnerabilities relate to the driver exposing insecure IOCTLs. One of the IOCTLs allows direct writes to I/O ports, which allows writing to the hard drive directly; theoretically, that alone is game over.
  • Another issue was the ability to map physical memory into virtual memory. This gives a write-what-where primitive in the kernel. Again, a really simple exploit to pull off.
  • Computers have model-specific registers (MSRs) that are used for CPU configuration, and the driver allowed reading and writing them. The LSTAR register holds the address used to transition from user mode to kernel mode, so by overwriting it we can direct what happens after a syscall, which gives us code execution in ring 0. Again, game over.
  • This post had many strange and interesting exploit primitives, such as the port writing and MSR register setting. Sometimes we have to be really creative in writing our exploits.
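The MSR primitive can be modeled in a toy sketch (hypothetical names, not the real MODAPI.sys interface): LSTAR holds the kernel's syscall entry point, so an IOCTL that lets any user overwrite it redirects every subsequent syscall to attacker-chosen code in ring 0.

```javascript
// Toy model of the exposed MSR-write primitive (hypothetical, not MODAPI.sys code).
const msrs = { LSTAR: 0xfffff80000001000n }; // pretend kernel syscall entry point

function ioctlWriteMsr(reg, value) {
  msrs[reg] = value; // vulnerable: no check on who issued the IOCTL
}

function syscallEntry() {
  return msrs.LSTAR; // the CPU transfers control here on every syscall
}

ioctlWriteMsr('LSTAR', 0x41414141n); // any local user redirects the syscall handler
```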

fail2ban – Remote Code Execution- 635

Jakub Zoczek - Securitum    Reference → Posted 4 Years Ago
  • Fail2ban analyzes logs and other data sources in search of brute-force attempts. There are plenty of rules, covering SSH, SMTP, HTTP, and many more. Once it finds a pattern, it bans the IP address.
  • Certain actions occur when blocking a client; one of these is sending an email. An easy way to send an email via the Unix CLI is $ echo "test e-mail" | mail -s "subject" user@example.org. However, this code has a deadly flaw in it!
  • The mail command has the ability to execute code if "~!" is included in its input. In the case of fail2ban, attacker-influenced data was being included in the mail command's input, which allowed for code execution.
  • What was the input? The output of a whois call. How do we even control this? Asking an ISP to add a particular record did not work, as they only change things for organizations, so the author spun up their own WhoIs server in order to attempt to run this exploit.
  • To make this practical, an attacker would need to force the victim's whois lookup to hit their server. This could be done via a MITM attack, since whois is unencrypted anyway.
  • An interesting find: a straightforward flaw in the mailutils command.
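The injection vector can be sketched like this (hypothetical helper names; the "~!" escape behavior is as described in the post): untrusted whois output flows straight into mail's stdin, so a line beginning with ~!cmd hands the whois server command execution; one mitigation is to neutralize escape lines before they reach mail.

```javascript
// Sketch of the mail(1) escape injection (hypothetical helpers, not fail2ban's code).
function buildMailBodyVulnerable(whoisOutput) {
  return whoisOutput; // "~!touch /tmp/pwned" lines pass straight through to mail(1)
}

function buildMailBodySafe(whoisOutput) {
  // prefixing a space stops "~" from being treated as an escape at line start
  return whoisOutput
    .split('\n')
    .map((line) => (line.startsWith('~') ? ' ' + line : line))
    .join('\n');
}

const evil = 'Domain: example.org\n~!touch /tmp/pwned';
```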