Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Exploit Post Mortem- 739

MonoX Team    Reference → Posted 4 Years Ago
  • MonoX is a solution for trading tokens that have already been collateralized, including synthetics, NFT shards and many other things. This attack stole $31M in assets across Ethereum and Polygon.
  • The smart contract has the ability to swap one token for another. In this case, a token being sold and a token being bought are specified. What bug can exist here?
  • What if you specify the same token? By setting the in and out tokens to be the same, the price update on tokenIn makes tokenOut worth more! This caused massive inflation in the price of the MONO token, which is native to the platform.
  • The attackers executed this attack via a script, stealing $31 million in assets by repeatedly jumping up the price of the token without ever losing their tokens. The platform is insured for only a million dollars, so only some of this will be distributed back to the people who lost their money.
  • The source code for this bug is shown here. Simply put, there is no validation that the two tokens are different. This is so sad, since this company went through several security audits.
  • To deal with this, a few things were done: stop wallet exchanges for any addresses linked to the attack, pause the contract until a proper fix is made, chat with security advisors and try to find the attackers. Decentralization is great until something like this happens. Don't you wish the government could just give you the money back? If only the FDIC covered cryptocurrency. Just let the banks manage the money, or things like this will keep happening!
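
The missing same-token check can be sketched with a toy price model. Everything here is an invented illustration (the 0.95/1.10 price updates and pool structure are assumptions, not MonoX's actual math); only the shape of the bug matters:

```python
# Toy AMM price model illustrating the missing tokenIn != tokenOut check.
# The price-update factors are made up for illustration only.

class ToyPool:
    def __init__(self):
        self.price = {"MONO": 1.0, "USDC": 1.0}

    def swap(self, token_in, token_out, amount_in):
        # MISSING: require token_in != token_out
        value_in = amount_in * self.price[token_in]
        self.price[token_in] *= 0.95   # sell side marks token_in down...
        self.price[token_out] *= 1.10  # ...buy side marks token_out up,
        # so with token_in == token_out the net update is +4.5% per call.
        return value_in / self.price[token_out]

pool = ToyPool()
for _ in range(50):
    pool.swap("MONO", "MONO", 100)
print(pool.price["MONO"])  # price has inflated without spending other tokens
```

Each same-token swap compounds the price upward, which is why a scripted loop was enough to drain the pools.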

Polygon Lack Of Balance Check Bugfix Postmortem- 738

Immunefi    Reference → Posted 4 Years Ago
  • The MATIC token is the main token within the Polygon ecosystem. It's like Ether, but for Polygon. This token is used for voting, improvement proposals and many other things. The token itself is a smart contract on the network and is used for paying gas or transaction fees.
  • There is a special function that allows for gasless transactions. The user who owns the tokens digitally signs several parameters, such as the operator, amount, nonce and expiration. The transaction is gasless for the user since the operator pays for the gas.
  • There are two horrible bugs in this contract. The first is that the sender's balance is never validated to be sufficient. As a result, an attacker with $0 can pose as having $2 million. Neat!
  • The second bug, which shockingly was not caught in the development process, is bad error handling. When validating a signature, there are many ways it can fail. However, there is a major problem: the failure is not handled consistently. Sometimes a require block is used, which reverts the operation. Other times, however, the function returns 0x0, which looks like a valid result.
  • As a result of the bad error handling, an invalid signature of a bad length returns 0x0. The function making the verification call believes this is legitimate and continues on as normal. Error handling is hard and is something I always look for during code review.
  • The finder of this vulnerability was given $2.2 million for the finding. A second finder got $1 million. Finally, a single attacker found it and stole $1.8 million with it. You can't steal ALL of it; otherwise the coin would be worth nothing. Damn, that's a big pay day for such a simple bug! Maybe I should hunt for smart contract vulnerabilities!
  • The resolution for this bug was simply removing the vulnerable function transferWithSig. I'm unsure if they removed the gasless transfer functionality or what happened. It seems like the code quality for smart contracts is quite poor, as BOTH of these bugs would not have survived a proper security review.
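
A toy sketch of how the two bugs compose. The names and flow below are illustrative assumptions, not the actual MATIC contract code; ZERO_ADDRESS mirrors how Solidity's ecrecover reports failure by returning address(0):

```python
# Toy model of the two transferWithSig bugs described above.

ZERO_ADDRESS = "0x0"

def toy_ecrecover(sig: bytes) -> str:
    if len(sig) != 65:
        return ZERO_ADDRESS          # failure reported as a "valid-looking" value
    return "0x" + sig[:4].hex()      # stand-in for real key recovery

def transfer_with_sig(balances: dict, sig: bytes, to: str, amount: int) -> str:
    owner = toy_ecrecover(sig)
    # Bug 1: no check that owner != ZERO_ADDRESS, so junk "verifies".
    # Bug 2: no check that balances.get(owner, 0) >= amount.
    balances[to] = balances.get(to, 0) + amount
    balances[owner] = balances.get(owner, 0) - amount
    return owner

balances = {}
transfer_with_sig(balances, b"junk", "attacker", 2_000_000)
print(balances)  # attacker credited $2M from a "signer" that never existed
```

Either check alone would have stopped the attack; together their absence turns a malformed signature into free money.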

Fuzz Testing YottaDB- 737

Zach Minneker + YottaDB Devs    Reference → Posted 4 Years Ago
  • MUMPS is a programming language and database used in the banking and medical sectors. This programming language/environment pre-dates ANSI C, which gives it some interesting quirks. Zach (an amazing co-worker of mine) did some amazing fuzzing research into two different implementations of MUMPS, finding 30 CVEs in total.
  • Zach set up a fuzzer for the YottaDB implementation of MUMPS. This was done by sending mutated code generated by the fuzzer. Using dynamic instrumentation to track paths that had already been hit, the fuzzer was able to go down some dark rabbit holes to eventually find some bugs.
  • To make the fuzzer actually work, Zach had to remove the signal handling, change the input type and many other things. He eventually set up AddressSanitizer to find some of the non-crashing bugs as well.
  • This will likely become a DEFCON talk as well. I'm excited to see some of the technical details released for this; some of the bugs are wild! Use after frees, buffer overflows, logic bugs... so many weird things!
  • The developers of this project were one in a million. Once these bugs were found, they chatted with Zach on how to fix them. Furthermore, they asked Zach to help set up fuzzing infrastructure for the project. Overall, he made the world a much more secure place :)
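
The coverage-guided loop described above can be sketched in miniature. The "interpreter" target and its branch-id coverage are toy stand-ins (not YottaDB); a real setup gets coverage from compiler instrumentation rather than a hand-written target:

```python
# Minimal coverage-guided mutation fuzzer sketch.
import random

def target(data: bytes) -> set:
    cov = {0}                      # branch ids this input touched
    if data[:1] == b"W":
        cov.add(1)
        if data[1:2] == b"$":
            cov.add(2)
            if data[2:3] == b"J":
                raise RuntimeError("crash")  # the planted bug
    return cov

def fuzz(seed: bytes = b"AAA", iterations: int = 100_000):
    random.seed(1)
    corpus, seen = [seed], set()
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))
        data[random.randrange(len(data))] = random.randrange(256)
        data = bytes(data)
        try:
            cov = target(data)
        except RuntimeError:
            return data            # crashing input found
        if not cov <= seen:        # new coverage: keep this input
            seen |= cov
            corpus.append(data)
    return None

print(fuzz())  # reaches the 3-byte crasher far faster than blind guessing
```

Keeping any input that touches a new branch is what lets the fuzzer climb the nested conditions one step at a time instead of guessing all three bytes at once.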

BreakingFormation: AWS CloudFormation Vulnerability- 736

Tzah Pahima - Orca Security    Reference → Posted 4 Years Ago
  • AWS CloudFormation allows the provisioning of AWS resources, such as EC2 instances and S3 buckets, using templates. Since the service has the ability to do all of these things, a vulnerability in it could allow for editing inside of ALL other accounts.
  • Within the template parsing, the author of the post found an XML eXternal Entity (XXE) injection vulnerability. By including an external entity in the XML file, HTTP requests can be made (SSRF) and files can be read from the file system. Using this vulnerability, the author stole credentials from the host file system!
  • Once they had the credentials from the host file system, it was game over. With how much permission CloudFormation has, this could be used to escape customer boundaries and affect many running services. If this had been found in the wild by an attacker, it could have been a major security breach of many different systems.
  • Sadly, there are very few details on the privilege escalation technique and the injection point for the XXE; there is a bunch of marketing fluff instead. An interesting and impactful bug, but I wish there were more technical details.
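
The classic XXE shape can be illustrated with a toy resolver. Real XXE abuses a parser's DTD handling; the hand-rolled naive_resolve below only mimics the effect of external-entity resolution, and the payload, file path and function names are all made up:

```python
# Toy demonstration of why external entity resolution is dangerous.
# naive_resolve() imitates a parser that dereferences SYSTEM entities;
# never enable this behavior on untrusted XML in a real parser.
import re

XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///tmp/creds.txt">]>
<r>&xxe;</r>"""

def naive_resolve(xml: str) -> str:
    # Collect SYSTEM entity declarations from the DTD...
    ents = dict(re.findall(r'<!ENTITY (\w+) SYSTEM "file://([^"]+)"', xml))
    # ...and inline each referenced file, as a resolving parser would.
    for name, path in ents.items():
        with open(path) as f:
            xml = xml.replace(f"&{name};", f.read())
    return xml
```

With a resolving parser, the file's contents land inside the parsed document (and an http:// entity URL would produce the SSRF side of the bug).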

Exploiting URL Parsing Confusion- 735

Team82 & Snyk    Reference → Posted 4 Years Ago
  • The authors of this article decided to examine the implementations of 16 URL parsers, from Python libraries such as urllib, to browsers, to cURL. An interesting item they call out is that there are multiple revisions and definitions of the URL specification, which likely causes the discrepancies.
  • A URL is made up of many parts: scheme://authority/path?query#fragment. Any one of these can cause security issues when two different libraries parse the URL differently.
  • The first example they give is a bypass for the local-JNDI protection in VMware ESXi Server. With the recent Log4j vulnerability, RCE was only possible if a non-local URL could pull down an exploit Java class. Two different parsers were used to VERIFY and USE this URL, and this causes the problem.
  • ldap://127.0.0.1#.evilhost.com:1389/a is the URL. The verifier thought that the host was 127.0.0.1, but the host actually used was evilhost.com! Discrepancies between verification and usage are the reason for many security issues!
  • Eight CVEs came out of this research. For instance, an Open Redirect was found in Flask-Security by providing a URL with too many backslashes in the scheme.
  • Only some of these parsing bugs were fixed! If you are searching for a way to bypass URL protections, knowing the differences between these parsers can bear much fruit! Giving this a read for all of the small details may be helpful in the future.

Reverse Engineering Yaesu FT-70D Firmware Encryption- 734

landaire    Reference → Posted 4 Years Ago
  • HAM radio is a really fun hobby! The author was curious about putting custom firmware on their HAM radio, and found a USB firmware updater for Windows. This article is about reversing that application to decrypt the firmware.
  • The author immediately opens the tool in IDA Pro to reverse engineer the application. After finding good context clues, they use the WinDbg debugger's Time Travel Debugging (TTD) feature while the firmware update occurs. They mention that non-Windows platforms have rr, which offers the same feature.
  • The function for asset 23 will find and load a resource of type RES_UPDATE_INFO into a large, dynamically-allocated buffer. This object is passed to another function, which appears to do the encryption/decryption. While going through this function, IDA automatically named a variable time. I had absolutely no idea that IDA did this!
  • Do we have to write our own decryption code? Not yet! At this point, we can break after the decryption has been done to get a hex dump of the firmware. Running strings on this (once converted to ASCII) shows real strings that are in the radio. Hurray!
  • The author spends more time on the encryption code. They break this up into a few steps:
    1. Build a 48-byte buffer containing key material. This is done by expanding several bytes into the table via XORs with a static buffer.
    2. Build a 32-byte buffer from a 0x800-byte static table. Combine the previous step's buffer with this buffer.
    3. Iterate over 8 bytes at a time. For each byte, index into another lookup table to find the index of the value from step 2 to XOR with.
  • The steps above assume we have the key. How is the key generated? Via the Unix timestamp at the very beginning! Some inflation is done on these bits to get a bigger key. Interesting!
  • The author made a GitHub repository, Porkchop, with a re-implementation that can decrypt the firmware. Security by obscurity never works! :)
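
A toy version of a timestamp-seeded XOR scheme in the spirit of those steps. The table contents, constants and expansion below are invented (the real algorithm is in the author's repo); the point is that a timestamp-derived XOR keystream is trivially reversible:

```python
# Toy timestamp-keyed XOR cipher. Deriving key material from a Unix
# timestamp makes the "encryption" reproducible by anyone -- which is
# the security-by-obscurity problem the post demonstrates.
STATIC_TABLE = bytes(range(256)) * 8        # stand-in 0x800-byte table

def derive_key(timestamp: int) -> bytes:
    # "Inflate" 4 timestamp bytes into 48 bytes of key material by
    # XORing them across a static buffer (step 1 above, simplified).
    seed = timestamp.to_bytes(4, "little")
    return bytes(STATIC_TABLE[i] ^ seed[i % 4] for i in range(48))

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR keystream (steps 2-3, simplified): symmetric, so the same
    # call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = derive_key(1638316800)                # any build timestamp works
blob = xor_crypt(b"RADIO FIRMWARE", key)
assert xor_crypt(blob, key) == b"RADIO FIRMWARE"
```

Because everything needed to rebuild the key ships inside the updater, decryption just means re-running the derivation.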

Zynq Part 1: Dumping the bootrom the hard way- 733

ropcha.in    Reference → Posted 4 Years Ago
  • The Zynq is a family of chips from Xilinx that combines ARM Cortex-A9 cores with 7-series FPGA fabric. For a time, it was one of the cheapest ways to bootstrap an ARM secure boot chain without minimum orders and NDAs. The author spent time learning how this system works.
  • First, they bought a Cora Development Board to poke around with the device. They messed around with the device post-boot to dump the On-Chip Memory (OCM) configuration and the register state. At the very top of the OCM, they found 512 bytes of assembly that noted two other OCM regions.
  • The goal was to glitch the device until something interesting happened. They had major trouble finding a consistent trigger point because of how widely the timing varied on the device. Eventually, they did get the glitch to work, bypassed the security protections and dumped the bootrom.
  • However, the author found an easier way to grab the ROM. The ROM is built in a way that there is an initialization function for the boot mode selection interface; this uses a callback function. From there, the UART initialization routine sticks the entire payload in RAM. This code grabs nbytes from an offset in the ROM image and writes this to dest.
  • Here is what the author found: the source address is not properly validated. offset is checked against the POSITIVE bound only; the negative bound is never checked. Even though this does not seem like a problem at first, we can turn it into one. If the attacker controls the offset, and the location being written to persists past initialization, it can be used to stash the ROM to be read later.
  • When the Zynq scans for ONFI (Open NAND Flash Interface) devices, there is a parameter page that populates local copies of several of its fields. Even though this is, technically, considered untrusted data (it comes from outside the BootROM), no validation is done. Why the "technically" part? A custom chip emulator had to be made in order to exploit this!
  • These parameters represent data from physical characteristics to supported features of the ONFI interface. To make this work, they had to emulate the ONFI interface when it was communicating with the BootROM. Damn, that is wild!
  • To trigger this, the author went for the function XNandPs_ReadSpareBytes, which is normally used for ECC data reads. This user-controlled buffer is read in with no sanity checks, though! Using this, a simple stack overflow can overwrite the return address on the stack and hijack code execution.
  • How do you know you have triggered something, though? After running this exploit, they set the data to a constant stream of uart_init addresses. Once this was hit, a UART sequence was output, showing that the hijack had been successful. When moving from ROP (which they already had working) to proper shellcode, they ran into a problem, which turned out to simply be an endianness problem in the write (lolz).
  • Eventually, they wrote a payload that would hijack control flow to turn on JTAG for more thorough debugging. The main scenario for this attack is when secure boot is turned on and all debug interfaces are turned off; using this exploit, the boot chain could be compromised. Overall, an interesting series of posts, with an official release by Xilinx even!
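
The one-sided bounds check can be sketched like this. The memory layout, names and sizes are invented; only the shape of the check matters:

```python
# Toy flat address space: a secret ROM followed by the public region
# the copy routine is supposed to stay inside.
SECRET_ROM = bytes(range(256)) * 512        # 128 KiB stand-in BootROM
PUBLIC_REGION = bytes(0x1000)               # zeroed, attacker-visible region
MEM = SECRET_ROM + PUBLIC_REGION
BASE = len(SECRET_ROM)                      # reads are based here

def buggy_read(offset: int, nbytes: int) -> bytes:
    # POSITIVE bound only, as in the post: a negative offset is never
    # rejected, so the read reaches back before BASE into the ROM.
    if offset + nbytes > len(PUBLIC_REGION):
        raise ValueError("out of range")
    start = BASE + offset
    return MEM[start:start + nbytes]

leak = buggy_read(-256, 256)                # passes the check...
assert leak == SECRET_ROM[-256:]            # ...and returns ROM bytes
```

A second comparison (offset >= 0) or an unsigned offset type would have closed this off entirely.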

The Pinouts Book- 732

NODE & Baptiste    Reference → Posted 4 Years Ago
  • Pins are annoying to remember. Even more annoying is having 45 documents open at once. This "book" has a list of all the pinouts, with links to the actual datasheets.

The JNDI Strikes Back – Unauthenticated RCE in H2 Database Console- 731

Andrey Polkovnychenko & Shachar Menashe - JFrog    Reference → Posted 4 Years Ago
  • Log4Shell was a vulnerability in the Log4j library in Java. By simply getting a special format string into the logged output, Java Naming and Directory Interface (JNDI) queries are made. This interface is quite powerful and can lead to remote code execution when it reaches out for a remote Java class to execute. JNDI injection is a bug class in itself that has been seen before.
  • Since the Log4Shell vulnerability, the authors of this post decided to look into other, similar vulnerabilities. They started scanning open source repositories for JNDI injection by searching for the dangerous sink javax.naming.Context.lookup. They found a very similar bug: several code paths pass unfiltered attacker-controlled URLs to the javax.naming.Context.lookup function.
  • As a result, a JDBC URL can be specified by the attacker. Once this is processed, a Java class can be returned and executed, leading to code execution. They found this vulnerability in several places within the H2 database engine.
  • In the H2 web-based console, the login page has two interesting fields: Driver Class and JDBC URL. Specifying a malicious class to be loaded gives pre-auth code execution (almost by design) within the application. By default, though, only local connections are allowed, and the console should run only on localhost.
  • While looking through the SQL handling, they also noticed that the LINK_SCHEMA stored procedure passes its driver and URL arguments directly into a vulnerable function. By setting this up properly, code execution can be achieved. However, this requires the ability to execute arbitrary queries on the database, which makes it unlikely to occur.
  • The fix for this is to prevent remote JNDI queries and only allow local calls. To me, this seems feeble, but we will see if it stands the test of time.
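
The "load a driver class by attacker-supplied name" pattern can be shown with a Python stand-in. H2's real sink is javax.naming.Context.lookup in Java; this sketch only illustrates why an unfiltered class name is dangerous, and load_driver is a made-up name:

```python
# Python stand-in for the attacker-named driver class pattern.
import importlib

def load_driver(class_path: str):
    # e.g. "org.h2.Driver" in Java; "module.ClassName" here.
    # No allow-list: the caller can name any importable code.
    module_name, _, attr = class_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# An attacker names something that was never meant to be a driver:
evil = load_driver("subprocess.Popen")      # a process-spawning "driver"
```

The fix in both worlds is the same: resolve only names from a fixed allow-list, never a string the user controls.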

Where's the Interpreter!? (CVE-2021-30853)- 730

Patrick Wardle    Reference → Posted 4 Years Ago
  • File Quarantine, Gatekeeper, and Notarization on macOS are what prevent non-Apple-signed applications from running on a computer. In particular, this is meant to stop attacks where an application pretends to be Adobe Reader while actually stealing all of your files. Bypassing this leaves users at risk.
  • The root cause breaks down to a weird edge case in the system: a bash script that does not specify an interpreter. Given a script with only a #! and no interpreter after it, macOS will gladly run it. But, for some reason, the missing interpreter bypasses the verification that macOS should do with the user protections mentioned above. Why does this happen?
  • When no interpreter is specified (#! only), an error is returned by exec_shell_imgact. Since the file failed as a script, the system falls back to using /bin/sh as the program to run.
  • Here's the kicker: macOS now thinks that the binary being run is NOT a bash script but the binary /bin/sh. Since this is now a platform binary instead of a bash script, the script-handling checks never happen. Eventually, when this gets to the policy manager, syspolicyd, it decides that no security checks need to be made, because it is NOT a script and /bin/sh is a trusted platform binary.
  • A super simple bug wrapped in layers of complexity. Sometimes, fuzzing and trying random things is the way to go instead of raw code review. Good find!