Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

AWS CDK Risk: Exploiting a Missing S3 Bucket Allowed Account Takeover- 1525

AquaSec    Reference →Posted 1 Year Ago
  • Bucket Sniping is a class of AWS integration vulnerabilities that stems from S3 buckets being globally scoped. The idea is to gain readable and writable access to an S3 bucket used by another account, then use that control to do malicious things in the victim account.
  • The AWS Cloud Development Kit (CDK) is a framework for infrastructure as code. It lets you define AWS infrastructure in programming languages like Python, which then gets translated into configurations in AWS. CDK has a one-time initialization step called bootstrap. This one-time command creates the necessary roles for CloudFormation and CDK in the account, an S3 bucket and other resources.
  • The bucket name can be changed but has a default format of cdk-{qualifier}-assets-{account-ID}-{Region}. The qualifier has a default value of hnb659fds, the region is guessable and the account ID is the only somewhat secret value, but it may be learned through other means.
  • Because the bucket name is somewhat predictable and S3 bucket names are global, it can be registered ahead of time. They call this out as a DoS, which to me is BS - this is like saying that my frontrunning the creation of a domain is a DoS! You can literally just change the name slightly and it's fine.
  • Sometimes, resources get deleted from AWS accounts, especially with quota limits. If this happens, another user can create the S3 bucket because bucket names are global. By default, the CloudFormation execution role used by CDK has admin privileges in the account, making the ability to overwrite a template very, very bad. Does this work? Yes it does!
  • The attack assumes that the user has CDK initialized in their account, deleted the S3 bucket and has a predictable qualifier:
    1. Attacker recreates the deleted S3 bucket in the account using the predictable name. On this bucket, the attacker sets a permissive resource-level policy.
    2. On the S3 bucket, the attacker configures a Lambda function to inject a malicious admin role into any CloudFormation template that gets uploaded. There are hooks for S3 bucket operations that make this possible.
    3. User runs cdk deploy in their account.
    4. Upon triggering the build, the template gets written to the attacker's S3 bucket, where it gets backdoored. Since the template has been updated, it will deploy the resources that the attacker specifies.
    5. What does an attacker do with this? They create a backdoor IAM role that can be assumed by an account that they control! At this point, they have admin rights in the account.
  • How likely is this to happen though? This is where I find the article interesting! First, they checked whether a particular IAM role existed using a known enumeration technique, which I'm surprised still works. They created a scanner that looked for the existence of the S3 bucket name for 38,000 known accounts. If the role existed but the S3 bucket didn't, the attack was on!
  • From the 38K accounts analyzed, 2% of them used CDK. Of these accounts, 10% of the accounts were found to be vulnerable to this attack vector. According to AWS, about 1% of CDK users were affected by this issue. An account takeover in AWS from simply deleting a bucket is pretty bad news!
  • To remediate the vulnerability, newer versions of CDK ensure the bucket is owned by the account. Additionally, AWS added documentation recommending against the default qualifier. Since previously bootstrapped users are still potentially vulnerable to this attack, AWS added a big error message to the CLI telling them that this still affects them.
  • Overall, a good writeup on a classic AWS bucket sniping vulnerability. It amazes me that these are still being found in AWS products after all of these years.
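To make the predictability concrete, here's a sketch of the name construction a scanner would rely on. The format string and default qualifier come from the post; the helper names are mine.

```python
# Sketch of why the default CDK staging bucket is snipeable: with the
# default qualifier, the name is fully determined by account ID and region.
DEFAULT_QUALIFIER = "hnb659fds"  # CDK's well-known default

def default_cdk_bucket_name(account_id: str, region: str,
                            qualifier: str = DEFAULT_QUALIFIER) -> str:
    # Default name of the CDK bootstrap assets bucket.
    return f"cdk-{qualifier}-assets-{account_id}-{region}"

def candidate_buckets(account_id: str, regions: list[str]) -> list[str]:
    # Every name a scanner would probe for a single account.
    return [default_cdk_bucket_name(account_id, r) for r in regions]

name = default_cdk_bucket_name("123456789012", "us-east-1")
```

A real scanner would then probe each candidate name against S3, since an unauthenticated request can distinguish "no such bucket" from "exists but access denied".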

Let's PWN WTM!- 1524

haxxin    Reference →Posted 1 Year Ago
  • In a previous post, the author broke the firmware encryption that was using the Wireless Trusted Module (WTM) on a printer. This time, they target the WTM itself.
  • They didn't have a root shell on the device from their friend's previous backdoor. So, they looked around and used an N-day reported by Synacktiv. The lbtraceapp binary has setuid permissions and functions similarly to ftrace. By passing it a program, such as /bin/bash, you get root for free. Unfortunately, root doesn't mean full access once Linux capabilities come into play.
  • The lbtraceapp ran as root but didn't have the CAP_SYS_MODULE capability. When looking around at various processes, they found that some shell script executes sleep, which DOES have the capability we need. Since we are root, we can write shellcode into the process via /proc/<pid>/mem to hijack it.
  • With this, it was time to poke around at the WTM module for real. They didn't feel like cross compiling though. So, they wrote a small daemon that forwards TCP traffic from their machine via the wtm Python client to the device. At first glance, the commands look almost identical to the Trusted Platform Module (TPM) specification with context loading and such.
  • WTM encryption stores customer key information in an encrypted blob on the file system that is then loaded onto the chip. In theory, the chip should have a super duper secret key to make it impossible to decrypt offline. What they learned is that some data is preserved across stores and loads of the context.
  • This is super helpful! The file system encryption key is actually wrapped by a different key than the super duper master key. Since this key is left over on the chip, we're able to leak the wrapped key. Why is this helpful? It enables offline decryption of the root filesystem.
  • Can we get code execution in the kernel firmware itself? It turns out that the chip does NOT have its own dedicated DRAM! By simply writing to /dev/mem, we can overwrite the processor code itself! To do this, they overwrote a virtually unused command to give themselves an arbitrary read/write primitive that could be accessed via their Python client. They dumped OTP fuses that shouldn't be dumpable. The next step would be dumping the super duper secret AES key!
  • Overall, an awesome series of blog posts on this printer. To me, it was eye-opening to see how important the permissions of files, memory and other things are for security. If these basic things are messed up then you're in a world of hurt.
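The /proc/<pid>/mem trick above can be demonstrated against your own process on Linux. This is a generic sketch, not their exploit code: their attack did the same thing to the `sleep` process, writing shellcode instead of a string.

```python
import ctypes
import os

# Writing through /proc/<pid>/mem patches memory directly, ignoring page
# write-protection - which is exactly why holding root plus this file
# handle lets you hijack a more-privileged process.
buf = ctypes.create_string_buffer(b"original")
with open(f"/proc/{os.getpid()}/mem", "r+b", buffering=0) as mem:
    mem.seek(ctypes.addressof(buf))  # absolute virtual address of the buffer
    mem.write(b"hijacked")

assert buf.value == b"hijacked"  # the process memory was rewritten in place
```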

Retrofitting encrypted firmware is a Bad Idea- 1523

haxxin    Reference →Posted 1 Year Ago
  • Lexmark is a common printer brand that the author had looked at before. In a recent update, the firmware encryption process was changed, so they decided to take a look at it after being nudged by a friend. After putting in a persistent backdoor and upgrading the firmware, they were ready to reverse engineer the system.
  • In the previous version, it was using an AES key stored at some location on the file system. When trying the old script, the decryption failed. After exploring the OS on the newly upgraded system, they found references to WTM. After some snooping around, they eventually found out that WTM is the Wireless Trusted Module that handles trusted boot.
  • On Lexmark printers, there was a Rust init binary and a Rust kernel module for interacting with the chip's WTM interface. The WTM client interacts via netlink sockets. They really didn't want to deal with reversing the kernel driver though. So, instead, they patched the PLT entries so the netlink socket calls used regular sockets instead. Why? Just to make it easier.
  • Using good ol' TCP, we could implement the kernel-side server for the client. More importantly, this allows for emulation of the init binary, giving us a better test env. After simulating a good amount of the kernel driver over TCP, the client sends the kernel driver the key! Yep, it was that simple - intercept the traffic to see the key.
  • The vendor did a better job of adding encryption to the device. The problem is that a previously pwned device already has access to it. Retrofitting a new process onto an old device doesn't work for this reason. Interesting post!
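The interception setup can be sketched generically: a loopback TCP server plays the kernel-driver side and records whatever the client sends. Everything here, including the message format, is made up for illustration - the real protocol is Lexmark's.

```python
import socket
import threading

# A toy version of the debugging setup: after the PLT patch, the init
# binary talks plain TCP instead of netlink, so anything it sends -
# including key material - can be captured by a stand-in server.
captured = []
ready = threading.Event()
state = {}

def fake_kernel_side():
    # Plays the kernel-driver end: accept one client, record what it
    # sends, reply with an ACK.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    ready.set()
    conn, _ = srv.accept()
    captured.append(conn.recv(1024))
    conn.sendall(b"ACK")
    conn.close()
    srv.close()

t = threading.Thread(target=fake_kernel_side)
t.start()
ready.wait()

# The "client" stands in for the patched init binary.
cli = socket.socket()
cli.connect(("127.0.0.1", state["port"]))
cli.sendall(b"LOAD_KEY:0123456789abcdef")  # hypothetical key-load command
reply = cli.recv(16)
cli.close()
t.join()
```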

Live Chat Blog #1 - Misconfigured User Auth Leads to Customer Messages- 1522

Rojan Rijal - Ophion Security    Reference →Posted 1 Year Ago
  • Chatbots on websites are becoming more and more popular. They usually come in three flavors: a GenAI bot fed customer data to answer questions, a simple FAQ over internal and external information, and a live agent chat. Most of these are done via some service provider rather than rolled in-house.
  • The service provider of Live Chat systems requires some sort of authentication, naturally. The article has a nice diagram for it. At a high level, the backend generates an HMAC digest over the user identifier. This hash is communicated to the live chat agent backend, allowing the user to make requests.
  • They tested various organizations for integrations with the Live Chat platforms. In one of the integrating organizations, they found a signing oracle. The email in the cookies was being used as the HMAC input without any check that the user actually owned the account. Since a valid authentication token was created, they could view the victim's chat logs.
  • A fairly simple vulnerability, but it required understanding the integration of complex parts, making it more interesting.
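A minimal sketch of the signing-oracle pattern, with hypothetical names and key (the article doesn't give the actual scheme details):

```python
import hashlib
import hmac

SECRET = b"server-side-signing-key"  # known only to the integrating backend

def chat_auth_token(email: str) -> str:
    # The bug: `email` came straight from a user-controlled cookie, with
    # no check that the session actually owns that address. The endpoint
    # therefore signs any identity you ask for - a signing oracle.
    return hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()

# An attacker sets the cookie to the victim's address and receives a
# token that authenticates them as the victim to the chat backend.
victim_token = chat_auth_token("victim@example.com")
```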

Ordinal Theory + BRC20- 1521

Casey Rodarmor    Reference →Posted 1 Year Ago
  • Ordinals are a numbering system for all satoshis (the smallest unit of a Bitcoin) on the Bitcoin network. They can be referenced in a few different ways, such as block.satoshi notation or just a decimal number. In reality, this gives every satoshi a serial number.
  • The ordinals themselves can have interesting rarity metrics, derived from periodic events on Bitcoin: blocks, difficulty adjustments, halvings, and cycles. This scheme appears to be an extension on top of the Bitcoin core protocol.
  • Inscriptions, made of data embedded in a script, have their own special formatting. First, the OP_FALSE opcode is pushed so that the branch that follows is never taken. The data is then wrapped in an IF statement that will never execute, added via PUSH instructions to create an envelope. With ordinals, ord is pushed first, followed by the content type and data. Tag values, such as 1 for Content Type, signify what data is being added to the ordinal.
  • Inscriptions use the Taproot script type, which uses two transactions: a commit and a reveal. The inscription content is contained within the input of the reveal transaction. The inscription itself is made on the first sat of its input.
  • The specification appears to have deploy, mint and transfer operations for BRC20 tokens. The inscription string being added is minified JSON. The fields include the protocol, the operation, the token ticker and the amount being transferred. Of course, this raises the question - how do we limit the amount of tokens, prevent false transfers and such? I'm still trying to figure this out :)
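The operations above can be sketched as minified JSON payloads. The field names ("p", "op", "tick", "max", "lim", "amt") follow the common BRC-20 convention; treat the exact values here as illustrative.

```python
import json

def brc20(op: str, tick: str, **fields) -> str:
    # Minified JSON payload inscribed on a satoshi, BRC-20 style.
    payload = {"p": "brc-20", "op": op, "tick": tick, **fields}
    return json.dumps(payload, separators=(",", ":"))

deploy   = brc20("deploy", "ordi", max="21000000", lim="1000")  # create token
mint     = brc20("mint", "ordi", amt="1000")                    # claim supply
transfer = brc20("transfer", "ordi", amt="100")                 # move tokens
```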

BRC20 Pinning Attack- 1520

Many Researchers    Reference →Posted 1 Year Ago
  • BRC20 tokens are a fungible token standard on the Bitcoin network. The tokens are embedded into satoshis using Ordinal Theory. The ordinals protocol can assign extra information, called an inscription, on top of a satoshi. The satoshis each have an ID, ordered by when they were mined on the Bitcoin network.
  • The Ord data is stored in a Taproot script-path spend script. The Ord is the reference of a particular Satoshi. The Inscription is the information about the token itself, which can be an image, text, or anything else. Commonly, the metadata is stored in the witness data of the transaction. It should be noted that the Bitcoin itself hasn't changed - just how people are viewing said Bitcoin.
  • BRC20 has three operations: deploy, mint (create) and transfer. The inscribed operations are not executable actions. They leverage the OP_RETURN opcode in a bitcoin transaction, which is not spendable, then add arbitrary data to the end of the script, called an Inscription, which contains a JSON payload specifying what is happening.
  • The transfer requires two transactions:
    1. Inscribe the JSON-style content into a satoshi marked with a unique sequence number. This is sent to yourself.
    2. Execute the actual transfer by sending the previous inscription to the receiver.
  • The BRC20 token balance structure has three values: available balance, transferable balance and overall balance. Available balance is what a user can freely spend. Transferable balance is tokens that have been inscribed in a transfer but haven't been sent to the receiver yet. The overall balance is just the sum of these two.
  • I don't fully understand how this system works tbh. Here are two articles: one and two.
  • The attack targets the transferable balance field. When a token transfer is initiated via an inscription, the tokens move from available to transferable while waiting for the pending transaction to be confirmed. First, an attacker sends a bunch of false transfers to the target. When doing this, they let the first transaction with the inscription update go through but give the second one a very low fee to guarantee it will not confirm for a long time.
  • By doing this, the system is locked up. The protocol appears to require the transactions to be processed sequentially. So, if there's a transfer pending to you, you are unable to perform any actions. By intentionally getting the transactions stuck for long periods of time, the attacker locks the target's wallet as well. They ran this locally, then against a popular wallet used on Binance. I personally think that testing a live system like this without notifying the party first is completely unethical.
  • I'm still confused about how the tracking of BRC20 tokens works. Nonetheless, this was an interesting vulnerability that abused multiple parts of how Bitcoin and BRC20 function, which is cool. This was a systemic issue on Bitcoin, which is pretty crazy. Overall, I wish more context was provided on token tracking, but an interesting vuln!
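A toy model of the balance accounting above (my own sketch, not the real indexer logic) shows how a stuck second transaction pins funds:

```python
class Brc20Wallet:
    """Toy model of BRC-20 balance accounting."""

    def __init__(self, available: int):
        self.available = available
        self.transferable = 0  # inscribed for transfer, not yet sent

    @property
    def overall(self) -> int:
        return self.available + self.transferable

    def inscribe_transfer(self, amt: int) -> None:
        # Tx 1: inscribe the transfer; funds leave 'available'.
        assert amt <= self.available
        self.available -= amt
        self.transferable += amt

    def confirm_send(self, amt: int) -> None:
        # Tx 2: actually send the inscription to the receiver.
        assert amt <= self.transferable
        self.transferable -= amt

w = Brc20Wallet(available=100)
w.inscribe_transfer(60)
# The pinning attack keeps tx 2 unconfirmed (tiny fee), so the 60 tokens
# sit in 'transferable' indefinitely: overall balance unchanged, but the
# spendable balance is slashed and the wallet is stuck.
```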

Are you serious?- 1519

Visakan Veerasamy    Reference →Posted 1 Year Ago
  • The world is full of people who don't take their world and life seriously. As humans, it's hard to optimize for anything besides the pleasures in front of us. This article talks about how unserious the world is and the benefits of being serious.
  • Being labeled as serious takes time. If somebody claims to be serious, how can we be so sure? People may believe they are serious about something but may change their minds soon enough.
  • How do we stay around for a long time? You can't take yourself too seriously. If you hit a roadblock that you can't shake off, you'll eventually quit. So, being able to enjoy what you're doing enough that the consequences don't affect your being is crucial. Richard Feynman loved physics until he had to do it professionally. Once he stopped trying to "do important work" and just started tinkering, he had all of his breakthroughs. I found this to be quite true in my own life.
  • According to the author, humans are not a serious species. We don't do enough to prepare ourselves for things that are hard, like marriage. We don't prepare ourselves because we don't want to know the truth of the matter. What's this person like in difficult times? What are their pet peeves and flaws? Do we have similar perspectives on kids? All hard questions that should be asked, but we don't ask them.
  • The final gem in this article is to be honest with yourself about how serious you are. If you say you're serious about something but you're really not, then you're wasting your time and everyone else's.

Escaping the Chrome Sandbox Through DevTools- 1518

ading2210    Reference →Posted 1 Year Ago
  • All untrusted code in Chrome, especially JavaScript on websites and within browser extensions, runs in a sandbox. Practically, this means the code is limited to a set of browser APIs instead of raw system calls. The built-in Chromium GUI pages, called WebUI and served from the chrome:// URL scheme, interface with the raw C++ code and are privileged pages that run outside the sandbox.
  • Being able to execute code within a chrome:// page is usually game over via UXSS or some other bug. So, keeping these pages clean of malicious code is important to the security of the browser. With this knowledge, our story begins with looking at Enterprise Policies in Chromium. These are a way for an administrator to enforce certain settings on devices owned by a school or company.
  • These policies are downloaded remotely as a JSON file, then placed in /etc/opt/chrome/policies for usage. Since it's annoying to write these policies by hand, the developers created a policy testing WebUI page at chrome://policy. In particular, it shows a list of provided policies, logs them and allows exporting them.
  • Oddly enough, you can't edit these policies. After some digging, they found an undocumented API for editing them. There is a feature check for whether this should be possible. Unfortunately, the check is faulty and always returns true for all builds of Chrome. At this point, we can only call the API from the privileged WebUI pages. Convincing somebody to copy JS into the console to execute is unlikely. So, how can we escalate this? Chrome extension sandbox escape!
  • A previous vulnerability had been reported in the devtools Chrome extension APIs. When calling chrome.devtools.inspectedWindow.eval(), the command is stored. If the tab crashes and is then navigated to another page, such as a WebUI page, the stored command gets executed! The key to this attack was sending the eval request before Chrome decides to disable the devtools API but while you are on a WebUI page. Classic race condition!
  • The author wondered if any variants of this vulnerability existed. They checked out the chrome.devtools.inspectedWindow.reload() function to try to do a similar thing. To the author's surprise, it worked! They could continually spam reload() requests with injected JavaScript and switch the tab to a WebUI page. This exploits a race condition in the inter-process communication that disables the devtools API. Neat!
  • What's the worst thing that we can do with the chrome://policy page? The enterprise policies have a setting for Legacy Browser Support called Browser Switcher. This is meant to launch an alternative browser when a user visits specific URLs that Chrome doesn't support. In particular, the AlternativeBrowserPath can be used to execute an arbitrary command with arbitrary arguments. This gives us a shell if we can trigger it!
  • At this point, they have a shell, but the race condition only works about 70% of the time with only a single chance to hit it. They were curious if the same revival trick from the original bug report would work. To their surprise, calling the debugger twice in a row results in a crash. At this point, the code is stored and will be launched upon recovery. It's at this point that we navigate the tab to a different location. Now, this is 100% reliable.
  • So, what's the fix? Funnily enough, Google had originally fixed this vulnerability, then added a special case that exempted reload() from the patch - they cleared all pending messages unless it was a reload. The new fixes:
    • Adding a loaderId on the renderer side, ensuring a pending command is only valid on a single page.
    • Fixing the feature flag for test policies.
    • Preventing the crash from happening in devtools. Idk how they did this though.
  • I absolutely love this blog post. The bugs are not super complex, and they have simple-to-understand code snippets, yet it's still high impact. To me, this really shows how complexity opens up the attack surface in crazy ways.
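For concreteness, here's roughly the policy payload such an exploit would push through the broken test-policy API. The policy names are Chrome's real Legacy Browser Support policies, but the exact JSON shape is my assumption:

```python
import json

# Browser Switcher pointed at an arbitrary binary: any navigation matching
# the URL list launches the "alternative browser" - i.e., the attacker's
# command. Values here are purely illustrative.
malicious_policies = {
    "BrowserSwitcherEnabled": True,
    "BrowserSwitcherUrlList": ["example.com"],   # trigger URLs
    "AlternativeBrowserPath": "/usr/bin/xcalc",  # attacker's command
    "AlternativeBrowserParameters": [],
}
payload = json.dumps(malicious_policies)
```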

Cosmos LSM Module Backdoor- 1517

AllInBits    Reference →Posted 1 Year Ago
  • The Cosmos blockchain is a popular AppChain SDK used by various blockchains like Osmosis. The main feature developer for the SDK is the Interchain Foundation. In the past 3 years, the Liquid Staking Module (LSM) was built by a third party called Iqlusion. This is where the drama is at.
  • Iqlusion developed all of the Cosmos SDK code for the LSM portion alongside an individual named Zaki. In July of 2022, Oak Security performed a security audit of the codebase. They found a fairly bad vulnerability that was brushed off by the developers and noted as intended design. In particular, a staker could avoid slashing by tokenizing their delegations, which is a major compromise of the security of the protocol.
  • A year after this code was reviewed, Zaki was contacted by the FBI (I'm serious) about the developers being linked to North Korean threat actors. For some reason, Zaki did not disclose this to anyone in the Cosmos community and continued with the project as normal. A few months after this, a proposal was made to add LSM to the Cosmos Hub. To me, this shows a major lapse in judgement from Zaki - prioritizing features and personal gain over security.
  • Eventually, LSM was added to Cosmos Hub. This is disturbing for two reasons. First, there is a fairly bad vulnerability in the repository that was never fixed. Most of the time, auditors are willing to relent after some discussion; given that the vulnerability was still there, it's strange that this got the go-ahead. Second, other issues, intentionally added by the NK developers, may have been present in the codebase without anybody knowing.
  • All of this recently came to light because of an article from CoinDesk. To me, it's scary how the code got to production without anybody flagging the security issue in the report - and how an individual didn't mention the NK developers working on this.
  • An absolutely crazy situation. When working with this amount of money and anonymity though, these things are bound to happen. Personally, I think the article repeats itself too much for dramatic effect and calls the vulnerability "critical" when the report itself from Oak Security labels it as a high. Regardless, the writeup has a lot of good links, which I appreciate.

Zendesk Backdoor for Half of All Fortune 500 Companies - 1516

hackermondev    Reference →Posted 1 Year Ago
  • Zendesk is a customer service tool. To set it up, you link it to your company's customer support email, such as support@company.com. Now, Zendesk will manage all incoming emails and create tickets for you.
  • When an email is sent to the company's Zendesk support email, a new ticket is created. To keep track of the thread, an automatic reply-to address is created with support+id{id}@company.com where {id} is the ticket number. Zendesk has ticket collaboration that lets you CC someone on email replies. The author found a really bad bug in this.
  • Zendesk did not protect against email spoofing on the collaboration feature! This meant that an attacker could impersonate the original sender to attach their own email to the ticket. Now, all of the ticket information would be readable by the attacker. Ticket IDs are also sequential, making them easy to guess.
  • When reported, both HackerOne and Zendesk claimed this fell "out of scope" because of a clause saying that "SPF, DKIM and DMARC issues are out of scope". Instead of just popping a single company with this over and over again, they decided to escalate it. In a previous blog post from 2017, the author used Zendesk to log in to private Slack workspaces by bypassing the email verification process using the support email. They wanted to reproduce this.
  • Slack had since added a protection to prevent these types of attacks: a random blob in the no-reply address. Since the exploit required knowing this blob, it wouldn't be possible anymore. But while this protection was added for Slack's own emails, it was NOT added to the other OAuth options, Google and Apple!
  • The flow for exploit was as follows:
    1. Create an Apple account with support@company.com as the email to request a verification code.
    2. Apple sends the verification code to Zendesk, which automatically creates the ticket.
    3. Use the email spoofing bug to add yourself to the ticket created for the Apple email verification.
    4. Log in to the support portal as the CCed account. The ticket now contains the code.
    5. Enter the verification code in Apple to confirm the address.
    6. Use Slack's "Login with Apple" feature with your new company email.
  • The author reported the vulnerability directly to a lot of Fortune 500 companies. They got $50K in bug bounties from these companies, but Zendesk wasn't happy. The kid (he's 15) hadn't shown Zendesk the Slack privilege escalation technique, which escalated the impact dramatically. Finally, two months after submission, Zendesk fixed the issue but claimed that email flagging in their internal systems would have caught this. Because he broke the HackerOne disclosure guidelines, he got no bounty from them.
  • Personally, I don't think this was handled properly on either side. Attackers don't care about scope - they care about impact. So, Zendesk should have dealt with this imo. Daniel also didn't show HackerOne the Slack privilege escalation, but Zendesk may not have cared even if he did. They only cared once their customers complained. Feels like a damned-if-you-do, damned-if-you-don't situation.
  • Regardless, a simple vulnerability and an amazingly creative privilege escalation, alongside the drama around the bug report, made this an awesome read. They sum up the experience best at the end: "that's the reality of bug hunting—sometimes you win, sometimes you don't."
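The ticket-threading scheme above can be sketched as follows. The address format comes from the article; the ticket IDs are invented:

```python
def reply_address(domain: str, ticket_id: int) -> str:
    # Every ticket gets a predictable reply-to thread address.
    return f"support+id{ticket_id}@{domain}"

# Ticket IDs are sequential, so seeing one ID lets an attacker enumerate
# its neighbors to find, say, the ticket holding Apple's verification email.
guesses = [reply_address("company.com", i) for i in range(1336, 1341)]
```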