Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

SSH Keystroke Obfuscation Bypass- 1495

Philippos Maximos Giavridis     Reference → Posted 1 Year Ago
  • SSH has a problem where a passive observer can deduce information from packet metadata, which violates most cryptographic principles. By default, each keystroke is sent in its own clearly identifiable, timestamped packet. To combat this, SSH started obfuscating keystrokes.
  • The obfuscation hides the keystroke packets among a wave of fake packets that should look identical. When a keystroke is made, a flood of these chaff packets goes out to hide the real keystrokes.
  • The author decided to do some analysis on the sizes of these packets to see if the protection actually worked. While analyzing, they noticed that some packets were substantially larger than the rest! The chaff packets should be the same size as the keystroke packets in order to mask them, but this doesn't appear to be the case. What's going on?
  • After reviewing the source code, Wireshark captures, and SSH verbose-mode logs, they understood what was going on... SSH can group multiple requests together into a single packet. Starting with the second keystroke, the real keystrokes are packaged up with a PING packet, creating a packet twice the size of a normal keystroke plus two server-side responses.
  • Using this knowledge, it's possible to get the same information as before - how many keystrokes were made at what intervals. They created a pretty cool tool for doing this! Typing out certain commands (such as sudo apt upgrade) has a specific rhythm, making it possible to recover the actual command from the traffic. Overall, a good post on side-channel analysis and how easy it is to mess up these types of protections.
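
The size-based leak can be sketched in a few lines. This is a hypothetical illustration, not the author's tool: the packet sizes and timestamps below are made up, and real captures would need actual parsing.

```python
# Hypothetical sketch of the size side channel; sizes/timestamps are invented.
from statistics import median

def recover_keystroke_times(packets):
    """packets: list of (timestamp, size) for client->server traffic.

    Chaff packets share the size of a single-keystroke packet, but a real
    keystroke coalesced with a PING is roughly twice that size, so the
    oversized packets betray the keystroke timing."""
    base = median(size for _, size in packets)
    return [ts for ts, size in packets if size >= 1.8 * base]

# Synthetic capture: chaff packets of size 36, coalesced keystrokes of 72.
capture = [(0.00, 36), (0.01, 36), (0.12, 72), (0.13, 36),
           (0.31, 72), (0.33, 36), (0.52, 72)]
print(recover_keystroke_times(capture))  # -> [0.12, 0.31, 0.52]
```

With the keystroke timestamps recovered, inter-key timing analysis can then guess at commands with distinctive rhythms.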

Feeld dating app – Your nudes and data were publicly available- 1494

Bogdan Tiron - Fortbridge    Reference → Posted 1 Year Ago
  • A dating app is an absolute mess in terms of access control. Shocker...
  • The first bug really sets the tone - non-premium users can use premium functionality via direct request; the mobile app just doesn't show it to them. Classic bug.
  • After the first vulnerability, the post becomes a pile of access control vulnerabilities that are mostly uninteresting from a technical standpoint. Through simple IDORs on the GraphQL APIs, you could read other users' messages, update another person's profile, get a like from any user, send messages in another person's chat, and view other people's matches.
  • It was possible to view another user's attachments as well. This was a fairly standard IDOR, except that prepending v1 to the URL bypassed all authorization checks. Fuzzing does wonders when done correctly, but this is a fairly weird thing to fuzz for.
  • The other interesting bug: attempting to re-delete a message would return the contents of that message. Why a message is kept around after deletion, I'm not sure, but it's an interesting case of an IDOR leading to information disclosure in a weird place. The same bug could be used to delete and edit other users' messages as well.
  • The main reason I wrote this up was how bad the access control was and how large the impact. Sometimes, the things without bug bounties are worth looking at in order to make the world a more secure place.
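
The IDOR pattern running through all of these bugs looks roughly like this. The endpoint, query, and field names below are invented for illustration, not Feeld's real API:

```python
# Hypothetical sketch of the IDOR pattern; endpoint and field names are
# invented, not Feeld's real GraphQL schema.
def fetch_messages(session_token, conversation_id):
    # The server checks only that the token is valid - never that the
    # conversation actually belongs to the caller - so any ID works.
    return {
        "url": "https://api.example.com/graphql",
        "headers": {"Authorization": f"Bearer {session_token}"},
        "json": {"query": f'{{ conversation(id: "{conversation_id}") '
                          f'{{ messages {{ text }} }} }}'},
    }

# Attacker authenticated as themselves, but requesting someone else's chat:
req = fetch_messages("attacker-token", "victim-conversation-id")
```

The fix is the same everywhere: resolve the object, then verify ownership before returning it.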

Zero-Click Calendar invite — Critical zero-click vulnerability chain in macOS- 1493

Mikko Kenttala    Reference → Posted 1 Year Ago
  • macOS Calendar is paired with all of the other macOS services like Mail. The author found a bug in it leading to RCE, which is terrifying. They don't just show the bug - they show how to steal photos too!
  • Calendar invites can have attachments. When the attachment's filename is used as part of a path, it is not sanitized. This gives us a classic directory traversal, which I cannot believe actually happened in something this important. The result is an arbitrary file write, or an arbitrary file delete if the event/attachment is deleted.
  • Gaining RCE from this was not an easy task and required writing many files and using the Open File functionality of Calendar. First, they create a calendar entry that has Siri Suggested content. This will open other injected files in the future. The next attachment converts old calendar formats to the new format to make sure this attack will work.
  • The next attachment is a .dmg file. This dmg contains a background image that points to an external Samba server. For whatever reason, even though the file has the quarantine flag, it will not be subject to quarantine. The next injected file is used to open a URL served from the previously mounted Samba share. Finder will attempt to open this application, indexing the file and registering a custom URL type.
  • The final file (triggered by the Siri events mentioned before) will open the custom URL that was just registered. When this URL is opened, it will execute the binary! This is possible because the quarantine flag is not set on the samba loaded file, for whatever reason. When the file is executed, it pops a shell or does something more interesting like stealing photos...
  • TCC in macOS should prevent access to photos. However, they found a clever trick to steal them anyway. By abusing the RCE, the configuration of Photos can be changed to control its iCloud settings. This allows them to control the location the files are downloaded to! When the sync happens, they can recover the sensitive files.
  • An amazing blog post! Many of the techniques for taking this to zero-click RCE were interesting and specific to macOS, and probably took a lot of reverse engineering. Using the Siri autoloading to open links, Samba-downloaded files not being quarantined, and forcing the indexing of the custom URI were all awesome finds. The bug was simple but the exploitation was not!
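
The root-cause filename traversal is easy to demonstrate in miniature. The directory below is a placeholder, not Calendar's actual attachment path:

```python
# Minimal illustration of the unsanitized-filename bug; the directory is a
# placeholder, not Calendar's real attachment path.
import os

ATTACHMENT_DIR = "/Users/victim/Library/Calendars/Attachments"

def save_path_vulnerable(filename):
    # Vulnerable: the attacker-supplied name is joined straight into the path.
    return os.path.normpath(os.path.join(ATTACHMENT_DIR, filename))

def save_path_fixed(filename):
    # Safer: strip any directory components before joining.
    return os.path.join(ATTACHMENT_DIR, os.path.basename(filename))

evil = "../../../../tmp/payload.dmg"
print(save_path_vulnerable(evil))  # escapes the attachment directory
print(save_path_fixed(evil))       # stays inside it
```

Because both the event and its attachment path are attacker-controlled, the same primitive gives the write and the delete.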

URL validation bypass cheat sheet- 1492

PortSwigger    Reference → Posted 1 Year Ago
  • URLs are notoriously hard to parse. This article is a list of easy-to-try URL domain bypasses, including absolute URLs, CORS bypasses, and weird host headers.
  • The payloads cover different encodings (URL encoding), classic parser differentials such as semicolons and https://\\, and usage of usernames/passwords in the URL.
  • I had been writing a CTF challenge for the Spokane Cyber Cup, and from this article I immediately found 3 bypasses for one of my challenges. Solid techniques!
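
One class from the cheat sheet, the userinfo trick, can be shown with Python's own parser. The hostnames here are placeholders:

```python
# Userinfo confusion: a naive prefix check passes while the request actually
# goes to the attacker's host. Hostnames are placeholders.
from urllib.parse import urlparse

def naive_check(url):
    # Broken validation: string prefix instead of parsing out the host.
    return url.startswith("https://expected.com")

url = "https://expected.com:fakepassword@evil.com/"
print(naive_check(url))        # True - the "validation" passes
print(urlparse(url).hostname)  # evil.com - the host we actually reach
```

Everything before the `@` is treated as credentials, so `expected.com` never factors into the connection at all.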

Writeup of CWA-2023-004- 1491

CertiK    Reference → Posted 1 Year Ago
  • In CosmWasm, a module for running Wasm on Cosmos blockchains, the maximum Wasm payload is 800KB. Before a contract is saved to disk, it goes through sanity checks, including one to ensure it's not too big. The bug is effectively a zip bomb to slow the chain down.
  • When compiling the Wasm bytecode, function signatures can be inlined multiple times in the compiled code. By using a large signature with many references, it's possible to make the compiled module balloon to megabytes or gigabytes in size when loaded. If it grows larger than 2GB, CosmWasm can panic.
  • The cosmwasm-vm crate uses the Mutex type to guard against race conditions on the module's inner cache. If code panics while holding the mutex, the lock becomes poisoned and unusable, creating a denial of service whenever that object is used. Since all CosmWasm calls now crash, major parts of the contract stop working.
  • From the user's perspective, this translates to the blockchain stalling on every transaction, akin to a network outage. To fix the issue, restrictions were added on the maximum number of functions, parameters, and total function parameters. This limits the size of a payload but doesn't really fix the root cause. Interesting!
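
The failure mode is the classic decompression-bomb shape: the size check runs before expansion. This sketch uses zlib purely as an analogy - the real bug is Wasm compilation blow-up, not compression - but the arithmetic is the same:

```python
# Analogy only: the CosmWasm bug is compilation blow-up, not zlib, but in
# both cases the sanity check measures the payload *before* it expands.
import zlib

payload = zlib.compress(b"\x00" * 10_000_000)  # 10 MB of zeros, compressed

# The pre-expansion sanity check passes easily...
assert len(payload) < 800 * 1024  # comfortably under an 800KB-style limit

# ...but the expanded artifact is enormous.
expanded = zlib.decompress(payload)
print(len(payload), len(expanded))  # tiny input, 10,000,000-byte output
```

Capping function and parameter counts bounds the expansion factor, which is why the fix works even though the check still measures the input.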

Exploiting Misconfigured GitLab OIDC AWS IAM Roles- 1490

Nick Frichette    Reference → Posted 1 Year Ago
  • OpenID Connect (OIDC) is a common authentication protocol. Of course, AWS supports a way for services outside of AWS to assume IAM roles using it. Besides this post, the authors have documented many other cases where OIDC permissions are incorrect, leading to privilege escalation. The service of focus this time is GitLab.
  • The default trust policy for GitLab OIDC authentication contains the principal (gitlab.com), an action of AssumeRoleWithWebIdentity, and an optional condition key of gitlab.com:sub, which names the group, project, or branch that is allowed to assume the role.
  • The misconfiguration stems from that condition key being optional - aka, it fails open. The sub field on the JWT - who is permitted to assume the role - is not required. If it's not included, there are a wide variety of ways to assume the role in AWS.
  • The example policy used for the test does not include the sub at all, only the aud. To exploit this, an attacker needs a valid JWT for the sts:AssumeRoleWithWebIdentity invocation. That only requires having an account on GitLab and creating a project with CI and support for JWT generation. In the CI, we can simply output the GITLAB_OIDC_TOKEN and it will work for us.
  • In AWS, we can then use the token in a call to sts:AssumeRoleWithWebIdentity to assume the role. The trust policy the AWS console generates for GitLab is insecure by default, which is terrifying. For GitHub Actions and Terraform Cloud, AWS made changes to require the specific fields. Overall, a good and concise write-up of a common AWS misconfiguration.
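
The rough shape of the two policies, written here as Python dicts for illustration. The account ID and project path are placeholders, and the exact sub format should be checked against GitLab's documentation:

```python
# Rough shape of the trust policy; account ID and project path are
# placeholders, not values from the post.
vulnerable_statement = {
    "Effect": "Allow",
    "Principal": {"Federated": "arn:aws:iam::123456789012:oidc-provider/gitlab.com"},
    "Action": "sts:AssumeRoleWithWebIdentity",
    # Only the audience is pinned - any gitlab.com identity can satisfy this.
    "Condition": {"StringEquals": {"gitlab.com:aud": "https://gitlab.com"}},
}

fixed_statement = {
    **vulnerable_statement,
    "Condition": {"StringEquals": {
        "gitlab.com:aud": "https://gitlab.com",
        # Pinning sub restricts the role to one project and branch.
        "gitlab.com:sub": "project_path:my-group/my-project:ref_type:branch:ref:main",
    }},
}
```

The difference is one condition key; without it, any GitLab CI job anywhere can mint a token that satisfies the policy.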

Unauthenticated Access to GCP Dataproc Can Lead to Data Leak- 1489

Roi Nisimi    Reference → Posted 1 Year Ago
  • Google Dataproc is a managed service that runs Apache Spark and Hadoop clusters for data analytics workloads. When creating an instance, the default allows no internet access, but computers in the same VPC can access the service completely.
  • The Dataproc cluster exposes a YARN Resource Manager on port 8088 and HDFS on port 9870. Neither requires any authentication.
  • If an attacker has access to a vulnerable compute instance via an RCE bug, they can then access the Dataproc clusters. If they access the HDFS endpoint, they can browse through a file system to obtain sensitive data.
  • Their key takeaway - hosting an OSS project without considering the security consequences - is a good callout though. To me, the issue is on Google for deploying it this way. I'd personally add better default network permissions to prevent this from happening. The authors are right - shells happen, and if the public instance doesn't need access to the service, it shouldn't have network access to it.
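
From a foothold inside the VPC, browsing HDFS is just unauthenticated HTTP against the WebHDFS REST API. The hostname below is a placeholder, and nothing here runs without a reachable cluster:

```python
# Sketch of browsing the unauthenticated WebHDFS endpoint from inside the
# VPC; the hostname is a placeholder.
import json
from urllib.request import urlopen

def webhdfs_url(host, path="/"):
    # WebHDFS REST API on the default HDFS NameNode web port - no credentials.
    return f"http://{host}:9870/webhdfs/v1{path}?op=LISTSTATUS"

def list_hdfs_dir(host, path="/"):
    with urlopen(webhdfs_url(host, path)) as resp:
        statuses = json.load(resp)["FileStatuses"]["FileStatus"]
    return [entry["pathSuffix"] for entry in statuses]

# From a compromised VM in the same VPC:
# list_hdfs_dir("dataproc-cluster-m.internal")  # walk the filesystem
```

A network-level fix (firewall rules scoping 8088/9870 to the cluster itself) would close this off regardless of the missing authentication.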

Persistent XSS on Microsoft Bing.com by poisoning Bingbot indexing- 1488

Supakiad S. (m3ez)    Reference → Posted 1 Year Ago
  • Bing is the Microsoft search engine. BingBot is the web crawler used to keep Bing up to date with search results.
  • When a user searches for a video on Bing, the search engine retrieves the content from its index with all of the video's details. Even though the data is stored as JSON, the returned content type is text/html for some reason.
  • Since the metadata associated with a video is completely attacker-controlled, the browser may confuse the response for a loadable HTML page! The author created a video on several different platforms with script tags in its metadata. Once the indexer picked this up, visiting the exact page for that video leads to stored XSS on Bing. A user must click the link in order to be exploited though.
  • Another Content-Type mishap! I feel like I've been seeing more and more write-ups about this. Good find!
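
The bug class in miniature - the response bodies and headers here are hypothetical, not Bing's actual API:

```python
# Miniature of the bug class (hypothetical names): attacker-controlled JSON
# served with the wrong Content-Type becomes executable HTML.
import json

video = {"title": "<script>alert(document.domain)</script>"}

def respond_vulnerable():
    # JSON body, but the browser is told it's HTML -> the script runs.
    return json.dumps(video), {"Content-Type": "text/html"}

def respond_fixed():
    # Correct type plus nosniff stops the browser rendering it as a page.
    return json.dumps(video), {"Content-Type": "application/json",
                               "X-Content-Type-Options": "nosniff"}
```

Same bytes on the wire; only the declared type decides whether the title is data or a payload.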

Bypassing CSP via URL Parser Confusions : XSS on Netlify’s Image CDN- 1487

Sudhanshu Rajbhar    Reference → Posted 1 Year Ago
  • Many websites use Static Site Generators alongside an Image CDN to optimize the images being loaded, such as Next.js, which this website uses. Behind the scenes, the image CDN has a URL parameter for the image. The allowed URLs are typically restricted to an allowlist, with some Content-Type validation as well.
  • On Netlify, the endpoint is /.netlify/images?url=. The author fed the main page into this endpoint, with the requested content-type being text/html, and got a response of HTML! So, if we could find an arbitrary file upload on the site, we could achieve XSS through this endpoint.
  • For the website, all of the images are uploaded to the same CDN. Using this, it's trivial to upload a file with arbitrary content, though it must have a valid content-type. When browsing to the file on the CDN, it pops an alert box. However, trying the same through the image endpoint doesn't work because of the CSP.
  • How does this CSP work? It turns out it's a dynamic nginx configuration! If the location matched the /.netlify/images?url= path, it returned script-src 'none'. If we could trick nginx into seeing a different URL while the underlying application still parsed it as the images endpoint, we would have a CSP bypass.
  • The author tried /./.netlify/images?url=..., which nginx parses differently than the underlying application. Neat! The CSP now contains a script-src that allows our script. To make this work in the browser, the path needs to be URL encoded as /.netlify%2fimages though. This gives us XSS!
  • Netlify fixed the issue, but the author found another bypass using an additional leading slash, which for whatever reason has not been patched yet. Instead, they changed the types of files allowed on the CDN but left the parsing issue the same as before.
  • Overall, a super interesting bug report! A mix of new technology with old bugs is fun to see.
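
The core differential can be reproduced with Python's posixpath standing in for the normalizing backend: a prefix match on the raw path misses, while normalization still lands on the images endpoint.

```python
# The parser differential in two lines: a raw-path prefix check vs. a
# backend that normalizes the path first.
import posixpath

raw = "/./.netlify/images"

# nginx-style prefix match on the raw path: no match, so no strict CSP.
print(raw.startswith("/.netlify/images"))  # False

# The backend normalizes and still routes to the images endpoint.
print(posixpath.normpath(raw))             # /.netlify/images
```

Whenever two components parse the same URL differently, whichever one enforces security can be steered around via the other.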

Flask-CORS allows the `Access-Control-Allow-Private-Network` CORS header to be set to true by default - 1486

flask-cors    Reference → Posted 1 Year Ago
  • Private Network Access (PNA) is a new browser security feature to prevent public websites from directly accessing local networks. Segmenting the local network is important for preventing CSRF-like attacks from compromising a user's network.
  • The mechanism is the Access-Control-Allow-Private-Network header. If this header is not included in a site's preflight response, the browser will reject the local network request.
  • In Flask-CORS, the default for this header was true, which effectively removed the protections of the new PNA specification. The fix simply sets the default to false.
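
What the default means in practice, sketched with a hypothetical preflight handler (this is not Flask-CORS's actual API):

```python
# Hypothetical preflight sketch, not Flask-CORS's real API: the PNA header
# should only be sent when the operator explicitly opts in.
def preflight_headers(origin, allow_private_network):
    headers = {"Access-Control-Allow-Origin": origin}
    if allow_private_network:
        # Answering 'true' lets a public website drive requests into the
        # user's local network - this was the insecure default.
        headers["Access-Control-Allow-Private-Network"] = "true"
    return headers

print(preflight_headers("https://evil.example", True))   # header present
print(preflight_headers("https://evil.example", False))  # header absent
```

A secure default means the header is simply omitted, so the browser blocks the private-network request unless the server opts in.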