Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Exploring Container Security: A Storage Vulnerability Deep Dive - 699

Fabricio Voznika & Mauricio Poppe - Google Cloud    Reference → Posted 4 Years Ago
  • Kubernetes is a container orchestration framework built by Google for easy deployment, scaling and management of containers.
  • In Kubernetes, there is a feature that allows sharing a volume among the containers of a pod. For instance, this could be used to pre-populate a database.
  • Since the user can provide the path to be used for this, Kubernetes has to be extra careful when handling it. Symbolic link attacks and time-of-check vs. time-of-use (TOCTOU) issues are quite common in this area.
  • A previous vulnerability involved a symbolic link: one container would create a symbolic link pointing outside of the container. Then, when another container started up and set up a volume following this symbolic link, the mount would land on the host filesystem instead.
  • The fix for another vulnerability was to make sure the subpath mount location is resolved and validated to be inside the volume. This addressed a TOCTOU issue between the verification and the use of the path.
  • The previous fix takes several steps to ensure that the directory being mounted is safely opened and validated. After the file is opened and validated, the kubelet uses the magic-link path under the /proc/[pid]/fd directory for all subsequent operations to ensure the file remains unchanged, which is awesome. However, the authors found out that all of the effort in this fix was for naught, because mount resolves (canonicalizes) the procfs magic-link by default.
  • So, there is a small race condition: but is it exploitable? Very much so! There is a syscall called renameat2 which, with the RENAME_EXCHANGE flag, atomically swaps two file paths. By running this in a loop, it is possible to get the verification to check one file but the mount to use another!
  • The solution to the bug was to add the --no-canonicalize flag to the mount command. This ensures that the tool doesn't use the magic links.
  • TOCTOU bugs are hard to find, as they usually only exist in complicated applications. Files are a great place to look for these bugs though; using them securely is anything but intuitive.
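
A minimal sketch of this check-then-use race in Python (all paths and helper names here are made up; the real exploit swapped paths with renameat2's RENAME_EXCHANGE in a loop, while this POSIX-only sketch performs the swap just once between the check and the use):

```python
import os
import tempfile

def inside_volume(base: str, path: str) -> bool:
    # Verification step: resolve the path and confirm it stays in the volume.
    return os.path.realpath(path).startswith(os.path.realpath(base) + os.sep)

vol = tempfile.mkdtemp()
subpath = os.path.join(vol, "data")
os.mkdir(subpath)
checked = inside_volume(vol, subpath)  # the check passes...

# ...then the attacker wins the race: swap the directory for a symlink.
os.rmdir(subpath)
os.symlink("/etc", subpath)

# The later "use" (think: mount) now follows the link out of the volume.
print(checked, inside_volume(vol, subpath))  # True False
```

This is also why --no-canonicalize helps: the mount operates on the already-validated magic-link path instead of re-resolving an attacker-controlled one.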

Bypassing Box’s Time-based One-Time Password MFA - 698

Tal Peleg - Varonis    Reference → Posted 4 Years Ago
  • Sometimes, a username and password alone are not enough for security. In highly sensitive areas, additional layers of security are necessary. So, another way to demonstrate that someone owns an account is required; this is commonly called MFA (multi-factor authentication).
  • The Google Authenticator app and SMS text messages are common ways to do this. An attacker being able to bypass MFA is quite bad if they already have the username and password.
  • The bypass is quite simple: remove the MFA. Once the user is in a partially authenticated state (after supplying the username and password, but before the TOTP code), the /mfa/unenrollment endpoint can be used to remove the MFA.
  • To remediate this, the user now needs to be completely authenticated to remove the MFA. The author also mentions using a SaaS app to do MFA instead to avoid these types of issues.
  • In terms of testing, partially authenticated states are interesting to probe. Most APIs worry about authorization between users and treat authentication as binary: either the user is who they say they are or they are not. This API simply did not consider the case where a user had only completed the username/password step. This is a great place to test!
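
The flawed check boils down to a few lines; a toy model (the function and field names are hypothetical, purely to illustrate the partial- vs. full-authentication distinction):

```python
# Hypothetical session state: after username/password the session exists,
# but the MFA step has not been completed yet.
def can_unenroll_mfa_flawed(session: dict) -> bool:
    # Only checks that *a* session exists -- partial auth is enough.
    return session.get("password_ok", False)

def can_unenroll_mfa_fixed(session: dict) -> bool:
    # The remediation: require the MFA step to have been completed too.
    return session.get("password_ok", False) and session.get("mfa_ok", False)

partial = {"password_ok": True, "mfa_ok": False}
print(can_unenroll_mfa_flawed(partial))  # True: attacker can strip the MFA
print(can_unenroll_mfa_fixed(partial))   # False
```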

CSS injection via link tag whitelisted-domain bypass - 697

zonduu - Glassdoor    Reference → Posted 4 Years Ago
  • Glassdoor can load CSS on their domains via link tags controlled by the URL. There is a domain allowlist, making sure that you cannot load your own CSS into the page.
  • The allowlisting functionality has a flaw though. By supplying a URL such as https://zonduu.me/example.css?http://www.glassdoor.com/ in the parameter, the attacker's CSS gets injected into the website via the link tag. To me, this looks like an issue with a regex, or a case of an in/contains check being used improperly.
  • What can you do with arbitrary CSS loaded into the page? In Internet Explorer, you can get XSS via the expression() function. Additionally, using standard CSS selectors, content from the page's HTML can be read and leaked out. Another attack could be looking for sensitive information in URL query strings, such as OAuth tokens.
  • The report was initially triaged as a medium but moved to a low, with the service team saying it did not have much impact. To me, being able to exfiltrate data from a page should be of higher severity, but I'm unsure how to practically do this with CSS alone.
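
My guess about an in/contains check can be made concrete (a hypothetical reconstruction, not Glassdoor's actual code): a substring match accepts any URL that merely mentions the allowed domain, while parsing the authority does not.

```python
from urllib.parse import urlparse

ALLOWED = "www.glassdoor.com"

def flawed_check(url: str) -> bool:
    # Substring match: the allowed domain may appear anywhere, e.g. in the
    # query string of an attacker-controlled URL.
    return ALLOWED in url

def fixed_check(url: str) -> bool:
    # Parse the URL and compare the actual host.
    return urlparse(url).hostname == ALLOWED

evil = "https://zonduu.me/example.css?http://www.glassdoor.com/"
print(flawed_check(evil))  # True: the attacker's stylesheet is accepted
print(fixed_check(evil))   # False
```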

Whispers Among the Stars - 696

James Pavur - DEF CON Safe Mode    Reference → Posted 4 Years Ago
  • Satellites are absolutely everywhere! Are they secure? It turns out that a lot of data is being sent unencrypted over satellite broadband, for both internet and TV. In the early days of the web, plaintext HTTP requests were mostly fine; but as the internet grew more complicated and more heavily used, the security of these links did not improve with it.
  • Satellite communication for internet traffic is quite odd. This is the flow:
    1. User makes a request.
    2. Request is sent to the satellite.
    3. Request is received by the satellite.
    4. No processing is done. The request is sent back down to the ground station. In reality, this is a VERY large beam that covers a large portion of the earth.
    5. The ground station converts this to internet traffic.
    6. Ground station sends data back to the satellite.
    7. Satellite sends the data to the original location.
  • Who can read this traffic? It turns out that a cheap TV satellite dish with a card for processing the data is all you need; the researchers ended up spending about $400. Using the tool EBSPro (normally used for finding satellite TV signals), a spectrogram shows the signals that come from satellite feeds. The card can then be used to dump the raw data from the signal, via a bundled tool called TBS Recorder. The output is raw binary data, but simply grepping for HTTP in this output shows us internet traffic! At this point, we can see private information, which is a serious security vulnerability. To make matters worse, this can be done from a different continent!
  • The two main protocols for sending information are MPEG and Generic Stream Encapsulation (GSE). GSE is more common for maritime (boats), aviation and bigger clients. Past research focused on MPEG, but this research built upon GSE. They built a tool called GSExtract that does a fuzzy search for HTTP traffic and can partially recover details with cheap equipment and poor-quality streams.
  • What does this mean practically? None of the customer data was encrypted by default; the researchers essentially had the same viewpoint as an internet service provider (ISP). However, things get worse for maritime and similar customers: they used the link for LAN communication, traffic that would normally sit behind a firewall. Services such as LDAP and email are open, editable and viewable. And even with TLS, DNS is still unencrypted.
  • As an example, the authors saw a ton of information about a lawyer. They could see private emails between the lawyer and a client, and the lawyer's DNS traffic, including lookups for PayPal. Since they knew the lawyer's email address and could see both internet traffic and DNS, they could hit the reset-password link on PayPal and take over the account. Damn, even though TLS is employed on HTTP traffic, this does not mean that everything is secure!
  • Passwords for configuration operations, FTP services carrying electronic chart display and information system (ECDIS) data, point-of-sale (PoS) traffic with credit cards, GSM cellular devices on airplanes... All of this passive logging is absolutely terrible when looking at the bigger picture.
  • A particularly interesting target was the aviation industry. When they started this research in 2020, everything was going well and they were viewing lots of traffic. However, the pandemic stopped flights in their tracks, leaving far less traffic. But there is a silver lining here: less traffic from people using Instagram left them with almost ONLY the operational traffic of the airline or the airplane. This made it completely possible (and a rare opportunity) to see how the satellite traffic of planes actually worked. Eventually, they were able to fingerprint the electronic flight bag (EFB) service on airlines and several other things. Interesting flip of the script!
  • Can anything active be done? TCP session hijacking! The TCP three-way handshake contains random values that normally cannot be predicted, but here the attacker can simply read them off the air. With that, the traffic being sent down can be altered in some way. For instance, a website (not using HTTPS) talking to a ship could be altered to have a different response than what was expected. Even though we cannot send bad packets to the satellite itself, since we are a part of the internet, this creates a major problem.
  • How can we protect against these attacks? Simply use encryption all over the place. DNSSEC, HTTPS and all that jazz are a good solution to the problem. When this is not possible, data should be sent over an encrypted VPN connection. The authors were also building a tool called QPEP to keep performance good while still encrypting the traffic.
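
The "grep for HTTP in the raw dump" step is easy to picture; a toy version in Python (the capture bytes below are invented for illustration; real dumps are gigabytes of framing):

```python
import re

# A fake slice of a raw capture: framing bytes around a plaintext request.
dump = b"\x00\x1f\x8aGET /account HTTP/1.1\r\nHost: example.com\r\n\x00\xff"

# Scan the binary blob for HTTP request lines, ignoring everything else.
requests = [m.group().decode() for m in
            re.finditer(rb"(?:GET|POST) [^\r\n]+ HTTP/1\.[01]", dump)]
print(requests)  # ['GET /account HTTP/1.1']
```

GSExtract does far more (fuzzy-matching partially corrupted GSE frames), but the core observation is the same: the payload is plaintext, so pattern matching is enough.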

KVM: SVM: out-of-bounds read/write in sev_es_string_io - 695

fwilhelm - Project Zero (P0)    Reference → Posted 4 Years Ago
  • KVM (Kernel-based Virtual Machine) is the built-in virtualization platform in the land of Linux. This vulnerability focuses on the SEV-ES (Secure Encrypted Virtualization - Encrypted State) functionality of KVM.
  • When an SEV-ES enabled guest triggers a VMGEXIT for a string I/O instruction, the function sev_es_string_io is called to copy the string between an unencrypted guest memory region and the virtualized target device. When doing this copy, the size and count (number of elements) variables are controlled by the attacker.
  • With this data, a memcpy is performed. However, the destination buffer is limited to 0x1000 bytes! So, if we specify a write outside of this range, we have an out-of-bounds write primitive. In practice, size * count > 0x1000 is all we need.
  • A similar bug exists on the read side. Interestingly enough, the read functionality ALWAYS resulted in an out-of-bounds read if the value was greater than 1. It is sort of weird that this got through the code review process, as it was buggy from the start.
  • When running the kernel with KASAN (the kernel address sanitizer), a crash occurs on both of these attempts: both the out-of-bounds read and the out-of-bounds write. Why wasn't this discovered before? VMGEXIT is a shutdown function, which is not what is usually fuzzed. The more you fuzz, the more you will find!
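
The missing check fits in a couple of lines; a toy model (the 0x1000 limit comes from the write-up, everything else here is illustrative, not the kernel code):

```python
SHARED_BUF_SIZE = 0x1000  # size of the destination buffer (per the write-up)

def copy_len_unchecked(size: int, count: int) -> int:
    # What the vulnerable path effectively did: trust guest-supplied values.
    return size * count

def copy_len_checked(size: int, count: int) -> int:
    # The obvious fix: reject copies larger than the destination buffer.
    total = size * count
    if total > SHARED_BUF_SIZE:
        raise ValueError("string I/O larger than the shared buffer")
    return total

print(hex(copy_len_unchecked(4, 2048)))  # 0x2000: double the buffer size
```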

Apple ColorSync: use of uninitialized memory in CMMNDimLinear::Interpolate - 694

mjurczyk    Reference → Posted 4 Years Ago
  • A color profile is a set of data that characterizes a color input or output device, or a color space. In practice, a binary chunk-based format is used, with the extensions .icc and .icm. These files are commonly processed by embedded devices, making them good targets for attackers.
  • While fuzzing the Apple ColorSync library on macOS and iOS, the author found a crash. The crash occurs when an invalid 8-bit input channel count field of 0 is used, and results in a read at an invalid memory address.
  • The function first calculates the desired start address within a data point array, and then starts reading the data points in a loop. The address is calculated with the following expression: &lutAToB->CLUT.DataPoints[2 * x * y]. The variable y is supposed to be initialized in a for loop prior to this access. However, if the number of iterations in that loop is 0, the variable is never initialized.
  • Since the variable is left uninitialized on the stack and then used, the y coordinate can contain a large positive or negative value. If an attacker could control this value, it would create a very nice out-of-bounds read primitive.
  • Overall, this is an interesting bug! The value never gets initialized, then gets used as an index. Even though somebody reading the code would assume the variable gets initialized in the for loop, this is not the case when the loop bound is 0.
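
The shape of the bug, transposed to Python (where a zero-iteration loop leaves the variable unbound and the runtime raises, instead of silently reading stale stack memory as C does):

```python
def interpolate(channel_count: int) -> int:
    for y in range(channel_count):  # channel_count == 0: body never runs
        pass
    # Mirrors the DataPoints[2 * x * y] index; in C, y would hold whatever
    # happened to be left on the stack here.
    return 2 * 1 * y

try:
    interpolate(0)
    result = "y was initialized"
except NameError:  # UnboundLocalError is a subclass of NameError
    result = "y was never initialized"
print(result)  # y was never initialized
```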

Renes'hack - 693

CollShade    Reference → Posted 4 Years Ago
  • The author was working with a device that had the Renesas RX65 chip on it and wanted to see the contents of its firmware. Alas, the chip had a programmer lock on it: the author needed a 16-byte ID code that they clearly did not have. Additionally, all of the protocols for communication were proprietary. This looks like a challenge to me if I've ever seen one!
  • Three interfaces for firmware operations appeared in the docs: USB, FINE and SCI. FINE is a proprietary interface used by Renesas and is not documented. USB is also not documented, so the easiest option would be the serial interface. However, the serial protocol had a limit on the number of ID-code attempts, and USB was not exposed on the device. So, the best option was FINE.
  • By reading schematics and datasheets, the author found out that FINE was a single-wire interface. Using the Renesas Flash Programmer on a dev board made it possible to see the traffic being sent. Here's the problem though: this did not allow discriminating between the host and device sides of the communication, since both share the one wire.
  • To fix this problem, the author added a small resistor on the OCD side, forming a voltage divider. When the MCU pulls the line low, the voltage at the center point (ADC) is 0V. However, when the OCD pulls the line low, there is a small voltage (about 200mV) at the center point. This allowed differentiating between the two directions of communication and better reversing of the protocol.
  • Even with this amazing setup and some reversing, a friend of the author noted that the FINE protocol looked quite similar to the SCI protocol, which is well documented in the manual. There are some notes on the two protocols, but the author leaves the rest of this reversing for someone else to do.
  • How can the ID check be bypassed? Glitching! If we can change the code path taken by physically disturbing the device, then we can access the programming mode. In this case, the goal was to glitch the power supply of the MCU right after sending the Serial Programming ID Code Check Command over FINE.
  • To set this up properly, the author removed every capacitor on VCC to create a direct connection to the core power supply. This lets the voltage and current dips actually glitch the system instead of being smoothed out as usual. To set up the glitching, this is what the author did:
    1. Run the initialization sequence of FINE until the ID Check is sent.
    2. Send the command to the MCU. This is where our timer for the glitch starts.
    3. Glitch the system at a set interval.
    4. Repeat at this offset 50 times. Increase the timer offset if it does not work.
  • After running this for a very long time, the programmer works and the Flash can be extracted! Glitching is extremely powerful but complicated to set up. The author also filed CVE-2021-43327 for this vulnerability in the chip. Interesting bypass, and it's really cool to see a real glitching setup.
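
The search loop can be sketched as follows; the hardware interface here is entirely made up (a toy simulator stands in for the real programmer and glitcher), and only the loop structure mirrors the steps above:

```python
class FakeTarget:
    """Toy stand-in for the FINE link plus glitcher -- not a real API."""
    def __init__(self, vulnerable_delay_ns: int):
        self.vulnerable_delay_ns = vulnerable_delay_ns
        self._last_pulse = None
    def init_sequence(self):               # step 1: FINE init up to ID check
        pass
    def send_id_check(self, code: bytes):  # step 2: the timer starts here
        pass
    def pulse(self, after_ns: int):        # step 3: drop VCC at this offset
        self._last_pulse = after_ns
    def programming_mode_open(self) -> bool:
        return self._last_pulse == self.vulnerable_delay_ns

def search_glitch(target, max_delay_ns: int, step_ns: int = 10,
                  tries: int = 50):
    for delay in range(0, max_delay_ns, step_ns):
        for _ in range(tries):             # step 4: 50 attempts per offset
            target.init_sequence()
            target.send_id_check(b"\xff" * 16)
            target.pulse(after_ns=delay)
            if target.programming_mode_open():
                return delay               # found the sweet spot
    return None

print(search_glitch(FakeTarget(vulnerable_delay_ns=120), max_delay_ns=500))
```

In the real setup each attempt takes wall-clock time on actual hardware, which is why this ran "for a very long time".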

Jupyter Notebook Instance Takeover - 692

Gafnit Amiga - Lightspin    Reference → Posted 4 Years Ago
  • Amazon SageMaker is a fully managed machine learning service hosted on AWS. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.
  • While looking at the source code of the website via view-source:, the author noticed an interesting file path being used for the environment. This directory, home/ec2-user/anaconda3/envs/JupyterSystemEnv/share/jupyter/lab/static, had several HTML, JavaScript and CSS files inside of it. By modifying the HTML page, the author trivially achieved XSS! But here is the thing: this was a self-XSS. What can we really do with that?
  • Since all domains are of the form <my_instance><region>.sagemaker.aws, the self-XSS could potentially be escalated beyond the hosted instance. They noticed that all of the cookies were scoped to .sagemaker.aws! In particular, the anti-CSRF token was on this domain.
  • The author used an attack that I had never considered before: cookie tossing. This is when an attacker on one subdomain sets a cookie scoped to the parent domain, so that it is sent along to other subdomains where critical actions happen. With our XSS, we can do exactly that from our own instance.
  • SageMaker used the double-submit CSRF protection. This works by sending the CSRF token in a cookie and a second copy as a header or a field in the request; the server checks that the two match. Since the attacker cannot normally read or set the CSRF token cookie, this works quite well. However, in this case, cookie tossing allows us to specify the CSRF token cookie ourselves! This means the double-submit method is compromised, since we control both values being sent.
  • There are a couple of other things to consider with CSRF though: origin validation, non-simple request issues and the SameSite cookie flag. The origin validation was non-existent, so this part was fine. In the land of browsers, CORS becomes a major problem for CSRF attacks because a pre-flight OPTIONS request is made for anything but "simple" requests. As a result, only simple requests can be used. This disallows setting custom headers, using a JSON content type, and any method besides GET/POST.
  • The application puts the CSRF token into a header. However, the author figured out that it could be included as a GET parameter instead and the request would still work! One more problem though: the request being made was a JSON request. The trick was to set the Content-Type to text/plain in order for the request to stay simple while the body is still interpreted as JSON by the server! That's a new trick for me.
  • The final CSRF mitigation problem is the SameSite cookie flag. There are three settings for this: none, lax and strict. The default in Chrome and Firefox is lax, but the default in Safari is none. When the setting is lax, some cross-site requests still carry cookies, such as top-level GET navigations. strict never sends cookies on cross-site requests.
  • In Chrome, if the SameSite attribute is never set (defaulting to lax), there is a 2-minute grace period after a cookie is set during which cross-site requests will still contain it. So, the author found a GET request (which is not affected by SameSite under the lax setting) that resets the Authtoken of the user! Since the cookie had then been set less than 2 minutes ago, the fresh auth cookie will be sent. Damn, that is a real fancy workaround!
  • With the CSRF protections defeated, an extension can be added to a notebook to execute code in there. Now, the access tokens for the role can be stolen, leading to much more damage being caused.
  • I learned a few new tricks from this. First, cookie tossing makes sense and is a large weakness in the double-submit CSRF protection. Second, the text/plain simple-request trick and sending the CSRF value as a parameter. Finally, abusing the Chrome grace period on the SameSite cookie attribute to reset the auth token was really awesome. Overall, a crazy article about how a self-XSS led to a compromise.
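
Why cookie tossing breaks double-submit comes down to a single comparison; a toy verifier (hypothetical names, not SageMaker's actual code):

```python
def double_submit_ok(cookie_token: str, request_token: str) -> bool:
    # The server only checks that the two copies match; it cannot tell
    # whether it was the one that issued the cookie.
    return cookie_token == request_token

# Normal CSRF attempt: the attacker cannot read the victim's cookie.
print(double_submit_ok("server-issued-secret", "attacker-guess"))  # False

# Cookie tossing: a sibling subdomain set the parent-domain cookie, so the
# attacker controls BOTH values being submitted.
print(double_submit_ok("tossed-token", "tossed-token"))  # True
```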

URL whitelist bypass in https://cxl-services.appspot.com - 691

David Schutz    Reference → Posted 4 Years Ago
  • An internal GCP project called cxl-services is used for internal requests within some other service. The author does not give a description of what it does at all.
  • The application has an allowlist of domains that can be called internally using this service. When validating the URL, the parser falls for the '\@' trick, which even the original RFC gets wrong.
  • The issue is that for https://[your_domain]\@jobs.googleapis.com, the validator thinks the authority is jobs.googleapis.com, but the library making the request connects to [your_domain] with a path of /@jobs.googleapis.com. Hence, the verification differing from the usage causes the vulnerability.
  • Why does this SSRF cause a problem? Most of the time, an SSRF gets the attacker access to an internal network. In this case, an authorization token for App Engine is attached to the request, which is now leaked to us.
  • With the access token in hand, the author wanted to demonstrate impact without jeopardizing the company. They found a few other projects that the authorization token had access to (docai-demo, p-jobs, garage-stating, etc.). They took rigorous notes on the requests they made, in order to help Google with incident response.
  • The patch for the bug was pretty terrible: block the two characters '\@' only when they appear next to each other. So, adding anything between them (such as the URL https://[your_domain]\_@jobs.googleapis.com) still caused the SSRF. After this was fixed, they found ANOTHER issue: old, vulnerable versions of the App Engine app were still running, and needed to be patched as well.
  • Overall, an interesting bug and a trick that I did not know about! Keeping verification consistent with actual usage is hard to do properly. Additionally, lots of bugs are not fixed properly the first time!
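
The validator's side of the parser disagreement can be reproduced with Python's stdlib alone (the hostnames are placeholders; the client-side behavior from the write-up is only noted in a comment):

```python
from urllib.parse import urlparse

# The trick URL; the backslash is doubled only to escape it in the literal.
url = "https://attacker.example\\@jobs.googleapis.com/"

# urlparse splits the authority at the LAST "@", so the host looks like the
# allowlisted domain -- this is the validator's view.
print(urlparse(url).hostname)  # jobs.googleapis.com

# Per the write-up, the HTTP client disagreed and sent the request to
# [your_domain] with a path of /@jobs.googleapis.com -- that mismatch is
# the SSRF.
```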

Authentication Bypass when using JWT w/ public keys - 690

Plokta - HackerOne    Reference → Posted 4 Years Ago
  • JSON Web Tokens (JWTs) are a common way to create session tokens. They contain three main parts: header, data and a signature. The header is information about the token, the data is the important information about the user, and the signature is a cryptographic value demonstrating that the JWT has not been tampered with.
  • JWTs can use both asymmetric and symmetric algorithms for the signature. The asymmetric version, such as RSA, is more commonly used because the public key can be used to verify the signature without knowing the private key. This makes it possible for the token to be verified on other sites besides the one that generated it!
  • The header is a base64 encoded JSON blob that contains several elements but only one that we are interested in: alg, which is short for algorithm for the signature. For instance, this could be set to RS256 for RSA or HS256 for HMAC.
  • So, if the user can specify the algorithm, which key is used? This is where the vulnerability occurs! If the algorithm set by the user is used without validation while the server expects an asymmetric algorithm, problems occur.
  • With RSA, a public key is used to verify the signature, so this is the key the verifier is given. But if we select a symmetric algorithm, such as HMAC, that same key will be used as the secret key.
  • This is where the magic lies: the public key is public! By selecting a symmetric algorithm such as HMAC, we can sign the JWT with the public key. Since this key is the input into the signature validator, it will blindly use the RSA (or other asymmetric) public key as the HMAC secret. Now, we can sign arbitrary tokens!
  • This is the issue that happened in Jitsi, an open source product similar to Zoom. When looking at the NodeJS jsonwebtoken library to see whether this would be possible, it turns out that it is, if the algorithm can be set by the caller. Otherwise, the algorithm is inferred from the type of the secret.
  • Interesting bug that probably exists in other places. JWTs are awesome but have many foot-guns inside of them.
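
The whole confusion fits in a toy verifier (stdlib only; the "public key" is just a stand-in byte string, and this flawed verifier is illustrative, not any particular library's code):

```python
import base64
import hashlib
import hmac
import json

# Stand-in for a real RSA public key; the only property that matters for
# the attack is that it is PUBLIC, i.e. known to everyone.
PUBLIC_KEY = b"-----BEGIN PUBLIC KEY-----...known to everyone..."

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def sign_hs256(header: dict, payload: dict, key: bytes) -> bytes:
    signing_input = (b64url(json.dumps(header).encode()) + b"." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(sig)

def flawed_verify(token: bytes) -> bool:
    header_b64, payload_b64, sig_b64 = token.split(b".")
    header = json.loads(b64url_decode(header_b64))
    if header["alg"] == "HS256":  # algorithm chosen by the token's author!
        expected = hmac.new(PUBLIC_KEY, header_b64 + b"." + payload_b64,
                            hashlib.sha256).digest()
        return hmac.compare_digest(b64url(expected), sig_b64)
    return False  # (the legitimate RS256 path is elided)

# The attacker knows PUBLIC_KEY, flips alg to HS256, and signs with it.
forged = sign_hs256({"alg": "HS256"}, {"admin": True}, PUBLIC_KEY)
print(flawed_verify(forged))  # True: the forged token is accepted
```

The fix is to pin the expected algorithm server-side (e.g. only accept RS256 with the RSA key) instead of trusting the alg header.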