Trusted Execution Environments (TEEs) are regions of a CPU that run code in isolation from the rest of the system. The specifics of that isolation depend on the generation. In recent years a shift has occurred: TEEs are now deployed on commodity server hardware. Server TEEs encrypt memory with deterministic AES-XTS, without replay protection and without integrity checks. Dropping these protections brings significant advantages in speed and usability.
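To see why determinism matters, here is a toy model (a hash-based stand-in, not real AES-XTS): when the ciphertext is a pure function of the key, the physical address, and the plaintext, writing the same value to the same address always puts the same bytes on the bus, so an observer can detect equality without ever learning the key.

```python
import hashlib

def toy_deterministic_encrypt(key: bytes, address: int, plaintext: bytes) -> bytes:
    """Toy stand-in for deterministic, address-tweaked memory encryption
    (NOT real AES-XTS): output depends only on (key, address, plaintext)."""
    keystream = hashlib.sha256(key + address.to_bytes(8, "little")).digest()
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

key = b"boot-time-key-0123456789abcdef!!"  # hypothetical boot-time key
secret = b"SAME 16B PAYLOAD"

# Same plaintext at the same address -> identical ciphertext, every time.
assert toy_deterministic_encrypt(key, 0x1000, secret) == \
       toy_deterministic_encrypt(key, 0x1000, secret)
# The address acts as a tweak, so the same plaintext elsewhere looks different.
assert toy_deterministic_encrypt(key, 0x1000, secret) != \
       toy_deterministic_encrypt(key, 0x2000, secret)
```

Real XTS also diffuses the plaintext through a block cipher rather than XORing a keystream, but the leak is the same: repeats are visible.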
A TEE's goal is to shield the code and data inside an enclave from both inspection and modification, even by a root-level adversary on the machine. At boot time, the CPU reserves regions of memory for the exclusive use of Intel's TEE implementation, SGX; nothing outside the SGX enclave can access it. SGX implements memory encryption by encrypting the entire reserved address space with a key generated at boot.
The goal of this attack is to steal secrets from the SGX enclave by sniffing memory traffic during specific operations. To do this, the authors built a custom logic analyzer that sits on the DDR5 bus between the memory and the CPU; details of its construction are in the paper. To ensure the right lines are being listened to, they first test and verify the mapping of physical addresses to DIMMs. Since the kernel allocates the memory used by SGX, once the interposer's location is known, the enclave can be forced to use that memory.
They also use a clever trick to start and stop the SGX enclave: memory permission changes. Once the TEE is executing code, we can't observe it directly, but we CAN change the permissions on its memory pages. Revoking access effectively halts the code, which makes the captured traces much more accurate.
A key insight is that the memory-encryption key is the same for every enclave on a given boot. Since we can load our own enclave code onto the machine, any data the victim enclave writes that matches data our enclave wrote to the same address produces an identical ciphertext, handing us a known plaintext! They provide an example of how bad this is: secure attestation with ECDSA. By sniffing memory during an ECDSA signing operation, they can determine the nonce k, and with k in hand you can recover the private key!
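The algebra behind the nonce leak is standard ECDSA: a signature satisfies s = k⁻¹(z + r·d) mod n, so a known k gives d = r⁻¹(s·k − z) mod n. A minimal sketch over secp256k1 with toy values (the paper targets Intel's attestation key; the numbers below are illustrative only):

```python
# secp256k1 domain parameters
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over GF(p); None is the identity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

d = 0xC0FFEE    # toy private key (the secret the attack recovers)
z = 0x1234      # toy message hash
k = 0xDEADBEEF  # the nonce; normally secret, here sniffed off the bus

# Ordinary ECDSA signing: (r, s) is public output.
r = ec_mul(k, G)[0] % n
s = pow(k, -1, n) * (z + r * d) % n

# Attacker: the leaked k plus the public (r, s, z) yields the private key.
recovered = pow(r, -1, n) * (s * k - z) % n
assert recovered == d
```

This is why a single leaked nonce is fatal: one signature and one k give the whole key, with no search required.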
With these keys in hand, it's possible to run code that should be confidential outside the SGX context. This capability is catastrophic, as the rest of the paper demonstrates. In the example of BuilderNet, a network of block builders for an EVM chain that run inside enclaves, it breaks all of the guarantees: the software normally prevents frontrunning and prevents reading confidential transactions, but both are now possible. Even worse, the extracted material includes a key controlling $200K worth of ETH. A similar attack works against the Secret Network as well.
There are several other cases of this attack breaking deployed security. If the memory encryption were not deterministic, the attack would not work. Another safety measure is location verification and CPU whitelisting, which would prevent jobs from being scheduled onto attacker-controlled hardware in a lab equipped to perform these attacks.
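A sketch of why non-determinism helps (again a toy hash-based scheme, not a real memory-encryption engine): mixing a fresh random nonce into each write makes repeated writes of the same plaintext to the same address look different on the bus, so the attacker's known-ciphertext matching fails.

```python
import hashlib, os

def randomized_encrypt(key: bytes, address: int, plaintext: bytes):
    """Toy randomized scheme: a fresh per-write nonce is mixed into the
    keystream. (A real engine would also have to store and protect the
    nonce, which is part of why deterministic modes were chosen for speed.)"""
    nonce = os.urandom(16)
    keystream = hashlib.sha256(key + address.to_bytes(8, "little") + nonce).digest()
    return nonce, bytes(p ^ k for p, k in zip(plaintext, keystream))

key = b"boot-time-key-0123456789abcdef!!"  # hypothetical boot-time key
secret = b"SAME 16B PAYLOAD"

_, ct1 = randomized_encrypt(key, 0x1000, secret)
_, ct2 = randomized_encrypt(key, 0x1000, secret)
# Same plaintext, same address, different bytes on the bus:
# equality no longer leaks to a passive interposer.
assert ct1 != ct2
```

The cost of this freshness metadata is exactly the speed and capacity overhead the server TEE designs chose to avoid.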
In my mind, once an attacker has physical access to your computer, they can do whatever they want, and this is a great example of that. Overall, an excellent paper on breaking TEEs with a hardware-based attack.