At the core of multi-party computation (MPC) wallets are threshold signature schemes (TSS), which allow decentralized ownership of a single key, which is pretty amazing. A TSS is used both to generate the key and to sign with it (t of n), with no trusted dealer.
Major blockchains commonly use the Elliptic Curve Digital Signature Algorithm (ECDSA), and a TSS protocol for ECDSA built on homomorphic encryption and zero-knowledge proofs was published in 2019.
In a Kudelski Security audit of Binance's tss-lib, a finding was reported titled "interface prone to collisions": in other words, a hash collision caused by improperly concatenating values before hashing. A single array of values such as [a,$,b,$] could be interpreted the same as two independent arrays such as [a] and [b]. Originally, this was flagged as only a low-severity finding.
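A minimal sketch of this collision class (the separator and value names here are hypothetical, not tss-lib's actual delimiter): joining values with a separator that the values themselves may contain lets two different logical inputs serialize to identical bytes.

```python
import hashlib

SEP = b"$"  # hypothetical separator, standing in for the library's delimiter

def hash_concat(values: list[bytes]) -> bytes:
    """Ambiguous: joins values with a separator the values may also contain."""
    return hashlib.sha256(SEP.join(values)).digest()

# Two different logical inputs serialize to the same byte string "alpha$beta":
a = hash_concat([b"alpha", b"beta"])
b = hash_concat([b"alpha" + SEP + b"beta"])
assert a == b  # collision: distinct value lists, identical hash
```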
The authors found that this ambiguous encoding scheme could be used to recover the private key. A Fiat-Shamir transformation relies on an encoding scheme to serialize the to-be-hashed transcript to bytes; if the encoding is ambiguous, two different transcripts can produce the same challenge, and major issues can occur.
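One standard way to remove the ambiguity (a sketch, not what tss-lib actually does) is to length-prefix every element before hashing, so no splitting or merging of elements can produce the same byte stream:

```python
import hashlib
import struct

def encode_unambiguous(values: list[bytes]) -> bytes:
    """Length-prefix each element so element boundaries are unambiguous."""
    out = b""
    for v in values:
        out += struct.pack(">I", len(v)) + v  # 4-byte big-endian length, then value
    return out

def fs_challenge(values: list[bytes]) -> bytes:
    """Fiat-Shamir challenge over an unambiguous serialization."""
    return hashlib.sha256(encode_unambiguous(values)).digest()

# The inputs that collided under naive concatenation now hash differently:
assert fs_challenge([b"alpha", b"beta"]) != fs_challenge([b"alpha$beta"])
```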
Under the hood, this attacked the dlnproof mechanism. By targeting the alpha value with the ambiguous-encoding issue, bits of the key can be leaked one by one. This takes a lot of computation and many interactions between the owners of the key, but it can be done.
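The real attack is intricate; this toy only illustrates the recovery pattern. The oracle below is a stand-in (entirely hypothetical) for what the malicious signing rounds provide, namely one bit of a co-signer's share per interaction:

```python
SECRET = 0b1011001110001111  # hypothetical 16-bit stand-in for a key share

def leak_bit(i: int) -> int:
    """Stand-in oracle: in the real attack, each round of malicious
    interaction leaks roughly one bit of the victim's share."""
    return (SECRET >> i) & 1

# One "interaction" per bit, reassembled into the full secret:
recovered = 0
for i in range(16):
    recovered |= leak_bit(i) << i

assert recovered == SECRET
```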
The next issue was with optimizations made by the projects. In one project, the number of iterations for the dlnproof was reduced from 128 to 1. With so few iterations, an attacker can guess the challenge bits in advance and forge the response; each guess succeeds with probability 1/2, so a single iteration provides essentially no soundness, and a forged proof appears to leak information about the key being used.
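Why iteration count matters can be simulated with a toy binary-challenge sigma protocol for knowledge of x in h = g^x (mod p). All parameters and names here are illustrative, not the actual dlnproof: the forger commits to a pre-guessed challenge and wins a round whenever the guess is right.

```python
import random

rng = random.Random(0)
p, g = 1019, 2                  # toy prime-order-ish group; illustrative only
x = 457                         # honest prover's secret
h = pow(g, x, p)

def forge(iterations: int) -> bool:
    """Forge a proof without knowing x by pre-guessing each binary challenge."""
    for _ in range(iterations):
        c_guess = rng.randrange(2)
        z = rng.randrange(p - 1)
        # Build the commitment backwards so it verifies iff the guess is right:
        a = (pow(g, z, p) * pow(h, -c_guess, p)) % p
        c = rng.randrange(2)    # verifier's real challenge
        if c != c_guess:
            return False        # wrong guess: no valid response exists
        if pow(g, z, p) != (a * pow(h, c, p)) % p:
            return False
    return True

one_iter = sum(forge(1) for _ in range(2000)) / 2000    # succeeds about half the time
full = sum(forge(128) for _ in range(2000)) / 2000      # 2^-128 per attempt: never
```

With 128 iterations the forger must guess all 128 bits at once, so soundness error drops from 1/2 to 2^-128.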
The final issue was another optimization problem. In some projects, the dlnproof check was run only a single time. The challenge being performed here resembles a discrete-log proof over a group of prime order, but in this setting the order is composite rather than prime. As a result, it's possible to brute-force a value that fits the mold, and the secret values end up determined only mod e instead of mod n. To recover the full secret, another signing phase combined with lattice attacks can be performed.
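The simplest version of the "piece small residues back together" idea is the Chinese Remainder Theorem; the lattice attack is a more powerful relative. This sketch (with made-up moduli and secret) shows why leaking a secret mod several small values across sessions is as good as leaking the secret itself once the product of the moduli exceeds it:

```python
from math import prod

def crt(residues: list[int], moduli: list[int]) -> int:
    """Combine x mod m_i into x mod prod(m_i), assuming pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # standard CRT reconstruction term
    return x % M

secret = 98765432                      # hypothetical leaked secret
moduli = [101, 103, 107, 109]          # stand-ins for the small leaked moduli
residues = [secret % m for m in moduli]

# prod(moduli) > secret, so the residues pin down the secret exactly:
assert crt(residues, moduli) == secret
```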
Many of these issues come from unsound optimizations, plus one bad implementation flaw. For the optimization problems, simply following the paper would have sufficed. Is it worth modifying cryptographers' algorithms? Absolutely not! Even the auditors looking at these protocols are probably not qualified to find these types of novel issues; these attacks are very complicated and take months to fully develop.