Contest platforms in web3 are an alternative to standard security reviews. The auditing firm Zellic acquired the contest platform Code4rena last year and has now published a report on metrics used to market audit contests. Naturally, the platform wants to make itself look good, so you've got to know when things get snake-oily. Imo, some of this reads like a competitor bash (especially the screenshots, which make it obvious who they're calling out), but there are some good points.
The true benefit of audit competitions is the number of eyes and range of skills your code gets. A traditional audit is a fixed engagement with known entities, which tends to surface the low-hanging fruit. In a contest, participants are incentivized to find unique, high-impact issues, so in theory the coverage is better.
The first metric is finding count. Many platforms inflate this number by including invalid issues or by not de-duplicating submissions. Teams mostly only care about the high- and medium-severity bugs.
The next metric is participant numbers. There's a difference between participants and useful participants; a metric like "participants who submitted a valid finding" would be much more honest. It's also hard to know how much time those participants actually spent on the code, though that caveat applies to every platform.
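To make the inflation concrete, here's a toy sketch of how the advertised numbers diverge from the honest ones. The data shape and field names are invented for illustration; no platform publishes findings in exactly this format.

```python
# Hypothetical contest findings list. Fields are made up for illustration:
# "dup_of" points at the original finding when a submission is a duplicate.
findings = [
    {"id": 1, "hunter": "alice", "severity": "high",   "valid": True,  "dup_of": None},
    {"id": 2, "hunter": "bob",   "severity": "high",   "valid": True,  "dup_of": 1},
    {"id": 3, "hunter": "carol", "severity": "low",    "valid": True,  "dup_of": None},
    {"id": 4, "hunter": "dave",  "severity": "medium", "valid": False, "dup_of": None},
]

# The raw "finding count" as often advertised: duplicates and invalids included.
raw_count = len(findings)

# De-duplicated, valid, high/medium only -- the number teams actually care about.
unique_hm = [
    f for f in findings
    if f["valid"] and f["dup_of"] is None and f["severity"] in ("high", "medium")
]

# "Participants" vs "participants who submitted a valid finding".
participants = {f["hunter"] for f in findings}
useful_participants = {f["hunter"] for f in findings if f["valid"]}

print(raw_count)                 # 4
print(len(unique_hm))            # 1
print(len(participants))         # 4
print(len(useful_participants))  # 3
```

Even in this tiny example, "4 findings, 4 participants" shrinks to 1 unique high/medium bug from 3 useful hunters once you apply the filters.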
The third metric is "claims about exclusivity". A general issue with contests is knowing whether good researchers are actually looking at your code. Cantina has pre-paid researchers who work on its audits; on Sherlock, a Lead Watson gets an automatic share of the prize pool.
Having full-time people on your platform beats not having them at all. The concern is whether these folks actually spend the time on your project; if they didn't, they'd probably lose their contract with the contest platform. The report's concerns are valid (are they on it the entire time, who is managing this, etc.), but some guarantee is better than the none that C4 offers.
Comparisons between audit contests and traditional audits are usually apples-to-oranges. Severity scales differ, "fake" vulnerabilities show up on both sides, and asymmetric comparisons are made across codebases that are either different projects or audited at different points in time. This is a fair call-out.
It's good to weigh the differences between platforms when deciding where to host a competition or where to participate as a hacker. This article has some good points, but also a heavily skewed perspective, given that Zellic A) is an auditing firm and B) owns C4. So, take the content with a grain of salt.