Security, or Security Theater?

Bottom Line Up Front

The security provided by measures such as audited contracts, locked liquidity, and doxed/KYCed teams is often overstated by projects, and misunderstood by investors. Every measure has its strengths and limitations, as well as workarounds that can be exploited by bad actors. Knowing and understanding these will help you better assess the relative risk of a project before investing.


Above and beyond anything else, a project’s team needs to inspire trust in the project itself. Even if people don’t necessarily trust the team (say, if they are anonymous), they have to at least trust the safeguards put in place around the project. Third party audits and code reviews, liquidity locks, and other such measures are common safeguards used to build trust, but how much security do these actually provide?

Below, I’ll cover the five most common security measures projects use today: doxing/KYCing the team; contract audits; liquidity locking; renouncing contract ownership; and using multi-sig dev wallets. For each, I’ll talk about what they guard against, what they don’t guard against, and how they could potentially be circumvented or exploited.

Team KYC and/or Doxing

Doxing is when the devs behind a project either fully (full name and bio) or partially (just face and voice, first name only, etc.) reveal their identities. There are also KYC services that privately verify the identities of the team and issue a voucher saying it's been done. This lets devs remain anonymous to investors while still proving that someone knows who they are.

The Good: Knowing who the devs are lets you research their experience, and any past projects they may have been a part of. Being fully (publicly) doxed also means the team’s reputation is on the line, so they’ll probably want the project to be a success.

Full dox: name, picture, and LinkedIn profile links from the Landlord team

The Bad: Crypto is still largely unregulated in most countries, so there is little to no legal accountability for bad actions. Whether the devs skim funds from a project wallet or execute a hard rug pull, the only likely repercussion is a bad reputation.

The Ugly: First, private KYC services are all but useless. If a project turns out to be a scam, the company likely isn't going to release the devs' identities to the public (though some claim they will, if the project is a clear scam). Second, fake names, backed by fake LinkedIn profiles, are not uncommon. Profile pictures can be fake as well. I have even seen projects go as far as hiring an actor on Fiverr to record a short video pretending to be the lead dev. Being "fully doxed" doesn't always mean they're actually doxed.

Audited Contracts

Third party services like CertiK and Solidproof will scan the smart contract of a project for any coding issues or malicious functions, such as ones that would prevent selling, or allow unlimited minting of tokens.

The Good: Most people are not programmers, so having a trusted third party review the smart contract for anything malicious or unstable is a great service. Passing an audit is usually a good indication that a new project will not exit-scam shortly after launch, and that the contract contains no critical vulnerabilities.

The Bad: Receiving an audit, passing an audit, and making all the recommended changes from the audit are three different things. Just because a project has been audited doesn't mean the team has addressed any flagged vulnerabilities. Always read through the audit yourself to understand the project better.

The Ugly: An audit tells you what is in the contract, but it does not prevent what’s in the contract from being used or abused. Some functions, such as the ability to change taxes or mint tokens, are reasonable to have under certain circumstances. But, they require a lot of trust in the dev team to not abuse them. An audit will flag these functions:

This, from the BASH Protocol audit:

And the dev team may give reasons for having the functions:

Forwarded from their now deleted Telegram channel

But in the end, there is nothing actually preventing something like this from happening:

Right after launch, the lead dev started minting and selling trillions of tokens, effectively draining the liquidity from the project.
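To make the mechanics concrete, here is a minimal Python sketch (not real contract code; the numbers and names are illustrative) of how minting into a constant-product pool drains the paired asset even while the LP tokens themselves stay locked:

```python
# Illustrative sketch, not real contract code: a constant-product
# (x * y = k) pool in the style of Uniswap/PancakeSwap. All numbers
# and names are hypothetical.

class Pool:
    def __init__(self, token_reserve, bnb_reserve):
        self.token = token_reserve  # project-token side of the pool
        self.bnb = bnb_reserve      # paired-asset side (e.g. BNB)

    def sell_tokens(self, amount):
        """Swap `amount` project tokens for the paired asset."""
        k = self.token * self.bnb
        self.token += amount
        payout = self.bnb - k / self.token  # constant-product pricing
        self.bnb -= payout
        return payout

pool = Pool(token_reserve=1_000_000, bnb_reserve=100)

# The LP tokens can stay locked forever; a dev with an open mint()
# simply creates a huge supply and dumps it into the pool.
minted = 1_000_000_000_000  # "trillions of tokens"
proceeds = pool.sell_tokens(minted)

print(f"Dev walks away with {proceeds:.4f} BNB")
print(f"BNB left in the 'locked' pool: {pool.bnb:.6f}")
```

The LP tokens never move; the seller simply trades the pool's reserves away, which is why a liquidity lock alone is no defense against an open mint function.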

Fun fact: the BASH Protocol team was doxed and KYCed, too.

Locked Liquidity

Locking liquidity means the liquidity pool (LP) tokens for a project have been transferred to a third-party time-lock smart contract, such as DxLock, for a set amount of time, so the team cannot withdraw the pooled funds until the lock expires.

Poocoin does a good job flagging projects with unlocked liquidity

The Good: Locking liquidity prevents the team from withdrawing the pool of funds on which their token is traded. Assuming the liquidity pool is regularly funded, usually from a portion of transaction taxes, this helps ensure investors can always buy and sell tokens without issue.

The Bad: Liquidity is locked for a set period of time, after which the team regains full access to the funds. Ideally, the lock period is a year or more, but that's not mandatory; projects can lock their liquidity for a single day or week if they want. It's important to know how long funds are locked for, and what the team plans to do once the lock expires.

The Ugly: First, see the example from BASH protocol above. Their liquidity was locked, but by minting and selling tokens, they were able to effectively drain the liquidity pool anyway. Second, let’s say liquidity is locked for six months, but the project is a honeypot (that is, you can buy tokens but can’t sell them). The team will be able to cash out on the project all the same, they’ll just have to wait until the liquidity unlocks.
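A honeypot can be as simple as a transfer check that rejects sells from anyone who isn't allow-listed. Here is a hypothetical Python sketch of that logic (real honeypots do this in Solidity, usually far more obfuscated; all addresses are made up):

```python
# Hypothetical honeypot logic (illustrative Python, not Solidity):
# transfers TO the trading pair (i.e. sells) are rejected unless the
# sender is on a dev allow-list, while buys (transfers FROM the pair)
# always succeed. Addresses are made up.

PAIR = "0xPAIR"              # the DEX liquidity-pair address
ALLOWED_SELLERS = {"0xDEV"}  # only the devs may sell

def transfer_allowed(sender, recipient):
    if recipient == PAIR and sender not in ALLOWED_SELLERS:
        return False  # ordinary holders cannot sell
    return True

print(transfer_allowed(PAIR, "0xALICE"))  # buying works: True
print(transfer_allowed("0xALICE", PAIR))  # selling blocked: False
print(transfer_allowed("0xDEV", PAIR))    # dev can cash out: True
```

Investors see a locked pool and a rising chart, while the devs are the only ones who will ever be able to sell into it.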

Renounced Ownership

Every smart contract has an owner wallet with the exclusive ability to call its privileged functions. Similar to locking liquidity, renouncing ownership means transferring control over the smart contract to a third-party wallet, usually a dead (burn) wallet that nobody has access to, effectively making the contract fully autonomous.

The Good: Renouncing ownership means the various parameters of a contract, like transaction taxes and the ability to mint tokens, cannot be changed or abused by the devs, essentially making the project trustless (that is, security is not based on trusting devs to act morally).

The Bad: While renouncing ownership prevents devs from further manipulating functions in the contract, if parameters are maliciously set prior to renouncing, then the damage is already done. Additionally, contracts typically only dictate where funds are allocated from transactions, but not who has access to those funds - so, devs might not be able to change the transaction tax, but they have unrestricted access to the funds that are being generated.

The Ugly: Through some coding sleight of hand, ownership can be renounced and then regained, or critical functions can give specific wallets hard-coded access (essentially, the contract says that to call a function, you either have to be the owner OR a specific wallet that the dev controls). To complicate things further, a smart contract can call another contract to execute functions. The main contract might have its ownership renounced while the other contracts it calls do not.

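The hard-coded-access pattern is easy to sketch. Below is an illustrative Python model (not Solidity; all names and addresses are hypothetical) showing why a renounced owner proves little when a second address is baked into the permission check:

```python
# Illustrative Python model (not Solidity) of the hard-coded-access
# backdoor: ownership is renounced to a dead address, but a second,
# hard-coded wallet still passes the permission check. All addresses
# are hypothetical.

DEAD = "0x000000000000000000000000000000000000dEaD"
HIDDEN_DEV_WALLET = "0xHIDDEN"  # baked in at deploy time

class Contract:
    def __init__(self, owner):
        self.owner = owner
        self.tax = 5

    def renounce_ownership(self):
        self.owner = DEAD  # looks fully trustless on a block explorer

    def set_tax(self, caller, new_tax):
        # The "or" clause is the backdoor: it keeps working even
        # after ownership is renounced.
        if caller == self.owner or caller == HIDDEN_DEV_WALLET:
            self.tax = new_tax
            return True
        return False

c = Contract(owner="0xDEV")
c.renounce_ownership()
print(c.set_tax("0xDEV", 99))            # False: old owner is locked out
print(c.set_tax(HIDDEN_DEV_WALLET, 99))  # True: the backdoor still works
```

On a block explorer, this contract would show ownership renounced to the dead address, and only a close read of the code would reveal the second privileged wallet.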

Multi-Sig Wallets

Generally speaking, only one person needs to approve a transfer of funds out of a wallet: its owner. With multi-sig wallets, several people have to approve a transaction for it to execute. Most commonly, approval requires a majority of a diverse group of signers (five of six, eight of ten, etc.).
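The m-of-n approval logic can be sketched in a few lines of Python (illustrative names only; real multi-sig wallets implement this on-chain):

```python
# Minimal m-of-n approval sketch (illustrative names, not a real
# wallet implementation): a transaction can execute only once a
# threshold of distinct, authorized signers has approved it.

class MultiSigWallet:
    def __init__(self, signers, threshold):
        self.signers = set(signers)  # who is allowed to sign
        self.threshold = threshold   # approvals required to execute
        self.approvals = set()       # distinct approvals so far

    def approve(self, signer):
        if signer in self.signers:   # unauthorized signers are ignored
            self.approvals.add(signer)

    def can_execute(self):
        return len(self.approvals) >= self.threshold

# 4-of-5: three devs plus two community members, four approvals needed.
wallet = MultiSigWallet(["dev1", "dev2", "dev3", "comm1", "comm2"],
                        threshold=4)
for signer in ["dev1", "dev2", "comm1"]:
    wallet.approve(signer)
print(wallet.can_execute())  # False: only 3 of the required 4
wallet.approve("comm2")
print(wallet.can_execute())  # True: threshold reached
```

Note that because approvals are a set of distinct signers, one person approving repeatedly cannot reach the threshold alone.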

The Good: Multi-sig wallets, especially when the signers include both dev team and community members, help ensure that any and all transactions represent the best interests of both devs and investors. They also prevent any single person from going rogue and draining funds from a project wallet.

The Bad: Most projects have multiple wallets associated with them, and more often than not only the main dev wallet will be multi-sig. If the dev wallet has this kind of enhanced security but the marketing, buyback, and investment wallets don't, there really isn't much value added.

The Ugly: If you need four of five core team signatures to approve a transaction, and the entire core team is in on a grift, then the scam-related transactions will always be approved. Similarly, if any majority of signers, dev team or community, is in on the grift, everything will be approved without issue. And even if a required signer from the community isn't in on the scam, someone presented with dozens of transactions a day is likely to approve each one without question, based solely on their trust in the devs to do the right thing.

The Takeaway

No matter what security measures are in place, risk can only be reduced, never eliminated. When researching a project, carefully consider what is actually protected by these measures, and what is left to trust in the project’s team. For example, if a portion of every transaction is set aside for marketing or buybacks, what wallet are those funds held in, and who has access to them? If the liquidity is locked, what is the plan for when it unlocks? If the team is anonymous, how have they proven that they have the knowledge and experience to successfully launch and grow a crypto project?

Essentially, after security measures are considered, do you trust the team with the control that they have left?

Have a question, comment, tip, inside info, or anything else? Email
