
Sybil Attack

An attack where one person creates many fake identities to gain disproportionate influence in a network. The problem proof-of-work and proof-of-stake exist to solve.

Security 5 min read

A Sybil attack is what happens when a single person creates many fake identities in a system that was designed to treat identities as distinct. The name comes from Sybil, a 1973 book about a woman with dissociative identity disorder; computer scientist John R. Douceur borrowed it in a 2002 paper to describe the general class of attacks where one actor pretends to be many. Sybil attacks exploit any system that treats “one identity equals one vote” as a meaningful rule without a mechanism to verify that identities are actually distinct. In a permissionless network where anyone can create accounts for free, that rule is trivially defeated: create a thousand accounts, cast a thousand votes, and drown out the real participants.

Bitcoin was designed with Sybil attacks specifically in mind. The whitepaper addresses the problem directly: “Proof-of-work is essentially one-CPU-one-vote”. By tying consensus power to computational work rather than to claimed identities, proof-of-work makes it pointless for an attacker to inflate their influence by simply creating more accounts. Proof-of-stake solves the same problem differently, tying influence to locked-up capital: you cannot fake having stake, so creating more validator identities does not help unless you also have more actual ETH to stake. In both cases, the mechanism is designed around the assumption that identities are cheap to create, so the real scarce resource must be something else.

Where Sybil Attacks Actually Hit Crypto

The consensus layer is well-defended against Sybil attacks, but plenty of other layers in crypto are not, and this is where the attack pattern actually shows up in practice.

Airdrop farming is the most visible form. When a project announces that it will airdrop tokens to users who have interacted with the protocol by a certain date, the economically optimal response for a sophisticated attacker is to create hundreds or thousands of addresses, have each one do the minimum activity required to qualify, and claim the airdrop on all of them. The result is that a large fraction of the tokens in major airdrops has gone to Sybil farmers rather than to real users, and projects have tried various defenses — unique action patterns, social media verification, transaction history analysis — with mixed success. The Arbitrum, Optimism, Starknet, and LayerZero airdrops all involved substantial debate about how to identify and exclude Sybil clusters, and none of them got it perfectly right.

LayerZero took a particularly aggressive stance with its 2024 airdrop: it invited users to self-identify as Sybils in exchange for a smaller guaranteed reward, and separately paid bounties to people who reported other Sybils. The result was that tens of thousands of addresses were excluded, but the effort also produced a lot of false positives and community anger from users who felt they had been unfairly swept up in the filter. The general lesson is that eliminating Sybil farming on airdrops is extremely hard — the attackers are well-resourced, they use sophisticated address-hopping and transaction patterns to mimic real users, and any filter that is tight enough to catch them will also catch some real users in the process.
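One signal such filters lean on is shared funding: farm addresses are often seeded with gas from a single wallet. The sketch below is a deliberately crude toy with hypothetical addresses, not any project's actual filter; real analyses combine many signals, and even then a single rule like this catches only low-effort farms while mis-flagging legitimate patterns (an exchange hot wallet also funds many unrelated users):

```python
from collections import defaultdict

def sybil_clusters(funding_edges, min_cluster=3):
    """Group airdrop claimants by their funding address.
    funding_edges: iterable of (funder, claimant) pairs.
    Returns only groups of at least `min_cluster` claimants."""
    by_funder = defaultdict(list)
    for funder, claimant in funding_edges:
        by_funder[funder].append(claimant)
    return {f: c for f, c in by_funder.items() if len(c) >= min_cluster}

edges = [
    ("0xFARM", "0xa1"), ("0xFARM", "0xa2"),
    ("0xFARM", "0xa3"), ("0xFARM", "0xa4"),
    ("0xCEX", "0xb1"), ("0xCEX", "0xb2"),  # below threshold: not flagged
]
print(sybil_clusters(edges))  # {'0xFARM': ['0xa1', '0xa2', '0xa3', '0xa4']}
```

The tuning problem is visible even here: raise `min_cluster` and farms slip through; lower it and exchange-funded real users get swept up, which is precisely the false-positive tension the LayerZero episode illustrated.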

Governance votes are another vulnerable surface. Most DAO voting systems use token-weighted voting (one token equals one vote), which technically avoids the classical Sybil attack of voting once per identity. But the economic equivalent remains: whoever accumulates the most tokens wins the vote, which is Sybil-like in its effects even if it is not a Sybil attack in the strict sense. Some governance systems try quadratic voting, reputation-based voting, or identity-bound voting to address this, but all of them run into the fundamental problem that distinguishing “one real person” from “many coordinated accounts” is unsolved in the permissionless setting.
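Quadratic voting shows why these schemes need Sybil resistance to work at all. Under quadratic voting, casting v votes costs v² credits; split the same v votes across k Sybil identities and the cost falls to k·(v/k)² = v²/k. A short worked example:

```python
# Worked example: quadratic voting assumes one identity per person.
# Cost of v votes is v**2 for one identity, but only v**2 / k when the
# same v votes are split evenly across k Sybil identities.

def qv_cost(votes: int) -> int:
    return votes ** 2

def qv_cost_split(votes: int, identities: int) -> float:
    per_identity = votes / identities
    return identities * per_identity ** 2

print(qv_cost(100))             # 10000 credits as one honest identity
print(qv_cost_split(100, 100))  # 100.0 credits across 100 Sybils
```

A hundredfold discount for the attacker is why quadratic voting is only as strong as the identity layer underneath it.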

Validator decentralisation metrics can also be misleading because of Sybil-like concentration. Ethereum has roughly a million validators, which sounds very decentralised, but many of those validators are operated by the same entities — Lido, Coinbase, Binance, and the big staking pools each run tens of thousands of validators. The raw validator count is a poor measure of actual decentralisation because it does not distinguish between one person running one validator and one operator running many. The underlying concentration is the interesting number, and it is smaller than the raw count suggests.
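A quick way to quantify this is to count how few distinct operators it takes to control a critical fraction of stake, in the spirit of a Nakamoto coefficient. The operator names and validator counts below are entirely made up for illustration, not real figures for Lido, Coinbase, or anyone else:

```python
# Illustrative only: hypothetical operator sizes, not real staking data.
# Counts the fewest distinct operators whose combined validators exceed
# a threshold fraction (default 1/3, the relevant safety bound in
# BFT-style consensus).

def operators_for_threshold(validators_per_operator, threshold=1/3):
    counts = sorted(validators_per_operator.values(), reverse=True)
    total, running = sum(counts), 0
    for n, count in enumerate(counts, start=1):
        running += count
        if running / total > threshold:
            return n
    return len(counts)

operators = {"pool_a": 300_000, "exchange_b": 150_000,
             "exchange_c": 120_000, "pool_d": 80_000}
# plus 350,000 hypothetical solo operators running one validator each
operators.update({f"solo_{i}": 1 for i in range(350_000)})

print(sum(operators.values()))             # 1000000 validators in total
print(operators_for_threshold(operators))  # 2
```

A million validators, yet in this toy distribution just two operators clear the one-third threshold. The raw validator count and the operator-level concentration answer different questions, and only the second one measures Sybil-adjusted decentralisation.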

Proof of Personhood as a Partial Answer

Several projects have attempted to solve the Sybil problem by building identity systems that verify “one human equals one account”, so that non-consensus decisions (voting, airdrops, UBI distributions) can use real identity-weighted mechanisms. Worldcoin is the most prominent attempt, using biometric iris scanning to verify each participant as a unique human before issuing them a Worldcoin identity. BrightID uses a social-graph-based approach, verifying identities through video calls between existing verified users. Proof of Humanity and similar projects use reputation-based attestation chains.

None of these have achieved mainstream adoption, and each has serious tradeoffs. Worldcoin’s biometric model has raised real privacy concerns and has been banned or restricted in multiple jurisdictions. Social-graph systems are harder to bootstrap and can be gamed by dedicated attackers coordinating across multiple fake identities. Reputation systems are vulnerable to sophisticated Sybil farms that can pass the reputation thresholds. Sybil resistance remains one of the unsolved problems in decentralised systems design, and the current state of the art is “partial solutions that work against low-effort attackers but fail against dedicated ones”. Whether a good solution is possible at all, without centralised identity verification, is an open question.