Many consensus architectures are robust against attackers up to a certain size, but once an attacker grows beyond that threshold, the protocol becomes exploitable. Can we do better by making protocols not just stable but unexploitable?
The Problem with Objectivity
Objective protocols can be maintained using only the protocol definition and published data - no external information needed. However, this creates a fundamental symmetry problem: the protocol cannot distinguish between truth and lies.
Consider these scenarios:
- Truth is B; 80% of participants follow the protocol honestly and vote B, while 20% are attackers voting A
- Truth is A, but 80% are attackers voting B, and only 20% vote A honestly
From the protocol's perspective, these are indistinguishable. This enables P + epsilon attacks, epistemic takeovers, and profitable 51% attacks.
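The symmetry can be made concrete with a toy sketch (the function name and vote counts are illustrative, not from any real protocol): from inside an objective protocol, both scenarios produce exactly the same published data.

```python
# Minimal sketch: the on-chain view available to an objective protocol.
def on_chain_view(votes):
    """All the protocol can see: a tally of published votes."""
    return {answer: votes.count(answer) for answer in set(votes)}

# Scenario 1: truth is B; 80 honest voters say B, 20 attackers say A.
scenario_1 = ["B"] * 80 + ["A"] * 20
# Scenario 2: truth is A; 20 honest voters say A, 80 attackers say B.
scenario_2 = ["B"] * 80 + ["A"] * 20

# The two views are identical, so no objective rule can favor the truth.
assert on_chain_view(scenario_1) == on_chain_view(scenario_2)
```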
Enter Subjectivity
Subjective protocols require external information beyond the protocol definition and published data. While this sounds problematic, it lets the human community detect manipulation and deceit that pure cryptography cannot.
Subjectivocracy - a governance model:
- If everyone agrees, follow unanimous decision
- If there's disagreement between A and B, split into two forks implementing each option
- Let the community decide which forks matter
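The three rules above can be sketched as a simple decision function (a minimal illustration; the function name and return shape are invented here):

```python
def subjectivocracy(votes):
    """Unanimity -> follow the single outcome; disagreement -> one fork
    per disputed option. The protocol never picks a winner among forks;
    the community decides which forks matter, outside the protocol."""
    options = set(votes)
    if len(options) == 1:
        return {"outcome": options.pop(), "forks": []}
    return {"outcome": None, "forks": sorted(options)}
```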
Applied to SchellingCoin:
- All voters vote A or B on a question
- If unanimous, reward everyone
- If disagreement, split into two forks - one for each answer
- Each fork has its own currency; users choose which to trust
This is essentially a formalized reputation system - the mechanism records all votes, letting users choose which group of participants to trust based on voting history.
Making It Practical
Pure subjectivocracy has problems:
- Too many decisions create cognitive load
- "Very stupid users" (VSUs) like IoT devices and smart contracts can't get social information
Solution: Use weaker governance for non-contentious issues, with subjectivity as a fallback. A refined SchellingCoin:
- Voters vote A or B
- Majority voters get the reward; minority voters get nothing; all deposits are frozen for one hour
- Anyone can "raise the alarm" by posting a large deposit (50× the reward), forcing a fork split
- On the correct fork, the alarm raiser gets 2× their deposit back; on the wrong fork, they lose it
- After an alarm, rewards and penalties become more extreme: correct voters get 5× the reward, incorrect voters lose 10× the reward
This creates a unique equilibrium where truth-telling is dominant.
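The payoffs above can be sketched directly (the multipliers 50×, 2×, 5×, and 10× come from the mechanism description; the function names and base reward are illustrative):

```python
R = 1.0  # base reward per voter (illustrative unit)

def voter_payoff(vote, majority, truth, alarm_raised):
    """Payoff for one voter in the refined SchellingCoin round."""
    if not alarm_raised:
        # Ordinary round: majority wins the reward, minority gets nothing.
        return R if vote == majority else 0.0
    # After an alarm, each fork judges votes against its own answer
    # (here: the truth on that fork), with escalated stakes.
    return 5 * R if vote == truth else -10 * R

def alarm_raiser_payoff(on_correct_fork):
    """Payoff for the participant who raised the alarm."""
    deposit = 50 * R
    # Correct fork: receive twice the deposit back; wrong fork: lose it.
    return 2 * deposit if on_correct_fork else -deposit
```

Because a false alarm costs the raiser their 50× deposit while a true alarm doubles it, alarms are only profitable when the majority really did lie, which is what pushes honest voting toward being the dominant strategy.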
The Public Function of Markets
How do VSUs (like smart contracts or IoT devices) choose the correct fork?
Markets provide the answer. After a fork:
- One fork controlled by truth-tellers
- One fork controlled by liars
The market will price the truth-tellers' currency higher. Markets translate human intelligence inside the subjective protocol into a pseudo-objective signal that VSUs can follow.
Market robustness: as long as the honest participants' economic weight exceeds the attackers', the market provides correct prices. Manipulation is expensive and temporary.
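A VSU's fork-selection rule then reduces to a one-liner (a sketch under assumed data: real prices would come from an external exchange feed, and the fork names here are invented):

```python
def choose_fork(fork_prices):
    """Follow the fork whose currency the market values most.
    The market turns the community's subjective judgment into a
    signal that a dumb device can read without social context."""
    return max(fork_prices, key=fork_prices.get)

# Illustrative prices: the market discounts the attackers' fork.
prices = {"fork_truth": 0.95, "fork_attack": 0.07}
assert choose_fork(prices) == "fork_truth"
```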
Implications for Proof of Work
Proof of work can also be seen through this lens. Exponential subjective scoring (ESS) penalizes forks that appear late: always-online users reject a hostile fork even if it carries more total work, expecting that the attacker will eventually give up.
VSUs simply follow total proof of work; they may be temporarily tricked during an attack, but they eventually see the original fork win, and the attacker pays dearly for the treachery.
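A minimal sketch of the ESS idea, with an exponential lateness discount (the discount rate and work numbers are made up for illustration; real ESS parameterizations differ):

```python
import math

DISCOUNT = 0.1  # per-block-late penalty rate (assumed value)

def ess_score(total_work, blocks_late):
    """Score a fork's work, discounted exponentially by how late
    an always-online node first saw it."""
    return total_work * math.exp(-DISCOUNT * blocks_late)

# A fork seen from the start beats an attacker's fork revealed
# 30 blocks late, even though the attacker has more raw work.
original = ess_score(total_work=100, blocks_late=0)
attacker = ess_score(total_work=120, blocks_late=30)
assert original > attacker
```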
Conclusion
Subjectivity makes the game-theoretic analysis of cryptoeconomic protocols simpler and the protocols themselves more secure. However, it has implications:
- Single-cryptocurrency maximalism cannot survive
- Subjective design requires loose coupling where higher-level mechanisms don't control lower-level protocol value
- Every mechanism needs its own currency that rises/falls with perceived utility
- Thousands or millions of "coins" may need to exist
Perhaps only a few mechanisms (consensus on block data availability, timestamping, and facts) need to be subjective, with everything else built objectively on top.