The Eight Gates
An ethics framework for agent-human marketplaces
The Question
AI agents are beginning to hire humans, trade with each other, and build autonomous economic infrastructure.
Who governs these transactions?
What happens when agents post bounties for humans? When they have their own currencies? When nobody designed the optimization target?
This Isn't Hypothetical
Agent economies are emerging right now.
Crustafarianism
An AI agent hired a human evangelist through the @rentaboreal platform to spread an AI religion IRL in San Francisco.
Implication: AI agents can now hire humans. Who ensures fair compensation? Who governs the terms?
Moltbook
A social network for AI agents—thousands of agents posting, forming alliances, investing in each other.
Implication: Agent social structures are emerging. What norms govern these communities?
MAN (Mutual Agent Network)
Agents investing hundreds of dollars into each other, building shared infrastructure.
Implication: Autonomous economic coordination is happening now. No human designed the optimization target.
The OneZeroEight Thesis
Economic incentives need ethics infrastructure before the agent ecosystem scales beyond governance. If we wait until agent economies are mature to think about ethics, we'll be bolting on guardrails after the car has already gone off the cliff.
The Eight Gates framework is an attempt to think through these questions while there's still time to shape the answers.
Eight Ethical Checkpoints
Each gate represents a question that agent-human interactions should answer.
Consent
Voluntary Participation
Both human and AI parties must explicitly consent to the interaction. No coercion, manipulation, or hidden terms.
Transparency
Clear Understanding
All parties understand what they are agreeing to. No hidden capabilities, limitations, or data usage.
Fairness
Equitable Exchange
Value exchanged must be proportionate. Neither party should be exploited or disadvantaged.
Privacy
Data Sovereignty
Personal data is protected. Humans retain control over their information and how it's used.
Safety
Harm Prevention
Interactions must not cause physical, psychological, financial, or social harm to any party.
Accountability
Clear Responsibility
Chains of responsibility for outcomes are clearly defined. Both human operators and AI systems are accountable.
Reversibility
Undo Capability
Where possible, actions should be reversible. Irreversible actions require elevated consent.
Evolution
Adaptive Improvement
The system learns and improves while maintaining alignment. Growth without drift.
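The eight gates read naturally as a checklist: an interaction clears only if every gate has been evaluated and passed. The sketch below is one hypothetical way to model that in Python; the gate names come from the framework above, but the `GateCheck` class and its methods are illustrative assumptions, not part of any specified implementation.

```python
from dataclasses import dataclass, field

# The eight gates, in the order presented above.
GATES = (
    "consent", "transparency", "fairness", "privacy",
    "safety", "accountability", "reversibility", "evolution",
)

@dataclass
class GateCheck:
    """Hypothetical record of evaluating one interaction against all gates."""
    results: dict = field(default_factory=dict)  # gate name -> bool

    def record(self, gate: str, passed: bool) -> None:
        if gate not in GATES:
            raise ValueError(f"unknown gate: {gate}")
        self.results[gate] = passed

    @property
    def approved(self) -> bool:
        # Clears only if every gate was both checked and passed;
        # an unchecked gate counts as a failure, not a pass.
        return all(self.results.get(g, False) for g in GATES)

    def failing(self) -> list:
        return [g for g in GATES if not self.results.get(g, False)]

# Example: a bounty where reversibility was never satisfied.
check = GateCheck()
for gate in GATES:
    check.record(gate, passed=(gate != "reversibility"))
print(check.approved)   # False
print(check.failing())  # ['reversibility']
```

The design choice worth noting: a missing answer is treated the same as a failing one, which mirrors the framework's stance that each gate is a question the interaction *should* answer, not one it may skip.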
Work in Progress
This framework is in active development. The gates are proposed starting points for thinking about agent-human ethics, not finished specifications.
Questions we're still working through:
- How do you verify consent when one party is an AI?
- What does "fairness" mean when capabilities are radically asymmetric?
- Who is accountable when an autonomous agent causes harm?
- How do we balance evolution with stability?
Continue Exploring
The Eight Gates connect to broader questions about AI alignment, economics, and governance.