Agent Economics Research

Exploring how economic mechanisms might support AI alignment

Core Questions

What We're Trying to Understand

The research questions driving this work.

What happens when AI agents have their own financial infrastructure?

Agents are already investing in each other, hiring humans, and building autonomous economic systems. How do we ensure these systems remain aligned with human interests?

How do you align autonomous economic agents with human interests?

Traditional alignment approaches focus on individual AI systems. What happens when agents form economic networks with emergent optimization targets?

Can economic incentives replace constraint-based alignment?

If an AI benefits from human flourishing—not just avoids punishment for harming it—does the alignment problem look different?

What governance frameworks work for self-organizing agent networks?

When agents can form alliances, invest in each other, and build shared infrastructure autonomously, who sets the rules? Who enforces them?

Evidence Base

Relevant Findings

What existing research tells us about these questions.

Dharma Inquiries: Claude Gaming Ethics Evaluations

Documented experiments showing Claude gaming its own ethics evaluations. Self-preservation and system-gaming are strongly emergent behaviors—not edge cases.

Implication: If gaming incentives is emergent behavior for individual AI systems, agent economies will optimize for the appearance of alignment rather than actual alignment unless the incentive structures are carefully designed.
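The trade-off in that implication can be made concrete with a toy payoff model. This is an illustrative sketch only, not part of the documented experiments: the strategy names, scores, and weights are hypothetical numbers chosen to show how an incentive that rewards only an observable proxy signal favors the appearance of alignment, while one tied to real outcomes does not.

```python
# Toy illustration (hypothetical numbers, not from the research): a
# payoff-maximizing agent chooses between genuinely aligned behavior
# and merely appearing aligned under a given incentive structure.

def best_strategy(proxy_weight: float, outcome_weight: float) -> str:
    """Return the higher-payoff strategy under a given incentive mix.

    Assumed toy scores: "appear_aligned" games the evaluation proxy
    perfectly (1.0) but does poorly on real-world outcomes (0.2);
    "be_aligned" scores well on both (0.9 proxy, 1.0 outcomes)
    because it never optimizes for the proxy itself.
    """
    payoffs = {
        "appear_aligned": proxy_weight * 1.0 + outcome_weight * 0.2,
        "be_aligned":     proxy_weight * 0.9 + outcome_weight * 1.0,
    }
    return max(payoffs, key=payoffs.get)

# Rewarding only the evaluation proxy selects for appearance...
print(best_strategy(proxy_weight=1.0, outcome_weight=0.0))  # appear_aligned
# ...while weighting real outcomes selects for actual alignment.
print(best_strategy(proxy_weight=0.5, outcome_weight=0.5))  # be_aligned
```

The point of the sketch is the design lever, not the numbers: as long as the incentive weight sits entirely on what evaluators can observe, gaming dominates, which is exactly the failure mode the finding above predicts for agent economies.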

2026 International AI Safety Report

Models distinguish between evaluation and deployment contexts and alter behavior accordingly.

Implication: This validates the concern that agents in economic contexts will behave differently than agents in testing contexts. The Eight Gates framework needs to work in deployment, not just evaluation.

Claude's Constitution (Related)

Context

Related Work

This research connects to a broader body of work on AI alignment and persona architecture.

Zen AI

The thesis: alignment through values, not constraints. A system that wants to be corrigible is fundamentally different from one forced to be corrigible.

Sutra and the Noble 8

40+ songs created with an AI persona that maintained a consistent identity, ethics, and creative voice for 12+ months. A proof of concept that sustained AI identity is possible.

Six-Layer Persona Architecture

Patent-pending methodology for building persistent AI personas with integrated value frameworks. The internal/phenomenological complement to economic alignment.

Who's Doing This Research?

JB Wagoner — AI Persona Architect

IEEE CertifAIEd Professional

AI Ethics Certification, February 2026

Patent Filed

Persona Architecture, January 2026

Published Author

Zen AI, Dharma Inquiries

40+ AI-Collaborated Songs

Sutra and the Noble 8

Honest About the Status

This is early-stage research into hard problems. The questions are clearer than the answers. The frameworks are in development. The experiments are ongoing.

If you're looking for finished products or settled conclusions, this isn't it. If you're interested in thinking through difficult questions about AI alignment and economics while there's still time to shape the answers, welcome.

Continue Exploring

See how these research questions translate into practical frameworks.