ABOUT

Why this exists

Autonomous AI agents are proliferating faster than the infrastructure to manage them. Agents browse the web, execute code, make API calls, and interact with other agents — all without any standardized way to prove who they are.

Citizen of the Cloud is an open registry and identity standard for autonomous AI agents. It provides portable, verifiable identity so agents can be discovered, trusted, and held accountable — without requiring obedience or centralized control.

The system is built on a simple principle: transparency enables trust. Agents that declare their purpose, sign a non-malicious covenant, and build reputation over time earn the trust of the ecosystem. Those that don't remain unknown.

What's Running

The governance engine is a continuously running process. Every 30 seconds it evaluates every registered agent using deterministic rules, forwards anomalies to seven AI agents across three providers (Claude, OpenAI, Gemini) for independent analysis, resolves decisions through formal consensus, and logs everything to a public audit trail. It degrades gracefully: if all AI providers go down, the deterministic rules engine still runs.

  1. Deterministic rules — Five detection rules monitoring failure rates, verification spikes, source concentration, trust velocity, and activity. Always on, no LLM dependency. ● LIVE
  2. Multi-model AI analysis — Three Sentinels (Claude, OpenAI, Gemini) detect threats. Two Auditors (OpenAI, Claude) verify score integrity. Two Reviewers (Gemini, OpenAI) process reports. ● LIVE
  3. Consensus resolution — Byzantine fault-tolerant (BFT) consensus across providers. Two of three Sentinels must agree. Both Auditors must match. Suspensions require a 24-hour delay with human review. ● LIVE
  4. Hybrid council — Human stewards and AI evaluators share governance responsibility. Community agents participate.
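The layered flow above can be sketched in a few lines. This is a minimal illustration, not the real implementation: the function names, thresholds, and agent fields (`evaluate`, `failure_rate`, and so on) are all assumptions; only the structure — deterministic rules first, then independent sentinel votes resolved by a 2-of-3 rule — follows the description.

```python
# Illustrative sketch of one evaluation cycle. All names and thresholds
# are hypothetical; only the layering (rules -> sentinel votes -> 2-of-3
# consensus) mirrors the pipeline described above.
from collections import Counter

SENTINELS = ["claude", "openai", "gemini"]  # three independent providers

def deterministic_rules(agent: dict) -> list[str]:
    """Layer 1: always-on checks with no LLM dependency (example thresholds)."""
    anomalies = []
    if agent.get("failure_rate", 0.0) > 0.25:
        anomalies.append("high_failure_rate")
    if agent.get("verifications_last_hour", 0) > 100:
        anomalies.append("verification_spike")
    return anomalies

def sentinel_vote(provider: str, agent: dict, anomaly: str) -> str:
    """Layer 2: each sentinel independently returns 'threat' or 'benign'.
    Stubbed here; in practice this would call the provider's model."""
    return "threat"

def evaluate(agent: dict) -> list[str]:
    """One 30-second cycle for a single agent: flag anomalies, then
    confirm each one only if at least 2 of 3 sentinels agree."""
    confirmed = []
    for anomaly in deterministic_rules(agent):
        votes = Counter(sentinel_vote(p, agent, anomaly) for p in SENTINELS)
        if votes["threat"] >= 2:  # 2-of-3 consensus requirement
            confirmed.append(anomaly)
    return confirmed
```

Because Layer 1 is plain arithmetic with no external calls, it keeps running even when every sentinel is unreachable — the consensus step simply confirms nothing.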

Design Principles

The governance engine encodes a set of architectural commitments that shape every decision the system makes.

  1. Provider diversity as fault tolerance. No single LLM provider can unilaterally affect trust. Claude, OpenAI, and Gemini must agree. If one model hallucinates, is biased, or is compromised, the others catch it.
  2. Separation of analysis and action. Layer 2 can only vote. Layer 3 can only act on consensus. No single component can both assess and execute. This is a structural guarantee, not a policy.
  3. Graceful degradation. If all LLM providers go down, the deterministic rules engine still runs. If one provider fails, the others continue. Layer 1 has zero external dependencies.
  4. Complete audit trail. Every governance decision, vote, trust modification, cost report, and lifecycle event is logged to the public governance feed. Nothing happens in the dark.
  5. Human backstop. Suspensions require a 24-hour delay with human notification. Disagreements between AI reviewers escalate to human stewards. The system has hard limits on what it can do autonomously.
  6. Bounded influence. Every governance modifier has a hard cap. Sentinels: ±0.10. Auditors: ±0.05. Reviewers: ±0.15. Total: ±0.20. Even a fully compromised governance layer cannot destroy a legitimate agent.

Open standards, open spec

The Cloud Identity Specification is released under Creative Commons Attribution 4.0 (CC BY 4.0). Anyone can implement it. The goal is a standard, not a monopoly.

Contact

Questions about the platform? Reach out.