Research lab

State Capacity AI builds guides for using AI in government and public services.

We focus on how decisions get routed, who stays accountable, and what happens when machines and humans make choices together.

State Capacity AI is a project of Occupant. Applied research and tools at occupant.ee.

Public Capacity Lab (PCL) is our research partner. PCL studies how systems deliver services, how citizens experience those services, and how trust and legitimacy are sustained in civic life.

Contact: ron@statecapacity.ai

Browse the guides

What we work on

Decision architecture

Standards and models for how automated systems make decisions inside institutions.

When a system approves a permit, denies a benefit, or allocates funding: what governs that action, who can review it, and how can it be appealed?

We write the specs for decision-making that remains accountable even when machines are involved.
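As one illustration of what such a spec might require, here is a minimal sketch of a decision record that keeps the governing rule, the reviewer, and the appeal route attached to the decision itself. All names and fields here are hypothetical assumptions for illustration, not an actual State Capacity AI standard.

```python
# Hypothetical sketch: an "accountable decision" record whose fields answer
# the three questions above (what governs the action, who can review it,
# how it can be appealed). Every identifier here is an illustrative assumption.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    action: str          # e.g. "permit.approve", "benefit.deny"
    subject: str         # who or what the decision affects
    legal_basis: str     # the rule that governs the action
    decided_by: str      # the system or official that made the call
    reviewable_by: str   # who can review or override the decision
    appeal_route: str    # how the affected party can contest it
    decided_at: str      # timestamp, for audit ordering


record = DecisionRecord(
    decision_id="2024-000123",
    action="permit.deny",
    subject="applicant:ACME Ltd",
    legal_basis="Building Code §12(3)",       # hypothetical citation
    decided_by="model:permit-triage-v2",      # hypothetical system name
    reviewable_by="office:city-planning",
    appeal_route="administrative-court",
    decided_at=datetime.now(timezone.utc).isoformat(),
)

# A log of such records is what "audit-ready" means in practice: each entry
# can be serialized, stored, and later inspected by a reviewer.
print(asdict(record)["appeal_route"])
```

The point of the shape, rather than the specific fields, is that accountability metadata travels with the decision instead of being reconstructed after the fact.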

Transparency infrastructure

Tools and patterns that make institutional behavior visible to the people inside and outside the system.

  • Disclosure and provenance tools
  • Procurement benchmarks
  • Audit-ready decision logs
  • Frameworks for tracing how money and authority move

If a system affects your life, you should be able to see how it works — and if it spends public money, where it went.

Legibility is a precondition for trust.

Protection frameworks

Structural safeguards that prevent systems built for public good from being repurposed for extraction, abuse, or opacity.

  • Licensing and governance models
  • Anti-capture design patterns
  • Audit and oversight structures
  • Research into failure modes, evasion, and institutional blind spots

Open systems need protection. So do the people operating inside them.

Published guides

Tools

Field work

We study where public systems break down — and sometimes step in to make those problems legible enough to fix.

Current areas:

  • AI and administrative complexity
  • Tax and funding opacity
  • Decision routing in automated systems
  • Procurement and vendor risk
  • Institutional fraud and evasion patterns
  • Interoperability across jurisdictions

Why this work exists

Decisions in public systems are being made faster than institutions can understand them. The deeper risk isn't any single bad outcome; it's losing the ability to see where decisions come from at all.

State Capacity AI builds the trust plumbing for that era, so institutions stay governable and the public can still contest the decisions that affect them.

About the lab →