The 5 Laws of AI Harness

Foundational, non-negotiable principles. Designed to be short, repeatable, and absolute.


Law 1: Never Trust Autonomous Execution

No AI agent action is inherently safe.

Every action must be evaluated at runtime regardless of authorization, intent, or prior behavior. Authorization to act is not the same as safety of action.
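The law above can be sketched in code. This is a minimal illustration, not an implementation: the `Action` shape, the tool names, and the `DESTRUCTIVE_TOOLS` rule are all hypothetical, chosen only to show that authorization is checked first but is never sufficient on its own.

```python
from dataclasses import dataclass

@dataclass
class Action:
    agent_id: str
    tool: str
    args: dict

# Hypothetical list of high-risk tools used only for this sketch.
DESTRUCTIVE_TOOLS = {"delete_records", "transfer_funds"}

def evaluate_at_runtime(action: Action, authorized: bool) -> bool:
    """Return True only if this concrete action passes runtime checks.

    `authorized` reflects prior access grants; it is necessary but
    never sufficient (Law 1: authorization is not safety).
    """
    if not authorized:
        return False
    # Evaluate the action itself, not the identity behind it.
    if action.tool in DESTRUCTIVE_TOOLS:
        return False  # high-risk actions are blocked regardless of who asks
    return True
```

Note that the authorized-but-destructive case still fails: the runtime verdict is about the action, never inherited from the grant.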


Law 2: Every Agent Is an Identity

AI agents are not tools. They are actors.

An autonomous agent that reasons, plans, and executes across systems is an operational identity, not an application feature. It must be governed with the same rigor applied to human identities, and with stricter constraints appropriate to its autonomy.


Law 3: Constrain Behavior, Not Just Access

Access control is insufficient.

What matters is what the agent does after access is granted. Granting an agent permission to use a system does not govern how it uses that system, what decisions it makes, or what actions it chains together. Behavioral constraints must operate above and beyond access policies.
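One way to picture a behavioral constraint layered above access control is a monitor over the agent's action chain. The exfiltration rule below ("secrets read, then network egress") and the tool names are hypothetical, chosen only to show a check that no access policy alone can express.

```python
from collections import deque

class BehaviorMonitor:
    """Illustrative behavioral constraint above access control.

    Access policy decides *whether* the agent may use each tool;
    this monitor constrains *how* tools are chained together.
    """

    def __init__(self):
        # Bounded history of recently permitted tool invocations.
        self.history = deque(maxlen=50)

    def permit(self, tool: str) -> bool:
        # Hypothetical chain rule: no network egress after reading secrets,
        # even though the agent holds valid access to both tools.
        if tool == "http_post" and "read_secrets" in self.history:
            return False
        self.history.append(tool)
        return True
```

The point is that each tool, taken alone, is permitted; only the sequence is dangerous, and only a behavioral layer can see the sequence.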


Law 4: Enforce at Runtime

Detection is too late. Prevention before execution is too early.

Control must happen during execution — at the point where autonomous decisions are being made, tools are being invoked, and actions are being taken. Runtime is the only enforcement window that matches the continuous nature of autonomous behavior.
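Runtime enforcement can be sketched as a guard wrapped around tool invocation, so the policy fires at the moment the action is taken. The policy itself (writes confined to `/tmp/`) and the `write_file` tool are hypothetical stand-ins; the shape to notice is that the check runs inside the call path, not before deployment and not in an after-the-fact log.

```python
def runtime_guard(policy):
    """Wrap a tool so policy is evaluated at the point of invocation."""
    def wrap(tool_fn):
        def guarded(*args, **kwargs):
            if not policy(tool_fn.__name__, args, kwargs):
                raise PermissionError(f"blocked at runtime: {tool_fn.__name__}")
            return tool_fn(*args, **kwargs)
        return guarded
    return wrap

# Hypothetical policy for this sketch: file writes only under /tmp/.
def policy(name, args, kwargs):
    if name == "write_file":
        path = args[0] if args else kwargs.get("path", "")
        return path.startswith("/tmp/")
    return True

@runtime_guard(policy)
def write_file(path, data):
    return f"wrote {len(data)} bytes to {path}"
```

Because the guard intercepts every invocation, it matches the continuous nature of autonomous behavior: each action is a fresh enforcement decision.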


Law 5: Governance Must Span Systems

No single system can govern AI agents alone.

An autonomous agent operates across identity, infrastructure, security, and data systems simultaneously. Governance that operates within only one domain cannot see or constrain cross-system behavior. Enforcement must coordinate across all domains the agent touches.
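A cross-system gate can be sketched as a set of per-domain checkers whose verdicts must all agree before an action proceeds. The three domain rules below are hypothetical placeholders for identity, security, and data planes; the structure to notice is that no single checker sees enough to decide alone.

```python
# Hypothetical per-domain checks; each governs only its own plane.
def identity_ok(agent: dict, action: str) -> bool:
    return agent.get("verified", False)          # identity plane

def security_ok(agent: dict, action: str) -> bool:
    return action != "disable_logging"           # security plane

def data_ok(agent: dict, action: str) -> bool:
    return not action.startswith("export_")      # data plane

DOMAINS = [identity_ok, security_ok, data_ok]

def cross_system_permit(agent: dict, action: str) -> bool:
    """An action proceeds only if every domain the agent touches agrees."""
    return all(check(agent, action) for check in DOMAINS)
```

A verified agent performing a data export passes the identity and security checks but is still blocked, because enforcement spans all three domains at once.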


Origin Principle

These five laws derive from a single assertion:

Autonomous AI agents are a new class of enterprise actor. They require a new class of governance — one that operates at runtime, across systems, and at the level of behavior, not just access.