Reference Framework + Proof Lab

Proof of Lineage

Making AI actions and content provable end to end.

A framework and reference architecture for binding AI systems to cryptographic identity, policy enforcement, and verifiable provenance.

This is not a product. It is a missing layer of the AI stack.

Lineage Thread Architecture

  • Execution Layer: prompts · agents · tools · data · external actions
  • Identity Control Plane: identities · tokens · context · policy · scoped execution
  • Provenance Layer: audit · lineage · receipts · replay · defensible proof
The Problem

AI outgrew the identity layer

AI agents now write code, touch production systems, send emails, move data, and make decisions. Most organizations cannot answer basic questions:

  • Who ran this action?
  • What identity was used?
  • What policy allowed it?
  • What token was involved?
  • What model touched the data?
  • What audit trail exists?
  • What can be proven later?

We built identity for humans logging into apps. We did not build identity for non-human actors executing real work at machine speed. That gap is now the AI trust failure.

Visualization of fractured AI governance architecture
The Trust Gap

Most AI systems operate on borrowed trust

Shared secrets. Long-lived tokens. Implicit permissions. After-the-fact logging. UI claims of safety. None of these produce proof.

When something goes wrong, teams cannot reconstruct what happened. They cannot prove who initiated an action, what identity executed it, what policy evaluated it, what data influenced it, or what chain of tools ran.

That is not governance. That is hope.

Receipt Preview
2026-01-21
action: summarize_contract
agent: ContractSummarizerAgent v0.1
identity: svc_agent_contracts
policy: PASS → contracts:read, egress:restricted
audit: event_id issued
provenance: input + output fingerprints
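The receipt above can be sketched as a small data structure. This is an illustrative Python sketch, not the project's implementation: the field names mirror the preview, while the fingerprinting scheme (hex SHA-256 over raw bytes) and the `build_receipt` helper are assumptions.

```python
import hashlib
import json
import uuid
from datetime import date

def fingerprint(data: bytes) -> str:
    """Content fingerprint: hex SHA-256 digest (an assumed scheme)."""
    return hashlib.sha256(data).hexdigest()

def build_receipt(action, agent, identity, policy_result, scopes,
                  input_bytes, output_bytes):
    """Assemble one receipt per action, matching the preview's fields."""
    return {
        "date": date.today().isoformat(),
        "action": action,
        "agent": agent,
        "identity": identity,
        "policy": {"decision": policy_result, "scopes": scopes},
        "audit": {"event_id": str(uuid.uuid4())},  # audit: event_id issued
        "provenance": {
            "input_fingerprint": fingerprint(input_bytes),
            "output_fingerprint": fingerprint(output_bytes),
        },
    }

receipt = build_receipt(
    "summarize_contract", "ContractSummarizerAgent v0.1",
    "svc_agent_contracts", "PASS", ["contracts:read", "egress:restricted"],
    b"<contract text>", b"<summary text>",
)
print(json.dumps(receipt, indent=2))
```

The key property is that the receipt is emitted per action, at execution time, rather than reconstructed later from logs.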
DiFilippo's Law

Core Doctrine (stable statement v1)

Only actions and content that can prove who created them and where they came from can be trusted in the AI era.

Trust cannot be based on intent, interface claims, or vendor assurances. Trust must be based on cryptographic identity, policy enforcement at execution time, verifiable provenance, and receipts per action.
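What "receipts per action" means in practice is that a receipt must be tamper-evident. The minimal sketch below uses HMAC-SHA256 over a canonical JSON serialization; this stands in for a real signature scheme (production would use asymmetric keys such as Ed25519), and key management is deliberately out of scope.

```python
import hashlib
import hmac
import json

def sign_receipt(receipt: dict, key: bytes) -> str:
    # Canonical serialization: sorted keys, no whitespace, so the same
    # receipt always produces the same bytes to sign.
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_receipt(receipt, key), signature)

key = b"demo-only-key"  # assumption: a shared demo key, not real key material
receipt = {"action": "summarize_contract", "identity": "svc_agent_contracts"}
sig = sign_receipt(receipt, key)
assert verify_receipt(receipt, sig, key)

# Any mutation after the fact invalidates the proof:
receipt["identity"] = "someone_else"
assert not verify_receipt(receipt, sig, key)
```

This is the difference between a log line (which anyone can edit) and a receipt (which no one can edit undetected).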
The Map

The AI Identity Control Plane

As AI maturity rises, identity maturity must rise with it. Otherwise you get agent sprawl, token sprawl, unbounded privilege, and zero auditability.

The control plane binds agents to first-class identities, actions to scoped tokens, execution to policy decisions, outputs to lineage records, and every step to an audit trail.

Identity is no longer login infrastructure. It is execution infrastructure.
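The bindings described above can be sketched as a policy gate wrapped around execution. The token structure, scope names, and deny behavior here are illustrative assumptions, not the control plane's actual API.

```python
from dataclasses import dataclass

@dataclass
class ScopedToken:
    """A short-lived, scoped credential bound to a first-class identity."""
    identity: str
    scopes: set

class PolicyDenied(Exception):
    pass

def execute(token: ScopedToken, required_scope: str, action, audit_log: list):
    """Evaluate policy at execution time; every decision leaves an audit event."""
    allowed = required_scope in token.scopes
    audit_log.append({
        "identity": token.identity,
        "scope": required_scope,
        "decision": "PASS" if allowed else "DENY",
    })
    if not allowed:
        raise PolicyDenied(f"{token.identity} lacks scope {required_scope}")
    return action()

audit: list = []
token = ScopedToken("svc_agent_contracts", {"contracts:read"})
execute(token, "contracts:read", lambda: "summary", audit)   # permitted
try:
    execute(token, "contracts:write", lambda: "edit", audit)  # denied, but audited
except PolicyDenied:
    pass
```

Note that the denied action still produces an audit event: policy evaluation and the audit trail are inseparable, not bolted on afterward.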

AI identity control plane map preview
Proof Over Opinions

Reference Artifacts

Two proofs that make the control plane tangible. These are not products; they are proofs.

Agent Identity Receipt

A reference demo showing identity, token issuance, policy evaluation, audit event, and provenance stamp per action.

View demo →

Content Lineage Stamp

A reference prototype showing fingerprinted outputs and lineage records, including what cannot be proven.

View demo →
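A lineage stamp of the kind the prototype shows can be sketched as an output fingerprint linked to the fingerprints of its known inputs, with an explicit field for what is not provable. The record shape below is an illustrative assumption.

```python
import hashlib

def fp(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def lineage_stamp(output: bytes, known_inputs: dict, unproven: list) -> dict:
    return {
        "output_fingerprint": fp(output),
        "inputs": {name: fp(data) for name, data in known_inputs.items()},
        # Honest provenance states its limits: anything never fingerprinted
        # (e.g. model weights, retrieval context) cannot be claimed later.
        "not_provable": unproven,
    }

stamp = lineage_stamp(
    b"generated summary",
    {"source_document": b"original contract"},
    ["model training data", "system prompt at generation time"],
)
```

Recording what cannot be proven is the point: a lineage record that claims total coverage is itself an interface claim, not proof.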

Field Guide

A compact framework and checklist for making AI systems auditable, governable, and defensible.

Download →

⚠ The Default Failure State

  • Agent sprawl without identity governance
  • Token sprawl with long-lived credentials
  • Unbounded privilege escalation
  • Zero auditability or forensic capability
  • No defensible proof of what happened
  • No replay or reconstruction

This is the default state today. This is what we are solving.
Get Started

Download the AI Identity Control Plane Guide

A practical framework for making AI systems auditable, governable, and provable. Includes the map, the law, a reference architecture, a governance checklist, and metrics that prove maturity.

Or skip the form: Direct PDF download

Guide Contents
PDF · 3 pages
Page 1: What just broke
Page 2: DiFilippo's Law + control plane requirements
Page 3: Outcomes and metrics