Core Doctrine

DiFilippo's Law

A constraint statement for trust in the AI era. Short enough to repeat. Strict enough to matter.

The Law
v1 · stable
Only actions and content that can prove who created them and where they came from can be trusted in the AI era.
What It Replaces
  • Interface claims of safety
  • Brand trust and vendor reputation
  • After-the-fact audits
  • Shared secrets and long-lived tokens
  • Policy that exists on paper but not at execution time
What It Requires
  • First-class identities for agents
  • Scoped tokens minted per action
  • Policy decisions evaluated in context
  • Audit events emitted for each step
  • Provenance stamps and receipts
  • Forensic replay of action chains
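
To make the requirements concrete, here is a minimal sketch in Python of one pass through the loop the list describes: a scoped token minted for a single action, a policy decision evaluated in context, and an audit event that binds identity, scope, decision, and an output hash into one record. Every name here (mint_scoped_token, evaluate_policy, emit_audit_event) is hypothetical, not a real API.

```python
# Illustrative sketch only: function and field names here are hypothetical, not a real API.
import hashlib
import json
import uuid
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ScopedToken:
    """A short-lived credential minted for exactly one action by one agent."""
    token_id: str
    agent_id: str
    action: str
    resource: str
    expires_at: str


def mint_scoped_token(agent_id: str, action: str, resource: str) -> ScopedToken:
    # One token per action with a short TTL, instead of a long-lived shared secret.
    return ScopedToken(
        token_id=str(uuid.uuid4()),
        agent_id=agent_id,
        action=action,
        resource=resource,
        expires_at=(datetime.now(timezone.utc) + timedelta(minutes=5)).isoformat(),
    )


def evaluate_policy(token: ScopedToken, context: dict) -> bool:
    # Placeholder policy check, evaluated in context at execution time rather than on paper.
    return token.action in context.get("allowed_actions", [])


def emit_audit_event(token: ScopedToken, allowed: bool, output: str) -> dict:
    # The audit event binds identity, scope, policy decision, and an output hash together.
    return {
        "event_id": str(uuid.uuid4()),
        "agent_id": token.agent_id,
        "token_id": token.token_id,
        "action": token.action,
        "resource": token.resource,
        "policy_decision": "allow" if allowed else "deny",
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


# One pass through the loop: identity -> scoped token -> policy decision -> audit event.
token = mint_scoped_token("agent:invoice-bot", "send_email", "billing/outbox")
allowed = evaluate_policy(token, {"allowed_actions": ["send_email"]})
event = emit_audit_event(token, allowed, "Invoice #1042 sent")
print(json.dumps(event, indent=2))
```
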
Common Objections

"We already log things"

Logs without identity binding and policy decisions are not proof. They are telemetry. Telemetry is not defensible in a governance review.
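
A small illustration of the gap, with invented field names: a plain telemetry line versus an audit record that binds identity, the scoped credential, the policy decision, and a content hash into one reviewable unit.

```python
# Hypothetical structures to show the difference between telemetry and proof.
import hashlib
import json

# Telemetry: says something happened, but not who acted, under what scope, or why it was allowed.
telemetry_line = "2025-06-01T12:00:00Z INFO email sent to customer"

# Identity-bound audit record: actor, credential, decision, and verifiable content hash in one unit.
audit_record = {
    "agent_id": "agent:invoice-bot",      # who acted (first-class identity)
    "token_id": "tok-7f3a",               # which scoped credential was used
    "action": "send_email",
    "policy_id": "email-outbound-v3",     # which rule was evaluated
    "policy_decision": "allow",           # the decision made at execution time
    "content_sha256": hashlib.sha256(b"Invoice #1042 sent").hexdigest(),
    "timestamp": "2025-06-01T12:00:00Z",
}

print(telemetry_line)
print(json.dumps(audit_record, indent=2))
```
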

"We can add this later"

Later means after tokens and agents have already proliferated. That is the expensive direction. Retrofitting identity is harder than building it in.

"Watermarks will solve it"

Watermarks cover content. The risk is actions. Actions need identity, policy, and receipts. Content lineage is necessary but not sufficient.

Implications

For AI platform teams: Every agent needs a first-class identity. Every action needs a scoped token. Every output needs a provenance stamp. Every step needs an audit event.
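
One way to picture provenance stamps and forensic replay together, as a rough sketch rather than a prescribed design: each step emits a receipt that commits to the hash of the previous receipt, so the whole action chain can be recomputed and checked end to end. All names here are hypothetical.

```python
# Illustrative only: a hash-chained receipt per step, so an action chain can be replayed and verified.
import hashlib
import json


def make_receipt(prev_hash: str, agent_id: str, action: str, output: str) -> dict:
    """Each receipt commits to the previous one, forming a tamper-evident chain."""
    body = {
        "prev_hash": prev_hash,
        "agent_id": agent_id,
        "action": action,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    body["receipt_hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body


def verify_chain(receipts: list[dict]) -> bool:
    """Forensic replay: recompute every hash and confirm each link points at the one before it."""
    prev = "genesis"
    for r in receipts:
        expected = {k: r[k] for k in ("prev_hash", "agent_id", "action", "output_sha256")}
        recomputed = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if r["prev_hash"] != prev or r["receipt_hash"] != recomputed:
            return False
        prev = r["receipt_hash"]
    return True


chain = []
prev = "genesis"
for step, output in [("fetch_invoice", "Invoice #1042"), ("send_email", "Invoice #1042 sent")]:
    receipt = make_receipt(prev, "agent:invoice-bot", step, output)
    chain.append(receipt)
    prev = receipt["receipt_hash"]

print(verify_chain(chain))  # True until any receipt in the chain is altered
```

Because each receipt commits to the one before it, tampering with any step breaks verification for every step after it.
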

For security and GRC: You cannot audit what you cannot prove. You cannot prove what you did not instrument. Instrument identity and provenance at design time, not after incidents.