A constraint statement for trust in the AI era. Short enough to repeat. Strict enough to matter.
"We already log things"
Logs without identity binding and policy decisions are not proof. They are telemetry. Telemetry is not defensible in a governance review.
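As a minimal sketch of the difference, the two shapes below contrast a bare telemetry record with an audit event that binds the action to an identity and a policy decision. The field names (actorId, tokenId, policyDecision, and so on) are illustrative assumptions, not a standard schema.

```typescript
// Illustrative sketch only: field names are assumptions, not a standard.

// Telemetry: records that something happened.
interface TelemetryLog {
  timestamp: string;
  message: string; // e.g. "agent called tool send_email"
}

// Audit event: binds the action to an identity and a policy decision,
// which is what a governance review needs in order to treat it as proof.
interface AuditEvent {
  timestamp: string;
  actorId: string;                  // the agent's first-class identity
  tokenId: string;                  // the scoped credential used for this action
  action: string;                   // what was attempted
  resource: string;                 // what it was attempted against
  policyDecision: "allow" | "deny"; // outcome of policy evaluation
  policyId: string;                 // which policy produced the decision
  signature: string;                // tamper evidence over the fields above
}
```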
"We can add this later"
Later means after tokens and agents have already proliferated. That is the expensive direction. Retrofitting identity is harder than building it in.
"Watermarks will solve it"
Watermarks cover content. The risk is actions. Actions need identity, policy, and receipts. Content lineage is necessary but not sufficient.
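One way to picture "necessary but not sufficient": a watermark or lineage record attests to what was generated, while an action receipt also records who acted, under which token, and what the policy decided. The shapes below are a hypothetical illustration, not a proposed standard.

```typescript
// Hypothetical shapes for illustration; not a real standard.

// What a watermark or content-lineage scheme covers: the artifact itself.
interface ContentProvenance {
  contentHash: string; // hash of the generated output
  modelId: string;     // which model produced it
  generatedAt: string;
}

// What governing actions additionally requires: who acted, under what
// authority, and what the policy decided. A receipt ties these together.
interface ActionReceipt {
  provenance: ContentProvenance; // content lineage, still needed
  actorId: string;               // identity of the acting agent
  tokenId: string;               // scoped token used for the action
  policyDecision: "allow" | "deny";
  auditEventId: string;          // link to the emitted audit event
}
```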
For AI platform teams: Every agent needs a first-class identity. Every action needs a scoped token. Every output needs a provenance stamp. Every step needs an audit event.
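As a sketch of how those four requirements compose, the flow below resolves the agent's identity, mints a token scoped to the single action, stamps the output with provenance, and emits audit events along the way. Every function here (resolveAgentIdentity, mintScopedToken, and so on) is a hypothetical stub standing in for platform services; the point is the order and the coverage, not the API.

```typescript
// Hypothetical sketch: every name below is an assumption, not a real platform API.

interface AgentIdentity { agentId: string }
interface ScopedToken { tokenId: string; scope: string }
interface ProvenanceStamp { contentHash: string; actorId: string; tokenId: string }

// Stubs standing in for platform services (identity provider, token service,
// tool runtime, provenance service, audit log).
async function resolveAgentIdentity(agentName: string): Promise<AgentIdentity> {
  return { agentId: `agent:${agentName}` };
}
async function mintScopedToken(actor: AgentIdentity, scope: string): Promise<ScopedToken> {
  return { tokenId: `tok-${Date.now()}`, scope };
}
async function callTool(token: ScopedToken, action: string, input: unknown): Promise<string> {
  return `result of ${action}`; // a real runtime would verify the token's scope first
}
async function stampProvenance(output: string, actor: AgentIdentity, token: ScopedToken): Promise<ProvenanceStamp> {
  return { contentHash: `len:${output.length}`, actorId: actor.agentId, tokenId: token.tokenId }; // placeholder; a real service would hash and sign the output
}
async function emitAuditEvent(event: Record<string, unknown>): Promise<void> {
  console.log("audit", JSON.stringify(event));
}

// One governed action: identity -> scoped token -> action -> provenance -> audit.
async function governedAction(agentName: string, action: string, input: unknown): Promise<string> {
  const actor = await resolveAgentIdentity(agentName);       // first-class identity
  const token = await mintScopedToken(actor, action);        // token scoped to this one action
  await emitAuditEvent({ step: "token_issued", actor: actor.agentId, token: token.tokenId });

  const output = await callTool(token, action, input);       // the action itself
  const stamp = await stampProvenance(output, actor, token); // provenance stamp on the output
  await emitAuditEvent({ step: "action_completed", actor: actor.agentId, token: token.tokenId, action, stamp });

  return output;
}
```

The design choice the sketch is meant to surface: the token is minted per action and the audit events reference both the identity and the token, so every output can be traced back to who acted and under what authority.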
For security and GRC: You cannot audit what you cannot prove. You cannot prove what you did not instrument. Instrument identity and provenance at design time, not after incidents.