Can you prove what your AI agent did?
Paste what you already have: a log, trace, webhook, signed record, or plain-English incident summary from an AI agent, API call, tool invocation, or MCP server. See what your team can observe, what another party can verify, and what is still missing.
Built for API operators, MCP hosts, platform teams, security, support, and compliance.
No signup. Runs in your browser.
Sample result from a typical log excerpt
Start from a real situation
Pick a scenario or paste your own artifact below.
Start with what you already have
Logs, traces, webhooks, signed records, and short incident summaries all work. Paste an artifact from an AI agent, API request, tool invocation, or MCP server response.
Everything stays in your browser. No outbound verification or artifact fetches from this page.
Start with any artifact you already have
Logs, traces, webhooks, signed records, and even a short incident summary are enough to begin. The point is not perfect input. The point is to see where proof breaks down.
What your logs show vs what others can verify
Local observability
- Logs and traces help your team debug.
- They explain what your systems observed and how they behaved.
Useful for internal understanding. Weak for cross-party proof.
Portable evidence
- A signed record captures the decision boundary in a form another party can verify without trusting your dashboard.
Useful for review, disputes, audits, and handoffs.
Observability helps you understand. Evidence helps you prove.
Built for AI agents, APIs, tools, and MCP servers
If an agent calls a tool, hits your API, triggers a payment, or goes through an MCP server, the same question comes up later: what happened, what was allowed, and what can another party verify without trusting your dashboard?
AI agent calls
Track what the agent asked for, what policy applied, and what happened next.
MCP servers
Keep a portable record of tool invocations and decisions without changing the developer workflow.
APIs and gateways
Make access decisions explicit and keep signed records that survive handoff.
Payments and audits
Connect authorizations, outcomes, and review-ready records without exposing internal infrastructure.
See how evidence strength changes by scenario
The same workflow can look very different depending on whether you have only logs or logs plus a signed record.
Start here if you want to see the difference before pasting your own artifact.
An AI agent calls a tool through your API or MCP server.
What most teams have
- Timestamps, tool name, response status
- Internal request ID
Another party can verify
- Nothing independently
With a signed record
- Issuer identity
- Policy that applied
- Timestamp integrity
- Portable proof
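Those four bullets map naturally onto fields in a record. As a rough sketch (field names are illustrative, not Originary's actual schema, and a keyed HMAC stands in for the asymmetric signature a real portable record would carry):

```python
import hashlib
import hmac
import json

# Illustrative fields only -- not Originary's actual schema.
record = {
    "issuer": "https://gateway.example.com",        # issuer identity
    "policy_id": "payments-v3",                     # policy that applied
    "issued_at": "2024-05-01T12:00:00+00:00",       # covered by the signature
    "action": {"tool": "create_payment", "amount_usd": 25},
    "outcome": "allowed",
}

# Canonical serialization, so signer and verifier hash identical bytes.
payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

# Stand-in signature. A real portable record uses an asymmetric signature
# (e.g. a JWS), so the verifier needs only the issuer's public key.
secret = b"issuer-signing-key"
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

signed_record = {"payload": record, "signature": signature}
```

Because the timestamp and policy identifier sit inside the signed payload, changing either after the fact invalidates the signature — which is what makes the proof portable rather than a claim on a dashboard.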
Test what breaks trust
A dashboard can say anything. Evidence has to survive tampering, disputes, and handoffs.
Portable evidence should fail loudly when key trust assumptions are broken.
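"Fail loudly" means verification is all-or-nothing: the verifier returns the payload only if the signature checks out, and raises otherwise — never a partial success. A minimal sketch, again using a keyed HMAC as a stand-in for an asymmetric signature:

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # stand-in for the issuer's key pair


def sign(record: dict) -> dict:
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "signature": sig}


def verify(signed: dict) -> dict:
    """Return the verified payload, or raise -- never a partial success."""
    payload = json.dumps(signed["payload"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        raise ValueError("signature mismatch: record was modified or forged")
    return signed["payload"]


signed = sign({"action": "refund", "outcome": "allowed"})
verify(signed)                            # untouched record: verifies
signed["payload"]["outcome"] = "denied"   # tamper with one field
# verify(signed) now raises ValueError
```

Note `hmac.compare_digest` rather than `==`: constant-time comparison is the idiomatic way to check signatures without leaking timing information.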
What your logs cannot settle
These are the questions that turn internal traces into cross-team arguments.
Was the agent authorized to take this action?
Logs show what happened. They do not prove what was allowed.
Which policy was in effect at the time?
If the policy changed between the action and the review, logs cannot tell you which version applied.
Can the other party verify this independently?
Internal logs require the other party to trust your infrastructure. Signed records do not.
Has this record been modified since it was created?
Log entries can be edited, deleted, or backdated. A signature detects any change.
Can I hand this to an auditor without giving them system access?
Logs live in your infrastructure. A signed record is a portable file anyone can verify.
Can I prove what an MCP server or tool actually returned?
Logs may show the call path. They do not give another party a portable record of the decision and result.
What changes for each team
Your team receives agent traffic you do not fully control.
Observability helps debug but does not travel across boundaries.
A signed record captures the decision boundary for later review without replaying the incident.
Originary does not replace your logs. It makes the decision portable.
Keep your gateway, auth, payments, and observability stack. Originary adds a verification layer that evaluates requests, applies policy, and returns signed records you can prove later.
Portable records on the product side. Open standard underneath.
Use the diagnostic. Then take the next step that fits.
Start with your current artifacts. Move to examples, verification, or deployment when you are ready.
No account required for the diagnostic or Agent Auditor.
For AI agents, MCP servers, APIs, and tool integrations.
Planning a rollout? Talk about enterprise deployment
Have a signed record already?
Proof Check helps you understand whether your current artifacts are enough. Agent Auditor is for opening and verifying a signed record once you already have one.
Need to inspect a raw JWS instead? Use Inspector.
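For orientation: a compact JWS is three base64url segments — header, payload, signature — joined by dots. The first two are inspectable with nothing but a standard library, though checking the signature still requires the issuer's key. A self-contained sketch with a made-up token (the `kid` and claim names are illustrative):

```python
import base64
import json


def b64url_decode(segment: str) -> bytes:
    # base64url without padding, as used in compact JWS serialization
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


# Build a made-up token so the example is self-contained.
header = {"alg": "EdDSA", "kid": "issuer-key-1"}
payload = {"policy_id": "payments-v3", "outcome": "allowed"}
jws = ".".join([
    b64url_encode(json.dumps(header).encode()),
    b64url_encode(json.dumps(payload).encode()),
    b64url_encode(b"\x00" * 64),  # placeholder signature bytes
])

# Inspecting: split on "." and decode the first two segments.
h_seg, p_seg, _sig = jws.split(".")
decoded_header = json.loads(b64url_decode(h_seg))
decoded_payload = json.loads(b64url_decode(p_seg))
print(decoded_header["alg"])          # prints "EdDSA"
print(decoded_payload["policy_id"])   # prints "payments-v3"
```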
Frequently asked
How is this different from logs and traces?
Logs and traces help your team debug internally. They do not give another party independent, portable proof of what happened. Signed records do.
Does this work with MCP servers?
Yes. The same gap shows up in MCP servers as in APIs and tool calls: teams can see what happened internally, but later review still depends on local systems. Originary adds signed, portable records that can be verified without those systems.
Can I use this with AI agent tool calls?
Yes. When an AI agent invokes a tool through your API or MCP server, Originary can issue a signed record of the decision at the point of action.
What does another party actually verify?
They verify who issued the record, whether the contents have been modified, what policy was in effect, and when the action occurred — all using the issuer's public key, with no dependency on your systems.
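Those checks can be sketched as a single function: confirm the issuer is known, confirm the bytes are unmodified, then surface the policy and timestamp as verified claims. Field names and the keyed-HMAC lookup are illustrative stand-ins; a real verifier would check an asymmetric signature against the issuer's published public key.

```python
import hashlib
import hmac
import json
from datetime import datetime

# Illustrative trust store: key id -> verification key.
TRUSTED_ISSUER_KEYS = {"issuer-key-1": b"issuer-signing-key"}


def verify_record(signed: dict) -> dict:
    key = TRUSTED_ISSUER_KEYS.get(signed["key_id"])
    if key is None:
        raise ValueError("unknown issuer")          # who issued the record?
    payload = json.dumps(signed["payload"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        raise ValueError("record modified")         # changed since issuance?
    claims = signed["payload"]
    issued_at = datetime.fromisoformat(claims["issued_at"])  # when it happened
    return {
        "issuer_key": signed["key_id"],
        "policy": claims["policy_id"],              # what policy applied
        "issued_at": issued_at,
    }


# Usage: build a valid record, then extract its verified claims.
payload = {"policy_id": "payments-v3",
           "issued_at": "2024-05-01T12:00:00+00:00",
           "action": "create_payment"}
payload_bytes = json.dumps(payload, sort_keys=True,
                           separators=(",", ":")).encode()
signed = {
    "key_id": "issuer-key-1",
    "payload": payload,
    "signature": hmac.new(b"issuer-signing-key", payload_bytes,
                          hashlib.sha256).hexdigest(),
}
claims = verify_record(signed)
print(claims["policy"])  # prints "payments-v3"
```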
Do I need to replace my gateway or observability stack?
No. Originary works alongside your existing auth, payments, observability, and agent runtimes. It adds signed records that travel outside your system.