ThirdKey publishes the trust primitives we ship as open specifications and peer-reviewable preprints. Every cryptographic boundary, every policy phase, every guarantee — written down so security teams can audit it before it runs in production.
Open access on Zenodo. Each paper is a self-contained specification a security engineer can implement and audit.
A compile-time approach to enforcing policy gates in AI agent loops via typestate encoding. Evaluated across nine hosted LLM providers: 263 forbidden tool-call attempts were refused without execution, at 30–95µs per check. Addresses the time-of-check-to-time-of-use (TOCTOU) problem with affine ownership semantics.
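The typestate idea can be sketched in a few lines of Rust. This is an illustrative toy, not ThirdKey's actual API: the names (`Ungated`, `Gated`, `ToolCall`, `approve`, `execute`) are assumptions for the sketch. The point it demonstrates is that `execute` is only defined on the `Gated` state, and `approve` consumes the ungated value (affine move), so a call can neither run unapproved nor be re-checked-then-swapped.

```rust
use std::marker::PhantomData;

// Zero-sized state markers: the gate decision lives in the type, not a runtime flag.
struct Ungated;
struct Gated;

struct ToolCall<State> {
    tool: String,
    _state: PhantomData<State>,
}

impl ToolCall<Ungated> {
    fn new(tool: &str) -> Self {
        ToolCall { tool: tool.to_string(), _state: PhantomData }
    }

    // Takes `self` by value: the ungated call is consumed (affine semantics),
    // so it cannot be mutated or reused after the check — closing the TOCTOU gap.
    fn approve(self, allow_list: &[&str]) -> Result<ToolCall<Gated>, String> {
        if allow_list.contains(&self.tool.as_str()) {
            Ok(ToolCall { tool: self.tool, _state: PhantomData })
        } else {
            Err(format!("refused: {}", self.tool))
        }
    }
}

impl ToolCall<Gated> {
    // Only a Gated call has execute(); calling it on Ungated is a compile error.
    fn execute(self) -> String {
        format!("executed: {}", self.tool)
    }
}

fn main() {
    let allow = ["read_file"];
    let ok = ToolCall::new("read_file").approve(&allow).expect("allowed");
    println!("{}", ok.execute());

    let refused = ToolCall::new("rm_rf").approve(&allow);
    assert!(refused.is_err());
    // ToolCall::new("rm_rf").execute(); // would not compile: no execute() on Ungated
}
```

The refusal path is a `Result` at runtime, but the bypass path (executing without gating) is ruled out at compile time — that is the distinction the paper's evaluation measures.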
An open specification for zero-trust execution of AI agents. Defines declarative tool contracts with allow-list enforcement, compile-time verification of the Observe-Reason-Gate-Act loop, and structural separation of the policy gate from language-model influence across five architectural layers.
Working notes from research.thirdkey.ai — what we’re building, what we got wrong, what changed.
Prompt-based safety degrades under operational load — a kind of context rot that parallels human goal neglect. The fix is structural: policy evaluation and typed contracts, not prose rules.
ToolClad replaces freeform shell generation with a declarative manifest of typed parameters and templates — safer agent execution through allow-list validation, not deny-list filtering.
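A minimal sketch of what allow-list validation against a declarative manifest can look like, in Rust. The names (`ManifestEntry`, `validate`, the example tools) are hypothetical, not ToolClad's actual schema; the sketch only shows the allow-list stance: a call runs iff its tool and every parameter match a declared entry, and anything undeclared is refused by construction rather than pattern-filtered.

```rust
use std::collections::HashMap;

// One declared tool: its name plus the only parameters it may take.
struct ManifestEntry {
    tool: &'static str,
    allowed_params: &'static [&'static str],
}

// Allow-list check: unknown tools and undeclared parameters are both refusals.
fn validate(
    manifest: &[ManifestEntry],
    tool: &str,
    params: &HashMap<&str, &str>,
) -> Result<(), String> {
    let entry = manifest
        .iter()
        .find(|e| e.tool == tool)
        .ok_or_else(|| format!("refused: tool '{}' not in manifest", tool))?;
    for key in params.keys() {
        if !entry.allowed_params.contains(key) {
            return Err(format!("refused: parameter '{}' not declared", key));
        }
    }
    Ok(())
}

fn main() {
    let manifest = [ManifestEntry { tool: "list_dir", allowed_params: &["path"] }];

    let mut params = HashMap::new();
    params.insert("path", "/tmp");
    assert!(validate(&manifest, "list_dir", &params).is_ok());

    // An extra, undeclared parameter is refused — no deny-list needed.
    params.insert("recursive", "true");
    assert!(validate(&manifest, "list_dir", &params).is_err());

    // An undeclared tool is refused outright.
    assert!(validate(&manifest, "rm", &HashMap::new()).is_err());
}
```

Deny-list filtering would instead enumerate bad strings and pass everything else; here the default answer is "no", which is the property the post argues for.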
A Rust agent runtime that enforces policy evaluation as a mandatory compile-time phase — combining typestate patterns, durable journaling, and cryptographic auditing so agents can’t bypass authorization.
A guarantee that can’t be written down isn’t one.