About
I research how to bring formal methods into the world of AI-generated code. The long-term vision is what people call veri-coding: a future where every function an LLM writes comes with a machine-checked proof of its correctness. We're not there yet, but that's the direction.
As a research engineer at the Beneficial AI Foundation, I work on formal verification of Rust cryptographic implementations using Lean 4. The focus right now is on writing proofs that check the correctness of real crypto code — making sure the software that secures our systems actually does what it claims.
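To make that concrete, here is a toy Lean 4 sketch of the pattern these proofs follow: a function shipped together with a machine-checked proof that it meets its specification. The function and spec below are made-up illustrations, not code from any real cryptographic library.

```lean
-- Toy example of "code plus proof": if this file compiles,
-- Lean has checked that `double` satisfies its specification.
-- `double` is a hypothetical stand-in, not real crypto code.

def double (n : Nat) : Nat :=
  n + n

-- Specification: `double n` really is twice `n`.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

Real cryptographic functions and their specifications are far hairier, but the shape is the same: the theorem does not compile until the code genuinely does what it claims.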
I'm also an AI engineer at a stealth startup, where we're using applied category theory as a framework for agent orchestration. The idea is to lean on type theory — and eventually dependent type theory — to give structure and guarantees to multi-agent systems. We're exploring neuro-symbolic AI approaches to get there.
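As a rough illustration of what "types as guarantees" can buy you (a minimal toy sketch of my own, with made-up names like `Agent`, `summarize`, and `review`, not our actual architecture): if agents are modelled as typed arrows, the compiler simply refuses to assemble a pipeline whose steps don't fit together.

```lean
-- Toy sketch: agents as typed arrows. All names here are hypothetical.

structure Agent (α β : Type) where
  run : α → IO β

-- Composition only type-checks when the output type of the first agent
-- matches the input type of the second.
def Agent.comp {α β γ : Type} (f : Agent α β) (g : Agent β γ) : Agent α γ where
  run a := do
    let b ← f.run a
    g.run b

def summarize : Agent String String where
  run doc := pure s!"summary: {doc}"

def review : Agent String Bool where
  run s := pure (!s.isEmpty)

-- A well-typed pipeline; composing these in the wrong order is a compile error.
def pipeline : Agent String Bool :=
  summarize.comp review
```

Dependent types would let the same machinery track richer invariants than plain input and output types, which is the direction the "eventually dependent type theory" part points at.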
My background is in pure mathematics — I have a PhD in Algebraic Geometry. That training in rigorous abstract reasoning turns out to be useful when you're trying to make AI systems you can actually trust.