From ItaLean to Vericoding: Why I'm Now a Lean4 Fanatic
Back in November 2025, I found myself in Bologna for ItaLean. I'd been working with Lean4 for only a few months by then. My first real contribution was a small mathlib PR I'd started in August and finally got approved in October. Was it LLM-assisted? Absolutely. But having a math background meant I knew how to sharpen my reasoning and leverage AI properly rather than just blindly accepting what it spat out.
The conference changed something for me. Talks from Harmonic, DeepMind, and the Lean FRO showed me Lean4 wasn't just another theorem prover collecting dust in academic corners. It sits at this weird, exciting intersection of interactive theorem proving and actual programming language design. The kind of thing that makes you rethink what's possible.
But the talk that really got me was Leo de Moura's historical take on Lean's genesis. There's something special about hearing the creator of a programming language talk about their "creature." De Moura walked through the early design decisions, the problems they were trying to solve, the moments where they almost gave up or went a different direction. You could see the whole evolution laid out, and suddenly Lean4 made sense not just as a tool but as the result of years of careful thinking about what formal verification needs to look like.
I left Bologna somewhat obsessed. My colleagues joke that I've become a living meme. I genuinely wish everything were coded in Lean4. I know how that sounds. But once you see what's possible when code and proofs live together, it's hard to go back to the old way of doing things.
Making the Jump
I moved from academia to industry between October and November. The timing lined up with joining BAIF, where people are doing serious work on formal methods for software verification. Not toy examples. Real production systems. Right now we're verifying encryption protocols used in actual deployed code. Think Signal-level stuff. The kind of code where bugs don't just crash your app but compromise people's privacy and security.
This matters more than most software work. When you're dealing with cryptographic implementations, there's no room for "it works on my machine" or "we'll patch it later." The stakes are too high. Formal verification gives you mathematical certainty that your code does what you claim it does.
The Vericoding Dream (and Why It's Hard)
Here's where my mind goes late at night: what if we had machines that generate code together with proofs of correctness? Vericoding. Code that comes with a mathematical guarantee it does what you want. No more "trust me, I tested it" or "the type system catches most bugs." Actual proofs.
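A tiny Lean4 sketch of what that looks like (a hypothetical `double` function for illustration, not anything from real verification work): the program and the proof that it meets its spec live in the same file, and the compiler checks both. If it compiles, the guarantee holds.

```lean
-- Hypothetical illustration: code and proof together.
-- `double` is the program; `double_even` is a machine-checked guarantee.
def double (n : Nat) : Nat := n + n

-- Spec: the output is always even. No test suite, no "trust me" —
-- the theorem is either proven or the file doesn't build.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```

Trivial, obviously. The dream is this same shape at the scale of an encryption protocol.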
We're not there yet. Not even close. The problems aren't just technical challenges you can engineer your way around. They're deeper than that.
Start with specs. What counts as a correct specification for a program? Who decides what "correct" means in the first place? Natural language is how humans think about what code should do. But natural language is messy and ambiguous and full of implied context. Is it logical enough to be formalized? How do you capture all the nuances in what someone means when they describe a feature and turn that into something you can prove things about?
These questions sit at the boundary of epistemology and engineering. What does it mean to know something is correct? How do we bridge the gap between human intent and machine-checkable proof? I don't have answers yet. But working on these problems feels right. It feels like the kind of problem worth spending years on.
What I'm Building
My focus now is on making formal verification more practical. That means integrating AI tools into the verification process in different ways. Automation pipelines that lower the barrier for teams who want to adopt formal methods. CI/CD systems that help people working on software verification make their tests and checks production-ready. Better tooling means better code bases. It means formal methods stop being this niche thing only PhD researchers touch.
Why This Matters
Three months ago I was in academia. Now I'm writing proofs for encryption code and building automation tools. The transition happened fast, but it doesn't feel jarring. It feels like the natural next step.
Formal verification used to be slow, expensive, and reserved for life-critical systems like airplane software or medical devices. AI is changing that equation. Not because AI can write perfect proofs on its own (it can't), but because it can handle the tedious parts while humans focus on the interesting problems. The math background helps me see where AI is useful and where it falls apart. The engineering side helps me ship tools people will actually use.
We're at the start of something. Vericoding might be years away. But the pieces are coming together. Better languages like Lean4. Better AI assistants. More people realizing that "just test it really well" isn't good enough for critical systems. More companies willing to invest in verification.
I went to a conference in Bologna and came back a fanatic. Worse things have happened.