Hello Atomic, Goodbye Git: Rebuilding Software Collaboration for the AI Era with Lee Faus
This week I sat down with Lee Faus, Founder and CEO of Atomic, just after he closed a $2.5M pre-seed round backed by Yoni Rechtman of Slow Ventures, Jean Sini of Irregular Expressions, and, of course, Vermilion Cliffs Ventures.
Lee has more than two decades of experience building and scaling developer infrastructure at GitHub, GitLab, and Red Hat. He sees the next wave of AI adoption forcing a reset at the foundation, not another layer of tools added at the edges. Atomic doesn’t bolt AI onto an existing workflow; it rethinks version control for a world where humans and machines both produce code at scale.
Listen in not just for the technical depth, but for Lee’s articulation of where today’s systems are already failing, why enterprises are paying attention, and what it means for how engineers work over the next decade.
Full interview here with highlights below.
AI turned developer productivity into a weapon, but the process never changed
Most teams stack copilots and models onto workflows designed for slow, human-only contribution. That mismatch hits hardest at validation, where Git starts to strain under scale it was never designed to handle.
“Sure, developer productivity is awesome. It’s only one measurement when we talk about building software. So that whole workflow has to change.
If I’ve got a thousand engineers and each one is solving three problems a day, that’s three thousand branches. Most pull requests are open for three days, so now you’ve got nine thousand pull requests sitting there waiting to be approved.
Now give every engineer a hundred agents. You end up stacking all of those changes on top of each other, and Git doesn’t have a good way to reason about that.
That’s where Git breaks. Not because it’s bad software, but because it was never designed for this world.”
This isn’t a future problem. Large monorepos are already breaking under human scale. AI just makes the failure mode impossible to ignore.
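To put numbers on that back-of-the-envelope math, here’s a trivial sketch in Python. The headcount, throughput, and dwell-time figures are Lee’s illustrative numbers, not measurements from any real organization.

```python
# Lee's illustrative figures: review queues grow multiplicatively with
# headcount, change throughput, and how long a PR sits open.
engineers = 1_000
changes_per_engineer_per_day = 3
pr_dwell_days = 3

branches_per_day = engineers * changes_per_engineer_per_day  # 3,000 branches
open_prs = branches_per_day * pr_dwell_days                  # 9,000 PRs waiting

# Now hand every engineer a hundred agents producing changes in parallel.
agents_per_engineer = 100
open_prs_with_agents = open_prs * agents_per_engineer        # 900,000 PRs waiting

print(f"{open_prs:,} open PRs today -> {open_prs_with_agents:,} with agents")
```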
We don’t need AI for Git, we need a different way of storing change
Git’s core abstraction assumes changes arrive slowly and are reviewed asynchronously by humans. That model breaks once humans and machines are producing code continuously and in parallel. Fixing that isn’t about adding AI to Git, it requires rethinking how change itself is represented.
“If we think about how Git is architected - I’m going to simplify this a lot - Git is really about comparing snapshots. You’re looking at one picture and another picture and trying to see the differences. But what happens when you’ve got a thousand of those pictures layered on top of each other? It becomes really hard to understand what changed and why.
Now think about LEGO blocks. If I have a red block and a green block, they know how they relate to each other. When I insert a new block in between them, it knows exactly where it belongs. This is Atomic.
Because of that, we always have a clean state of the repository, and changes commute automatically.”
That representation enables large-scale agentic development without pushing humans or machines into serialized workflows. It also embeds provenance directly into the system instead of tacking it on later.
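To give a feel for what “changes commute” means, here’s a toy sketch of the patch-based idea behind Pijul, which Atomic extends: lines carry stable identities, and a patch anchors to an identity rather than to a line number. This is purely illustrative and says nothing about Atomic’s actual data structures.

```python
import uuid

def new_id():
    return uuid.uuid4().hex[:8]

def apply_patch(doc, patch):
    """doc is a list of (line_id, text); a patch says 'insert this new
    line immediately after the line with this identity.'"""
    anchor_id, line_id, text = patch
    out = []
    for entry in doc:
        out.append(entry)
        if entry[0] == anchor_id:
            out.append((line_id, text))
    return out

# Initial state: a red and a green block, as in Lee's LEGO analogy.
red, green = new_id(), new_id()
doc = [(red, "red block"), (green, "green block")]

# Two patches made in parallel, anchored to different blocks.
p1 = (red, new_id(), "inserted after red")
p2 = (green, new_id(), "inserted after green")

# Apply them in either order: the result is identical, so they commute.
a = apply_patch(apply_patch(doc, p1), p2)
b = apply_patch(apply_patch(doc, p2), p1)
assert a == b
```

Because each insertion names the block it attaches to, independent patches land in the same state regardless of application order, which is what lets thousands of parallel changes merge without a serialized queue.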
Enterprises aren’t worried about future scale, the limits already exist
Enterprise teams aren’t speculating about what AI might break someday. They’re already hitting limits around repository size, validation speed, and accountability. AI doesn’t create those problems, but it makes them impossible to ignore.
“Could you actually go back and look at that piece of code and say, was that written by a human? Was it written by AI? If it was written by AI, what was the prompt that generated that code?
If it ended up going through a security process, who was the reviewer? Did AI review the code or did a human review the code? All of those things are now becoming critical to people adopting AI.”
Repository size, I/O bottlenecks, and centralized CI pipelines are already creating friction, and AI accelerates the volume of change to the point where those bottlenecks become operational risks. AI doesn’t create these requirements; it makes them non-negotiable.
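To make those accountability questions concrete, here’s a hypothetical shape for a provenance record that travels with a change. The field names are my illustration of the questions Lee raises, not Atomic’s schema.

```python
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Optional

# Hypothetical provenance record: the point is that authorship, prompts,
# and the review trail travel with the change itself.
@dataclass
class ChangeProvenance:
    change_id: str
    author_kind: str                     # "human" or "agent"
    author: str                          # engineer handle or model/agent name
    prompt: Optional[str] = None         # the generating prompt, if an agent wrote it
    reviewer: Optional[str] = None
    reviewer_kind: Optional[str] = None  # "human" or "agent"
    created_at: float = field(default_factory=time.time)

record = ChangeProvenance(
    change_id="c0ffee42",
    author_kind="agent",
    author="codegen-agent",
    prompt="Add retry logic to the payments client",
    reviewer="lfaus",
    reviewer_kind="human",
)
print(json.dumps(asdict(record), indent=2))
```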
In an AI world, correctness and trust can’t be proprietary
Atomic’s commitment to open source is about credibility, not ideology. Open source gives developers and enterprises the ability to inspect and verify the systems they’re betting on. In the next generation of developer infrastructure, trust isn’t a feature. It’s the baseline.
“Do you open source it? If you open source it, what license do you want to use?
Because we’re extending Pijul, we’re sticking to their license. We can always append the copyright to say we’re using their code plus our code, so there’s an extension of the copyright.
But the dataset needs to hold the rigor for the math to math. If we were doing this closed source, we couldn’t get any sort of independent validation that it’s actually doing what we want it to do. So going down the open source path was important to me.
I wanted to make sure we get open source people to come in, play with it, help us find where the math maybe doesn’t math, but at the same time come in and say, what other innovations do you want to put in?”
Final Thoughts
The through-line here is that AI doesn’t just add pressure to existing systems. It exposes where those systems were already fragile.
Git reshaped software development by enabling distributed human collaboration. Atomic bets that the next shift comes from enabling distributed human and machine collaboration, with correctness, provenance, and trust built in from the start.
For founders, operators, and enterprise teams adopting AI, this shift isn’t optional. The only question left is whether the foundation evolves fast enough to keep up with how teams already write software.