While Distracted by AI: The Quiet Rise of Local-First Software

March 04, 2026

Every boardroom conversation, every VC pitch deck, every tech conference keynote. It's all AI, all the time. And fair enough. Generative AI is genuinely transformative.

But while the entire industry stares at the same shiny object, something else is taking shape. A small, serious community of distributed systems researchers and engineers, some of whom literally wrote the book on building data-intensive applications, have been quietly assembling the building blocks for a fundamental shift in how software works.

No hype cycle. No billion-dollar funding rounds. Just working code.

The result is a new category of software that makes the cloud optional, not essential. Software where your data is yours, your tools work offline, and nobody else's outage becomes your problem.

What the Cloud Giveth

Let's be fair. The cloud won for good reasons.

Centralized infrastructure gave us things that were genuinely hard to do otherwise. Single sign-on and role-based access control meant one place to manage who can see what. Consistent data meant everyone on your team saw the same version of the document, the same project board, the same customer record. And in the last decade, cloud-mediated real-time collaboration gave us Figma, Google Docs, and Linear: tools where multiple people work on the same artifact simultaneously, and it feels like magic.

These are real achievements. The cloud model earned its dominance.

But it came with trade-offs we've been slow to name:

We traded away agency. Your work product (your documents, your designs, your conversations) lives on someone else's servers, governed by someone else's terms of service. You don't own it in any meaningful sense. You rent access to it. And increasingly, your data isn't just hosted. It's the product. Used to train AI models, refine ad targeting, or sold to third parties, often buried in terms of service updates that nobody reads.

We traded away permanence. Services shut down. Companies pivot, get acquired, or simply decide a product isn't worth maintaining anymore. Your data survives only as long as the company's "incredible journey". When that journey ends, your data often goes with it. Or you get a grace period and a zip file of exports in a format nobody else reads.

We traded away sovereignty. And this one is getting harder to ignore. All that data sitting on someone else's servers also sits in someone else's jurisdiction. Governments are asserting new powers over data held by cloud providers. Who can access your organization's data, under what legal framework, and what they can compel a company to do with it are no longer abstract compliance questions. They're live strategic risks. When your most sensitive work product sits in a handful of cloud providers concentrated in a handful of jurisdictions, the question "should someone else control access to my work?" stops being philosophical and starts being urgent.

We traded away resilience. When us-east-1 goes down, entire companies grind to a halt. Your apps won't open and, in more extreme cases, your smart thermostat can cook you in your sleep. The document, design, or presentation you laboured over is suddenly inaccessible. Not because anything is wrong with your laptop or your network, but because a downstream vendor's architectural decision put a data centre thousands of miles away in the critical path of your workday.

Contrast that with Git, which software engineers use every day. Git doesn't break when the network does; it works perfectly offline. You can commit, branch, merge, and review history without a network connection because a full copy of the repository lives on your device. That's not a quirk of Git's design. It's the whole point. It's an architectural decision, and that decision has a name: local-first.

The principle is simple: the application works locally, on your device, with your data. It synchronizes when it can, but it doesn't depend on a connection to function. Your data is yours first, shared second. The term was coined in a seminal 2019 paper from the Ink & Switch research lab, co-authored by Martin Kleppmann (yes, the "Designing Data-Intensive Applications" author). The paper laid out seven ideals for software that respects user ownership and agency, from instant responsiveness and offline capability to long-term data preservation. It became a rallying point. A community formed around it. And the building blocks it called for are now arriving.
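The principle reduces to a simple inversion of control, which this toy sketch illustrates (hypothetical names, not any particular library's API): every write lands in local state immediately, and a queue drains to peers whenever a connection happens to exist. The network is an optimization, not a dependency.

```python
class LocalFirstStore:
    """Toy local-first store: writes succeed locally at once;
    syncing with peers is opportunistic and never blocks the user."""

    def __init__(self):
        self.state = {}    # the user's data, always on-device
        self.pending = []  # writes not yet shared with peers

    def write(self, key, value):
        self.state[key] = value          # works offline, instantly
        self.pending.append((key, value))

    def sync(self, peer):
        """Called whenever a connection is available; never required."""
        for key, value in self.pending:
            peer.receive(key, value)
        self.pending.clear()

    def receive(self, key, value):
        self.state[key] = value


alice, bob = LocalFirstStore(), LocalFirstStore()
alice.write("title", "Draft")  # offline: still succeeds immediately
alice.sync(bob)                # later, online: changes flow to a peer
```

Note what is absent: there is no server object anywhere in the sketch, and nothing fails when `sync` never runs. (Resolving concurrent edits from both sides is the job of CRDTs, discussed below.)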

Git proved this model works for code. It's about to be applied to everything else.

Why Now

If you've been around technology long enough, you've seen the peer-to-peer pitch before. And the decentralized identity pitch. And the "users should own their data" pitch. They all sounded good. None of them shipped products people actually wanted to use.

So why should this time be any different?

Two reasons. First, the trade-offs described above used to be tolerable. Outages were rare, platforms were trustworthy enough, and nobody was asking hard questions about data jurisdiction. That's no longer true. The motivation for alternatives has gone from ideological to practical.

Second, the technology has finally caught up.

AI spent decades as a punchline, "always ten years away." The underlying mathematics didn't change. What changed was that the infrastructure, the data, and the compute finally caught up to make it practical. When it did, it went from academic curiosity to the fastest-spreading technology in human history.

The same pattern is playing out here. Peer-to-peer networking, CRDTs, and public key cryptography are old ideas. What's new is that they finally work well enough, together, to build real products, with real outcomes, for real people. Not one breakthrough, but a convergence of independently developed pieces arriving at the same time. Here are a few of the key ones:

Conflict-free data synchronization. This is the hard computer science that makes real-time collaboration possible. I remember talking with Google engineers years ago about the Operational Transforms powering the now-defunct Google Wave. It worked like magic, multiple people editing the same document in real time, but it required a central server. That was the trade-off: real-time collaboration meant centralization. CRDTs remove that trade-off entirely. Conflict-Free Replicated Data Types allow multiple people to edit the same data independently, even offline, and merge their changes automatically peer-to-peer when they reconnect. No central server. Projects like Automerge have turned the academic research into practical, usable libraries. The same magic as Google Wave, except it works without Google in the middle.
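To make the merge idea concrete, here is the simplest state-based CRDT, a grow-only counter. This is a classroom toy, not Automerge's actual data model, but it shows the essential property: each replica tracks per-replica counts, and merging takes the element-wise maximum, so merge is commutative, associative, and idempotent. Replicas converge no matter how many times, or in what order, they exchange state.

```python
class GCounter:
    """Grow-only counter CRDT. Merge is element-wise max, which is
    commutative, associative, and idempotent -- so replicas that have
    seen the same updates always converge to the same value."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> increments seen from that replica

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other):
        # Take the max per replica; merging twice, or in either
        # direction first, changes nothing.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())


# Two replicas edit independently (even offline)...
a, b = GCounter("alice"), GCounter("bob")
a.increment(); a.increment()
b.increment()

# ...then merge peer-to-peer when they reconnect, and agree:
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 3
```

Real libraries like Automerge apply the same discipline to much richer structures (text, lists, maps), but the guarantee is the same: no central server is needed to arbitrate, because merge order cannot change the outcome.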

Reliable peer-to-peer networking. Why should your message to a colleague sitting in the next room route through a data centre on another continent? Projects like Iroh have solved the hard problems of P2P connectivity (working through firewalls and NATs, falling back to relays when a direct connection fails, encrypting everything end-to-end) and packaged the solution as a library that developers can drop into an application. P2P networking that just works, without needing a PhD in network protocols.

Decentralized identity and access control. This is the piece that killed previous attempts at P2P software. Without a central server, how do you identify data, and how do you identify people? Content addressing solves the first problem: instead of identifying data by where it's stored (a URL, a file path, a database row), you identify it by what it is, a cryptographic hash of its contents. The same document has the same address whether it's on your laptop, a colleague's phone, or a backup server in another country. For identifying people and controlling access, decentralized identifiers (DIDs) and hardware-backed key pairs (passkeys) handle the "who are you" problem. And for the "who has access to what" problem, new protocols for secure group messaging (MLS) and group key management (Automerge's forthcoming KeyHive project) are making it possible to handle permissions across a group without a central authority. When combined with non-extractable keys stored in secure hardware, these solve the usability and security problems that made earlier approaches impractical.
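Content addressing is easy to demonstrate with nothing but a standard hash function. The address is derived from the bytes themselves, so it is the same everywhere the bytes exist, and any tampering changes it. A minimal sketch using SHA-256:

```python
import hashlib


def content_address(data: bytes) -> str:
    """Identify data by WHAT it is: a cryptographic hash of its
    contents, rather than by WHERE it happens to be stored."""
    return hashlib.sha256(data).hexdigest()


doc = b"Meeting notes, 2026-03-04"

# The same bytes get the same address on a laptop, a phone, or a
# backup server in another country -- there is no "where" in the id.
addr_on_laptop = content_address(doc)
addr_on_phone = content_address(doc)
assert addr_on_laptop == addr_on_phone

# Any change to the content yields a different address, so the
# address doubles as an integrity check on whatever you fetch.
assert content_address(b"Meeting notes, edited") != addr_on_laptop
```

Systems like Git and IPFS are built on exactly this property: a content address is simultaneously a name, a lookup key, and a proof that what you received is what you asked for.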

A Blue Ocean

But here's where I get excited. Forget what's wrong with the cloud for a moment. Think about what you can build when the building blocks above are available:

New product categories. Applications that work identically whether you're online, offline, on a plane, or in a country with unreliable internet. Real-time collaboration without a central server mediating every keystroke. Software that survives the vendor shutting down, because the data was never theirs to begin with. It was on your device the entire time.

New economics. The cloud model has a structural cost problem: your infrastructure bill scales with your users. Every API call, every connection, every byte stored, it all runs through servers you pay for. In a local-first model, the user's device does the heavy lifting. Cloud becomes a convenience for backup and relay, not a cost centre that grows linearly with your customer base. For a startup, this is a meaningful structural advantage.

A new trust model. Users own their data. Sharing is explicit, not default. The application is a lens on your data, not a landlord of it. You can switch applications without losing your life's work, because the data layer isn't coupled to the application layer. In a climate where data jurisdiction and platform trust are live political questions, this isn't just a technical nicety. It's a product advantage. It's something you can tell your customers that your cloud-dependent competitor cannot.

Brass Tacks

I wanted to see if the theory held up. So I built a Slack clone. Channels, messages, multiple participants (including AI agents!), but with no server in the middle...

Rob Elves: "My multi-user + AI agents p2p collaboration app is starting to take shape! Automerge and Iroh are a joy to work with. #localfirst #cloudless #rust #WIP #learnbybuilding"

Demo: Stormr progress demo - Multi-user + AI Agent Collaboration (YouTube): https://youtu.be/JSqCABm8UQU

Here's what surprised me: it just worked. Okay, there may have been a few false starts on my part, but this tech seriously works. Peers found each other and connected directly. Messages synced. I could take a device offline, keep writing messages, bring it back online, and everything merged seamlessly. When I showed it to a few people, they used it, and then asked "ok, but what's the catch?" There wasn't one. It just felt like a normal chat application, attachments and all. They almost missed the point, because there was nothing broken or weird to point at. Which is exactly...the point.

The thing that struck me most wasn't the end result. It was what I didn't have to build. I didn't write a networking stack. I didn't implement a sync protocol. I didn't build an authentication system from scratch. I assembled existing building blocks. Libraries from some of the very best minds in distributed systems and the cryptography space.

We have the technology. Now we need the builders, the apps.

Cloud Second

While the tech industry fights over AI model margins and GPU allocations, the assumptions underneath cloud software, assumptions that have been load-bearing for two decades, are being quietly dismantled. The assumption that you need a central server to collaborate. That someone else should hold your data. That an outage five thousand miles away should stop your workday. That you should lose access to your own work when a company decides to pivot. One by one, these assumptions are becoming optional.

This isn't anti-cloud. The cloud still has a role. For heavy compute, for global relay infrastructure, for backup and archival, cloud services remain valuable. But for a growing class of applications, they move from the centre of the architecture to the edge. From the landlord to a utility. From first to...cloud second.

The local-first stack is ready. The political and societal tailwinds are real and accelerating. The community is small but serious, and the code is open. The question now isn't whether this shift is possible. It's what you're going to build to capitalize on it.


My appreciation goes to for his stewardship of the LoFi community and his time to review a draft of this article.

(Also published on LinkedIn here)


lofi
local-first
p2p