Show HN: YAOS – A 1-click deploy, real-time sync engine for Obsidian
YAOS is a local-first, real-time sync engine for Obsidian that uses Yjs CRDTs and Cloudflare Durable Objects to provide 1-click deployment with zero-cost hosting. The backend uses a chunked checkpoint + delta-journal MVCC storage engine to handle real-time text synchronization and offline scenarios.
I'm a heavy Obsidian user.
I recently got tired of the two usual sync tradeoffs:
1. File-based sync (iCloud/Dropbox/Syncthing) that leaves you waiting for changes to propagate, or hands you a "conflicted copy."
2. Self-hosted setups (like CouchDB) that require babysitting VMs and dockerized databases just to sync markdown.
So I built YAOS: a local-first, real-time sync engine for Obsidian.
Self-hosting OSS should have better UX.
You can deploy the backend to your own Cloudflare account in one click. It fits comfortably in Cloudflare's free tier ($0/month for typical personal use) and requires no terminal, no SSH, and no env files.
You can try it out right now: https://github.com/kavinsood/yaos
How it works under the hood:
- Text sync uses Yjs CRDTs. It syncs real-time keystrokes and cursors rather than treating the vault as a pile of files to push around later.
- Each vault maps to a Cloudflare Durable Object, giving you a low-latency, single-threaded coordinator at the edge.
- The backend uses a chunked checkpoint + delta-journal MVCC storage engine on top of the DO's SQLite storage.
- Attachments sync separately via R2 (optional; text sync works fine without it).
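To make the checkpoint + delta-journal idea concrete, here's a minimal in-memory sketch of the pattern (names and structure are illustrative, not YAOS's actual code): incremental updates append to a journal, reads replay the journal on top of the last checkpoint, and the journal compacts into a new checkpoint once it grows past a threshold.

```typescript
// Illustrative checkpoint + delta-journal store. In YAOS the deltas
// would be Yjs update payloads persisted to the Durable Object's
// SQLite storage; here everything is in memory for clarity.
type Delta = Uint8Array;

class DeltaJournalStore {
  private checkpoint: Delta[] = []; // compacted history
  private journal: Delta[] = [];    // deltas since last checkpoint

  constructor(private compactAfter = 100) {}

  // Append one incremental update; compact when the journal gets long.
  append(update: Delta): void {
    this.journal.push(update);
    if (this.journal.length >= this.compactAfter) this.compact();
  }

  // Full state = checkpoint history followed by the journal tail.
  load(): Delta[] {
    return [...this.checkpoint, ...this.journal];
  }

  // Fold the journal into the checkpoint. With real Yjs you would
  // merge payloads (e.g. via Y.mergeUpdates) rather than concatenate.
  private compact(): void {
    this.checkpoint = [...this.checkpoint, ...this.journal];
    this.journal = [];
  }
}
```

The payoff of this shape is that the hot path (append) is a cheap journal write, while the expensive compaction happens rarely and off the critical path.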
The hardest part of this project was bridging Obsidian's synchronous UI and noisy OS file-watchers with the in-memory CRDT graph. I had to build a trailing-edge snapshot drain that coalesces rapid-fire IO bursts (like running a find-and-replace-all) into atomic CRDT transactions to prevent infinite write-loops.
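The trailing-edge drain is roughly this shape (a simplified sketch, assumed from the description above, not YAOS's exact code): every watcher event resets a quiet-window timer, so a burst of events produces exactly one flush; with Yjs, that flush would be wrapped in a single `doc.transact(...)` so observers fire once.

```typescript
// Trailing-edge drain: coalesces rapid-fire events into one batch.
// A find-and-replace-all that fires hundreds of file-watcher events
// lands here as a single flush instead of hundreds of CRDT writes.
class SnapshotDrain<T> {
  private pending: T[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private flush: (batch: T[]) => void, // e.g. one Y.Doc transaction
    private quietMs = 50,
  ) {}

  push(event: T): void {
    this.pending.push(event);
    // Trailing edge: each new event resets the timer, so we only
    // flush after the burst has gone quiet for `quietMs`.
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => {
      const batch = this.pending;
      this.pending = [];
      this.timer = null;
      this.flush(batch);
    }, this.quietMs);
  }
}
```

Because the flush is atomic, downstream observers see one change, which is what breaks the watcher-writes-file, file-triggers-watcher loop.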
The current design keeps a monolithic CRDT per vault. This is great for normal personal notes, but has a hard memory ceiling (~50MB of raw text). I chose this tradeoff because I cared more about fast, boringly reliable real-time ergonomics than unbounded enterprise scale.
I also wrote up engineering notes on the tricky parts (like handling offline folder rename collisions without resurrecting dead files) on GitHub.
I've spent the last three weeks doing brutal QA passes to harden mobile reconnects, IndexedDB quota failures, and offline split-brain directory renames.
I'd love feedback on the architecture, the code, or the trade-offs I made. I'll be hanging out in the thread to answer questions!