When we tell people that Bun Agents stores all their tasks, notes, and habits in a SQLite database that runs inside the browser, we usually get one of two reactions: "That's not possible" or "That sounds slow." Both reactions are understandable — they reflect the state of browser storage as it was five years ago. Neither is accurate today.

This post is a technical deep-dive into how browser storage evolved from a 5MB key-value store to a platform capable of running full relational queries at near-native speed, why we chose SQLite over the alternatives, and what this means practically for the people using Bun Agents — including how to get your entire database as a portable .sqlite file any time you want.

A brief history of browser storage

To appreciate why OPFS-backed SQLite is meaningful, you need to understand the landscape it replaced.

Why not IndexedDB?

IndexedDB is good at what it's designed for: storing large blobs, caching API responses, and providing a simple key-value store for structured objects. But productivity data — tasks with projects, labels, due dates, and completion states; notes with tags and backlinks; habit records with streaks and metadata — is fundamentally relational data. The productivity apps that use IndexedDB as their primary store end up building a terrible, slow, bespoke query engine on top of it in JavaScript.

Consider a simple query: "Give me all tasks that are incomplete, due this week, in any project tagged 'work', sorted by priority." In SQLite:

SELECT t.id, t.title, t.due_date, t.priority
FROM tasks t
JOIN task_labels tl ON t.id = tl.task_id
JOIN labels l ON tl.label_id = l.id
WHERE t.completed = 0
  AND t.due_date BETWEEN date('now') AND date('now', '+7 days')
  AND l.name = 'work'
ORDER BY t.priority ASC, t.due_date ASC;

Execution time with a few thousand tasks: under 2ms, thanks to B-tree indexes on completed, due_date, and the join columns. The equivalent operation in IndexedDB would require pulling all incomplete tasks into JavaScript memory, joining them manually with label data, filtering, and sorting — easily 10–50x slower, and requiring significantly more code to maintain correctly.
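For comparison, this is roughly the logic an IndexedDB-based store forces into the application layer. Here it is sketched in Python (standing in for the JavaScript a real app would ship), using the same field names as the SQL above:

```python
from datetime import date, timedelta

def incomplete_work_tasks(tasks, task_labels, labels):
    """Hand-rolled equivalent of the SQL above: join, filter, and sort in app code."""
    today = date.today()
    week_end = today + timedelta(days=7)
    # Rebuild by hand the lookups that SQLite's B-tree indexes provide for free.
    work_label_ids = {l["id"] for l in labels if l["name"] == "work"}
    work_task_ids = {
        tl["task_id"] for tl in task_labels if tl["label_id"] in work_label_ids
    }
    rows = [
        t for t in tasks
        if not t["completed"]
        and t["id"] in work_task_ids
        and today <= t["due_date"] <= week_end
    ]
    rows.sort(key=lambda t: (t["priority"], t["due_date"]))
    return rows
```

Every task and every label mapping must first be pulled into memory before this function can even start, which is where the 10–50x gap comes from.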

SQLite also gives us full-text search via the FTS5 extension, which powers the global search in Bun Agents. Searching 10,000 notes in under 20ms, with ranking by relevance, is something IndexedDB simply cannot offer.
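To make the FTS5 point concrete, here is a minimal sketch using Python's sqlite3 module (one of the tools that can open a Bun Agents export); the notes_fts table and its contents are invented for illustration, not our actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: every inserted document is tokenized and indexed.
conn.execute("CREATE VIRTUAL TABLE notes_fts USING fts5(title, body)")
conn.executemany(
    "INSERT INTO notes_fts (title, body) VALUES (?, ?)",
    [
        ("Meeting notes", "Discussed the Q3 roadmap and the hiring plan"),
        ("Groceries", "Milk, eggs, flour"),
        ("Roadmap draft", "Q3 priorities: search, export, offline sync"),
    ],
)
# MATCH runs a full-text query; bm25() scores results by relevance
# (lower bm25 scores rank higher in FTS5).
rows = conn.execute(
    "SELECT title FROM notes_fts WHERE notes_fts MATCH ? ORDER BY bm25(notes_fts)",
    ("roadmap",),
).fetchall()
```

Both notes mentioning "roadmap" come back ranked; the grocery list does not. The same virtual-table mechanism, at much larger scale, is what a global search like ours sits on.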

sql.js vs. native SQLite WASM

The sql.js library was a genuine breakthrough when it launched — it proved that running SQLite in the browser was possible. But it had a fundamental limitation: it loaded the entire database into memory on startup and required explicit serialization of the in-memory state back to persistent storage (usually IndexedDB). For small databases this is fine; for a user with years of notes and habits, loading hundreds of megabytes into memory on every page load is not acceptable.

The official SQLite WASM build, maintained by the SQLite team itself, takes a different approach. It uses OPFS as its virtual filesystem, which means reads and writes go directly to a real file on disk, stored in the browser's origin-private file system. The database is never fully loaded into memory; SQLite reads and writes pages on demand, exactly as it does in a native application. Database files can be gigabytes in size with no startup cost proportional to the database's size.

"SQLite is not a toy. It is the most widely deployed database engine in the world — used in every Android device, every iPhone, every macOS installation, and every Firefox browser. Running it in WASM isn't a hack; it's porting a battle-tested, meticulously engineered system to a new platform."

OPFS performance: the numbers

The OPFS synchronous access API (available via createSyncAccessHandle() from a Dedicated Web Worker) provides file I/O that is synchronous from the worker's perspective, which means SQLite can operate with its standard synchronous VFS interface without modification. The result is performance that closely tracks native SQLite on the same hardware.

In benchmarks on a 2023 MacBook Pro M2 and a 2022 iPhone 14, typical Bun Agents queries and writes closely tracked native SQLite on the same hardware.

These are the performance characteristics of a real database engine. They're meaningfully faster than any IndexedDB-based alternative because there's no serialization overhead, no round-trip through a JavaScript promise queue, and no memory copy of the entire dataset.

How we handle schema migrations

One of the engineering challenges of shipping a local-first database is schema migration. With a cloud database, you can run a migration once on the server and all users are immediately on the new schema. With a local database, each user's device has its own copy of the schema, and migrations must run on the device at app startup — potentially on old schema versions from months or years ago.

We handle this with a migration runner that executes on every app load, before the UI is shown. It works like this:

  1. On first launch, the database is created with the current schema version (tracked in a migrations table)
  2. On subsequent launches, the migration runner queries the migrations table to find the current version
  3. Any migrations with a version number higher than the current version are executed in order, inside a single SQLite transaction
  4. If a migration fails, the transaction is rolled back and the user is shown an error — their data is never left in a partially-migrated state

Each migration is a simple SQL string; we don't use a code-heavy migration DSL. That keeps migrations readable and auditable. Safety comes from two guards: creation statements are wrapped in CREATE TABLE IF NOT EXISTS, and because SQLite's ALTER TABLE has no IF NOT EXISTS form for adding columns, the runner records every applied version in the migrations table and never executes the same migration twice. The full migration history is part of the open codebase so users can inspect exactly what changes were made to their schema over time.
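As an illustrative sketch of that runner, here is the same logic in Python's standard sqlite3 module; the real runner lives in the browser, and the MIGRATIONS list below is hypothetical, not Bun Agents' actual schema:

```python
import sqlite3

# Hypothetical migration history: ordered (version, sql) pairs.
MIGRATIONS = [
    (1, "CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, title TEXT)"),
    (2, "ALTER TABLE tasks ADD COLUMN priority INTEGER DEFAULT 0"),
]

def migrate(conn: sqlite3.Connection) -> int:
    """Apply pending migrations atomically; return the resulting schema version."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS migrations (version INTEGER PRIMARY KEY)"
    )
    current = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM migrations"
    ).fetchone()[0]
    pending = [(v, sql) for v, sql in MIGRATIONS if v > current]
    if not pending:
        return current
    # One transaction for all pending migrations: the schema either reaches
    # the newest version or stays exactly where it was.
    conn.execute("BEGIN")
    try:
        for version, sql in pending:
            conn.execute(sql)
            conn.execute("INSERT INTO migrations (version) VALUES (?)", (version,))
        conn.execute("COMMIT")
    except sqlite3.Error:
        conn.execute("ROLLBACK")
        raise
    return pending[-1][0]
```

In this sketch the connection is opened with isolation_level=None so the explicit BEGIN/COMMIT/ROLLBACK statements, not the Python driver, control the transaction boundaries.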

How data export works

The most important user-facing feature enabled by our SQLite architecture is data export. Because your entire Bun Agents database is a single .sqlite file in OPFS, exporting it is a matter of reading that file out of OPFS and offering it as a browser download. No server roundtrip. No data processing on our end. No waiting for an export job to complete.

From Settings → Data → Export Database, the process takes roughly one second for typical database sizes (most users' databases are under 50MB — a year of daily tasks, notes, and habits generates surprisingly little data). What you get is a standard SQLite file that any SQLite client can open: DB Browser for SQLite, DuckDB, the sqlite3 CLI, Python's sqlite3 module, or any other tool in the ecosystem.
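For instance, a short script using Python's sqlite3 module can open the export and list its tables (the filename below is hypothetical; use the path of your downloaded file):

```python
import sqlite3

def list_tables(path: str) -> list[str]:
    """Open an exported .sqlite file and list the tables it contains."""
    conn = sqlite3.connect(path)
    # Every SQLite file describes itself: sqlite_master lists its tables.
    return [
        name
        for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        )
    ]

if __name__ == "__main__":
    print(list_tables("bun-agents-export.sqlite"))
```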

We also offer per-plugin JSON exports for users who want a more human-readable format — every task as a JSON object, every note as a JSON document with its full content and metadata. But the .sqlite export is the canonical backup format because it's portable, it's universally supported, and it's exactly what's on your device — no transformation, no data loss.
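If you would rather derive JSON yourself from the .sqlite export, that takes only a few lines. A sketch using Python's sqlite3 module, where the tasks table name is illustrative:

```python
import json
import sqlite3

def table_to_json(path: str, table: str) -> str:
    """Dump every row of one table as a JSON array of objects."""
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row  # rows become dict-like, keyed by column name
    rows = conn.execute(f'SELECT * FROM "{table}"').fetchall()
    return json.dumps([dict(row) for row in rows], indent=2)
```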

What this means for you

The technical implementation details matter because they have direct consequences for your experience as a user: your data lives in a local SQLite database, on your device, in a format you can inspect and export at any time.

The bet we've made is that local-first, SQLite-backed storage is not just a technical curiosity — it's the right foundation for productivity software that actually belongs to the people who use it. Every year, the browser platform gets more capable, and the case for running real application logic locally gets stronger. We built on that foundation from day one, and we're not planning to change it.