Summarizer

Write Scalability Patterns

Analysis of SQLite's write throughput capabilities, serial write handling, edge sharding with Cloudflare D1, and when single-node architecture suffices

← Back to Databases in 2025: A Year in Review

While traditional wisdom favors multi-tier architectures like Postgres, many proponents argue that SQLite’s modern features—such as WAL mode and `BEGIN CONCURRENT`—allow a single node to handle thousands of simultaneous connections with impressive efficiency. By serializing fast writes, a single machine can theoretically serve millions of users without noticeable lag, challenging the necessity of complex cloud infrastructures for most business applications. This "single-node" resurgence is further bolstered by tools like Litestream for backups and Cloudflare D1 for edge sharding, which capitalize on the massive power of modern hardware and the latency gains of colocating the database with the web server. Ultimately, the discussion suggests that for many use cases, over-engineering for "Facebook-scale" creates unnecessary complexity that a well-tuned, local SQLite setup can easily bypass.

10 comments tagged with this topic

Given that SQLite isn't a multi-user database, how many web connections that do writes can it realistically handle as a backend? Assume writes are small, say 100+ rows each. Any mitigation strategies for larger use cases? Thanks in advance!

A couple thousand simultaneous connections should be fine, depending on total system load, whether you're running on spinning disks or SSDs, and your p50/p99 latency demands — and of course you'd need to enable WAL mode in the first place, so that reads and writes don't block each other. Run an experiment to be sure about your specific situation.
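To illustrate, enabling WAL from Python's built-in `sqlite3` module looks like this. A minimal sketch — the temp-file path and `posts` schema are made up for the example:

```python
import os
import sqlite3
import tempfile

# WAL must be enabled on a file-backed database (in-memory DBs ignore it).
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)

# PRAGMA journal_mode returns the mode actually in effect ("wal" on success).
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# Wait up to 5 s for a competing writer instead of failing immediately.
conn.execute("PRAGMA busy_timeout=5000")

conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO posts (body) VALUES (?)", ("hello",))
conn.commit()
```

WAL is a property of the database file, not the connection, so later connections find it already enabled.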

For simultaneous write transactions you also need `BEGIN CONCURRENT`, which currently lives on a branch rather than in mainline SQLite: https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...

Why have multiple connections in the first place? If your writes are fast, doing them serially does not cause anyone to wait. How often does the typical user write to the DB? Often it's about once per day (for example on Hacker News). Say a write takes 1/1000 of a second. Then you can serve 1000 * 60 * 60 * 24 = 86.4 million users per day, and nobody has to wait longer than a second when they hit the "reply" button, as I do now...
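The arithmetic above is easy to check. A tiny sketch using the same assumed figures (1 ms per write, one write per user per day — assumptions from the comment, not measurements):

```python
# Back-of-the-envelope throughput of a fully serialized writer.
write_latency_s = 1 / 1000                       # assumed: 1 ms per write
writes_per_second = 1 / write_latency_s          # 1000 writes/s
writes_per_day = writes_per_second * 60 * 60 * 24

# At one write per user per day, that is the number of users served.
users_served_per_day = int(writes_per_day)       # 86,400,000

def queue_wait_s(n_pending: int) -> float:
    """Worst-case wait if n_pending writes are queued ahead of you."""
    return n_pending * write_latency_s
```

Even with a burst of a thousand queued writes, the last one in line waits only about a second — which is the commenter's point.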

> If your writes are fast, doing them serially does not cause anyone to wait.

Why impose such a limitation on your system when you don't have to, by using some other database actually designed for multi-user systems (Postgres, MySQL, etc.)?

That's basically how the web started. You can serve a ridiculous number of users from a single physical machine; it isn't until you get into the hundreds-of-millions-of-users ballpark that you need to actually create architecture. The "cloud" lets you rent a small part of a physical machine, so it feels like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of storage, all for less than $2k per month (amortized). Yes, you need an actual (small) team of people to manage it, but that will get you so far that it's utterly ridiculous. Assuming you can accept 99% uptime (that's ~3.65 days a year of downtime): if you were on a single cloud provider in 2025, that's basically what you got last year anyway.

I agree... there is scale, and then there is scale, and then there is scale like Facebook. We need not assume FB-level internet scale for typical business apps, where one instance may support a few hundred users max, or even a few thousand. Over-engineering under such assumptions is likely cost-ineffective and may even increase the surface area of risk. $0.02

SQLite as a database for web services had a bit of a boom due to:

1. People gaining a newfound appreciation for having the database on the same machine as the web server itself. The latency gains can be substantial, and there are some small cost savings too, since you don't need a separate database server anymore. This does limit you to a single web server, but single machines can have tons of cores and serve tens of thousands of requests per second, so that is not as limiting as you'd think.

2. Tools like Litestream will continuously back up all writes to object storage, so one web server having a hardware failure is not a problem as long as your SLAs allow downtimes of a few minutes every few years. (And let's be real: most small companies for which this would be a good architecture don't have any SLA at all.)

3. SQLite has concurrent writes now, so it's gotten much more performant in situations with multiple users at the same time.

So for specific use cases it can be a nice setup: you don't feel the downsides (yet), but you do get better latency and simpler architecture. That said, there's a reason the standard became the standard, so unless you have a very specific reason to choose this, I'd recommend the "normal" multi-tier architecture in something like 99% of cases.

> SQLite has concurrent writes now

Just to clarify: unless I've missed something, this only means WAL mode, which allows concurrent reads at the same time as a write — I don't think it can handle multiple concurrent writes at the same time?
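That distinction is observable from Python's `sqlite3`: under WAL, a reader proceeds while a write transaction is open, but a second writer gets SQLITE_BUSY. A sketch (throwaway temp-file database, `timeout=0` so lock contention fails fast instead of waiting):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

w = sqlite3.connect(path, timeout=0)
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE t (x INTEGER)")
w.commit()

# Writer holds an open write transaction with an uncommitted insert...
w.execute("BEGIN IMMEDIATE")
w.execute("INSERT INTO t VALUES (1)")

# ...yet a reader on another connection is not blocked: it sees the
# last committed snapshot (zero rows, since the insert isn't committed).
r = sqlite3.connect(path, timeout=0)
rows_seen_by_reader = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# A second writer, however, cannot start its own write transaction.
w2 = sqlite3.connect(path, timeout=0)
try:
    w2.execute("BEGIN IMMEDIATE")
    second_writer_blocked = False
except sqlite3.OperationalError:  # "database is locked" (SQLITE_BUSY)
    second_writer_blocked = True

w.commit()
rows_after_commit = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # now 1
```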

I think the right pattern here is edge sharding of user data. Cloudflare makes this pretty easy with D1/Hyperdrive.
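The core of that pattern is a stable mapping from user to shard, with one small database per shard. A minimal sketch using local SQLite databases — with Cloudflare D1 each shard would instead be a separate D1 database bound to a Worker; the shard count, table, and helper names here are invented for illustration:

```python
import hashlib
import sqlite3

N_SHARDS = 4

def shard_for(user_id: str) -> int:
    # Stable hash so a user always routes to the same shard.
    # (Python's built-in hash() is randomized per process; hashlib is not.)
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_SHARDS

# One database per shard (in-memory here for brevity).
shards = {i: sqlite3.connect(":memory:") for i in range(N_SHARDS)}
for conn in shards.values():
    conn.execute("CREATE TABLE profiles (user_id TEXT PRIMARY KEY, name TEXT)")

def save_profile(user_id: str, name: str) -> None:
    conn = shards[shard_for(user_id)]
    conn.execute("INSERT OR REPLACE INTO profiles VALUES (?, ?)", (user_id, name))
    conn.commit()

def load_profile(user_id: str):
    conn = shards[shard_for(user_id)]
    return conn.execute(
        "SELECT name FROM profiles WHERE user_id = ?", (user_id,)
    ).fetchone()
```

Because each user's data lives entirely in one shard, every shard keeps SQLite's single-writer model while total write throughput scales with the number of shards.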