Using SQLite in Production: 2026 Reality Check


SQLite has a reputation problem. Developers assume it’s a toy database for development and testing, unsuitable for production workloads. This assumption is increasingly wrong. SQLite handles significant production loads for many companies, but understanding where it works and where it doesn’t is critical.

What’s Changed

SQLite was always capable, but recent improvements have expanded its production viability significantly. Refinements to WAL mode (and the experimental WAL2 branch), better concurrent-read behavior, smarter query planning, and enhanced JSON support make modern SQLite competitive with traditional client-server databases for many use cases.

The database now ships with most operating systems and languages. It’s battle-tested at enormous scale—SQLite is likely the most deployed database in the world, running on billions of devices. The file format is stable, backward compatible, and explicitly guaranteed to remain compatible until at least 2050.

Cloud hosting evolution matters too. Deploying SQLite on modern infrastructure is simpler than it used to be. Replication tools like Litestream stream changes to S3 in near real time, providing disaster recovery without complex database cluster management.

Where SQLite Excels

SQLite is exceptional for read-heavy workloads. Since the database is just a file, reads are extremely fast. There’s no network latency, no connection pooling overhead, no query serialization. The database runs in-process with your application code.
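A minimal sketch of what in-process access looks like with Python’s built-in sqlite3 module (the file and table names here are illustrative):

```python
import sqlite3

# Open (or create) the database file -- no server, no network round trip.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO posts (title) VALUES (?)", ("hello",))
conn.commit()

# Reads execute directly against the file in the same process.
rows = conn.execute("SELECT id, title FROM posts ORDER BY id").fetchall()
print(rows)
conn.close()
```

No driver configuration, no connection string, no pool: the entire data layer is the file on disk.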

Applications with moderate write volume and single-server deployment fit SQLite perfectly. Content management systems, analytics dashboards, internal tools, and edge computing applications are often ideal SQLite candidates.

The simplicity advantage is enormous. No separate database server to provision, configure, monitor, or maintain. No connection pool to tune. No replication lag to handle. Deploy your application binary and a database file—that’s it.

Backup and restoration are trivial. Copy the database file. That’s your backup. Restore by copying it back. For applications where downtime during restore is acceptable, this simplicity eliminates an entire category of operational complexity.
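Copying the file is safe when nothing is writing. For a consistent copy of a live database, Python’s sqlite3 module exposes SQLite’s online backup API. A sketch, with illustrative file names:

```python
import sqlite3

src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
src.execute("INSERT INTO notes VALUES ('remember')")
src.commit()

# The online backup API takes a consistent snapshot even while
# other connections are reading or writing the source database.
dst = sqlite3.connect("backup.db")
with dst:
    src.backup(dst)

copied = dst.execute("SELECT body FROM notes").fetchall()
src.close()
dst.close()
```

The resulting backup.db is a complete, standalone database file you can copy anywhere.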

The Write Concurrency Reality

This is where most assumptions about SQLite break down. SQLite supports concurrent reads beautifully. Multiple processes can read simultaneously with no contention. But writes are serialized—only one write transaction executes at a time.

This doesn’t mean SQLite can’t handle write load. A single write transaction can be extremely fast, often completing in single-digit milliseconds. If your write volume is hundreds or even a few thousand transactions per second, SQLite handles this easily on modern hardware.
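Because writes are serialized, per-transaction overhead dominates write throughput; batching many statements into one transaction pays that cost once instead of once per row. A sketch using an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (payload TEXT)")

# One commit per row would pay the transaction cost 10,000 times;
# wrapping the batch in a single transaction pays it once.
with conn:  # opens a transaction, commits on successful exit
    conn.executemany(
        "INSERT INTO events (payload) VALUES (?)",
        (("event-%d" % i,) for i in range(10_000)),
    )

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
conn.close()
```

On modern hardware, batches like this routinely sustain far higher insert rates than row-at-a-time commits.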

The limitation appears when you need parallel writes from multiple servers or processes. SQLite doesn’t support this. If your application requires horizontal write scaling across multiple servers, you need a different database.

WAL mode partially alleviates write contention. Readers don’t block writers and writers don’t block readers. This allows simultaneous read and write operations, significantly improving concurrency compared to the default rollback journal mode.
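Enabling WAL mode is a one-line pragma, and the setting persists in the database file itself. A sketch (the file name is illustrative; note that in-memory databases cannot use WAL):

```python
import sqlite3

conn = sqlite3.connect("wal_demo.db")

# Switch the journal mode; the pragma returns the mode now in effect,
# and the setting is stored persistently in the database file.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# A busy timeout makes a writer wait up to 5 seconds for the write
# lock instead of failing immediately with "database is locked".
conn.execute("PRAGMA busy_timeout=5000")
conn.close()
```

Setting a busy timeout alongside WAL is a common pairing: WAL removes reader/writer blocking, and the timeout smooths over brief writer/writer contention.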

Network File System Complications

Running SQLite on network file systems (NFS, SMB) is officially unsupported and unsafe. The file-locking mechanisms SQLite relies on don’t work reliably over network protocols, and broken locking eventually means database corruption.

This rules out certain cloud deployment patterns. You can’t run multiple instances of your application on different servers all accessing the same SQLite file on shared storage. This is a fundamental architectural constraint.

However, running SQLite on a single server with local storage works perfectly, even in cloud environments. Deploy to a VM or container with attached storage, replicate the file to S3 with Litestream, and you have a robust production setup.

Size Limitations in Practice

SQLite databases can theoretically grow to 281 terabytes. The practical limit is lower, determined by file system limitations and performance characteristics rather than SQLite itself.

Databases in the tens or hundreds of gigabytes are common and work well. Performance depends on access patterns, indexing strategy, and available memory more than absolute database size.

Very large databases (multiple hundreds of GB or several TB) become operationally challenging. Backup and restore take longer. Disk space management becomes more critical. At some point, the simplicity advantage diminishes and client-server databases become more appropriate.

Replication and High Availability

Native replication doesn’t exist in SQLite. This is a deliberate design decision. SQLite is embedded, so replication would require coordination mechanisms that violate the embedded nature of the database.

Tools like Litestream provide replication by continuously streaming SQLite’s WAL frames to object storage. Changes flow to S3, GCS, or similar storage in near real-time. Recovery involves restoring a snapshot and replaying the most recent WAL frames.

This isn’t the same as database cluster replication. There’s no automatic failover to a hot standby. But for many applications, the ability to restore from a replicated copy within seconds or minutes provides adequate disaster recovery.

LiteFS from Fly.io offers another approach, providing distributed SQLite with read replication across multiple nodes and coordinated writes. This gives some of the benefits of database clustering while maintaining SQLite’s simplicity for application code.

Query Performance Characteristics

SQLite’s query optimizer is sophisticated but different from PostgreSQL or MySQL. Understanding these differences prevents performance surprises.

The database tends to prefer index scans over sequential scans more aggressively than PostgreSQL. This is usually good, but occasionally leads to suboptimal query plans when the optimizer chooses an index scan for a query where a sequential scan would be faster.
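When a plan looks suspicious, EXPLAIN QUERY PLAN shows what the optimizer chose. A sketch with an illustrative table and index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# Each row of EXPLAIN QUERY PLAN output has a human-readable
# "detail" column describing the chosen access path.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@example.com",)
).fetchall()
for row in plan:
    print(row)  # the detail column names the index when one is used
conn.close()
```

Checking plans this way is cheap and catches the occasional case where the optimizer’s index preference works against you.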

JSON support has improved dramatically. SQLite now includes extensive JSON functions and can index JSON fields efficiently. Applications that previously needed PostgreSQL’s JSONB can often use SQLite instead.
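A sketch of the JSON functions, assuming a SQLite build with JSON support (compiled in by default since 3.38; the table and paths here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
conn.execute("""INSERT INTO docs VALUES ('{"user": {"name": "ada", "age": 36}}')""")

# json_extract pulls values out with a JSONPath-like syntax.
name = conn.execute(
    "SELECT json_extract(body, '$.user.name') FROM docs"
).fetchone()[0]

# An expression index makes lookups on a JSON field use an index
# rather than scanning and re-parsing every document.
conn.execute(
    "CREATE INDEX idx_docs_name ON docs (json_extract(body, '$.user.name'))"
)
conn.close()
```

The expression-index pattern is the rough equivalent of indexing a JSONB field in PostgreSQL.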

Full-text search is built in through FTS5. For many applications, SQLite’s FTS is sufficient, eliminating the need for ElasticSearch or similar tools. The search quality isn’t quite as sophisticated, but the operational simplicity is compelling.
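A sketch of FTS5, which is compiled into most distributions, including Python’s bundled SQLite (the table and documents are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table stores and indexes the given text columns.
conn.execute("CREATE VIRTUAL TABLE articles USING fts5(title, body)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?)",
    [
        ("SQLite in production", "WAL mode improves concurrency"),
        ("Choosing a database", "PostgreSQL offers native replication"),
    ],
)

# MATCH runs a full-text query; ORDER BY rank sorts by bm25 relevance.
hits = conn.execute(
    "SELECT title FROM articles WHERE articles MATCH ? ORDER BY rank",
    ("concurrency",),
).fetchall()
conn.close()
```

For many applications this covers search well enough to skip a separate search service entirely.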

Security Considerations

SQLite doesn’t have user authentication or authorization. If you have access to the file, you have complete database access. This is fine for single-server applications where file system permissions provide access control, but problematic for multi-tenant scenarios or when database-level access control is required.

Encryption is available through extensions like SQLCipher, which provides transparent database encryption. The entire database file is encrypted at rest. Performance overhead is reasonable, typically 10-20% slower than unencrypted access.

Migration from Client-Server Databases

Migrating to SQLite from PostgreSQL or MySQL is often straightforward for applications that fit SQLite’s constraints. The SQL dialect differences are minor and easily handled.

The bigger challenge is application code that assumes client-server database characteristics. Connection pooling code becomes unnecessary. Retry logic for connection failures needs removal. Transaction isolation expectations might differ.

Migrating away from SQLite to client-server databases is also relatively simple. Since SQLite closely follows SQL standards, most queries work with minimal modification. The hard part is operational changes around deployment, backup, and monitoring.

Real Production Examples

Expensify famously uses SQLite in production, processing millions of transactions daily. Their architecture places SQLite on individual servers with replication handled at the application level.

Many applications serving static or semi-static content use SQLite effectively. The database stores content that changes infrequently but gets read millions of times. The embedded nature eliminates database server costs while providing excellent read performance.

Edge computing deployments increasingly use SQLite. Running databases at edge locations close to users provides better latency than centralized databases, and SQLite’s simplicity makes deploying to dozens or hundreds of edge locations practical.

When to Choose Something Else

If you need horizontal write scaling across multiple servers, use PostgreSQL, MySQL, or a distributed database. SQLite doesn’t support this pattern and trying to force it leads to corruption and data loss.

If you need sophisticated access control and multi-tenancy at the database level, client-server databases are more appropriate. SQLite’s file-based security model doesn’t provide the granular access control required.

If your write volume is extremely high—tens of thousands of writes per second—consider alternatives. While SQLite is fast, client-server databases with write parallelization can exceed SQLite’s serialized write throughput.

Operational Advantages

The operational simplicity cannot be overstated. No database server to provision, patch, monitor, or scale. No connection pooling configuration. No replication lag. No split-brain scenarios. No complex backup procedures.

This simplicity has real cost implications. You eliminate database server costs, reduce operational overhead, and decrease the surface area for failures. For many applications, these benefits outweigh SQLite’s limitations.

Debugging is simpler too. Copy the database file to your development machine and run the exact production data locally. No need to sanitize dumps or configure local database servers to match production.

The 2026 Recommendation

Use SQLite for applications with these characteristics:

  • Single-server or edge deployment
  • Read-heavy workload or moderate write volume
  • Need for operational simplicity
  • Embedded or mobile applications
  • Development/testing environments

Use client-server databases when you need:

  • Multi-server write scaling
  • Database-level access control
  • Extremely high write throughput
  • Native replication and high availability
  • Complex analytical queries across huge datasets

The middle ground is larger than most developers assume. Many applications confidently deployed on PostgreSQL or MySQL would work fine, possibly better, on SQLite. The default choice has shifted from “obviously need a real database” to “does this actually need more than SQLite provides?”

For teams building new applications, starting with SQLite and migrating if you outgrow it is often the right choice. Premature scaling to complex database infrastructure wastes time and money solving problems you don’t have yet.

SQLite is a legitimate production database in 2026. Understanding its actual capabilities and limitations rather than outdated assumptions lets you make better architectural decisions and potentially eliminate significant operational complexity from your stack.