Which, for this use case - bulk generating a read-only data file from a source dataset in a batch job - seems like a pretty good performance tradeoff, no? In the case of a failure of some kind - no big deal, just restart the process anew.
This is one of those use cases where SQLite isn’t replacing <database> - it’s replacing fopen.
Exactly - this use case is write-once, then read-only after that. Random updates to the sqlite file afterwards would need more "normal" durability settings, but they also wouldn't need as much throughput.
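A minimal sketch of the pattern in Python, with the durability pragmas relaxed for the bulk build (the table name and schema here are just illustrative):

```python
import sqlite3
import os
import tempfile

def build_readonly_db(path, rows):
    """Bulk-generate a SQLite file with durability turned off.

    Safe only because the file is write-once: if the process dies
    mid-build, we simply delete the file and rerun the whole job.
    """
    con = sqlite3.connect(path)
    con.execute("PRAGMA journal_mode = OFF")  # no rollback journal
    con.execute("PRAGMA synchronous = OFF")   # don't fsync on commit
    con.execute("CREATE TABLE data (key TEXT PRIMARY KEY, value TEXT)")
    con.executemany("INSERT INTO data VALUES (?, ?)", rows)
    con.commit()
    con.close()

path = os.path.join(tempfile.mkdtemp(), "dataset.db")
build_readonly_db(path, [("a", "1"), ("b", "2"), ("c", "3")])

# Consumers open the finished file read-only via a URI.
con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(con.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # 3
```

Wrapping all the inserts in a single transaction (one `commit()` at the end) is what gives the throughput win; the pragmas just remove the remaining fsync and journaling overhead.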