> You're going to have down time for migrations unless you're very clever with your schema and/or replicas.
probably worth stating these kinds of design considerations/assumptions up-front
i'm sure lots of applications are fine with "downtime for [database] migrations" but lots more are definitely not, especially those interested in synthetic metrics like TPS
I'd argue the opposite: most applications are fine with an hour of downtime a month, and arguably much more downtime than that. The recent AWS and Cloudflare outages have proven that.
You can achieve zero downtime with SQLite if you really need to.
TPS is not a synthetic metric when you cap out at 100 TPS because of Amdahl's law and your users following a power-law distribution.
1h of downtime per month means you're delivering at best two 9s of availability. again that may be fine for lots of applications but it's trivial scale, and certainly a couple orders of magnitude below what aws and cloudflare provide
taking a step back, if your application's db requirements can be satisfied by sqlite [+replication] then that's great, but that set of requirements is much narrower, and much easier to solve, than what postgres is for
Personally, I'm a fan of event sourcing: if you need zero downtime, you probably need auditability. You have one SQLite database that's an append-only event log. You build projections from it into any number of SQLite databases. These databases pull from the log database and update themselves. They are completely expendable, so you never have to vacuum them or migrate them; you just build a new projection and then point to it. This is also how you can spread your SQLite over nodes (if that's your thing, with something like NATS).
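A rough sketch of that log-plus-projection shape in Go (the driver, table names, and the "order_placed" event type are just illustrative assumptions, not anything from a real system):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumption: any Go SQLite driver works here
)

func mustExec(db *sql.DB, q string, args ...any) {
	if _, err := db.Exec(q, args...); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// The append-only event log: never vacuumed, never migrated.
	logDB, err := sql.Open("sqlite3", "events.db")
	if err != nil {
		log.Fatal(err)
	}
	defer logDB.Close()
	mustExec(logDB, `CREATE TABLE IF NOT EXISTS events (
		id      INTEGER PRIMARY KEY,
		type    TEXT NOT NULL,
		payload TEXT NOT NULL)`)

	// A projection built from the log. It's expendable: to change the schema,
	// build a fresh projection database and point the app at it instead.
	projDB, err := sql.Open("sqlite3", "orders_projection.db")
	if err != nil {
		log.Fatal(err)
	}
	defer projDB.Close()
	mustExec(projDB, `CREATE TABLE IF NOT EXISTS orders (event_id INTEGER PRIMARY KEY, payload TEXT)`)
	mustExec(projDB, `CREATE TABLE IF NOT EXISTS cursor (last_id INTEGER NOT NULL)`)

	// Catch up by replaying events the projection hasn't seen yet.
	var last int64
	_ = projDB.QueryRow(`SELECT COALESCE(MAX(last_id), 0) FROM cursor`).Scan(&last)

	rows, err := logDB.Query(`SELECT id, type, payload FROM events WHERE id > ? ORDER BY id`, last)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var (
			id           int64
			typ, payload string
		)
		if err := rows.Scan(&id, &typ, &payload); err != nil {
			log.Fatal(err)
		}
		if typ == "order_placed" { // made-up event type, projection-specific logic goes here
			mustExec(projDB, `INSERT OR REPLACE INTO orders (event_id, payload) VALUES (?, ?)`, id, payload)
		}
		mustExec(projDB, `DELETE FROM cursor`)
		mustExec(projDB, `INSERT INTO cursor (last_id) VALUES (?)`, id)
	}
}
```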
There are other ways where you're just more disciplined with your schema and indexes, e.g. jsonb, partial indexes, and existence-based programming (from data-oriented design).
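For the schema-discipline route, a minimal sketch of what I mean by JSON columns plus partial/expression indexes in SQLite (the table and column names are made up):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "app.db") // hypothetical database
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	stmts := []string{
		// Flexible attributes live in a JSON column, queried with SQLite's
		// json_* functions, so adding an attribute needs no ALTER TABLE.
		`CREATE TABLE IF NOT EXISTS users (
			id         INTEGER PRIMARY KEY,
			email      TEXT NOT NULL,
			attrs      TEXT NOT NULL DEFAULT '{}',
			deleted_at TEXT)`,
		// Partial index: only the rows you actually query pay for it.
		`CREATE INDEX IF NOT EXISTS idx_active_email
			ON users(email) WHERE deleted_at IS NULL`,
		// Expression index over a JSON attribute.
		`CREATE INDEX IF NOT EXISTS idx_users_plan
			ON users(json_extract(attrs, '$.plan'))`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}
}
```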
That's indeed a possible (and very resilient) solution. But that's a significant shift for many apps. I'm a fan of event sourcing in theory, but I've always been afraid of it making things overly complex in practice, for relatively small apps. But I haven't tried hard enough to have a real opinion on this.
Oh for sure. That's why my default is: if you don't need event sourcing and are OK with some downtime a month when you need to add a really large index, keep it simple.
Once you have a decent CQRS setup (which is a natural fit with SQLite) and event sourcing, it can be quite easy to spin up on a new project.
I think people don't have an honest assessment of what their project requirements are in terms of uptime and scale. So they start with crazy microservice, multi-node architectures that are designed to scale to the moon and inevitably end up with more downtime due to complexity.
I'd still recommend people start with managed PG for most things. But, if you're comfortable using VPSs or bare metal servers SQLite can take you very far.
I fully agree that most projects would work perfectly fine with a monolith and PostgreSQL. And yet they go with microservices, Redis, RabbitMQ, etc, making the system more complex and less available.
SQLite would be perfect if it allowed multiple concurrent writers, which would make it possible to run database migrations without downtime. I'm looking forward to Turso for this.
why would you create a new PricingService for every request? what makes you think a mutex in each of those (obviously unique) PricingService values would somehow protect the (inexplicably shared) PricingInfo value??
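I don't have the article's exact types handy, but the usual fix is that the mutex has to live with the shared value, not inside a per-request struct. Something roughly like this (the field and method names are invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// PricingInfo is shared across requests, so the mutex that guards it
// must be shared too, i.e. it lives inside the shared value.
type PricingInfo struct {
	mu    sync.Mutex
	rates map[string]float64
}

func (p *PricingInfo) SetRate(sku string, rate float64) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.rates[sku] = rate
}

func (p *PricingInfo) Rate(sku string) float64 {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.rates[sku]
}

// PricingService can be created per request without issue, because it only
// holds a pointer to the shared, self-synchronizing PricingInfo.
type PricingService struct {
	info *PricingInfo
}

func main() {
	info := &PricingInfo{rates: make(map[string]float64)}

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			svc := PricingService{info: info} // a per-"request" value is fine
			svc.info.SetRate("sku-1", float64(i))
		}(i)
	}
	wg.Wait()
	fmt.Println(info.Rate("sku-1"))
}
```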
I had a similar reaction to that, glad it's not just me.
Meanwhile, with the 4th item, this whole example is gross: repeatedly polling a buffer every 100ms is a massive red flag. And as for the data race in that item, the idiomatic fix is to just use io.Pipe, which solves the entire problem far more cleanly than inventing a SyncWriter.
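Not the article's actual code, but the io.Pipe shape is roughly this: the writer hands bytes straight to the reader, so there's no shared buffer to poll and nothing to race on.

```go
package main

import (
	"fmt"
	"io"
)

func main() {
	// io.Pipe connects a writer directly to a reader: writes block until the
	// other side reads, so no intermediate buffer and no polling loop.
	pr, pw := io.Pipe()

	go func() {
		defer pw.Close() // closing signals EOF to the reader
		fmt.Fprintln(pw, "hello from the producer") // hypothetical payload
	}()

	out, err := io.ReadAll(pr)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	fmt.Print(string(out))
}
```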
The author's last comment regarding "It would also be nice if more types have a 'sync' version, e.g. SyncWriter, SyncReader, etc" probably indicates there's some fundamental confusion here about idiomatic Go.
Yeah this whole section of the article threw me all the way off. What even is this code? There’s so many things wrong with it, it blows my mind.
About the only code example I saw in here and thought “yeah it sucks when that happens” is the accidental closure example. Accidentally shadowing something you’re trying to assign to in a branch because you need to handle an error or accidentally reassigning something can be subtle. But it’s pretty 101 go.
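For anyone who hasn't been bitten by it, the trap is `:=` inside a branch silently declaring a new variable instead of assigning to the outer one (the names here are made up):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig(path string) (string, error) {
	if path == "" {
		return "", errors.New("no path")
	}
	return "config for " + path, nil
}

func main() {
	var cfg string

	if os.Getenv("CONFIG_PATH") != "" {
		// Bug: ":=" declares NEW cfg and err variables scoped to this block,
		// shadowing the outer cfg instead of assigning to it.
		cfg, err := loadConfig(os.Getenv("CONFIG_PATH"))
		if err != nil {
			fmt.Println("load failed:", err)
		}
		_ = cfg // the outer cfg is still "" after this block
	}

	// Fix: declare err separately and use plain "=" so the outer cfg is assigned:
	//   var err error
	//   cfg, err = loadConfig(os.Getenv("CONFIG_PATH"))

	fmt.Printf("cfg = %q\n", cfg)
}
```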
That fix really confused me as well. Not only does the code behave differently from the problematic code, but why do they even need the mutex at that point?
> setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.
this only works when the language defines a memory model where bools are guaranteed to have atomic reads and writes
so you can't make a claim like "setting a field to true from ... multiple threads ... can be a meaningful operation e.g. if you only care about if ANY of the threads have finished execution"
as that claim only holds when the memory model allows it
which is not true in general, and definitely not true in go
GP didn’t say “setting a ‘bool’ value to true”, it referred to setting a “field”. Interpreted charitably, this would be done in Go via a type that does support atomic updates, which is totally possible.
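In current Go that's just sync/atomic's atomic.Bool (Go 1.19+); a minimal sketch of the "has ANY goroutine finished" case:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var anyDone atomic.Bool // gives well-defined concurrent Store/Load semantics
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// ... do some work ...
			anyDone.Store(true) // safe from any number of goroutines
		}()
	}

	wg.Wait()
	fmt.Println("at least one goroutine finished:", anyDone.Load())
}
```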
the distinction between "concurrent use" and "concurrent modification" in go is in no way subtle
there is this whole demographic of folks, including the OP author, who seem to believe that they can start writing go programs without reading and understanding the language spec, the memory model, or any core docs, and that if the program compiles and runs that any error is the fault of the language rather than the programmer. this just ain't how it works. you have to understand the thing before you can use the thing. all of the bugs in the code in this blog post are immediately obvious to anyone who has even a basic understanding of the rules of the language. this stuff just isn't interesting.