I am not much of a devops person, but if you run your own DB on a VPS in Docker containers, don't you also need to handle all of this manually?
1) Creating and restoring backups (a rough sketch of this one is below)
2) Optimizing disk access for DB usage (can that even be done from inside Docker?)
3) Disk failure due to non-standard use-case
4) Sharding is quite difficult to set up
5) Monitoring is quite different from normal server monitoring
But surely, for a small app that can run on one big server, hosting the DB yourself is probably still much cheaper. I just wonder how hard it really is and how often you actually run into problems.
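To give a feel for point 1 alone, here is a minimal sketch of a nightly dump-and-prune job run from the host. Everything in it is an assumption for illustration: the container name (db), the database name (app), the backup path, and the retention window are placeholders, not a recommended setup.

    // Nightly Postgres backup for a dockerised DB, run from the host (e.g. via cron).
    // All names here (container "db", database "app", paths) are hypothetical.
    import { execFileSync } from "node:child_process";
    import { mkdirSync, writeFileSync, readdirSync, statSync, unlinkSync } from "node:fs";
    import { join } from "node:path";

    const BACKUP_DIR = "/var/backups/postgres"; // placeholder host path
    const RETENTION_DAYS = 14;                  // placeholder retention window

    function takeBackup(): string {
      mkdirSync(BACKUP_DIR, { recursive: true });
      const file = join(BACKUP_DIR, `app-${new Date().toISOString().slice(0, 10)}.dump`);
      // pg_dump runs inside the container; custom format (-Fc) allows selective restore.
      const dump = execFileSync(
        "docker",
        ["exec", "db", "pg_dump", "-U", "postgres", "-Fc", "app"],
        { maxBuffer: 1024 * 1024 * 1024 },
      );
      writeFileSync(file, dump);
      return file;
    }

    function pruneOldBackups(): void {
      const cutoff = Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000;
      for (const name of readdirSync(BACKUP_DIR)) {
        const path = join(BACKUP_DIR, name);
        if (statSync(path).mtimeMs < cutoff) unlinkSync(path);
      }
    }

    const written = takeBackup();
    pruneOldBackups();
    console.log(`backup written to ${written}; restores still need to be tested separately`);

And that still leaves copying the dumps off the host, alerting when the job fails, and actually rehearsing a restore, which is roughly the point: none of it is hard individually, but all of it is on you.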
My guess is that people either have never worked under real time and reliability constraints and think setting up a database is just running a few commands from a tutorial, or they're very experienced and understand the pitfalls well; most people don't fall into the latter category.
But to answer your question: running your own DB is hard if you don't want to lose or corrupt your data. AWS is reliable and relatively cheap, at least during the bootstrapping and scaling stages.
Yes, but that does not mean it is now clean water. Anything could happen between the moment Google ingests it and spits it back out; the assumption that it is 'just' a little warmer is nice, but it misses the possibility of, for instance, contamination from a secondary circuit or various substances leaching into the water used as coolant.
I know Google Fiber kind of flopped, but if they are already doing their own power generation for data centers, they might decide to sell that power to the public too. What is really scary is that I foresee a day when these big tech companies decide it is more profitable to serve utilities to people than web services. Then, after they have a monopoly in most areas, they will enshittify those too.
I don't think that will happen. Being a utility is hard and margins are not great unless you get some government money, like credits. And even those might go away with a change in regime.
There just isn't enough margin or "free money" for someone like Google.
This is true: to supply software you can build it once and replicate it endlessly; to supply email you need to run servers, but that's commoditised and the team just sees a slider controlling the number of servers.
But to provide power or internet you need to dig up the roads and lay a wire to every house. It's a totally different kind of business to which a tech person is completely unaccustomed. It would be more likely for a plumber or electrician to do such a thing. It's true a tech company could buy wholesale fiber access and provide internet on top of that, like they provide email on top of wholesale servers, but that's only one part of the business.
Tech companies are struggling even to build datacenters right now because they underestimate the work involved. They're really not used to things that don't scale by themselves.
There have been some large-scale companies that went under because of the platforms they chose to develop their products on. The first that comes to mind is MySpace with Dreamweaver.
That requires business logic to run in the frontend in the first place, though. One could argue it shouldn't. Anything that is checked in the frontend needs to be re-checked in the backend anyway, because you cannot trust the frontend: it is under the control of the browser/user.
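A toy illustration of that duplication, with every name invented for the example: the same rule runs in the browser purely for UX and again on the server, because the request may never have gone through the frontend at all.

    // Hypothetical rule: a discount code must be 6-10 uppercase letters or digits.
    // Keeping it in one shared function at least stops the two checks drifting apart.
    export function isValidDiscountCode(code: string): boolean {
      return /^[A-Z0-9]{6,10}$/.test(code);
    }

    // Frontend: a convenience for the user, trivially bypassed with devtools or curl.
    function onSubmit(code: string): void {
      if (!isValidDiscountCode(code)) {
        console.error("Invalid code"); // stand-in for whatever the UI actually shows
        return;
      }
      void fetch("/api/redeem", { method: "POST", body: JSON.stringify({ code }) });
    }

    // Backend: the check that actually matters, since the client is under user control.
    function handleRedeem(req: { body: { code: string } }): { status: number } {
      if (!isValidDiscountCode(req.body.code)) {
        return { status: 400 };
      }
      // ...apply the discount...
      return { status: 200 };
    }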
Electron is quite bad on memory usage because it carries its own V8 environment on top of its own browser platform, on top of using _another_ V8 environment for the Node.js part.
Tauri and Wails just use the one available in the OS (WKWebView/WebKit on macOS, WebView2 on Windows), which is also why they load so fast: you probably already have the heavy part loaded in memory. And, of course, they ship a tiny statically linked binary instead of running on top of a massive runtime.
I once ran into a bug where our server code would crash only on a specific version of the Linux kernel under a specific version of OpenJDK that our client had. At least it crashed at startup, but it was still a good two weeks of troubleshooting because we couldn't change the target environment we were deploying to.
At least it crashed at startup; if it had been random, it would have been hell.
As I have gotten older, I have grown immense respect for older people who can geek out over stuff.
It’s so easy to be cynical and not care about anything, I am certainly guilty of that. Older people who have found things that they can truly geek out about for hours are relatively rare and some of my favorite people as a result (and part of the reason that I like going to conferences).
I like my coworkers and they’re certainly not anti-intellectual or anything, but there’s only so long I can ramble on about TLA+ or Isabelle or Alloy before they lose interest. That’s no fault of theirs at all; there are plenty of topics I am not interested in either.
It seems a common problem in our profession that you can’t really talk to anybody about what you are doing. My friends have a vague idea but that’s it.
I work in music streaming; it is mostly just a lot of really banal business rules that become an entangled web of convoluted if statements. Whether to show a single button might mean hitting 5 different microservices and checking 10 different booleans.
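A caricature of what that kind of check ends up looking like; every service, field, and flag name here is made up, but the shape is the familiar one:

    // Invented example of "should we show the upgrade button?" logic.
    // In practice each field below might come from a different microservice call.
    interface ButtonContext {
      user: { isTrial: boolean; region: string };
      subscription: { plan: string; canUpgrade: boolean };
      catalog: { premiumTierAvailable: boolean };
      experiments: { upgradeButtonVariant: "control" | "treatment" };
      billing: { hasPaymentMethod: boolean; inGracePeriod: boolean };
    }

    function shouldShowUpgradeButton(ctx: ButtonContext): boolean {
      if (!ctx.catalog.premiumTierAvailable) return false;
      if (ctx.subscription.plan === "premium") return false;
      if (!ctx.subscription.canUpgrade) return false;
      if (ctx.user.isTrial && !ctx.billing.hasPaymentMethod) return false;
      if (ctx.billing.inGracePeriod) return false;
      if (ctx.user.region === "XX") return false; // some market-specific carve-out
      return ctx.experiments.upgradeButtonVariant === "treatment";
    }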