I really like the avoidance (elimination) of one-way-door decisions by turning them into several smaller two-way-door decisions. I guess the software development interpretation of it is clearly defined boundaries of responsibility, and avoiding leaking implementation details beyond those?
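A minimal sketch of that interpretation (the names and the storage example are hypothetical): keep the decision behind a narrow boundary, and swapping the implementation later stays a two-way door.

```python
from typing import Protocol

class EventStore(Protocol):
    """The boundary: callers depend only on this interface."""
    def append(self, event: dict) -> None: ...
    def read_all(self) -> list[dict]: ...

class InMemoryEventStore:
    """Today's implementation detail; replacing it later is a two-way door."""
    def __init__(self) -> None:
        self._events: list[dict] = []

    def append(self, event: dict) -> None:
        self._events.append(event)

    def read_all(self) -> list[dict]:
        return list(self._events)

def record_signup(store: EventStore, user: str) -> None:
    # No implementation details leak past the boundary.
    store.append({"type": "signup", "user": user})
```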
> Most people are idiots, and being a manager means you’ll have to deal with it, and perhaps more importantly, that they have to deal with you.
> It turns out that while it’s easy to undo technical mistakes, it’s not as easy to undo personality disorders. You just have to live with theirs - and yours.
All models are wrong. Some (wrong) models are useful.
Computers can't represent 0.1 (in floating point), yet that hasn't stopped anyone from doing finances on their computer.
I don't think this is big news OR a parlor trick. It's just some obscure thing computers can't do that nobody has noticed for 70 years because nobody needed it.
But BCD is not floating point (generally shorthand for the IEEE 754 floating point standard, which nearly every CPU and GPU has hardware support for). And I don't know much about BCD, but it is probably missing niceties like NaN and Inf for capturing edge cases that happen deep inside your equations. These matter if your system is to be reliable.
> generally shorthand for the IEEE 754 Floating Point standard
Yes, generally, but that is just a social convention. There is nothing stopping you from doing floating point in base 10 rather than base 2, and if you do, 0.1 becomes representable exactly. It's just a quirk that 1/10 happens to be a repeating fraction in base 2. It in no way reflects a limitation of computation.
IEEE 754 has defined decimal floating point formats in 32-bit (decimal32), 64-bit (decimal64), and 128-bit (decimal128) widths since the 754-2008 revision. (However, the .NET type I mentioned above, even though it's 128-bit, is its own thing and does not adhere to the standard.)
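A quick illustration using Python's decimal module (which is arbitrary-precision base-10 arithmetic rather than IEEE 754 decimal128, but it makes the same point): 0.1 is inexact in binary floating point and exact in base 10.

```python
from decimal import Decimal

# Binary floating point cannot represent 1/10 exactly...
print(f"{0.1:.20f}")                                       # 0.10000000000000000555
print(0.1 + 0.2 == 0.3)                                    # False

# ...but a base-10 representation can.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```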
> Computers can't represent 0.1 (in floating point), yet that hasn't stopped anyone from doing finances on their computer.
Floating point for financial data may have made sense back when my 386 DX CPU had an FP coprocessor and computation was dog slow.
In this day and age, though, you'll typically be not just a bit but much better off using an abstraction that can represent decimal numbers precisely, which frees you from a great many rounding errors, epsilon comparisons, error-propagation concerns, etc.
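For example (a small Python sketch using the standard-library decimal module; the fee amount and iteration count are made up), the binary rounding error shows up as soon as you accumulate money, while the decimal abstraction stays exact:

```python
from decimal import Decimal

# Summing a $0.10 fee a million times:
float_total = sum(0.1 for _ in range(1_000_000))
exact_total = sum(Decimal("0.10") for _ in range(1_000_000))

print(float_total)   # ~100000.00000133 -- accumulated binary rounding error
print(exact_total)   # 100000.00        -- exact
```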
> If it worked from the start, it wouldn't need to be fixed later.
True. But you ignore the fact that NO SOFTWARE IS EVER DONE. Software always has bugs, and even if it didn't, it will bitrot as the business needs change.
In theory, it's better to have 100% working software. In practice, that never happens (or only happens for a few weeks at best). Eventually the software needs to be changed. In that case, software that is "written for humans" will always be easier to change than "software that used to work, but now we need to change it, but nobody understands it".
SOFTWARE CAN AND SHOULD BE DONE.
Complicated and bloated systems are never done, but not every system (program/tool/etc.) needs to be big, bloated, ugly, and complex. It does not matter whether the software is 'written for humans': if it's too complicated, it will (basically) never be changed or fixed.
On that note, Zawinski's Law:
“Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”
This is mostly true, but it does not have to be. :(
"Software is never rewritten. Projects last longer than expected; programmers get bored or burned out; management moves on the newer challenges. The attitude of ‘good enough’ reflects reality.
Instead of being rewritten, software has features added. And becomes more complex. So complex that no one dares change it, or improve it, for fear of unintended consequences. But adding to it seems relatively safe. We need dedicated programmers who commit their careers to single applications. Rewriting them over and over until they’re perfect. Such people will never exist. The world is too full of more interesting things to do.
The only hope is to abandon complex software. Embrace simple. Forget backward compatibility." - Chuck Moore
> True. But you ignore the fact that NO SOFTWARE IS EVER DONE.
That is just false. I've seen plenty of one-use software, made specifically for trade shows, exhibitions, etc., which was never reused because the entire program was the logic specific to that particular exhibition.
There are also a few thousand gigabytes of old game ROMs and abandonware on the internet, all of which is perfectly done software.
> polls showed that people were opposed to breaking up Microsoft 54-35.
Sure. But that is entirely based on their perception. If you asked "Should Microsoft be allowed to charge Dell a Windows license for every CPU, even if Dell ships a different OS?", you would get a different answer.
The real problem is that any such case takes 10+ years to litigate, so by the time it's decided the landscape has completely changed, sapping the public's will to enforce a remedy.
There is no such thing, because nodes (and even entire datacenters) can fail.
In other words, node failure is an infrastructure problem that is best NOT handled by your bespoke application code. Replacing failed nodes should NOT be custom code in your app, that way lies madness.
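A minimal sketch of that division of labor, assuming the official kubernetes Python client and a cluster already configured in your kubeconfig (the image and names are placeholders): the application contains no failure handling at all; you declare a replica count, and the control plane reschedules pods onto healthy nodes when one dies.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the control plane keeps 3 pods running wherever nodes are healthy
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello", image="nginx:1.25")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```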
> kind of looks like K8s wants to invent Erlang/OTP for infrastructure
You can say "K8s and Erlang implement similar ideas implemented at different levels." But you can't pretend one is a substitute for the other, nor that "Erlang has done everything that K8s can do."
E.g. Writing in Erlang doesn't magically get you:
- Deploy/upgrade of the Erlang runtime, including rollback and multiple co-existing versions, as well as any compiled foreign code (which the article requires).
- Ability to log, monitor, probe, and reroute the connections between services in a standard way, such that the application doesn't have to be modified ("service mesh").
- Ability for an entire ecosystem of tools to inspect which versions of your application services are deployed, because it's exposed as an API.
- A standard ecosystem of plug-ins for operators (autoscaling, autoscaling to EC2 spot instances, capacity planning, "best practices" for running a MySQL cluster, etc.). None of these should ever be mixed into the application. (Unless you are Kelsey Hightower: https://github.com/kelseyhightower/hello-universe )
> There is no such thing, because Nodes (and even entire datacenters) can fail.
Gross over-simplification that I'd also call a strawman. In the face of lack of electricity, of course no computer language matters at all. What's your point?
If you want to be online only part of the time, look into IPFS. Even if you go offline, your popular pages will likely be cached for a while, and can be cached forever if someone takes an interest in them.
> The results may be good enough for plenty of folks, but I highly doubt they will ever be better than the big budget search engines.
On the other hand, that is not the criterion for them to be successful, because they are already better on a different metric: privacy.
I find that 90% of my queries are just "who is that person?" or "what is that thing?". Those queries can be answered by DDG just as well as Google. So I'm not giving up anything, but gaining privacy. More importantly, I'm (subtly) sending a signal to the market that I find privacy is important.
Even on the 10% of queries that are complex, DDG does "good enough". I sometimes wonder if Google has the results sorted better, but I'll gladly waste a few minutes here and there in exchange for privacy.
You could do both privacy and good search results without using DDG: startpage.com
You can also have at least as good privacy (better in my opinion) and better search results (again, in my opinion) with other search engines: qwant.com
What is it that those two examples don't have that DDG does? The evangelist feel of DDG really turns me off, and I'm sure it scares many people off from ever looking at it. It's a bit like listening to Apple fans talking about Steve 'The Juice' Jobs.
It does have one security implication: WireGuard never responds to an unauthenticated peer. Your tunnel does respond to attackers, so now they know your box exists and can probe it for problems. Each TCP connection takes up memory, and if you limit the number of TCP connections, they can run a slowloris attack and tie them all up.
So it's not a security hole, but it is slightly less secure.
> Is it generally reliable asking Amazon support what capacity you need for a particular use case
Everyone's use case is different; they are not experts at your application, you are.
Spend some time learning about the various services, do a deep-dive and maybe even a prototype on the ones that look interesting. There are literally dozens of ways to build any application, depending on what your goals are (low cost, low latency, low maintenance, etc)
> run it for a week
Run it for an hour; you should be able to get a quick cost/benefit picture.
There are a ton of instance types, but you generally only need to test 2 or 3 families, and 2-3 sizes. (Do you need lots of RAM? CPU? Disk? FPGAs? GPUs?). It's worth the time to automate this, so you can periodically test it. (Yes, it will cost you a dollar or two.)
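A rough sketch of the automation idea, assuming boto3 and AWS credentials are set up (the candidate instance types are just examples): pull the specs for a short list of families and sizes so you know what you're about to benchmark, then launch your actual workload against each and compare cost per unit of work. This only covers the inventory step; the hour-long benchmark itself is the part worth scripting on top of it.

```python
import boto3

# Hypothetical shortlist: general-purpose, compute-optimized, memory-optimized.
CANDIDATES = ["m6i.large", "c6i.large", "r6i.large"]

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=CANDIDATES)

for it in resp["InstanceTypes"]:
    name = it["InstanceType"]
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"{name}: {vcpus} vCPUs, {mem_gib:.0f} GiB RAM")
```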
> But what if the service is more network latency sensitive
Don't forget your own part of the stack here. Writing in a scripting language can add milliseconds, as can normalizing your data (i.e., NoSQL usually prefers denormalizing, which trades more duplication for lower latency).
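A toy illustration of that trade-off (the data model is made up): the normalized layout needs a second lookup per read, while the denormalized one duplicates the author name so a single fetch answers the query, at the cost of updating every copy when the name changes.

```python
# Normalized: reading a post's author name takes two lookups (a "join").
users = {"u1": {"name": "Alice"}}
posts = {"p1": {"author_id": "u1", "title": "Hello"}}
author_name = users[posts["p1"]["author_id"]]["name"]

# Denormalized: the author name is duplicated into the post, so one
# read answers the query -- but every copy must be updated on rename.
posts_denorm = {"p1": {"author_id": "u1", "author_name": "Alice", "title": "Hello"}}
author_name = posts_denorm["p1"]["author_name"]
print(author_name)  # "Alice" either way; the difference is the number of lookups
```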
See also the "Linux kernel management style" document that's been in the kernel since forever: https://docs.kernel.org/6.1/process/management-style.html