
Excuse me for sounding rough, but - isn't this reinventing comp-sci, one step at a time?

I learned about distributed, incrementally monotonic logs back in the late '90s, along with many other ways to guarantee transactional database operations. And I'm quite certain these must have been invented in the '50s or '60s, as these are the problems that early business computer users had: banking software. These are the techniques that were buried in legacy COBOL routines and slowly had to be replaced by robust Java core services.

I'm sure the Restate designers will have learned terribly useful insights into how to translate these basic principles into a working system amid the complexities of today's hardware/software ecosystem.

Yet it makes me wonder whether young programmers are only being taught the "move fast, break things" mentality, and there are no longer SW engineers able to build these guarantees into their systems from the beginning, standing on the shoulders of the ancients who invented our discipline so that their lore is actually used in practice. Or am I just missing something new in the article that describes some novel twist?


When I was in school I had an optional requirement: you had to take one out of two or three classes to graduate. The options were compiler design, which was getting terrible reviews from peers who were taking it the semester before me, or distributed computing. There might have been a third, but if so it was unmemorable.

So I took distributed computing. Which ended up being one of the four classes that satisfied the 80/20 rule for my college education.

Quite recently I started asking coworkers if they took such a class and was shocked to learn how many not only didn’t take it, but could not even recall it being an option at their school. What?

I can understand it being rare in the '90s, but the '00s and on were paying attention to horizontal scaling, and the 2020s are rotten with distributed computing concerns. How… why… I don’t understand how we got here.


So many people I work with don't "get" distributed systems and how they interplay and cause problems. Most people don't even know that the ORDER in which you take potentially competing (distributed) locks matters -- which is super important if you have different teams taking the same locks in different services!
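
To make the ordering point concrete, here's a minimal single-process sketch (in Haskell, with MVars standing in for the locks; all the names are made up) of the classic deadlock you get when two sides take the same two locks in opposite orders:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar

    -- Two full MVars stand in for two shared locks. "Service A" takes
    -- lockA then lockB; "service B" takes lockB then lockA. With unlucky
    -- timing each side ends up holding one lock while waiting for the
    -- other: the deadlock caused by inconsistent lock ordering.
    main :: IO ()
    main = do
      lockA <- newMVar ()
      lockB <- newMVar ()
      _ <- forkIO $ do                 -- "service A"
        () <- takeMVar lockA
        threadDelay 100000             -- widen the race window
        () <- takeMVar lockB
        putMVar lockB ()
        putMVar lockA ()
      () <- takeMVar lockB             -- "service B": opposite order
      threadDelay 100000
      () <- takeMVar lockA
      putMVar lockA ()
      putMVar lockB ()
      putStrLn "done (only if the schedule happened to dodge the deadlock)"

In a single process GHC's runtime will typically notice the stalemate and abort with BlockedIndefinitelyOnMVar; across separate services there's no such referee, which is why agreeing on one global lock order up front matters so much.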

The article is well written, but they still have a lot of problems to solve.


I went too far the other way. Concurrent things just fit my brain so well that I created systems that made my coworkers have to ask for help. One coworker who still sticks in my mind after all these years wanted to ask me to knock it off but lacked the technical chops to make it a demand. But I could read between the lines. He was part of my process of coming around to interpreting all questions as feedback.

There’s about 20% of reasonable designs that get you 80% of your potential, and in a world with multiple unrelated workloads running in parallel, most incidental inefficiencies are papered over by multitasking.

The problem is that cloud computing is actively flouting a lot of this knowledge and then charging us a premium for pretending that a bunch of the Fallacies don’t exist. The hangover is going to be spectacular when it hits.


> The hangover is going to be spectacular when it hits.

I'm honestly looking forward to it. We constantly deal with abstractions until they break and we are forced to dive into the concrete. That can be painful, but it (usually) results in a better world to live in.


Cloud will come back in your lifetime and maybe mine. Everything in software is cycles and epicycles. Hyperscaler hardware is basically a supercomputer without the fancy proprietary control software, which is third party now.

I think your points are pretty spot on - most things have already been invented, and there's too much of a move-fast-and-break-things mentality.

Here's a follow-up thought: to what extent did the grey-beards let us juniors down by steering us down a different path? A few instances:

DB creators knew about replicated logs, but we got given DBs, not replicated log products.

The Java creators knew about immutability: "I would use an immutable whenever I can." [James Gosling, 1] but it was years later when someone else provided us with pcollections/javaslang/vavr. And they're still far from widespread, and nowhere near the standard library.

Brendan Eich supposedly wanted to put Scheme into browsers, but his superiors had him make JS instead.

What other stuff have we been missing out on?

[1] https://www.artima.com/articles/james-gosling-on-java-may-20...


James (my source was an insider in the Java team at Sun, pre-Marimba) wrote java.util.Date, which I had my one assistant (Ken Smith of Netscape) translate from Java to C for JS's first Date object. Regrets all around, but it "made it look like Java".

I wish James had been in favor of immutability in designing java.util.Date!


This is certainly building on principles and ideas from a long history of computer science research.

And yes, there are moments where you go "oh, we implicitly gave up xyz (e.g., causal order across steps) when we started adopting architecture pqr (microservices). But here is a thought on how to bring that back without breaking the benefits of pqr".

If you want, you can think of this as one of those cases. I would argue that there is tremendous practical value in that (I've found that to be the case throughout my career).

And technology advances in zigzag lines. You add capability x but lose y on the way, and later someone finds a way to have x and y together. That's progress.


> As far as the market goes, I think part of the problem is that most industries are not true free markets, AKA: high competition, low barriers to entry.

That sounds like the classic "if all you have is a hammer, all problems look like nails".

It's true that decentralised decision making can make good use of local information. But you're ignoring the kind of problems that are created by decentralisation itself, so they can't be fixed by more of it.

A system that is being optimized with only local information is prone to getting stuck in local minima, situations which can be very far from the true optimum.

To escape from the local minima you need to gather global information compiled from the whole system, and use it to alter the decisions that individual actors would make from just their locally available information.
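
To see the local-minima point in the concrete, here's a toy sketch (in Haskell; the landscape and numbers are invented purely for illustration): a greedy search that only looks at neighbouring points gets stuck in the nearest dip, while the same search seeded with a coarse scan of the whole domain finds the deeper one:

    -- A made-up landscape with a shallow dip near x = 1 and a deeper one near x = 4.
    f :: Double -> Double
    f x = (x - 1) ^ 2 * (x - 4) ^ 2 - 0.5 * x

    -- Greedy descent: only local information (the two neighbouring points).
    descend :: Double -> Double
    descend x
      | f left  < f x = descend left
      | f right < f x = descend right
      | otherwise     = x               -- stuck: no better neighbour in sight
      where
        step  = 0.01
        left  = x - step
        right = x + step

    main :: IO ()
    main = do
      let localOnly  = descend 0                          -- starts in the wrong basin
          -- "global information": coarsely sample the whole domain first,
          -- then descend from the best-looking sample
          samples    = [0, 0.1 .. 6]
          bestStart  = snd (minimum [(f x, x) | x <- samples])
          withGlobal = descend bestStart
      putStrLn $ "local only : " ++ show (localOnly, f localOnly)
      putStrLn $ "with global: " ++ show (withGlobal, f withGlobal)

The greedy run stops near x = 1 at a clearly worse value than the run that was first pointed at the right basin by the whole-domain scan.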

That's the role of regulation, and why regulated markets work better than chaotic ones. Regulation can make individuals coordinate to achieve larger goals than what's possible without it. And to enforce effective regulation you need some kind of authority, which is centralised.

Of course that raises the question of how that authority is created and what goals it sets; for that, we get politics, with various groups trying to influence what the authority will decree and whose interests it will pursue harder.


I'd recommend The Fractal Organization,[1] or any other about Viable System Models.[2]

This framework based on cybernetic science provides good heuristics to both understand and design complex social structures.

[1] https://www.amazon.com/Fractal-Organization-Creating-sustain...

[2] https://en.wikipedia.org/wiki/Viable_system_model


> chomsky would present examples like this for decades as untracktable by any algorithm, and as a proof that language is a uniquely human thing

Generative AI has all but solved the Frame Problem.

Those expressions were intractable because of the impossibility of representing in logic all the background knowledge that is required to understand the context.

It turns out it is possible to represent all that knowledge in compressed form, with statistical summarisation applied to humongous amounts of data and processing power, unimaginable back then; this puts the knowledge within reach of the algorithm processing the sentence, which is thus capable of understanding the context.


Which should be expected: since the human brain is finite, it follows that either it's possible to do it, or the brain is some magic piece of divine substrate to which the laws of physics do not apply.

The problem turned out to be that some people got so fixated on formal logic they apparently couldn't spot that their own mind does not do any kind of symbolic reasoning unless forced to by lots of training and willpower.


That’s not what it means at all. You threw a monkey in your own wrench.

The brain has infinite potentials, however only finite resolves. So you can only play a finite number of moves in a game of infinite infinities.

Individual minds have varying mental technology, and our mental technologies change and adapt to challenges (not always in real time); thus these infinite configurations create new potentials that previously didn’t exist in the realm of potential without some serious mental vectoring.

Get it? You were just so sure of yourself you canceled your own infinite potentials!

Remember, it’s only finite after it happens. Until then it’s potential.


> The brain has infinite potentials

No, it doesn’t. The brain has a finite number of possible states to be in. It's an absurdly large number of states, but it is finite. And, out of that absurd but finite number of possible states, only a tiny fraction correspond to states potentially reachable by a functioning brain. The rest of them are noise.


You are wrong! Confidently wrong at that. Distribution of potential, not number of available states. Brain capacity and capability is scalar and can retune itself at the most fundamental levels.

As far as we know, the universe is discrete at the very bottom and continuity is illusory, so that's still finite.

Not to mention, it's highly unlikely anything at that low a level matters to the functioning of a brain - at a functional level, physical states have to be quantized hard to ensure reliability and resistance against environmental noise.


You’ve tricked yourself into a narrative.

Potential is resolving into state in the moment of now!

Be grateful, not scornful, that it all collapses into state (don’t we all like consistency?); that is not, however, what it “is”. It “is” potential continuously resolving. The masterwork that is the mind is a hyperdimensional and extradimensional supercomputer (that gets us by yet goes mostly squandered). Our minds and peripherals can manipulate, break down, and remake existential reality in the likeness of our own images. You seem to complain your own image is soiled by your other inputs or predispositions.

Sure, it’s a lot of work yet that’s what this whole universe thing runs on. Potential. State is what it collapses into in the moment of “now”.

And you’re right, continuity is an illusion. Oops.


Huge amounts of data and processing power are arguably the foundation for the "Chinese room" thought experiment.

I never bought into Searle's argument with the Chinese room.

The rules for translation are themselves the result of intelligence; when the thought experiment is made real (I've seen an example on TV once), these rules are written down by humans, using human intelligence.

A machine which itself generates these rules from observation has at least the intelligence* that humans applied specifically in the creation of documents expressing the same rules.

That a human can mechanically follow those same rules without understanding them, says as much and as little as the fact that the DNA sequences within the neurones in our brains are not themselves directly conscious of higher level concepts such as "why is it so hard to type 'why' rather than 'wju' today?" despite being the foundation of the intelligence process of natural selection and evolution.

* well, the capability — I'm open to the argument that AI are thick due to the need for so many more examples than humans need, and are simply making up for it by being very very fast and squeezing the equivalent of several million years of experiences for a human into a month of wall-clock time.


I didn’t buy that argument at all either.

Minds shuffle information. Including about themselves.

Paper with information being shuffled by rules exhibiting intelligence and awareness of “self” is just ridiculously inefficient. Not inherently less capable.


I don’t think I understand this entirely. The point of the thought experiment is to assume the possibility of the room and consider the consequences. How it might be achievable in practice doesn’t alter this.

The room is possible because there's someone inside with a big list of rules of what Chinese characters to reply with. This represents the huge amount of data processing and statistical power. When the thought experiment was created, you could argue that the room was impossible, so the experiment was meaningless. But that's no longer the case.

If you go and s/Chinese Room/LLM/ against any of the counterarguments to the thought experiment, how many of them does it invalidate?

I'm not sure I'm following you. My comment re Chinese room was that parent said the data processing we now have was unimaginable back in the day. In fact, it was imaginable - the Chinese room imagined it.

I was responding to the point that the thought experiment was meaningless.

Haskell can be taught as recursive programming, learning about accumulator parameters and higher-order functions. A background in logic (which most programmers have to some degree) is more useful than a mathematical approach in that regard.
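
For instance, a minimal sketch of the kind of progression such an intro might start with (nothing more than that): the same sum written with plain recursion, with an accumulator parameter, and with a higher-order fold:

    import Data.List (foldl')

    -- Plain recursion: the pending additions pile up until the list ends.
    sumRec :: [Integer] -> Integer
    sumRec []       = 0
    sumRec (x : xs) = x + sumRec xs

    -- Accumulator parameter: the running total is threaded through the calls,
    -- so the recursive call is in tail position.
    sumAcc :: [Integer] -> Integer
    sumAcc = go 0
      where
        go acc []       = acc
        go acc (x : xs) = go (acc + x) xs

    -- Higher-order function: the recursion pattern itself is factored out.
    sumFold :: [Integer] -> Integer
    sumFold = foldl' (+) 0

All three can be reasoned about by simple equational substitution, which is where a background in logic pays off more than one in calculus-style math.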

> Am I alone in thinking it is perfectly acceptable to ask a non-technical user to open a command line and type something?

You're not alone, but that doesn't make it right; rather it puts you in the large bucket of developers that have no idea of:

1) how and why ordinary non-technical people use computers, and

2) why asking them to perform actions they don't understand is a terrible idea both in terms of usability (because the process is difficult, intimidating and error prone) and security (because you're training them to follow instructions that can easily put their system at risk when requested by a malicious actor).

If the actions to follow are really that simple, they should be automated in full; there's no reason why you would show the users anything more complex than a start button to trigger a script, and a confirmation dialog explaining the risks (and maybe requesting elevated permissions).

But if the process is not so simple that it can be automated behind a single button, then why on earth would you expose them to an interface that requires a complex interaction (copy/pasting various large texts from a web page), shows cryptic messages as feedback, and gives no clue on what to do next if any step has errors?


> asking them to perform actions they don't understand is a terrible idea both in terms of usability (because the process is difficult, intimidating and error prone) and security (because you're training them to follow instructions that can easily put their system at risk when requested by a malicious actor).

I think you greatly overestimate the understanding of nontechnical users. Why is

1) double click icon -> click here -> click there -> type in box -> click ok

usable and secure but

2) open powershell -> type command -> hit enter

not? My suspicion is a surprisingly large fraction of nontechnical users have limited understanding of what they're doing and just blindly follow instructions. What does it matter if they are doing that with a mouse or with a keyboard?

> If the actions to follow are really that simple, they should be automated in full; there's no reason why you would show the users anything more complex than a start button to trigger a script, and a confirmation dialog explaining the risks (and maybe requesting elevated permissions).

I completely agree.

> But if the process is not so simple that it can be automated behind a single button, then why on earth would you expose them to an interface that requires a complex interaction (copy/pasting various large texts from a web page), shows cryptic messages as feedback, and gives no clue on what to do next if any step has errors?

What, like this doesn't happen with GUI applications? Those problems have absolutely nothing to do with whether an interface is graphical or not and everything to do with bad usability in general.


> What does it matter if they are doing that with a mouse or with a keyboard?

It happens to matter a lot, in large part because of a cognitive principle that most programmers should know, but very few do: recognition vs. recall.

Text prompts in general are much harder to use because the user needs to remember the name of the command they need to type, which is a cognitive task far more difficult than recognizing the shape of a UI control with a recognizable command name on it.

> What, like this doesn't happen with GUI applications?

You're right that GUIs are not immune to terrible design; I still remember when open source developers built UIs for their software that were merely a front-end layout of the command line, and you needed to understand the inner workings of the app in full to use the UI.

However, nowadays anyone who builds GUIs for a living has been trained in the principles of usability and has some knowledge of mental models and putting user needs up front, so that style is avoided by everybody but the most isolated do-it-yourself programmers.


As the article explains, fold functions are a simpler way to think about monoids. So in recursive pure functional programming, many important iterative processes (pipelines, accumulators, machine states...) can be expressed as an application of foldl or foldr.
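
A minimal sketch of that correspondence (plain GHC Haskell, not anything specific to the article): folding with a monoid's operation and identity is exactly what mconcat does, and everyday accumulations become folds over a suitable monoid:

    import Data.Monoid (Any (..), Sum (..))

    -- Folding with (<>) and mempty is what the standard mconcat does
    -- (primed name only to avoid clashing with the Prelude's).
    mconcat' :: Monoid m => [m] -> m
    mconcat' = foldr (<>) mempty

    -- Everyday accumulator-style loops, expressed as folds over monoids.
    total :: [Int] -> Int
    total = getSum . foldr ((<>) . Sum) mempty

    anyNegative :: [Int] -> Bool
    anyNegative = getAny . foldr ((<>) . Any . (< 0)) mempty

    main :: IO ()
    main = do
      print (total [1, 2, 3])           -- 6
      print (anyNegative [1, -2, 3])    -- True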


The thing is that you are conflating CPU architectures with computer architectures; in academia they are treated as two different educational topics, for good reason.

The first one covers the lowest level of how logic gates are physically wired to produce a Turing-equivalent computing machine. For that purpose, having the simplest possible instruction set is a pedagogical must. It may also cover more advanced topics like parallel and/or segmented instruction pipelines, but they're described in the abstract, not as current state-of-the-art industry practice.

Then, for actually learning how modern computers work you have another separate one-term course for whole machine architecture. There you learn about data and control buses, memory level abstractions, caching, networking, parallel processing... taking for granted a previous understanding of how the underlying electronics can be abstracted away.


Yeah but that has something to do with

1) commercial hardware pipelines being improved for decades in handling 3D polygons, and

2) graphical AI models being trained on understanding natural language in addition to rendering.

I can imagine a new breed of specialized generative graphical AI that entirely skips language and is trained on stock 3D objects as input, which could potentially perform much better.


>> IOW, he thinks he's found a preference function that's strictly better than any other preference function

>Yes - this seems like a major philosophical reach

On the contrary, I see it as containing a trivial contradiction or paradox.

To evaluate which function is 'strictly better' than the rest, you need to use a ground preference function that defines 'better'; therefore, your whole search process is itself biased by the comparison method you choose to begin with.


Better according to whatever other preference function you started with, regardless of what it was.


Yeah, that's the point. The initial preference function will guide the whole process, unless you change the preference function used for comparison after each step, in which case it may never converge to a stable point.

And even if it converges, different initial functions could lead to totally different final results. That would make it hard to call any chosen function "strictly better than any other".

