
Took this opportunity to become a paying customer. Thank you Kagi. I pray that your sadly inevitable enshittification can be warded off.

That's what the person you're responding to meant - attempts to make human systems "rational" often involve simplifying dependent probabilities and presenting them as independent.

The rationalist community both frequently makes this reasoning error and is aware that it frequently makes it, and coined this term to refer to that category of reasoning mistake.


> coined this term to refer to the category of reasoning mistake.

That's not at all what the Wikipedia article for it says. It presents it as an interesting paradox with several potential (and incorrect!) "remedies" rather than a category of basic logical errors.


The “quadrillion days of happiness” offered to a rational person gives away that such allegories are anthropomorphized just for the sake of presentation. To get at what the philosophers mean, you should probably imagine this as an algorithm running on a machine (no AGI).

It’s a mental tease, not a manual on how to react when faced with a mugger who forgot his weapon at home but has an interesting proposition to make.

Similarly the trolley problem isn’t really about real humans in that situation, or else the correct answer would always be “do nothing”.

It’s what the comment here [0] says. If you try to analyze everything purely rationally it will lead to dark corners of misunderstanding and madness.

[0] https://news.ycombinator.com/item?id=42902931


> Similarly the trolley problem isn’t really about real humans in that situation, or else the correct answer would always be “do nothing”.

The correct answer, if it were about real people, is of course to switch immediately after the front bogie makes it through. This way, the trolley will derail, make a sharp turn before it runs over anyone, and stop.

The passengers will get shaken, but I don’t remember fatalities being reported when such things happened for real with real trams.


The scenario is set up by an evil philosopher though, so they can tie up the people arbitrarily close to the split in the rails, such that your solution doesn’t work, right?

In this case it won’t matter, I’m afraid, which way the trolley goes, as it will at the very least mangle both groups of people, and the only winning move is to try to move as many people as possible away from the track.

An Eastern European solution is to get a few buddies to cut the electrical wire that powers the trolleys and sell it for scrap metal, which works on all electrical trolleys. (After the trolley stops, it can be scavenged for parts you can sell as scrap metal, too.)


> An Eastern European solution

Made me chuckle. Funny 'cause it's true. About the trolley problem, if taken literally (people on tracks, etc.), pulling the lever exposes you to liability: you operated a mechanism you weren't authorized to use and for which you had no prior training, and you decided to kill one innocent person who was previously safe from harm.

Giving CPR is a very tame/safe version of the trolley problem, and in some countries you're still liable for what happens afterwards if you do it. Same when donating food to someone who might starve. Giving help has become a very thorny issue. But consciously harming someone while giving help in real life is a real minefield.

P.S. These philosophical problems are meant to force a decision from the options given. So assume the problem is just multiple choice with two answers. You don't get to write a third.


> P.S. These philosophical problems are meant to force a decision from the options given. So assume the problem is just multiple choice with two answers. You don't get to write a third.

I know about it. And yet I refuse to play the game. The problem is that even philosophers should be able to acknowledge that in the real universe, no box should be so big that you can't think outside of it.

Otherwise we get people who conflate the map with the territory, which is what this whole comment thread is about.


> The “quadrillion days of happiness” offered to a rational person gives away that such allegories are anthropomorphized just for the sake of presentation.

So what? It's still presented as if it's an interesting problem that needs to be "remedied", when in fact it's just a basic maths mistake.

If I said "ooo look at this paradox: 1 + 1 = 2, but if I add another one then we get 1 + 1 + 1 = 2, which is clearly false! I call this IshKebab's mugging.", you would rightly say "that is dumb; go away" rather than write a Wikipedia article about the "paradox" and "remedies".

> Similarly the trolley problem isn’t really about real humans in that situation, or else the correct answer would always be “do nothing”.

It absolutely wouldn't. I don't know how anyone with any morals could claim that.


Interestingly, the trolley problem is decided every day, and humanity does not change tracks.

There are people who die waiting for organ donors, and a single donor could match multiple people. We do not find an appropriate donor and harvest them. This is the trolley problem, applied.


I would pull the lever in the trolley problem and don't support murdering people for organs.

The reason is that murdering people for organs has massive second-order effects: public fear, the desire to avoid medical care if harvesting is done in those contexts, disproportionate targeting of the organ harvesting onto the least fortunate, etc.


The fact that forcibly harvesting someone’s organs against their will did not make your list is rather worrying. Most people would have moral hangups around that aspect.

Yeah, it doesn’t seem quite right to say that the trolley problem isn’t about real people. I mean, the physical mechanical system isn’t there, but it is a direct abstraction of decisions we make every day.

> the trolley problem isn’t about real people

My actual words quoted below give one extra detail that makes all the difference, one that I see people silently dropping in a rush to reply. The words were aimed at someone taking these problems too literally, as extra evidence that they are not to be taken as such but as food for thought that has real-life applicability.

> the trolley problem isn’t really about real humans in that situation


> We do not find an appropriate donor and harvest them. This is the trolley problem, applied.

I don't think that matches the trolley problem particularly well for all sorts of reasons. But anyway your point is irrelevant - his claim was that the trolley problem isn't about real humans, not that people would pull the lever.

Edit: never mind, I reread your comment and I think you were also agreeing with that.


> his claim was that the trolley problem isn't about real humans

Is it though? Let's look at the comment [0] written 8h before your reply:

> the trolley problem isn’t really about real humans in that situation

As in "don't take things absolutely literally like you were doing, because you'll absolutely be wrong". You found a way to compound the mistake by dropping the critical information then taking absolutely literally what was left.

[0] https://news.ycombinator.com/item?id=42907977


That's only because LLMs haven't been a target until now. Search worked great back before everything became algorithmically optimised to high hell. Over time, the quality of information degrades because as metric manipulation becomes more effective, every quality signal becomes weaker.

Right now, automated knowledge gathering absolutely wipes the floor with automated bias. Cloudflare has an AI blocker which still can't stop residential proxies with suitably configured crawlers. The technology for LLM crawling/training is still mostly unknown, even to engineers, so no SEO wranglers have been able to game training data filters successfully. All LLMs have access to the same dataset - the internet.

Once you:

1. Publicly reveal how training data is pre-processed
2. Roll out a reputation score that makes it hard for bots to operate
3. Begin training on non-public data, such as synthetic datasets
4. Give manipulated data a few more years to accumulate and find its way into training data

It becomes a lot harder.


If my server is unreliable, adding an unreliable backup is better than nothing.


That really depends. There are times when adding backups or "safety" features can make circumstances worse.


Exactly, I've had cases when half-assed "backup" components led to cascading failures that were horribly difficult to troubleshoot.


Maybe, but do you think it is good enough?


Engineer who uses TikTok here, I'll let you know once I become a communist.


Okay, I'll bite, I searched and couldn't find them. What are they called?



Upvoted for hacking on hacker news


- Shitty clickbait? check

- Phony anecdote? check

- Title riddled with spelling errors? check

- Instantly shadowbanned from Reddit for spam? check

- Posting a link to a Reddit post because your domain is already banned from HN for spam? check

This is just sad, dude.


I also don't produce code that meets my standards by default. I learned to code by competing in programming competitions, so I had a tendency to test by pushing and to write incomprehensible one-liners. Reading books helped me learn to write code worthy of a software engineer; it didn't come automatically. Similarly, when I'm specific enough about what I ask for, Claude can do what I want much faster than I could by hand. That said, Claude definitely does accelerate turning your codebase into mud if you let it handle all of its own thinking.

I see this sentiment a lot on Hacker News and I really think it is similar to when new high level programming languages dropped and people were concerned about how un-optimized the compiled code was going to be.


> The new API is trying to move away from a model where subobjects in an API resource are expanded by default, to one where they need to be requested with an include parameter. We had plenty of discussions about this before I left.

This feels like the worst middle ground between REST and GraphQL. All of the data flexibility of GraphQL with the static schemas of REST. Wasn't this kind of thing the whole idea underpinning GraphQL?

Maybe you can get around this with new SDK generators handling type safety for you, but I am definitely not looking forward to trying to understand the incomprehensible 5 layers of nested generics needed to make that work.

I remember looking up to Stripe as pioneers of developer experience. This reads like a PM with their back against the wall with a mandate from above (make requests n% faster) rather than a developer-first design choice made to help people build better systems.
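For illustration, the shift looks roughly like this. This is a hypothetical sketch only: the endpoint paths, the "include" parameter name, and the response shapes are made up here, not Stripe's actual API.

    // Hypothetical sketch: URLs, parameter names, and shapes are illustrative only.

    // Expanded-by-default: every sub-object comes back whether or not you read it.
    const full = await fetch("https://api.example.com/v1/invoices/inv_123")
      .then((r) => r.json());
    // => { id, customer: {...}, payment_intent: {...}, line_items: [...] }

    // Include-based: the caller opts in to the sub-objects it actually uses.
    const slim = await fetch(
      "https://api.example.com/v2/invoices/inv_123?include=customer,line_items"
    ).then((r) => r.json());
    // => { id, customer: {...}, line_items: [...] }; everything else stays a bare ID

Whether that trade-off is worth the extra ceremony for callers is exactly the developer-experience question raised above.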


My team did this at Square too.

When you give everyone a grab bag of everything without asking them what they need, it takes longer to materialize the other entities from other caches and systems, especially in bulk APIs. Most of your callers don't even need or read this data. It's just there, and because you don't know who needs what, you can never remove it or migrate away.

By requiring the caller to tell you what it wants, you gain an upper hand. You can throttle callers that request everything, and it gives you an elegant and graceful way to migrate the systems under the hood without impacting the SLA for all callers. You also learn which callers are using which data and can have independent conversations, migrations, and recommendations for them.

Each sub-entity type being requested probably has a whole host of other systems maintaining that data, and so you're likely dealing with active-active writes across service boundaries, cache replication and invalidation, service calls, and a lot of other complexity that the caller will never see. You don't want the entire universe of this in every request.

It's a nightmare to have everything in the line of every request for simply legacy reasons. If you have to return lots of sub-entities for everyone all the time, you're more likely to have outages when subsystems fail, thundering herd problems when trying to recover because more systems are involved, and longer engineering timelines due to added complexity of keeping everything spinning together.

By making the caller tell you what they need, you quantitatively know which systems are the biggest risk and impact for migrations. This moves the world into a more maintainable state with better downstream ownership. Every request semantically matches what the caller actually wants, and it hits the directly responsible teams.

Stripe might also be dealing with a lot of legacy migrations internally, so this might have also been motivated as they move to a better internal state of the world. Each sub-entity type might be getting new ownership.

Grab bag APIs are hell for the teams that maintain them. And though the callers don't know it, they're bad for them too.
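A minimal sketch of the pattern, with hypothetical entity names and stub data sources standing in for those downstream systems (none of this is Square's or Stripe's actual code): the handler only touches the subsystems the caller asked for, and records which ones, which is the signal that makes the throttling and migrations described above possible.

    // Hypothetical sketch: entity names and data sources are stand-ins, not real internals.
    type Include = "customer" | "line_items";

    interface Invoice { id: string; customerId: string; amount: number; }

    // Stub stores standing in for other teams' caches and services.
    const invoices: Record<string, Invoice> = {
      inv_123: { id: "inv_123", customerId: "cus_9", amount: 4200 },
    };
    const customers: Record<string, unknown> = { cus_9: { id: "cus_9", email: "a@example.com" } };
    const lineItems: Record<string, unknown> = { inv_123: [{ description: "widget", amount: 4200 }] };

    async function getInvoice(id: string, includes: Set<Include>) {
      const invoice = invoices[id]; // the cheap core record, always returned

      // Only touch the subsystems the caller asked for, and record which ones,
      // so each downstream team can see who actually depends on its data.
      const expansions: Record<string, unknown> = {};
      if (includes.has("customer")) expansions.customer = customers[invoice.customerId];
      if (includes.has("line_items")) expansions.line_items = lineItems[id];
      console.log("includes requested:", Array.from(includes).sort().join(","));

      return { ...invoice, ...expansions };
    }

    // A caller that only needs line items never touches the customer system at all.
    getInvoice("inv_123", new Set<Include>(["line_items"])).then(console.log);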


Sounds like boring code with lots of plumbing scores yet another point against magically flexible code claiming to handwave away complexity


> against magically flexible code claiming to handwave away complexity

It might have just been scope creep over time that became a mountain of internal technical debt, data dependencies, and complexity. That's difficult to cleanly migrate away from because you can't risk breaking your callers. That's what it was in our case.


I think it's the flexible middle-ground that REST APIs and GraphQL APIs converge on. GraphQL APIs that are completely open are trivially DOS'd with recursive data loops or deeply nested over-fetching requests and hence need to be restricted down to acceptable shapes -- thus converging on essentially the same solution from the opposite direction when constructing a GraphQL API.


Don't most production-ready GraphQL servers have some sort of static query cost estimator that is intended to be hooked up to a rate limiter? At the bare minimum, it should be very easy to set up simple breadth+depth limits per request.

This doesn't seem meaningfully more complex than rate limiting a REST API, especially a REST API with configurable "includes".
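For what it's worth, a hand-rolled depth cap is only a few lines. Here is a rough sketch assuming the graphql npm package's parser; real servers usually reach for a maintained depth/cost validation rule instead, and this ignores fragments for brevity.

    // Rough sketch of a per-request depth cap using the `graphql` package's AST.
    import { parse, type SelectionSetNode } from "graphql";

    function depthOf(set: SelectionSetNode | undefined): number {
      if (!set) return 0;
      // Fragment spreads are ignored here for brevity; a real rule would resolve them.
      const childDepths = set.selections.map((sel) =>
        "selectionSet" in sel ? depthOf(sel.selectionSet) : 0
      );
      return 1 + Math.max(0, ...childDepths);
    }

    function assertDepth(query: string, maxDepth: number): void {
      for (const def of parse(query).definitions) {
        if (def.kind === "OperationDefinition" && depthOf(def.selectionSet) > maxDepth) {
          throw new Error(`query exceeds max depth of ${maxDepth}`);
        }
      }
    }

    // Throws: friends-of-friends-of-friends style over-fetching.
    assertDepth("{ user { friends { friends { friends { name } } } } }", 3);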


> trivially DOS'd with recursive data loops or deeply nested over-fetching requests

The depth of recursion can be limited in servers like Apollo.

Maybe “trivially easy to misconfigure”


Yeah. First you limit the depth of recursion.

Then you limit which objects can nest which other objects, under which circumstances..

Pretty soon you have a prescribed set of shapes that you allow... and you've converged on the same solution as achieved in the other direction by the REST API requiring explicit data shape inclusion from the caller.


That’s a slippery slope that I don’t think holds up.

When designing the schema, you keep performance and security in mind.

You need to do the same for REST APIs.

Just because some nodes don’t have edges that connect to some other nodes does not mean you’re back at REST.

The main benefit of GraphQL, namely not creating super-rigid contracts between the frontend and the backend or between services, is maintained.


> and you've converged on the same solution as achieved in the other direction by the REST API requiring explicit data shape inclusion from the caller.

Yes, and with GraphQL you didn't have to invent your own way to represent the syntax and semantics in the query string, and you get to use the GraphQL type system and tooling.

