You read it right. There's no contradiction. The famous original bit started with "a man and his son". This bit is certainly part of the LLM's training corpus, so it's expected to acknowledge it when you mention it.

The thing is, you didn't mention that bit to the LLM. You mentioned a completely different scenario, basically two people who happen to be cousins, but you presented it in the same style. The issue is not a hidden contradiction or a riddle; the issue is that the LLM completely ignored the logical consequences of the scenario you actually presented.

It's like asking it for the name of the brave Greek hero in the battle where the famous Trojan Cow was present. If you get "Achilles", it's obviously wrong: there was never a Trojan Cow to begin with!


But you're also using "social media" as shorthand for "algorithmic-based attention-maximizing recommendation machine". That's the current implementation of the biggest and most impactful social networks.

Networks that don't work with that model tend to be much more wholesome. And they work.


Sure, different forms of social media could exist. And then get outcompeted by, as you aptly call them, "algorithmic-based attention-maximizing recommendation machines".

Even if you introduced regulations against many of these practices, corporations would still strive to optimise this aspect of their platforms in different ways.

Even if we removed capitalistic incentives from the equation, more attention-grabbing platforms would still be selected for.

I'm not saying it's an intractable problem, but rather that this outcome is happening for a very good reason.


Yes, because it's not intractable. It's just a good definition and statement of the problem. Without that, you can't even begin thinking about solutions or make sensible assessments.

Sharks are not vicious killing machines. Hungry and aggressive instances of sharks are killing machines.


You'd need to somehow get rid of every predatory platform that exists to even entertain the notion of people making the switch to a new, less-engaging one.


Messaging platforms like WhatsApp come to mind.


Well, these are messengers, not social media. The only social media platforms without algorithms I can think of are decentralized, i.e. what can be described as the "fediverse".


Please supply sources for this outrageous claim.



This is an outstanding resource, thanks!


Agree with the spirit of the argument, but I disagree that it's bad design. BCrypt has its trade-offs; you're expected to know how to use it when you use it, especially if you're using it by choice.

It's like complaining about how dangerous an axe is because it's super sharp. You don't complain; you just don't grab it by the blade, you grab it by the handle.


If passing more than 72 bytes to a function makes it silently fail, it IS bad design, especially for a sensitive, security-related function. The first condition in the function should be `if len(input) > 72 then explicitly fail`

Not letting people use your API incorrectly is API design 101.

To be clear, this is not the fault of the bcrypt algorithm; all algorithms have their limitations. This is the fault of bcrypt libs everywhere: when they implement bcrypt, they should add this check, and maybe offer an unsafe_ alternative without it.
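
A minimal sketch of what such a check could look like in Python, using the pyca bcrypt package; `safe_hashpw` and `BCRYPT_MAX_BYTES` are illustrative names, not an existing API:

    import bcrypt

    BCRYPT_MAX_BYTES = 72  # bcrypt reads only the first 72 bytes of its input

    def safe_hashpw(password: str) -> bytes:
        data = password.encode("utf-8")
        if len(data) > BCRYPT_MAX_BYTES:
            # fail loudly instead of silently truncating
            raise ValueError("password longer than %d bytes" % BCRYPT_MAX_BYTES)
        return bcrypt.hashpw(data, bcrypt.gensalt())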


There is no other answer than this. Silent failures are never acceptable, even if documented. Because despite what we want to believe about the world, people don’t read the docs, or read them and forget, or read them and misunderstand.


If your crypto library works like an axe and the methods aren't prefixed with "unsafe_", the library is bad. I would expect an exception when a hashing function gets an argument that's too long, not silent dropping of the excess input. Who thinks that's the best choice?


Can't believe the answers you're getting. The answer's a big fat NO. If you find yourself in that situation, there's something very wrong with your design.


So how would you design it instead?


    # derive the cache key from identity only, never from the password
    key = anyhash(uuid + username)
    if (result := cache.get(key)):
        # verify the presented password against the stored hash
        if hash_and_equality(password, result.password_hash):
            return result.the_other_stuff
    # cache miss or password mismatch: try the real login, or else fail


Some insight into why this is good and why including the password as input in the derivation of the cache key is terrible would be appreciated.


With no password in the key, it's mildly cleaner to drop entries on a password change: even if the cache never got the command to drop the key, the next login would overwrite the old key's value anyhow, instead of potentially leaving one key per password that was valid in the short window around a password change (see the sketch below).

Of course, if any old sessions or passwords remain valid around a password change, you are doing something wrong.

My personal wondering is: considering a KDF is meant to be expensive, why is the IO even more expensive, to the point that it needs a cache?
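
To illustrate the invalidation point (a sketch reusing the hypothetical `anyhash` and `cache` names from the snippet above):

    # identity-derived key: a password change touches exactly one entry,
    # and the next successful login overwrites it anyway
    def on_password_change(uuid, username):
        cache.delete(anyhash(uuid + username))

    # with the password mixed into the key, every password that was briefly
    # valid around the change leaves its own stale entry until it expires:
    #   anyhash(uuid + username + old_password)
    #   anyhash(uuid + username + new_password)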


Thanks, good points.

> why is IO more expensive to the point it needs a cache

The advisory mentions it's only exploitable if the upstream auth server is unresponsive. So it seems to be mainly for resilience.


Related to the subject, can anyone recommend a book about timezones? Not a technical programming book, but one about the history of time zones and curious use cases?


It is not exactly what you're looking for, but long ago I read this book about the invention of the naval chronometer:

https://en.m.wikipedia.org/wiki/Longitude_(book)

It's generally pretty well-regarded and closely related.


Revolution in Time is drier but still interesting, and it covers everything from Babylon up past the quartz crisis.


Thanks a lot!


Point 3 is dishonest, imo.

There is no evidence today to support the idea that jj will be a mainstream tool in the future. An even heavier burden of proof falls on the idea that it could be adopted by mainstream forges, so it's certainly not an argument for how jj is better than git.


Interesting topic! Completely disagree with his take though.

> The centrality of computing stems from the fact that it is a technology that has been changing the world for the past 80 years, ever since the British used early computing to change the tide of war in World War II.

I take issue with the idea hinted at here. Algebra and other branches of mathematics were invented to deal with daily, down-to-earth issues like accounting and farming, yet you'd be hard pressed to find a consensus that mathematics "aims to explain the real world".

The historical origin is very clear, but the train left the "real world" station pretty fast, "unreasonable effectiveness" notwithstanding. Am I to understand that because Enigma was broken using a physical machine, the field is bound to study physical reality? To me this feels as uncomfortable as referring to astronomy as "telescope studies".

> I believe that thinking of TCS as a branch of mathematics is harmful to the discipline. [...]

> Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing

Yeah, if you hired me to design harmful approaches, not in a year would I have come up with something as harmful as this.


I can't stop thinking parent comment is ChatGPT output


We should delve into that.

