_this_ is the Judgement Day we were warned about--not in the nuclear annihilation sense--but the "AI was then let loose on all our codez and the systems went down" sense
So it turns out that AI is just like another function: inputs and outputs, and the better you design your input (prompt), the better the output (intelligence). Got it.
The Bitter Lesson claimed that the best approach was to go with more and more data to make the model more and more generally capable, rather than adding human-comprehensible structure to the model. But a lot of LLM applications seem to add missing domain structure until the LLM does what is wanted.
The Bitter Lesson states that you can overcome the weakness of your current model by baking priors in (i.e. specific traits about the problem, as is done here), but you will get better long-term results by having the model learn the priors itself.
That seems to have been the case: compare the tricks people had to do with GPT-3 to how Claude Sonnet 3.6 performs today.
The Bitter Lesson pertains to the long term. Even if it holds, it may take decades to be proven correct in this case. Short-term, imparting some human intuition is letting us get more useful results faster than waiting around for "enough" computation/data.
Improving model capability with more and more data is what model developers do, over months. Structure and prompting improvements can be done by the end user, today.
Not trying to nitpick, but the phrase "AI is just like another function" is too charitable in my opinion. A function, in mathematics as well as programming, transforms a given input into a specific output in the codomain space. Per the Wikipedia definition,
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function.
Not to call you out specifically, but a lot of people seem to misunderstand AI as being just like any other piece of code. The problem is, unlike most of the code and functions we write, it's not simply another function, and even worse, it's usually not deterministic. If we both give a function the same input, we should expect the same output. But this isn't the case when we paste text into ChatGPT or something similar.
LLMs are literally a deterministic function from a bunch of numbers to a bunch of numbers. The non-deterministic part only comes when you randomly pick the next token based on the probabilities (deterministically) computed by the model.
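A minimal sketch of that split, using a toy stand-in for the forward pass (the `forward` and `softmax` functions here are illustrative, not any real model's API):

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def forward(context_tokens):
    # Stand-in for the network's forward pass: toy logits computed purely
    # from the input, so the same context always yields the same distribution.
    logits = [float(sum(context_tokens) % 7), float(len(context_tokens)), 3.0, 0.5, 1.5]
    return softmax(logits)

probs = forward([12, 7, 42])
assert probs == forward([12, 7, 42])  # deterministic: same input, same output

# Greedy decoding (always take the most likely token) stays deterministic.
greedy_token = max(range(len(probs)), key=probs.__getitem__)

# Sampling is where the randomness actually enters.
sampled_token = random.choices(range(len(probs)), weights=probs, k=1)[0]
print(greedy_token, sampled_token)
```

(In practice even greedy decoding on real serving stacks can vary a little due to floating-point and batching effects, but the point stands: the sampling step is where the randomness is introduced by design.)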
Perhaps you're a testament to why we actually want "managers who are also engineers" in these roles -- precisely for cases like this, where you have the experience to know what "done" means.
That's exactly why it's written this way -- to strip the reader of any prejudice
and humanize this tragic story -- because it is tragic. If it were written "normally", like "Elvis' Grandson Ben killed himself, and here's why...", some people might hand-wave it away as just another "poor little rich boy" story. That attitude is also probably one of the reasons for his suicide (IMO, of course). I mean, he's Elvis' grandkid, he couldn't possibly be unhappy, right?
As to whether or not this is appropriate for HN, I'm also not so sure... but I enjoyed reading it.
I think you're taking my example a little too literally there... it's an example. The point is to save the reveal (spoilers?) till the end so that you can relate to the story more.
I find it difficult to relate to a story about growing up as the ultra-wealthy grandson of a famous musician. The fact that the musician is Elvis specifically isn't what makes it unrelatable, so hiding that doesn't help.
And personally, adding the "who is it" mystery made the story less relatable. Instead of reading it and empathizing, I was trying to figure out who they were talking about. Then there's the morbid reveal of "Aha! It was Ben Keough who committed suicide."