Put in a couple of MIDI ports and I'll pretend it's a modern day Atari ST and run some Cubase...


_this_ is the Judgment Day we were warned about--not in the nuclear annihilation sense--but in the "AI was let loose on all our codez and the systems went down" sense

crazy times...


Thank you for taking up the mantle!


So it turns out that AI is just like another function, inputs and outputs, and the better you design your input (prompt), the better the output (intelligence). Got it.


The Bitter Lesson claimed that the best approach was to go with more and more data to make the model more and more generally capable, rather than adding human-comprehensible structure to the model. But a lot of LLM applications seem to add missing domain structure until the LLM does what is wanted.


The Bitter Lesson states that you can overcome the weakness of your current model by baking priors in (i.e. specific traits about the problem, as is done here), but you will get better long-term results by having the model learn the priors itself.

That seems to have been the case: compare the tricks people had to do with GPT-3 to how Claude Sonnet 3.6 performs today.


The Bitter Lesson pertains to the long term. Even if it holds, it may take decades to be proven correct in this case. Short-term, imparting some human intuition is letting us get more useful results faster than waiting around for "enough" computation/data.


Improving model capability with more and more data is what model developers do, over months. Structure and prompting improvements can be done by the end user, today.


Not trying to nitpick, but the phrase "AI is just like another function" is too charitable in my opinion. A function, in mathematics as well as programming, transforms a given input into a specific output in the codomain space. Per the Wikipedia definition,

    In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y.[1] The set X is called the domain of the function[2] and the set Y is called the codomain of the function.[3]
Not to call you out specifically, but a lot of people seem to misunderstand AI as being just like any other piece of code. The problem is that, unlike most of the code and functions we write, it's not simply another function, and even worse, it's usually not deterministic. If we both give a function the same input, we should expect the same output. But this isn't the case when we paste text into ChatGPT or something similar.
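
As a toy illustration (nobody's real API -- Python's random module stands in for the sampler, and toy_llm is a made-up name), the contrast looks something like this:

    import random

    def square(x):
        # a function in the mathematical sense:
        # the same input always maps to exactly one output
        return x * x

    def toy_llm(prompt):
        # stand-in for a sampled LLM call: the same input
        # can map to different outputs across calls
        return random.choice(["yes", "no", "maybe"])

    print(square(4), square(4))          # 16 16, on every run
    print(toy_llm("hi"), toy_llm("hi"))  # may differ between calls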


LLMs are deterministic. It's just that the random seed is hidden from you, but is still an input.


LLMs are literally a deterministic function from a bunch of numbers to a bunch of numbers. The non-deterministic part only comes in when you randomly pick a token based on the probabilities (deterministically) computed by the model.
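
A minimal sketch of that split (made-up logits standing in for a real forward pass, Python's random module as the sampler): the distribution is computed deterministically, and once you fix the seed even the "random" pick is reproducible.

    import math
    import random

    def softmax(logits):
        # deterministic: the same logits always yield the same distribution
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.5]  # pretend forward-pass output
    probs = softmax(logits)   # identical on every run

    rng = random.Random(42)   # the "hidden" seed, made explicit
    token_id = rng.choices([0, 1, 2], weights=probs)[0]
    # same seed -> same sampled token every run; drop the seed
    # and only this sampling step becomes non-deterministic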


You got that 100% right. The title should be "The day I told (not taught) AI to read code like a Senior Developer".


Perhaps you're a testament to why we actually want "managers who are also engineers" in these roles - precisely for cases like this, where you have the experience to know what "done" means.


Semantic Web FTW xD


This was dramatized in Derren Brown's "The System"[1] some years ago, but instead of stock picks he used horse race betting.

It's amazing to see the people who won the previous N rounds believe that their next tip was a "sure thing".

[1] https://www.youtube.com/watch?v=zv-3EfC17Rc


That's exactly why it's written this way -- to strip the reader of any prejudice and humanize this tragic story -- because it is tragic. If it were written "normally", like "Elvis' Grandson Ben Killed Himself, and Here's Why...", some people might hand-wave it away as just another "poor little rich boy" story. That dismissiveness is also probably one of the reasons for his suicide (IMO, of course). I mean, he's Elvis' grandson, he couldn't possibly be unhappy, right?

As to whether or not this is appropriate for HN, I'm also not so sure... but I enjoyed reading it.


The first sentence is

>The grandson of a legendary musician, Ben grew up in wealth and luxury.

If they're trying to avoid the "poor little rich boy story," they did a poor job.

I said nothing about it being unsuited for HN. I take issue with the storytelling, not with it being posted here.


> "poor little rich boy story,"

I think you're taking my example a little too literally there... it's an example. The point is to save the reveal (spoilers?) till the end so that you can relate to the story more.


I find it difficult to relate to a story about growing up as the ultra-wealthy grandson of a famous musician. Them being related to Elvis doesn't make it any less relatable.

And personally, adding the "who is it" mystery made the story less relatable. Instead of reading it and empathizing, I was trying to figure out who they were talking about. Then there's the morbid reveal of "Aha! It was Ben Keough who committed suicide."


> 01:04 "And I can't even say his name"

Well, it looks like a big fail right off the bat.


I believe that would be BONDS (or GILTS if you're in the UK)

