
I liked my X1 Carbon Gen 7, but the motherboard died after two years and was not repairable, so it's my biggest recent regret.


Except that the bubble's money is not being invested into cutting-edge ML research, but only into LLMs. And it has been obvious from the start to anyone even half-competent on the topic that LLMs are not the path to AGI (if such a thing ever happens anyway).


I don't think it's that obvious; in fact, the 'bitter lesson' teaches us that simple scale leads to qualitative, not just quantitative, improvement.

It does look like this is now topping out, but that's still not certain.

It seems to me a couple of simple innovations, like the transformer, could quite possibly lead to AGI, and the infrastructure would 'light up' like all that overinvested dark fiber in the 90s.


They have pretty much already won Cold War 2, and achieved this by using US capitalism and tech against itself.


GitHub is implemented in Ruby (on Rails), and now there is apparently a strong internal push from Microsoft to code everything using AIs/LLMs.

IMO, mixing a dynamically typed language, a framework built on magic conventions, and vibe coding sounds like the perfect recipe for disaster.


Are you suggesting that we use an LLM as an interface between the AI and the player?

Why would anyone choose to awkwardly play using natural language rather than a reliable, fast and intuitive UI?


No, I think they're suggesting the LLM should literally be "talking shit", e.g. in a chat window alongside the game UI, as if you're in a live chat with another player. As in, use the LLM for processing language, and the chess engine for playing chess.

I think this is quite an amusing idea, as the LLM would see the moves the chess engine made and comment along the lines of "wow, I didn't see that one coming!", very Roger Sperry (whose split-brain patients confabulated explanations for actions they hadn't consciously chosen).
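A rough sketch of that split, assuming the python-chess library driving a local Stockfish binary, with a purely hypothetical llm_comment() placeholder standing in for whatever chat model does the talking:

    # Sketch: the chess engine decides the moves, the LLM only talks about them.
    # Assumes python-chess is installed and a Stockfish binary exists at the given path.
    import chess
    import chess.engine

    def llm_comment(san_move: str) -> str:
        # Hypothetical placeholder for a chat-model call that reacts to the move
        # the way a human opponent would in a live chat window.
        return f"Wow, I didn't see {san_move} coming!"

    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

    result = engine.play(board, chess.engine.Limit(time=0.1))  # engine picks the move
    san = board.san(result.move)                               # human-readable notation
    board.push(result.move)
    print(san, "-", llm_comment(san))  # the language part is handled separately

    engine.quit()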


And having a better understanding of psychology is a good thing, isn't it?


The Manhattan Project was an engineering project, not so much a scientific one. It was only made possible because the science and theories behind it had been discovered first.


Hallucinations are not a bug or an exception, but a feature. Everything an LLM outputs is 100% made up, with a heavy bias towards what it was fed in the first place (human-written content).

The fundamental reason it cannot be fixed is that the model does not "know" anything about reality; there is simply no such concept there.

To make a "probability cutoff" you first need a probability for what reality/facts/truth is, and we have no such reliable and absolute data (and probably never will).
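To make that concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the small GPT-2 checkpoint) of the only probabilities such a model actually exposes: a distribution over the next token, not over whether the resulting sentence is true.

    # Sketch: the probabilities an LLM gives you are over next tokens, not facts.
    # Cutting them off (as in top-p sampling) filters unlikely continuations,
    # not false ones. Assumes: pip install torch transformers
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Australia is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)        # a distribution over ~50k tokens

    top = torch.topk(probs, 5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")

    # A cutoff here (e.g. keep tokens with p > 0.05) says how common a
    # continuation is in the training data, not whether it is true.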


>To make a "probability cutoff" you first need a probability for what reality/facts/truth is, and we have no such reliable and absolute data (and probably never will).

Can a human give a probability estimate for their predictions?


Humans can explain how they arrived at a conclusion. An LLM fundamentally cannot do that, since it doesn't remember why it picked the tokens it did; it just makes up an explanation based on explanations it has seen before.


You use a lot of anthropomorphisms: doesn't "know" anything (does your hard drive know things? Is it relevant?), and "making things up" is even more linked to conscious intent. Unless you believe LLMs are sentient, this is a strange choice of words.


I originally put quotes around "know" and somehow lost them in an edit.

I'm precisely trying to criticize the claims of AGI and intelligence. English is not my native language, so nuances might be wrong.

I used "made up" in the sense of "built" or "constructed" and did not mean to imply any intelligence there.


Have you seen the Iris flower dataset? It is fairly simple to find cutoffs to classify the flowers.

Or are you claiming, more generally, that there is no objective truth in reality in the philosophical sense? Well, you can go down that more philosophical side of the road, or you can get more pragmatic. Things just work, regardless of how we talk about them.
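For instance, a minimal sketch assuming scikit-learn: a plain logistic regression on Iris gives you exactly the kind of probability you can put a cutoff on, because the ground-truth labels are part of the data.

    # Sketch: with a labelled dataset like Iris, a probability cutoff is trivial,
    # because the "truth" (the species label) is in the data. Assumes scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probs = clf.predict_proba(X_test)       # probability estimates per species

    # Only accept a classification when the model is at least 90% sure.
    confident = probs.max(axis=1) >= 0.9
    print(f"{confident.mean():.0%} of test flowers classified above the cutoff")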


I don't mean it in a philosophical sense, more in a rigorous scientific one.

Yes, we do have reliable datasets as in your example, but those are for specific topics and are not based on natural language. What I would call "classical" machine learning is already a useful technology where it's applied.

Jumping from separate datasets focused on specific topics to a single dataset describing "everything" at once is not something we are even close to doing, if it's even possible. Hence the claim that a single AI can answer anything is unreasonable.

The second issue is that even if we had such a hypothetical dataset, ultimately, if you want a formal response from it, you need a formal question and a formal language (probably something between maths and programming?) at every step of the workflow.

LLMs are only statistical models of natural language, so they are the antithesis of this very idea. Achieving that would require a completely different technology, one that has yet to even be theorized.


Don't bundle your JS then; problem solved. Vanilla JS and CSS written 20 years ago still work, except maybe for a couple of obscure deprecated things.

Rot is directly proportional to the number of dependencies. Software made responsibly, with long-term thinking in mind, has dramatically fewer issues over time.


Saying that a natural language based interface will replace dedicated graphical UIs makes no sense to me.

It will never be as intuitive or efficient, not to mention as reliable.

A picture is worth a thousand words, and no LLM is going to change that.

