Hacker News | totorovirus's comments

this is why we need zero-knowledge proofs


What shines brighter on a resume: ex-founder or ex-founding engineer? If ex-founder is the better title for the next job, then I think founders are taking much less risk than founding engineers. They get all the exposure of talking to rich people and VCs and learning how the financial game works, which I regard as much more rewarding career-wise.


I can already see how illiterate many people are about GPUs.


proves my point that LLMs are simply next-token predictors. There are many interesting properties where we see "emergence" of intelligence, but I think that's just humans' inability to hold so much knowledge in active memory.


"Next token predictor" isn't quite the burn that it seems like, because perfect next token prediction would require actual understanding. That's because you can almost always cast any question about understanding into a form where it depends solely on the next token (there are a couple of nitpicky exceptions and caveats, but not many).

GPT-4 is at a high enough level of performance that mere simple statistics aren't really helping it do any better; it really is developing structures, especially in the middle layers, that perform some amount of high-level understanding.

I don't think that pure next token prediction will always be the optimal way to train and enhance these behaviors, but it's not fair to say that it's unrelated; if this really were just stochastic parroting, then LLMs would have topped out well before the level they're at now.
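For what it's worth, the "next token predictor" mechanic the thread keeps arguing about can be sketched with a toy bigram model. This is purely illustrative and a loud simplification: a real LLM is a transformer predicting from learned embeddings over long contexts, not a frequency lookup table; only the outer loop (repeatedly choosing a likely continuation) is the same shape.

```python
# Toy sketch of "next token prediction": count which token follows
# which in a corpus, then generate by repeatedly emitting the most
# frequent follower. Real LLMs replace the counts with a neural net.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count each token's observed followers in the training text."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n=5):
    """Greedily append the most frequent next token, one at a time."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:  # token never seen mid-sentence: stop
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "the", n=3))  # -> "the cat sat on"
```

The point of contention upthread is exactly how much "understanding" has to live inside the predictor for this loop to keep getting better: for a bigram table, none; for GPT-4-level prediction, arguably quite a lot.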


That's the thing. Even though the source of their knowledge is pure condensed wisdom, which is a sort of artificial intelligence, they lack the ability to "think", which is crucial for solving problems.


A mapping of language patterns in vector space is most definitely not "pure condensed wisdom".


Thank you for clarifying this fact. My comment was more about showing signs of intelligence. Maybe I oversimplified my statement too much.


LLMs literally are next token predictors, so I'm not understanding your broader point.


I think this has always been pretty obvious, but the AI faithful have a vested interest in insisting that LLMs can actually think and solve problems.


More shocking are those who insist that the human brain must then also work by just guessing the next missing thing. As if the thought process behind "I'm hungry" starts with "I" and then tries to figure out what best fits next... it's absurd.


The token would be the pure sensation of hunger, not the word for self, which is merely a convenient abstraction we use to share knowledge with the outside world and over time.

LLMs don't have that sensation (why would they?), but that doesn't mean transformers can only be used for text: https://deepgram.com/learn/applications-of-transformer-model...


Jeez, I don't know how you would think that I thought that LLMs would have a sensation for hunger. That was not even close to my point at all.


> As if the thought process behind I'm hungry starts with "I"

That sounds like {you think that {people who think LLMs work like humans} believe that {the human sensation of hunger} is merely {saying the phrase "I am hungry"}}.


There are professional wedding guests in Korea


"sexy self" is a cool naming convention that will possibly replace foobar: https://github.com/wangyi-fudan/wyGPT/blob/main/cpu.cpp#L140


could we have a CRISPR-based penis elongation medicine in the future? I would like to invest in that startup


Go for curing baldness first


The author is roasting Jeanine Banks, and she is probably the real motivation that drove the author to write a whole long article about Google's culture back when things were beautiful.


Is ADHD a real thing? I see so many SV engineers diagnosed with ADHD that it feels like a large scam in which psychiatrists have lowered the threshold for an actual ADHD diagnosis.


It's real, but most people don't have ADHD; they can't focus because they aren't actually interested in what they are doing, yet they blame ADHD and take drugs.


That makes perfect sense.

If coding is really interesting to you, hyperfocus makes you spend more time doing it.

Getting into the zone is probably hyperfocus kicking in.


How could a sane engineer think that a ChatGPT wrapper could be the next unicorn?

