Hacker News — IntrepidPig's comments

God it infuriates me


Truly what is going on


I always felt like one of the reasons LLMs are so good is that they piggyback on the many years that have gone into developing language as an information representation/compression format. I don’t know if there’s anything similar a world model can take advantage of.

That being said there have been models which are pretty effective at other things that don’t use language, so maybe it’s a non issue.


I will gladly take $10B to find out for you.


Another way to make the same point is to observe that every single society has language.

But only some groups have the ability to systematically encode language as writing.

Writing is a technological marvel.


There's a lot of info about the world in video and photographs. A lot of how we learn is seeing things. Plus interacting of course.


Maybe until the model outputs some affirming preamble, it’s still somewhat probable that it might disagree with the user’s request? So the agreement fluff is kind of like it making the decision to heed the request. Especially if we consider tokens as the medium by which the model “thinks”. Not to anthropomorphize the damn things too much.

Also I wonder if it could be a side effect of all the supposed alignment efforts that go into training. If you train in a bunch of negative reinforcement samples where the model says something like “sorry I can’t do that” maybe it pushes the model to say things like “sure I’ll do that” in positive cases too?

Disclaimer that I am just yapping


This post is almost definitely a scam, but it does a great job of illustrating how much more dangerous scams are going to become with the advent of AI. Here we have a bunch of people who probably pride themselves on catching scams falling for it. Scary stuff


I think it is good that I should prove it. I know that about 80 percent of everything on the internet these days is fake, but can you find any scam project that offers a real test demo (send me an email and I will send you a link to the pixel-streaming front end, where you can see real-time streaming and talk to the avatar: EchenDeligani@gmail.com), or offers a Zoom call so I can walk you through the project, or one that specifically focuses on the Persian language?


> Post-filter works when your filter is permissive. Here’s where it breaks: imagine you ask for 10 results with LIMIT 10. pgvector finds the 10 nearest neighbors, then applies your filter. Only 3 of those 10 are published. You get 3 results back, even though there might be hundreds of relevant published documents slightly further away in the embedding space.

Is this really how it works? That seems like it’s returning an incorrect result.
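A minimal sketch of the behavior the quoted passage describes, using synthetic data rather than pgvector itself (the document counts and the 30% published rate are made-up assumptions for illustration): post-filtering truncates first and filters second, so it can return fewer rows than the LIMIT asked for, while pre-filtering does not.

```python
# Hypothetical illustration — NOT pgvector code. Each document is a
# (distance_to_query, is_published) pair; `docs` is sorted by distance,
# standing in for a nearest-neighbor index scan.
import random

random.seed(0)
docs = sorted(
    (random.random(), random.random() < 0.3)  # assume ~30% are published
    for _ in range(1000)
)

def post_filter(docs, limit=10):
    """Take the `limit` nearest neighbors FIRST, then apply the filter.
    This mirrors the LIMIT-then-filter behavior described in the quote."""
    nearest = docs[:limit]
    return [d for d in nearest if d[1]]

def pre_filter(docs, limit=10):
    """Apply the filter FIRST, then take the nearest `limit` survivors."""
    published = [d for d in docs if d[1]]
    return published[:limit]

print(len(post_filter(docs)))  # can be well under 10
print(len(pre_filter(docs)))   # the full 10 requested
```

The surprising part is that both strategies are "correct" for some query plans; the post-filter plan just answers a subtly different question ("which of the 10 nearest are published?") than the user probably meant ("what are the 10 nearest published documents?").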


Yeah it feels similar to inventing the nuke. Or it’s even more insidious because the harmful effects of the tech are not nearly as obvious or immediate as the good effects, so less restraint is applied. But also, similar to the nuke, once the knowledge on how to do it is out there, someone’s going to use it, which obligates everyone else to use it to keep up.


The age-old tension remains taut


Nothing about this makes any sense. We’ve already got a number of people pointing out flaws like why did he wait 15 years to write about it, why does it look like it was written by an LLM, and is it really reasonable to blame such a massive failure completely on your peers and not take an ounce of responsibility yourself? But these things all start to make sense once you actually reach the end of the article and realize it’s all a ploy to sell you his fancy new equivalent to a self-help book, which you can tell is legit because its name is a forced acronym. Can we take this off the front page please?


I think it is better to be charitable. I think he does genuinely believe what he wrote is what happened. His PDF book is free and Creative Commons.

There could be many reasons he waited this long. Maybe he waited until he was retired and would not face blowback. Maybe he just has some free time.

It is very plausible that WebOS could have been an equal peer to iOS and Android. CEOs have killed off projects that might have been great commercial successes while pursuing short-term gains.

In a decade's time we might hear a story from inside ATI or AMD how they killed off their chance of beating CUDA for short term gains.


> Can we take this off the front page please?

Don’t do this. Engagement is what drives stories to the front page. If you don’t like it just move on.


It must be X given that they recommend installing xbacklight, arandr, and Compton alongside it.


And the section on running in Xephyr (X server that runs inside another X session) for debugging.

