Hacker News | HPsquared's comments

iPod 30-pin is a classic.

I wonder if aggregators will emerge (something like Ground News does for news sources).

The LLM council pattern [0] will probably emerge, eventually, as the best way to fight those biases. This way everyone benefits from the token burn!

[0] https://github.com/karpathy/llm-council
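The council idea can be sketched in a few lines: ask several models the same question, then let a "chairman" step merge or pick from their answers. This is a hypothetical sketch with stub lambdas standing in for real model API calls, not the llm-council repo's actual code:

```python
# Minimal "LLM council" sketch: fan a question out to several members,
# then let a chairman step reduce the answers to one. The members and
# chairman here are stubs; a real version would make model API calls.

def council(question, members, chairman):
    """Collect one answer per council member, then hand the chairman a
    transcript of all answers to produce the final answer."""
    answers = {name: ask(question) for name, ask in members.items()}
    transcript = "\n".join(f"{name}: {text}" for name, text in answers.items())
    return chairman(f"Question: {question}\nAnswers:\n{transcript}")

# Stub "models" for illustration (hypothetical names, canned answers).
members = {
    "model_a": lambda q: "42",
    "model_b": lambda q: "42",
    "model_c": lambda q: "41",
}

def chairman(prompt):
    # Trivial chairman: majority vote over the raw answer strings.
    votes = [line.split(": ", 1)[1] for line in prompt.splitlines()[2:]]
    return max(set(votes), key=votes.count)

print(council("What is 6 * 7?", members, chairman))  # prints "42"
```

The point of the pattern is that the chairman sees disagreement between members, which is exactly where single-model bias would otherwise go unnoticed.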


I was thinking the same. Isn't it a bit of a power imbalance for your employer to look after all of those aspects of your life? It's not far from there to company towns. If you lose/quit the job, you lose health insurance and so on. Not a good bargaining position.

I think it makes sense to assign some of the cost to businesses, because they also create some of it. For example, in the US you have to pay unemployment tax based on your business's payroll. If you hire and fire a lot of people, you cause more unemployment, so you have to pay more tax to cover your societal cost. It makes sense.

The key point is that it's just a tax. Ideally the government would still be the one administering unemployment benefits, even if that's not how it works today.

So maybe if you have, say, a dangerous business you pay more healthcare tax or something. That could work instead of letting companies provide health insurance.


There's a diffuse, but I suspect large, economic cost to delaying other vehicles.

You mean like delaying all the people trying to go full speed in the fast lane?

You just made me imagine lane-switching microtransactions as a solution, which made my soul shudder in deep-seated disgust, so I had to share it.

Notable OSS contributions should confer status and funding, like paper publications do.

Television and radio set the parameters for the "single-stream culture" that emerged in the 20th century. That was mostly a result of the limited bandwidth of early broadcast technology: everyone had to watch the same few channels.

Web 2.0 broke this into millions of creators. Generative AI produces everything on-demand, but again there is a small number of (polymorphic) models producing the content.


It's as if they have captured the most significant nodes on the map. Or were these prestige journals built up under the same system?

I think there's also an "alignment blinkers" effect. There is an ethical framework bolted on.

EDIT: Though it could simply reflect training data. Maybe Redditors don't drive.


I wonder to what extent the Google search LLM is getting smarter, versus simply more up-to-date on current hot topics.

It seems like the search AI results are generally misunderstood; I misunderstood them myself for the first few weeks/months.

They are not just an LLM answer, they are an (often cached) LLM summary of web results.

This is why they were often skewed by nonsensical Reddit responses [0].

Depending on the type of input it can lean more toward web summary or LLM answer.

So I imagine it can just grab the description of the "car wash" test from web results and then get it right because of that.

[0] https://www.bbc.com/news/articles/cd11gzejgz4o
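The "cached LLM summary of web results" flow described above can be sketched roughly like this. Everything here is a hypothetical stand-in (the function names, the cache shape, the snippet format are mine, not Google's): search() would be a web search backend and summarize() an LLM call.

```python
# Sketch of an AI-overview pipeline: retrieve web results for a query,
# summarize them with an LLM, and cache the summary so repeated queries
# don't re-run the model. All names are illustrative stand-ins.

cache = {}

def search(query):
    # Stand-in for a web search; a real system returns ranked snippets.
    return [f"snippet about {query} #1", f"snippet about {query} #2"]

def summarize(query, snippets):
    # Stand-in for an LLM summarization call over the retrieved snippets.
    return f"Summary for '{query}' from {len(snippets)} web results"

def ai_overview(query):
    # Serve the cached summary when available; otherwise retrieve, then
    # summarize. This is why junk in the retrieved snippets (e.g. joke
    # Reddit answers) flows straight into the answer, and why stale
    # cached summaries can linger.
    if query not in cache:
        cache[query] = summarize(query, search(query))
    return cache[query]
```

The caching step is what makes the behavior confusing: the same query can keep returning an old summary even after the underlying web results change.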


Presumably it did an actual search and summarized the results, rather than answering "off the cuff" by following gradients to reproduce the text it was trained on, or to reproduce the "logic" of a reasoning process. [1]

[1] e.g. trained on traces of a reasoning process


It's almost certainly just RAG powered by their crawler.

Proving that RAG still matters.

On the upside, Shakespeare isn't going to change soon.

So you're saying we should burn Shakespeare onto a chip? /s
