gnatolf's comments | Hacker News

A lot of the comments on HN lately are rightfully focused on this formative brain exercise that builds intuition and conceptual understanding, and that is chiselled away by the shortcuts GenAI provides. I wonder where the productivity gains from GenAI and the drop-off in 'our brain' quality intersect.

It's actually not that different from talking with employees; however, the LLMs still have very significant shortfalls (which you learn about after using them a lot).

If a manager doesn't know anything about what their employees are working on, they are basically fucked. That much holds up with LLMs. The simple stuff mostly works, but the complex stuff isn't going to pan out for you, and it will take a while to figure out that's the direction you went in.


One comparison is with Stack Overflow (SO). Given a task, there are usually multiple answers. The question may not even be relevant; often, multiple question pages must be compared.

The best answer is the one that fits the aesthetics of my approach--an approach that didn't exist before (there was only the problem before)--yet the answer itself is simple, straightforward, or adaptable.

Having multiple answers is good because different minds evaluated the question. It is a buffet of alternatives, starting from others' first principles, mistakes, and experience. Some are rejected outright from some tacit taste organ. Others become long-lived browser tabs, a promise to read carefully someday (never).

All this is void if it turns out using SO is similarly degenerative, though.


We should probably require AI to always be able to explain its conclusions.

That way we can quickly assimilate knowledge from the AI and theoretically always have at least as much knowledge as the AI.

I suppose it also means that we can verify that the AI is not lying to us.


Unfortunately we don't have that kind of AI. We only have the useless kind.

The churn of staying on top of this means to me that we'll also chew through experts of specific times much faster. Gone are the days of established, trusted top performers, as every other week somebody creates a newer, better way of doing things. Everybody is going to drop off the hot tech at some point. Very exhausting.

The answer is simple: have your AI stay on top of this for you.

Mostly joke, but also not joke: https://news.smol.ai/


I couldn't agree more; this 'polished' style the finished comment comes in is super boring to read. It's hard to put my finger on it, but the overall flow is just too... samesame? I guess it's perfectly _expected_ to be predictable to read ;)

And then one of the iterations was asking for additional ways LLMs could be used, and then adding some of those as content, which seems odd but plausibly helpful as brainstorming. Just the phrasing of the original question makes it sound like things the user isn't actually doing, but wants in their comment, if that makes sense.

Thanks for the example chat, it was a valuable learning experience for me!


Mostly just SNR issues.

Not to take away from that fact, but the share of freight moved by train (over time) is more interesting. It probably highlights how hard it is to scale train networks.

I did the same. In short examples like the ones used in the article, it's easy to reason about the states and transitions. But in a much larger codebase, it gets so much harder to even discover available transitions if one is leaning too much on the from/into implementations. Nice descriptive function names go a long way in terms of ergonomic coding.
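
To illustrate the discoverability point, a minimal Rust sketch (not from the article; the Draft/Submitted states and the submit method are made up):

    struct Draft { body: String }
    struct Submitted { body: String }

    // Transition hidden behind From/Into: concise to write, but it won't
    // show up when browsing Draft's methods, so it's harder to discover.
    impl From<Draft> for Submitted {
        fn from(d: Draft) -> Self {
            Submitted { body: d.body }
        }
    }

    impl Draft {
        // The same transition with a descriptive name: autocomplete and docs
        // list it directly on the state, so available transitions are obvious.
        fn submit(self) -> Submitted {
            Submitted { body: self.body }
        }
    }

    fn main() {
        let a: Submitted = Draft { body: "hi".into() }.into(); // which transition is this?
        let b = Draft { body: "hi".into() }.submit();          // self-documenting
        let _ = (a.body, b.body);
    }

With many states in a large codebase, the named methods also make it much easier to answer "what can I do from this state?" than hunting down all the From impls.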


Plenty of Europeans have cargo bikes and make do with 2-3 supermarket trips per week for families of 4-5 peeps.

Only bulk drinks (crates of beer/soda/...) are challenging. But for those, very often delivery systems are in place that surely are more efficient than individual trips anyways.


I regularly carry four cases of water (48 cans) on my standard bicycle without a problem.

Whenever I go grocery shopping I mount a milk crate to my rear rack (this takes about six seconds) and put the cases in vertically. I can also carry a 4L jug of milk in the handlebar-mounted basket.


Mostly it's to cover up that the catalogue isn't as great anymore, isn't it? Since almost every big label took back the rights and started their own streaming service, Netflix simply doesn't have as much content (that anyone would want to see) anymore.

I quit all those platforms recently and I'm not missing the frustration of having to 'switch channels' through their incomprehensible categories and views anymore.


Given the rate of improvement with LLMs, this may not hold true for long.


The rate of improvement with LLMs seems to have stalled since Claude 3.5, which was about a year ago. I think we've probably gone as far as we can with tweaks to transformer architectures, and we'll need a new academic discovery, which could take years, to do better.


And for a brief moment I stopped to think about whether we're looking at a horrible Middle-earth Hallmark movie or just some 'clever' parody.

