Hacker News | FeepingCreature's comments

> In open source hardly anyone is even using LLMs, and the ones that do have barely any output, in many cases less output than they had before using LLMs.

That is not what that paper said, lol.


Which paper? The quoted part is my own observation.


Oh I see, I thought you were quoting https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity"

Which shows that LLMs, when given to devs who are inexperienced with LLMs but very experienced with the code they're working on, don't provide a speedup, even though it feels like one.

Which is of course a very constrained scenario. IME the LLM speedup is mostly in greenfield projects using APIs and libraries you're not very experienced with.


Yep, human contribution is extremely valuable, especially very early on, before the AI has a skeleton it can work off of. You have to review those first few big refactors like a hawk. After that you can relax a bit.


(Layman guess) Pressure? The incoming split air has to go somewhere. The volume of air inflowing above and below is roughly the same.


If(f) it's trained end to end, it's a unified system.


Sound sensitivity varies by person. It's not generational.


Of course it is.


Shit man, I want a command-line tool that can grep embeddings. `emb-locate` when?
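
A rough sketch of what I mean, in Python (sentence-transformers for the embeddings; the `emb-locate` name and interface are pure wishlist, not a real tool):

    import sys
    import numpy as np
    from sentence_transformers import SentenceTransformer

    # emb-locate: grep, but ranked by embedding similarity instead of regex match.
    def emb_locate(query, paths, top_k=5):
        model = SentenceTransformer("all-MiniLM-L6-v2")
        lines = []
        for path in paths:
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if line.strip():
                        lines.append((f"{path}:{lineno}", line.rstrip()))
        # Embed the query and every candidate line; normalized vectors make
        # the dot product below a cosine similarity.
        vecs = model.encode([query] + [text for _, text in lines],
                            normalize_embeddings=True)
        sims = vecs[1:] @ vecs[0]
        top = np.argsort(-sims)[:top_k]
        return [(float(sims[i]), *lines[i]) for i in top]

    if __name__ == "__main__":
        # usage: python emb_locate.py "where do we retry failed uploads" src/*.py
        for score, loc, text in emb_locate(sys.argv[1], sys.argv[2:]):
            print(f"{score:.3f}  {loc}  {text}")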


True where trivial; where nontrivial, false.

Trivially, humans don't emit something they don't know either. You don't spontaneously figure out JavaScript from first principles; you put together your existing knowledge into new shapes.

Nontrivially, LLMs can absolutely produce code for entirely new requirements. I've seen them do it many times. Will it be put together from smaller fragments? Yes; this is called "experience", or, if the fragments are small enough, "understanding".


> Nontrivially, LLMs can absolutely produce code for entirely new requirements. I've seen them do it many times.

I think most people writing software today are reinventing a wheel, even in corporate environments for internal tools. Everyone wants their own tweak or thinks their idea is unique, and nobody wants to share code publicly, so everyone pays programmers to develop buggy bespoke versions of the same stuff that's been done 100 times before.

I guess what I'm saying is that your requirements are probably not new, and to the extent they are, yes, an LLM can fill in the blanks due to its fluency in languages.


Nothing is truly and completely new; I'm not formulating my requirements in an extinct language. My point is that "filling in the blanks" and "doing new things" are a spectrum.

LLMs have their limits, but they really can understand and productively contribute to programs that do something no program on the internet has done yet. What they are doing is not interpolation at the highest level. It may be interpolation/extrapolation at a lower level, but that goes for any skill learnt by anyone ever.


Humans can observe ants and invent ant colony optimization. AIs can’t.

Humans can explore what they don’t know. AIs can’t.


What makes you categorically say that "AIs can't"?

Based on my experience with present-day AIs, I personally wouldn't be surprised at all if you showed Gemini 2.5 Pro a video of an insect colony, asked it "Take a look at the way they organize and see if that gives you inspiration for an optimization algorithm", and it spat out something interesting.
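
(The algorithm in question is tiny, for what it's worth. A minimal sketch of textbook ant colony optimization for TSP; parameter values are picked out of the air, not tuned:)

    import random

    # Ant colony optimization for TSP: ants build tours edge by edge, biased by
    # pheromone (tau) and inverse distance; good tours deposit more pheromone.
    # Assumes dist[i][j] > 0 for all i != j.
    def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5):
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]
        best_tour, best_len = None, float("inf")
        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                while len(tour) < n:
                    i = tour[-1]
                    cand = [j for j in range(n) if j not in tour]
                    w = [tau[i][j] ** alpha / dist[i][j] ** beta for j in cand]
                    tour.append(random.choices(cand, weights=w)[0])
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                tours.append((length, tour))
                if length < best_len:
                    best_len, best_tour = length, tour
            # Evaporate old pheromone, then deposit proportional to tour quality.
            tau = [[t * (1 - rho) for t in row] for row in tau]
            for length, tour in tours:
                for k in range(n):
                    i, j = tour[k], tour[(k + 1) % n]
                    tau[i][j] += 1.0 / length
                    tau[j][i] += 1.0 / length
        return best_tour, best_len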


It will 100% have something in its training set discussing a human doing this and will almost definitely spit out something similar.


That's a good point, but all it means is that we can't test the hypothesis one way or the other, since we can never be entirely certain that a given task isn't anywhere in the training data. Supposing that "AIs can't" is then just as invalid as supposing that "AIs can".


What makes you categorically say that "humans can"?

I couldn't do that with an ant colony. I would have to train on ant research first.

(Oh, and AIs can absolutely explore what they don't know. Watch a Claude Code instance look at a new repository. Exploration is a convergent skill in long-horizon RL.)


> Humans can observe ants and invent any colony optimization. AIs can’t.

Surely this is exactly what current AIs do? Observe stuff and apply that observation? Isn't this the exact criticism, that they aren't inventing ant colony optimization from first principles without ever seeing a colony?

> Humans can explore what they don’t know. AIs can’t.

We only learned to decode Egyptian hieroglyphs because of the Rosetta Stone. There's no translation for North Sentinelese, the Voynich manuscript, or Linear A.

We're not magic.


That's what benchmarks like ARC-AGI are designed to test. The models are getting better at it, and you aren't.

Nothing ultimately matters in this business except the first couple of time derivatives.


humans also eat


Yeah, my most common aider command sequence is

    > /undo   # throw away the commit aider just made
    > /clear  # wipe the chat history
    > ↑ ↑ ↑ ⏎  # recall and resend the same prompt


Still no multi-row tab bars or API support for hiding the main tab bar, as was explicitly promised when they killed TabMixPlus.


Power users interested in this niche feature can use a script: https://old.reddit.com/r/firefox/comments/1m594nv/multi_tab_...


I think with UserChromeJS you can run TabMixPlus too. I don't remember if it needs anything else. (Or a fork like Waterfox, of course.)

But like, if we're turning signature checking and sandboxing off, we're getting pretty far from stock Firefox. And of course, they can (and often will) break it on every update.
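
(From memory, the sandbox part is Mozilla's autoconfig mechanism; double-check the exact prefs against your loader's docs, e.g. fx-autoconfig:)

    // defaults/pref/config-prefs.js, inside the Firefox install directory
    pref("general.config.filename", "config.js");
    pref("general.config.obfuscation_value", 0);   // plain-text config, no byte-shifting
    pref("general.config.sandbox_enabled", false); // the "sandboxing off" part

    // config.js then goes in the install root and acts as the script loader.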

