
- don't learn from what you tell them

- don't have career growth that you can feel good about having contributed to

- don't have a genuine interest in accomplishment or team goals

- have no past and no future. When you change companies, they won't recognize you in the hall.

- no ownership over results. If they make a mistake, they won't suffer.





Sounds like my teammates.

- don't learn from what you tell them

Whenever I have a model fix something new, I ask it to update the markdown implementation guides I keep in the docs folder of my projects. I add these files to context as needed. I have one for implementing routes, one for implementing backend tests, and so on.

They then know how to do stuff in the future in my projects.
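As a rough illustration, one of those guides might look something like this (the file names, commands and paths here are made up for the example, not anything specific to my projects):

    docs/
      implementing-routes.md
      implementing-backend-tests.md

    # implementing-backend-tests.md
    - Run the backend suite with "make test-backend", not "npm test".
    - Every new route handler gets a matching test under tests/routes/.
    - Use the factory helpers in tests/factories/ instead of raw fixtures.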


They still aren't learning. You're learning and then telling them to incorporate your learnings. They aren't able to remember this, so you need to remind them each day.

That sounds a lot like '50 First Dates' but for programming.


> They aren't able to remember this

Yes, this is something people using LLMs for coding probably pick up on the first day. They're obviously not "learning" as humans do. Instead, the process is that you figure out what was missing from the first message you sent where they got something wrong, change it, and then restart from the beginning. The "learning" is you keeping track of what you need to include in the context; exactly how that process works is up to you. For some it's very automatic, and you don't add/remove things yourself; for others it's keeping a text file around that they copy-paste into a chat UI.

This is what people mean when they say you can do a kind of "learning" (not literally) with LLMs.


While I hate anthropomorphizing agents, there is an important practical difference between a human with no memory, and an agent with no memory but the ability to ingest hundreds of pages of documentation nearly instantly.

That is true, but does it actually matter if the outcome is the same? GP is saying they don't need to remind them.

The outcome is definitely not the same, and you need to remind them all the time. Even if you feed the context automatically, they will happily "forget" it from time to time. And you need to update that automated context again, and again, and again, as the project evolves.

They document how to do something they just figured out. They store/memorise it in a file.

It's functionally the same as learning.

If you look at it as a black box, you can't tell the difference from the inputs and outputs.


I believe LLMs ultimately cannot learn new ideas from their input in the same way they learn from their training data, since the input doesn't affect the weights of the neural network layers.

For example, let's say LLMs had no examples of chess gameplay in their training data. Would one be able to have an LLM play chess by listing the rules and examples in the context? Perhaps, to some extent, but I believe it would be much worse than if it were part of the training (which of course isn't great either).


50 first new Date()

Ah, so it's like you have a junior developer that can't learn

Can this additional prompt from you also be automated? I do this too, but I forget sometimes. I don't know if a general rule will be enough.

> I add these files to context as needed.

Key words are these.

> They then know how to do stuff in the future in my projects.

No. No, they don't. Every new session is a blank slate, and you have to feed those markdown files manually to their context.


The feeding can be automated in some cases. In GitHub Copilot you can put them under .github/instructions, and each instructions markdown file starts with a frontmatter section specifying a pattern for which files the instructions apply to.

You can also have an index file that describes when to use each file (nest with additional folders and index files as needed) and tell the agent to check the index for any relevant documentation it should read before it starts. Sometimes it will forget and not consult the docs, but often it will consult the relevant docs first to load just the things it needs for the task at hand.
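A sketch of what one of those instructions files can look like, as I understand the format (the file name, the applyTo pattern and the rules are just examples, not anything official):

    .github/instructions/backend-tests.instructions.md

    ---
    applyTo: "src/server/**/*.test.ts"
    ---
    When writing backend tests:
    - Use the shared test database helper, never a live connection.
    - Run the suite headless with "npm run test:server".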

So, again, they don't learn.

Do you want them to?

I would. I'm getting tired of redirecting them in the right direction from scratch every time.

I tend to think it would lead to them forming opinions about the people they interact with as they learn what it's like to interact with them, and that this would also influence their behaviour/outputs. Just imagine the day when Copilot's chain of thought starts to include things like "Greg is bossy and often unkind to me in PR reviews. I need to set clear boundaries with him and discontinue the relationship if he will not respect them."

Doesn't this also consume context?

Having a good prompt file ("memory") is an art form.

The AI hype folks write massive fan-fiction-style novellas that don't have any impact.

But there's a middle ground where you tell the agent the specific things about your repo that it doesn't know from its training. Like if your application has a specific way to run tests headless, or if it's compiled in a way that's not the default.
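For example, a handful of concrete lines like these (commands and paths invented for the sake of the example) do more than pages of vision-statement prose:

    - Tests must run headless: use "npm run test:headless"; plain "npm test" opens a browser and hangs in CI.
    - The app is built with the custom webpack config under build/, not the framework default.
    - Never edit anything under src/generated/; it is overwritten on every build.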


This works surprisingly well for Claude: https://github.com/obra/superpowers (in the context of rather small side projects in Elixir).

Unless, of course, the phase of the moon is wrong and Claude itself is stupid beyond all reason.


Yes.

https://agents.md/

AGENTS.md exists; Codex and Crush support it directly. Copilot, Gemini and Claude have their own variants, and their /init commands look at AGENTS.md automatically to initialise the project.

Nobody is feeding anything "manually" to agents. Only people who think "AI" is a web page do that.
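And there's nothing exotic about the file itself; it's free-form markdown the agent reads at the start of a session. A minimal sketch (the contents are illustrative, there's no required schema):

    # AGENTS.md

    ## Setup
    - pnpm install, then pnpm dev to run locally.

    ## Testing
    - pnpm test runs unit tests; pnpm test:e2e needs the dev server running.

    ## Conventions
    - TypeScript strict mode, no default exports.
    - Update the guides in docs/ whenever behaviour changes.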


Ah yes. AGENTS.md is a magical file that just appears out of thin air. No one creates it, no one keeps it updated, and LLMs always, without fail, not only consult it but never forget it, and in every new session know precisely what changed in the project and how to continue.

All of them often can't even find/read relevant docs in a new session without prompting.


Literally every single CLI-based agent will show you a suggestion to run /init at startup.

And of course it's up to the developer to keep the documentation up to date. Just like when working with humans. Stuff doesn't magically document itself.

Yes "good code is self-documenting", but it still takes ages to find anything without docs to tell you the approximate direction.

It's literally a text file the agent can create and update itself. Not hard. Try it.


> Just like when working with humans. Stuff doesn't magically document itself.

Humans actually learn from the codebases they work with. They don't start with a clean slate every time they wake up in the morning. They know where to find information and how to search for it. They don't need someone to constantly update docs to point to changes.

> but it still takes ages to find anything without docs to tell you the approximate direction.

Which humans, unsurprisingly, can do without wiping their memory every time.


Imagine having these complaints about a screwdriver.

It's a tool, not an intelligent being.


Yeah, if my screwdriver undid the changes I just made to my mower, constantly ignored my desire to unscrew screws, and instead punched a hole in my carb, I'd be throwing that screwdriver in the garbage.

I do not need to babysit my screwdriver.

Yet.

Next year there will be an AI screwdriver your employer forces you to use.


At first, I thought “ponector’s forgotten to add the /s”

Then I realised that this will actually happen, and was sadly reminded we’re now in the post-sarcasm era.


And you can buy an AI screwdriver on Amazon today!

- don't learn from what you tell them

We'll fix that, eventually.

- don't have career growth that you can feel good about having contributed to

Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.

- don't have a genuine interest in accomplishment or team goals

Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.

- have no past and no future. When you change companies, they won't recognize you in the hall.

Or on the picket line.

- no ownership over results. If they make a mistake, they won't suffer.

Good deal. Less human suffering is usually worth striving for.


> We'll fix that, eventually.

> Humans are on the verge of building machines that are smarter than we are.

You're not describing a system that exists. You're describing a system that might exist in some sci-fi fantasy future. You might as well be saying "there's no point learning to code because soon the rapture will come".


That particular future exists now, it's just not evenly distributed. Gemini 2.5 Pro Thinking is already as good at programming as I am. Architecture, probably not, but give it time. It's far better at math than I am, and at least as good at writing.

Computers beat us at maths decades ago, yet LLMs are not able to beat a calculator half of the time. The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. Your claims of great success in asking LLMs for maths make me question whether you have actually asked an LLM about maths.

Most AI experts not heavily invested in the stocks of inflated tech companies seem to agree that current architectures cannot reach AGI. It's a sci-fi dream, but hyping it is real profitable. We can destroy ourselves plenty with the tech we already have, but it won't be a robot revolution that does it.


> The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. Your claims of great success in asking LLMs for maths make me question whether you have actually asked an LLM about maths.

What I really need to ask an LLM for is a pointer to a forum that doesn't cultivate proud exhibition of ignorance, Luddism, and general stupidity at the level exhibited by commenters in this entire HN story, and in this subthread in particular.

We already had one Reddit, we didn't need two.


Replace "suffering" with "caring" and have your AI write that again.

Why would I do a goofy thing like that?

> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.

Have you ever spent any time around children? How about people who think they're accomplishing a great mission by releasing truly noxious ones on the world?

You just dismissed the entire notion of accountability as an unnecessary form of suffering, which is right up there with the most nihilistic ideas ever said by, idk, Dostoevsky's underground man or Raskolnikov.

Don't waste your life on being the Joker.


> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.

It's also the premise of The Matrix. I feel pretty goddamned uneasy about that.


(Shrug) There are other sources of inspiration besides dystopic sci-fi movies. There's the Biblical story of the Tower of Babel, for instance. Better not work on language translation, which after all is how the whole LLM thing got started.

Sometimes fiction went in the wrong direction. Sometimes it didn't go far enough.

In any case, The Matrix wasn't my inspiration here, but it is a pithy way to describe the concept. It's hard to imagine how humans maintain relevancy if we really do manage to invent something smarter than us. It could be that my imagination is limited though. I've been accused of that before.


> It's what we're supposed to be doing.

Why?


Because venture capital managers say so.


