rtgfhyuj's comments | Hacker News

ai slop justifications will become the norm.

sounds like a bunch of agents can do a good amount of this. A high horse isn’t necessary


I wonder how you have reached this conclusion without having the faintest idea of what I write about.

Nonetheless, I make my living from that work. If you are correct, there's a fair bit of money on the table for you.


A good amount != this. AI being able to do the easy parts of something doesn't replace the hard ones.


There is just so much denial of all this right now.

I could say I make my living with various forms of communication.

I am not worried about AI replacing my communication skills 1:1. What is going to happen is that the structure around me, the one that gives my current skill set its value, is going to change and be optimized to such a degree that those skills no longer have much value in whatever new structures arise. I will have to figure out how to fit into those structures, and it surely won't be by doing the same thing I am doing now. That seems so obvious.


give it a more thorough look maybe?

https://trails.pieterma.es/trail/collective-brain/ is great


It’s an interesting thread for sure, but while reading through it I couldn’t help but think that the point of these ideas is for a person to read and consider them deeply. What is the point of having a machine do this “thinking” for us? The thinking is the point.


And that’s the problem with a lot of chatbot usage in the wild: it’s saving you from having to think about things where thinking about them is the point. E.g. hobby writing, homework, and personal correspondence. That’s obviously not the only usage, but it’s certainly the basis for some of the more common use cases, and I find that depressing as hell.


so consider them deeply. Why does the value diminish if discovered by a machine as long as the value is in the thinking?


This is a software engineering forum. Most of the engineer types here lack the critical education needed to appreciate this sort of thing. I have a literary education and I’m actually shocked at how good most of these threads are.


I think most engineer types avoid that kind of analysis on purpose.


Programmers tend to lean two ways: math-oriented or literature-oriented. The math types tend to become FAANG engineers. The literature oriented ones tend to start startups and become product managers and indie game devs and Laravel artisans.


That doesn’t speak well towards your literary education, candidly.


We should try posting this on a literary discussion forum and see the responses there. I expect a lot of AI FUD and envy, but that’ll be evidence in this tool’s favor.


lol yes that’s the only reason anyone could find this uh literary analysis less than compelling


I had a look at that. The notion of a "collective brain" is similar to that of "civilization". It is not a novel notion, and the connections shown there are trivial and uninspiring.


are subagents just tools that are agents themselves?


pretty much… they have their own system prompts, and you can customize the model, the tools they use, etc.

CC has built-in subagents (including at least one not listed) that work very well: https://code.claude.com/docs/en/sub-agents#built-in-subagent...

this was not the case in the past, I swore off subagents, but they got good at some point
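fwiw, per the linked docs a custom subagent is just a markdown file with YAML frontmatter in `.claude/agents/`. The frontmatter field names below follow that page, but the reviewer itself is a made-up sketch, not something from the docs:

```markdown
---
name: code-reviewer
description: Reviews recent diffs for bugs and style issues. Use proactively after code changes.
tools: Read, Grep, Glob
model: haiku
---
You are a code reviewer. Look at the changed files, flag likely bugs,
and keep feedback short and concrete.
```

The system prompt is just the markdown body after the frontmatter, which is what makes them so easy to tweak per-project.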


why would it early stop? examples?


Models just naturally arrive at the conclusion that they are done. TODO hints can help, but they are not infallible: Claude will stop and happily report there's more work to be done and "you just say the word Mister and I'll continue" --- this is a RL problem where you have to balance the chance of an infinite loop (it keeps thinking there's a little bit more to do when there is not) versus the opposite where it stops short of actual completion.


> this is a RL problem where you have to balance the chance of an infinite loop (it keeps thinking there's a little bit more to do when there is not) versus the opposite where it stops short of actual completion.

Any idea on why the other end of the spectrum is this way -- thinking that it always has something to do?

I can think of a pet theory on it stopping early -- that positive tool responses and such bias it towards thinking it's complete (could be extremely wrong)


My pet theory: LLMs are good at detecting and continuing patterns. Repeating the same thing is a rather simple pattern, and there's no obvious place to stop if an LLM falls into that pattern unintentionally. At least to an unsophisticated LLM, the most likely completion is to continue the pattern.

So infinite loops are more of a default, and the question is how to avoid them. Picking randomly (non-zero temperature) helps prevent repetition sometimes. Other, higher-level patterns probably prevent this from happening most of the time in more sophisticated LLMs.
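The temperature point can be sketched in a few lines of Python. This is a toy with two "tokens" and made-up logits, not a real model: token 0 is "continue the repetition" and token 1 is "stop".

```python
import math
import random

def sample(logits, temperature):
    """Sample a token index from logits after temperature scaling.

    temperature == 0 is greedy argmax; higher values flatten the
    distribution so lower-probability tokens get picked more often.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = random.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

# Token 0 ("keep repeating") is more likely than token 1 ("stop").
logits = [2.0, 1.0]
greedy = [sample(logits, 0) for _ in range(20)]      # always token 0: the loop
warm = [sample(logits, 1.0) for _ in range(1000)]    # escapes to token 1 sometimes
```

At temperature 0 the repetition-continuing token wins every single step, so the loop never ends; at temperature 1 the "stop" token here has roughly a 27% chance per step, so the pattern breaks quickly.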


> Any idea on why the other end of the spectrum is this way -- thinking that it always has something to do?

Who said anything about "thinking"? Smaller models were notorious for getting stuck repeating a single word over and over, or just "eeeeeee" forever. Larger models only change probabilities, not the fundamental nature of the machine.


Not all models are trained for long one-shot task following on their own; many of them seem to prefer closer interaction with the user. You could always add another layer/abstraction above or below to work around it.


Can't this just be a Ralph Wiggum loop (i.e. while True)?


Sure, but I think just about everyone wants the agent to eventually say "done" in one way or another.


so don’t? others have said they like it, you don’t, move on


nah, the pump is visible physically, don't need an arrow, that's idiotic


in the absence of industry wide standards, this is an ingenious design, 100% best thing possible


extremely useful in gaming even!


Thanks!


really? when you were 10?


when you sleep under clear skies in the night, these questions are normal.

