
Huginn is a bunch of mechanical rules explicitly created by the user and easily understandable by them. It's "autonomous" in the sense of a grandfather clock.

The rules governing LLMs, while in principle mechanical, cannot be accurately controlled or even understood by humans. They are "autonomous" in the sense of animals, which may be trained to some degree, but may still surprise or even become a danger to their "owners", no matter how much effort is spent trying to control them.

An LLM may do what you ask it to, using the methods you expect it to use. Or it may do what you ask it to, using methods you weren't expecting at all. Or it may not do what you have asked at all. This isn't comparable to a rule-based system like Huginn.
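The contrast can be sketched in a few lines. A Huginn-style agent is essentially a pure trigger-to-events function, so its behavior is fully predictable from its rule (the agent below is a hypothetical illustration in Python, not Huginn's actual Ruby DSL):

```python
# Illustrative sketch (not Huginn's real API): a Huginn-style agent is a
# deterministic trigger -> action rule. Same input always yields the same output.
def rss_keyword_agent(entries, keyword="outage"):
    """Emit an event for every feed entry whose title contains the keyword."""
    return [e for e in entries if keyword in e["title"].lower()]

events = rss_keyword_agent([
    {"title": "Service outage in eu-west"},
    {"title": "Weekly changelog"},
])
# Deterministic: exactly one matching event, every run.
```

An LLM invocation offers no analogous guarantee: the same prompt can produce different methods, or a different outcome entirely.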



Huginn was just an example. By a definition similar to yours, are these "LLM autonomous agents" really "autonomous"?

>They are "autonomous" in the sense of animals

That's a really weird equivalency which, frankly, I'm not even sure is true.


> That's a really weird equivalency which, frankly, I'm not even sure is true.

A mind that's a black box to us, that we can predict only to a degree based on observing it at work, and that's also a general-purpose intelligence we're trying to employ for a limited set of tasks. It's not a bad analogy. Like animals, LLMs too have the capacity to "think outside the box", where the box is what you'd consider the scope of the task.



