
I personally think AI will be at its most powerful not as some generated output, but as invisible glue that binds the parts of a larger system.

AI (specifically the LLM variety) should be performing small tasks via agents, then using structured output so those agents can pass information around a larger system. There are, for example, countless zero-/few-shot classification tasks where LLMs crush traditional ML. You want user tickets routed to the correct rep? That sounds like a task an LLM should be doing behind the scenes.
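A minimal sketch of that pattern, with the LLM call stubbed out: `llm_complete` is a hypothetical stand-in for whatever client you use, and the point is that the prompt pins the model to a fixed JSON schema so the rest of the pipeline can consume the output mechanically.

```python
import json

TEAMS = ["billing", "auth", "infra"]

def route_ticket(ticket_text, llm_complete):
    # llm_complete is a placeholder for a real LLM client call.
    # The prompt asks for structured output the pipeline can parse.
    prompt = (
        "Classify this support ticket into exactly one team from "
        f"{TEAMS}. Reply with JSON: {{\"team\": ..., \"confidence\": 0-1}}.\n\n"
        f"Ticket: {ticket_text}"
    )
    reply = json.loads(llm_complete(prompt))
    team = reply["team"]
    if team not in TEAMS or reply["confidence"] < 0.5:
        return "human_triage"  # fall back rather than trust a weak guess
    return team

# Fake LLM for illustration; a real deployment would call an API here.
fake = lambda prompt: '{"team": "billing", "confidence": 0.92}'
print(route_ticket("I was charged twice this month", fake))  # billing
```

The LLM never talks to a user here; it is just a small, invisible classifier inside the routing system, and the confidence threshold gives you a cheap escape hatch to human triage.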

Code gen as an output is likewise boring; agents that adaptively learn to code, generate tests, support debugging, etc. have the potential to be very powerful.

Unfortunately, I still feel the next AI winter will hit before people really scratch the surface of how these tools can be used in practice. Everyone is tragically trapped in the prompt -> output model rather than thinking about agent-based approaches.



IPcenter by Amelia (formerly IPSoft) was like that: it could use Bayesian statistics on incoming events/alerts to determine where to route a ticket. This only worked after a few tickets with roughly the same content had been routed manually.

One issue was that it would learn, after an incident, that a particular database event should be routed to team_a. The next time similar tickets were raised, they were routed to team_a incorrectly. This was a problem because events/alarms tend to look the same for, e.g., any application database, and the organisation would route tickets to each application team first, not to the centralized database team.
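This is not the IPcenter implementation, but the failure mode is easy to reproduce with a toy word-count naive Bayes router: once one database-flavored ticket has been manually labeled team_a, every similar-looking alert follows it there, regardless of which application it came from.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesRouter:
    """Toy router that learns team assignments from manually routed tickets."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # team -> word frequencies
        self.team_counts = Counter()             # team -> number of tickets seen

    def learn(self, ticket, team):
        self.team_counts[team] += 1
        self.word_counts[team].update(ticket.lower().split())

    def route(self, ticket):
        words = ticket.lower().split()

        def score(team):
            total = sum(self.word_counts[team].values())
            s = math.log(self.team_counts[team])  # log prior
            for w in words:
                # smoothed log likelihood so unseen words don't zero out a team
                s += math.log((self.word_counts[team][w] + 1) / (total + 1))
            return s

        return max(self.team_counts, key=score)

r = NaiveBayesRouter()
r.learn("database connection pool exhausted app_a", "team_a")
r.learn("login page returns 500", "team_b")
# A database alert from a different application still matches team_a's
# vocabulary best, so it gets misrouted:
print(r.route("database connection timeout app_b"))  # team_a
```

The classifier only sees surface vocabulary, and "database"-shaped alerts look alike across applications, which is exactly why the organisational convention of routing to the application team first kept colliding with what the model had learned.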

It also had "virtual engineers" that could do investigation (collecting logs, etc.) and remediation (basically scripts).

https://en.wikipedia.org/wiki/Amelia_(company)




