
Another one who doesn't understand the power of LLMs.

The magic is not that it can tell you things, but that it understands you with very high probability.

It's the perfect interface for expert systems.

It's very good at rewriting texts for me.

It's very good at telling me what a text is about.

And it's easy enough to combine an LLM with expert systems through APIs.
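Roughly like this (a minimal sketch assuming the OpenAI Python SDK; the expert system and its query_expert_system helper are hypothetical stand-ins):

    import json
    from openai import OpenAI

    client = OpenAI()

    def query_expert_system(query: str) -> str:
        # Hypothetical stand-in for a rule-based expert system's API.
        return f"expert-system answer for: {query!r}"

    # Describe the expert system as a tool the LLM can route requests to.
    tools = [{
        "type": "function",
        "function": {
            "name": "query_expert_system",
            "description": "Look up a structured answer in the expert system.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Is clause 7 of this Vertrag terminable?"}],
        tools=tools,
    )

    # The LLM acts as the interface: it extracts a clean query from messy,
    # even mixed-language input; the expert system gives the actual answer.
    for call in response.choices[0].message.tool_calls or []:
        args = json.loads(call.function.arguments)
        print(query_expert_system(args["query"]))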

For example, I mix languages when talking to ChatGPT, just because it doesn't matter.

And yes, it's often right enough; with GitHub Copilot, for example, it doesn't matter at all whether it's always right or only 80% right.

It only has to be better than not having it, and worth 20 bucks a month.




> it doesn't matter at all whether it's always right or only 80% right

People get fired every day for pasting code that is 80% right and worked on a couple of test cases.


And yet, nobody has regulated Stack Overflow...


Likely because human generation of content is more expensive and doesn't scale as far?

Though SO has a lot of moderation, so it's somewhat self-regulated.


What?

I've never seen that happen. On the contrary, not every team even does code review, and plenty of people regularly fix bugs in production.

One ex-colleague invalidated all the Apple device certificates and didn't get fired.

A previous tech lead wrote code that deleted customer data; we only found that out half a year later, and no one was fired.

And no one has ever been fired over a code review.


Get a summary of a contract or a law wrong, lose a few million dollars... sadly, it really is once again a piece of AI fit only for low-risk applications.


You make it sound as if all or most things we do are 'high risk'.

And I clearly showed an example of how an LLM is more an interface than an answering machine.

If an LLM understands the basics of law, it is surely much better than a lot of paralegals at transforming the information into search queries for a fact database.
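For instance (a minimal sketch assuming the OpenAI Python SDK; the database fields and the example query format are made up):

    from openai import OpenAI

    client = OpenAI()
    question = "Can my landlord raise the rent twice in one year?"
    # Ask the LLM to act as the interface: plain language in,
    # structured query for the fact database out.
    prompt = ("Turn this question into a boolean search query for a legal "
              "fact database with the fields jurisdiction, topic, keywords:\n"
              + question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    # e.g. topic:tenancy keywords:(rent AND increase AND frequency)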

And I'm pretty sure there are plenty of mistakes in existing legal work anyway.


No, most activities are low risk, but my point is that we seem to struggle with AI in high-risk areas while automating human flaws.

Also, LLMs don't understand anything other than via their language representation.





