You make it sound as if all or most things we do are 'high risk'.
And I clearly showed an example of how an LLM is more an interface than an answering machine.
If an LLM understands the basics of law, it is surely much better than a lot of paralegals at transforming the info into search queries for a fact database.
And I'm pretty sure there are plenty of mistakes in existing legal work anyway.
The magic is not that it can tell you things but that it understands you with very high probability.
It's the perfect interface for expert systems.
It's very good at rewriting texts for me.
It's very good at telling me what a text is about.
And it's easy enough to combine an LLM with expert systems through APIs.
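A minimal sketch of what that combination could look like, with the LLM call stubbed out (the prompt, the JSON query shape, and the fact database here are all made up for illustration): the LLM only translates the user's free-form question into a structured query, and the authoritative answer comes from the expert system, not from the model.

```python
import json

def llm_to_query(user_text: str) -> dict:
    # In a real setup this would be an API call to an LLM, prompted to
    # translate the question into a JSON query, e.g. with fields
    # "topic" and "keywords". Stubbed here so the sketch is runnable.
    stub_response = json.dumps({
        "topic": "tenancy law",
        "keywords": ["notice period", "termination"],
    })
    return json.loads(stub_response)

# Stand-in for the expert system / fact database.
FACT_DB = {
    ("tenancy law", "notice period"): "Statutory notice period: see the relevant tenancy act.",
}

def answer(user_text: str) -> str:
    query = llm_to_query(user_text)
    for kw in query["keywords"]:
        fact = FACT_DB.get((query["topic"], kw))
        if fact:
            # The answer is retrieved, not generated: the LLM was only
            # the interface that produced the search query.
            return fact
    return "No matching entry."
```

The point of this split is that even if the LLM is only right 80% of the time about the query, the facts it surfaces still come from a curated database.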
I, for example, mix languages when talking to ChatGPT, just because it doesn't matter.
And yes, it's often right enough; for GitHub Copilot, for example, it doesn't matter at all whether it's always right or only 80% of the time.
It only has to be better than not having it, and worth 20 bucks a month.