Jurisprudence, I hope! A huge heap of detailed cases, formal codes, decisions made and explained in detail, commented, overturned, etc. Especially civil cases.
Also, probably, medicine, especially diagnostics. Large amounts of well-documented cases, a fair amount of repeatability, and apparently non-random mechanisms behind them, so statistical models should actually detect useful correlations. They can also use more formalized tokens from lab tests, etc.
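To make that concrete, here's a toy sketch of the "formalized tokens" point: lab results are already structured numbers, so even a plain statistical model can pick up diagnostic correlations. The features, coefficients, and labels below are synthetic and purely illustrative, not a real diagnostic model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical structured features: fasting glucose (mmol/L) and HbA1c (%).
glucose = rng.normal(5.5, 1.2, n)
hba1c = rng.normal(5.4, 0.8, n)
X = np.column_stack([glucose, hba1c])
# Synthetic label loosely tied to both markers, plus noise -- a stand-in
# for the "non-random mechanism behind" the data.
y = (0.8 * glucose + 1.1 * hba1c + rng.normal(0, 1.0, n) > 11).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_)          # correlation structure recovered
print("train accuracy:", model.score(X, y))
```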
There's definitely a lot of room for lawyers and doctors to up their game. People cannot keep up with all the stuff that's published. There's simply too much of it. Doctors only read a fraction of what comes out, and lawyers have to be aware of orders of magnitude more information than is humanly possible.
LLMs let them take some shortcuts here. Even something like Perplexity, which can help you dig out relevant source material, is extremely helpful. You still have to cross-check what it digs out.
The mistake people make when evaluating LLMs is confusing knowledge with reasoning. Perplexity is useful because it uses reasoning to screen sources for knowledge, not because it has perfect recollection of what's in those sources. There's a subtle difference. It's much better at summarizing, and far less likely to hallucinate, when it bases its answers on the results of a search than when it answers from memory alone, like ChatGPT used to do (they've gotten better at this too).
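To illustrate that difference, here's a minimal sketch of the search-grounded pattern. The tiny corpus, the keyword-overlap `search`, and the stubbed `ask_llm` are all hypothetical stand-ins, not Perplexity's actual API; the point is only the shape of the pipeline: retrieve first, then have the model reason over what was retrieved.

```python
# Toy in-memory "search engine" -- knowledge lives in the sources,
# not in the model's weights.
CORPUS = {
    "https://example.org/a": "Statins lower LDL cholesterol in most patients.",
    "https://example.org/b": "Anticoagulants reduce stroke risk in AF patients.",
}

def search(query: str, k: int = 2) -> list[tuple[str, str]]:
    # Toy relevance score: how many query words appear in the document.
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: -sum(w in kv[1].lower() for w in words),
    )
    return scored[:k]

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return "<model answer grounded in the cited sources>"

def grounded_answer(question: str) -> str:
    sources = search(question)
    context = "\n".join(
        f"[{i}] {url}: {text}" for i, (url, text) in enumerate(sources, 1)
    )
    # The model's job here is reasoning over retrieved text -- screening,
    # summarizing, citing -- not recalling facts from memory.
    prompt = (
        "Answer using ONLY the sources below and cite them by number; "
        "say 'not found' if they don't cover the question.\n"
        f"{context}\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(grounded_answer("Do statins lower cholesterol?"))
```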
For lawyers and medical professionals this means they have all the best knowledge easily accessible without having to read and memorize all of it. I know some lawyer types who are really good at Scrabble, remembering trivia, etc. That's a side effect of the type of work they do, which is mostly reading and scanning through massive amounts of text so that they can recall enough to know where to look. Doctors have to do similar things with medical texts.
A friend of mine just defended his law PhD, and in the introductory lectio he said that (even) current LLMs would likely give better verdicts than human judges. Law isn't really as cognitively demanding a task as walking a dog or waiting tables.
He probably meant _brainwashed_ LLMs. They can consistently produce the desired results if you wash them the right way. It's more about personal opinion than computation. Actually, it would be fun to manipulate verdicts with prompt injections ;)
Judges are very much "brainwashed" too, and by design. The judges should apply the law, and the same case should ideally lead to the same verdict regardless of the judge.
With the caveat that this applies to sane legal systems, and not the ones where "making examples" etc. are part of the system.
> The judges should apply the law, and the same case should ideally lead to the same verdict regardless of the judge.
Hmm... :) I like this. But the reality is very different, and factors that shouldn't matter can change the outcome dramatically, like the skin color of the defendant and the judge. Pointing this out can be punished as well.
This is nonsense though. What does "better" mean in this case? A judge is not a black box with an input (the case) and an output (the verdict); the entire point of having a judge is to have empathy, conscience, and personal responsibility built into the system.
It's a blind spot that too many people have because we take those qualities for granted. LLMs unbundle them, so we need to start recognising the inherent value of humans, fast. I wrote a few words about it here: https://dgroshev.com/blog/feel-bad/
Someone has to make a call. The weight of the call rests on the person's life experience, their understanding of the context and the cost to society, their empathy toward both the accuser and the accused, and their conscience. Treating it as a black-box exercise misses the point completely.
RFP responses. In enterprise sales, there's a huge amount of back and forth with different teams on the customer side when you're selling anything but very simple applications. Most enterprise customers require certified or authoritative responses, with backup material that is tested later during formal verification.
These LLMs are already very helpful when studying scientific fields. If you're reading a scientific paper and come across an equation you don't know how to derive, LLMs can often correctly derive it from first principles. It's not 100% reliable, but when it works, it's incredibly helpful.
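As a trivially small example of the kind of first-principles step meant here (my own illustration, not an equation from any particular paper), expanding the square and using linearity of expectation recovers the variance identity:

$$
\operatorname{Var}(X) = \mathbb{E}\!\left[(X - \mathbb{E}[X])^2\right]
= \mathbb{E}[X^2] - 2\,\mathbb{E}[X]\,\mathbb{E}[X] + (\mathbb{E}[X])^2
= \mathbb{E}[X^2] - (\mathbb{E}[X])^2 .
$$

In real papers the gaps are bigger than this, which is exactly where a derivation you can check step by step is worth more than a bare answer.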
Management consulting - I expect less than 20% of what a random 24-year-old in a suit you pay $3000 per day produces is actually specific to your business problem; the rest is formulaic.