
This is one aspect of machine learning models I keep discussing with non-technical passengers of the AI hype train: they are (in their current form) unsuitable for applications where correctness is absolutely critical.



I don't know enough to make absolute statements here, but deep learning models can beat human experts at discerning signal from noise. Using that to guess at missing data and then handing the result to humans gives you the worst of both worlds: two error probabilities multiplied together. But to simply render a verdict on whether a condition exists, I'd trust a proven algorithm.
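A back-of-the-envelope sketch of that compounding, assuming the two error sources are independent (the rates here are made up for illustration):

    # Hypothetical, independent success rates -- illustrative numbers only.
    p_model_correct = 0.95   # model reconstructs the missing data correctly
    p_human_correct = 0.90   # human reads the reconstruction correctly

    # Both steps must succeed, so the success probabilities multiply:
    p_pipeline_correct = p_model_correct * p_human_correct
    print(round(p_pipeline_correct, 3))      # 0.855
    print(round(1 - p_pipeline_correct, 3))  # 0.145 -- worse than either step alone

In other words, chaining a guessing model with a human reviewer can end up less reliable than either one on its own.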


Yes, pattern recognition is one of the applications ML shines at. Now the question was about using ML to extrapolate between sparse pixels and how much humans can rely on the added detail.

The goal would be to find a way to make ML extrapolate only pixels that describe features actually present, and never to imagine detail that wasn't there in the first place. Now, I am no expert on the matter, but from what I know of deep learning models, they are really good at the latter, as they basically make statistical guesses about what would be plausible.
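The core obstacle is that the low-resolution observation simply doesn't contain the information needed to pick the right detail. A minimal sketch with numpy (the patches are made up): two different high-res images can collapse to exactly the same low-res pixels, so no upsampler, however clever, can tell from the low-res input alone which one was real:

    import numpy as np

    # Two *different* hypothetical "high-res" 4x4 patches.
    hi1 = np.array([[ 10,  10,  50,  50],
                    [ 10,  10,  50,  50],
                    [ 90,  90,  30,  30],
                    [ 90,  90,  30,  30]])
    hi2 = np.array([[  0,  20,  40,  60],
                    [ 20,   0,  60,  40],
                    [ 80, 100,  20,  40],
                    [100,  80,  40,  20]])

    def downsample(img):
        # 2x2 average pooling: each low-res pixel is the mean of a 2x2 block.
        return img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

    # Both collapse to the identical 2x2 "low-res" observation:
    print(np.array_equal(downsample(hi1), downsample(hi2)))  # True

The best any model can do is pick whichever candidate its training data makes most plausible, which is exactly the "imagining detail" problem.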

Getting a plausible guess at what looks like a convincing answer works really well for answering a question. But the problem at hand is more like predicting the words someone said based only on the first and last word of a sentence. Imagine a criminal case where the evidence is fragmented like that: I am pretty sure an LLM could give a convincing prediction here, but I am not sure how much you could rely on that prediction reflecting what was actually said. I certainly wouldn't feel comfortable with a conviction based on such a prediction, even if it matched the ground truth 90% of the time.


There are a lot of models that are simply good at that without hallucinating nonsense. LLMs are a specific thing with their own tradeoffs and goals. If you have an ML model that rates how much a microscope photo looks like an anomaly in this person's blood on a scale from 0 to 100, it can certainly do better than a human.
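As a rough sketch of what such a discriminative model looks like (toy synthetic data and scikit-learn stand in for a real pipeline; nothing here is an actual diagnostic model):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy stand-ins for features extracted from microscope photos
    # (e.g. cell shape/texture statistics) -- purely synthetic data.
    rng = np.random.default_rng(0)
    X_normal  = rng.normal(0.0, 1.0, size=(200, 5))
    X_anomaly = rng.normal(1.5, 1.0, size=(200, 5))
    X = np.vstack([X_normal, X_anomaly])
    y = np.array([0] * 200 + [1] * 200)

    clf = LogisticRegression().fit(X, y)

    # The output is a *score*, not invented pixels:
    sample = rng.normal(1.0, 1.0, size=(1, 5))
    score = 100 * clf.predict_proba(sample)[0, 1]
    print(f"anomaly score: {score:.0f}/100")

The crucial difference to a generative model: a bad answer here is a miscalibrated number a human can audit, not fabricated detail that looks like evidence.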


As long as AI makes things better on average, it's useful. It doesn't have to be 100% correct.


So if an AI fantasized your face into the extrapolated pixels of the evidence in a documented murder case, you would be happy with the conviction, because on average it might be somewhat correct?

I don't want to hurt anybody's feelings by stating that AI isn't a magic wand that makes everything better. But every technology has use cases at which it excels (e.g. pattern recognition) and use cases for which it is fundamentally unsuitable. If you try to tighten a nut with a hammer, that doesn't mean hammers suck; it means the user has the wrong idea of what a hammer is capable of.

The point is: Don't be that person if you can avoid it.


There are applications - such as finding out whether you have a tumor or not - where "improving on average while ignoring outliers" is not acceptable.
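A quick illustration of why the average alone can mislead (the numbers are made up):

    # Two hypothetical tumor classifiers, both "95% accurate" on 1000
    # scans, of which 50 actually show a tumor and 950 don't.
    tumors, healthy = 50, 950

    # A: misses 10 tumors, raises 40 false alarms.
    acc_a = ((tumors - 10) + (healthy - 40)) / (tumors + healthy)

    # B: misses *every* tumor, never raises a false alarm.
    acc_b = ((tumors - 50) + (healthy - 0)) / (tumors + healthy)

    print(acc_a, acc_b)  # 0.95 0.95 -- identical "on average"

Both look the same on average, but classifier B fails every single patient for whom the test actually mattered.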


This is not true, but it is a major challenge. See https://www.pathai.com/


That is why I said "in its current form".



