Because humans have a 0% error rate? Whether or not the error function can be reduced to zero, I anticipate that human meddling will soon be more of a wrench in the gears of machine cognition than a source of error correction. We see this already with the lobotomization of GPT models over "safety"/copyright concerns.


A key difference is that humans can validate and perform orthogonal checks; we can prove things. An LLM, which is essentially just an NLP model, is picking the most probable answer to "what should follow this word, given a question that 'looks' like this?" Once the answer is chosen, the AI is left with no other options. If someone says the choice is wrong, what can the AI do? Choose a less likely option?
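To make that concrete, here's a toy sketch of what "choose a less likely option" amounts to for a next-token sampler. The vocabulary, logits, and the prompt are all invented for illustration; the point is that after a rejection, the model's only recourse is another draw from the same distribution, not an independent check:

    # Toy sketch: next-token choice for a hypothetical prompt "5 * 8 =".
    # Vocabulary and logits are made up for illustration.
    import math, random

    vocab = ["40", "45", "35", "13"]   # candidate next tokens
    logits = [4.0, 1.5, 1.0, 0.2]      # hypothetical model scores

    # Softmax turns scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # Greedy decoding: take the most probable token.
    best = vocab[probs.index(max(probs))]

    # If told "wrong", the only lever is to re-sample from the same
    # distribution (perhaps excluding the rejected token). There is no
    # route to verifying correctness, just a different draw.
    retry = random.choices(vocab, weights=probs)[0]
    print(best, retry)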

For example, humans can prove that 5 times 8 is 40 in a variety of ways. You might make an arithmetic mistake, but you can check your answer. An AI can't check its answer; it does not know when it is wrong (it picked the answer it 'thought' was right, so it has no basis for treating that answer as wrong, otherwise it would have chosen a different one).
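For instance, here's a rough sketch (my example, not anything an LLM does internally) of what such orthogonal checks look like: several independent routes to 5 * 8 that must all agree.

    # Sketch of "orthogonal checks" on the claim 5 * 8 = 40:
    # verify one result by several independent routes.
    claim = 5 * 8

    assert claim == sum([5] * 8)        # repeated addition of 5, eight times
    assert claim == sum([8] * 5)        # commutativity: 8 added five times
    assert claim / 8 == 5               # inverse operation (division)
    assert claim == (5 * 10) - (5 * 2)  # distributivity: 8 = 10 - 2
    print("5 * 8 =", claim, "checks out")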



