Hacker News

You would have to grade every user on every knowledge axis, though. Just because someone is an expert in software doesn't mean you should believe their takes on medicine, no matter how good-faith their model interactions appear. I'd argue that coming up with an automated way to determine the objective truthfulness of information would be among the greatest creations of humanity (basically "solving" epistemology), so this isn't a small task.
I've been thinking about how this happens with human cognitive development. There's a constant reinforcement mechanism that simply compares one's predicted reality with actual reality. The machines lack an authoritative reality.

If we had to grade the truthiness of data sources, our sight and other primary senses would probably be #1. Some gossip we heard from a 6-year-old would be near the bottom.

We know how to grade these data sources based on longitudinal experience, and they are graded on multiple axes. For instance, Angela is wrong about most facts but always right about matters of the heart.
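A minimal sketch of that idea: keep a running, per-source, per-axis reliability score that is updated with each longitudinal observation. All names here (the class, the axes, "angela") are hypothetical illustrations, not a proposed system.

```python
from collections import defaultdict

class SourceGrader:
    """Toy sketch: grade data sources on multiple axes by tracking
    how often each source turned out to be right on each axis."""

    def __init__(self):
        # (source, axis) -> [times_correct, times_observed]
        self.tally = defaultdict(lambda: [0, 0])

    def observe(self, source, axis, was_correct):
        """Record one longitudinal observation of a source on an axis."""
        correct, total = self.tally[(source, axis)]
        self.tally[(source, axis)] = [correct + int(was_correct), total + 1]

    def reliability(self, source, axis):
        """Laplace-smoothed accuracy estimate; 0.5 for unseen pairs."""
        correct, total = self.tally[(source, axis)]
        return (correct + 1) / (total + 2)
```

So a source can score low on one axis and high on another, and an unobserved source defaults to a neutral 0.5 rather than blind trust.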


Of course. Each user input would be compared with other users' input and with data already in the model. Only legitimate, cross-referenced data could be used as-is; other data could still be used but marked as "possibly controversial". A good model should know that controversial data exists too and should distinguish it from proper scientific data on each topic.
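The cross-referencing policy described above could look something like the following sketch, where a claim is labeled by how many existing sources corroborate or conflict with it. The thresholds and label names are arbitrary assumptions for illustration.

```python
def label_claim(corroborating, conflicting):
    """Toy sketch: label incoming data by cross-reference agreement.
    corroborating/conflicting are counts of existing sources that
    agree or disagree with the claim; thresholds are illustrative."""
    if conflicting == 0 and corroborating >= 2:
        return "verified"           # legit, cross-referenced data
    if corroborating > 0 and conflicting > 0:
        return "possibly-controversial"  # usable, but flagged
    return "unverified"             # not enough signal yet
```

This keeps controversial data in the system rather than discarding it, while marking it so the model can treat it differently from well-corroborated data.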


