Hacker News

Are employees aware that they can't trust AI results uncritically, as the article mentions? See: the lawyers who have been disciplined by judges, or the doctors who aren't verifying the conversation transcriptions and medical notes generated by AI.

Does your organization have records-retention or legal-hold requirements that employees must be aware of when using some rando AI service?

Will employees be violating NDAs or other compliance requirements (HIPAA, etc.) when they ask questions of, or submit data to, an AI service?

For an LLM that has access to the company's documents, did the team rolling it out verify that all user access-control restrictions remain in place when users query those documents through the LLM?
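To make the access-control point concrete: in a document-retrieval setup, permission checks have to happen before retrieved text reaches the model's prompt, not after. A minimal sketch, assuming a hypothetical ACL model based on group membership (the names `Document`, `user_can_read`, and `retrieve_for_user` are illustrative, not from any particular framework):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)

def user_can_read(user_groups: set, doc: Document) -> bool:
    # Mirror the source system's ACL: the user must share at least
    # one group with the document's permitted groups.
    return bool(user_groups & doc.allowed_groups)

def retrieve_for_user(candidates: list, user_groups: set) -> list:
    # Filter search candidates BEFORE they are placed in the LLM
    # prompt; otherwise the model can leak restricted content to
    # users who couldn't open the document directly.
    return [d for d in candidates if user_can_read(user_groups, d)]

docs = [
    Document("salary-2024", "Executive compensation...", {"hr"}),
    Document("handbook", "Office policies...", {"hr", "eng", "all"}),
]
visible = retrieve_for_user(docs, {"eng"})  # only "handbook"
```

The key design point is that the filter reuses the same permission data as the underlying document store, so the LLM layer can never grant broader access than the source system does.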

Is the AI service actually equivalent to, better than, or even just good enough relative to the employees who were laid off or retasked?

This stuff isn't necessarily specific to AI and LLMs, but the hype train is moving so fast that people are having to relearn very hard lessons.


