A partner showed me their CRM tool; it had an AI component that created a profile of each of their contacts. The profiles were pretty complete and clearly drew on info from LinkedIn and other sources.
The personality summary was concerning: deeply accurate in some respects and deeply inaccurate in others. Most concerning, it said I was "risk adverse" and "struggles to make decisions with incomplete data".
After 35 years at startups and in independent contracting, risk tolerance and making decisions with incomplete data are kind of in my wheelhouse.
Worse, if this profile were being shown to potential employers, it could (would?) be a deal breaker. It's kind of like being judged by your MBTI results.
Social media advertising profiles are similar. I've seen what FB and Twitter thought of me, and it included interests in various spectator sports, which were wrong both as a general statement about my personality and because those particular sports have no cultural significance outside the USA.
They also listed several languages I don't speak.
And they showed me ads for both dick pills and breast surgery; for a lawyer who specialised in renouncing, for tax purposes, a citizenship I've never had when moving to a country I had already left; and for an announcement by the government of a country I don't live in about a ban on a specific dog breed I'd never heard of, when I don't own a dog and never have.
And people complain about LLMs making things up :P
I'm no expert in tort law, but that really sounds like it could be libel. AFAIK, proving reckless disregard for the truth is one of the ways you can build a case.
In a decision-making process involving many parties, it is in everyone's interest not to turn on each other when a decision goes wrong. Just let it slide.
With LLMs, it's impossible to tell whether a statement in isolation is a hallucination or not.
It's better for the tool to aggregate the information and then provide the sources, so you can verify whether any given deduction is well supported.
I guess with a few more iterations you could have another agent verify whether each deduction is well justified, but that step will have a significant error rate too.
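That verifier-agent idea could be sketched roughly like this. The `call_llm` function below is a hypothetical stand-in for whatever model API you'd actually use (here it's stubbed out so only the control flow is shown); the claims and source names are made up for illustration:

```python
# Sketch of a second-pass "verifier agent" over a profile's deductions.
# call_llm is a hypothetical stub standing in for a real model API call.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send the prompt to a model.
    # This fake verifier approves only claims that cite at least one source.
    return "SUPPORTED" if "Source:" in prompt else "UNSUPPORTED"

def verify_deductions(deductions: dict[str, list[str]]) -> dict[str, bool]:
    """Ask a second model whether each claim is backed by its listed sources."""
    results = {}
    for claim, sources in deductions.items():
        evidence = "\n".join(f"Source: {s}" for s in sources)
        prompt = (
            f"Claim: {claim}\n{evidence}\n"
            "Answer SUPPORTED or UNSUPPORTED."
        )
        results[claim] = call_llm(prompt) == "SUPPORTED"
    return results

# Example profile: one claim with a cited source, one with none.
profile = {
    "35 years in startups": ["LinkedIn"],
    "risk averse": [],
}
print(verify_deductions(profile))
```

Even with a real model behind `call_llm`, the verifier is itself an LLM, so this only reduces the error rate rather than eliminating it.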
IANAL, but I would think that could open up both the software maker and its users to libel lawsuits for unscientifically speculating about a person's qualities with little or no proof and no ability to substantiate their claims.
What we need is (a) AI literacy (education), and (b) AI-generated content being marked as such. Then no one would take such a personality summary at face value.