Yes, but they definitely have a vested interest in scaring people into buying their product to protect themselves from an attack. For instance, this attack requires 1) the victim to allow Claude to access a folder with confidential information (which they explicitly tell you not to do), and 2) the attacker to convince the victim to upload a random .docx as a skills file, which contains the "prompt injection" as an invisible line. However, the prompt injection text becomes visible to the user when it is output to the chat in markdown. Also, the attacker has to use their own API key to exfiltrate the data, which would identify them. In addition, it only works on an old version of Haiku. I guess prompt armour needs the sales, though.
Not quite: Polymarket is decentralized, so they're even further removed from the outcome. When a dispute like this happens, it goes to a vote in the UMA DAO, essentially a decentralized "democratic" vote. What people are actually complaining about is UMA whales skewing the vote.
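For a sense of why token-weighted voting is the sticking point, here's a minimal sketch of whale skew (the balances are made-up illustrative numbers, not real UMA holdings):

```python
# Sketch of token-weighted dispute resolution, as in a DAO vote.
# Balances are invented for illustration; they are not real UMA holdings.
votes = {
    "whale": (5_000_000, "NO"),  # one large holder
    **{f"small_{i}": (10_000, "YES") for i in range(100)},  # 100 small holders
}

tally = {}
for holder, (tokens, choice) in votes.items():
    tally[choice] = tally.get(choice, 0) + tokens

print(tally)                       # {'NO': 5000000, 'YES': 1000000}
print(max(tally, key=tally.get))   # 'NO' wins despite losing the headcount 100 to 1
```

One holder with enough tokens decides the outcome even though a hundred smaller voters disagree, which is exactly the complaint.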
Out of nowhere, Cognition with a banging product. Probably not 100% there yet, but the idea is so good I'll be surprised if all the other IDEs aren't copying it within 6 months.
Haha! LLMs themselves are pure edge cases because they are non-deterministic. But if you add a 7-step pipeline on top of that, it's edge cases on top of edge cases.
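To put rough numbers on the compounding, here's a minimal sketch; the 95% per-step reliability is a made-up figure for illustration, not a measurement of any model:

```python
# Sketch: how per-step flakiness compounds across a multi-step LLM pipeline.
# The 0.95 reliability figure is purely illustrative.
per_step_reliability = 0.95
steps = 7

pipeline_reliability = per_step_reliability ** steps
print(f"End-to-end success rate: {pipeline_reliability:.1%}")                    # ~69.8%
print(f"Chance of hitting at least one edge case: {1 - pipeline_reliability:.1%}")  # ~30.2%
```

Even with each step right 19 times out of 20, roughly a third of runs through a 7-step pipeline hit at least one failure somewhere.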