Hi — I’m the security firm CEO mentioned, though I wear a few other hats too: I’ve been maintaining open source projects for over a decade (some with hundreds of millions of npm downloads), and I taught Stanford’s web security course (https://cs253.stanford.edu).
Totally understand the skepticism. It’s easy to assume commercial motives are always front and center. But in this case, the company actually came after the problem. I’ve been deep in this space for a long time, and eventually it felt like the best way to make progress was to build something focused on it full-time.
That's not the technical report; it's just a blog article that links to someone else's paper and finishes off by promoting a product:
"Socket addresses this exact problem. Our platform scans every package in your dependency tree, flags high-risk behaviors like install scripts, obfuscated code, or hidden payloads, and alerts you before damage is done. Even if a hallucinated package gets published and spreads, Socket can stop it from making it into production environments."
Read this instead; it's the technical report that is only linked to and barely mentioned in the article: https://socket.dev/blog/slopsquatting-how-ai-hallucinations-...