We use a mix of static analysis and AI. Flagged packages are escalated to a human review team. If we catch a malicious package, we notify our users, block installation, and report it to the upstream package registries. Suspected malicious packages that have not yet been reviewed by a human are blocked for our users, but we don't try to get them removed until after they have been triaged by a human.
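To make the flow concrete, here's a rough sketch of that triage logic in Python. All names, scores, and thresholds are illustrative, not our actual implementation:

    # Rough sketch of the triage flow described above; names and
    # thresholds are illustrative, not the production system.
    from dataclasses import dataclass

    @dataclass
    class ScanResult:
        package: str
        static_score: float  # signal from static analysis
        ai_score: float      # signal from the AI scanner
        human_verdict: str | None = None  # None until triaged

    def triage(r: ScanResult) -> str:
        flagged = r.static_score > 0.5 or r.ai_score > 0.5
        if not flagged:
            return "allow"
        if r.human_verdict is None:
            # Suspected but not yet reviewed: block for our users,
            # but don't report upstream until a human triages it.
            return "block_pending_review"
        if r.human_verdict == "malicious":
            # Notify users, block installs, report to the registry.
            return "block_and_report"
        return "allow"  # cleared by human review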

In this incident, we detected the packages quickly, reported them, and they were taken down shortly after. Given how high-profile the attack was, we also published an analysis soon after, as did others in the ecosystem.

We try to be transparent about how Socket works. We've published the details of our systems in several papers, and I've also given a few conference talks on how our malware scanner works:

* https://arxiv.org/html/2403.12196v2

* https://www.youtube.com/watch?v=cxJPiMwoIyY



So, from what I understand from your paper, you're using ChatGPT with careful prompts?


You rely on LLMs riddled with hallucinations for malware detection?


I'm not exactly pro-AI, but even I can see that their system clearly works well in this case. If you tune the model to favour false positives, with a quick human review step, I can imagine response times being cut from days to hours (and your customers getting their updates that much faster).
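For the unconvinced: "tune the model to favour false positives" just means lowering the decision threshold. Something like this, with made-up numbers:

    # Made-up numbers: a low threshold trades precision for recall.
    def should_flag(score: float, threshold: float = 0.2) -> bool:
        # Below a 0.5 cutoff you get more false positives (cheap:
        # a quick human review) and fewer false negatives (expensive:
        # malware shipped to users).
        return score >= threshold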


You are assuming that they build their own models.


He literally said "Flagged packages are escalated to a human review team" in the second sentence. Wtf is the problem here?


What about packages that are not "flagged"? There could be hallucinations when deciding whether or not to flag a package.


>What about packages that are not "flagged"?

You can't catch everything with normal static analysis either. The LLM just produces an additional signal in this case; false negatives can be tolerated.
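In other words, the detectors are OR-ed together, so an LLM false negative costs nothing that static analysis wasn't already missing. A toy version (hypothetical names):

    # Toy example: the LLM is one signal among several.
    def flagged(static_hits: list[str], llm_score: float) -> bool:
        # A miss by the LLM only matters if the static rules also
        # missed it; a hit by either one adds coverage.
        return bool(static_hits) or llm_score > 0.5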


static analysis DOES NOT hallucinate.


So what? They're not replacing standard tooling like static analysis with it. As they mention, it's being used as additional signal alongside static analysis.

There are cases an LLM may be able to catch that their static analysis currently misses. Should they just ignore those scenarios, thereby doing the worst thing by their customers, just to stay purist?

What is the worst-case scenario you're envisioning from an LLM hallucinating in this use case? To me the worst case is that it incorrectly flags a package as malicious, which, given they do a human review anyway, isn't the end of the world. On the flip side, you've got the LLM catching cases not yet recognised by static analysis, which can then be accounted for in the future.

If they were just using an LLM, I might share similar concerns, but they're not.


well, you've never had a non-spam email end up in your spam folder? or the other way around?

when static analysis does it, it's called a "misclassification"


> We use a mix of static analysis and AI. Flagged packages are escalated to a human review team.

“Chat, I have reading comprehension problems. How do I fix it?”


Reading comprehension problems can often be caught with some static analysis combined with AI.


"LLM bad"

Very insightful.



