This is exactly the kind of subject matter for which using a chatbot is incredibly dangerous. In security arguments, the gap between "looks good but fundamentally broken" and "logically sound" is very small, and the consumer of the content likely has little to no developed taste on the matter (or they would seek out more specific resources).

If you have some authoritative curation of the resources, it may have promise, but then the question becomes: why not make the product of that curation directly consumable, rather than feeding it through an opaque layer?

Inventing problems here, people. It was a nice society while it lasted.



To me, the value of an ML tool would be in the step after we run things through static analysis (e.g. linters for bad coding practices, SCA scans for known CVEs) and before we send this off to our security team for an internal audit and pen test. It would be a tool that we add to our existing tool set, so that we can catch issues earlier, rather than something to replace our pen testers.
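The ordering described above can be sketched as a simple staged gate: cheap static checks run first, the ML review runs on whatever passes them, and only clean builds proceed to the human security audit. This is a minimal illustration, not any real tool's API; the stage names, checks, and findings are all hypothetical.

```python
# Minimal sketch of a staged security pipeline: linting, then SCA,
# then an ML review, stopping at the first stage that reports findings
# so issues surface before the human audit. All stages are toy stand-ins.
from typing import Callable

Stage = tuple[str, Callable[[str], list[str]]]  # (name, check(code) -> findings)

def run_pipeline(code: str, stages: list[Stage]) -> tuple[str, list[str]]:
    """Run stages in order; return the first stage with findings, or hand off."""
    for name, check in stages:
        findings = check(code)
        if findings:
            return name, findings       # caught early, before the pen test
    return "ready-for-audit", []        # clean: hand off to the security team

# Toy stand-ins for a linter, an SCA scan, and an ML reviewer.
lint = lambda code: ["eval() used"] if "eval(" in code else []
sca  = lambda code: ["dep with known CVE"] if "vulnlib" in code else []
ml   = lambda code: ["suspicious exec path"] if "exec(" in code else []

stages: list[Stage] = [("lint", lint), ("sca", sca), ("ml-review", ml)]

stage, findings = run_pipeline("import vulnlib", stages)
```

The point of the ordering is cost: each stage is more expensive than the last, so the ML layer only sees code that already passed the cheap deterministic checks.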

Even if you do have some authoritative curation of resources, it's difficult for dev teams to consume it. And even for those who do understand security, checking through it all is tedious work. I wish it weren't the case, but the reality is that most teams have neither the specialist skills nor the motivation to grind away at this for a significant chunk of their time.



