You're absolutely right about that TOCTOU pattern - it's terrible! That regex would flag every if cache.has(key) then cache.add(key, value) as a race condition. Thank you for the specific example.
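
To make that concrete, here's the kind of naive check-then-act rule that produces the false positive (a hypothetical illustration, not the actual pattern shipped in the tool):

    # Hypothetical naive TOCTOU rule: any ".has(x)" later followed by
    # ".add(x, ...)" on the same object gets flagged, even in
    # single-threaded code where no race is possible.
    import re

    NAIVE_TOCTOU = re.compile(r"if\s+(\w+)\.has\((\w+)\)[\s\S]*?\1\.add\(\2\b")

    benign = """
    if cache.has(key):
        return cache.get(key)
    cache.add(key, value)
    """

    print(bool(NAIVE_TOCTOU.search(benign)))  # True: a harmless memoization idiom gets flagged

A line-level regex has no way of knowing whether the surrounding code is even concurrent, which is exactly why this pattern needs to be fixed or dropped.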

This perfectly illustrates why I need community input. I'm not a developer - I literally can't code. I built this entire tool using Claude over 250 hours because I needed something to audit the code that Claude was writing for me. It's turtles all the way down!

The "14 phases" you mentioned are in theauditor/pipelines.py:_run_pipeline(): - Stage 1: index, framework_detect - Stage 2: (deps, docs) || (patterns, lint, workset) || (graph_build) - Stage 3: graph_analyze, taint, fce, consolidate, report

The value isn't in individual patterns (which clearly need work), but in the correlation engine. Example: when you refactor Product.price to ProductVariant.price, it tracks that change across your entire stack - finding frontend components, API calls, and database queries still using the old structure. SemGrep can't do this because it analyzes files in isolation.
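
A stripped-down illustration of that idea (a toy sketch, nothing like the real engine): after the rename, walk the repo and report every file, in any language, still referencing the old attribute:

    # Toy cross-stack "stale reference" check after renaming
    # Product.price to ProductVariant.price. Illustrative only.
    from pathlib import Path

    OLD_REF = "Product.price"

    def stale_references(repo_root="."):
        """Yield (file, line_no, line) for lines still using the old name."""
        for path in Path(repo_root).rglob("*"):
            if path.suffix not in {".py", ".js", ".ts", ".tsx", ".vue", ".sql"}:
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for no, line in enumerate(text.splitlines(), 1):
                if OLD_REF in line:
                    yield path, no, line.strip()

    for path, no, line in stale_references():
        print(f"{path}:{no}: {line}")

The correlation step's job is to stitch those per-file hits into one change set instead of reporting each file in isolation.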

You're 100% correct that I oversold it with "solves ALL your problems" - that's my non-developer enthusiasm talking. What it actually does: provides a ground truth about inconsistencies in your codebase that AI assistants can then fix. It's not a security silver bullet, it's a consistency checker.

The bad patterns like that TOCTOU check need fixing or removing. Would you be interested in helping improve them? Someone with your eye for detail would make this tool actually useful instead of security theater.



Anyone else find it offensive that someone just takes your comment and shoves it into Claude for a response?


Answer starting with "You're absolutely right!" means instant ignore


You're absolutely wrong. - lol.


Do you care about the messenger or the message?

I use AI to communicate because I have dyslexia and ADHD. It helps me articulate technical concepts clearly. The irony isn't lost on me - I built a tool to audit AI-generated code, using AI, because I can't code, and now I'm using AI to explain it.

If that offends you more than 204 SQL injections in production code, we have different priorities.


This is the stuff of nightmares. You have vibe-coded 50k lines of Python over 250 hours, but you can't articulate what it does or how it does it without having the same AI read the code back and describe it to you? Like your LLM said, it IS turtles all the way down! You seem to think that your project solves these problems it has set out to solve, but as displayed in the parent comment, a lot of it is way insufficient. Are you blindly trusting the LLM Yes Man?


Yes, I can't code, but I can build systems - more news at eleven... That's why I built this.

The 204 SQL injections it found in production? Those were real, produced by industry-standard tools...

The nightmare isn't that I used AI to build a security tool. The nightmare is that your production code was probably written the same way.

At least I'm checking mine.


What offends me is a "security scanner" for "ground truth" using fake checksums to verify integrity of its dependencies ;-)

https://github.com/TheAuditorTool/Auditor/commit/f77173a5517...


Yeah, I don't use Nix, so when asked to follow the link? It didn't work as it should. And because I don't use Nix? It was hard to catch until my friend did...

That said? Did the hash check fail? Yes it did - security working as intended... Anything more to add? :)



