Hacker News | veunes's comments

That's a dangerous distinction in the AI era. If you reduce your work to solving problems given a set of requirements, you put yourself in direct competition with agents. LLMs are perfect for taking a clear spec and outputting code. A "pure" engineer who refuses to understand the product and the user risks becoming just middleware between the PM and the AI. In the future, the lines between PM and Tech Lead will blur, and the engineers who survive will be those who can not only "do as told" but propose "how to do it better for the business"

The only way to be bulletproof is to be the person who takes ownership. AI can generate an app, but it can't answer to a court, clients, or the CTO when the database crashes on Black Friday. Shift from writing code to risk management. Architecture, security, complex legacy integrations, and distributed systems debugging are zones where the cost of error is high, and where AI still operates like a random number generator. You need to be the one who knows why the system works, not just the one who writes the syntax

Fair, but the threat model matters here. For a static mortgage calculator, the data leak risk is zero (if it's client-side). The risk here is different - logical. If the AI botches the formula and someone makes a financial decision based on that, that's the problem. For "serious" projects, vibe coding must stop where testing and code audits begin

Yep, including that too, obviously - but OP isn't trying to market this, I think, just sharing his passion project

I feel you. There's a massive difference between crafting and assembling. AI turns us from artisans carving a detail into assembly line operators. If your joy came from solving algorithmic puzzles and optimizing loops, then yes, AI kills that. It might be worth looking into low-level dev (embedded, kernel, drivers) or complex R&D. Vibe coding doesn't work there yet, and the cost of error is too high for hallucinations. Real manual craftsmanship is still required there.

Vibe coding will eventually come for that.

The cost of hallucinations, though - you potentially have a stronger point there. It wouldn’t surprise me if that fails to sway some decision makers, but it does give the average dev a bit more ground to work with.


It helped me finish my WebRTC client for an ESP32 microcontroller. That's fairly low level. It did it without breaking a sweat - 2hrs, and we had a model which works with my pipecat-based server.

I loaded the lowest-level piece of software I wrote in the last 15 years - a memory-spoofing aimbot PoC exploiting architectural issues in x86 (things like memory breakpoints set on logical memory - not hw addresses - allowing it to read memory without tripping kernel-level detection tools, the ability to trigger PFs on pages where the PoC was hiding to escape detection, low-level gnarly stuff like this). I asked it to clean up the code base and propose why it would not work under the current version of Windows. It did that pretty well.

Lower-level stuff does of course exist, but not a whole lot IMHO. I would not assume Claude will struggle with kernel-level stuff at all. If anything, it is better documented than the over-abstracted mainstream stuff.


The key phrase here is "I still had domain expertise". Many miss that AI is a multiplier. If you multiply 0 by AI, you get 0 (or hallucinated garbage). You multiplied your knowledge of compound interest and UX by AI's speed. Without your background, the AI would have generated a beautiful interface that calculates mortgages using a savings account formula. Your role shifted from "code writer" to "logic validator" - this is the future of development for domain specialists
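To make that "savings account formula" failure mode concrete, here's a minimal sketch (all numbers illustrative, not from the thread) of the correct amortization formula next to the kind of plausible-looking but wrong calculation a domain expert would catch:

```python
def amortized_payment(principal, annual_rate, months):
    """Standard annuity formula: P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

def naive_payment(principal, annual_rate, months):
    """Wrong for a mortgage: grows the whole balance like a savings
    account, then splits it evenly across the term."""
    r = annual_rate / 12
    return principal * (1 + r) ** months / months

# $300k at 6% over 30 years - illustrative inputs.
print(round(amortized_payment(300_000, 0.06, 360), 2))  # -> 1798.65
print(round(naive_payment(300_000, 0.06, 360), 2))      # far higher, and wrong
```

Both functions run and return a tidy-looking number; only domain knowledge tells you which one belongs in a mortgage calculator.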

> Your role shifted from "code writer" to "logic validator"

No, it didn't. In fact, your job shifted from code writer to code fixer


There's a health and capacity angle. A lot of today's grandparents are still working, dealing with their own medical issues, or simply don't have the energy to provide full-time childcare

When people say "there are barely any kids," they're often describing the outcome of past policy choices, not a reason to avoid changing them

Subsidizing childcare helps families stay, but it doesn't address why childcare, housing, and everything else are so expensive in the first place

Housing is expensive because of a lack of housing supply combined with high housing demand, driven both by soft (non-finance-driven) desirability conditions and by a sufficient concentration of very-high-income, price-insensitive buyers bidding up prices.

Everything else is so expensive because of the second of those reasons, plus everyone having higher salary demands because of high housing prices.

Increasing housing supply can mitigate the problem somewhat, but the other drivers of cost will still remain, and I think most people would agree you don't actually want to deal with the other cost drivers too aggressively. I mean, even dealing with the high-income-earners-as-cost-drivers problem softly by raising high-end marginal tax rates somewhat is a highly controversial position.


Housing is expensive because homeowners have weaponized zoning laws to make it illegal to build housing the city needs.

Sadly, in San Francisco, renters have been even more NIMBY than homeowners.

https://www.mhankinson.com/documents/renters_preprint.pdf


Just because of supply/demand alone? If sitting in a comfy office pays as well as it does, why would people take care of children or build houses for way less?

I think this is just Baumol's cost disease in action: you really can't have amazingly well-paying jobs (like in SF generally) AND super-low-paid laborers without some kind of class system/feudalism/etc.


Childcare is fundamentally expensive because it fundamentally involves a large portion of a person's labor and this labor needs to be local to you. One person can only watch so many infants (and we have reasonable regulations limiting the number people are allowed to watch).

Even if you eliminate all other overhead costs (rent, admin, materials, insurance, etc) you are still paying for a large portion of somebody's salary.
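The arithmetic floor is easy to sketch - all numbers below are illustrative assumptions, not sourced figures:

```python
# Back-of-envelope floor on per-child cost, assuming a 1:4
# caregiver-to-infant ratio and a $40,000/yr caregiver salary,
# with every other overhead (rent, insurance, admin) set to zero.
salary = 40_000
infants_per_caregiver = 4
billable_hours = 50 * 40  # 50 weeks x 40 hours/week

per_child_yearly = salary / infants_per_caregiver
per_child_hourly = per_child_yearly / billable_hours
print(per_child_yearly, per_child_hourly)  # -> 10000.0 5.0
```

Even under those generous assumptions, you can't get below ~$10k/yr per infant - overhead and a living wage only push it up from there.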

The reason childcare feels expensive is because society has spent generations undervaluing childrearing as labor.


Reading works when you generate 50 lines a day. When AI generates 5,000 lines of refactoring in 30 seconds, linear reading becomes a bottleneck. Human attention doesn't scale like GPUs. Trying to "just read" machine-generated code is a sure path to burnout and missed vulnerabilities. We need change summarization tools, not just syntax highlighting

Whether you or someone/something else wrote it is irrelevant

You’re expected to have self-reviewed and to understand the changes made before requesting review. You must be able to answer questions reviewers have about it. Someone must read the code. If not, why require a human review at all?

Not meeting this expectation = user ban in both kernel and chromium


This is exactly the gap I'm worried about. Human review still matters, but linear reading breaks down once the diff is mostly machine-generated noise. Summarizing what actually changed before reading feels like the only way to keep reviews sustainable.

The tools exist, they're just rarely used in web dev. Look into ApiDiff or tools using Tree-sitter to compare function signatures. In the Rust/Go ecosystem, there are tools that scream in CI if the public contract changes. We need to bring that rigor into everyday AI-assisted dev. A diff should say "Function X now accepts null", not "line 42 changed"
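As a toy illustration of the idea (using Python's stdlib `ast` instead of Tree-sitter, with hypothetical function names), a signature-level diff that reports "function X changed shape" rather than "line 42 changed" is only a few lines:

```python
import ast

def signatures(source: str) -> dict:
    """Map each top-level function name to its positional argument names."""
    tree = ast.parse(source)
    return {
        node.name: [a.arg for a in node.args.args]
        for node in tree.body
        if isinstance(node, ast.FunctionDef)
    }

def signature_diff(old: str, new: str) -> list[str]:
    """Report functions whose argument list changed between two versions.
    (Added/removed functions are omitted for brevity.)"""
    old_sigs, new_sigs = signatures(old), signatures(new)
    return [
        f"{name}: {old_sigs[name]} -> {args}"
        for name, args in new_sigs.items()
        if name in old_sigs and old_sigs[name] != args
    ]

before = "def charge(user, amount): ..."
after = "def charge(user, amount, currency=None): ..."
print(signature_diff(before, after))
# -> ["charge: ['user', 'amount'] -> ['user', 'amount', 'currency']"]
```

A real tool would also track return types, decorators, and keyword-only args, but even this crude contract check surfaces the changes that actually matter in a 5,000-line AI refactor.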
