
Can the same thing be said for using Docker Compose etc. on a VPS to host a web app? I.e., can you get the ergonomics / ease of use of Fly, Render?

Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points:

1. Fear of misconfiguring Linux

2. Fear of Docker / Compose complexity

3. Fear of "what if it breaks at 2am?"

CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).

So the tradeoff has changed from:

“Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”
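
For what it's worth, the yak being shaved usually amounts to a file like the following. A minimal sketch, assuming a generic web app with a Postgres backend; the service names, image, and ports are illustrative, not a hardened production config:

    # compose.yaml -- minimal sketch, not production-hardened
    services:
      web:
        build: .                  # assumes your app has a Dockerfile
        restart: unless-stopped   # auto-restart covers most of the 2am fear
        ports:
          - "80:8000"             # assumes the app listens on 8000
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:16
        restart: unless-stopped
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app  # use a real secret in practice
          POSTGRES_DB: app
        volumes:
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata:

Then `docker compose up -d` on the VPS brings the whole thing up; that plus the restart policy is most of what you were paying the platform for.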


Crayons, paper, some Magna-Tiles.

It would be a deep irony if LLMs ended up ushering in the social rupture that never arrived in the industrial era. When the pigs turn hogs and refuse to share even the scraps, they shouldn’t be surprised if the system they depend on becomes their undoing.


We should all hope so. It's clear that mass surveillance, the vast psyops architecture including social media platforms, autonomous drone warfare, Starlink & Neuralink, the whole Silicon Valley project in general, is intended to have everyone eventually so discombobulated and "interfered with" that they can't even tell they're experiencing exploitation that should cause discomfort and radicalization (and to quickly dispatch the few stragglers who can). It's either social rupture or total oligarch victory in the class war and a 10,000-year Thielreich.


> is intended to have everyone eventually so discombobulated and "interfered with" that they can't even tell they're experiencing exploitation that should cause discomfort and radicalization (and to quickly dispatch the few stragglers who can).

It sounds like you have not read Harrison Bergeron by Kurt Vonnegut.


Yes, the real danger we face is that the sorts of special, gifted people who "seek tax advice" from Jeffrey Epstein might some day have all their brilliant, wondrous contributions to the world stymied by oppressive systems of control. Not sure what systems those would be, since they own and are building all the ones we can see around us today, but still: collectivism ooga booga!


I can't get over the range of sentiment on LLMs. HN leans "snake oil," X leans "we're all cooked"; can it possibly be both? How do other folks make sense of this? I'm not asking anyone to pick a side; I'm trying to understand the range. Does the range lead you to believe X over Y?


I believe the spikiness in response is because AI itself is spiky: it's incredibly good at some classes of tasks, and remarkably poor at others. People who use it on the spikes are genuinely amazed by how good it is. That only grates on the people who use it in the troughs, who become increasingly annoyed that everyone seems to be losing their mind over something that can't even do (whatever).


Well, this is the internet. Arguing about everything is its favorite pastime.

But generally yes. I think back to Mongo/Node/metaverse/blockchain/IDEs/tablets, and pretty much everything has had its boosters and skeptics; this is just more... intense.

Anyway I've decided to believe my own eyes. The crowds say a lot of things. You can try most of it yourself and see what it can and can't do. I make a point to compare notes with competent people who also spent the time trying things. What's interesting is most of their findings are compatible with mine, including for folks who don't work in tech.

Oh, and one thing is for sure: shoving this technology into every single application imaginable is a good way to lose friends and alienate users.


Only those with great taste are well equipped to make assertions about what we have in front of us.

The rest is all noise and personally I just block it out.


Then why are you still here?


I think it may be all summed up by Roy Amara's observation that "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."


I think this is the most-fitting one-liner right now.

The arguments going back and forth in these threads are truly a sight to behold. I don't want to lean to any one side, but in 2025 I've begun to respond to everyone who still argues that LLMs are only plagiarism machines, or are only better autocompletes, or are only good at remixing the past: Yes, correct!

And CPUs can only move zeros and ones.

This is likewise a very true statement. But look where having 0s and 1s shuffled around has brought us.

The ripple effects of a machine doing something very simple and near-meaningless, but doing it at high speed, again and again, without getting tired, should not be underestimated.

At the same time, consider Nobel laureate Robert Solow, who famously, and at the time correctly, stated that "You can see the computer age everywhere but in the productivity statistics."

It took a while, but eventually, his statement became false.


The effects might be drastically different from what you would expect, though. We've seen this with machine learning/AI again and again: what looks likely to work doesn't pan out, and unexpected things do.


The problem with X is that so many people who have no verifiable expertise are super loud in shouting "$INDUSTRY is cooked!!" every time a new model is released. It's exhausting and untrue. The video generation we see might nail realism, but if you want to use it to create something meaningful, which involves solving a ton of problems and making difficult choices in order to express an idea, you hit the limits of the easy work pretty quickly. It's insulting, then, for professionals to see manga PFPs on X put some slop together and say "the movie industry is cooked!". It betrays a lack of understanding of what it takes to make something good, and it gives off a vibe of "the loud ones are just trying to force this objectively meh-by-default thing to happen".

The other day there was that dude loudly arguing about some code they wrote/converted even after a woman with significant expertise in the topic pointed out their errors.

Gen AI has its promise. But given the industry's lack of ethics, the cacophony of non-experts screaming "this time it's really doom," and the weariness/wariness that set in during the crypto cycle, it's only natural that people are going to call it snake oil.

That said, I think the more accurate characterization is that HN as a whole is calling the hype snake oil. There's very little question anymore that the tools are capable of advanced things. But there is annoyance at proclamations that go beyond what it really is at the moment, which is an expertise+motivation multiplier for deterministic areas of work. It's not replacing that facet any time soon on its current trend (which could change wildly in 2026). Not until it starts training itself, I think. Could be famous last words.


I'd put more faith in HN's proclamations if it hadn't been so widely wrong about AI in 2023, 2024, and now 2025. Watching the tone shift here has been fascinating. As the saying goes, the only thing moving faster than AI advances right now is the speed at which HN haters move the goalposts…


Mmm. People who make AI their entire personality, bragging that other people are too stupid to see what they see and that soon everyone will have to acknowledge the genius they're denying... do not make me think "oh, wow, what have I missed in AI".


AI has raised the bar for all but those at the top and is threatening many people's livelihoods. It has significantly increased the cost of computer hardware and is projected to increase the cost of electricity. I can definitely see why there is a tone shift! I'm still rooting for AI in general. I would love to see the end of a lot of diseases; I don't think we humans can cure all disease on our own in any of our lifetimes. Of course, there are all sorts of dystopian consequences that may derive from AI fully comprehending biology. I'm going to continue being naive and hope for the best!


I'm not really convinced that anywhere leans heavily towards anything; it depends which thread you're in etc.

It's polarizing because it represents a more radical shift in expected workflows. Seeing that range of opinions doesn't really give me a reason to update, no. I'm evaluating based on what makes sense when I hear it.


My take (no more informed than anyone else's) is that the range indicates this is a complex phenomenon that people are still making sense of. My suspicion is that something like the following is going on:

1. LLMs can do some truly impressive things, like taking natural language instructions and producing compiling, functional code as output. This experience is what turns some people into cheerleaders.

2. Other engineers see that in real production systems, LLMs lack sufficient background / domain knowledge to iterate effectively. The models still produce output, but it's verbose and essentially misses the point of the desired change.

3. LLMs also can be used by people who are not knowledgeable to "fake it" and produce huge amounts of output that is basically beside-the-point bullshit. This makes those same senior folks very, very resentful, because it wastes a huge amount of their time. This isn't really the fault of the tool, but it's a common way the tool gets used, and so it gets tarnished by association.

4. There is a ridiculous amount of complexity in some of these tools and workflows people are trying to invent, some of which is of questionable value. So aside from the tools themselves, people are skeptical of the people trying to become thought leaders in this space and the sort of wild hacks they're coming up with.

5. There are real macro questions about whether these tools can be made economical to justify whatever value they do produce, and broader questions about their net impact on society.

6. Last but not least, these tools poke at the edges of "intelligence," the crown jewel of our species and also a big source of status for many people in the engineering community. It's natural that we're a little sensitive about the prospect of anything that might devalue or democratize the concept.

That's my take for what it's worth. It's a complex phenomenon that touches all of these threads, so not only do you see a bunch of different opinions, but the same person might feel bullish about one aspect and bearish about another.


From my perspective, both show HN and Twitter's normal biases. I view HN as generally leaning toward "new things suck, nothing ever changes", and I view Twitter generally as "Things suck, and everything is getting worse". Both of those align with snake oil and we're all cooked.


As usual, somewhere in between!


I use them daily and I actively lose progress on complex problems and save time on simple problems.


Because it turns out that HN is mostly made up of cranky middle-aged conservatives (small c) who have largely defined themselves around coding, and AI is an existential threat to their core identity.


Truth lies in the middle. Yes, LLMs are an incredible piece of technology, and yes, we are cooked, because once again technologists and VCs have neither an understanding of nor any interest in the long-term societal ramifications of technology.

Now we are starting to agree that social media has had disastrous effects that have not fully manifested yet, and in the same breath we accept a piece of technology that promises to replace large parts of society with machines controlled by a few megacorps, and we collectively shrug with "eh, we're gonna be alright." I mean, until recently the stated goal was literally to create advanced super-intelligence, with the same nonchalance with which one releases a new JavaScript framework unto the world.

I find it utterly maddening how divorced STEM people have become from the philosophical and ethical concerns of their work. I blame academia and the education system for creating this massive blind spot, and it is most apparent in echo chambers like HN that are mostly composed of Western-educated programmers with a degree in computer science. At least on X you get, among the lunatics, people who have read more than just books on algorithms and startups.


"that have not fully manifested yet"

This is not true.

"I find it utterly maddening how divorced STEM people have become from philosophical and ethical concerns of their work. I blame academia and the education system for creating this massive blind spot, and it is most apparent in echo chambers like HN that are mostly composed of Western-educated programmers with a degree in computer science. At least on X you get, among the lunatics, people that have read more than just books on algorithms and startups."

Steve Jobs had something to say about this. Shame he's gone.


Because there is a wide range of what people consider good. If you look at what the people on X consider to be good, it's not very surprising.


In CC or Codex (or whichever): "run git diff and review".
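
(For the one-shot, non-interactive form, a rough sketch; I believe `-p` is Claude Code's print flag and `exec` is the Codex CLI's non-interactive subcommand, but check `--help` on whatever version you have installed:)

    # Claude Code: print the review and exit
    claude -p "run git diff and review the changes for bugs"

    # Codex CLI: roughly equivalent
    codex exec "run git diff and review the changes for bugs"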


100% required for all Pynchon novels, that's for sure.


This is pretty cool!

Since we’re sharing related work, I’ve been building something at a very different layer of the stack. Shameless plug warning!

Where Phind gives you an interactive answer right now, I built SageNet for the opposite problem: when you want to go from zero → actually good at something over weeks/months, not just get a one-shot result.

SageNet:

- builds a personalized learning plan

- adapts as you progress

- generates short audio lessons

- gives real projects

- has a daily voice check-in agent

- lets you share a public progress dashboard

If anyone wants to try it: https://www.sagenet.club


or, they wrote it and asked an LLM to improve the flow


I've been building SageNet, a voice-first AI coach that turns your goals into structured, adaptive learning plans.

After a 2-minute voice conversation, Sage generates a personalized 6-module roadmap with build-first projects. It checks in by voice, analyzes your reflections, and regenerates your plan if needed. You can invite friends to your Support Squad for accountability.

The biggest insight so far is that people don't want "infinite content." They want structure and someone who remembers them.

Would love feedback!

http://sagenet.club


I recommend Cherry or Wasting Talent, two semi-autobiographical novels that depict heavy/gnarly addiction. As a non-addict, it wasn't until I read books like these that I was able to gain more empathy for the spiral. Unlike news articles or stats, these offer much more human (perhaps mildly fabricated) anecdotes of how seemingly impossible it is for addicts to get clean. I'm aware my consumption of these stories borders on "poverty porn," while also helping me empathize with that side of the human condition.

