
Alternately, sleep is the optimal ultra-high-efficiency survival state and wakefulness only exists to give creatures enough time to get their affairs in order so that they can safely return to dormancy.

It's easy to think of sleep as a compromise to be defeated because we're culturally preoccupied with the achievements and pleasures of wakefulness, but that's really just us claiming personal preference for one narrow part of a holistic system that's just doing its own survival and propagation thing.

Consider trees, mushrooms, cicadas, snakes, or cats. Chilling out in low power mode as much as possible is maybe not an error to be fixed so much as it is an outcome of efficient design.


I enjoy this flipped perspective.

The default and optimal state of a life form is waiting and efficiently using resources.

Moving around, socializing and reproducing, killing and eating: all energy expenditures necessary to prolong the sleep-life. Annoyances, from the POV of the sleep-being.


"Hurry up and idle" for CPU design ended up being great for power efficiency. Nature came to the same conclusion for biological organisms a long time ago.

I think one of our goals is to reduce entropy in the Universe, and being awake lets us do 100x more of that than if we were in low power mode

What kind of sleep are you having that is more entropic than your waking?

Entropy increases in the universe by default, so sleeping allows that to happen more than in the waking state

Consider hibernation. Evolution went to great trouble there to maximize the duration that some animals can sleep for - and it's pretty clearly solely designed to save power.

Which doesn't mean that's the only thing sleep is good for (evolution doesn't believe in separating concerns), but it's definitely one thing it does.


Are you sure about that?

We're several years in now, and have lots of A:B comparisons to study across orgs that allowed and prohibited AI assistants. Is one of those groups running away with massive productivity gains?

Because I don't think anybody's noticed that yet. We see layoffs that make sense on their own after a boom, and they cut across AI-friendly and -unfriendly orgs alike. But we don't seem to see anybody suddenly breaking out with 2x or 5x or 10x productivity gains on actual deliverables. In contrast, the enshittening just seems to be continuing as it has for years, and the pace of new products and features is holding steady. No?


> We're several years in now, and have lots of A:B comparisons to study across orgs that allowed and prohibited AI assistants. Is one of those groups running away with massive productivity gains?

You mean... two years in? Where was the internet 2 years into it?


You’re not making the argument you think you’re making when you ask “Where was the [I]nternet 2 years into it?”

You may be intending to refer to 1971 (about two years after the creation of ARPANet), but really the more accurate comparison would be to 1995 (about two years after ISPs started offering SLIP/PPP dialup to the general public for $50/month or less).

And I think the comparison to 1995, the year of the Netscape IPO and URLs starting to appear in commercials and on packaging for consumer products, is apt: LLMs have been a research technology for a while; it’s their availability to the general public that’s new in the last couple of years. Yet while the scale of hype is comparable, the products aren’t: LLMs still don’t do anything remotely like what their boosters claim, and have done nothing to justify the insane amounts of money being poured into them. With the Internet, however, there were already plenty of retailers starting to make real money doing electronic commerce by 1995, not just by providing infrastructure and related services.

It’s worth really paying attention to Ed Zitron’s arguments here: The numbers in the real world just don’t support the continued amount of investment in LLMs. They’re a perfectly fine area of advanced research but they’re not a product, much less a world-changing one, and they won’t be any time soon due to their inherent limitations.


They're not a product? Isn't Cursor on the leaderboard for fastest to $100m ARR? What about just plain usage or dependency? College kids are using Chrome extensions that direct their searches to ChatGPT by default. I think your connection to the internet uptake is a bit weak, and then you've ended by basically saying too much money is being thrown at this stuff, which is quite disconnected from the start of your argument.


>they’re not a product

I think it's pretty fair to say that they have close to doubled my productivity as a programmer. My girlfriend uses ChatGPT daily for her work, which is not "tech" at all. It's fair to be skeptical of exactly how far they can go but a claim like this is pretty wild.


Your usage and hers are both currently being subsidized by venture capital money.

It remains to be seen how viable this casual usage actually is once this money dries up and you actually need to pay per prompt. We'll just have to see where the pricing eventually settles; before that, we're all just speculating.


I pay for chatgpt and would pay more.


> And I think the comparison to 1995, the year of the Netscape IPO and URLs starting to appear in commercials and on packaging for consumer products, is apt

My grandfather didn’t care about these and you don’t care about LLMs, we get it

> They’re a perfectly fine area of advanced research but they’re not a product

lol come on man


We’ll need at least three more years to sort out the Lycos, AltaVista, and HotBots of the LLM world.


You're absolutely right, but you've just asserted that almost all companies making software are unreasonable.

Distressingly, doing what you suggest remains the exception by orders of magnitude. Very few people have internalized why it's necessary and few of those have the political influence in their organizations to make it happen.


A career and a lifestyle are not the same thing.

For many, the "homesteading" labor is a fulfilling and concrete complement to lucrative but abstract desk work, not a replacement.

It takes the place of idle hobbies like consuming more media on screens, lifting arbitrary weights, or running in place on a treadmill.

It's natural to assume we'd be pretty deeply wired to productively tend to our own lives and our own well-being in a very concrete way, and many people who intentionally take up neglected homesteading tasks at their own pace and convenience find that it ameliorates many of the odd feelings of depression, anxiety, and restlessness that hung over them previously.

We probably shouldn't be doing anything in particular all day, but doing concrete productive things in a world where so many things are abstract and alienating can provide great balance.


In this context, a "doer" might commit to an agenda, making ongoing decisions that further accomplishment and success on that agenda. While their nominal role is to decide, the decisions they make are organized to effect some end.

In contrast, a "discusser" or "decider" makes decisions in order to satisfy the social role of making decisions, but often with a lack of surety, clarity, follow-through or commitment. Perhaps in fear of missing some greater opportunity, or fear of being credited with some failure, their decisions are not organized in a way that actually effects some end.


No commercialized LLM product released so far would meet Steve Jobs' standard for a high-profile Apple product, and while that standard is clearly much more lax now, there still is one, and it's still higher than what people accept from OpenAI et al. I wouldn't call that perfectionism, just a struggling brand standard that can't afford to lose even more face.

I think many inside and outside Apple hoped that the ways that they scoped their features in the Apple Intelligence announcement would help them pull off something duly reliable and practical, but it's not that surprising that even those ambitions might have bought too deeply into the hype.


> No commercialized LLM product released so far would meet Steve Jobs' standard for a high-profile Apple product

I can put words into Steve Jobs' mouth as well!

Steve Jobs would reframe LLMs (and other ML-based solutions) as creative assistants, insisting that their job is not to get the right answer but to get your creative juices flowing. To make this happen, he would have personally convinced the engineers who made cool AI demos to come work for Apple and turn those demos into features that form a new Apple creative suite.


Your putting of words in his mouth is better. I think he would sell it as not an AI or LLM or whatever.


> I wouldn't call that perfectionism, just a struggling brand standard that can't afford to lose even more face.

It's very well-known that Apple is perfectionistic. I'm not meaning to say that perfectionism is a negative quality or a bad thing, just that it takes a while.


Apple is not perfectionistic. Apple is performative. The entire company is performing software development instead of actually doing it.

Apple's development process is a marketing pitch-driven hallucination - project management by buzzword and individual career status progression.

It's almost entirely inward-looking. The connection to Rest of World is increasingly mythical and remote.

Some good work gets done in spite of this. But senior management doesn't understand quality - either in the internal sense of having a bug-free, robust product, or in the design sense, where products meet real user needs in a satisfying, creative, and delightful way.

Nice graphic design though. Apple is still the leader there. Processor dev has also been exceptional.

IMO it's time for most of the C-suite to step down and let much younger talent take over and shake things up.


Have things gone crazy since I last used OSX for real in 2008 or whatever? Windows has become such a shitshow since Windows 7. I kinda assumed Apple didn't follow suit.


> Apple is not perfectionistic. Apple is performative.

My guy, Apple is autistic as shit. Just look at how in-house everything is. Every time Apple runs into someone doing it wrong, they do it themselves. You know the saying: if you want something done right...

Sure, they're performative as well, but that doesn't change that they are still in the process of doing absolutely everything from scratch because the existing solutions are not good enough for them.

I don't know if you've used the new Apple Silicon MacBooks, but I have a 128GB/8TB one lined up for me to pick up from the Apple Store in about an hour. It's certainly not the best that Apple offers (that would be the new 512GB/16TB Mac Studio) but it practically wipes the floor with every other laptop on the planet. Because Apple made their own chips, because everyone else was doing it wrong.


Really?

If Apple released OpenAI's voice mode without calling it "AI," referring to a "GPT," or mentioning a "model" -- if it was just integrated into the iPhone with a wake word -- that absolutely would satisfy Jobs.

The problem with these companies is that they can't visualize how a product works for non-technical customers any more. Because everyone they know lives and breathes Silicon Valley. They see billboards for vector databases on their commute, so they think it's perfectly normal to name a product "GPT."

A product containing a language model should never, ever be called "AI" or "GPT" or even "Intelligence." 10 years ago, the only people who even knew the term "AI" were the nerdy readers of pulp sci-fi novels. I'm half joking, but the whole town of Silicon Valley needs their glasses smashed and to be shoved into a locker.


> Look at books. Look at how little variation there is in presentation

I look at a lot of books. I don't think you've summarized them well here. I suggest you look at more yourself. You're missing a lot of beautiful art.

That said, you're right that minimizing decoration and distraction is itself a treasured kind of expression enabled by independent publishing, set against a world where some six Lead Designers get to say precisely how everybody must read and see things this year.

So I very much agree with your sentiment, but you happen to be doing a great injustice to books on your way to saying it. :)


> I look at a lot of books. I don't think you've summarized them well here. I suggest you look at more yourself. You're missing a lot of beautiful art.

This comment is pretty obnoxious. (I've stepped into a library, thanks.)

For all the books that you're thinking of that don't fit the mold, when you put that set up against the set of those that do, it's not even close.

I'm not talking about hypotheticals (i.e. the potential for stuff that could be made that deviates from the norm) or creative works that really exist but only at the margins (i.e. in small numbers by comparison). I'm talking about what people actually produce and consume in large numbers and the average experience for each context/form.

There is undeniably far more homogeneity in books than in blogs. It's something like a hundred or a thousand to one.


Twenty years ago, the norm was to email your personal "social network" using a completely standardized and decentralized protocol.

Twenty years ago, the norm was to maintain a bookmark list of personally maintained websites you might visit in your online time, using a completely standardized protocol distributed across hosts ranging from home PCs to closet racks to countless disparate ISPs, universities, and hosting providers.

Twenty years ago, the norm was to have topical conversations with strangers using completely standardized protocols using decentralized or federated networks (NNTP and IRC).

Twenty years ago, the norm was to trade with and organize within your community on minimalist bulletin boards that took no fees and had almost no rules.

Twenty years ago, the norm was still that you kept most of your purchases in your local economy, but when you chose to remove money from your local economy (a certain kind of travesty, now become the norm), you did so in a nearly unmediated relationship with faraway merchants, with only a few points in fees being absorbed by agencies situated in between.

Twenty years ago, the norm was that when you bought digital media, you received a commodity copy of it that you could duplicate and access and re-encode as you saw fit.

Twenty years ago, the (illicit) norm was to digitize your own physical media and share it freely with others using numerous clever protocols to overcome bandwidth and accountability concerns, all decentralized.

True, twenty years ago, you did not get to have trivial access to trivial interactions with celebrities. You did not get endless streams of trivial headlines about trivial things. You did not get to drone out to endless autoplay of trivial videos. You did not get to monetize your own trivialities so easily.

Frankly, having been around for twenty years and then quite a few more, I have no idea what you're talking about in your comment.


>Twenty years ago, the (illicit) norm was to digitize your own physical media

No it wasn't. It was the norm among an ingroup of technically savvy young people who, on HN, always confuse themselves with the average user at the time. A normal guy two decades ago was, as the other commenter above points out, accessing the internet through a bunch of apps provided by their ISP (it's where the name comes from), a sort of vertically integrated company town, likely completely reliant on proprietary software that was so monopolistic, like IE, it sucked completely. When was browser choice better, then or now?

If you're an average user, not an average hacker, in 2025 you have significantly more ways to avoid "big corpo" than you did back then. You weren't on IRC as a normie in 2005; you were on AIM and ICQ, also AOL-owned of course. The norm, as in for a normal person, was to engage with the web and the world of computing through two, maybe three mega-companies; they weren't torrenting, ripping, and re-encoding media.


No, the "normal guy" was downloading Napster and Limewire and 100 other applications from fly-by-night publishers, following folklore suggestions on how to get everything they could for free, getting into messy encounters on Craigslist, visiting independently hosted warez sites, infecting their PC with all kinds of crap, etc, exactly because it was easy and exciting and inviting to do so.

Largely, the most "technically savvy young people" who you're engaging with on HN these days were not doing those things, but were complaining in their many independent and decentralized communities about all the headaches involved in incessantly being brought in to help those people clean up their messes.

Twenty years ago, it was the wild west. Those who had savvy had staked their claims and knew how to survive safely, but the news had made it to everybody else, rails had been laid, and all the starry-eyed naive people were flooding into an exciting, if dangerous, new world of novelty and opportunity. It was very different than what we have now, and grossly less consolidated.


Were you by any chance in college 20 years ago?

Absolutely none of this tracks for me, it smacks of someone extrapolating their tiny niche to the entire world.

PS: Limewire was loaded with browser toolbars and malware. The definition of corporate hell.


I was very much not.

And I agree that the uncountably many software publishers of the time were aggressively experimenting with the dark monetization patterns that Doubleclick, Facebook, Google, Amazon and others were soon to refine into what are now only a handful of ~trillion dollar business engines.

Of course there was corporate participation in that era, but following the dot-com boom and its bust, the scale and number of these operations were very different from the small-and-consolidated CompuServe/AOL/Prodigy era of a decade before, and equally different from the titanic-and-consolidated Facebook/Google/Amazon era a decade later.


I was in college even earlier, and living in a third world country to boot.

Every single soul in my class was pirating music online.

Napster was a blip (too early, very few had PCs) and only the tech savvy used it, but Kazaa, Soulseek, and Limewire were absolutely huge among my cohort.

Btw I asked German friends my age and pretty much all of them have a similar experience.


eMule. Even the older relatives of my SO knew about it.


Browser toolbars came from shady websites, not Limewire. Limewire was for trojans. Source: I was doing amateur malware research 20 years ago.


The official installer literally shipped with the Ask.com toolbar.


> It was the norm among an ingroup of technically savvy young people

This is a mainstream mass-market advert from Apple in 2001:

https://www.researchgate.net/figure/Apples-Rip-Mix-Burn-camp...

"Rip. Mix. Burn."

I think you're totally wrong.


Normies were pirating movies and series like crazy in Europe, even in blue-collar homes.


And another 20 years prior to that, we had mix tapes from a friend who had one of those dual-deck tape recorders, or VHS copies (alas, poor Betamax) from that friend who worked in the video store and had access to multiple video recorders.

Nothing has really changed, has it? Just the speed and scope of access have increased, and the mass of noise amidst the (hopefully still existing after the AI aftermath) quality signal has grown by magnitudes.


Twenty years ago you would use IM protocols and forums; IRC and Usenet were for die-hard nerds/geeks/academics.


The author is clear that they're talking about "billion dollar tech companies" for an audience of those people called to them.

You're right that these are not the only places that people can write software, and that many of us have recognized for a very long while that these are noxious places to write software, or that they were eventually going to become so.

Billion dollar FAANGs and their smaller, cargo-culting shadows represent a certain sector with a certain work atmosphere, much as game development companies and hedge/trading firms do. 15 years ago, during the ascent of Facebook and Google, this atmosphere was different than it is now: innovative and luxurious and inviting. Some people still see them through the lens of the past, but they're much larger machines now, with different priorities and incentive structures, and as the author notes, those are mostly not aligned with sustainable, satisfying, or healthy environments for most of the engineers who've found themselves inside them.

Like finance, they pay extremely well, and like games, they can make you feel like you're part of something you can brag about at a dinner party, but also like both, they have little concern about chewing you up for as long as you're willing to bear it.


I strongly doubt that things like 80-hour weeks, abuse, uncaring managers, and especially AGILE of all things are super common at FAANG. If you join a startup (in any industry), I think there's an understanding that you will probably work over 40 hours a week and that things will generally be hectic. Many companies will openly advertise this and tell you if you ask.

I really found myself wondering who the audience was for this. The person who works hard, produces quality engineering artifacts, and DOESN'T have options at other companies? I don't think that person exists?


I have friends who are extremely smart where this is not the case. Some of them didn't know other options were available. Some did not have the bandwidth to interview.


> The author is clear that they're talking about "billion dollar tech companies" for an audience of those people called to them.

> We’re in an industry where burnout isn’t just common - it’s expected. If you’re not pulling all-nighters, you’re "not committed." If you’re not answering Slack messages at midnight, you’re "not a team player." This culture is toxic, and it’s only getting worse. The relentless churn of projects, the constant pressure to innovate, and the ever-present threat of obsolescence create a perfect storm of stress.

No, the author is generalizing what work at a billion dollar tech company is like to the whole industry. I've never worked for a company similar to the one described in this post, and I think that the vast majority of people in tech haven't either. Silicon valley is not the world.

Either way, unionizing sounds like a great idea.


Yup.

Inasmuch as these are collaborative document generators at their core, "minimally ambiguous prompt and conforming reply" is a strongly represented document structure and so we benefit by setting them up to complete one.

Likewise, "tragi-comic dialog between increasingly frustrated instructor and bumbling pupil" is also a widely represented document structure that we benefit by trying to avoid.

Chatbot training works to minimize the chance of an LLM engaging in the latter, because dialog is an intuitive interface that users enjoy, but we can avoid the problem more successfully by just providing a new and less ambiguous prompt in a new session, as you suggest.
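
As a concrete sketch of that "fresh session" move (assuming the OpenAI Python client; the model name and function are purely illustrative):

    # Instead of appending "no, that's still wrong..." to a long,
    # frustrated transcript, start a brand-new session whose only
    # context is one sharper, less ambiguous prompt.
    from openai import OpenAI

    client = OpenAI()

    def retry_with_fresh_session(refined_prompt: str) -> str:
        # A single-message conversation: no accumulated corrections,
        # so the model completes the "clear prompt, conforming reply"
        # document shape rather than the "frustrated dialog" one.
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[{"role": "user", "content": refined_prompt}],
        )
        return response.choices[0].message.content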


> dialog is an intuitive interface that users enjoy

Do people enjoy chat interfaces in their workflows?

I always thought that cursor/copilot/copy.ai/v0.dev were so popular because they break away from the chat UI.

Dialog is cool when exploring but, imo, really painful when trying to accomplish a task. LLMs are far too slow for a real fluid conversation.

