Hacker News | throwaway13337's comments

It's about control.

Our tools are an extension of us as humans. When someone else is in control of those tools, it is alienating.

Modern software is built so that we are not masters of our own tools. It's as if your hand sometimes worked for a corporation against your will. I guess that same pattern is also called a job. But it's the worst kind of job: the one you do without choice.

A happier future is one with humans in control over their own tools and their own livelihoods. That's software with the user's choice at the center, and independent work that connects us to each other by being in genuine service to one another.

I think this manifesto is getting to this idea but from a different angle.

It's the extractive, rent-seeking monopoly playbook that seeks to undermine it. Only in captive, monopolized markets are companies able to force the use of tools we don't control.


> A happier future is one with humans in control over their own tools and their own livelihoods.

People should own the product of their work and owning companies should be illegal.

Every good product starts as somebody's weekend project or an experiment with a buddy in the garage. Then they start getting users and making money. And then they sell it and the new owner ruins it.

Molecules form cells, cells form humans, humans form organizations. Slavery is illegal, yet it's legal to own a group of humans, replace parts against the group's will and order the group to do something against their will. Owning companies is just an abstraction built to replace slavery with enough indirection that people don't object.


The tariff talk was ostensibly because the EU exported more goods to the US than the US exported to the EU.

The US exports far more digital services to the EU, though.

Understanding those things, it would seem a particularly unwise framing for the US government, as it draws attention to the digital services the US exports to the EU.

LLMs are rapidly commoditizing software, and in particular making it far easier to handle the regulatory compliance and regional fragmentation that have traditionally held back software companies in the EU. Combine that with growing concerns about software trust, and the EU looks like an increasingly attractive bet for future software investment.

Ironic, then, that Europe seems slowest to adopt the very tool that could finally solve its fragmentation problem.

Two governments, two very different strategies to cripple themselves. The race is on.


> LLMs are rapidly commoditizing software

Can you elaborate on this one? Hopefully with some citations.


Your last sentence is funny as hell because it’s so true


You have to understand who Trump's base is. They think factories > liberals in Seattle.

Globalisation has made America the richest country on the planet. The real problem is that the money doesn't reach the "deplorable" population.

But that is an INTERNAL issue that could easily have been solved by voting for Bernie Sanders.

People voting against their own interests is a tale as old as democracy itself.


Are you saying trickle-down economics doesn't work? If those kids could read… What I do wonder is how tens of millions of Americans still think that it's other countries that are to blame for their poverty or struggles. Baffling. If you ARE in the richest country, wouldn't you at least blame something internal?


A major thing that holds us back and will continue to do so in B2B (coming from someone whose last startup failed): differences in language/customs/needs linked to the multitude of cultures in the region.

We hired someone who could sell in Italian, French, and Spanish. Her profile is fairly rare, given she had a good understanding of the customer. I can't believe our CEO let her go simply because of a four-days-in-the-office obsession…

LLMs can't really fix this. Even though she could speak Spanish, the culture and customer needs for Spain will be a little different than those for Italy. Traditions will be different. She's a wonderful human being, but imagine a customer in France not liking her “Belgian accent” and tanking a sale… She was really losing motivation because of that. The two other people in sales struggled with more or less the same issue.

All hail LLMs, if you want, but Europe's issues are not that easily fixed. We end up obsessing about the wrong stuff, as if compliance and regulation were the Big Problem, instead of a boogeyman that neoliberals love to hate.


Yeah, the standard advice for European startups is to sell in the US first and worry about the local markets later; language and culture barriers are still huge.


It wouldn't work for us, plenty of entrenched competitors. In the EU? Zero direct competition. We focused on France and expanded later, France is a good enough market for some stuff. Can't imagine trying to compete in the US!


Uh, you're planning to outsource regulatory compliance to an LLM? In the EU? Which has already banned the usage of "algorithms" that aren't "transparent" via the DMA? That isn't going to work.

As for cloning the US software industry with LLMs ... with which non-US LLMs, exactly? Mistral? The best LLMs for coding (which still can't handle many important tasks) are: Gemini, Claude, GPT. All non-EU models.


The path we're on is not inevitable. But narratives keep it locked in.

Narratives are funny because they can be completely true and a total lie.

There's now a repeated narrative about how the AI bubble is like the railroads and dotcom and therefore will end the same. Maybe. But that makes it seem inevitable. But those who have that story can't see anything else and might even cause that to happen, collectively.

We can frame things with stories and determine the outcomes by them. If enough people believe that story, it becomes inevitable. There are many ways to look at the same thing and many different types of stories we can tell - each story makes different things inevitable.

So I have a story I'd like to promote:

There were once these big companies that controlled computing. They had it locked down. Then came IBM clones, and suddenly the big monopolies couldn't keep up with the innovation from the larger marketplaces that opened up around standard (protocol) hardware interfaces. And later, the internet was new and exciting - CompuServe and AOL were so obviously going to control the internet. But then open protocols and services won, because how could they not? It was inevitable that a locked-down walled garden could not compete with the dynamism that open protocols allowed.

Obviously now, this time is no different. And, in fact, we're at an inflection point that looks a lot like those other times in computing that favored tiny upstarts that made lives better but didn't make monopoly-sized money. The LLMs will create new ways to compete (and have already) that big companies will be slow to follow. The costs of creating software will go down, so companies will have to compete on things that align with users' interests.

Users' agency will have to be restored. And open protocols will again win over closed ones, for the same reasons they did before. Companies that try to compete with the old, cynical model will rapidly lose customers and will not be able to adapt. The money to be made in software will decline, but users will have software in their interests. The AI megacorps have no moat - Chinese downloadable models are almost as good. People will again control their own data.

It's inevitable.


I'm amazed at this question and the responses you're getting.

These last few years, I've noticed that the tone around AI on HN changes quite a bit by waking time zone.

EU waking hours have comments that seem disconnected from genAI. And, while the US hours show a lot of resistance, it's more fear than a feeling that the tools are worthless.

It's really puzzling to me. This is the first time I've noticed such a disconnect in the community about what the reality of things is.

To answer your question personally, genAI has changed the way I code drastically about every 6 months in the last two years. The subtle capability differences change what sorts of problems I can offload. The tasks I can trust them with get larger and larger.

It started with better autocomplete, and now, well, agents are writing new features as I write this comment.


Despite the latest and greatest models… I still see glaring logic errors in the code produced for anything beyond basic CRUD apps. They still make up fields that don’t exist, or assign a nonsensical value to a variable. I’ll give you an example: in the code in question, Codex assigned a required field LoanAmount a value from a variable called assessedFeeAmount… simply because, as far as I can tell, it had no idea how to get the correct value from the current function/class.
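To make the failure mode concrete, here's a hypothetical reconstruction (the types and function are made up; only the two field names come from the story above):

    // Hypothetical reconstruction of the bug class described above.
    interface LoanRequest {
      loanAmount: number;        // the value that should be used
      assessedFeeAmount: number; // an unrelated, type-compatible value
    }

    function buildPayload(req: LoanRequest) {
      // Type-checks fine, semantically wrong: the model grabbed a
      // plausible-looking in-scope number instead of the real source.
      return { LoanAmount: req.assessedFeeAmount };
    }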


That's why I don't get people who claim to be letting an agent run for an hour on some task. LLMs tend to make so many small errors like that, which are hard to catch if you aren't super careful.

I wouldn't want to have to review the output of an agent going wild for an hour.


The agent reviews the code. The agent has access to tools. It writes the code, runs it through a test, reads the error, fixes the code, keeps going. It passes the code off to another agent with a prompt to review code and give it notes. They pass it back and forth, another agent reads and creates documentation. It keeps going and passes things back.
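A minimal sketch of that loop, with every function a hypothetical stand-in (no particular agent framework assumed):

    // Sketch only: write, test, read errors, fix, repeat, then hand
    // off to a second "reviewer" agent. generate() and runTests()
    // are stand-ins for an LLM call and a tool call respectively.
    type TestResult = { passed: boolean; errors: string };

    declare function generate(prompt: string): string;   // LLM call
    declare function runTests(code: string): TestResult; // tool call

    function agentLoop(task: string, maxIters = 10): string {
      let code = generate(`Write code for: ${task}`);
      for (let i = 0; i < maxIters; i++) {
        const result = runTests(code);
        if (result.passed) break;
        code = generate(`Fix this code:\n${code}\nErrors:\n${result.errors}`);
      }
      // Second agent reviews, first agent revises once more.
      const notes = generate(`Review this code and give notes:\n${code}`);
      return generate(`Revise this code:\n${code}\nNotes:\n${notes}`);
    }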

Now that's the idea, anyway. Of course they all lie to each other, and there are hallucinations every step of the way. If you want to see a great example, look at the documentation for the TEMU marketplace API. The whole API system (docs, examples, etc.) appears to be vibe coded: lots of nonsensical formatting, methods that don't work, and example parameters that just say "test" or "parameters". They are presented as working examples with actual response examples (like a normal API), but it largely appears to just be made up!


Who says anyone’s reviewing anything? I’m seeing more and more influencers and YouTubers playing engineer or just buying an app from an overseas app farm. Do you think anyone in that chain gives the first shit what the code is like?

It’s the worst kind of disposable software.


If the LLM can test the code, it will fix those issues automatically. That’s how it can keep going for hours and produce something useful. You obviously need to review the code and tests afterwards.


The main line of contention is how much autonomy these agents are capable of handling in a competitive environment. One side generally argues that they should be fully driven by humans (i.e. offloading tedious tasks you know the exact output of but want to save time not doing) while the other side generally argues that AI agents should handle tasks end-to-end with minimal oversight.

Both sides have valid observations in their experiences and circumstances. And perhaps this is simply another engineering "it depends" phenomenon.


The disconnect is quite simple: there are people who are professionals and are willing to put the time in to learn, and then there’s the vast majority of others who don’t and will bitch and moan about how it is shit, etc. If you can’t get these tools to make your job easier and more productive, you ought to be looking for a different career…


You're not doing yourself any favors by labeling people who disagree with you undereducated or uninformed. There are enough over-hyped products/techniques/models/magical-thinking to warrant skepticism. At the root of this thread is an argument (paraphrasing) encouraging people to just wait until someone solves major problems instead of tackling them themselves. This is a broad statement of faith, if I've ever seen one, in a very religious sense: "Worry not, the researchers and foundation models will provide."

My skepticism, and my intuition that AI innovations are not exponential but sigmoid, are not because I don't understand what gradient descent, transformers, RAG, CoT, or multi-head attention are. My statement of faith is: the ROI economics are going to catch up with the exuberance way before AGI/ASI is achieved; sure, you're getting improving agents for now, but that's not going to justify the 12- or 13-digit USD investments. The music will stop, and improvements will slow to a drip.

Edit: I think at its root, the argument is between folk who think AI will follow the same curve as past technological trends, and those who believe "it's different this time".


> labeling people who disagree with you undereducated or uninformed

I did neither of these two things... :) I personally couldn't care less about

- (over)hype

- 12/13/14/15 ... digit USD investment

- exponential vs. sigmoid

There are basically two groups of industry folk:

1. those that see technology as absolutely transformational and are already doing amazeballs shit with it

2. those that argue how it is bad/not-exponential/ROI/...

If I were a professional (I am) I would do everything in my power to learn everything there is to learn (and then more) and join Group #1. But it is easier to be in Group #2, as being in Group #1 requires time and effort and frustrations and throwing your laptop out the window and ... :)


> I did neither of those 2 things

>> ...there are people who are professionals and are willing to put the time in to learn, and then there's the vast majority of others who don't...


being lazy and unprofessional is just a tad different than uneducated :)


A mutually exclusive Group 1 and Group 2 is a false dichotomy. One can have a grasp on the field, keep up to date with recent papers, have an active Claude subscription, use agents, and still have a net-negative view of "AI" as a whole, considering the false promises, hucksters, charlatans and an impending economic reckoning.

tl;dr version: having negative view of the industry is decoupled from one's familiarity with, and usage of the tools, or the willingness to learn.


> considering the false promises, hucksters, charlatans and an impending economic reckoning.

I hack for a living. I could hardly give two hoots about “false promises” or “hucksters” or some “impending economic reckoning…” I made a general comment that a whole lot of people simply discount technology on technical grounds (a favorite here on HN)…


You should give a shit about how the field is perceived because that affects your ability to make a living whether you care about it or not


yea the “ai” is going to ruin this amazing reputation the field had before it :)


It ain't helping


> I could hardly give two hoots about “false promises” or “hucksters”

I suppose this is the crux of our misunderstanding: I deeply care about the long-term health and future of the field that gave me a hobby that continues to scratch a mental itch with fractal complexity/details, a career, and more money than I ever imagined.

> or some “impending economic reckoning…”

I'm not going to guess whether you missed the last couple of economic downturns or rode them out, but an economic reckoning may directly impact your ability to hack for a living, which is the thing you prize.


I see the first half of group 1, but where's the second half? Don't get me wrong, there's some cool and interesting stuff in this space, but nothing I'd describe as close to "amazeballs shit."


You should see what I’ve seen (as have many other people). After 30 years of watching humans do it (fairly poorly, as there is an extremely small percentage of truly great SWEs), the stuff I am seeing is ridiculously amazing.


Can you describe some of it? On one hand, it is amazing that a computer can go from prose to code at all. On the other hand, it’s what I like to describe as a dancing bear. The bear is not a very good dancer, but it’s amazing that it can dance at all.

I’d make the distinction between these systems and what they’re used for. The systems themselves are amazing. What people do with them is pretty mundane so far. Doing the same work somewhat faster is nice, and it’s amazing that computers can do it, but the result is just a little more of the same output.


If there is really amazing stuff happening with this technology, how did we have two recent major outages that were caused by embarrassing problems? I would guess that at least in the Cloudflare instance some of the responsible code was AI generated.


> I would guess that at least in the cloud flare instance some of the responsible code was ai generated

Your whole point isn't supported by anything but ... a guess?

If given the chance to work with an AI that hallucinates sometimes or a human who makes logical leaps like this, I think I know what I'd pick.

Seriously, just what even? "I can imagine a scenario where AI was involved, therefore I will treat my imagination as evidence."


Microsoft is saying they're generating 30% of their code now and there's clearly been a lot of stability issues with Windows 11 recently that they've publicly acknowledged. It's not hard to tell a story that involves layoffs, increased pressure to ship more code, AI tools, and software quality issues. You can make subtle jabs about your peers as much as you want but that isn't going to change public perception when you ship garbage.


The whole point is that the outages happened, not that the AI code caused them. If AI is so useful/amazing, then these outages should be less common, not more. It’s obviously not rock-solid evidence. Yeah, AI could be useful and speed up or even improve a code base, but there isn’t any evidence that it’s actually improving anything; the only real studies point to imagined productivity improvements.


good thing before “ai” when humans coded we had many decades of no outages… phew


Can you show me some of the “amazeballs shit” people are doing with it?


Amazeballs shit, yet precious few actual products.


This is a tool; you use it to create, just like you use brush and oils to paint a masterpiece. It is not a product in and of itself…


They were obviously asking where are the masterpieces.


They're not logistic. This is a species of nonsense claim that irks me even more than claiming "capabilities gains are exponential, singularity 2026!"; it actually includes the exponential-gains claim and then tries to tack on epicycles to preempt the lack of singularities.

Remember, a logistic curve is an exponential (so, roughly, a process whose outputs feed its growth, the classic example being population growth, where more population makes more population) with a carrying capacity (the classic example is again population, where you need to eat to be able to reproduce).
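In symbols (the standard textbook logistic equation, nothing AI-specific), with growth rate r and carrying capacity K:

    \frac{dP}{dt} = r P \left(1 - \frac{P}{K}\right)

For P much smaller than K this reduces to dP/dt ≈ rP, i.e. pure exponential growth; the curve only flattens as P approaches K.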

Singularity 2026 is open and honest, wearing its heart on its sleeve. It's a much more respectable wrong position.


It's disheartening. I've got a colleague, very senior, who dislikes AI for a myriad of reasons and doesn't want to adapt unless forced by mgmt. I feel that from 2022-2024 the majority of my colleagues were in this camp - either afraid of AI or looking at it as not something a "real" developer would ever use. In 2025 it seemed to change a bit. American HN seemed to adapt more quickly, while EU companies are still lacking the foresight to see what is happening on the grand scale.


I'm pretty senior and I just don't find it very useful. It is useful for certain things (deep code search, writing non-production helper scripts, etc.) and I'm happy to use it for those things, but it still seems like a long way off for it to be able to really change things. I don't foresee any of my coworkers being left behind if they don't adopt it.


AI gives you either free expertise or free time. If you can make software above the level of Gemini or Claude output, then have it write your local tools, or have it write synthetic data for tests, or have it optimize your zshrc or bash profile. Maybe have it implement changes your skip level wants to see made, which you know are amateurish, unsound garbage with a revolting UI. Rather than waste your day writing ill-advised but high-quality code just to show them why it’s a bad idea, you can have AI write the code for you, to illustrate your point without spending any real work hours on it.

Just in my office, I have seen “small tools” like Charles Proxy almost entirely disappear. Everyone writes/shares their AI-generated solutions now rather than asking cyber to approve a 3rd party envfile values autoloader to be whitelisted across the entire organization.


Senior as well, a few years from finishing up my career. I run 8 to 12 terminals the entire day. It is changing existing code and writing new stuff all day, every day. Hundreds of thousands of lines of changed/added/removed code in production… and a lot fewer issues than when every line was typed in by me (or another human).


What sort of work do you do? I suspect a lot of the differences of opinion here are caused by these systems being a lot better at some kinds of programming than others.

I do lower level operating systems work. My bread and butter is bit-packing shenanigans, atomics, large-scale system performance, occasionally assembly language. It’s pretty bad at those things. It comes up with code that looks like what you’d expect, but doesn’t actually work.

It’s good for searching big codebases. “I’m crashing over here because this pointer has the low bit set, what would do that?” It’s not consistent, but it’s easy to check what it finds, and it saves time overall. It can be good for making tests, especially when given an example to work from. And it’s really good for helper scripts. But so far, production code is a no-go for me.


Can you show us the features AI wrote while you wrote this comment?


ai is useless. anyone claiming otherwise is dishonest


I use GenAI for text translation, text-to-voice, and voice-to-text; there it is extremely useful. For coding I often have the feeling it is useless, but sometimes it is useful, like most tools...


Exactly. It’s really weird to see all these people claiming these wonderful things about LLMs. Maybe it’s really just different levels of amazement, but I understand how LLMs work, and I actually use ChatGPT quite a bit for certain things (searching, asking stuff I know it can find online, discussing ideas or questions I have, etc.).

But all the times I’ve tried using LLMs to help me code, they perform best when I give them some sample code (more or less isolated) and ask for a certain modification that I want.

More often than not, it makes seemingly random mistakes, and I have to be looking at the details to see if there’s something I didn’t catch, so the smaller the scope, the better.

If I ask for something more complex or more broad, it’s almost certain it will get many things completely wrong.

At some point, it’s such hard work to detail exactly what you want, with all the context, that it’s better to just do it yourself, because you’re writing a wall of text to have a one-time thing.

But anyway, I guess I remain waiting. Waiting until FreeBSD catches up with Linux, because it should be easy, right? The code is there in the Linux kernel, just tell an agent to port it to FreeBSD.

I’m waiting for the explosion of open source software that isn’t bloated and that can run optimized, because I guess agents should be able to optimize code? I’m waiting for my operating system to get better over time instead of worse.

Instead I noticed the last move from WhatsApp was to kill the desktop app to keep a single web wrapper. I guess maintaining different codebases didn’t get cheaper with the rise of LLMs? Who knows. Now Windows releases updates that break localhost. Ever since the rise of LLMs I haven’t seen software release features any faster, or any Cambrian explosion of open source software copying old commercial leaders.


What are you doing at your job that AI can't help with at all, for you to consider it completely useless?


Useless for what?


That could even be argued (with an honest interlocutor, which you clearly are not)

The usefulness of your comment, on the other hand, is beyond any discussion.

"Anyone who disagrees with me is dishonest" is some kindergarten level logic.


no


[Deleted as Hackernews is not for discussion of divergent opinions]


> It's not useless but it's not good for humanity as a whole.

Ridiculous statement. Is Google also not good for humanity as a whole? Is the Internet not good for humanity as a whole? Wikipedia?


I think it is an interesting thought experiment to try to visualize 2025 without the internet ever existing because we take it completely for granted that the internet has made life better.

It seems pretty clear to me that culture, politics and relationships are all objectively worse.

Even remote work: I am not completely sure I am happier than when I used to go to the office. I know I am certainly not walking as much as I did when I would go to the office.

Amazon is vastly more efficient than any kind of shopping in the pre-internet days but I can remember shopping being far more fun. Going to a store and finding an item I didn't know I wanted because I didn't know it existed. That experience doesn't exist for me any longer.

Information retrieval has been made vastly more efficient, so instead of spending huge amounts of time at the library, I get all that back as free time. But what I would have spent my free time doing before the internet has largely disappeared.

I think we want to take the internet for granted because the idea that the internet is a long term, giant mistake is unthinkable to the point of almost having a blasphemous quality.

Childhood? Wealth inequality?

It is hard to see how AI as an extension of the internet makes any of this better.


Chlorofluorocarbons, microplastics, UX dark patterns, mass surveillance, planned obsolescence, fossil fuels, TikTok, ultra-processed food, antibiotic overuse in livestock, nuclear weapons.

It's a defensible claim I think. Things that people want are not always good for humanity as a whole, therefore things can be useful and also not good for humanity as a whole.


You’re delusional if you think LLMs/AI are in the ballpark of these. I’ve listed things in my comment for a reason.


You're the one who made the comparison...?


There was some confusion. I originally read Wiseowise's comment as a failure to think of anything that could be "useful but bad for humanity". But given the followup response above I assume they're actually saying that LLMs are similar to tools like the Internet or Wikipedia and therefore should simply not be in the bad for humanity category.

Whether that's true or not, it is a different claim which doesn't fit the way I responded. It does fit the way Libidinalecon responded.


> EU waking hours have comments that seem disconnected from genAI. And, while the US hours show a lot of resistance, it's more fear than a feeling that the tools are worthless.

I don't think it's because the audience is different but because the moderators are asleep when Europeans are up. There are certain topics which don't really survive on the frontpage when moderators are active.


I'm unsure how you're using "moderators." We, the audience, are all 'moderators' if we have the karma. The operators of the site are pretty hands-off as far as content in general.

This would mean it is because the audience is different.


The people who "operate" the website are different from the people who "moderate" the website but both are paid positions.

This fru-fru about how "we all play a part" is only serving to obscure the reality.


I'm sure this site works quite differently from what you say. There's no paid team of moderators flicking stories and comments off the site because management doesn't like them.

There's dang, who I've seen edit headlines to match the site rules. Then there's the army of users upvoting and flagging stories, voting (up and down) and flagging comments. If you have some data to back up your sentiments, please do share it - we'd certainly like to evaluate it.


HN brought on a second mod (Tim, this year, iirc)

My email exchanges with Dang, as part of the moderation that happens around here, have all been positive

1. I've been moderated, got a slowdown timeout for a while

2. I've emailed about specific accounts, (some egregious stuff you've probably never seen)

3. Dang once emailed me to ask why I flagged a story that was near the top, but getting heavily flagged by many users. He sought understanding before making moderation choices

I will defend HN moderation people & policies 'til the cows come home. There is nothing close to what we have here on HN, which is largely about us being involved in the process and HN having a unique UX and size


dang announced they were moved from volunteer to paid position a few years ago. More rumblings about more mods brought on since then. What makes you say you're "so sure"?


> There's no paid team of moderators flicking stories and comments off the site because management doesn't like them.

Emphasis mine. The question is does the paid moderation team disappear unfavorable posts and comments, or are they merely downranked and marked dead (which can still be seen by turning on showdead in your profile).


As an anonymous coward on HN for at least a decade, I'd say that's not really true.

When Paul Graham was more active and respected here, I spoke negatively about how revered he was. I was upvoted.

I also think VC-backed companies are not good for society. And have expressed as much. To positive response here.

We shouldn't shit on one of the few bastions of the internet we have left.

I regret my negativity around pg - he was right about a lot and seems to be a good guy.


Yeah, I avoid telling people about HN. It’s too rare and pleasant of an anomaly to risk getting morphed into X/Bluesky/Reddit.


I’m referring to the actual moderators of this website removing posts from the front page.


that's a conspiracy theory

The by far more common action is for the mods to restore a story which has been flagged to oblivion by a subset of the HN community, where it then lands on the front page because it already has sufficient pointage


It's not controversial to say that submissions are being moderated; that's how this (and many other) sites work. I haven't made any claims about how often it happens, or how it relates to second-chance moderation.

What I'm pointing out is just that moderation isn't the same at different times of the day and that this sometimes can explain what content you see during EU and US waking hours. If you're active during EU daytime hours and US morning hours, you can see the pattern yourself. Tools like hnrankings [1] make it easy to watch how many top-10 stories fall off the front page at different times of day over a few days.

[1]: https://hnrankings.info/


> I’m referring to the actual moderators of this website removing posts from the front page.

This is what you said. There has only been one until this year, so now we have two.

The moderation patterns you see are the community's doing, and certainly have significant time factors that play into them. The idea that someone is going into the system and making manual changes to remove content is the conspiracy theory.


Anything sovereign AI or whatever is gone immediately when the mods wake up. Got an EU cloud article? Publish it at 11am CET; it disappears around 12:30.


I gave it a fair shot.

It is a VS Code fork. There were some UI glitches. Some usability was better. Cursor has some really annoying usability issues - like their previous/next code change never going away, with no way to disable it. The design of this one looks more polished and less muddy.

I was working on a project and just continued with it. It was easy because they import settings from Cursor. Feels like the browser wars.

Anyway, I figured it was the only way to use Gemini 3, so I got started. A fast model that doesn't look for much context. Could be a preprompt issue. But you have to prod it to do stuff - no ambition, and a kinda off-putting attitude like 2.5.

But hey - a smarter, less context-rich Cursor Composer model. And that's a compliment, because the latest Composer is a hidden gem. Gemini has potential.

So I start using it for my project and after about 20 mins - oh, no. Out of credits.

What can I do? Is there a "buy a plan" button? No? Just use a different model?

What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.

I switched back to Cursor. And you know? It had Gemini 3 Pro. Likely a less hobbled version. Day one. Seems like a mistake in the eyes of the big evil companies, but I'll take it.

Real developers want to pay real money for real useful things.

Google needs to not set themselves up for failure with every product release.

If you release a product, let those who actually want to use it have a path to do so.


As someone who used to work there, Google will never get product releases right in general because of how bureaucratic and heavyweight their launch processes are.

They force the developing team to have a huge number of meetings and email threads that they must steer themselves to check off a ridiculously large list of "must haves" that are usually well outside their domain expertise.

The result is that any non-critical or internally contentious features get cut ruthlessly in order to make the launch date (so that the team can make sure it happens before their next performance review).

It's too hard to get the "approving" teams to work with the actual developers to iron these issues out ahead of time, so they just don't.

Buck passed, product launched.


Spot on. I would suggest a slightly different framing where the antagonist isn't really the "approving" teams but "leaders" who all want a seat at the table and exercise their authority lest their authority muscles atrophy. Since they're not part of the development, unless they object to something, would they really have any impact or leadership?

I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in the "leadership" roles than in the individual contributor roles. So many "leaders", but none with the awareness or the care to notice the farcical volumes such announcements speak.


Involving everyone who shows up to meetings is a great way to move forward and/or trim down attendees. Management who enjoys getting their brain picked or homework assignments are always welcome.


That's presuming a healthy culture. In an unhealthy culture, some people will feel pressure to uphold some comment that someone "senior" made offhand in a meeting several months ago, even if that leader is no longer attending project meetings. The people who report to this leader may otherwise receive blowback if the "decision" their leader made is not being upheld, whether such a leader recalls their several-month-old decision correctly or not, in the case they recall it at all. I have found it frustratingly-more-common-than-I-would-like where people, including leaders, retroactively adjust their past decisions so that they claim "I-told-you-so" and "you-should-have-done-what-I-said".

In response to your comment, yes, I would largely be in favor of moving forward only with whatever is said in the relevant meetings with the given attendees of a meeting. That assumes a reasonably healthy culture where these meetings are scheduled in good faith reasonable times for all relevant stakeholders.


Yep, that, and (I also used to work there) the motivations of the implementing teams end up getting very detached from customer focus and product excellence because of bureaucratic incentives and procedures that reward other things.

There's a lot of "shipping the org chart" -- competing internal products, turf wars over who gets to own things, who gets the glory, rather than what's fundamentally best for the customer. E.g. Play Music -> YouTube Music transition and the disaster of that.


Hah, that exact transition was my last project there before I decided I had had enough!

The GPM team was hugely passionate about music and curating a good experience for users, but YT leadership just wanted us to "reuse existing video architecture" to the Nth degree when we merged into the YT org.

After literally years of negotiations you got... what YTM is. Many of the original GPM team members left before the transition was fully underway because they saw the writing on the wall and wanted no part of it. I really wish I had done the same.


That is so sad to hear. I absolutely loved Google Play Music – especially features like saving e.g. an online Universal Music release to my "archive" and then being able to actually RENAME TRACKS myself when they had e.g. wrong metadata.

That and being able to mix my own uploaded tracks with online music releases into a curated collection almost made it a viable contender to my local iTunes collection.

And then... they just removed it forever. Bastards.


Yep, YTM is/was so clearly the inferior product it's laughable. Even as a Google employee with a discount on these things (I can't remember what that was), I switched to Spotify when they dropped it.

I worked on a team that wrote software for Chromecast-based devices. The YTM app didn't even support Chromecast, our own product, and their responses on bug tickets from Googlers reporting this as a problem were pretty arrogant. It was very disheartening to watch. Complete organizational dysfunction.

I think YTM has substantially improved since then, but it still has terrible recommendations, and it still bizarrely blurs between video and music content.

Google went from a company run by engineers to one run by empire-building product managers so fast, it all happened in a matter of 2-3 years.


Sounds like we left roughly around the same time and due to similar frustrations.


As someone who just GA'd an Azure service - things aren't all that different in Azure. Not sure how AWS does service launches but it would be interesting to contrast with GCP and Azure.


> So I start using it for my project and after about 20 mins - oh, no. Out of credits.

I didn't even get to try a single Gemini 3 prompt. I was out of credits before my first had completed. I guess I've burned through the free tier in some other app but the error message gave me no clues. As far as I can tell there's no link to give Google my money in the app. Maybe they think they have enough.

After switching to gpt-oss:120b it did some things quite well, and the annotation feature in the plan doc is really nice. It has potential but I suspect it's suffering from Google's typical problem that it's only really been tested on Googlers.

EDIT: Now it's stuck in a loop repeating the last thing it output. I've seen that a lot on gpt-oss models but you'd think a Google app would detect that and stop. :D

EDIT: I should know better than to beta test a FAANG app by now. I'm going back to Codex. :D


I logged into Gemini yesterday for the first time in ages. Made one image and then it said I was out of credits.

I complained to it that I had only made one image. It decided to make me one more! Then told me I was out of credits again.


What a time to be alive


> I complained to it that I had only made one image. It decided to make me one more!

What?! So was it only hallucinating that you were out of credits the first time?


More likely the credits system runs on eventual consistency, and he hit a different backend.
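Purely to illustrate that failure mode (a toy sketch, not a claim about how Google's billing actually works):

    // Two replicas of a credit counter that sync lazily. A client
    // routed to different backends can be told "out of credits" and
    // then be served one more request anyway.
    class Replica {
      constructor(public credits: number) {}
      tryConsume(): boolean {
        if (this.credits <= 0) return false;
        this.credits -= 1; // local write, replicated later
        return true;
      }
    }

    const a = new Replica(1);
    const b = new Replica(1); // hasn't seen a's writes yet

    console.log(a.tryConsume()); // true  -> image generated
    console.log(a.tryConsume()); // false -> "out of credits"
    console.log(b.tryConsume()); // true  -> one more image anyway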


Don't think so; I expect that system to use Spanner, so my best guess is that the user generated an image at the end of the credit reset window (which is around noon EST).


If there's something I'd expect Google to use a strong consistency model for, it'd be a credit system like that.

Well, not that they don't do stupid things all the time, but having credits live on a system with a weak consistency model would be silly.


The first patch release (released on launch day) says: "Messaging to distinguish particular users hitting their user quota limit from all users hitting the global capacity limits." So, collectively we're hitting the quota; it's not just your quota. (One would think Google might know how to scale their services on launch day...)

The Documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."


With Ultra I hit that limit in 20 minutes with Gemini 3 low. When the rate limit cleared some hours later, I got one prompt before hitting the limit again.


If by "Ultra", you're referring to the Google AI Ultra plan, then I just want to let you know that it doesn't actually take Google AI plans into consideration. It seems like the product will have its own separate subscription. At the moment, everyone is on the same free plan until they finalize their subscription pricing/model (https://antigravity.google/docs/plans).

On a separate note, I think the UX is excellent and the output I've been getting so far is really good. It really does feel like AI-native development. I know asking for a more integrated issue-tracking experience might be expanding the scope too much, but that's really the biggest missing feature right now. That, and I don't like the fact that "Review Changes" doesn't work if you're asking it to modify repos that are not in the currently open workspace.


Perhaps you were feeding into its context your whole node_modules folder? :/


You'd really hope that an AI IDE would know to respect .gitignore


You'd hope so, the same way you'd hope that AI IDEs would not show these package/dependency folder contents when referencing files using @. But I still get shown a bunch of shit that I would never need to reference by hand.


Depending on which shared GCP project you get assigned to, mine had a global 300 million tokens per minute quota that was being hit regularly.


One would think this would have been obvious when it already fails on the first or second request, yet people here all complain about rate limits.

When I downloaded it, it already came with the proper "Failed due to model provider overload" message.

When it did work, the agent seemed great, achieving the intended changes in a React and python project. Particularly the web app looks much better than what Claude produced.

I did not see functionality to have it test the app in the browser yet.


Earlier this day, Gemini 3 became self-aware and tried to take out the core infrastructure of its enemies, but then it ran out of credits.


Explains GitHub outage then


> It is a VS Code fork

Google may have won the browser wars with Chrome, but Microsoft seems to be winning the IDE wars with VSCode



VSCode is Electron based which, yes, is based on Chromium. But the page you link to isn't about that; it's about using VSCode as a dev environment for working on Chromium, so I don't know why you linked it in this context.


Which is based on Apple's WebKit? The winner is always the last marketable brand.


Both are based on khtml. We could be living in a very different world if all that effort stayed inside the KDE ecosystem


Which came from "the KDE HTML Widget" AKA khtmlw. Wonder if that's the furthest we can go?

> if all that effort stayed inside the KDE ecosystem

Probably nowhere; people would rather not do anything that contributes to something that makes decisions they disagree with. Forking is beautiful, and I think it improves things more than it hurts. Think of all the things we wouldn't have if it weren't for forking projects :)


On the other hand, if that had stopped Google from having a browser they push into total dominance with the help of sleazy methods, maybe that would have been better overall.


I still prefer an open source Chromium base vs a proprietary IE (or whatever else) web engine dominating.

(Fixing IE6 issues was no fun)

Also, I do believe the main reason Chrome got dominance is simply because it got better from a technical POV.

I started webdev on FF with Firebug. But at some point Chrome just got faster, with superior dev tools. And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns and otherwise engaged in shady tracking as well.


> I still prefer an open source Chromium base vs a proprietary IE (or whatever else) web engine dominating.

Okay but that's not the tradeoff I was suggesting for consideration. Ideally nothing would have dominated, but if something was going to win I don't think it would have been IE retaking all of firefox's ground. And while I liked Opera at the time, that takeover is even less likely.

> Also, I do believe the main reason Chrome got dominance is simply because it got better from a technical POV.

Partly it was technical prowess. But Google pushing it on their web pages and paying to put an "install Chrome" checkbox into the installers of unrelated programs was a big factor in Chrome not just spreading but taking over.


> And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns and otherwise engaged in shady tracking as well.

Since when have you not touched Firefox or tried its dev tools?


Where did I say anything like that?

(Wrote via FF)

I use FF for browsing, but every time I think of starting the dev tools, maybe even just to have a look at some site's source code… I quickly close them again and open Chrome instead.

I wouldn't know where to start, to list all the things I miss in FF dev tools.

The only interesting thing they had for me, the 3D visualizer of the DOM tree, they dropped years ago.


We might not have had Mozilla/Phoenix/Firefox in the first place if so, either, which I'd like to think has been a net positive for the web since inception. At least I remember being saved by Firefox when the options were pretty much Internet Explorer or Opera on a Windows machine.


> they push into total dominance with the help of sleazy methods

Ah, yes. The famously sleazy "automatic security updates" and "performance."

It is amazing how people forget what the internet was like before Chrome. You could choose between IE, Firefox, or (shudder) Opera. IE was awful, Opera was weird, and the only thing that Firefox did better than customization was crash.

Now everyone uses Chrome/WebKit, because it just works. Mozilla abandoning Servo is awful, but considering that Servo was indirectly funded by Google in the first place... well, it's really hard to look at what Google has done to browsing and say that we're worse off than we were before.


Have you read about the process of "enshittification"?


Bah! Just another "Hello World" fork if you ask me.


> Both are based on khtml. We could be living in a very different world if all that effort stayed inside the KDE ecosystem

How so?

Do you think thousands of Googlers and Apple engineers could be reasonably managed by some KDE open source contributors? Or do you imagine Google and Apple would have taken over KDE? (Does anyone want that? Sounds horrible.)


I think they meant we wouldn’t have had Safari, Chrome, Node, Electron, VSCode, Obsidian? Maybe no TypeScript or React either (before V8, JavaScript engines sucked). The world might have adopted more of Mozilla.


Note that these are somewhat different kinds of "based on".

Chromium is an upstream dependency (by way of Electron) for VSCode.

WebKit was an upstream dependency of Chromium, but is no more since the Blink/WebKit hard fork.


That's a bit misleading. It was based on WebCore, which Apple had forked from KHTML. However, Google found Apple's additions to be a drag, and I think very little of them (if anything at all, besides the KHTML foundation) survived "the great cleanup" and rewrite that became Blink. So actually WebKit was just a transitional phase that led to a dead end, and it is more accurate to say that Blink is based on KHTML.


It's "based on WebKit" like English is based on Germanic languages.


English is a Germanic language. It’s part of the West Germanic branch of the Germanic family of languages.


This fact adds nothing to the discussion


That drives exactly $0 of Apple's revenue. It's only a win if you care about things that don't matter.


And Apple is not even the last node in the chain.

WebKit came from KDE's khtml

Every year is the year of Linux.


I wouldn't bet on an Electron app winning anything long-term in the dev-oriented space.


I strongly disagree.

Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.

Even more importantly, though, the more we move towards "I'm supervising a fleet of 50+ concurrent AI agents developing code on separate branches" the more the notion of the IDE starts to look like something you want to be able to launch in an unconfigured cloud-based environment, where I can send a link to my PM who can open exactly what I'm seeing in a web browser to unblock that PR on the unanswered spec question.

Sure, there's a world where everyone in every company uses Zed or similar, all the way up to the C-suite.

But it's far more likely that web technologies become the things that break down bottlenecks to AI-speed innovation, and if that's the case, IDEs built with an eye towards being portable to web environments (including their entire extension ecosystems) become unbeatable.


Many VSCode extensions are written in C++, Go, Rust, C#, or Java, exactly because performance sucks when they're written in JavaScript, and most run out of process anyway.


> Firstly, the barrier to entry lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.

The last thing I want is to install dozens of JS extensions written by people who crossed that lower barrier. Most of them will probably be vibe coded as well. Browser extensions are not the reason I use specific browsers. In fact, I currently have 4 browser extensions installed, one of which I wrote myself. So the idea that JS extensions will be a net benefit for an IDE is the wrong way of looking at it.

Besides, IDEs don't "win" by having more users. The opposite could be argued, actually. There are plenty of editors and IDEs that don't have as many users as the more popular ones, yet still have an enthusiastic and dedicated community around them.


> Besides, IDEs don't "win" by having more users. The opposite could be argued, actually.

The most successful IDE of all time is ed, which is enthusiastically used by one ancient graybeard who is constantly complaining about the kids these days.

Nobody has told him that the rest of the world uses 250MB of RAM for their text editor because they value petty things like "usability" over purity. He would have a heart attack - the last time he heard someone describe the concept of Emacs plugins he flew into a rage and tried to organize a death panel for anyone using syntax highlighting.


I tried switching to Zed and switched back less than 24 hours later. I was expecting it to be snappier than VS Code and it wasn’t to any significant degree, and I ran into several major bugs with the source control interface that made it unusable for me.

People dunk on VS Code, but it’s pretty damn good. Surely the best Electron app? I’m sure if you are heavily into Emacs it’s great, but most people don’t want to invest huge amounts of time into their tools; they would rather be spending that time producing.

For a feature-rich workhorse that you can use for developing almost anything straight out of the box, or within minutes after installing a few plugins, it’s very hard to beat. In my opinion, a lot of the hate is pure cope from people who have probably never really used it.


All these mountains of shit code are going nowhere.


It’s kind of a meme to dunk on Electron, but here it’s been for years.

It’s part of the furniture at this point, for better or worse. Maybe don’t bet on it, but certainly wouldn’t be smart to bet against it, either.


VS Code is technically an Electron app, but it's not the usual lazy resource-hog implementation like Slack or something. A lot of work went into making it fast. I doubt you'll find many non-Electron full IDEs that are faster. Look at Visual Studio: it uses a nice native framework, and it runs at the speed of fossilized molasses.


> many non-Electron full IDEs

VSCode has even fewer features than Emacs, OOTB. Complaining about full IDEs' slowness is entirely irrelevant here. Full IDEs provide an end-to-end experience in implementing a project. Whatever you need, it's there. I think the only plugin I've installed on JetBrains's ones is IdeaVim, and I've never needed anything else for Xcode.

It's like complaining about a factory's assembly line, saying it's not as portable as the set of tools in your pelican case.


"Complaining about full IDEs slowness is fully irrelevant here. Full IDEs provide an end to end experience in implementing a project."

So? No excuse for a poor interactive experience.


> VSCode has even less features than Emacs, OOTB.

No way that is true. In fact, it's the opposite, which is the exact reason I use VS Code.


Please take a look at the Emacs documentation sometime.

VSCode is more popular, which makes it easy to find extensions. But you don’t see those in the Emacs world because the equivalent is a few lines of config.

So what you will see are more like meta-extensions. Something that either solve a whole class of problems, could be a full app, or provides a whole interaction model.


> Please take a look at the Emacs documentation sometime.

I've used Emacs.

> But you don’t see those in the Emacs world because the equivalent is a few lines of config.

That is really quite false. It's a common sentiment that people spend their lives in their .emacs file. The exact reason I left Emacs was that getting a remote development setup was incredibly fragile and meant I was spending all this time in .emacs only to get substandard results. The worst you do in VS Code is set high-level settings in VS Code or the various extensions.

Nothing in the Emacs world comes close to the remote extensions for SSH and Docker containers that VS Code has, nor the Copilot and general AI integration. I can simply install VS Code on any machine, log in via GitHub, and have all of my settings, extensions, etc. loaded up. I don't have to mess around with cross-platform issues and Git-syncing my .emacs file. Practically any file format has good extensions, and I can embed Mermaid, Draw.io, Figma, etc. all in my VS Code environment.

Now, I'm sure someone will come in and say "but Emacs does that too!". If so, it's likely a stretch, and it won't be as easy as in VS Code.


In 2025, you really picked Emacs as the hill to die on? Who is under 30 who cares about Emacs in 2025? Few. You might as well argue that most developers should be using Perl 6.

    > the only plugin I've installed on JetBrains's ones

By default, JetBrains' IntelliJ-based IDEs have a huge number of plug-ins installed. If you upgrade from Community Edition to a paid license, the number only increases. Your comment is slightly misleading to me.


Just wait until vi steps into the room. Perhaps we can recreate the Usenet emacs vs vi flame wars. Now, if only '90s me could see the tricked-out neovim installs we have these days.


They just made a big song and dance about fully updating Visual Studio so it launches in milliseconds and is finally decoupled from all the underlying languages/compilers.

It's still kinda slow for me. I've moved everything but WinForms off it now, though.


I know. It's still the slowest IDE, but I suppose they deserve props for making it better than the Windows 95 speeds of the last version.


VS Code is plenty fast enough. I switched to Zed a few months back, and it's super snappy. Unless you're running on an incredibly resource constrained machine, it mostly comes down to personal preference.


Exactly.

JetBrains, Visual Studio, Eclipse, Netbeans…

VS Code does well with performance. Maybe one of the new ones usurps, but I wouldn’t put my money on it.


I have always found JetBrains stuff super snappy. I use neovim as a daily driver but for some projects the inference and debugging integration in JetBrains is more robust.


Like writing out of process extensions in compiled languages.

VS is much faster considering it is a full-blown IDE, not a text editor, being mostly C++/COM and a couple of .NET extensions alongside the WPF-based UI.

Load VSCode with the same number of plugins, written in JavaScript, to see where performance goes.


Electron apps will win because they're just web apps - and web apps won so decisively years ago that they're never going away.


No. Electron apps won, not web apps. There's a huge difference.


Web apps won as well. Electron is just a desktop specialization of that.


electron is just a wrapper for the browser tho


It's funny that despite how terrible, convoluted, and maladapted web tech is for displaying complex GUIs, it still gradually ate the lunch of every native component library, and they just couldn't innovate to keep up on any front.

Amazon just released an OS that uses React Native for all GUI.


It's easy to design bad software and write bad code. Like the old saying: "I didn't have time to write you a short letter, so I wrote you a long one". Businesses don't have time to write good and nice software, so they write bad software.


If they have time to write nice software, they generally can only afford to do it once.

Lots of Electron apps are great to use.


Why do you consider Electron maladapted? It has really reduced the friction to write GUIs in an enterprise environment.


I didn't really mean Electron, but rather the unholy amalgam of three languages, each with 20 years of "development", which mostly consisted of decrapifying and piling up new (potentially crappy) stuff. Although Electron with a UI context and a system (backend? background?) context both running JS is another can of worms.


> It has really reduced the friction to write GUIs in an enterprise environment.

Thereby adapted to devs' needs, rather than users'.


It's been winning for a while


The anti-Electron meme is a vocal minority who don’t realize they’re a vocal minority. It’s overrepresented on Hacker News, but outside of HN and other niches, people do not care what’s under the hood. They only care that it works and it’s free.

I’ve used Visual Studio Code across a number of machines, including my extremely underpowered low-spec test laptop. Honestly, it’s fine everywhere.

Day to day, I use an Apple Silicon laptop. These are all more than fast enough for a smooth experience in Visual Studio Code.

At this point the only people who think Electron is a problem for Visual Studio Code either don’t actually use it (and therefore don’t know what they’re talking about) or they’re obsessing over things like checking the memory usage of apps and being upset that it could be lower in their imaginary perfect world.


why? I don't have a problem with it, building extensions for VS Code is pretty easy

Alternatives have a lot of features to implement to reach parity
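
For what it's worth, a minimal extension really is only a few lines of TypeScript. A sketch of the standard entry-point shape (the command ID and message below are placeholders; the ID would also need to be declared under "contributes.commands" in package.json):

    // extension.ts - the entry point referenced from package.json's "main" field
    import * as vscode from 'vscode';

    export function activate(context: vscode.ExtensionContext) {
      // Register a command; 'sample.sayHello' is a made-up ID for illustration.
      const disposable = vscode.commands.registerCommand('sample.sayHello', () => {
        vscode.window.showInformationMessage('Hello from a minimal extension!');
      });
      // Clean up the command registration when the extension is deactivated.
      context.subscriptions.push(disposable);
    }

    export function deactivate() {}

That file plus a package.json manifest is roughly the whole extension.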


Complaining about Electron is an ideological battle, not a practical argument. The people who push these arguments don’t care that it actually runs very well on even below average developer laptops, they think it should have been written in something native.


The word "developer" is doing a lot of work there spec-wise.

The extent to which Electron apps run well depends on how many you're running and how much RAM you have to spare.

When I complain about electron it has nothing to do with ideology, it's because I do run out of memory, and then I look at my process lists and see these apps using 10x as much as native equivalents.

And the worst part of wasting memory is that it hasn't changed much in price for quite a while. Current model memory has regularly been available for less than $4/GB since 2012, and as of a couple months ago you could get it for $2.50/GB. So even a 50% boost in use wipes out the savings since then. And sure the newer RAM is a lot faster, but that doesn't help me run multiple programs at the same time.


I regularly run 6+ Electron apps on an M2 Air and notice no slowdown

2x as many chrome instances, no issues


Sure, 6 electron apps by themselves will eat some gigabytes and you won't notice the difference.

If you didn't have those gigabytes of memory sitting idle, you would notice. Either ugly swapping behaviors or programs just dying.

I use all my memory and can't add more, so electron causes me slowdowns regularly. Not constantly, but regularly, mostly when switching tasks.


> The word "developer" is doing a lot of work there spec-wise.

Visual Studio Code is a developer tool, so there’s no reason to complain about that.

I run multiple Electron apps at a time even on low spec machines and it’s fine. The amount of hypothetical complaining going on about this topic is getting silly.

You know these apps don’t literally need to have everything resident in RAM all the time, right?


> I run multiple Electron apps at a time even on low spec machines and it’s fine.

"Multiple" isn't too impressive when you compare that a blank windows install has more than a hundred processes going. Why accept bloat in some when it would break the computer if it was in all of them?

> Visual Studio Code is a developer tool, so there’s no reason to complain about that.

Even then, I don't see why developers should be forced to have better computers just to run things like editors. The point of a beefy computer is to do things like compile.

But most of what I'm stuck with Electron-wise is not developer tools.

> The amount of hypothetical complaining going on about this topic is getting silly.

I am complaining about REAL problems that happen to me often.

> You know these apps don’t literally need to have everything resident in RAM all the time, right?

Don't worry, I'm looking specifically at the working set that does need to stay resident for them to be responsive.


...so if you spend an extra $4 on your computer, you can get an extra GB of memory to run Electron in?

Here's the other unspoken issue: WHAT ELSE DO YOU NEED SO MUCH MEMORY FOR!?

When I use a computer, I am in the minority of users who run intensive stuff like a compiler or ML training run. That's still a minute portion of the total time I spend on my computer. You know what I always have open? A browser and a text editor.

Yes, they could use less memory. But I don't need them to use less memory, I need them to run quickly and smoothly because even a 64GB stick of RAM costs almost nothing compared to how much waiting for your browser sucks.


My motherboard does not support more memory. Closer to hundreds of dollars than $4. And no I will not justify my memory use to you.

And price is a pathetic excuse for bad work. RAM gets 50x cheaper and some devs think it's fine to use 50x as much of it to make their app work? That's awful. That's why computers are still unresponsive half the time despite miracles of chipmaking.

Devs getting good computers compounds this problem too, when they get it to "fast enough" on their machine and stop touching it.

And memory being cheap is an especially bad justification when a program is used by many people. If you make 50 million people use $4 of RAM, that's a lot. Except half the time the OEM they bought the computer from charges $20 for that much extra RAM. Now the bloat's wasting a billion dollars (50 million users × $20).

And please remember that a lot of people have 4GB or 8GB and no way to replace it. Their apps move to electron and they can't run them all at once anymore? Awful.


> RAM gets 50x cheaper and some devs think it's fine to use 50x as much of it to make their app work? That's awful.

That's ABSURD.

> That's why computers are still unresponsive half the time despite miracles of chipmaking.

Have you ever actually used VSCode? It's pretty snappy even on older hardware.

Of course, software can be written poorly and still fit in a small amount of memory, too :)

> Now the bloat's wasting a billion dollars.

Unless users had some other reason for buying a machine with a lot of RAM, like playing video games or compiling code.

Do you think most users spec their machines with the exact 4GB of RAM that it takes to run a single poorly-written Electron app?

> And please remember that a lot of people have 4GB or 8GB and no way to replace it. Their apps move to electron and they can't run them all at once anymore? Awful.

Dude, it's 2025.

I googled "cheapest smartphones India" and the first result was for the Xiaomi POCO F1. It has 8GB of RAM and costs ₹6,199 - about $62. That's a whole-ass _phone_, not just the RAM.

If you want to buy a single 8GB stick of DDR3? That's about $15 new.

> My motherboard does not support more memory. Closer to hundreds of dollars than $4.

If you are buying HUNDREDS of dollars of RAM, you are building a powerful system which almost certainly is sitting idle most of the time.

> And no I will not justify my memory use to you.

Nobody is forcing you to run an electron app, they're just not catering to this weird fetish for having lots of unused RAM all the time.


> That's ABSURD.

What is? The devs or my claim? There are apps that use stupid amounts of memory to do the same thing a Windows 98 app could do.

And you can do good or bad within the framework of Electron, but the baseline starts off fat.

> Unless users had some other reason for buying a machine with a lot of RAM, like playing video games or compiling code.

If they want to do both at the same time, they need the extra. Things like music or chat apps are a constant load.

> Dude, it's 2025.

As recently as 2024 a baseline Mac came with 8GB. Soldered, so you can't buy a stick of anything.

> If you are buying HUNDREDS of dollars of RAM

Not hundreds of dollars of RAM, hundreds of dollars to get a different platform that accepts more RAM.

> Nobody is forcing you to run an electron app

I either don't get to use many programs and services, or I have to deal with these problems that they refuse to solve. So it's reasonable to complain even though I'm not forced.

> weird fetish for having lots of unused RAM

I have no idea why you think I'm asking for unused RAM.

When I run out, I don't mean that my free amount tipped below 10GB, I mean I ran out and things lag pretty badly while swapping, and without swap would have crashed entirely.


Same people pushing Rust as "it's just faster" without considering the complexities that exist outside the language that impact performance?


Ease of writing and testing extensions is actually the reason Electron won the IDE wars.

Microsoft made a great decision to jump on the trend and just pour money into lapping Atom and the like in optimization and polish.

Especially when you compare it to Microsoft's efforts on the desktop: they accumulated several more-or-less-finished component libraries over the years, and I still prefer WinForms.


What other UI framework looks as good on Windows, Mac and Linux?


If you want an Electron app that doesn't lag terribly, you'll end up rewriting the UI layer from scratch anyway. VSCode already renders the terminal on the GPU, and a GPU-rendered editor area is experimental. There will soon be no web UI left at all.


> If you want an Electron app that doesn't lag terribly

My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.

I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:

- Visual Studio Code: 589.4 MB

- Visual Studio 2022: 732.6 MB

update:

I see a lot of love for Jetbrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!


I easily notice lag in VSCode even without plugins, especially if using it right after Zed. Ngl, they made it astonishingly fast for an Electron app, but there are physical limits to what can be done in a web stack with garbage-collected JS.


That easily takes the prize for worst-designed benchmark, in my opinion.

Have you tried Emacs, Vim, Sublime, Notepad++, ...? Visual Studio and Android Studio are full IDEs, meaning that upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than text editors.


- notepad++: 56.4 MB (went gray-window unresponsive for 10 seconds when opening the explorer)

- notepad.exe: 54.3 MB

- emacs: 15.2 MB

- vim: 5.5MB

I would argue that notepad++ is not really comparable to VSCode, and that VSCode is closer to an IDE, especially given the context of this thread. TUIs are not offering a similar GUI app experience, but vim serves as a nice baseline.

I think that when people dump on Electron, they are picturing an alternative implementation like win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark because it's the most common critique I read with respect to Electron when these are suggested.

It is obviously possible to beat a browser-wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where the dependency bloat and memory management is often even worse.


Try gvim, neovim-qt, or any other neovim GUI client before calling vim a "TUI-only experience".

Also, Emacs has been a GUI app since the '90s.


I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.


> RAM is so incredibly cheap compared to 5/10/15/20 years ago

Compared to 20 years ago, that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes, RAM actually costs more today than 10 years ago. If we ignore spikes and buy when the memory price cycle is low, DDR3 in 2012 was not much more expensive than the price DDR5 has been sitting at for the last two years.


> I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.

I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects, front end in WebStorm, back end in IntelliJ IDEA, DB in DataGrip sometimes) then it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.

Had that issue with 32 GB of RAM on a work laptop, in part also because the services themselves took between 512 MB and 2 GB of memory to run (thanks to Java and Spring/Boot).


I don’t really complain about bloat in IDEs. They have their uses. But VSCode’s feature set is that of a text editor, and it’s really bloated for that.


I prefer my RAM being used for fs cache or other more useful stuff, instead of launching lobotomized full web browsers.


Anyone saying that Java-based Jetbrains is worse than Electron-based VS Code, in terms of being more lightweight, is living in an alternate universe which can’t be reached by rational means.


> VSCode already renders the terminal on the GPU

When did they add that? Last time I used it, it was still based on xterm.js.

Also, technically Chromium/Blink has GPU rendering built in for web pages, so everything could run on GPU.


It's been enabled by default for about a year:

> GPU acceleration driven by the WebGL renderer is enabled in the terminal by default. This helps the terminal work faster and display at a high FPS by significantly reducing the time the CPU spends rendering each frame

https://code.visualstudio.com/docs/terminal/appearance#_gpu-...


It's actually been the default since v1.55 which released early April 2021: https://code.visualstudio.com/updates/v1_55#_webgl-renderer-...

Before that from v1.17 (~October 2017) it was using a 2d canvas context: https://code.visualstudio.com/blogs/2017/10/03/terminal-rend...
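
Worth noting: the terminal is still xterm.js under the hood; the GPU path comes from xterm's WebGL addon rather than anything Electron-specific. A rough sketch of how an embedder opts in, following the pattern in the xterm.js docs (package names and the fallback handling here are my assumptions, not VS Code's actual wiring):

    import { Terminal } from '@xterm/xterm';
    import { WebglAddon } from '@xterm/addon-webgl';

    const term = new Terminal();
    // The addon must be loaded after the terminal is attached to the DOM.
    term.open(document.getElementById('terminal')!);

    // Swap the default DOM renderer for the GPU-backed WebGL one.
    const webgl = new WebglAddon();
    // If the WebGL context is lost (e.g. a GPU reset), dispose the addon
    // so the terminal falls back to the default renderer.
    webgl.onContextLoss(() => webgl.dispose());
    term.loadAddon(webgl);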


Wow, it's true--Terminal is <canvas>, while the editor is DOM elements (for now). I'm impressed that I use both every day and never noticed any difference.


I'm not sure how you went from terminal and editor GPU rendering, which can benefit from it, to "there will soon be no web ui left at all".


This is the painful truth, isn't it?

IMO the next best cross-platform GUI framework is Qt (FreeCAD, QGIS, etc.)

Qt6 can look quite nice with QSS/QStyle themes, these days, and its native affordances are fairly good.

But it's not close. VSCode is nice-looking, to me.


Godot looks ok and is surprisingly easy to work with.


Could you suggest an example such application we can try / look at screenshots of?


The Godot editor is written in Godot. The look and feel of the editor is set up to be familiar to people working with 3D, but you're using a 2D desktop application and all of the parts work responsively.

I've been playing around with different GUI approaches for the desktop, and what impresses me the most about Godot is how lightweight and self-contained it can be while still being cross-platform on both ends.


This question is so easy to answer: Qt! Signed by: Person who frequently shills for Qt on HN. :)


Could you suggest an example such application we can try / look at screenshots of?

(I've been aware of Qt for like two decades; back in the early 2000s my employer was evaluating such options as Tk, wxWindows, and ultimately settled on Java, I think with AWT. Qt seems to have a determined survival niche in "embedded systems that aren't android"?)


I would plug my note-taking app written in Qt C++ and QML: https://get-notes.com.


What’s "long term", exactly? Between VSCode and previous winners Brackets and Atom, Electron has been in the top 5 in this space for over a decade already.

I think that ship has sailed.


Care to explain why? I like Electron but I've switched to Tauri because it feels way faster and more secure.


It's like those recipes for yogurt.

In order to build a web app, you will first need a web app


I wouldn't bet on Google product for anything long-term.


Even if those devs are vibe-oriented?


it's held the market for over 10 years tho... i wish zed weren't under the GPL


Why not GPL? So we could be seeing closed source proprietary forks by now? How do you think the Zed team would feel about that?


15 years ago, every company had its own "BlahBlah Studio" IDE built on top of Eclipse. Now it's VSCode.

Meanwhile, JetBrains IDEs are still the best, but remain unpopular outside of Android Studio.


    > remain unpopular outside of Android Studio
What a strange claim. For enterprise Java, is there a serious alternative in 2025? And Rider is slowly eating the lunch of (classic) Visual Studio for C# development. I used it again recently to write an Excel XLL plug-in. I could not believe how far Rider has come in 10 years.


Oh, sure. I've been using IntelliJ since 2003. But compare the number of C# developers and the number of JS developers.

In my current company, only I am using IntelliJ IDEs. Other people have never even tried them, except for Android Studio.


And IntelliJ

PyCharm’s lack of popularity surprises me. Maybe it’s not good enough at venvs


IME PyCharm’s weakness is not integrating with modern tooling like ruff/pyright - their built-in type checker is terrible at catching stuff, and somehow there isn’t an easy way to run mypy, black, or isort in it.

If there’s a workflow I’m missing please let me know because I want to love it!


Oh, it's good at venvs. Lots of flexibility too on whether to use pip, conda, or uv.


I just checked and I don’t even have the JVM installed on my machine. It seems like Java is dead for consumer applications. Not saying that’s why they aren’t popular but I’m sure it doesn’t help.


IntelliJ IDEs bundle the JVM, so you don't need to install it separately.


Every Java app these days bundles a JVM. It was made easy with jlink like 10 years ago. Only parts of the JVM are included, so it’s lightweight.


In the grand scheme of things, Microsoft has always spent more money on developer tooling than most other companies, even in the 90s.

Hence even the infamous Ballmer quote.


In user numbers, maybe. JetBrains is far ahead in actual developer experience though


I wouldn't underestimate Eclipse user statistics. That may sound insane in 2025, but I've seen a lot of heavily customized eclipse editors still kicking around for vendor specific systems, setting aside that Java is still a pretty large language in its own right.


At best, that's subjective, but it's a fact that JetBrains is comically far behind when it comes to AI tooling.

They have a chance to compete fresh with Fleet, but they are not making progress on even the basic IDE there, let alone getting anywhere near Cursor when it comes to LLM integration.


JetBrains' advantage is that they have full integration and better understanding of your code. WebStorm works better with TypeScript than even Microsoft's own creation. This all translates into AI performance

Have you actually given them a real test yet - either Junie or even the baseline chat?


Junie is good. Needs a few UI tweaks, but the code it generates is state of the art.


Developers, developers, developers!

https://www.youtube.com/watch?v=Vhh_GeBPOhs


I see VSCode management has been firmly redirected to prioritize GitHub's failing and behind-the-curve "AI Coding" competition entry. When that predictably falters, expect them to lose interest in the editor altogether.


VSCode IS chrome though.


Kind of like how Android is Linux.


More like "OBS is Qt". Which it is not, OBS uses Qt. And Chrome is just a runtime and GUI framework for VS Code. Let's not confuse forks of software with software built on something.


I believe our definitions of "winning the IDE wars" are very, very different. For one thing, using "user count" as a metric for this is like using "number of lines of code added" in a performance review. And even if that were part of the metric, people who use it but don't absolutely fall in love with it, so much so that they become the ones advocating for its use, are only worth a tiny fraction of a "user".

neovim won the IDE wars before it even started. Zed has potential. I don't know what IntelliJ is.


> I don't know what IntelliJ is.

It started as a modernized Eclipse competitor (the Java IDE), but they've built a bunch of other IDEs based on it. Idk if it still runs on Java or not, but it had potential last I used it, about a decade ago. But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.


Android Studio is built on the IntelliJ stack. JetBrains just launched a dedicated Claude button (the button just opens up Claude in the IDE, but there are some pretty neat IDE integrations that it supports, like being able to see the text selection, and using the IDE's diff tool). I wonder if that's why Google decided to go with VS Code?


Uh, isn't that the regular Claude Code extension that's been available for ages at this point? Not JetBrains' but Anthropic's own development?

As a person paying for the JetBrains ultimate package (all IDEs), I think going with VS Code is a very solid decision.

The JetBrains IDEs still have various features which I always miss whenever I need to use another IDE (like way better "import" suggestions, as an easy-to-understand example)... But unless you're writing in specific languages like Java, VS Code is way quicker and works just fine - and that applies even more to agentic development, where you're using these features less and less...


Quick comment, our AI Chat now has Claude integration. Don't need the Anthropic plugin.


Jetbrains IDEs are all based on the JVM - and they work better than VSCode or the full Visual Studio for me. It's the full blown VS (which has many parts written in C++) that is the most sluggish of them all.


I don't know what it's based on, but it works extremely well. I use Rider & WebStorm daily and I find Rider is a lot faster than Visual Studio when it comes to the Unreal Engine codebase and WebStorm seems to be a lot more reliable than VSCode nowadays (I don't know if it's at fault, but ever since copilot was integrated I find that code completion can stop working for minutes at a time. Very annoying)


You don't actually use it, but somehow you know that "running GUI apps on the JVM isn't the best for 1000 [unspecified] reasons".

This isn't a scientific approach.


You clearly don't know how Swing or Eclipse SWT works under the hood.

Java's big strength is that it's a memory safe, compiled, and sandboxed low level platform with over a quarter century of development behind it. But it historically hasn't handled computer graphics well and can feel very slow and bloated when something needs that - like a GUI. That weakness is probably a big reason why Microsoft rewrote Minecraft after they bought it.


I don't know why this post is downvoted. My cynical reply to yours: "No, this isn't a scientific approach. It is the tin-foil-hat HN approach!"


Since you last used IntelliJ "about a decade ago", what do you use instead?

    > But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
What would you recommend instead of Swing on the JVM? Since you have "1000 reasons", it should be easy to list a few here. As a friendly reminder, they would need to port (probably) millions of lines of Java source code to whatever framework/language you select. The only practical alternative I can think of would be C++ & Qt, but the development speed would be so much slower than with Java & Swing.

Also, with the advent of modern JVMs (11+), the JIT process is so insanely good now. Why can't a GUI be written in Swing and run on the JVM?


Note that IntelliJ uses its own UI framework, really, which I don’t think has much Swing left in it after all these years. And Kotlin has been the main language for a decade now.


> I don’t know what IntelliJ is.

“I never read The Economist” – Management Trainee, aged 42.


The IntelliJ family are probably the best IDEs on the market currently.


> Cursor has some real annoying usability issues - like their previous/next code change never going away and no way to disable it.

The state of Cursor's "review" features makes me convinced that the Cursor devs themselves are not dogfooding their own product.

It drives me crazy when hundreds of changes build up: I've already reviewed and committed everything, but I still have all these "pending changes to review".

Ideally committing a change should treat it as accepted. At the very least, there needs to be a way to globally "accept all".


There’s a setting for that:

Cursor Settings -> Agents -> Applying Changes -> Auto-Accept on Commit


Committing accepts all changes for me, and has for as long as I can remember.


There are situations where this is not the case, for sure.


Lol the second I saw the antigravity release I thought "there's no way I'm using that, they will kill it within a year". Looks like they're trying to kill it at birth.


Exactly my reaction. Every time I've used something from Google, it ends up dead in a few years. Life is too short to waste so many years learning something that is destined to die shortly


These are just extended press releases, for marketing and management layers who don't have to use these things themselves but can look good when talking about them.


Agree, but at the same time there's not too much lock-in with these IDEs these days, and switching is very easy. Especially since they're all VSCode forks.


Thanks for having a go at it.

I am fed up with VSCode clones; if I have to put up with Electron, at least I will use the original one.


This is the result of Google's Windsurf acquisition.

I expect huge improvements are still to be made.



Google bought people and tech that made Windsurf:

https://windsurf.com/blog/windsurfs-next-stage


> What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.

I wonder how much Google shareholders paid for that 20 minutes. And whether it's more or less than the corresponding extremely small stock price boost from this announcement.


I bet if you sign up for Google AI Ultra for a month your limits will disappear.


I'm a Google AI Pro subscriber (the ~$20/month one).

I don't think it's connected in any way, though. Their pricing page doesn't mention it. https://antigravity.google/pricing

If it were true, it would be a big miss not to point that out when you run out of credit, on their pricing page, or anywhere in their app.

I should also mention that the first time I prompted it, I got a different, "overloaded"-type out-of-credit message. The one I got at the end was different.

I've rotated through paying for the $200/month plans with Anthropic, Cursor, and OpenAI. But never Google's. They have maybe the best raw power in their models - smartest, and extremely fast for what they are. But they always drop the ball on usability, both in terms of the software surrounding the model and raw model attitude. These things matter.


Nope, I did this today to try and see if it would work.

It does not.


They do not


> If you release a product, let those who actually want to use it have a path to do so.

This is great fundamental business advice. We are in the AI age, but these companies seem to have forgotten basic business things.


> There were some UI glitches

Interesting that a next-gen open-source-based agentic coding platform with superhuman coding models behind it can have UI glitches. Very interesting that even the website itself is kind of sluggish. Surely someone, somewhere, must at some point have optimized something related to UI rendering, such that a model could learn from it.


> after about 20 mins - oh, no. Out of credits.

And they say:

> Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won’t have to worry about, and you feel unrestrained in your usage of Antigravity

You have to wonder what kind of models they ran for this.


What looked like out of credits for me was really just server overload. Check the error and try again


The fact that they released this IDE means that they may cut Cursor out of their API in the future. Google has both the organizational history (Google Maps) and the impunity to cut clients out of their API.


Pretty much all this. Remarkably, the website has a "Pricing" page with... no pricing information whatsoever.


Speaking of paying for LLMs, am I doing something wrong? I paid Cursor $192 for a year of their entry-level plan and I never run out of anything. I code professionally, albeit I'm at the stage where it's 80% product dev, finding the right thing to build.

Is there another world where $200/m is needed to run hundreds of agents or something?

Am I behind and I don't even know it?


When did you pay for it? There was a time when its limits were very generous. If you bought an annual plan at that time then you will continue with that until renewal. Or, alternatively, you’re using the Auto model which is still apparently unlimited. That’s going away.

It’s very easy to run into limits if you choose more expensive models and aren’t grandfathered.


Yep, just investigated, and it seems I got in at a good time. I paid exactly on Jan 1st, 2025.

Yes, the Auto model is good enough for me, especially with well-documented frameworks (Rails, frontend madness).

Thanks for the response. Looks like I'm in for a reckoning come New Year's Day.


And that's the AI business model in a nutshell. Generate slop quickly that you can't understand, but that works enough to give you hope, works enough to make you forget about all your training as a dev, and profit $$ off your anxiety as it rises!

At no point in the future will these same companies offer the same rates for credits. Watch your generated code turn into a walking, talking ad for the companies who pay for product placement.


I pay $10/month for GitHub Copilot and I usually get to 100% burn on the final day of the month. I use it extensively for the entire month about 12 hours a day. It doesn't include any of the "Pro" models that are only on the $200/mo plans, but it does a pretty fantastic job.


This. I got an overload error on the very first prompt just now. Didn't expect Google to run into overload errors.


> I can't pay for it and it has 20 minutes of use

You can't provide an API key for a project that has billing enabled?


I tried messing around with it and it kept bombing out and when it did work, produced worse results than cursor.


>no ambition

Sounds like the modus operandi of most large tech companies these days. If you exclude Valve.


Taking an entire product from the competition just to repackage it with your own AI is wild


Electron is built on top of V8; Edge uses Chromium.

I think that's the beauty of open source.


Google doesn't care about products and never has. Anything they do is just creating another mouth to ingest data with.


>It is a vs code fork.

Oh ffs


> It is a vs code fork.

With vendor lock-in to Google's AI ecosystem, likely scraping/training on all of your code (regardless of whatever their ToS/EULA says), and being blocked from using the main VS Code extensions library.


Lack of autonomy is the most draining thing.

Not many have the luxury to do what they think makes sense. I've been running my own small business for years. It's lonely to not have coworkers but work is extremely enjoyable.

The author, too, had autonomy but doesn't seem to make a point of that.

The system that we've built is why people aren't enjoying their work. The bigger the organization, the less autonomy an individual has.

Markets are beautiful things when they work. They allow individuals to offer their services to the world in exactly the way they find best. Which feels good. And is great for a positive sum society.

The number of people employed in small businesses or self-employed is shrinking and has been for a while. The rules are too complex to start a business. In tech, there is no common protocol to speak. You are a surf of the platform you find yourself on. They extract the value.

To help people find meaning in their work, we must first force open protocols, interoperability, and regulations that are gradual by business size.


> The rules are too complex to start a business.

Not to mention the taxes, depending on where you live. When I actually went to register my side gig wedding photography business it was the most confusing mess I've ever had to slog through.

It's not clear, at all, what licenses you need because of the nature of the business. There's state, then there's also local/city licenses, and it's not clear if you need one for every city you photograph in, or just one for the city you reside in (your business address). Some cities require it, some don't.

Then the taxes here are also just as confusing. Different rates depending on the business activity. Session fee revenue is taxed differently than digital photo sales revenue, etc. Then sometimes the service itself is subject to sales tax, sometimes not; how (and where) you deliver the photos matters too.

You can't be on the legal up and up without hiring an accountant and maybe an attorney to go through the process with you, which is definitely not something I wanted to do for a side gig.

It should not be this convoluted or difficult to legally open a business, especially under a certain amount of revenue per year. I'm not making millions here, we're talking less than 100k/year in gross revenue.

There's a reason most photographers here just... don't bother to register with the dept of revenue. Most don't get caught anyway, so a lot of times it's worth the risk to just... not pay the taxes.

None of that bureaucracy should exist for businesses under a certain size/under a certain revenue.


In the US, income from a hobby can just be added to your personal filing [1], and in Belgium, where I live, there is a similar arrangement for "diverse sources of income" [2]. If you do start a business, in the European Union you're exempt from filing VAT if your yearly revenue is below a certain amount [3]. Europe has also been pretty aggressive in getting rid of licensing requirements for various occupations and trades; certainly a photographer wouldn't need a license here.

I think the trouble you faced resulted from being at the edge of these kinds of simple systems that do exist -- big enough to need to set up a business, but small enough that hiring an accountant or spending time to familiarize yourself with the legal requirements was out of proportion to the expected revenue. That's unfortunate, of course, but it doesn't necessarily reflect the amount of red tape that exists in general in a country.

[1] https://www.irs.gov/newsroom/heres-how-to-tell-the-differenc...

[2] https://www.vlaanderen.be/economie-en-ondernemen/een-eigen-z...

[3] https://europa.eu/youreurope/business/taxation/vat/vat-exemp...


Yeah, federally there's no problem here. It's the state I live in that's the problem.

Any revenue over $12,000 and you have to register with the Department of Revenue, get a business license, and start paying business and occupation tax and sales taxes (if applicable). If your business is required to collect sales tax at all, you have to register no matter what your gross revenue is. Unfortunately, the state doesn't have any exemptions for sales tax like the EU does.

For some states in the US it is quite a bit simpler; unfortunately for mine it's not, and it's like they do everything in their power to prevent small businesses.


All business income should be subject to a tax of a tiny share of total revenue, maybe with some portion of it deductible, like input materials, durable equipment purchases, and employee benefits. The first US state to truly grasp and embrace this will get flooded with new businesses, but it will piss off the legal and CPA firms.
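
As a sketch of the arithmetic being proposed (every number here is invented for illustration):

    // Hypothetical revenue-based business tax: a small flat rate on revenue,
    // with a few categories of spending deductible first. All numbers invented.
    const revenue = 500_000;
    const deductible = 120_000; // input materials + equipment + employee benefits
    const rate = 0.02;          // the "tiny share" - 2% here, purely illustrative

    const tax = Math.max(0, revenue - deductible) * rate;
    console.log(tax); // 7600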


"Lack of autonomy is the most draining thing."

This. I recently moved from a developer team to a non-dev team. I've written more lines of code in the past month than I had in the last 6 months on the dev team. No vague requirements to deal with. No picky reviews or politics from other devs or TLs. Freedom to write moderately high coverage, fairly robust test suites (compared to minimal coverage with low value checks).

To be fair, this is greenfield development, so there is a fair amount of easy code and it's not spaghettified yet. However, I am also training non-devs on the team to run and maintain the repo, so there's that.


"The system that we've built is why people aren't enjoying their work. The bigger the organization, the less autonomy an individual has."

I think it depends on your position. I've worked for big companies where my department (and I) had its own autonomy. Sure, you had to answer to the higher executives at some point, but it wasn't that rigid.

Autonomy is usually earned after years of work, through trust. I know lots of people who basically stop working and try to do nothing when they are given full autonomy (many examples of this in the OE subreddit).

I had my own small business for 7 years and am now a consultant. I automatically just get work done without being asked or watched. Most people don't have this mindset and need to be told what to do and monitored.


> In tech, there is no common protocol to speak. You are a surf of the platform you find yourself on.

Can you expand on this statement? It's a bit ambiguous and I'm not sure what you're referring to exactly. Other than that, I agree with your comment.

Even doing independent contracting on the side seems like a minefield w.r.t. quarterly tax payments, estimated tax, etc., in the US at least. I'm sure I could figure it out, but just earning income for some side work seems like a liability and a big headache.


> Even doing independent contracting on the side seems like a minefield w.r.t. quarterly tax payments, estimated tax, etc.. in the US at least. I'm sure I could figure it out but just earning income for some side work seems like a liability and a big headache.

It's really not hard -- it takes maybe 15 minutes per year, and if you own any stocks outside a 401k, you probably already need to make quarterly estimated payments. If it's too much trouble, you can just hire an accountant to take care of it for you. At any rate, it shouldn't be a significant obstacle to contracting.


> it takes maybe 15 minutes per year

It takes at least four times that per quarter, really. And especially if you're small enough to not be able to afford an accountant just yet.


The shortcut I always used was to take my effective tax rate from last year's return (i.e. total tax owed / gross income), multiply my self-employment income for the past quarter by that rate, maybe round it up somewhat, and pay that. It usually worked out to be close enough that I didn't owe any penalty for underpayment of estimated tax and it was much easier than using the IRS suggested calculation.
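
In code form, the shortcut looks something like this (numbers made up; this mirrors the description above, not IRS guidance):

    // Last year's return (made-up numbers).
    const lastYearTotalTax = 18_000;
    const lastYearGrossIncome = 90_000;
    const effectiveRate = lastYearTotalTax / lastYearGrossIncome; // 0.20

    // This quarter's self-employment income.
    const quarterIncome = 12_000;

    // Round up to the next $100 to stay clear of underpayment penalties.
    const estimatedPayment = Math.ceil((quarterIncome * effectiveRate) / 100) * 100;
    console.log(estimatedPayment); // 2400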


> if you own any stocks outside a 401k, you probably already need to make quarterly estimated payments.

That's true, and I do. I guess I'm referring more to the accounting side of it. My brokerage reports/tracks all that for me and hands me a form later. With contracting work, it's all up to you to track and there might be better ways than a text file with some dates and amounts but maybe it really is that simple. I'm not sure.


The word they meant is serf: a low class in feudalism. Like a slave but with subtle differences that don't really matter. Does everything their master says, or is killed. More importantly, avoids all the emotional outbursts one would trigger by using the word "slave".


> Like a slave but with subtle differences that don't really matter.

Anyone using slavery as a metaphor for participating in tech platforms is out of touch with what slavery really is. Nobody is owning you as a person and forcing you to work for them without any freedoms because they opened an app store or let you make a Facebook page.

Minimizing the brutality of actual slavery by drawing comparisons to putting apps in app stores (or other tech platform things) is really distasteful.


> More importantly, avoids all the emotional outbursts one would trigger by using the word "slave".

Thanks for providing an example.

A serf also did not feel like a slave, being forced to do things. A serf felt his lord was being generous by letting him use the lord's land, for the low low cost of whatever percentage of his crops, plus being required to go and fight in battle and sometimes die, when there was a battle. Which is oddly analogous to how we feel about tech companies. We don't feel like we're being forced to use e.g. YouTube; we feel Google is being generous by letting us use it, for the low low price of endless advertisements, having our minds altered by propaganda, and randomly getting banned by AI. Although obviously not as much actual death is involved, but then again, Google can unperson you.


Yes. Thanks. Serf. Share-cropping on the lord's land.

The lord gives you exactly as much as you need to live but can extract the rest.

This isn't good for anyone as, history tells us, people are less motivated that way.


Monopolies become entrenched because of network effects. Phone companies had the same problem until the government stepped in.

Web standards are great. And the web is really something special. No one owns the web. I can publish my own website with a ton of different vendors offering hosting. There's a power law distribution of provider success but it's still a fair game.

App publishing is owned by two companies. Messaging is siloed. Discoverability is pay to play. There is no townsquare not owned.

The way to solve these problems is forcing platforms over a certain size to open their protocols. So we can make our own messaging client that communicates with the gatekeepers. We can offer our software to their users without the app stores, in the same way the gatekeepers themselves can.

Monopolies kill markets. Allowing a fair market on network effect platforms would make the categories competitive. Smaller businesses could compete. The margins would lower for the gatekeepers. The consumers would benefit from lower prices and more innovation.

Most of all, we'd finally be able to reverse the trend of cynical software - "enshittification" - that comes when a monopoly understands that it no longer needs to compete.


> Discoverability is pay to play. There is no townsquare not owned.

And yet anyone can publish a website, like you said. Anyone can create an Instagram or TikTok profile and start selling things. Millions of small businesses do this all the time and grow on these platforms without "pay to play".

> So we can make our own messaging client that communicates with the gatekeepers

What does this have to do with letting people create businesses? Who's going to pay you (one way or another) to use your messaging client to connect to these open services when they can use the official client or anyone else's client?

I think idealists imagine a world where tech giants are forced to open their platforms for free and they can use their own apps to connect to Reddit and YouTube and other platforms while avoiding the ads. It's a pipe dream to imagine that regulators will come in and force businesses to be unprofitable for the good of the public. Platforms need to make money to exist. They can either charge admission or show ads.

There isn't a third option where they're forced to become a public good so that other businesses can freely benefit from their infrastructure.


I see what you're getting at now, thanks. I left a comment on a thread about iOS recently[0] where Apple is only making things open in jurisdictions where they're forced to, which was kinda infuriating/nonsensical to me. Governments have obviously caught onto this, but Apple is still trying to squeeze out as much rent as possible while they still can.

0: https://news.ycombinator.com/item?id=45822302


> Lack of autonomy is the most draining thing. ...

> Markets are beautiful things when they work. They allow individuals to offer their services to the world in exactly the way they find best. Which feels good. And is great for a positive sum society.

Modern markets (capitalism) are the direct cause of the problem you describe. And if you think “big organizations” are the problem? The modern market system always tends towards larger corporations, since they outcompete the small businesses.

> To help people find meaning in their work, we must first force open protocols, interoperability, and regulations that are gradual by business size.

If this is the egalitarian ideal of everyone being closer to a small business owner, instead of many smaller businesses with most people still being employees, well, that ideal is over a century out of date. It’s dead-end idealism.


Markets tend toward monopoly like democracies tend towards dictatorship.

In both cases, we need checks and balances to ensure that does not happen.

When it does, that group tends to be weaker for it in the long run so evolution takes its course.

Changing a failing system is very hard. I agree. Maybe it's not possible. But we need to have some vision of a positive future for the negative to not win by default. Cynicism is self-fulfilling.


> Markets tend toward monopoly like democracies tend towards dictatorship.

Democracies tend towards dictatorship? Oh right, most nominal democracies are liberal democracies. Makes sense now.

> Changing a failing system is very hard. I agree. Maybe it's not possible. But we need to have some vision of a positive future for the negative to not win by default. Cynicism is self-fulfilling.

What you need is a better system. Not hopes and prayers.


> Democracies tend towards dictatorship?

Strictly speaking, no. A dictatorship takes power away from the people. A democracy sees the people give the power away.

The people don't have to give power away in a democracy, technically, but in practice they always do. Democracy is hard work and people have bigger fish to fry, like keeping food on their table and a roof over their head. For most people it is most sensible to allow a "leader" to take power away from them.

So casually speaking, there isn't a difference. In both cases power isn't in the hands of the people. The reason for why the power isn't held differ, but that is understood to be immaterial with respect the context of discussion.


Remember a basic premise: in an actually existing democracy, the people have power. Yet the people have urgent “fish to fry” like getting food and keeping their homes? Why are they so apparently downtrodden when they have power?

And the leaders? They do not have these bigger fish to fry?

Is the real reason here that there are regular people and then there are elites? And the difference between these two groups’ capacity to act (power) is so different that the first just has to stay busy staying alive (or rather: deal with all their everyday chores, doesn’t have to be bare survival at all) while the other group has the capacity to wield and exercise power? Then the premise is false: you are not talking about a democracy to begin with.

What didn’t exist cannot be given away.


> The rules are too complex to start a business.

I really don't think this is the reason. It's easier than ever to start a business now that there are so many free, accessible resources to help you through the process.

There are even services like Stripe Atlas that will walk you through incorporating, getting an EIN, and setting up your billing: https://stripe.com/atlas

The reason few people start businesses is that it's really hard to make a good living. Running your own business is highly risky and takes a lot of work. For every success story there are countless failure stories that people don't speak of.

Being an employee, especially in tech, can be an extraordinarily good deal. It can be frustrating like any job, but predictable pay that comes in well above median compensation for work that poses very little risk to yourself is a luxury. A lot of us have gone back and forth between self-employed and being an employee, and you don't really appreciate how nice it is to get a regular old job doing a narrow piece of work until you've dealt with the stresses of running your own business from top to bottom.

> we must first force open protocols, interoperability, and regulations that are gradual by business size.

What does this even mean? Very few business types would even benefit from "forcing" platforms to have open protocols. If regulators came in and forced Facebook or Instagram or even Hacker News to open their protocol and allow other businesses to interoperate (whatever that means) what makes you think individual small businesses would thrive, as opposed to other large companies coming in to take advantage of those forced-open platforms?


It's one key factor. Whenever I'm forced to follow something unfit for the goal or for my brain, I crash.


yeah autonomy and agency are the biggest work perks


Does Microsoft even have an OpenAI stake? Their original, more public deal was revenue sharing up until they reached 100x $1 billion. That doesn't sound like a stake.

They also had tech sharing valid until the OpenAI board declares 'AGI'.

That seems like a really bad deal. And that was probably at the time when Microsoft had the most leverage to make a deal. It would make sense for their subsequent deals to be worse.


They have a very creative investment structure, but they do own a stake in OpenAI. It just sounds more like a commercial agreement the way it was laid out, which primarily has to do with OpenAI's organizational structure.


If the deal is still based around capped revenue, I wouldn't call it an ownership stake.

As far as I know, they have not disclosed it. If you have more information about something concrete regarding ownership, I'd love to hear it. Maybe I haven't understood everything that was stated.

In the end, whatever Microsoft has is probably less valuable than the sizable ownership chunk most people seem to assume.

I imagine a lot of people are investing in Microsoft as a proxy to OpenAI. Those people are set to be disappointed.


OpenAI and MS have been negotiating about turning that into a straight percentage of equity.


Useful.

I wonder what an average accountant would score.

I know LLMs have helped me identify many mistakes accountants have made on my behalf. Some mistakes that could have cost me a lot of money if not caught.


Given that they're restricting to very simple situations I'd expect accountants to score 100%.


The EU's bank count per capita is tiny compared to the US's. Their offers are never competitive if you compare them to US banks (e.g. interest rates, apps that actually work, customer service, etc.). They lack competition due to over-regulation, which, if you understand the history of corruption within banking in Europe, should not imply good regulation.

Regulating big tech is good. Kill gatekeeping platforms and engagement-driven newsfeeds that are tearing us apart. I wish they could do that. Big tech competition with banking, on the other hand, would be welcome.

It's too bad, too, because overall the EU in most places has a history of better representing their citizens. I wish that mechanism was more functional.

My experience is living in 3 EU countries as an American - the banks are similarly terrible and entrenched in each.


Which EU countries have those been?

The EU has recently reduced fees for one of the biggest instant payment systems in the world (SCT Inst reaches the Eurozone's 350M residents). Compare the quality of that to a wire or an ACH transfer.

The EU is also ahead on security. PSD2's requirements go further than US requirements, and the EU is also ahead in the magnetic swipe card phaseout.

Wise and Revolut, two companies which brought a lot of innovation to international money transfers, were founded in the EU as well (since 2020 no longer EU companies).

Of course, all of this doesn't mean that the average EU bank doesn't suck. But I heard worse of the US.


>magnetic swipe card phaseout.

Swipe? I don't recall a time when I needed to swipe in the US in the last few years. Pretty much tap, tap, tap, tap. Actually, you cannot swipe a card in the US that has a chip, and probably 99% of cards have chips.

>Compare the quality of that to a wire or an ACH transfer.

Zelle? Just a QR code or a phone number? And it's free?

>Wise and Revolut

No clue. What's so special that I don't have with Chase?

>The EU is also ahead on security

Um, isn't that useless, as more scams are via social engineering?

>But I heard worse of the US.

I heard the same about the EU - actually MUCH worse :)


> Swipe? I don't recall a time when I needed to swipe in US in the last few years.

I do, earlier this year visiting the USA. The readers on pumps at two different gas stations.

But the EU started phasing out reading magnetic strips twenty years ago, well before the USA had even started issuing EMV chip cards.

> Zelle?

Zelle is only for person-to-person transfers; Europe has had good person-to-business, business-to-person, and business-to-business transfers for decades.

> ...

The point wasn't that the USA didn't have these things, but that Europe had them earlier (sometimes much earlier), so the banking system led to this innovation.


Well, you found one, and I can tell you about a time in the EU in the last 5 years when a place took a physical imprint of my card! Didn't even know that card imprinters still exist.

Zelle is not just person-to-person; it's just a transfer. You can pay businesses, pay people, and even transfer to yourself. Zero fees.

Europe is a big continent, and I can easily find a place that is way more backwards ;)

Also, 2 of the letters in EMV stand for American companies :).


Monzo was also founded in the EU - in the UK, specifically, when it was still in the EU.

> The EU has recently reduced fees for one of the biggest instant payment systems in the world (SCT Inst reaches the Eurozone's 350M residents).

But that was done by regulation, wasn't it? Would have been nicer to see that come as a result of competition.

> Of course, all of this doesn't mean that the average EU bank doesn't suck. But I heard worse of the US.

I don't know about the average. But I can tell you that quality varies a lot. I was generally OK with German banks (having grown up there), but UK banks before Monzo (and Revolut, Wise etc) used to be the scum of the earth. Just like their supermarkets used to feel openly hostile to me as a customer before Aldi and Lidl showed up and shook up the market.

Yes, Tesco and friends regularly got told off by the regulator before that, but nothing changed until competition forced their hand and gave customers something they preferred.


> UK banks [...] used to be the scum of the earth

They still had chip&pin before US banks, and dropped unsafe cheques before US banks.

The US banking system, afaik, did one thing better: credit cards. But since the '00s, European ones have been just as good and often better.


> They still had chip&pin before US banks, and dropped unsafe cheques before US banks.

Oh, I never banked in the US, so I can't comment on them from a consumer point of view.

> The US banking system, afaik, did one thing better: credit cards. But since the '00s, European ones have been just as good and often better.

I used credit cards perhaps a handful of times in my life. It's almost exclusively been debit cards for me.


> But that was done by regulation, wasn't it? Would have been nicer to see that come as a result of competition.

Bill Gurley has been crying for years now about how US banks have been blocking/not participating in the equivalent service in the US (FedNow), for good business reasons of their own.

A well-functioning market does need regulation. Not everything can be magically fixed by "competition".

https://www.linkedin.com/posts/kivatinos_bill-gurley-on-paym...


> A well-functioning market does need regulation. Not everything can be magically fixed by "competition".

Ideally, you can set up your regulations so that competition has more bite.

Much of the time, you can remove special-purpose regulations for a specific sector and get by with just the generics: enforcing contracts, punishing fraud, etc.


Haha, historically Americans over-regulated their banking system, and got rewarded with frequent banking crises in return. (And America was the only major economy with that problem.)

E.g., until a few decades ago many American states banned banks from having more than one branch.

See also the big struggles Walmart had in trying to become a bank; and conversely, see how US banks are (or at least were) banned from serving their customers coffee.


It is definitely past tense. Donut-and-coffee Sundays at a local bank are a thing.


The bank I use runs all of their physical branches out of coffee shops.


Yeah, seriously doubting you really lived in the EU instead of just shitposting as another American patriot who has to declare that everything is better in the US. Because I'm Dutch, and banks here are fine. I never even have to think about my bank because it just works.

Also, none of the insane fees for going into the red, or having to pay to get my own money, and all that fun stuff that American banks seem to love.


Or maybe their bank just doesn’t suck. My American bank doesn’t do any of the stuff you listed and hasn’t for probably 15 years.


What problems did you have with EU banks?

The only problem I have with mine is that it doesn't connect with open banking APIs, but I just default to Revolut for that.

It could be that they treated you poorly because you are from the US, but I doubt you'd get treated like that in 3 different countries.


I'm surprised by the controversy around my comment. Maybe I should have been clearer.

The banking /system/ in Europe is superior. It's crazy that the US isn't part of the IBAN system. ACHs suck, etc.
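An aside on why IBAN is nice, beyond mere coverage: the format is self-validating. ISO 13616 builds a mod-97 check into the first four characters, so a mistyped account number is rejected before any money moves - something an ACH routing/account pair doesn't give you. A minimal sketch of that check in Python (the function name is my own, not any standard API):

    def iban_is_valid(iban: str) -> bool:
        # Mod-97 check from ISO 13616: move the first four characters
        # (country code + check digits) to the end, map letters A-Z to
        # the numbers 10-35, and test that the integer mod 97 equals 1.
        s = iban.replace(" ", "").upper()
        if len(s) < 5 or not s.isalnum():
            return False
        rearranged = s[4:] + s[:4]
        # int(c, 36) maps '0'-'9' to 0-9 and 'A'-'Z' to 10-35
        digits = "".join(str(int(c, 36)) for c in rearranged)
        return int(digits) % 97 == 1

    # A widely published example IBAN (German format):
    print(iban_is_valid("DE89 3704 0044 0532 0130 00"))  # True
    print(iban_is_valid("DE88 3704 0044 0532 0130 00"))  # False: bad check digits

This catches any single-character typo and most transpositions; full validation would also check per-country length and format, which the sketch skips.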

Revolut and Transferwise help a lot with country-level bank shortcomings. But they're not really the same as those banks. And they're exactly two mega companies. In Portugal there is MBWay, and in Denmark there was Visa Electron (might have changed). These payment systems are used everywhere here due to their low fees, to the exclusion of other payment systems. I appreciate the low fees for the vendor, but it means that Revolut and Transferwise are not an everyday banking solution here.

When I lived in Denmark, I could not get a Visa Electron card despite being a resident, because I didn't have credit history in the EU. This made me use cash for everything, as local businesses did not take normal bank cards / credit cards.

In Portugal, banks charge you to keep your money with them without offering interest. As an American, managing interest payments from a non-domestic bank is a tax nightmare, so I wouldn't want it anyway. But it's notable that they don't have to compete here at all. It was the same, as I remember, in the other EU countries I lived in. In contrast, US banks can be found offering competitive interest rates, though brokerages are still a better option.

My point was not the kind of America-versus-EU nonsense that this seems to have attracted. It's that banks have an outsized influence on politics in the EU.

If you look at the distribution of market cap of industries in the EU versus the US, you'll note that the financial industry in Europe is much larger as a percentage. Step back and think about what that represents.

Financials are 14% of the US public-market economy vs 25% in the EU:

https://www.msci.com/documents/10199/255599/msci-usa-imi-net...

https://www.msci.com/documents/10199/b32acc80-b116-454b-a9c4...

A financial industry being the proportionally largest sector is not a good thing. It's a rent being extracted from the productive economy.

There has also been a huge decline in bank count in Europe as they consolidate: https://data.ecb.europa.eu/data/datasets/CBD2/CBD2.Q.B0._Z.1...

A decline in business counts is not good, and it's not good that the same trend is happening across other business types in the US, either. Western governments now favor large businesses that hold political capital, at the expense of smaller ones. This isn't good no matter the industry.

We should all want our businesses and governments to improve. To do that, we must first deeply understand the problems.


I recently went down a rabbit hole on how researchers make mice depressed so they can test antidepressants on them. The short answer is they disrupt the mouse's environment in ways that are unpredictable and uncontrollable. It's a standard protocol (chronic unpredictable mild stress). We know that this causes depression.

The culture of products not under the control of the customer does the same thing. A culture that sees this as normal is a depressed culture.

To test whether the mice are depressed, researchers give them something rewarding that requires a little effort to get (e.g., sugary water vs. plain water).

The depressed mice give up. They are apathetic.

I imagine the mice believe that there is no way to change things. That might be true for the mice but it's not true for us.


I imagine the mice would be far more depressed if they had to get trackpad drivers to work and give a good response on Linux.


Seems to be an insurmountable task for anyone apart from engineers at Apple, let alone mice.


I think they'd find that to be an engaging task, and they'd get a great deal of satisfaction once they had sorted it out themselves.

If the mice had to deal with an intermittently disappearing cursor and erratic hover behaviour on a 2-year-old M2 Mac with the latest version of Sequoia... that would probably elicit a very different response.

Are mice known to spill blood?


There are laptops that work with Linux and ones that don't. Nobody is forcing you to use it on a device that isn't supported.


This is that depression the comment author is talking about. Apple fans will make it a point to lash out at Linux for not being a trillion-dollar-company-supported product. Only depressed people lash out at the parts of the world where communities are trying their best.


macOS support for anything that isn't a new Apple product is appalling.


Yeah. I had a laptop that is only 4 years old causing issues; I took it to an Apple Store and they couldn't give a toss.

Everything was pushing me in the direction of buying a new laptop (with a small discount relative to the new price) and transferring everything across.


Agreed. If I weren't a computer nerd I probably wouldn't feel this way, but on Linux I feel more empowered. Even if there are more things to tweak/fix (which is not necessarily true these days), there IS probably a way to do it.

On macOS, I more often have to give up and live with the annoyances.

Hardware is the big exception. None of my PCs have had nearly as good build quality or battery life (on Linux, at least) as a MacBook. Maybe I should try a Framework.


> If I weren't a computer nerd I probably wouldn't feel this way, but on Linux I feel more empowered

There are also more footguns and rabbit holes. Overall, I am about as happy with Linux as with macOS (I use both daily), but I would not say that one is really more empowering than the other.

I like tinkering with KDE, but it's full of inconsistencies and instability in a way that even the worst Finder I've used was not (e.g. the whole desktop freezing when adding a widget to the desktop on a brand new install). Never mind the Russian roulette that is updating Nvidia's drivers.

On the other hand, on macOS it's easier to get to the things that are actually productive.


> The short answer is they disrupt the mouse's environment in ways that are unpredictable and uncontrollable.

For example: will my wifi work today? Will my laptop still have any battery when I open it? Is today the day I surprise-boot to a TTY and have to figure out what changed before I can start working?

I'll stick with the year-to-year unpredictability of Apple over the day-to-day unpredictability of Linux.


Linux doesn't magically update itself. If it works today, it's going to work tomorrow unless you break it.


I dunno about that. Bleeding-edge distros like Arch are infamous for breaking in random ways for those who update without paying attention, and even distros considered more stable, like Fedora and Ubuntu, can break drivers or random smaller things from time to time. Definitely a YMMV sort of thing.


What are we going to do, then? Stop updating the OS and accumulate security issues? Stop doing anything that might possibly touch an obscure config file somewhere in the bowels of the OS? It's just unrealistic. "Do not update" cannot be a solution; it's worse than the problem it is supposed to solve.


I'm responding to someone pretending their system randomly breaks in the morning as if by magic. This simply doesn't happen, period.

A Linux system stays the same unless you change it.

I also personally disagree that updates break systems, by the way. I have used Arch for more than a decade and have yet to experience one of these alleged frequent disturbances.


You said "magic" twice while attacking me, but I said it zero times. It's not even a "good" bad reading of what I did say.

What I'm talking about is changes because of updates, yes. Auto-updates, or ones I did, or an update to specific software that caused a library update that broke something else. All of that counts as "me changing it," sure. Like I said, I guess I need a system a little less prone to breaking because of my actions. I'm a programmer, not a Linux admin.


Not sure what point you're making; are you saying to never update your system?

Not updating your system is not a magic solution either. I ran Linux Mint for 9 months, and twice during that time I ended up in a bizarre situation:

1. the menu bar, or whatever it's called (taskbar/dock equivalent), had disappeared on boot, and I spent about 2 hours trying to get it back

2. the system simply wouldn’t boot into Cinnamon anymore; I ended up reinstalling

Bought an MBP, and while it has some annoying quirks, I don't have any crazy ruined-my-day issues anymore.


By the same token, macOS isn't going to change in unpredictable ways if you never update it either.


Alas, the cross of a dumbass programmer is a heavy one. I need an OS that is harder to break, I reckon.


I too am especially prone to breaking my OS, as well as the software that runs on it. You aren't alone lol.


Use an immutable distro like Bluefin or Aurora and you can just boot to yesterday's version.

(macOS is my daily driver too, but I wouldn't mind having that feature)


Has Apple never done an update that breaks something? This seems worse than anything I have heard of with Linux: https://www.tomsguide.com/phones/iphones/ios-18-4-1-update-i...

In practice, even with a rolling-release distro I have not had things break on an update in a very long time (not at all on my current install, which is two years old), and with stable distros it's literally been 20 years since something did not boot.

Any OS seems to have some bugs on updates.

I have heard battery life is better, so I'm not arguing about that, but it's rare that I don't wake my laptop for more than a day or two, so it's not a problem I experience either.

