anyonecancode's comments

> There's this notion of software maintenance - that software which serves a purpose must be perennially updated and changed - which is a huge, rancid fallacy. If the software tool performs the task it's designed to perform, and the user gets utility out of it, it doesn't matter if the software is a decade old and hasn't been updated.

If what you are saying is that _maintenance_ is not the same as feature updates and changes, then I agree. If you are literally saying that you think software, once released, doesn't ever need any further changes for maintenance rather than feature reasons, I disagree.

For instance, you mention "security implications," but as a "might," not a "will." I think this vastly underestimates the security issues inherent in software. I'd go so far as to say that all software has two categories of security issues -- those that are known today, and those that will be uncovered in the future.

Then there's the issue of the runtime environment changing. If it's web-based, changing browser capabilities, for instance. Or APIs it called changing or breaking. Etc.

Software may not be physical, but it's subject to entropy as much as roads, rails, and other goods and infrastructure out in the non-digital world.


Some software does - what I take issue with is the notion that all software must be continuously updated, regardless. There are a whole lot of chunks of code that never get touched. There are apps and daemons and widgets that do simple things well, and going back to poke at them over and over for no better reason than "they need updates" is garbage.

There's the whole testing paradigm issue, driven by enshittification, incentivizing surveillance in the guise of telemetry and numbing people to the casual intrusion on their privacy. Then there are the midwit UX and UI "engineers" who constantly adjust and tweak and move shit around in pursuit of arbitrary metrics, inflicting A/B testing for no better reason than to make a number go up on a spreadsheet, be it engagement, or number of clicks, or time spent on page, or whatever. Or my absolute favorite: "but the users are too dumb to do things correctly, so we will infantilize them by default and assume they're far too incompetent and lack the agency to know what they want."

Continuous development isn't necessary for everything. I use an app daily that was written over 10 years ago - it does a mathematical calculation and displays the result. It doesn't have any networking, no fancy UI, everything is sleek and minimal and inline, and there are no dependencies that open up a potential vulnerability. This app, by nearly every measure by which modern software gets assessed, is built entirely the wrong way: no automatic update mechanism, no links back to a website, no issue-reporting menu items, no feature changelog. And yet it's one of the absolute best applications I use, and to change it would be a travesty.

Maybe you could convince me that some software needs to be built in the way modern apps are foisted off on us, but when you dig down to the reasons justifying these things, there are far better, more responsible, user respecting ways to do things. Artificial Incompetence is a walled garden dark pattern.

It's shocking how much development happens simply so that developers and their management can justify continued employment, as opposed to anything any user has ever actually wanted. The wasteful, meaningless flood of CI slop, the updates for the sake of updates, the updates because they need control, or subscriptions, or some other way of squeezing every last possible drop of profit out of our pockets, regardless of any actual value for the user - that stuff bugs the crap out of me.


These posts are in a thread about someone pumping out a large amount of software in a short amount of time using AI. I'm guessing that you and I would agree that programs flung out of an AI shotgun are highly unlikely to be the kind of software that will work well and satisfy users with no changes over 10 years.

Well, there's the variation I heard recently:

There are only two hard problems in computer science: we only have one joke, and it's not very funny.


> In order to sell anything, people need to know about it. Google and Meta provide a way to make this possible. If they didn't exist, you wouldn't somehow have a more affordable way to get people to know about your product. However frustrating the current situation is, it is still more accessible than needing access to the airwaves or print media to try to sell anything new.

The places people can find out about your product are controlled by a very small number of companies. And those companies not only own those spaces, they also own the means of advertising on those spaces. So if you have a product you want to advertise, you're not paying to distribute your message broadly to consumers, you're paying a toll to a gatekeeper that stands between you and your potential customers.


But that's not really true. You're not paying, you're bidding. You are competing against thousands of other advertisers for eyeballs. If you are the only advertiser targeting a group of people, you will spend almost nothing to advertise. If you are targeting a group that everyone targets (e.g., rich people in their 30s), you will pay through the nose.

Facebook, Google etc. are the most “fair” forms of advertising. We can dislike advertising, their influence, product etc. but when you compare them to almost every other type of advertising, they’re the best for advertisers.

The reason they generate so much revenue is that they are so accessible and so easy to account for. The reason LTV (lifetime value) and CAC (customer acquisition cost) are so widely understood by businesses today is because of what Google, Facebook, etc. offer.
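
To make the bidding mechanics concrete, here's a minimal sketch of a sealed-bid second-price auction -- the classic mechanism these platforms popularized (some, like Google's display exchange, have since moved to first-price). The bidders, amounts, and floor price are invented for illustration:

    # Minimal sealed-bid second-price auction, in Python. Real platforms
    # layer quality scores, pacing, and budgets on top of this; all names
    # and numbers here are made up.

    def second_price_auction(bids, floor=0.01):
        """Winner pays the runner-up's bid (or the floor), not their own."""
        if not bids:
            return None, 0.0
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        runner_up = ranked[1][1] if len(ranked) > 1 else floor
        return winner, max(runner_up, floor)

    # Crowded segment ("rich people in their 30s"): rivals set the price.
    print(second_price_auction({"a": 4.00, "b": 3.50, "c": 1.20}))  # ('a', 3.5)

    # Uncontested segment: the lone bidder pays only the floor, however
    # high they bid -- the "you will spend almost nothing" case.
    print(second_price_auction({"a": 4.00}))  # ('a', 0.01)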


No financial market would be able to run the way Google and Facebook run their ad markets. They are the supplier, the exchange, and the broker all at the same time. This is not a competitive market. It's a captured one where the supplier effectively gets to set their price, and the exchange and the broker incentivize and advise you to trade at that price.

Google has famously and repeatedly rigged this bidding system in anti-competitive ways and has had to pay billions in fines because of it (which I am sure were less than the amount it profited).

Who are these mythical people who can pay $500/month to park below 60th Street but will be bankrupted by the congestion toll?


$1.50 toll on rideshare drivers/users and/or people getting dropped off at work.


> $1.50 toll on rideshare drivers/users and/or people getting dropped off at work

Anyone Ubering to and from work is not among New York's poor.


> I still remember in the early 2000s Barnes and Noble would still have massive shelf space devoted to every technical topic you could imagine.

B&N and Borders are how I learned to code. Directionless after college, I thought, hey, why not learn how to make websites? And I'd spend a lot of time after work reading books at these stores (and yes, buying too).


Over my career, I've been in a big company twice. This article definitely tracks with my experience. At one company, I think management actively didn't care, and in fact my direct manager was pretty hostile to any attempts at improving our code base as it meant disruption to what was, for him, a stable little niche he had set up.

At the second, it wasn't hostility but more indifference -- yes, in theory they'd like higher-quality code, but none of the systems to make this possible were set up. My team was all brand new to the company, except for two folks who'd been at the company for several years but in a completely different domain, with a manager from yet another domain. The "relative beginner" aspect he calls out was in full effect.


The way you're defining "eventually consistent" seems to imply it means "the current state of the system is eventually consistent," which is not what I think that means. Rather, it means "for any given previous state of the system, the current state will eventually reflect that."

"Eventually consistent," as I understand it, always implies a lag, whereas the way you're using it seems to imply that at some point there is no lag.


I was not trying to "define" eventually consistent, but to point out that people typically use the term quite loosely, for example when referring to the state of a system-of-systems made up of multiple microservices, or to event sourcing.

Those are never guaranteed to be in a consistent state in the sense of the C in ACID, which means it becomes the responsibility of the systems that use the data to handle the consistency. I see this often ignored, causing user interfaces to be flaky.
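
To illustrate the lag concretely, here's a toy model (hypothetical names, exaggerated replication delay): a primary store with an asynchronously updated replica, where reading your own write immediately after writing returns stale data -- exactly the kind of thing that makes UIs flaky when it's ignored:

    # Toy eventually consistent store, in Python: writes land on a primary
    # and reach the read replica only after an (artificial) delay.
    import threading
    import time

    class EventuallyConsistentStore:
        def __init__(self, replication_delay=0.1):
            self.primary = {}   # authoritative copy, takes all writes
            self.replica = {}   # read path, updated asynchronously
            self.delay = replication_delay

        def write(self, key, value):
            self.primary[key] = value
            # Replication happens later -- the "eventually" part.
            threading.Timer(self.delay, self.replica.__setitem__,
                            (key, value)).start()

        def read(self, key):
            # Clients read the replica, as read-scaled systems often do.
            return self.replica.get(key)

    store = EventuallyConsistentStore()
    store.write("balance", 100)
    print(store.read("balance"))  # None -- stale read inside the lag window
    time.sleep(0.2)
    print(store.read("balance"))  # 100 -- the replica has converged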


Inconsistent sounds so bad :)


I know. That is why it is a useful way to think about it: it is both true and makes you think.


I think you have to assume that faster-than-light travel is both possible and economical. At that point, far-flung supply chains across the galaxy really aren't any more surprising than the far-flung supply chains across the globe of our current reality. When distance becomes less economically relevant, other factors (like labor availability and costs, regulations, ease of access, security, etc) become more important.


FTL isn't even necessary. Consider that the majority of tanker ships travel at bicycle speeds[1]. If you're transporting sufficiently profitable nonperishable goods in extremely high quantities, and have enough automated ships, you could have a functional interstellar supply line at a fraction of light speed (rough numbers sketched below).

Of course, this isn't how it's usually presented in science fiction, but that's because a sci-fi story about a non-sentient fully automated mining machine wouldn't be very interesting. Gotta get humans out there.

1. https://en.wikipedia.org/wiki/Slow_steaming
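
Rough numbers, with assumed speed and launch cadence (only the Alpha Centauri distance is real):

    # Back-of-the-envelope for a sub-light supply line, in Python.
    DISTANCE_LY = 4.37        # Alpha Centauri, in light-years
    SPEED_C = 0.1             # cruise speed as a fraction of c (assumed)
    LAUNCHES_PER_YEAR = 1.0   # automated ships departing per year (assumed)

    transit_years = DISTANCE_LY / SPEED_C
    ships_in_flight = transit_years * LAUNCHES_PER_YEAR

    print(f"One-way transit: {transit_years:.0f} years")              # ~44
    print(f"Ships in flight at steady state: {ships_in_flight:.0f}")  # ~44

    # After the first ~44-year wait, cargo arrives at the launch cadence:
    # throughput is set by how often you launch, not by transit time.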


And they said five-year plans struggled with predicting demand ;)

I'd rather go with "for any delta in mining convenience between solar systems, there exists a level of FTL magic where shipping would become economically feasible"

Perhaps space slow steaming might be an option if your goal was to make a Dyson sphere exist before the star inside burns out?


Hogwash! We can do it much faster than that. With a machine in Alpha Centauri capable of flinging rocks full of rare earth metals back towards the solar system at 1/10th the speed of light, we could be up and running in <150 years.

This feels about as realistic as most of the spacetech proposals I hear.


I'd imagine insurance premiums would get quite ugly with all those 0.1c rocks passing by.


Not my problem, I'll be long dead.


I strongly suspect the answer is yes -- or more broadly, what makes us conscious. And yes, this implies consciousness is something all life has, to some degree.

I'm not going to pretend to have a good definition of what "consciousness" is, but directionally, I think having goals -- no, that's too weak -- having _desires_, is an important part of it. And I'm not sure it's possible to have desires if one cannot die.

Something like an LLM can't actually die. Shut down all the machines its code runs on, then turn them back on, and it's in the same state it was in before. So the "hardware" isn't where an LLM lives. Is it the code itself? Copy it to another set of machines and it's the same program. Code plus data? Maybe we run into storage issues, but in theory it's the same thing -- transfer the code and data someplace else and it's the same program. You can't actually "kill" a computer program. So there's no inherent "mortality" from which any kind of "desire" could emerge.


> The internet was born out of the need for Distributed networks during the cold war - to reduce central points of failure - a hedging mechanism if you will.

I don't think the idea was that in the event of catastrophe, up to and including nuclear attack, the system would continue working normally, but that it would keep working in some form. And the internet -- as a system -- certainly kept working during this AWS outage. In a degraded state, yes, but it was working, and it recovered.

I'm more concerned with the way the early public internet promised a different kind of decentralization -- of economics, power, and ideas -- and how _that_ has become heavily centralized. In which case, AWS, and Amazon, indeed do make a good example. The internet, as a system, is certainly working today, but arguably in a degraded state.


Preventing a catastrophe was ARPA's mitigation strategy. The point is where it's heading, not where it is. It's not about AWS per se, or any one company; it's the way it is consolidating. AWS came about by accident, cleverly utilizing spare server capacity from amazon.com.

In its conception, the internet (not the www) was not envisaged as an economic medium - its success was a lovely side-effect.

