
Curious what folks think the outlook for this technology is. Put differently: will there be any significant shift towards centralized compute in the next decade?

Currently, a large institution/corporation has to manage thousands of individual machines. If a physical component fails, a technician must travel to the machine and give the user a temporary replacement. In a centralized compute environment, by contrast, they could just allocate a new machine and work entirely out of a data center.

And what about software updates and upgrades?

In the centralized model, both software updates and hardware upgrades can be managed more easily. Sure, we have good software tools to update all networked devices, but when those fail, the admin still sometimes needs physical access.

One market I see this potentially taking off in is academia and hospitals. (Though I’m biased because I’m employed by a Medical School)

Much of the record keeping is already done on centralized infrastructure. Liberal use of Active Directory and low-powered clients is the norm.

And particularly for research, there's the added benefit of being able to allocate more resources without any physical action. Say I'm trying to run a script to fold proteins on my lab workstation. Usually, I'd be limited to the hardware on hand, but in the centralized model I could request or allocate a more powerful machine. Sure, the current solution is to spin up your own VM and move your program, and academic institutions often have their own on-prem compute for this purpose. However, both still require technical ability on the part of the user.
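For a concrete sense of what "allocating a more powerful machine" looks like today, here's a minimal sketch assuming the on-prem cluster runs Slurm; the script name, resource numbers, and wrapper function are all hypothetical:

    # submit_fold.py -- illustrative only; assumes a Slurm-managed cluster
    # and a hypothetical batch script named fold_proteins.sh
    import subprocess

    def submit_job(script="fold_proteins.sh", cpus=32, mem="64G", walltime="24:00:00"):
        """Ask the scheduler for a bigger machine instead of running locally."""
        cmd = [
            "sbatch",
            f"--cpus-per-task={cpus}",
            f"--mem={mem}",
            f"--time={walltime}",
            script,
        ]
        # sbatch prints something like "Submitted batch job 12345"
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(submit_job())

Even this "simple" path assumes the user knows what a batch scheduler is, which is exactly the kind of technical barrier a thin-client model would need to hide.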

How close are we really to the model of giving all users a thin client (think Chromebook) and centralizing the real compute? What challenges and disadvantages am I missing?



As someone in the Apple ecosystem, the first thing I think when I see a service like this is that it would let me keep doing the kind of computing I want to be doing (e.g., software development and using the big creative apps[0]), while still using devices that Apple is invested in improving (i.e., iOS devices, because they're not investing in macOS, or at least not the parts of macOS that support the kind of software I want to run).

Therefore, unless Apple changes course, if I want to stay in their ecosystem (which is debatable), the only way I'd be able to do that is to start using a service like this.

The way I see it there are three options:

1. Apple changes course and starts supporting powerful software again.

2. All powerful software becomes web-based, à la Figma.

3. Start using services like this.

The status quo cannot continue indefinitely; history has shown that when a popular product stagnates, an external player eventually figures out how to capitalize on it and takes over from the incumbents (e.g., the iPhone vs. flip phones, Firefox vs. IE, Sketch vs. Photoshop).

[0]: https://blog.robenkleene.com/2019/08/07/apples-app-stores-ha...


Even if there's "centralized" computing, everyone still needs a device to access that central environment (the Chromebook in your example), and of course those devices can still fail and need repair. At least in that case the machines are fungible: a permanent replacement can just be handed over, and the faulty device, once repaired, can replace another device the next time one fails.


> they could just allocate a new machine, and work entirely out of a data center.

Not really: you still have to manage endpoints, and they still have peripherals. One of the big advantages of systems like this is access to high-performance compute, but generally you'll want good displays and peripherals to interface with machines like that.

With a remoting system, you also need to make sure that your network performs well enough not to cause strain for your workers.
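For a rough sense of scale, here's a back-of-envelope estimate with assumed (not measured) numbers for a single remoted 1440p display and an interactive latency budget:

    # Illustrative arithmetic only; the bitrates and budgets are assumptions,
    # not measurements of any particular remoting product.
    frame_rate = 60                 # frames per second
    bits_per_pixel = 0.1            # after video compression (assumed)
    width, height = 2560, 1440      # a 1440p display

    bandwidth_mbps = width * height * bits_per_pixel * frame_rate / 1e6
    print(f"~{bandwidth_mbps:.0f} Mbit/s per display")   # ~22 Mbit/s

    # Interaction budget: keystroke-to-photon under ~100 ms feels responsive.
    # If encoding + decoding eat ~30 ms, that leaves ~70 ms for the network
    # round trip -- fine on a campus LAN, marginal over a bad home connection.
    network_budget_ms = 100 - 30
    print(f"network round-trip budget: ~{network_budget_ms} ms")

Multiply that per-seat figure by a building full of workers and the network stops being an afterthought.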

All that is not to say it's exactly the same workload, but it may not be as big an improvement in administrative complexity as it seems at first blush.


So we're back to 21st century mainframes.

Count me out. Personal computing forever!


The benefits of this are substantial, which is a bit worrying.

If this style of computing becomes the norm, it would centralise and fragilize the systems, companies, and economies that rely on it, as well as hand control over what can be run to a third party (see: "the war on general-purpose computing").


When making multiplayer games in the 90s I realized that the player's computer was sometimes more powerful than the server, so the strategy ever since has been to put as much work on the client as possible, in order to fit as many players as possible on a server. Hosting servers is expensive, and you want reliability on the server side: the server goes down, hundreds of people can't play; one client goes down, not your problem.
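To make the split concrete, here's a toy sketch (the names and the "physics" are invented, not taken from any real engine): the client does the expensive per-frame work, the server only runs a cheap sanity check.

    # Illustrative only: shows the client/server workload split, nothing more.
    MAX_SPEED = 10.0  # server-side rule

    def client_simulate(position, velocity, dt):
        """Runs on every player's machine: the expensive per-frame work
        (in a real game: physics, animation, prediction, interpolation)."""
        return position + velocity * dt

    def server_validate(old_position, new_position, dt):
        """Runs once per player on the server: cheap plausibility check."""
        speed = abs(new_position - old_position) / dt
        return speed <= MAX_SPEED  # reject teleporting / speed-hacked clients

    # The client computes the new state and only reports the result.
    pos = client_simulate(position=0.0, velocity=5.0, dt=0.016)
    assert server_validate(0.0, pos, 0.016)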

The advance in network capacity and low-latency I/O opens the possibility of thin clients. Take gaming consoles: in a few generations you'll probably just connect via your smart TV. The biggest incentive is probably DRM, where you stream the content rather than owning it.


Depends on latency requirements. Data analysis? Yep. Creative pros? Nope.


Why? People are OK with playing games via streaming. Why shouldn't it also be OK for work?


Terminal servers were a good idea back in the 90s. All that is old is new again.



