The next century of computing (bzogrammer.substack.com)
139 points by bzogrammer on Sept 27, 2022 | 76 comments


My first thought when I saw this was Dan Luu's recent blog post on futurism and how to accurately predict the future (https://danluu.com/futurist-predictions/). And on that scale, the futurist presented here seems to land on the less accurate end of things--the understanding seems closer to cocktail party (or perhaps more accurately "I once saw a technical talk") than the deep domain expertise that appears to be necessary for accurate predictions.

I'm only a compiler developer, so my knowledge of computer architecture, operating systems, and formal methods isn't all that great--but it does appear stronger than the author's. The statement of how processors work today in prediction 4 is just plain wrong (as is the analysis of OS complexity in prediction 7), and my experience with exotic accelerator hardware paradigms makes me lol at prediction 3.


On the contrary, evidence has repeatedly shown that deep domain expertise results in tunnel vision and worse predictions.

See e.g. https://www.smithsonianmag.com/smart-news/why-experts-are-al... or the fact that breakthrough papers initially get more citations outside their field than within it


Yeah - I look at the last few decades of computing and see the great god of compatibility driving almost everything. It's all very well the author seeing some cute ideas and running with them, but not many are in a position to discard everything, so we will do as we've done for the last few decades and pick the technology that lets us build on what we've got.

You don't have to look far for examples. Remember Intel Optane memory - a genuine improvement to memory. But because it doesn't match well with existing OS and apps, Intel struggled to do much with it, and seems to have largely dropped it :(

I'd love if even a few had the opportunity to tidy-up our decades of history and mistakes, but I don't see it happening.


This can be summarized as “hardware and software coevolve.” Neither can deviate too terribly from the constraints and requirements of the other.

Of the two software can perhaps deviate more, but at the cost of performance. Software that deviates too far from hardware ends up effectively being an emulator for alien hardware.

Hardware is more constrained. If there is no software, nobody uses it. There is perhaps a way out in the form of JITs and translators and emulators, but it's very difficult to make that work well. The current crop of x64-to-ARM64 translators found on Mac and Windows ARM PCs is one of the few cases where I have seen it work.

Intel should have just presented Optane as a crazy fast hard drive specialized for databases. If you can get some use case going, the hardware can find a niche, and then software makers will start exploring what else it can do.


Exactly. This is all an economic question that involves humans, how plentiful the people trained in different topics are, the availability of different resources, security requirements (Cold War 2.0 ups security costs, for example), and many other unknown factors like relative breakthroughs in AI, physics, quantum, etc.

But that said, I think point 2 (much more bizarre hardware) is basically guaranteed. As OSS and cloud-provided services (e.g., RDS) centralize our approaches to common problems into a small set of commonly used libraries, the common functions of those libraries will continue to get baked into hardware with lower and lower overheads (not just chips; think the busses between them, etc). Basically, TPUs and M2s are the pre-tremors of what is to come.

I basically think of this all as a massive economic/search problem with highly aligned actors between neighbouring layers, though not necessarily beyond them. For example, breakthroughs in mathematics, or in AI-driven mathematics, could, say, 10x LLVM optimization for common tasks, which would dampen demand for CPUs in the short to medium term until economic growth catches up to demand even more in the long term.


I've also seen Optane recommended as cache for ZFS pools[0]. I wonder if Intel is sufficiently disillusioned with Optane to license it to another manufacturer. Particularly with the CHIPS act in place, perhaps there's room for someone to take another run at it.

[0] https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd...


I'd love to see Optane used as the basis for a non-volatile OS. Unfortunately pretty much all our substantial software "needs" to be restarted occasionally, so a non-volatile system is not so practical :(


A non-volatile system doesn't mean nothing can restart, just that everything doesn't have to always restart.


I feel hardware evolves, and we can now use more of what we already developed decades ago but was impractical. I feel not as much progress is being made on the software side of things. Don't get me wrong, we got better compilers and software pipelines, but I feel most of our stack isn't more productive; we are moving in circles. Cloud is cool, but it's 90% hardware, plus the same CI/CD and deep learning. There are some sparks of brilliance in software design, SurrealDB for example, but they are small and sparse, and in general these types of ideas don't stick even when they are better.


There are several technologies that can make transistors that are too expensive to pursue in earnest until 2nm reaches commodity pricing which means multiple companies making fabs near that performance point. That will be a while probably, but it will be inside a hundred years, unless we end ourselves. Some of those technologies may exceed silicon performance and allow our continued growth. After 2nm hits commodity pricing, but before graphene or diamond or whatever hit feature parity, I wouldn't be surprised to see more intrinsics popping up to support common usecases in devoted silicon, but that just makes more work for you as a compiler developer.

I do think in the nearer term, computing will get less abstracted and people will spend more time on optimization. Not because of moores law, but because the demand for new chips is far exceeding supply from bleeding edge fabs, and it will take a long time to catch up. This may get exacerbated by politics and shrinking manufacturing workforces. So we will have to do more with older tech. These exotic architectures the author posits have to get fabbed by someone, and they won't do that while they can use existing designs to print money.


""" 1. Let us start by getting the obvious out of the way. Moore’s law is coming to an end. It is slowing down rather than coming to a grinding halt, but already Dennard scaling has broken down, which eliminates many of the real benefits from scaling """

Is it? I've heard this before but I'm having trouble seeing this statement justified.

Wikipedia shows the cost of computing over time and, as far as I can tell, is almost exactly a halving of cost every 1.5 years [0].

The rest of the article looks like it's a mish-mash of predictions based on the faulty (?) assumption that Moore's law is coming to an end yet that we're going to be using computers that are closer to physics but not get the benefits of running computers "closer to the metal".

""" 22. The “Singularity” will fail to materialize. ...

23. Human-level AI will materialize ... """

What?

""" 28. Various forms of non-text programming tools will be developed. Purely audio-based programming will be available for blind programmers as well as someone who may want to just code via a conversation with their computer while going for a walk. """

So a "purely audio" based programming will evolve that is not text based but we'll program it by using a text based language. Huh?

I got to 28. I'm giving up. This is lazy prediction. It's making vague predictions that sound pseudo-intellectual without giving concrete conditions under which they would fail to materialize. Part of the value of scientific predictions is their falsifiability.

If I were to say "Hard AI by 2032", there might be some ambiguity in what "Hard AI" means, but at least it's getting towards falsifiability and it's giving a time bound. Here are some others: "1 Bitcoin = $1M by 2026", "50% of the world's power by 2040". You may disagree, but at least when the time comes, there's a way to say that I was wrong.

Can most of these predictions in the article even be wrong?

[0] https://en.wikipedia.org/wiki/FLOPS#Cost_of_computing


> Can most of these predictions in the article even be wrong?

There's no specific timeline given for falsifiability, but the title implies that we could generously bound it by 2100 or 2122. Assuming we had such a timeframe, and looking only at the first 20:

3, 4, 8, and 12 contain specific predictions that can be easily determined to be true or false.

1 and 5 contain predictions that lack numerical specificity, but if they had it, they would be easily determined to be true or false.

2, 7, 9, 10, 14, and 16 contain predictions that are somewhat vague, but I think most people could reasonably agree on a specific, easily determinable prediction that captures the intent of the prediction.

11 and 13, to the extent that I can draw a prediction out of them, seem to reflect things that already exist today. (I might also stick 2 in here as well).

15 and 18 describe things that contain somewhat vague predictions where I think reasonable agreement on specific, easily determinable predictions is challenging.

6, 17, 19, and 20 are hot garbage.

So if you're being pretty strict, 20% of the predictions contain something falsifiable; if you're being charitable, you can't push it past 70%. Although I would personally credit only the first two categories (so, 30%).

(FWIW, I agree with... almost none of these predictions.)


> Is it? I've heard this before but I'm having trouble seeing this statement justified.

> Wikipedia shows the cost of computing over time and, as far as I can tell, is almost exactly a halving of cost every 1.5 years [0].

Not Moore's law (stacking transistors will likely help for a while yet), but you can pretty easily chart the progress towards the Landauer limit and see the plateau in the current paradigm coming (by my count about 3-5 orders of magnitude in energy efficiency of classical computing is the asymptote, before you start throwing away 99% of the CPU and replacing it with 100 special-case gated ones if you want more efficiency). As you approach the limit, the tricks that CPUs pull become a larger and larger portion of your energy, and progress is made by simplifying things and making each task more specialised, as the article talks about.

For more transistors on a plane they're hitting some hard limits. Currently gate pitch is ~30nm. Maybe at most a factor of 10 improvement is available for things made of atoms pushing around electrons. This gives 100x the transistor density as the end state of an industry that is already spending exponentially more on developing each generation of fab.

At some point stacking layers will have a greater than logarithmic cost per layer, and then storage will be done too. Very roughly maxing out at a 1 exabyte microsd card.

The predictions I'm making here can be considered to have a somewhat hard time limit, as the remaining 6-12 accessible doublings in each category must happen at a pace such that it is obviously true or false by 2040. E.g., if there are 3 doublings in general-purpose single-threaded CPU power efficiency between 2035 and 2040, or the gate pitch of transistors hits 1nm, or storage costs are halving faster than once every 18 months in 2040, I am definitely wrong.

Reversible computing is not bound by the ~5 OOM efficiency increase, but will be a substantially different paradigm (you can't delete anything for example).
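
To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch. The Landauer limit itself is standard physics; the per-bit-operation energies for current CMOS are my own assumed ballpark figures, not numbers from this thread:

    import math

    k_B = 1.380649e-23               # Boltzmann constant, J/K
    T = 300.0                        # room temperature, K

    # Landauer limit: minimum energy to erase one bit of information.
    e_min = k_B * T * math.log(2)    # ~2.9e-21 J

    # Assumed ballpark energies per irreversible bit operation in current
    # CMOS (illustrative only, spanning optimistic to pessimistic guesses).
    current = {"10 aJ per bit-op": 1e-17, "1 fJ per bit-op": 1e-15}

    print(f"Landauer limit at 300 K: {e_min:.2e} J")
    for label, e in current.items():
        print(f"{label}: ~{math.log10(e / e_min):.1f} orders of magnitude above the limit")

Depending on which ballpark you pick, that works out to roughly 3.5-5.5 orders of magnitude of remaining headroom for irreversible classical computing, which is the 3-5 OOM figure above.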


Perhaps you are unaware of the angstrom-scale process nodes on the semiconductor roadmaps? I.e., IMEC, TSMC and Intel are all aggressively pursuing "20A" at the moment, with an eye toward a new generation of sub-1nm production.

This isn't just some arbitrary guesswork on a blog, it's practical planning by the major suppliers. Check it out.


Please don't be condescending and wrong at the same time. The quantum tunnelling limit didn't mysteriously go away because the marketing department decided to name things based on the thickness of one particular feature. Nor do the laws of physics change when you bury your transistor in dielectric (the wavelength does, but that only works once).

Node name just barely correlates with minimum feature size (how thin can a line be drawn). That's starting to break down (hence intel 7 and 2N and so on) but that's not the limit I'm speaking of. The "2nm" nodes wind up with functional units about 30nm from center to center.

Compare to the 22nm node which was almost precisely 44nm center to center for a dram cell. In density terms there's maybe a factor of ~20 between 22nm and 2nm processes, so functionally the 2nm node can make things which are about a fifth the linear dimension.

Gate pitch has a fairly hard limit based on voltage and electron wavelength.

You're never going to put another transistor within about 2nm of a first one and have them both work. Hence the roughly order of magnitude in linear dimension past current.
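
Putting the linear-vs-density arithmetic above in one place (the ~30nm pitch and factor-of-~20 density gain are the figures from these comments; the 2-3nm floor is the electron-wavelength limit being argued for):

    # Density scales with the square of linear spacing, so a factor of ~20
    # in density is only ~sqrt(20) ~ 4.5x in linear dimension.
    density_gain = 20
    print(f"linear shrink, 22nm-class -> 2nm-class: ~{density_gain ** 0.5:.1f}x")

    # Remaining linear headroom if gate pitch is ~30nm today and the floor
    # for two working transistors is roughly 2-3nm of spacing.
    current_pitch_nm = 30
    for floor_nm in (2, 3):
        linear = current_pitch_nm / floor_nm
        print(f"floor {floor_nm}nm: ~{linear:.0f}x linear, ~{linear ** 2:.0f}x density left")

Which is where the "roughly an order of magnitude in linear dimension, ~100x in density" end state comes from.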


I wasn't being condescending, and I am objectively not 'wrong'. I merely urged you to gather information you may not be familiar with, particularly IMEC's roadmap.

I will point out that you instantly went into condescending mode after complaining about my imagined condescension. I assure you, I'm well aware of what a process node implies about measurement of larger features. I'm also aware of the promising approaches coming down the pike.

I'm _also_ aware of the decades-old Internet tradition of people posting about the hard physical limits and immediate death of Moore's law, even as the limits are overcome year after year.

I am glad that you have a handle on the problems we face here, but please try to keep in mind that these challenges are a beginning, not an end.


You continue to be condescending and wrong. You urged me to look up marketing names as if they trumped reality, when the relevant number was already in my comment.

Those predictions initially pointed directly to the 2020-2030 range, then the goal posts were moved to keep Moore's law alive (it's not a doubling every 18 months, it's 2 years). They point to 2040 now. If you stick to the 18-month rule under which the predictions based on the hard physical limits were made in 2000-2008, then it has already come true. Putting ever smaller numbers in the names of process nodes where the density of functional units is barely increasing does not change this.

Nehalem was noted for its relatively large die area and low transistor count, at about 2.7 MTr/mm^2.

core2 on 45nm was about 3.8

Zen 4 is about 93.

93/2.7 is a factor of ~34, 93/3.8 is ~24. 2^7 is 128, 2^9 is 512.

Mobile processors are slightly better from a raw-numbers POV, but already have a large portion of their area dedicated to tiled functional units like SRAM and GPU cores. Switching from a complex bespoke design to tiling only helps once. Even an M1 at 133 MTr/mm^2 vs Nehalem is only a factor of 48, and even those numbers are not commensurable, because a much larger fraction of the transistors in the M1 are doing nothing at any given time than in a Core 2.
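
To make the doubling arithmetic explicit (densities are the ones quoted above; the launch years are my own rough assumptions):

    import math

    # Density figures quoted above (MTr/mm^2) with rough launch years.
    old_chips = {"Nehalem": (2.7, 2008), "Core 2 (45nm)": (3.8, 2008)}
    zen4_density, zen4_year = 93.0, 2022

    for name, (density, year) in old_chips.items():
        years = zen4_year - year
        observed = math.log2(zen4_density / density)     # actual doublings
        print(f"{name} -> Zen 4 over {years} years: {observed:.1f} doublings, "
              f"vs {years / 2:.1f} expected at 2 years, {years / 1.5:.1f} at 18 months")

Roughly 5 observed doublings against the 7 (2-year cadence) or 9 (18-month cadence) the classic formulations would predict.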

Prices per unit of die area are about stable, although chiplets are needed (again) to maintain yield. This will go up with EUV and more exotic materials used to increase the dielectric constant.

Clocks are stable.

Performance per watt will continue to increase for a while after using architectural changes similar to some of those mentioned in TFA, but there is no getting around the second law with a classical computer.


Just out of curiosity, can you explain to me what you mean by 'marketing names' in relation to IMEC? I'm not quite convinced you understand what that organization is, what they do, or what they have accomplished.


The name at the top of the slide for a node which says '3NM' or '20A' and increasingly the 'transistor density'.

The former were divorced from a specific measurement of a functional unit at around 22nm to make press releases sound better and not scare investors when it became clear that Moore's law was dying, and the latter are increasingly divorced from a measurement relating to the density of actual functioning logic on the die.

The best bit is that metal pitch and poly pitch are actually on those slides, which would have allowed you to see what I meant if you'd read them: there's no roadmap past a functional unit roughly 10nm in linear dimension. That roadmap will likely be extended to somewhere in the 5-2nm range at some point, but after that, transistors are basically done, because that's how big 'an electron in a low-voltage potential well' is.


Note that the size label of process nodes is almost completely uncorrelated to actual feature size. GP makes the comment that the existing ~5-7nm nodes are actually running at ~30nm gate pitch.


Cost of computing is not Moores Law.

"Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years."

Unless you're talking about Dennard scaling, which started breaking down in 2006 according to Wikipedia:

https://en.wikipedia.org/wiki/Dennard_scaling

I'm not defending every point the author made, but some sure seem right to me. Especially point #2: that we will get more exotic computing hardware.

As for #28, I can easily imagine what the author means. Think the movie Her. You don't have to type out "from typing import Optional" over and over again. You just say to the computer: "Let's add types to the classes in module orders.py" and it comes back with "Able to add typing to 98% of variables automatically, but SpecialOrdersController has complex inheritance and I wasn't able to ..." and you go from there.

Anyway, don't get too huffy and stuffy with these types of things. It's an exploration of ideas not a PhD defence.


22 v. 23: A human-level AI does not bring the singularity. The singularity is an AI that is self-improving exponentially faster. A human-level AI (probably) doesn't do that, because humans don't do that.

(Never mind that it's far from clear whether even a better-than-human AI will do that...)


If the AI is actually human-level and is made by humans, then it will be capable of working on itself. Assuming the evolution of human intelligence was mainly limited by biological factors that don't apply to AI (like that brains can't be too big to go through the birth canal, that a body needs to optimize how much energy it gives to the brain vs other parts, etc), then if we hit human-level AI then we're going to be able to go past that. If humans are able to improve AI at any speed, then the AI would be able to improve itself exponentially faster because it will benefit from its own improvements.


Why would anyone assume that AI won't suffer from either physical limitations or abstract ones?

We can barely make a GPU that doesn't melt, and we certainly can't make reliable non-trivial software.

If you increase complexity by a few orders of magnitude those problems not only don't go away, they become far worse.


I think it's much more likely that we either fail to make human-level AI or we wildly surpass human-level AI than us getting to human-level AI, petering out exactly there, and not facing a radically AI-changed world.

If we or AI run into a barrier stopping us from improving AI, it would be surprising if that barrier was at the same exact spot where evolution ran into a barrier at improving our intelligence given all the differences between the situations. Even if it turns out in some sense that humans are running the "optimal" algorithm for intelligence already and there are no true algorithmic improvements over our intelligence available, the abilities to scale up (or network) a computer, to clone it, to serialize it, and to introspect and modify it will be huge force multipliers. If some human minds were given these abilities then they individually would have a major impact on society; if we're presuming actual human-level AI then they will too.

The only reason someone would assume that when we get to human-level AI that we'll definitely get comfortably stopped there is because they're hoping for a simple outcome that they can understand, of a world that doesn't change too much.


Humans can't upgrade their own hardware all that easily though, unfortunately


Can an AI? Software can upgrade the hardware it runs on? You need a lot more than just a human-level AI for that to be true.


Nothing about the increasingly dystopian direction that computing is taking, a prediction which I very much hope will be incorrect but currently shows no sign of it.


I agree with this. I believe computer scientists and programmers especially just solve problems because they can, not really giving much thought to how those technologies actually affect society. They do it because it's cool and because they get paid a lot. And while a lot of computing has made life better, a lot of other computing (tracking, social media, making consumerism more efficient, AI to replace people) generally makes society worse.


Dystopian for you, yet billions of people have access to computing and it works intuitively for them.


They don't have access to computing. They have access to some apps that siphon their data.


Communication between humans is increasingly being intermediated by organizations who have incentives to manipulate.


I think most of this list is baloney, which however raises a deeper point: what's the point of compiling such a list of predictions?

In all likelihood, most of the predictions will turn out false and some might turn out to be true. Then the author will be able to say: "See? Told you!" on those, and for the ones where he was off, no-one will hold it against him because - after all - predicting the future is known to be hard.


> Common programmers will be forced to actually learn how CPU caches and similar systems work

Don't agree with that. Rather, hardware makers will ship a code-generator backend specialized to their design. Compiler users will be able to plug in a code-generator matching their choice of hardware.


Just sticking to the hardware predictions, I am unsure if things will really go as planned. The thing with classical computing is that it's an easy-to-understand model, compilers and programmers can both produce good code for it. Predictions that simple, core-parallel processors will sweep away clever, ILP-based processors have been big failures historically, and I expect that to continue. The end of garbage collection also seems a step too far. It, like our current processors, is good enough that radically different successors are unlikely to succeed.

EDIT: I should clarify this is for general-purpose architectures. For domain-specific purposes, we definitely are seeing new, weird designs pop up. I liked this breakdown of the 'Dojo' architecture, to give an example: https://chipsandcheese.com/2022/09/01/hot-chips-34-teslas-do...


>46. Deeper research into P v NP will result in a far deeper and clearer understanding of exactly what is and is not possible in machine learning, and the nature of intelligence itself.

This seems like one of the most far fetched and least informed predictions on the list. I am exceedingly confident that P v NP research will not have relevance in the philosophical field of understanding the nature of intelligence. Additionally, I am not aware how "machine learning" would intersect with P v NP research.


> "The basic concept of computing as a machine executing a stream of instructions, shuffling data back and forth between processor and memory, will eventually be abandoned in favor of more exotic models."

Hard disagree. Thing is, compared to the size of an ALU, a register file, and some SRAM, all of which are necessary for doing meaningful work, the control logic and wiring to turn the thing into a CPU is negligible.

I did signal-processing-related work for a living around a decade ago using FPGAs, and even those things had multiply-accumulate units and SRAM cells next to their programmable logic soup.

The guys next door did something similar, but they used DSPs, which were specialized CPUs, and the performance was a toss-up.

Thing is, people (programmers) are a lazy and ignorant bunch. Looking at Geekbench benchmarks, a 2022 CPU is about 4x-5x faster on a single thread than a Core 2 Duo, a 16-year-old CPU.

That means it's perfectly possible that a 2006 C++ program on 2006 hardware was as fast as, or faster than, a modern one (written in JS) on the computers of today.
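
Back-of-the-envelope for that claim (the 4x-5x figure is the Geekbench comparison above; the JS-vs-C++ slowdown is an assumed, purely illustrative number):

    # If single-thread hardware got ~4-5x faster since 2006, any language or
    # runtime overhead larger than that factor hands the win back to the
    # 2006 native program.
    hardware_speedup = 4.5       # midpoint of the 4x-5x quoted above
    assumed_js_overhead = 6.0    # assumed, illustrative JS-vs-C++ slowdown
    print(f"2022 JS vs 2006 C++: {hardware_speedup / assumed_js_overhead:.2f}x "
          "(below 1.0 means the 2006 C++ program is still faster)")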

Yes, modern hardware architecture sucks and the reason behind it is the pretense of a shared, flat memory space. And even that abstraction is worthless, since you can't access any memory that's used by another CPU, without jumping through a ton of synchronization hoops.

But at least we have those wonderful cache coherency mechanisms that gave us side channel attacks.

Yes, the answer is to do away with this and find another way. The PS3 did this with Cell. It was the future of computing. It's coincidentally about as old as the Core 2 Duo. It's also no longer around. But JavaScript is, and it's more popular than ever.

But I'm sure most of the problems outlined in the article were discovered decades before the PS3 was a thing and solutions were proposed then abandoned.

I'm sure there are CS people with many decades more experience who have seen these insurmountable problems solved, and the solutions abandoned many times over.


> The computing industry will get over its irrational fear of Turing. Undecidability is deeply misunderstood; undecideable problems must be solved, and are regularly solved by any useful static analysis tool. Undecidable problems are not incomputable

Here the author displays their own misunderstanding of undecidability. The fact that you can decide many instances of an undecidable problem doesn't change its undecidability in the least. Furthermore, when you consider computational hardness rather than undecidability, there are problems that are hard nearly everywhere (think of things like discrete log over elliptic curves), so there's no hope to approximate solutions.


Any algorithm can be made to create "yes", "no", and "I don't know" buckets, and all undecidability means is that "I don't know" cannot be empty. I don't really know of any case where undecidability truly caused people to shy away from attempting to solve problems: static analysis is frequently undecidable and yet it's a fairly thriving field.


As a hard result, undecidability may be unimportant. One rarely wants to know whether a Gödel sentence is "true" or to determine whether a program that references itself halts.

But such theorems, by placing a theoretical, if contrived, limit, often seem to have a qualitative implication as well: namely that solving the problem in the general case, even if we manage to exclude pathological cases, is hard or even unfeasible.

It seems consistent with the fact that there is no known algorithm for deciding whether a theorem is true, or for accurately and fully predicting the behaviour of arbitrary programs. As you said, that doesn't make formal methods useless. But it reframes the discussion away from trying to find a universal problem solver towards discussing tradeoffs (e.g. pushing the proof burden to the programmer, rejecting valid programs, or using imperfect heuristics).


Although, to be fair, undecidability often does serve as a blanket "do not even bother attempting this" refusal. But having a semi-algorithm with a work limiter bolted on ("if solution attempts take more than X operations and/or Y storage, declare failure") should often be pretty practical.

After all, we already know of several theoretically exponential but practically linear problems.
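
A minimal sketch of that "yes / no / I don't know" bucketing with a work limiter bolted on (my own illustration; the Collatz map is just a convenient stand-in for a problem nobody can decide in general):

    from enum import Enum

    class Verdict(Enum):
        YES = "yes"
        NO = "no"
        UNKNOWN = "I don't know"

    def reaches(f, start, target, max_steps=10_000):
        """Does iterating f from `start` ever hit `target`?"""
        seen = set()
        state = start
        for _ in range(max_steps):
            if state == target:
                return Verdict.YES
            if state in seen:        # provably stuck in a cycle that excludes the target
                return Verdict.NO
            seen.add(state)
            state = f(state)
        return Verdict.UNKNOWN       # work limit hit: the bucket undecidability forces on us

    collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
    print(reaches(collatz, 27, 1))   # Verdict.YES
    print(reaches(collatz, 5, 3))    # Verdict.NO (falls into the 4 -> 2 -> 1 cycle)

Most real instances land in the first two buckets; the theory only guarantees that the third one can never be made empty.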


Yeah that part was basically where I stopped reading because it's clearly nonsense.

Point 3 about the "outdated computing models from the 1940s and 50s" also ignores the fact that Turing machines are provably equivalent or stronger than any other system of computing ever conceived.


It's not nonsense in the terms in which it is explained: although a problem may be mathematically undecidable, it may still be a solved problem in practical terms - i.e. finding good-enough solutions for real-world instances.

As for point 3, I'd say you're making the same mistake; a model may be mathematically powerful yet practically unwieldy, and computing models from the '50s suffer this problem. Brainfuck is Turing-complete, yet it would not make sense to recommend using it just because it is provably equivalent to or stronger than any other system of computing. Similarly, the von Neumann architecture was a great solution for computing physics problems with the electronics available in the '50s, but basing every modern programming language on that paradigm is a mistake.


> It's no nonsense in the terms that it is explained: although a problem may be mathematically undecidable, it may still be a solved problem in practical terms - i.e. as finding good enough solutions for real world instances.

This is well known, that's the whole reason for formal methods in CS or of languages like Coq, Idris and so on. There is nothing revolutionary about it.

The author should probably spend more time explaining why they think we will see a golden age of formal methods when the industry has consistently been pointing in a different direction for decades, instead of making vague associations or insinuating that we "misunderstand" Turing.

> As for point 3, I'd say you're making the same mistake; a model may be mathematically powerful yet practically unwieldy, and computing models from the 50's suffer this problem.

Compilers, interpreters etc. can easily translate between equivalent computational models (e.g. Lambda calculus and Turing machines), but the formal results, including undecidability, apply to all of them. So it doesn't matter that a Turing machine is hard to program; nobody wants to actually use it as a programming language. It's a computational model.

> Similarly, the Von-Neumann architecture was a great solution for computing physics problems with the electronics available in the 50's, but basing every modern programming language on that paradigm is a mistake.

I would say that, if you make such a claim, you should state why it is a mistake and what should be done instead.


> I would say that, if you make such a claim, you should state why it is a mistake and what should be done instead.

Others have explained the problem with the von Neumann bottleneck,[1][2] and ways to overcome it or even redefine it.[3]

The industry is evolving towards programming paradigms less affected by that original computation model such as functional, functional-reactive, or agent-based programming(*), yet most of those are still converted down to Von-Neumann-style runtimes to be processed. Now that machine learning is increasingly being used to solve new problems, many computations may be done with dedicated non-VN hardware like tensor processors or analog computers (which are coming back with a vengeance).

(*) Listed in increasing order of real-world expressivity, although the computational expressivity of all of them is mathematically equivalent to Brainfuck.

[1] https://dl.acm.org/doi/10.1145/1283920.1283933

[2] https://www.techtarget.com/whatis/definition/von-Neumann-bot...

[3] https://www.sigarch.org/the-von-neumann-bottleneck-revisited...


Most of his predictions are nonsense, and show a lack of understanding of the logical foundations of modern computing. His assertion that instruction sets are "obsolete" is absurd. The Turing/Von Neumann computer uses an instruction set and RAM to do computing. He is proposing some nebulous jelly computer that would be unprogrammable.

The idea that ordinary people can write their own OS is laughable. There are several million lines of code in every OS related to just drawing and responding to a text entry field. There aren't even 100 people who know the TrueType language which underlies all font rendering. The last OS of any accomplishment produced by a small team was the Oberon project at ETH, and that was decades ago, before the web, and that team of very smart people was not ordinary.

Hardware has to evolve with software. Intel's 3D XPoint memory tech, which was marketed under the Optane name, failed because people didn't know how to use it properly. New hardware will easily be stunted if it can't get traction. Look at the failure of the Adapteva Epiphany chip. It was a 10X improvement in computation per watt, but nobody knows how to program/debug a chip with thousands of independent cores.

The underlying mathematics that we all use originates from linear, sequential, Greek Proof, and we have no parallel logic math developed yet.


This looks a lot like a hardware geek's wish list (of course, deprecating software). It's rather disappointing that hardware folks still can't stop throwing shade onto software. I worked for hardware companies for almost my entire career, and it was constant "background noise," the whole time.

> Far more powerful software diagnostic tools will be built.

Anyone remember ICEs (in-circuit emulators)? I would love to see one these days, but I think it would be insanely expensive.


> "The basic concept of computing as a machine executing a stream of instructions, shuffling data back and forth between processor and memory, will eventually be abandoned in favor of more exotic models."

I suspect that if SSD drives could be made faster, maybe we wouldn't need separate RAM chips. The SSD drives could be used both for long-term and short-term storage. That said, it feels like the author is referring to something more radical than that. It reminds me of a concept I read about which involves moving the processor to the data instead of moving the data to the processor. It all sounds very abstract at this stage though.

> "rather than relying on context switches. Cores dedicated to specific programs or services will minimize the need to move large amounts of data around"

Makes sense.

> "I wouldn’t be surprised if we eventually drop big-O notation for a different, more powerful model of computational complexity."

That's going to be a tough one.

I sincerely hope that the decade of the 'big refactoring' is upon us because I'm getting tired of all the layers of complexity which have kept getting piled on top of each other over decades.


Liking the theme of a return to simplicity.

It's a view I unfortunately believe isn't likely, given the prior art of escalating complexity in both software (think web, OS) and hardware (modern CPUs/GPUs).

However, I guess there are many exceptions to this rule. Further back, I can think of the creation of high-level programming languages, or more recently the progress in AI tooling to assist various crafts.


Some of these predictions are already happening so maybe "prediction" is the wrong word? Traditional computation has for a long time assumed that "computation" is expensive and "communication" between logic components is almost free. That's reversed now - the communication needs of computation are already a lot more expensive than the raw computation needs (transistors/gates).

That means something like binary tree traversal can no longer be O(log N); it actually becomes O(sqrt(N)), or at best O(N^(1/3)). We're also seeing that hardware circuits, computer software and the human brain already have common emergent communication behaviour. If you're interested in the fundamentals of what this means for future computation, see for example:

Communication Locality in Computation: Software, Chip Multiprocessors and Brains https://www.cl.cam.ac.uk/~swm11/research/greenfield.html
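
To put rough numbers on that shift in cost model (my own illustration of the asymptotics, not figures from the paper):

    import math

    # Classic RAM-model access cost vs distance-dominated costs for a
    # structure of N elements laid out in 2D (~sqrt(N) wire length) or
    # 3D (~N^(1/3)).
    for exp in (6, 9, 12):
        n = 10 ** exp
        print(f"N = 10^{exp:2d}: log2 N = {math.log2(n):6.1f}, "
              f"sqrt(N) = {n ** 0.5:12,.0f}, N^(1/3) = {n ** (1 / 3):10,.0f}")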


> Dennard scaling has broken down, which eliminates many of the real benefits from scaling further for chips that are not almost entirely memory.

Weird; this suggests we get real benefits from scaling for memory while it is my impression that SRAM has seen smaller density improvements than logic circuitry in the recent scaling down from 7nm to 5nm and 3nm.


> Sensory augmentation tools will be developed for software development. It will be possible to listen to sounds that communicate useful information about the structure and behavior of code

One wonders if this is outlandish to high heaven or just a lack of imagination on one's part. Has anyone heard of such a thing before?


In a way we used to have this and lost it relatively recently. We used to be able to hear/feel the hard drive noise/vibration and make informed decisions on what to do based on it.


8-bit home computers often (unintentionally) output noise over the audio output while the processor was busy. You could gain even more insight by putting an AM radio next to them. With practice you could get a rough idea of which part of your program the computer was in.

Of course, we're talking about processor speeds <10MHz, you couldn't do this today, but maybe the same principle applies.


When driving, hearing the engine and the road underneath the tires "communicate[s] useful information" that many drivers understand intuitively from experience. I don't see why the same can't be applied to almost any activity, so long as there's information to communicate.


I don't think that the current style of CPU architecture will completely disappear.

But it will likely be used as a control plane CPU, connected to much more powerful data-plane hardware, like the GPUs we use today.

I am not convinced that exotic / ultra-specialized hardware will be the norm. I think there will be the usual back and forth between performance of dedicated silicon and the convenience and power of programmability.

General purpose programmable silicon is likely to prevail once the dust settles.


I would class this more as science fiction rather than prediction. You could preface each prediction with 'What would happen if..' and it is quite thought provoking. The last 30 predictions are paywalled mind you.


It does seem rather like someone has just scanned for IT futurist stuff and put it all in a list - there's little consistency between many of the points, and sometimes little consistency within a point!


Predicting the future of technology seems to be difficult...

https://www.smithsonianmag.com/history/1923-envisions-the-tw...


Ehhhhh

A lot of these are quite annoyingly ideological and opinionated.

This is one of those articles that shows up 100 years later with: look how dumb and wacky our ancestors thought the future would be!

Also, not a word about alternatives to 2D silicon, in fact doubling down on it.

LMAO and he paywalled the last 25 "predictions". Very shady.


This article is dumb today too, and not presenting any concrete idea of the future is a big fallacy for cash grabbing. 3D stacks will probably happen next year, so that part is true in some sense, and quantum computers have been massive non-2D things for some years now, but the point is that nothing he says has a time frame, nor strict definitions of what would count as it happening.


About #10: isn't undecidability the same as uncomputability? AFAIK, they are synonyms.


What if…the “next century” of computing is happening in the next five years? And after that, software will mostly be written by text prompts, just as AI creates images now.

This is the disruption that nobody sees coming…but it very well could happen.


n+1. COBOL and various JavaScript frameworks will still be heavily used.


Rust will have become a pile of rust.


Rust cannot rust. You cannot light fire on fire.


But you can "Fight fire with fire"


That's like the next decade, not century. AI will render most of those tasks obsolete, to the point where 'computing' will hardly be a recognizable industry by 2122.


That's what they said in the '90s. And after 30 years you have a "voice assistant" which sends your data to another human.


Ah, that deus ex AI that pops in every hackernews thread and not in reality.


The future takes time until it distributes itself


I may have a frog in the well's view but this seems to be very optimistic given the state of investing in these futuristic ideas


I didn't see much about moving past traditional, small screens in this list, but I sure hope we get room-scale computing that I can walk around in!


81. Video games will get more realistic


The author seems to have major issues with VR for some reason.


Just tell me how many AI winters we will endure this century



