
Something that I often find frustrating about modern technology is how little we do with the vast amounts of processing power we have. A lot of what we are doing now - word processing, spreadsheets, email, web browsing - was done by machines with a fraction of the computing power two decades ago. What does it matter if the numbers get bigger if the use cases are the same?


Personally I talk to computers a lot these days.

It isn't riveting conversation, just stuff like "Sunset" (to make the lights warmer), "remind me to buy cheese when I get to $GroceryStore", "play $album please", but it adds up.

I also take a lot of pictures, which have become unreasonably good to the point where I'm still learning how to take a better picture with my fancy mirrorless than I can take with my phone. Both of them are computers.

After I take those pictures, the phone does accurate analysis of what's in them, so when I search for cats, or spiders, or flowers, it finds them. It does this on the device, which is pretty cool.

I have another computer that flies, I can tell it to fly circles around a target or do a bit of following. It's neither an expensive nor featureful example of its class. It flies for a real 25 minutes on one battery and weighs 249 grams.

There's another one which cleans my floor; to be honest, we could have done an okay job of that in the 90s - batteries and chips were almost up to it.

Then there's the one that I can tell to make fantasy dwarves and it just does it. I think that's the one younger me would have been most impressed by.


You couldn't do it all at the same time, though. It's nice to say "we could run a spreadsheet or listen to music", but they were almost mutually exclusive. Winamp was light on resources, but if you then opened a large-ish sheet in Excel, the music would skip, or the app would crash, or both.


One of the main draws of Linux in the early days was that you could renice mpg123 just enough to keep audio playing while using the computer for other things.

It could also burn a CD without freezing the system or producing a coaster.


True, early PCs were really bad at multitasking, but I haven't had many problems since DMA and L2 caches became available.


Modern software benefits from increased computational power because it enables new features and speeds up older ones. Sure, “office” apps don’t benefit much, but you’re ignoring many fields where they do benefit.

For example, the field of 3D graphics. Games and animated movies have become a lot more realistic and feature-filled thanks to more powerful graphics cards. In fact, Disney specifically puts a lot of effort into making hair realistic. That was impractical a decade ago, and impossible a decade prior.


Meh. Are the stories being created with games getting better? For example, Half-Life and Portal are pretty modern and immersive and run on some 20-year-old hardware.


The storylines and the graphics are orthogonal. It’s possible to have immersive and fun games with “poor” graphics (Portal) and it’s possible to have bad storylines with amazing graphics.

Even if you’re fond of or nostalgic for older hardware and games, that doesn’t mean you can’t recognize that things have improved.


Well, if the overarching point is that nowadays we have so much computing power and it doesn’t really result in better experiences - that most things one would want to do could’ve been done on much older hardware - then the fact that graphics are orthogonal to a fun gaming experience is kind of the point.


Meh. Are stories being created in books getting better? For example, The Decameron and Canterbury Tales are pretty impressive and were written before the printing press.


Stellaris and Cities Skylines have incredibly detailed models that I basically never see because I always play zoomed out.


You're absolutely right about 3D graphics, but how much time does the average desktop computer user spend rendering hair?

Even if you need big-time compute power for video games, there are game streaming services where someone else's computer will do that for you.

I have a high-end graphics card and all the processing power I need to play games... but I am still wasting all of that whenever that isn't what I'm doing, aren't I?


> I am still wasting all of that whenever that isn't what I'm doing, aren't I?

How is this different from owning anything? I have a bike, but I’m not riding it literally all the time. But I still don’t think owning it is a waste.


I feel that single-threaded processing power stopped increasing at 2 major events in history:

* The arrival of video cards around 1997 (focus shifted from general computation to digital signal processing)

* The arrival of the iPhone around 2007 (focus shifted from performance to power consumption)

I'd vote to undo these setbacks by moving to local data processing, where a large number of cores each have 1/N of the total memory, shared by M memory busses. Memory controllers would manage shuffling data to where it's needed so that the memory appears as 1 contiguous address space to any process.

In other words, this would look identical to the desktop CPUs we have today, just with a large number of cores (over 256) and a memory bandwidth many hundreds or thousands of times faster than what we have now if it uses content-addressable memory with copy-on-write internally. The speed difference is like comparing BitTorrent to FTP, and why GPUs run orders of magnitude faster than CPUs (unfortunately limited to their narrow use cases).

This would let us get back to traditional programming in the language of our choice (perhaps something like Erlang, Go or Octave/MATLAB) rather than shaders.

Apple appears to be trying to do this with their M1 and ideas loosely borrowed from transputers. But since their goals are proprietary, they won't approach anything close to the general computing power available from the transistor count for at least a decade, maybe never.

So there's an opportunity here for someone to reintroduce multicore CPUs and scalable transputers composed of them. Then we could write whatever OpenGL/Vulkan/Metal/TensorFlow libraries we wanted over that, since they are trivial with the right architecture.

This would also allow us to drop async and parallel keywords from our languages and just use higher-order methods which are self-parallelizing. Processing big data would "just work" since Amdahl's law only applies to serial and sequential computation.
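
As a rough sketch of what that could look like - hypothetical code in Go (one of the languages mentioned above), where ParallelMap is a made-up helper rather than anything in the standard library - the parallelism lives inside the higher-order function instead of in keywords the caller writes:

    // Hypothetical sketch: a generic parallel map that fans work out
    // across all logical CPUs; the caller never writes "go" or "async".
    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func ParallelMap[T, U any](in []T, f func(T) U) []U {
        out := make([]U, len(in))
        workers := runtime.NumCPU()
        // Split the input into roughly equal chunks, one per worker.
        chunk := (len(in) + workers - 1) / workers
        var wg sync.WaitGroup
        for start := 0; start < len(in); start += chunk {
            end := start + chunk
            if end > len(in) {
                end = len(in)
            }
            wg.Add(1)
            go func(lo, hi int) {
                defer wg.Done()
                for i := lo; i < hi; i++ {
                    out[i] = f(in[i])
                }
            }(start, end)
        }
        wg.Wait()
        return out
    }

    func main() {
        squares := ParallelMap([]int{1, 2, 3, 4}, func(x int) int { return x * x })
        fmt.Println(squares) // [1 4 9 16]
    }

The caller just writes ParallelMap(xs, f); how the work gets split across cores is the library's business.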

The advantages are so numerous that I struggle to understand why things would stay the way they are other than due to the Intel/Nvidia hegemony. And I've felt this way since 1997, back when people thought I was crazy for projecting to the endgame like with any other engineering challenge.


> I'd vote to undo these setbacks by moving to local data processing, where a large number of cores each have 1/N of the total memory, shared by M memory busses. Memory controllers would manage shuffling data to where it's needed so that the memory appears as 1 contiguous address space to any process.

Cheap RAM is DDR. Fast RAM would be on-die, but that would be very expensive, or maybe now on-package (though with some tech still to be developed). But apart from decoupling access latencies, I don't really see the point of having N buses (from each local core to its local memory), especially if you need a very large number of cores. More memory channels seem good enough. The bandwidth is already hard to saturate on well-designed SoCs like the M1 Pro and above; improvements to latency would probably yield better benefits than trying to increase the bandwidth further.

> In other words, this would look identical to the desktop CPUs we have today, just with a large number of cores (over 256) and a memory bandwidth many hundreds or thousands of times faster than what we have now if it uses content-addressable memory with copy-on-write internally. The speed difference is like comparing BitTorrent to FTP, and why GPUs run orders of magnitude faster than CPUs (unfortunately limited to their narrow use cases).

"content-addressable memory with copy-on-write internally" are you describing what caches already kind of do, in a way (esp. if I mix that with: "memory appears as 1 contiguous address space to any process")? The good news would then be: we already have them :)

What remains, if I fully understand what you mean, seems to be: more cores. The other good news here is that it's in progress. Six years ago you would have gotten 6 to 8 cores on an enthusiast platform; now you would probably choose 12 to 16 cores on just a basic one (and even more on a modern enthusiast one).

There has been a pause in recent years, but it was basically Intel having process difficulties and being caught up by the rest of the industry - including some competitors with power consumption also in mind. And given what a high-perf CPU dissipates today, power consumption has also become key to unlocking raw performance anyway.


I don't know how to control for other factors, like bus speed and RAM bandwidth, but:

- 2007 single-core performance: Geekbench 5 score ~ 500.

- 2021 MacBook Air M1 single core: 1750

Ok, only a factor of 3.5 or so. And only 2x as many cores.

I'm comparing Core 2 Extreme to a low power portable design, albeit one with notably high single-core performance.


The shift to a focus on power consumption was already happening anyway, even on desktop, without the iPhone. CPUs were already in nuclear-reactor territory in terms of how much heat they produce per unit area.


If developers could trade more resource usage (CPU, memory, storage, network bandwidth) for a better developer experience, they would do it in a heartbeat, which is why, no matter how much computing power has progressed, most software doesn't seem to get any faster and keeps using more and more resources. On the plus side, software development is much easier today than it was decades ago.


A lot of what I'm doing now would have been insanely expensive or simply impossible when I was a kid. Just as a for instance, I have a half petabyte of video and music stored on a local server to play over my local network. That half petabyte of storage is fast enough to serve over the local network and cost less than 1/3 the price of 10 megabytes of storage in the advertisements in the article.


The difference is that you can now casually manipulate a spreadsheet of the size that would choke a supercomputer back then … on an iPad.

My watch has orders of magnitude more processing power and working memory than my first PC in the mid 90’s. It weighs maybe 200 grams and runs on battery power for ~20 hours.

If that doesn’t feel like progress then I dunno …


We sell entry-level computers that choke on small spreadsheets and handle them less responsively than a computer with a thousand times less computing power did 25 years ago.

Yes, we can handle much larger data now with proper hardware; however, most people don't do that. Their needs for documents and spreadsheets are just the same as they were earlier, but modern systems somehow manage to be worse despite having orders of magnitude more processing power and working memory.


The icon image for the hard drive on MacOS is larger than the entirety of the original Mac system disk.


YouTube, Netflix, and Zoom wave hello, in 1080p+ and stereo sound.



