PlayStation 5: the specs and the tech (eurogamer.net)
159 points by ericzawo on March 19, 2020 | 256 comments



I'm not a big gamer, but at this point it seems like Microsoft and Sony are just selling the same hardware. There are very minor differences, but I remember when N64 was such a vastly different experience from the PlayStation. Even the original Xbox was differentiated - it had a hard drive and an Intel Pentium III and Nvidia graphics. Now, they're basically identical hardware. Maybe the PS5 will load games a little faster and maybe the Xbox Series X will have a little more power, but there's nothing radically different.

In a lot of ways, it feels like I should be able to buy one and then just buy a license to play games designed for the other system. Why should I buy (basically) the same processor, same RAM, same graphics, etc. twice?

N64 and PS1 could do very different things and the consoles felt very different and led to very different games. With XSX and PS5, I'd rather throw Sony some money so I can play PS5 games on an Xbox Series X (or Microsoft to play XSX games on the PS5) than create a lot of e-waste buying an undifferentiated platform twice.

Am I missing something about the consoles (other than the whole business/market aspect)?


Just to inject a bit of history.

The previous eras of consoles had a design complexity that made it hard for developers to get anything done or tap the potential of the hardware. This mostly includes the fifth through seventh generation of video game consoles, so PS1-PS3 era, roughly 1993-2010.

The “radically different” architectures are sure a bit more exciting, but as a developer it’s gonna take a few years to really figure out how to use these different architectures to a reasonable semblance of their full potential. The Atari Jaguar had this crazy bus with a 68000 plus two custom CPUs on it, and to unlock its full potential you had to reduce bus contention, which meant paying attention to what data was local to the SRAM available to each processor. Maybe you needed to rearchitect your game to really unlock the potential of the Jaguar. Fast-forward to 2006 and the PlayStation 3 has this fancy new Cell processor where… guess what… you need to pay attention to what data is in the local SRAM for each core, and it was a nightmare to figure out how to really use that.
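To make the data-locality point concrete, here is a toy sketch (purely illustrative; the constant and function names are invented, and the "DMA" is just list slicing) of the pattern both the Jaguar and the Cell pushed on developers: stage a chunk into the small local store, compute there, then write results back, so the shared bus is only touched in bursts.

```python
# Illustrative sketch of the local-store pattern from the Jaguar/Cell era.
# A coprocessor with a tiny SRAM can't chase pointers through main memory;
# instead you "DMA" a chunk in, compute on it locally, and "DMA" results out,
# keeping the shared bus free for the other processors in between.

LOCAL_STORE_WORDS = 1024  # pretend our SPE-like core fits 1024 words locally

def process_chunked(main_memory, work):
    """Run `work` over a large buffer one local-store-sized chunk at a time."""
    results = []
    for base in range(0, len(main_memory), LOCAL_STORE_WORDS):
        local = main_memory[base:base + LOCAL_STORE_WORDS]  # burst transfer in
        results.extend(work(x) for x in local)              # compute locally
    return results                                          # burst transfer out
```

In real Cell code the next chunk's DMA would overlap the current chunk's compute (double buffering), which is exactly the kind of restructuring the comment is describing.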

Honestly, I think that console manufacturers were in danger of getting strangled by game developers if they kept up these radical architectures.

Exclusivity, on the other hand, is a key tool in Sony, Microsoft, and Nintendo’s arsenal. Good luck prying that away from them.


I definitely concur that the radically different architectures weren't good for developer productivity. There was a lot of crazy going on.

I guess it just seems like we're now wasting money on duplicate consoles. It feels like it would be better if Sony and Microsoft created a "standard" for manufacturers to build to. Zen 2 3.5GHz with a specified level of Radeon graphics and 16GB of RAM (with a bit more specifics). You could still have exclusives to your platform - Microsoft has DirectX, Xbox Live, and other exclusive software features. Developers would still want access to a platform for their games.

Don't they usually lose money on the actual hardware?

Nintendo is doing a slightly different thing with the Switch, but with the PS4/XBOne and PS5/XSX it seems like they're just making people pay twice for the same hardware because they've locked up some exclusives.

What's the business model there? 1) Lose lots of money selling a console; 2) We'll pay to develop first-party games that we'll sell to fewer people because they're exclusive; 3) We'll pay third-parties to make their games exclusive to our platform (as compensation for the lost sales due to exclusivity); 4) charge huge licensing fees for access to the platform for non-exclusive developers.

It just seems like there are a lot of money-losing (and consumer-happiness-losing) steps to get to #4. That's a lot of loss to society just to get to #4.

I guess the reason why they can charge such high prices for licensing is that developers know that consumers will likely only purchase one system. If people had access to both systems on a single box, then developers could negotiate them against each other. Now, developers can't negotiate them against each other unless they want to lose out on a lot of potential customers.


This is basically what the "steam machine" model was. Standardized controller, hardware specifications, OS, available from multiple vendors. It was unfortunately a bit of a flop.


Not really: Steam Machines were just PCs running Linux at a time when Linux was in a far worse state for gaming than it is now (and almost all of the Steam Machines produced either had weak Intel-based graphics or AMD GPUs whose drivers were still woefully broken). There was never any standard for hardware or even the controller, and most machines were overpriced, ugly, huge, or all of the above.

So of course they flopped.

Steam Machines should have used standardized hardware (even if only in tiers like "entry level" or "high end"), software, and peripherals that developers could rely on, and Valve should have made sure that the hardware actually fit its respective tier.

But Valve never did that, and while they still slowly update SteamOS, they have long been sidetracked by VR, so there is little chance of them trying a second time.


They'd have been better off defining a standard with something vaguely like a year, or pinned against a console generation as a reference point.


That could help some, but IMO the most important part would be stable drivers and standardized hardware. What sort of hardware that would be, and when they'd update it, isn't that important (though I disagree about the year part: updates should be much less frequent than yearly, otherwise it defeats the purpose of standardizing anything, since a year is too short a time span).


My impression of the "Steam Machine" and SteamOS was that they were at least intended as a hedging strategy, at a time when Microsoft was seriously considering becoming a games distributor on Windows PCs. In that respect, Valve seems to have succeeded: there is now enough awareness of SteamOS and Steam Machines that Valve can refocus on them should Microsoft start considering locking down Windows apps again.

I still think that with more investment Steam Machines and SteamOS could become a moderate success, but until push comes to shove it seems likely that Valve thinks concentrating on traditional OSes and PCs is far more lucrative, while maintaining work on Linux (see: Proton) so that Steam Machines and SteamOS always remain a feasible backup platform.


What you're describing is a PC.


To be fair, it’s more about the PC that itself became a gaming console. (Try buying a non-“gaming” motherboard or monitor or even RAM these days.)


I almost got kicked out of a local HW retailer for asking "where can I find the parts for grown-ups wanting to get work done?". I did it to provoke, of course, so I probably deserved it.

Even the threadripper motherboards have RGB and are branded as gaming motherboards, despite being beaten by $300 processors any day of the week for games.

It makes me sad.


> It feels like it would be better if Sony and Microsoft created a "standard" for manufacturers to build to.

You're talking about two companies notorious for ignoring existing APIs and standards, coming up with their own proprietary systems, using marketing and FUD to knock down and strangle existing and emerging standards, and then spending the rest of their effort locking customers into their proprietary stuff. And they're not alone. It's the standard tech company mentality: Deliberately make your product different and incompatible with everyone else and compete instead of cooperate. Best case outcome is monopoly and customer lock-in. Most likely case is no one solution gains enough traction and you end up with 15 incompatible chat apps. Worst case is you emerge from the fight as HD-DVD.


I hadn’t known about the Jaguar’s hardware complexity until that article that was on HN a few days ago; it’s really fascinating. I think a lot of the complexity comes from needing to cram what’s essentially a high-end gaming PC into a <$500 price point. You can do it by creating specialized hardware and putting developers face to face with the metal, but that comes at a cost of complexity.


The complexity is only an issue at the beginning; having standardized hardware, without having to deal with hardware fragmentation and driver issues, is usually seen in the community as a positive.


I always wondered who thought it was a good idea to make a gaming console with seven cores in 2006. IIRC, most PC games at the time barely used more than one thread. Even in 2020, most games have a main thread that pegs one core at 100% and creates a hard ceiling for FPS. What were they thinking?
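For what it's worth, Amdahl's law quantifies the ceiling being described here; the 60% figure below is invented purely for illustration.

```python
# Amdahl's law: if a fraction of each frame is stuck on a single main thread,
# extra cores stop helping very quickly. The numbers here are made up.

def amdahl_speedup(serial_fraction, cores):
    """Best-case speedup when `serial_fraction` of the work is single-threaded."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A game whose main thread is 60% of the frame gets under 1.6x from 7 cores,
# no matter how many more you add: amdahl_speedup(0.6, 7) is about 1.52.
```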


The PS3 originally planned to run the game on the PPE and shaders on the SPEs; you could think of it as a single-core CPU with a 7-core GPU on one chip. At some point they realized this wasn't powerful enough so they added the RSX but that left the SPEs with no particular job to do. It was too late to redesign Cell so they just shipped it and punted the problem to game developers.

In retrospect, a G5+RSX with unified memory probably would have been much better.


It was supposed to have two Cells, with one of them focused on rasterization (sort of like Larrabee). To meet that, they were expecting to clock it at 4GHz. It's really one of the first casualties of the end of Moore's law.

And I'd say that the SPEs were fantastic, and in a lot of ways were the only thing making up for the anemic GPU. It took a while for game devs to reorient their thinking from "this thread is the render thread, this thread is the physics thread" to "this is the dependency graph of work that needs to be done; have all the (even heterogeneous) cores pick work off that graph and complete it as fast as possible in order", but that was the right direction to go either way. That's about the only way to take advantage of the PS4/Xbone effectively too.
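That shift in thinking can be sketched in a few lines. This is a hypothetical, single-threaded rendition of the idea, not anything from an actual console SDK; all names are made up.

```python
# Toy version of the "dependency graph of work" model: jobs declare their
# prerequisites, and an idle core grabs the next job whose inputs are ready,
# instead of being pinned as "the physics thread" or "the render thread".

from collections import deque

def run_job_graph(jobs, deps):
    """jobs: {name: callable}; deps: {name: [prerequisites]}. Returns run order."""
    waiting = {name: set(deps.get(name, ())) for name in jobs}
    ready = deque(sorted(n for n, d in waiting.items() if not d))
    order = []
    while ready:
        name = ready.popleft()   # an idle "core" picks work off the graph
        jobs[name]()
        order.append(name)
        for other, d in waiting.items():
            if name in d:
                d.discard(name)
                if not d:        # last prerequisite done: job becomes ready
                    ready.append(other)
    return order
```

A real job system would run this with a pool of worker threads and lock-free queues, but the scheduling idea is the same.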


> In retrospect, a G5+RSX with unified memory probably would have been much better.

Which is not _that_ far from what the 360 ended up with. The 360 CPU is basically a triple-core variant of the Cell without the SPEs, and it has 512MB of unified memory.


I'll throw out there that a G5 was never in the cards from a count count budget perspective.


*from a gate count perspective


Maybe they took some cues from the people who released an eight-processor system in 1994? Granted, only two of those are CPUs.


> The previous eras of consoles had a design complexity that made it hard for developers to get anything done or tap the potential of the hardware.

I thought the developers that did figure out how to tap the potential of the hardware did quite well?


The PS2 was where this peaked. It had a weird architecture with multiple CPUs and coprocessors, but was popular enough (dominant, really) that devs really invested in it in some cases. Shadow of the Colossus is a good example.

They tried the same playbook again with the PS3, but panicked when they realized how hard the Cell SPEs would be to program to execute shaders effectively, and added an Nvidia GPU. The PS3 was not sufficiently more popular than the Xbox 360 to justify the level of investment needed to really capitalize on the unusual architecture.


Well the panic was when Toshiba started slipping on their 100 SPE coprocessor design.


Do you want your game creators focused on twonking around in the guts of an obscure, probably poorly documented and understood one-off architecture, or focused on creating an interesting and enjoyable game?


They can do both. Often the constraints of the hardware allow for some very unique games.


This is a false dichotomy. If anything, I could see arguments that having to squeeze out capability is a filter that only lets things into the game that pass a development budget, not just all you can throw at it.


I don't agree that it's a false dichotomy.

You have a limited budget and a limited number of employee-hours with which to create and ship a game.

Every hour spent fighting with the platform itself is very literally an hour spent not creating cool and fun stuff.

That time investment would only be worth it if a "novel" platform delivered huge gains in return for time invested. However, IMO we've long passed the point where that's feasible. CPUs and GPUs are very mature at this point: they are the result of decades of evolution and trillions of dollars of R&D.

Inventing your own radically-different, competitive, bespoke architecture for a game console at this point would require an investment orders of magnitude beyond the expected earning potential of these systems.

It also wouldn't be guaranteed to work.


Agreed, I used too strong language. I should have said it could be a false dichotomy.

My perspective is colored by being a huge fan of the demoscene and loving older games. A large part of what made some of them fun was understanding just how hard they were to do. It's also, I feel, why so many games did not have as much in-game help: they couldn't fit it in. There is something to be said for that.

My hunch is there was a selective pressure such that only really amazing things survived. That said, how many potentially amazing things were killed fighting the platforms? Could Daikatana have been amazing with more powerful tools? I honestly don't know.


Ars is running a series of game developer interviews.

All of them have a common theme: fighting with the platform is what made all of those classic games a success.

Demoscene and game development communities have a different point of view about "fighting the platform".


I've watched them -- they're great!

Those developers succeeded despite the struggles inherent to their given platforms.

If you watch (and pay attention to) the videos you'd see what many of those struggles were. Hardware bugs. Missing/bad documentation. Bugs in the software toolchain.

Those things don't make standout games better. They make every other game worse.

(And even the standout games would have been better -- or at least developed in less time -- had the developers not been forced to heroically struggle through those problems...)


They make them better, because had those obstacles not been there in the first place, the creative game design decisions made in spite of them would never have been taken.

For example, the shadow character in Prince of Persia.

Also, I am talking from experience here: I started coding in 1986 and was into the Portuguese PC/Amiga demoscene.


Having watched that, I would agree that the memory limitations of the Apple II directly led to the creation of that brilliant game feature. =)

However, I was responding to the comment chain below this comment:

   "Do you want your game creators focuses on 
   twonking around in the guts of an obscure, 
   probably poorly documented and understood 
   one-off architecture"
This was not about hardware limitations like the RAM limits Mechner faced while writing PoP. This is about bugs and other such things.

The XOR instruction used to draw the shadow-man wasn't a "wonky" feature of the Apple II hardware... it was a pretty standard and well-documented feature.

In fact, in the video, if I recall correctly, Mechner specifically mentioned seeing the XOR instruction in the documentation. It was not "poorly documented." The Apple II was quite well documented.
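For readers who haven't seen the trick being discussed, here is a tiny illustrative sketch (the real thing is done with the 6502's EOR instruction over screen memory; this 1-D "framebuffer" is just a stand-in): XOR-drawing a sprite flips pixels against the background, and XOR-drawing it again restores the background exactly, so no saved copy of the screen is needed.

```python
# XOR compositing in miniature: draw once to show the shadow, draw again at
# the same spot to erase it perfectly.

def xor_blit(framebuffer, sprite, offset):
    for i, s in enumerate(sprite):
        framebuffer[offset + i] ^= s

screen = [0b1010, 0b1100, 0b0011, 0b0101]
shadow = [0b1111, 0b1111]

xor_blit(screen, shadow, 1)  # shadow-man appears: background bits inverted
xor_blit(screen, shadow, 1)  # second XOR cancels the first: background is back
```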

    Also, I am talking out of experience here, started coding in 1986 
    and was into portuguese demoscene PC/Amiga. 
I am always in awe of the demoscene... props to you.

However even though I am not a demoscene or game developer I am speaking from 20+ years software development experience. Nearly as long as you. I did not start doing this yesterday.


Hasn't NUMA support improved in the last 14 years though? Surely someone will try again.


Yeah, Xbox and PlayStation are basically differentiating on services and exclusive IP at this point. Things like ability to cross play, online services, etc. Xbox Live used to be huge, and then Sony caught up. Microsoft tried to shift the Xbox One into a home entertainment center and that failed miserably. Sony has stuff like the PSVR.

Nintendo's really the only one really trying to do something different and not just go for the latest and greatest hardware. They have the benefit of having a lot of beloved IP and ability to make great games around them, so raw computing power is less of a focus.


> Nintendo's really the only one really trying to do something different and not just go for the latest and greatest hardware.

At this point, that version of “different” is basically “sell a mid-range phone with a gamepad attached to it”. Don’t get me wrong, the Switch is great, but it’s very boring on the inside.


"Lateral thinking with withered technology", mate. It's a long-standing Nintendo philosophy.


> Don’t get me wrong, the Switch is great, but it’s very boring on the inside.

That's been their strategy almost every console generation since the NES, which was an 8-bit console when 16-bit processors were common. I think maybe the N64 had some exciting hardware on the inside (at the time), but that's pretty much it. They look for excitement in the controller.


No, that wasn’t really true during the third and fourth generations (NES, SNES). Other consoles contemporary to the NES also used 8-bit processors. The definition of 8-bit depends on how you look at it, but I think we can agree that the Z80 is pretty similar to a 6502.

The SNES and Genesis continued the trend. You took a decent but cheap processor (65C816 or 68000), made a custom chip to draw sprites on the screen, and paired the whole thing with a dedicated processor to handle audio. You might call the 68000 a “32-bit processor,” but however you label it, it’s not so different from a 65C816: both have 16-bit ALUs and 24-bit address buses.

It wasn’t until later that we got people adding MORE CHIPS to make it go MORE FAST, and Nintendo stepping back from the race.

I did a homebrew project for the NES and before I started, I spent a long time sitting down analyzing how these consoles worked so I could choose the console I liked. The 6502 and Z80, despite their design differences and different clock speeds, end up being fairly similar in terms of performance (although I think the SMS is slightly faster).


You're right about the controller, but I don't know ... IMHO N64 is where they started to get less exciting as far as their internal hardware, but capabilities were getting more exciting overall.

The NES's CPU was a 6502 without decimal mode and certainly not exciting, but its PPU was unusual: 64 sprites, 56 colors, and a 2x2-screen virtual tilemap. I don't think any system had a big virtual tilemap quite like that at the time.

The SNES is definitely unique: rotation and other transforms on a huge tilemap, in hardware. No programmable sub-CPU or DSP is pushing pixels at all; you literally program 8 registers on the video chip to set the scaleX, scaleY, and other parameters, and the graphics chip outputs transformed pixels.
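The register-driven model described above can be approximated in a few lines. This is a loose, scale-only sketch of the idea (real Mode 7 evaluates a full affine matrix per pixel, and none of these names come from actual hardware docs):

```python
# Loose sketch of the Mode 7 idea: software only supplies a couple of scale
# parameters, and the "hardware" maps every output pixel back into the
# tilemap, wrapping at the edges like the big virtual tilemap does.

def mode7_scale(tilemap, scale_x, scale_y):
    h, w = len(tilemap), len(tilemap[0])
    return [
        [tilemap[int(y * scale_y) % h][int(x * scale_x) % w] for x in range(w)]
        for y in range(h)
    ]
```

The point of the design is that the CPU's per-frame cost is writing a handful of parameters, not touching any pixels.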

The 32X brought similar capability to the Genesis, but it had SH-2s performing transforms on textures, I think; it was more similar to the SuperFX than to the SNES approach.

IIRC the N64 was pretty much an upgraded PS1 polygon engine (the hardware was SGI-derived, like the PS1): one with a higher clock rate and a separate DSP for calculations instead of the GTE, but hamstrung by cramped cartridge ROM instead of a CD and a really small texture cache.

After that, Nintendo started using ATI for graphics hardware and adopted an IBM-designed PowerPC-arch CPU (as did the Xbox 360), a combination that lasted until the Wii U.


The N64's RSP was quite exciting for the era because of the high degree of programmability. As it turned out, though, almost nobody did anything to program it. They used the microcode that shipped with the devkit. And since the N64 was the odd one out of the pack in other respects, it never really took off in terms of developer support.

So Nintendo's biggest lesson from the N64 was, in fact, that they needed to make boring hardware, because weird stuff wasn't going to be adopted anymore.

Sony learned the same thing almost a decade later with the PS3.


Education is funny. Sony should have learned that they had a killer feature with the PS2, if not for the ballooning price. And having the value-add that old games would still work! Just amazing.


I dunno, the NES was pretty unique due to CPU selection, but generally in line with other 8-bit computers and systems of the era. Key developers even ported lots of games almost 1:1 from some home computers like the MSX.

The SNES was basically a mid-range Apple II with some slightly different hardware. The scaling and rotation hardware was getting to be pretty common in arcades and had been around for quite a while in Sega and Taito games. Nintendo focused on tilemaps instead of sprites, and later games quickly added more hardware in the carts to overcome the limitations beyond what F-Zero and Pilotwings could do.

I agree with you on the N64.

Sega had a completely different philosophy: theoretical backwards compatibility all the way back to the beginning. The SG-1000 was a spec revision of the Coleco and MSX designs, and the Mark III was a rev of the SG-1000. The Genesis/Mega Drive kept the same hardware, but then added an entire other console's hardware around it (IIRC Master System compatibility was just a pin adapter). The Sega CD was a console wrapped around the Genesis. The 32X was a console plugged into that.

The Saturn dropped the 8-bit stuff, but kept theoretical compatibility with the Mega Drive (it had a 68000 in it and a cartridge slot, and was rumored at one point to be geared up to be fully compatible, IIRC).

Sony more or less followed suit for a while as well. The PS2 basically contained a PS1 in it.


> The SNES was basically a mid-range Apple II with some slightly different hardware.

I assume you mean the IIgs with the 65816 like the SNES.

This made me dive into what the IIgs video hardware was like. I know it had an Ensoniq chip for 32-voice sound but didn't know too much about the graphics. Looks like it could do 640x200 with 256 colors.

The Wolfenstein 3d port is interesting even if it does need an accelerator card to run decently. I'm not sure how that was pulled off on the SNES (I don't think Wolf3d had the SuperFX).


> N64 was pretty much an upgraded PS1 polygon engine (hardware was SGI derived like the PS1)

N64 was literally SGI derived, but PS1 was at best "SGI inspired". This comparison overstates the similarities.

> After that Nintendo started using ATI for graphics hardware

Nintendo was using the ArtX designs before ATI bought them. It was a very different design than the native ATI line.


This high-level difference is really interesting, thank you. Do you know of any good more in-depth resources? Or even a book? I have tried to search but my DDG-foo just gives me dodgy emulator websites with none of the type of content I'm after.


I don't know of a great resource that compares one platform to another like that excellent parent post up there.

However, if you want to know more about individual platforms, you may want to look for:

1. Homebrew coding tutorials for various retro platforms

2. "War stories" from devs who pulled off these tricks.

Gamehut is fantastic, with some of each: https://www.youtube.com/channel/UCfVFSjHQ57zyxajhhRc7i0g/vid...

Ars Technica also has a "war stories" video series that is quite good!


And with MS saying that exclusives would be for Xbox and PC, it makes more sense now to have a gaming PC and a PlayStation.


Yeah, anyone serious about gaming has a PC and will go with the PS5 this gen.

With Microsoft and Sony getting into game streaming, even someone without a good PC will be able to play Xbox games; Xbox exclusives are worthless and the library doesn't have any real competitive IP.

The Xbox console is essentially DOA.


What Nintendo is doing is different, but not that much anymore. They're using an ARMv8 CPU and an Nvidia Tegra GPU.

They're pretty close to what you'd find on a top tier smartphone really


>> They're pretty close to what you'd find on a top tier smartphone really

A top tier smartphone from a few years ago, maybe. My not-so-top-tier phone has better specs than my Switch.

I held off from getting a Switch until this past November, and it was a weird product called Ring Fit Adventure that ultimately made me pull the trigger.


How have you found that? I've been debating the same purchase.


You have to commit to it and have a bit of self discipline like any other exercise regime. But if you can do that it's a lot more fun than just running on a treadmill and lifting weights. The game inside is actually pretty good, and has some depth (but that depth kinda slowly unlocks as you play).


I run ~3 miles every day on the treadmill and dislike just about every minute of it. If I can get a fun gamified workout in the same amount of time, that sounds way better.


I like it a lot. I mix it with Just Dance and Fitness Boxing. I only have fitness based software on my Switch.


I assumed just dance was awful for the longest time (motion controls, dancing, yearly releases, tacky box art - no way this could be any good right?). But then I played it at a party and was blown away by how much goofy fun it is, especially when everyone's inebriated.


It also gets your heart rate up to cardio levels (according to my fitbit at least)


It's very hard to buy right now; there's been a run on them due to coronavirus, and the rings are manufactured in China, so supplies may not be replenished. They're going for hundreds of dollars.


> A top tier smartphone from a few years ago, maybe.

Well, the Switch was released several years ago, so it makes sense.


It wasn't top tier when it was released either.

The 2017 Gen 1 Switch used a variant of the X1 chip that had been around in other devices since 2015.


Same as me. But I'm looking forward to Flight Simulator 2020; just started getting the gear for X-Plane 11.


For the developers and third-party publishers, it's a really big deal to have a standard of some kind. It means that they don't have to spend their time making the market: the console makers do that, and then they just sell into it. There is always grumbling about the various overheads of consoles, but as long as it results in unique experiences that people pay for, this state of affairs probably isn't going away either.

And the differentiation of special case acceleration like what we're seeing on these architectures is less exciting on its surface than strapping a high nominal spec to some wild programming model, but it's also a more sane way of achieving next-gen performance.

Cerny's PS5 talk put it this way: The PS1's "time to triangle" (i.e. the basics of the asset pipeline are proven and real development can commence) took 1-2 months, the PS2 doubled that, the PS3 doubled that again, and everyone protested so much that they got it back down to 1-2 months for PS4, and with PS5 they are aiming for under a month, which is actually really ambitious for any development platform!

Also, I sense that the special-case approach also has neat goodies in store down the line because console devs always end up abusing hardware features to get results beyond the original requirements. There's stuff in the PS5 arch intended for DSP code, for raytracing, for decompression, and probably other things too, but most of these things are a little bit more open than just being an accelerated single-task function. I haven't seen what they're doing on the XBox end yet but I wouldn't be surprised if it has some good details to geek out over too. These functions are uniformly win-wins since they give an immediate result that accelerates something common, and then we find out what else they're good for later. Sometimes they aren't, but it's common that they surprise.


Anyone who writes for a platform other than x86 is going to significantly complicate cross-platform ports for their platform. The other competitor would retain commonality with PC. So the days of POWER or Cell or whatever are done.

Once you accept x86 as a given, there’s only two companies of any significance on the market. You could hypothetically do an Intel CPU and an NVIDIA or AMD discrete GPU but then you don’t have an SOC and that increases system integration costs. You’d have to talk two competitors into licensing you their IP for integration and it’s just not really either easy or something those companies really do lightly or often.

AMD is basically the only game in town for a fast CPU core integrated with a fast GPU on a single package, at least for now. After Intel starts getting serious about XE you could hypothetically see a high performance SOC.

As in many areas, the world would be drastically different if patent law didn’t protect a processor’s “API” and x86 could be freely implemented by many companies. For some reason it’s (probably) legal with a software API but not a hardware one, despite the “magic” all being in the actual implementation and the instructions just being “function calls”.
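To make the "instructions are just function calls" analogy literal, here is a toy dispatch-table ISA (the opcodes are entirely invented): the "API" is the table's names and behavior, and two vendors could implement the same table with completely different internals, which is what a freely implementable x86 would allow.

```python
# A toy ISA where each opcode is an entry in a dispatch table. The interface
# is the opcode names and their effects on the registers; the "magic" is
# whatever implementation sits behind the table.

def run(program, regs):
    ops = {
        "MOVI": lambda r, d, v: r.__setitem__(d, v),            # load immediate
        "ADD":  lambda r, d, s: r.__setitem__(d, r[d] + r[s]),  # d += s
        "MUL":  lambda r, d, s: r.__setitem__(d, r[d] * r[s]),  # d *= s
    }
    for opcode, a, b in program:
        ops[opcode](regs, a, b)   # executing an instruction is a function call
    return regs
```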


Non-technical people can't begin to appreciate this problem, much less bother to solve it or even know how. Engineers need to lobby. Have you considered meeting your senator and creating a powerful coalition of interested parties to bring about legislative change? You can even recruit an economist to estimate the annual loss in economic surplus due to hardware API lock-in. Politicians often cave to narrowly interested groups that complain about a very tangible grievance (e.g. a NIMBY screaming "that building would block my view and increase traffic!") or very large corporations with a lot of resources. It doesn't have to be that way.


You might be able to get the average person on board if approached from a repair specifications angle.

What a given part does, what size it does it within, where it links up to other parts (with what sort of links), and how well it does the thing are all facts. After knowing those facts a 'cleanroom' implementation of a generic part shouldn't infringe on anything.

Convince the average person of that, and then extrapolate that to making a machine that responds the same exact way to a 3rd party's recipe.

Often, given physics and current or near possible tech, there's only a small number of possible answers, and for a given set of criteria one is going to be 'the best' answer.

Physics and Math are tools that express how nature works.


>AMD is basically the only game in town for a fast CPU core integrated with a fast GPU on a single package, at least for now. After Intel starts getting serious about XE you could hypothetically see a high performance SOC.

Intel could have done it with their Iris graphics as well if they wanted to. The problem is that Intel doesn't like low margins; they are so focused on their high-margin business that they let the iPhone SoC and the console SoCs slip away.

Not saying this is a bad thing; financially, Intel is still doing great.


>There are very minor differences, but I remember when N64 was such a vastly different experience from the PlayStation. Even the original Xbox was differentiated

Did it ever matter though? I played a lot of video games and owned nearly every system. I played the systems that had the games I liked, and I don't recall anyone I knew doing differently. Games sell consoles, not specs.

That said, I can think of one counter-example: the Sega Genesis and their "Genesis does what Nintendon't" campaign, as well as the subsequent "blast processing" marketing push. I remember kids talking about "blast processing" during our console holy wars at lunch. The Genesis was also initially positioned against the NES.


And specs make games, especially where game design is concerned.


No they don't. Gameplay, story, character development... these things make games. Specs can enhance the experience, but a good looking piece of crap is still a piece of crap.

The XBOne is more powerful than the PS4, but it got murdered last generation. The 3DO and Jaguar both had specs that topped the competition at the time, but they didn't have the games and they failed.


Specs aren't about teraflops or triangles per second, but rather about what experiences the hardware allows you to deliver, or what constraints it imposes on game design that have to be worked around to deliver a good experience.

Most here are discussing this from the point of view of gamers; I'd rather see it from the game development point of view.


Sure, but ultimately there are things more important than visuals and load times. Look at the Switch right now; it's basically a high-end phone, but there are some amazing games being developed for it.

If I were making video games I know I'd get excited about developing for a platform which lets me do more than I could before, so I get it, but I don't recall any console in history winning their generation because of specs.


Again you're misreading what I write: specs go both ways, as limiting or as liberating. It is all a matter of what the platform provides and making games as a function of what it allows.

In no moment have I equated specs to super computers on the bleeding edge of hardware engineering.


>In no moment have I equated specs to super computers on the bleeding edge of hardware engineering.

Don't know how you came to the conclusion that I implied that, but I guess I'm just completely missing the point you're trying to make here. You said:

>And specs make games, specially in what concerns game design.

And I'm saying that specs mean very little in relation to gameplay and story. Yeah, obviously you wouldn't be successful releasing the NES today, but no one would do that.

The switch has been beating the crap out of the XBOne since release. The reason is that it has better games. At the end of the day, consumers want fun games. All talk of specs and pretty graphics fades away if your console doesn't have those. It's like web devs who get all excited about their framework of choice. No one else cares.

Maybe I just read too much into that single sentence you first replied with?


So I guess we have been talking past each other.

That is exactly my point: the Switch's specs led the game designers targeting it to create game designs that would accommodate a more interesting experience, given the target platform.

Had the same team started with the XBOne, the gaming experience would have had a completely different design.


I disagree, each company has a different take on a similar set of hardware, but the results are quite divergent. In particular Sony is pursuing some interesting, unique angles such as the new 3D Audio stuff, and the high performance SSD.

I found this comparison quite interesting, it goes into more detail about each platforms differences: https://kotaku.com/playstation-5-versus-xbox-series-x-the-te...


It is definitely not the same hardware; just because they both have AMD CPUs/GPUs in them doesn't mean programming them is anything like programming a PC.

You are missing the way the CPUs, GPU, audio controllers and DMA channels are actually laid out, the low-level programming aspect (console OSes are pretty much like bare-metal programming), and how their GPU capabilities and shading languages differ from mainstream hardware and hardware-agnostic 3D APIs.


Agreed - I anticipate the experiences made possible by these technological choices will be novel:

- Instant-feeling gameplay: startup, loading the next area, and world density, at a speed not seen since the Vectrex era (1980s).

- Uhhh, hopefully more than just that.


To the gamer, none of that should matter beyond the game experiences that are possible, the controls allowed, and the exclusives. If the developers cannot utilize the API to properly harness the power, nothing else matters. The only spec you can care about is the SSD speed, which most can relate to.


> Am I missing something about the consoles

They are just PCs with shitty OSes and lock-ins now


I don't think it's fair to say it's a shitty OS. They're single-purpose PCs with an OS tuned for single-app performance. One is NT-based and the other will probably be BSD-based. I believe the Switch is a fork of Android.


Switch uses a micro kernel OS, it has nothing to do with Android.


Apparently there's a bunch of FreeBSD code in it as well.


It makes a bit more (business-)sense thinking of those consoles as hardware-dongles to play certain exclusive games, the same way coffee-machines have become hardware dongles for certain types of coffee capsules, etc...

Thankfully the PC as a gaming platform has survived despite being declared dead for the last 20 years, and more and more console games are making the jump and are also available on PC (there's also exclusivity deals on PC, see the Epic Store, but those are usually limited to a few months).

The whole situation sucks for users, but unfortunately it's the business model everybody is following today.


If they keep going this way, it's probably only a matter of time before a common "gaming box" standard is set and these companies just become manufacturers against this standard.

It's happened before to various levels of success.


Already tried by Steam with SteamOS and Steam Machines [1]. They even developed a custom controller. I haven't heard much about it since its launch roughly 5 years ago; I would assume it is all but dead now.

1. https://en.wikipedia.org/wiki/Steam_Machine_(hardware_platfo...


It was tried at least a couple of times (3DO and Steam machines) but it failed for various reasons, I think mostly because the result wasn't very cost-competitive with proprietary consoles.


Yeah, and others: MSX, Nuon, and the modern PC compatible systems.

However, outside of gaming (and even computing), it becomes very common once it's no longer a differentiator to build your own version of a thing. Consider VHS, DVD, shipping containers, standardized faucet fixtures and so on.


> In a lot of ways, it feels like I should be able to buy one and then just buy a license to play games designed for the other system

This is slowly becoming the state of reality for Xbox, at least in regards to PCs. They've dropped exclusivity for all of their titles, meaning every game Microsoft produces will also be released for PC. And for many of those, if you buy the game, you get it for both the Xbox and PC.

It remains to be seen if the trend continues to grow outside Microsoft though, and it seems unlikely that it would occur between Xbox and Playstation.


From a systemic perspective, it never really made sense for lots of consumers to be buying multiple consoles. Sure, the N64 and PS1 were good at different things, but you could have built a super-console with most of the advantages of both for less than the combined cost of an N64 and PS1. And then games could use both N64 and Playstation features simultaneously.

The reason for owning multiple consoles, then and now, is to play exclusives. And yes, it totally creates e-waste, but what are you going to do about it?

Exclusives have always been anti-consumer.


That's been the case forever though. How many systems used the 6502 chips? Vic-20, C64, NES, 2600, Apple IIe? It was often the same chip, with slightly different (rarely significantly higher powered, just different) memory and i/o around it.


Despite only having 2 (ok 3) popular CPUs (Z80, 6502 and the 6809), 8-bit home computers had massively different hardware.

The CPU only played a very minor role in those machines and mainly was used as a controller for the video and audio custom chips which did the actual hard work.


I agree with you, and am having a hard time talking myself into a new console purchase. I have a gaming PC, which is what these consoles have become with the added insult of paying subscription fees for online play.


Aren’t most games available for both major consoles?


Most games are, but the exclusives tend to be very good.


Most are, but I will be buying a PlayStation just because of FFVII. I so wish they had made it a PS5 launch title; I would have bought both together.


it's true

The further back you go, the deeper the constraints were, in a way. There's a book about Nintendo with a chapter on how the Nintendo hardware team managed to add (or free) a kilobyte of ROM and then told the software team. Everybody stopped, because that kilobyte meant they could make a whole new level.

Today, even with 2x the TFLOPS, I don't think it will make that kind of difference.


Yeah I think you're right, but the main way to differentiate is probably the exclusives. Halo -> Xbox, Resident Evil etc. -> PlayStation.


you just described a PC


If you can’t install Linux on it, it’s not a PC.


so the ps3 was a PC?


Shorter load times is an attractive argument, but it doesn't matter when the #1 issue with current-gen consoles is how unoptimized most games are. The traditional argument to game consoles was that it was cheaper and allowed developers to make the most out of known hardware configurations.

As things are now, flagship games on the PS4/Xbox One struggle to stay at 30FPS, frequently dropping to 20FPS or less, without the option to reduce graphics quality or resolution PC gamers have. All the while the low power Nintendo Switch is a joy to game on because most games are specifically optimized for it.


> Shorter load times is an attractive argument,

Visibly shorter loading times are not the main effect. Cerny's talk was enlightening. A big part of level design is making sure that the player doesn't move so fast that the hardware cannot deliver the assets any more. So you get corridors and corners, and elevators, and whatever tricks devs use just to make sure they can load the next part of the level fast enough without showing a loading screen. If a game can request assets and get them almost instantly, you don't have to redesign levels again and again because "yes, this looks the way we want, but the hardware isn't fast enough".

For me the interesting question is whether developers will use the PlayStation-specific options. For PlayStation exclusives it's a given, but for multi-platform releases there probably won't be different levels for PlayStation vs PC/Xbox, so much of it could be wasted. Same for all the other things: it doesn't help much that the SSD controller has six levels of priority if game devs don't use them because the PC has only two.
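The "six levels of priority" idea can be sketched with a toy request queue. To be clear, the tier names and numbers below are invented for illustration; the actual PlayStation I/O API is not public.

```python
import heapq

# Hypothetical priority tiers, loosely inspired by the "six levels" mentioned
# above. All names here are made up for illustration.
CRITICAL, HIGH, MEDIUM, LOW = 0, 1, 2, 3

class AssetQueue:
    """Serve asset read requests strictly by priority tier."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a tier

    def request(self, priority, asset):
        heapq.heappush(self._heap, (priority, self._seq, asset))
        self._seq += 1

    def next_read(self):
        # Pop the highest-priority (lowest-numbered) pending request.
        return heapq.heappop(self._heap)[2]

q = AssetQueue()
q.request(LOW, "distant_terrain.tex")
q.request(CRITICAL, "player_mesh.bin")
q.request(MEDIUM, "ambient_audio.bank")
# The player mesh is served first even though it was requested second.
```

The point of the comment stands either way: hardware tiers only matter if the streaming code actually files its requests into them.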


The talk for those interested: https://www.youtube.com/watch?v=ph8LyNIT9sg

I found it really quite interesting, even if I don't game on consoles primarily.


The Crash Team Racing (CTR) game, a remake of a PS1 title, takes anywhere between 2 and 5 minutes to start up on my Xbox One. It's absolutely ridiculous.


It's funny that games would load faster on the original C64 with a 1541 drive than they do today.


I have the PS4 and I hate how long it takes for games and menus and all of that stuff to transition. I just want my N64 back.


wow it loads waaaay faster than that on my PS4


Yeah it's brutal. Even loading the Playstation Store takes wayyy too long on my original PS4. It's like it's doing an npm install every time I want to spend money...


I play Crash Team Racing Nitro-Fueled with my kids. I have calculated that you can spend only about 3/4 of the time actually playing the game if you don't play the same map every time; 1/4 is necessarily wasted looking at various loading screens. Pretty disappointing for a relatively simple game.


It’s a lot better than it was on the PS3. In both cases I suspect the problem is that the store app is a webview wrapper for a “webapp”. Though on the PS4 I also suspect they’ve stuck web-tech stuff all over the main menu, and it only remains tolerably quick because the machine’s a beast.


Downloading updates for games has been a sore point with me, particularly when I get a new game. Oh great, 8 hours of downloads, so much for that instant gratification of playing it tonight...

Same with ongoing updates. I replaced my original HDD with a FireCuda, which helps some, but I feel like the PS online infrastructure runs on fractional T1s and Sun boxes or something.


I have a theory that it's more to do with their server-side/networking architecture than anything to do with the PS4 itself.

Do you find it fails to work when you're most interested in buying something? Because that's how it goes with me, literally as soon as I'm like "oh yes I would like to buy that game" the store stops working completely.


and then every menu tab on the left that you scroll through takes forever as well.


The current gen hardware is over 6 years old, and when writing for it you can definitely feel its age. Modern PC hardware even at the lower end is so much better and easier to deal with. Also you are forced to make the game run on the lowest specification of any console version, so that means the least powerful machine of all the consoles of this generation.

For a low end PC system we are talking about having 8 cores, 16GB of ram, a decent video card, and an SSD usually or at least a disk capable of 100+ MB/s. Also the platform is pretty easy to develop for and release on.

For a console you get 5 or 6 cores to use, each of which is not as good as a modern CPU core, 8GB of shared ram, and a disk where you have to handle being only able to read from it at 30MB/s. You also have to deal with the black box of vendor tools, that don't necessarily work easily.

So the whole team ends up spending a lot of time optimising the game just to run on these underpowered, lowest-spec'd consoles: removing extra details, downgrading graphics, culling data, compressing, managing memory, just so it can run at 30fps.
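A rough back-of-envelope shows why that ~30MB/s read budget dominates the optimisation work (the level size and the two SSD rates below are illustrative, not measured figures):

```python
# Time to stream a hypothetical 4 GB level at various sustained read rates.
level_assets_mb = 4000

rates_mb_s = {
    "console HDD": 30,    # the ~30 MB/s figure mentioned above
    "SATA SSD": 500,      # typical desktop SATA SSD
    "NVMe SSD": 5000,     # roughly the next-gen consoles' raw ballpark
}

for name, rate in rates_mb_s.items():
    print(f"{name}: {level_assets_mb / rate:.1f} s")
# console HDD: 133.3 s, SATA SSD: 8.0 s, NVMe SSD: 0.8 s
```

Two minutes versus under a second for the same data is the gap all that culling and compressing has to paper over.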


> For a low end PC system we are talking about having 8 cores, 16GB of ram, a decent video card, and an SSD usually or at least a disk capable of 100+ MB/s.

That's definitely not low-end. More like mid to high-end.

Low-end is still 2 cores, 4 GB of RAM, with integrated graphics.


8 processor threads, 4 physical cores.

Yes, it's more of a mid-tier system overall, but it's the low end of what you would buy to play games on.

Also given that games still target the consoles you can get away with 8GB of ram easy.


You said cores, not threads. In any case, 2 cores are still a thing. Even the new Macbook Air has 2 cores, so yeah...

Further, no one targets 16 GB RAM for a game. Not even the non-released, next-gen consoles have 16 GB dedicated CPU memory.

Similarly, not everyone has an SSD, and therefore companies cannot target that. Only in a few months, with the new generation of consoles, will we see actual games targeting SSD speeds.


Yes, I made a mistake while editing the comment; I had processor threads in there before and somehow got mixed up with cores. My apologies.

As with the other points though, a low-mid tier gaming PC is something like 4 core/8 thread, 16GB ram, a HDD for bulk storage and an SSD for system & some select game(s). Someone that is buying a Macbook Air or an Ultrabook or something with only 2 cores is selecting it for something else rather than gaming. So irrelevant to the discussion.

The reason no big publisher targets 16GB for a game is because of these old consoles. The games can and do use more memory if available but they are limited by having to target these low spec consoles. They're also required to support the slow HDD speed of the current generation of consoles, so no exploiting SSDs either. This takes significant time and resources to optimise for!


Most games simply don't target integrated graphics.


Most AAA games don’t, but a lot of games do. Sims 4 for example is still a very popular 3D game, but it’s not very demanding. MMO’s are often designed for very low minimum hardware.


False. Most games do target iGPUs, including most eSports titles and most indie games, which are the most popular games.

The console AAA games are the ones that target proper GPUs because consoles have them. If they were targeting PC only, you wouldn't see them.


> The current gen hardware is over 6 years old, and when writing for it you can definitely feel its age

But it is also very much a known quantity and games should be designed to take that into account. Arguably it should be easier to write games that hit 60fps today than it was 6 years ago.


If you want to make the same game now that you would have released 5 years ago, sure.

But games have become way more complex in that time, and the expectations of players have grown considerably. So you end up having to put more content in, more features and more data. The disk and memory become a bottleneck and you need to write and optimise some pretty complex systems to be able to handle that well.

PC Hardware has actually advanced since that time as well, so the artists have now more power to create better looking and feeling things in their tools, which means more data to process and get into the game.

Also designers will often surprise you as a programmer with the features and creative solutions to problems. They don't really think and design the same way a programmer would, and don't really design for the system hardware (except for the input/output devices of it).


I stopped playing Zelda on switch because of the loading times. It gets very frustrating when you try out a dungeon.

I would replace my current Switch in the blink of an eye if a new model with reduced loading times came out.


The annoying part of Breath of the Wild is how much better an experience you can get by emulating the game with Cemu.

With a gaming computer you'll get 2K resolution and 60+fps easily.

On the Switch you barely get 720p/30fps.


Wow, I wish I’d known this. I bought the handheld-only Switch for BotW, and lamented not having an HD edition that ran with good frame rates.


This is great, thank you for this piece of information!


An update a good few months ago decreased loading times pretty substantially. Not sure when you stopped playing, but I hardly notice it loading sometimes.


Could easily be. I got it when it came out.

Will have more time the next few weeks anyway.


To be fair the game's best content is the overworld. The shrines and main quests could have been cut from the game and it would arguably be a better game.


Could be, but I got hung up on one of those shrines, and whenever I start the game now I'm still there; I try it a few times and get annoyed by it.

I have to get out of it but then it's probably better to start a new game... :-(


Why not look up the solution online? If the challenge is making your experience worse, it seems like an okay thing to do in my opinion.


I loved the shrines. There were some really cool designs, like the one that was basically minigolf but you whacked stuff with your weapons. And the electricity puzzles (that you could cheat at if you were savvy and realized all your metal equipment could conduct electricity the same as the actual puzzle blocks).


Really? I thought most of the puzzles were either insultingly easy or the same combat shrine for the 30th time. Granted, it's technically a product for kids so it makes sense that some of the content is that easy, but it's such a missed opportunity for the game to introduce so many cool ways of interacting with the world and never build up on them in the puzzles. There's not a single shrine in the game that uses more than two "apps" at once, and even those that do use two are so few and far between I can't remember a single example.


The shrines would've been better if they were part of the overworld. I don't like the fact that they feel like an extension.

I hope the second part builds more surprises into the world itself and not into shrines.


I do agree there were too many identical combat shrines, I was always disappointed when I found one of them, but did you try to get all the chests as well? Most shrines had one or more optional chests that basically acted as "hard mode" and your minimap reported which shrines you'd found all the chests in too.

The ideas in the shrines definitely could have been expanded on more though for sure.


That's a good point. I remember some of the chests being fun to get, though I never bothered to get all of them because there's really no reward besides the extra challenge. I know that's what I'm asking for, but it always felt like a waste of time.


Well, if you’ve got a good PC, Cemu emulates the Wii U BotW amazingly well.


Too bad about it being closed source.

Once they get a C&D, close shop, and disappear into the void taking the source with them, the Patreon supporters are going to feel so dumb.

My advice is to _Never_ support the development of a closed source emulator.


Or their Patreon backers know exactly what they're supporting, and are content to fund people creating the software they use.


That's some faith in mankind.


>As things are now, flagship games on the PS4/Xbox One struggle to stay at 30FPS, frequently dropping to 20FPS or less, without the option to reduce graphics quality or resolution PC gamers have.

Developers are doing what they can here, but the average game that has these sorts of drops is dealing with limited CPU resources, budgeted around a 30FPS target. Especially on consoles like Xbox One X (due to its powerful GPU but limited CPU), it becomes evident that, on average, resolution and graphics cuts have little effect on performance in these situations. The only mitigation is to build your game with a higher frame rate target in mind from the start, requiring serious compromises to things like simulation complexity, for instance.

PS5 and Xbox Series X have a good chance of pushing past this limit. Current generation consoles are using aged laptop-grade CPUs while the upcoming console generation features desktop-grade Zen 2 processors at surprisingly high clock rates.
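The frame-budget arithmetic behind that comment can be sketched in a few lines. The millisecond costs below are invented for illustration; the point is only that cutting resolution shrinks GPU time while a CPU-bound simulation doesn't move.

```python
# Frame budgets at the two common targets.
budget_30fps_ms = 1000 / 30  # ~33.3 ms per frame
budget_60fps_ms = 1000 / 60  # ~16.7 ms per frame

sim_ms = 20.0   # CPU simulation cost: independent of resolution
gpu_ms = 25.0   # GPU cost at full resolution

# Halving resolution roughly halves GPU time; the CPU cost is untouched.
gpu_half_ms = gpu_ms / 2
frame_ms = max(sim_ms, gpu_half_ms)  # CPU and GPU work overlap in parallel

print(frame_ms <= budget_30fps_ms)  # True: fine at 30 FPS
print(frame_ms <= budget_60fps_ms)  # False: still CPU-bound, misses 60 FPS
```

That's why the only real mitigation is designing the simulation around the higher frame-rate target from the start.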


The big benefit of this generation is that with almost no porting effort PS4 and Xbox One games will run as they should given how much better Zen 2 and RDNA2 are than Jaguar and GCN.


Another problem arises with the (small) SSD in the machine: games are getting up into the 200GB range. Specifically, the most recent Call of Duty title clocked in at around 175GB.

That means you'll be able to store maybe 4 AAA grade games on the SSD.


An interesting note from Mark Cerny's talk: a lot of games have multiple copies of data installed to keep locality when loading up, say, a region of the game world. With the way SSD reads work they won't have to do that anymore, which should cut down sizes, though I don't know if they have figured out by how much yet.
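A toy illustration of that duplication (the file names and sizes below are invented): on an HDD each region ships its own copy of shared assets so one sequential read pulls in everything nearby, while fast random reads make a single copy enough.

```python
# Assets each world region needs, with shared files repeated per region.
regions = {
    "forest":   ["tree.mesh", "rock.mesh", "grass.tex"],
    "valley":   ["tree.mesh", "rock.mesh", "river.tex"],
    "foothill": ["tree.mesh", "rock.mesh", "snow.tex"],
}
asset_mb = {"tree.mesh": 120, "rock.mesh": 80,
            "grass.tex": 40, "river.tex": 40, "snow.tex": 40}

# HDD-style layout: every region stores its own copies of shared assets.
duplicated_mb = sum(asset_mb[a] for assets in regions.values() for a in assets)

# SSD-style layout: one copy of each unique asset, fetched on demand.
unique_assets = {a for assets in regions.values() for a in assets}
deduplicated_mb = sum(asset_mb[a] for a in unique_assets)

print(duplicated_mb, deduplicated_mb)  # 720 320
```

In this toy layout deduplication more than halves the install; real savings depend on how much each game actually duplicates.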


I'm sure the SSD will get bigger on later revisions, and they are having support for expandable storage which should come down in price over the lifetime of the console.


That was the promise for the last generation of consoles as well, and some of their generation-closing consoles still had small internal storage quantities; the launch PS4 had 500GB, and the latest PS4 Pro only has 1TB (about 150GB more than the PS5).


I could see that happening, but on the discounted revisions. At the same price over time I’d expect more storage.


You can put an off-the-shelf SSD into your PS4.


Does it matter that much when you can connect a 5TB+ external drive (or upgrade the internal drive trivially on PS4)?


That is my biggest problem with it too. Sony and MS need to crack down on that. We don't need 4k assets. We're not f'ing there yet. Quit making us all have to download them.


? A significant portion of Xbox One X games are native 4k


The Switch is a weird case because docked it's 1080p and in handheld mode it runs at 720p.


It was actually worse with older games; for example, GoldenEye 007, Perfect Dark, and Zelda: Ocarina of Time ran at a mere 21 FPS (19 on PAL!).


The N64 was the worst of that gen by far in terms of frame rate, though; a lot (maybe a majority?) of PS1 and Saturn games ran at a solid 30 or 60fps.


Would be interesting to compare the effective latency though. Switch adds a few tens of milliseconds to input, so 60 FPS on it may feel worse than 21 FPS on N64.


I am a fan of the Switch too, but the Xbox One X offers the ability to use performance modes which reduce the resolution for better frame rate. Also, the arguably best game on Switch, Zelda: BOTW, drops in frame rate a lot.


> I am a fan of the Switch too, but the Xbox One X offers the ability to use performance modes which reduce the resolution for better frame rate

Many (most?) Switch games already do that automatically.


Which is to say that xbox/ps5 have basically become branded PCs, and the switch is basically a branded Android SoC.

Consoles don't really make sense as a concept anymore. The only remaining appeal is their DRM capability for publishers.


> Consoles don't really make sense as a concept anymore. The only remaining appeal is their DRM capability for publishers.

As someone who prefers console gaming to computer gaming (for the most part), I have to disagree. I like that the console is hooked up to my gigantic TV instead of my less gigantic computer monitor. I like that it uses controllers instead of mouse and keyboard (the same things I'm using all day at work). I like that my console is in a different room from my computer. I like that it's more conducive to playing a game with (local) friends, since they can just sit down on the couch and turn on another controller. I like that I don't have to worry about what graphics card my console has, or worry about a game not working well with that graphics card. I like that it's simply not my computer.

Yes, I know that you can hook a computer up to a TV, and you can use controllers with your computer, and not all of these apply to everyone, and so on and so on. There's still a difference, and I like that difference.


I think consoles still do make sense insofar as they provide developers the confidence that their game can run on a given set of hardware.

Yeah, they're mostly just a PC under the hood. But they're PCs with consistent hardware. How much easier would PC game development be if you know all of your customers had the same CPU, GPU, RAM, and OS?


That's not really true anymore, though, due to hardware fragmentation and dynamic clocks. Which was the point the root comment raised. Even first-party titles sag to 20fps, and the settings screen of a modern console game contains more and more dials that are starting to look a lot like what you get in PC releases.


The clocks are dynamic but they are also deterministic because they're based on power usage, not temperature like on smartphones and PCs.


They do make sense to me and others. My pc is a tool, and my consoles are toys. I need the physical separation.


You can build a gaming PC in the same form factor as the new xbox using nothing but off the shelf components and have a separate device that can play a larger gamut of games on higher settings.

I'm not sure how anyone can dispute that the new Xbox and PlayStation are anything other than slightly customised, branded PC builds that offer no physical advantages over the real thing, only marketing ones like DRM and exclusive releases.


Or you can walk over to the store, put down a few green pieces of paper, be back at home and play the games without worrying about CPU frequencies, graphic card speed, or motherboard compatibilities.


My friends will come over for 4 hours on a Sunday. I will spend no more than 10 minutes when they're here figuring out controllers. I will spend no more than 15 minutes at any point ahead of time to keep things working.

When they come over, the thing we want to spend most of our time doing is 2 v 2 FIFA. If I spend 30 mins doing anything here, we're losing precious time. If I spend 60 mins debugging, I'm just not going to do it. I'll even take a zero probability of fixing if it means a lower probability of failure of any sort (even recoverable).


I have done that, and I finally got a console this year.

A lounge PC is a shithouse experience compared to a console. Absolute garbage. Simply supporting multiple people in a household on an Xbox is trivial and easy. On a PC using Steam it's absolute trash. Steam treats PCs like it's 1995 and single-user.


Consoles offer lots of advantages, to the MASS MARKET, that you are overlooking. In fact I am a former PCMR gamer that moved to consoles for a few reasons:

* I can rent games from my local redbox to try out for a few days. Sometimes, that's all I need and I saved myself the price of the game.

* Couch co-op is typically superior on console.

* Ease of use: simple connection to TV, simple remote controllers. The story for HDR support is also streamlined (as in, my TV supports it, so if the game does too, done).

* Consoles are much quieter, making them a much better TV-room appliance.

* Easy purchase and sharing games, especially with a physical copy. Loan out the physical game, no fuss.

* Some sort of resale value for physical games, albeit not that much. However if you are the type to locust the content of a game quickly, as in the first week of release, you can often resell for most the purchase price.

* A UI built for TV and remote use. No, I'm not excited about controlling Windows from across the room by keyboard/mouse.

* Great portability. One of my consoles is a Nintendo Switch, and it travels far better than a PC. Yes you can get a gaming notebook but that solution is no longer competitive on pricing.

* Better multi-user design. It's easier to have a family console with multiple accounts than it is have a Windows PC with multiple accounts. And have parental controls, etc.

Sure, with suitable amount of time/research and other stuff I'm just no longer interested in doing, I can get wireless controllers, quiet(er) fans, apps for launching that are usable at typical resolution from 6' or more away, fiddle with multiple accounts on Windows (who ACTUALLY does this outside a work setting? It's just... not smooth at all), but there is a cost in my time I don't want to invest anymore.

and other reasons such as

* I want to step away from Microsoft Windows and various issues, from privacy to updates. Since the consoles are only for entertainment (gaming, playing videos) I am less concerned about telemetry scooping up everything I'm doing.

Many might not care about that last case... but how many times do half the people in an HN thread about Windows gripe loudly about privacy or monthly updates? It seems like there is some problem with a Windows update 2 or 3 times a year. Yeah, I want off that merry-go-round for my ENTERTAINMENT system.

So, I'm actually doing something about a Windows dependency, rather than just griping online. I migrated to Linux for my desktop (on a NUC) and to console gaming to fill the gaming gap. Yeah, it isn't perfect and there are a handful of games I still need Windows for, but between Linux and consoles, that's about 90% of my need for Windows removed.


I agree with almost everything you wrote, but...

> * Consoles are much quieter, making them a much better TV-room appliance.

This is a joke, right? My gaming PC is virtually silent (140 mm exhaust fan FTW) while my friend's PS4 Pro sounds like a hair dryer under load.


My PS4 Pro is quieter than the PC gaming systems I've had, but not as quiet as the NUC (for obvious reasons). Quieter as in, I can play a movie and not hear the console above the dialog/soundtrack.

My Switch is essentially 100% silent, as in, not detectable that it is on from any noise that it makes.

I don't know about Xboxes.

As far as anecdotes, I have a friend whose gaming PC is loud enough to be distracting while watching movies. Heck it's audible above the A/C.

Now that I'm more into consoles, I'm more sensitive to PC noise... before I'd just accept and tune out a certain amount, but now it is grating because I've seen the alternatives and am constantly unintentionally comparing.

Having built various gaming PC's over the years, yes you can have a fairly quiet gaming PC. But you also need to invest some time/effort/money to get liquid cooling, quiet case fans, quiet power supply, quiet CPU fans, etc and you need to take that into account.

Yes, I've read PS4's can be pretty loud. It's a common complaint on /r/PS4. I can't say I've tracked actual numbers, but my feeling is that 80% of those situations are the same root cause: dust/pet hair clogging up the fan, and after cleaning, the system is much quieter.

My current gaming PC is a PowerSpec G706. You know, for those handful of games I still need Windows for. ;) And I got lazy just buying a system instead of building individual components because... I'm just not into that anymore. Anyway, I would rate this system as pretty quiet compared to other ones I've had, but I always know that it is on. Even not playing a game, the ambient noise under zero load is there. Meanwhile, I can't always tell my PS4 is on. And I can't tell if the Switch is on without looking at it.

So, YMMV but in my experience, for off the shelf purchases, consoles are quieter than PC's on average. Also, the people that have "silent" PC's, like myself in a former gaming life, bought specialized components to get there.

If you have any links to computer systems, pre-assembled (since the average person isn't going to build them component by component) that are also price competitive with consoles (around $275 for a ps4 slim or xbox one s, around $350 for a ps4 pro or xbox one x) that are ALSO silent, I'd like to see them. I have a feeling that a PC in that price range is nowhere near as quiet as a console, but I'd love to hear otherwise.

I'll also state without proof that there is zero chance of a gaming PC, price competitive with a switch, that is quieter. That's equivalent to a gaming PC quieter than a cell phone. I don't believe it exists at that price point ($200 to $300 depending on whether you get a switch lite or regular switch).


Sure, at the same price point as consoles, all gaming PCs are basically garbage ;)


The only thing you’ll be physically separated from is your money.


Many/most people on HN don't do their computing on Windows. (I think you are talking about windows and not pc, as pc gaming is pretty different on other pc operating systems)


If you have looked at a switch then you may have noticed that it uses a completely different form factor than a standard smartphone.


Which games are these that are optimized for the Switch but play so much worse on the other consoles? DF Retro usually does game comparisons, and the Switch versions are always worse, with a much lower resolution. Hell, Doom Eternal is targeting 60fps on the Xbox and PS4.


I'm really bummed that this absolute nonsense is also prevalent on hackernews.

Low framerates are almost always completely unrelated to optimization. I can make a horribly optimized version of Tetris that still runs at 144hz on modern hardware.

Likewise, I can write a perfectly optimized game that still runs at 20fps on modern consoles.

Framerates are determined by how much the developer wants to push graphical fidelity/AI etc.


If you are trying to fight nonsense, please don't promote additional nonsense.

Generally, in game development, the term "optimization" refers to the holistic process around getting the game performing at speed, be it compiler optimization options, SIMD-ification, or even adjusting texture resolution (graphical fidelity) or AI parameters. It often starts with profiling, and identifying bottlenecks, and focusing on what can be done to get the game to perform within spec.

Optimization is entirely about context, so, while I generally agree with your comment about Tetris, I'd say your second example is "poorly optimized" by definition.

As to your comment about performance being determined by how much the developer wants to push... Yes, aiming for higher fidelity will use more resources, but it's hardly ever simply the "developer's wants" that determine the performance of a particular title; instead, performance ends up being the end result of thousands of tiny decisions and elements that contribute to the final experience.


Developers determine a framerate target.

They have a bunch of levers to pull to reach that target. Failing to pull the lever that reduces particle effects does not mean the game is un-optimized.


In game development, the process of identifying, balancing, and pulling those levers is called optimization.

Source: This, specifically game optimization, is my profession and I've been doing it for 15 years.

Edit: To add, this is clearly a vocabulary argument, it'd help your argument if you gave context and defined what you think specifically optimization refers to. But even then, words are used in context, and the context here seems to be strongly game development flavored.


Part of optimization is optimizing your code. Part is optimizing your graphical assets.

If you're making an 8 bit 64x64 pixel platformer, your main bottleneck is likely code. If you're making a 3D flight simulator, your main bottleneck is likely draw distance, texture resolution, and the number of assets filling the screen.

If your game is running 20 FPS, your problem very likely is lack of optimization in some way.


None of that is optimization. That's deciding what you want your game to look like.


In many cases, adjusting texture resolution (in the case of under-sampled textures) or removing items from the scene or from being rendered (in the case of items being entirely obscured) can dramatically affect a game's performance without changing "what you want the game to look like" at all. So clearly that phrase isn't sufficient to describe the action being taken. Please try again.


> Low framerates are almost always completely unrelated to optimization.

I would be interested to see a source on this.


> Framerates are determined by how much the developer wants to push graphical fidelity/AI etc.

Framerates are determined by how much the console has to process before it can display a frame. Apart from possibly setting an upper bound, FPS is a variable developers do not directly control.

The issue I am pointing out is unintentional drops in framerate caused by game events. A developer can spend time testing their game and tailoring the amount of processing each possible configuration of events and objects requires, so that at any one time it doesn't exceed the processing budget allowed by the target framerate. That is optimization. If you do not do it, the framerate will be essentially random.
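To make the budget idea concrete, here's a toy sketch. All the subsystem timings are made-up, hypothetical numbers; real budgets come from profiling on the target hardware:

```python
# Toy frame-budget check. At a 60 fps target, each frame's work must fit
# in ~16.7 ms; "optimization" here is trimming the worst-case mix of
# events/objects until it fits. All subsystem timings are invented.

TARGET_FPS = 60
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~16.67 ms per frame

# Hypothetical worst-case per-frame costs found by profiling (ms):
worst_case_ms = {"ai": 4.0, "physics": 5.5, "rendering": 8.0, "audio": 1.0}

total_ms = sum(worst_case_ms.values())
over_budget = total_ms > FRAME_BUDGET_MS  # True -> frames will drop
print(f"worst case {total_ms:.1f} ms, budget {FRAME_BUDGET_MS:.2f} ms, "
      f"over: {over_budget}")
```

If the worst case exceeds the budget, some lever (texture resolution, particle count, AI tick rate) has to be pulled until it doesn't; if it never exceeds the budget, the framerate holds.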


> The only problem is that PC technology is significantly behind PS5. It'll take some time for the newer, PCIe 4.0-based drives with the bandwidth required to match Sony's spec to hit the market.

This is what stands out for me (I know this quote is a bit dramatic, but bear with me): for a long time consoles have lagged behind PC technology, in the sense that PC was always the early adopter, and new tech spread there before reaching consoles.

I'll dare to say that the last console that achieved this was probably the PS3.

The PS4 was a bit of a turn off for a lot of people spec wise.

So it's good to see they're using these platforms to push the early adoption of new standards.

Also, good for Sony to let standard NVMe drives be used, and not some proprietary form factor that adds no value and has no benefit to the consumer. That's one of the reasons some of their consoles lost traction: proprietary storage media.


The hardware in the PS5 is better than the average mid-high end gaming PC now, but it won't be more than a year or two before gaming PC hardware is better again. And other than this special super fast SSD, you can absolutely build a better gaming PC already, albeit expensively. This isn't really any different than launch time of the PS3 or PS4 (although the 512MB of RAM in the PS3 was already laughable and a predictable issue at launch time).

(Not saying that being able to buy a machine that will play all the AAA titles for the next ~7 years for ~$500 isn't a really great deal though, just that I don't think there's anything fundamentally different about this generation of consoles than the last two)


Yes, but when the PS4 was released it was matched to a low-mid end PC.

About the PS3, I think the Cell CPU got a lot of attention for its high-end performance (though it was hard to use).


You can buy PCIe 4.0 x4 NVMe drives today. Ryzen CPUs have been providing PCIe4 lanes for a while now. So this is nothing new. And if you want to have the fastest and shiniest then x16 drives are also an option, if you have the money.

The only thing where the specs are momentarily ahead are the GPU architectures, but by the time they actually hit the market the equivalent PC GPUs will be available too.


>the equivalent PC GPUs will be available too.

And RDNA2 should have scary performance, on the GPU power budgets that are standard these days.


you _can_ but no one will write a game for that PC spec for another few years at best.


One of the major pain points on the PS4 is disk I/O; however, the PS5 seems to have some special magic in the disk controller that allows Kraken decompression (to a max theoretical decompression rate of 22GB/s!). They put the storage controller on the motherboard, but even if that weren't possible, I can say sincerely that this has value for the consumer.
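For scale, a quick back-of-envelope using the figures Sony quoted (5.5 GB/s raw flash bandwidth, 22 GB/s theoretical max output after decompression):

```python
# Implied best-case Kraken compression ratio from the quoted PS5 numbers:
# 5.5 GB/s comes off the flash, and the hardware decompressor can emit
# up to 22 GB/s, so the data would have to compress about 4:1.

raw_gbps = 5.5           # GB/s read from the SSD
max_output_gbps = 22.0   # GB/s after hardware Kraken decompression

max_ratio = max_output_gbps / raw_gbps
print(max_ratio)  # 4.0
```

So the 22 GB/s headline is a best case for highly compressible data; typical game assets would land well below that ratio.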


I'm confused. Doesn't AMD already have support for PCIe 4.0? The ASUS ROG Strix X570 series has it as a feature for sure. And the NVMe M.2 SSDs using PCIe 4.0 are out as well.


PS5 requires ~6 GB/s but current consumer PCIe 4.0 SSDs only hit ~5 GB/s.


The PS4 had GDDR5 RAM for main memory, which no one else (including PCs) was doing at the time. According to Cerny (at the time), a single pool of fast memory was the #1 thing Sony's 3rd party devs were begging for.


What? PS3s and PS4s used 2.5" form factor drives. They didn't need to be proprietary.


These new drives use a different technology (though NVMe is not proprietary), in a form factor we've seen on the PC for a while. They offer much more throughput than the normal 2.5" form factor. They're attached to the PCIe bus, not a separate disk controller on the motherboard.


Definitely applies to the Vita though. Proprietary cards, with 16GB being $60/£45 and 32GB being $100. They never really came down either - I paid £76 for a 64GB card in 2014.


I remember swapping my PS3 drive for a faster 1TB laptop drive, such a good upgrade. Likewise for sticking an SSD in a PS4, although I never got around to doing that.


When did you do that? If you said you did that in the late 00's I would call you nuts haha!


It was pretty late in the lifecycle, probably around ps4 launch time.

I'm one of those weird people that spends more time playing games on consoles from past generations than I do newer stuff.


I meant the PSP and PS Vita lines.


I’m a layman, so I’d like someone from the community to elaborate on Sony’s approach of constant power to the CPU/GPU while the clocks are variable. My interpretation is that the silicon will downclock based on load - if it is only running the UI dashboard, it will slow down appropriately; if running an intensive software title, it will clock up to theoretical max frequency.


They picked a fixed thermal budget based on realistic usage scenarios, and then translated that into a fixed power budget. They balance the power budget between CPU and GPU using SmartShift (https://www.amd.com/en/technologies/smartshift), which is their marketing name for the power management on the control plane of their fabric.


I have no idea if I understood it right, but what I interpreted was:

Current systems throttle down when things start to overheat. This is "non-deterministic" and depends on environmental factors etc., which makes it harder for developers to optimise for.

Whereas the PS5 knows its own power use and power budget; as long as it stays within that budget, it won't overheat. This means the throttling is 'deterministic': the same code will always throttle in the same way at the same time, so it's easier for developers to test and optimise for.
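A sketch of the distinction, with made-up numbers: thermal throttling keys off a measured temperature (which depends on the environment), while a power-model approach keys off the workload alone, so the same code behaves identically on every unit:

```python
# Two throttling strategies, illustrated with invented numbers.

def thermal_throttle(base_clock_ghz, die_temp_c, limit_c=95.0):
    # Clock depends on measured temperature, which depends on room
    # temperature, dust, airflow... so behavior varies between units.
    return base_clock_ghz if die_temp_c < limit_c else base_clock_ghz * 0.8

def power_model_throttle(base_clock_ghz, modeled_power_w, budget_w=200.0):
    # Clock depends only on the workload's modeled power draw, so the
    # same code always throttles the same way on every console.
    if modeled_power_w <= budget_w:
        return base_clock_ghz
    return base_clock_ghz * (budget_w / modeled_power_w)
```

With the second scheme, two identical consoles running the same scene pick the same clocks, regardless of how hot the room is; the cooling just has to handle the fixed budget.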


How can this be true? If I put the PS5 within a small enclosed case with a poor ability to dissipate heat, it should overheat / throttle down thus making the throttling variable, no?


Apparently they have a really good cooling solution (which they haven't disclosed yet) so that even in a (almost) worst case scenario, developers can rely on the performance specs.


if true, this is kind of sad. would it be so hard to make a "dev mode" where the device pretends to be severely thermally throttled, target that performance for development, and then let everyone who isn't a complete idiot enjoy the full performance of the hardware?


We're talking about running the hardware within its ambient spec. For example, PC turbo implementations might run faster at 10C than 20C ambient but the PS5 won't.


And easier for Sony to make a cooling system for.


That isn't what I took from it; it sounds like various components (the example given was CPU and GPU) can sort of "trade" energy budgets:

> send any unused power from the CPU to the GPU so it can squeeze out a few more pixels

What it sounded like to me (also a layman) is that they have a fixed energy budget that's tied to the thermal properties of a baseline unit in a neutral environment. As long as they're under that budget, they can boost up the clocks of individual components that are under load. So heavy-rendering cycles can use relatively more GPU and heavy-compute cycles can use relatively more CPU.

That might be totally wrong, though, and I would also appreciate an expert's explanation.


> if it is only running the UI dashboard, it will slow down appropriately; if running an intensive software title, it will clock up to theoretical max frequency.

There should be power saving modes but that's not what they're talking about. Sony is saying that you can run the CPU at 3.5 GHz xor the GPU at 2.2 GHz but they can't both run at full speed at the same time. So maybe you can have CPU @ 3.3 and GPU @ 2.2 (favoring GPU) or you can have CPU @ 3.5 and GPU @ 2.0 (favoring CPU). I guess most games will favor the GPU but I'm not a game developer.
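A toy model of that trade-off (the budget, the per-part power draws, and the linear clock/power relation are all invented for illustration; real DVFS curves are non-linear):

```python
# Fixed total power budget shifted between CPU and GPU. The numbers and
# the linear clock-vs-power scaling are made up for illustration only.

TOTAL_BUDGET_W = 200.0
CPU_MAX_GHZ, CPU_MAX_W = 3.5, 60.0
GPU_MAX_GHZ, GPU_MAX_W = 2.2, 180.0   # note: the maxes sum past 200 W

def clocks(cpu_share):
    """cpu_share in [0, 1]: fraction of the budget granted to the CPU."""
    cpu_w = min(cpu_share * TOTAL_BUDGET_W, CPU_MAX_W)
    gpu_w = min(TOTAL_BUDGET_W - cpu_w, GPU_MAX_W)
    return (CPU_MAX_GHZ * cpu_w / CPU_MAX_W,
            GPU_MAX_GHZ * gpu_w / GPU_MAX_W)

print(clocks(0.1))  # favor GPU: GPU at max, CPU well below 3.5
print(clocks(0.3))  # favor CPU: CPU at max, GPU slightly reduced
```

Because the two maxes together exceed the budget, you get exactly the "xor" behavior described: one side can hit its ceiling only by pulling the other below its own.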

(As an aside, I coined the term "power shifting" in 2005 to refer to this concept and I'm happy to see it being incorporated into basically everything, even if they probably independently reinvented it.)


Surely it's the CPU at 3.5 GHz nand the GPU at 2.2 GHz


I’d like to understand the benefit of this as opposed to running at a fixed clock speed.

My guess is it has something to do with cost savings (cheaper to manufacture silicon that has to perform “better” some of the time as opposed to all of the time).


You can't do that with a console. All of them have to hit the min specs.

You can allow one game to use more GPU while reducing CPU, and vice versa.

Just imagine a CPU-heavy RTS vs a graphics-intensive RPG.


If they fixed the clock it would have to be lower which would lead to lower performance.


I'm not from the industry, but from the outside, it looks to me as if the actual graphics quality these days is more about the studio art budget than clever tricks with hardware. Further improvement - to my eye - brings diminishing returns.

In this context, converging hardware with multiple compatible frameworks on top looks like a logical choice.

Console differentiation is more about exclusivity contracts these days.


I’m hyped, the PS5 sounds incredible.

The SSD is twice as fast as the XB SX and the 3D audio chip is huge. The design is fascinating and it includes a ton of proprietary tech. Sony’s devs will have a field day with this.

Watch how 1st party games blow Microsoft out the box at 10.3 teraflops.

Ppl spouting that 12 teraflops from the SX will steamroll Sony’s 10.3 are fools. Teraflops is a vastly overstated term purposely popularized by Microsoft. It isn’t the sole determining factor for performance, nor are all teraflops necessarily created equal.

PS5 is going to be the premier console next gen with the best games and….VR


Consoles are now branded PCs with DRM lock-in. Makes way more sense to buy a PC.


I had a PS2, an Xbox 360 and now a PS4. Total hardware cost of playing the latest games with zero performance issues, hardware worries or upgrades in the last 20 years is around the price of a single gaming PC, which lasts what, 4-5 years tops?

It's just not the same market.


> the latest games with zero performance issues

I don't know what you're playing, but it's a joke to imply that games on consoles run without performance issues.


I was switching between gaming PC and console for the last 15 years and, to be honest, I had much more trouble with PC games. Just after I buy a PC everything runs very smoothly, but the performance declines very fast with new games. It's always an internal struggle between "I want it to look better" and "I cannot play at this framerate anymore". Console games for the most part run at the same level. It's not great, but in this case I'm grateful that someone took the choice away from me. Obviously there are games that are just slower than average, but everyone finds out about them very fast. I would not buy a 3/10 game and I would not buy a game that has performance problems. With PC it's always a bit of a gamble.


I'll be honest and say that I don't play games that much anymore, so any game I tried ran smoothly. Is QA failing, or are Sony and Microsoft allowing shoddy games on their platforms?

Anyway, just graphics cards, CPU upgrades, noise and ventilation... It's expensive and it's a mess for anyone except hardcore gamers and people who need a desktop PC that costs $2000 anyway.

For most folks a one time cost of $600 settles their gaming needs for years.


An Nvidia 2080 Ti costs $1,199.99. An Xbox One X costs $499.

And this is one of the main reasons why consoles will continue to thrive. Because not everyone can afford to drop 3k on a gaming PC. Or 2k. Or even 1k.


I like to play games on my couch, without having to hassle with settings or upgrades. I also like having a media center attached to my tv.

Yes, one can hook up a PC to a tv. No, that is not without hassle and upgrades. I want a box to play games and watch shows and movies.


Windows is indeed utterly overcomplicated, and thus fragile, for gaming use. I think the same applies to Linux distros, but Valve's SteamOS has been an attempt to bring the seamless console experience to PC hardware. If only we had more game devs' and Nvidia's attention on Linux.


> I want a box to play games and watch shows and movies.

The Nvidia Shield does exactly this. It has Android TV on it and can stream games either from your own PC or from Geforce Now.


I play fighting games, and the PS4 versions are generally better than the PC ones. The PC version is usually a port made by another company (e.g., MK11, and SamSho has been promising a PC version since forever but nothing yet).

Plus, I don't get to worry about updating drivers, checking system requirements, disabling cortana or all that other stuff. I can just sit and play. Maybe I have to plug the pad on the USB port to recharge it.


You could have said the same about PS1 or N64 (with s/PCs/workstations/) spec-wise. But the market for fixed function consoles is separate from GP computers if you're not a hardware collector interested in specs only. Cf Palm Pilot vs Game Boy.


I still don't want to deal with maintaining a PC and managing Windows. Consoles are still way more convenient and so easy to own. You buy one and you are set for almost half a decade, if not more. No need to constantly keep updating components or deal with drivers and software in general. I can just pick up the gamepad, press one button and dive right in.

I guess in that sense consoles ARE PCs, just very specialized ones meant only to play games and some other entertainment apps, which to me still makes a lot of sense. I like that thing in my living room instead of a big box that needs constant attention on my desk.


Or even better, you don't deal with any hardware at all and buy into the Cloud gaming future :)

It's really sad that Stadia has flopped so far and that xCloud isn't any closer to being a thing. Nvidia's solution looks better, though still requires a Windows/Mac computer.

Maybe the Stadia announcement next week will have some interesting stuff, I'm excited to not have to worry about devices anymore and simply buy games I like.


I'm all in for it but it unfortunately is not ready yet. The day I'll be able to play multiplayer FPS games with <90ms ping running in the cloud, I'll ditch the console.


It's kind of true, just like buying a Mac these days is getting worse specs for more money and more lock-in than buying a PC.

But some people just like the convenience of a simple, good-enough plug and play solution.


No it doesn't


I'd go further and say they're expensive, very cost-inefficient PCs locked with DRM into only playing licensed games.

PC Hardware is cheaper. PC games are also cheaper. And games are seldom console exclusives anymore. When they are, they're not for long.

There's no value to be found in consoles anymore. They exploit the tech ignorance of the riffraff.


> And games are seldom console exclusives anymore. When they are, they're not for long.

That may be true for the Xbox but certainly not for PS4 or Nintendo games.


Why expensive? Seems like an Xbox One S is a few hundred bucks, $299 I see on the site. The One X seems $100 more.

Could you buy or even build a comparable gaming PC for that much? I'm not really up to speed with all the hardware stuff... so curious. I guess you'd need to pick a case and motherboard, then find similar specs and a compatible hardware combination if comparing.

Then I thought consoles were somewhat subsidized, since the makers also get a cut of game sales. Plus, targeting the same hardware sounds like a way to be consistent.


>Why expensive? seems like a Xbox One S is a few hundred bucks 299 I see on the site. The One X seems 100 more.

Do look up release prices. These consoles are old now.

And keep in mind they're not proper computers: They can only run a selection of programs, which are mostly games. Usefulness-wise, they aren't even comparable.


Yeah. I know a new one is out now, but they're still selling the older models... I kinda like Microsoft's approach, since even the original Xbox One can still run new releases, from my understanding, instead of each new console requiring all-new games and OS.


This is a similar argument between say your own server farm and AWS. Consoles are guaranteed to work correctly and always have 100% compatibility. You don't tinker or deal with any headaches, you just play. For many people, that is worth the extra cost.


The CPU in these machines is better than the one in my desktop. Hard to guess what kind of GPU equivalent it will have, but from a cost POV, these new consoles seem too good to be true. It completely changes my assumptions about what sort of graphics the game I am working on can output on consoles (although, the vast majority of pc owners will have weaker hardware; unusual!)

I sold custom gaming pcs very briefly, these components aren’t free!


I believe Microsoft were selling the current XBox at a loss, and making it up in game and services sales.


Interesting, did AMD just sell the same RDNA 2 GPU to both Sony and Microsoft?


There are some differences, like different clocks, number of CUs and memory bus, so looks like both were customized to some extent


Yes, the same GPU (different size though) and same CPU. The previous generation was also mostly the same.


Same GPU architecture, different custom chips though.


Remember that PS2 required a military export license because it could be used as a missile guidance device? I quote: "Parts of the machine resemble a small supercomputer". Isn't that funny as hell from today's point of view? Here is a link to that story: https://www.latimes.com/archives/la-xpm-2000-apr-17-fi-20482...


And a few years later, there were several PS3 supercomputer clusters. The US Air Force built a supercomputer out of PS3 nodes, and it was the 33rd largest in the world!

https://en.wikipedia.org/wiki/PlayStation_3_cluster


The bits about HRTF made me smile: this technology is at least 25 years old. That’s how every single 3D-audio-over-headphones implementation has been done since the beginning.


I wish they would differentiate on connectivity. Like usb-c ports for video and power delivery, Bluetooth 5.0, WiFi 6, usb-c rechargeable controllers, etc.


Is it x86?


Yes. AMD Zen 2


> PlayStation 5 VS PlayStation 4

The proper comparison would have been against the PS4 Pro


The PS4 Pro GPU is literally just exactly 2x the GCN GPU that shipped in the original PS4, with scan-line interleaving enabled to split the load across the two GPUs.


ELI5?


The screen is cut into slices, one row of pixels at a time, and then dealt between two GPUs like a pack of cards.
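In code terms, a minimal sketch of the dealing (tiny made-up frame height):

```python
# Scan-line interleaving: alternate rows of the frame go to each GPU,
# like dealing a deck of cards to two players.

HEIGHT = 8  # tiny frame height for illustration

assignment = [row % 2 for row in range(HEIGHT)]  # 0 -> GPU A, 1 -> GPU B
gpu_a_rows = [r for r in range(HEIGHT) if assignment[r] == 0]
gpu_b_rows = [r for r in range(HEIGHT) if assignment[r] == 1]

print(assignment)  # [0, 1, 0, 1, 0, 1, 0, 1]
# Each GPU ends up with half the rows, halving per-GPU fill work.
```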



