Hacker News

Have these tests been run with or without the security patches? Since Spectre/Meltdown/etc. it has become increasingly difficult to compare numbers. Lately Intel's IPC advantage over AMD has been melted away completely by these patches. It will probably take some time until we can judge whether these new IPC gains are solid or whether they have been bought with new compromises.

EDIT: To clarify: If you compare both "out of the box", the old one would be unsafe while the new one hopefully is safe because it comes with hardware/firmware patches built in. If you compare both patched, the old architecture takes a large performance hit compared to when it was launched. Over time the patches usually evolve and change the performance characteristics, so you have to be careful which OS or firmware revision you are using. Whichever way you do it, it's not an easy apples-to-apples comparison anymore unless you're talking about a very specific use case at a very specific point in time.



I see the whole Spectre/Meltdown/etc fiasco as an interesting tradeoff: you can have higher performance if you don't care about those side-channel attacks, which is what a lot of applications like HPC are going to do anyway because they don't run untrusted code. That still gives Intel an advantage.
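For what it's worth, on Linux you can see exactly which mitigations are active, and trusted single-tenant setups can opt out wholesale. A sketch, assuming a recent Linux kernel (paths and parameter names are the standard upstream ones; adjust for your distro):

```shell
# Show the kernel's per-vulnerability status (Spectre, Meltdown, MDS, ...):
grep . /sys/devices/system/cpu/vulnerabilities/*

# On a trusted, single-tenant HPC node, all mitigations can be disabled
# with a single kernel boot parameter (added to the GRUB command line):
#   mitigations=off
```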


Intel lost me when they released Windows 10 microcode that killed my overclock by rendering the multiplier ineffective. I can get it back by removing the file from the system folder, but it feels dodgy as.

Also, that 6850k CPU / Asus X99 Gaming Strix combo, which I bought just after release, is the worst I've ever experienced; so many problems, including frying the CPU when using the XMP settings that were promoted as easy at the time (took me 2 replacements to figure it out).

A 3900x or 3950x is coming up as soon as possible. I am peeved about upgrading so early (it'll be only 3.5 years), but I've had it. The old 2600k PC is still ticking along very nicely at 4.4GHz.


Those Sandy Bridge chips just overclocked amazingly. Slap on an aftermarket cooler (Hyper 212) and you could easily have a 4.4GHz all-core boost, and hit higher on water.


Yes! I built the 2600k in 2011, put on a very chunky Noctua NH-D14, found a stable clock and it's just been running like that ever since. Not a single BSOD.

The 6850k was such a massive letdown. Yes, when it worked the extra performance was great, but it has been anything but stress free. Thus: the new Ryzens look simply astonishing, at a much lower power/heat cost to boot.


To people who purchase their systems and run single-tenant on metal, sure. But cloud providers offering shared infrastructure got their lunch eaten a bit here.


Between side-channel attacks and the steady improvement in rowhammer techniques, the mantra of "there is no cloud, just someone else's computer" deserves a renaissance.

Tech promoters have spent a lot of time and energy explaining that anyone saying that is just an idiot who doesn't understand that cloud is the future (e.g. [1], [2]). But the basic insight was never wrong, and the people saying it knew just fine what they were talking about. 'The cloud' means giving up physical-layer control, essentially by definition. That's a real tradeoff people ought to make consciously, and it's one that lost some ground lately.

[1] https://www.zdnet.com/article/stop-saying-the-cloud-is-just-...

[2] https://www.techrepublic.com/article/is-the-cloud-really-jus...


With a dedicated server one can have the same isolation in a cloud as with a server in a basement.


I think that's just a question of how "cloud" is defined.

Certainly a server in a datacenter can be as isolated as a server in the basement. And unless your threat model involves governments, a reputable hosting company having physical access to the box shouldn't be much scarier than having it in your office.

But lots of people (including those cloud-hyping articles I linked) claim that dedicated servers, even with virtualization, are just "remote hosting". Their standard for 'cloud' is basically "computing as a utility", with on-demand provisioning and pooled resources. I know some huge companies have attempted "private clouds" that provision on-demand from within a dedicated hardware pool, but I think most smaller projects have to choose between on-demand and dedicated.


If they need to be explicitly disabled, which they do, not many are going to do it. Not much of an advantage, if you ask me.


Big datacenters have large and talented engineering staff and routinely customize their machines and firmware heavily. Consumers aren't going to do it, that's true (and relevant to the article: all the Ice Lake parts mentioned are consumer chips). But on a per-revenue basis, most of the market is amenable to this kind of thing.


Big data centers are also the most likely to be executing customer input. They almost certainly have all side-channel mitigations applied.


Not every physical die is running security sensitive code. In fact, most aren't.


Sure the datacenter infrastructure won't require the mitigations, but every single multi-tenant die will.

And I'm assuming my $5/mo DO droplet isn't on its own dedicated die...


> every single multi-tenant die will.

To be fair though, those chips are a comparatively small part of the datacenter market. Most of them are sitting in IT closets, or per the example above are running HPC workloads on bare metal. Cloud services are the sexy poster child for the segment, but not that large in total.


Gamers are absolutely going to tweak every setting to increase performance of their machines.


The trouble is gamers are susceptible to it. They're running all kinds of untrusted code (JavaScript, custom game levels created by other users), as well as receiving untrusted game data from other users in multiplayer games, which commonly then goes through a data parser optimized for performance over security.
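To illustrate that bug class, here's a sketch of a hypothetical length-prefixed message parser (the struct and function names are made up for the example): the "fast" variant trusts a length byte supplied by another player, while the safe variant validates it first at negligible cost.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical length-prefixed chat message as a game client might
 * receive it from another player over the network. */
typedef struct {
    uint8_t len;
    uint8_t data[16];
} chat_msg;

/* "Fast" parser: trusts the attacker-controlled length byte.  If
 * pkt[0] > 16 the memcpy overflows out->data.  Shown only as the
 * bug class; never call this on untrusted input. */
int parse_unsafe(chat_msg *out, const uint8_t *pkt, size_t pkt_len) {
    (void)pkt_len;
    out->len = pkt[0];
    memcpy(out->data, pkt + 1, out->len);
    return 0;
}

/* Safe parser: validates the untrusted length against both the
 * destination capacity and the bytes actually received. */
int parse_safe(chat_msg *out, const uint8_t *pkt, size_t pkt_len) {
    if (pkt_len < 1 || pkt[0] > sizeof out->data ||
        (size_t)pkt[0] + 1 > pkt_len)
        return -1;  /* reject malformed or truncated input */
    out->len = pkt[0];
    memcpy(out->data, pkt + 1, out->len);
    return 0;
}
```

The extra branch is exactly the kind of check that gets skipped when a parser is tuned for throughput.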


While perhaps true in theory, has there ever been a known case where game DATA from a multiplayer game was able to exploit a remote system and, say, obtain root access?

Stuff like rowhammer is very different vs something like a SQL injection on a website.



Plenty of gamers download mods that straight up execute code on their machines. They also download game tools that are just code.

They also run games that are just not secure. I know one that was storing user credentials in plain text in the registry (where no special permissions are needed to access it).


They also download pirate games from sketchy Russian sites.


In HPC, software is often optimized for the specific machine it is running on.


How many of these security leaks still need software patches for Sunny Cove? I thought Intel implemented hardware mitigations for pretty much all of them in the 10th generation chips?


Some security issues require software patches, but those CPUs also include hardware improvements which strive to reduce the overhead to a negligible level.

According to Anandtech [0] only Spectre V1 requires pure software mitigation.

[0] https://www.anandtech.com/show/14664/testing-intel-ice-lake-...
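For context, the pure-software Spectre V1 mitigation usually means clamping an untrusted index with a branchless mask, so that even a mispredicted speculative load stays in bounds. A minimal sketch in the spirit of the Linux kernel's array_index_nospec(); the helper names here are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Branchless index clamp: even if the bounds-check branch below is
 * mispredicted, the speculative load uses a masked (in-bounds) index,
 * so no out-of-bounds secret gets pulled into the cache. */
static inline size_t index_nospec(size_t idx, size_t size) {
    /* idx < size  ->  (idx - size) is negative  ->  arithmetic right
     * shift yields all-ones; otherwise all-zeros.  (Right-shifting a
     * negative signed value is implementation-defined in C, but
     * arithmetic on every mainstream compiler.) */
    size_t mask = (size_t)(((intptr_t)idx - (intptr_t)size) >>
                           (sizeof(size_t) * 8 - 1));
    return idx & mask;
}

static uint8_t table[16];

uint8_t load_checked(size_t untrusted_idx) {
    if (untrusted_idx < sizeof table)
        return table[index_nospec(untrusted_idx, sizeof table)];
    return 0;
}
```

The point is that the mask costs a couple of ALU ops rather than a serializing fence, which is why compilers and kernels favor it for hot paths.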


This sounds very promising. But it also sounds like something that PR would write. I think we'll have to wait some time until independent researchers get their hands on these chips and give them a thorough testing.


I'm also taking that with a grain of salt, since AT has a habit of not challenging Intel very often, issuing only a weak response even when something turns out to have been PR. Of course this may not be the case this time.


I read somewhere that it would take several years and generations for them to be fixed on the hardware side.


Some "hardware patches" are microcode changes that also result in performance degradation.


Maybe, but that's hardly relevant when comparing performance of these chips to e.g. Ryzen 2, as there is no way to disable these mitigations on Ice Lake anyway, it's simply what you get out of the box.


The big gotcha is that meaningful mitigation of ZombieLoad on affected chips requires disabling Hyperthreading (the Hyperthreading-based version of the attack cannot be fixed in microcode or software), but Intel has taken the position that this isn't actually necessary on normal consumer machines. So when Intel says it has hardware fixes for all this stuff, it's not clear whether that means it's actually fixed.
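On Linux, at least, you can check the kernel's verdict and enforce the full mitigation yourself. A sketch using the standard sysfs paths and boot parameters (not a complete hardening guide):

```shell
# What does the kernel say about MDS/ZombieLoad on this machine?
cat /sys/devices/system/cpu/vulnerabilities/mds

# Full mitigation, including disabling SMT/Hyperthreading, via a
# kernel boot parameter (added to the GRUB command line):
#   mds=full,nosmt

# Or turn SMT off at runtime (as root):
echo off > /sys/devices/system/cpu/smt/control
```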



