They are presumably doing the same as the Intel microcode update: offering the kernel a bit to enable/disable indirect branch speculation. Usually it will be disabled on entry into the kernel and re-enabled on exit, so it helps the kernel, not necessarily user programs spying on each other (retpoline covers those).
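For the curious, the mechanism is presumably a model-specific register that the new microcode exposes. Here's a rough C sketch of what the kernel-side toggle could look like, assuming the publicly documented IA32_SPEC_CTRL layout (MSR 0x48, IBRS in bit 0); it's only an illustration, not the actual Linux code:

    #include <stdint.h>

    #define MSR_IA32_SPEC_CTRL 0x48          /* speculation-control MSR */
    #define SPEC_CTRL_IBRS     (1ULL << 0)   /* restrict indirect branch speculation */

    /* Bare-bones wrmsr wrapper; only meaningful in kernel context. */
    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr" : : "c"(msr),
                     "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    /* Kernel entry: turn the indirect branch speculation restriction on. */
    static inline void ibrs_on_kernel_entry(void)
    {
        wrmsr(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);
    }

    /* Return to user space: lift the restriction again. */
    static inline void ibrs_on_kernel_exit(void)
    {
        wrmsr(MSR_IA32_SPEC_CTRL, 0);
    }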
As a consumer, will I have the ability to avoid using or disable the IBRS patches?
To be honest I don't really care about some theoretical attack that almost certainly won't be used to target me, and if they do, they can have it. I'd rather have the performance.
Servers don't have/need web browsers (in general). Some on-premise compute clusters don't even necessarily have internet access except SSH from a few trusted users, and exploits to run code locally would have to go through them or SSH (at which point you don't need Meltdown to access the data).
I believe there are valid use cases for not applying these patches.
Web browsers are potentially easier to protect, as I understand it. You prevent JavaScript from getting access to high-accuracy timers and rate-limit its access to syscalls. If a script is repeatedly calling methods that trigger a syscall thousands of times a second, just slow it down a little and the attack becomes impossible.
Disabling high-precision timers isn't a long-term solution. It's probably only a matter of time until someone finds a new way to make a high-enough-precision timer, and even if that wasn't the case, features like SharedArrayBuffer are just too useful to completely disable forever - and there's no easy way to make it impossible to create a high-precision timer with a SharedArrayBuffer (just have a web worker which increments a number in a shared buffer in a loop).
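To make that concrete: the trick is nothing more than a second thread of execution spinning on a shared counter, which then doubles as a clock. Here's a small native sketch with pthreads (in a browser the same idea uses a web worker writing into a SharedArrayBuffer):

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A thread that does nothing but bump a shared counter as fast as it can.
       Reading the counter gives a timer with roughly one-increment resolution,
       even if every "real" high-precision clock API has been coarsened. */
    static volatile uint64_t ticks;

    static void *counter_thread(void *arg)
    {
        (void)arg;
        for (;;)
            ticks++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, counter_thread, NULL);

        uint64_t start = ticks;
        /* ... operation being timed goes here ... */
        uint64_t elapsed = ticks - start;

        printf("elapsed: %llu ticks\n", (unsigned long long)elapsed);
        return 0;
    }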
Would throttling syscalls really prevent these exploits? Surely it would just make them slower to perform? Is there something I've not understood, or is the idea just to throttle syscalls enough to make the attack infeasibly slow? If it's the latter, remember that users might be on a website for half an hour at a time (if they're reading a long article or watching a video), so an attacker (whose code is running on the site, potentially through a compromised ad/tracking platform, or just XSS) would have a lot of time to extract whatever memory they want.
> and even if that wasn't the case, features like SharedArrayBuffer are just too useful to completely disable forever
That's only if you continue to participate in the shared delusion that the WWW can or should be a high-performance applications platform. Those of us who usually browse with JavaScript turned off can get along fine without SharedArrayBuffer or WebGL or WebSockets.
Honestly curious, why is it a delusion to want the WWW to be a more powerful platform that offers more than just being a web of shared documents? Why is it bad for there to be an array of purposes or applications available on the Web? Just because you don't see value in it, doesn't really make it fair to call others who do see value in it delusional.
The WWW started out as a document sharing platform, and that has proven repeatedly to be a horrible base upon which to build an application platform. There's technological ugliness in the form of kludges like WebAssembly and WebSockets and WebGL that are (badly) re-inventing large parts of the operating system. There are the usability nightmares caused by trying to make a web application imitate a native UI (or worse, trying to use web technologies locally as a UI toolkit a la Electron). The addition of application-oriented features has detracted from the original feature set: gratuitous overuse of JS for stuff that can be done in CSS and HTML is rampant, leading to less responsive sites with worse accessibility and worse battery life. And sooner or later, every attempt to shoehorn a real security model onto the Web has fallen short. Every month, the idea of a secure sandbox for running untrusted third-party code looks more and more like a pipe dream.
I fully understand how much the world has invested in making the web into an applications platform, and I recognize that many things of value have been built on that platform. But it's been a fundamentally flawed enterprise from the start, and the pitfalls are getting harder to ignore. We need to start taking seriously again the idea that untrusted code cannot be trusted. We need to stop re-inventing the operating system within the browser just because some programmers are too lazy to port their applications to more than one OS even when there are libraries to abstract away all the differences that don't require a UI redesign. We need to stop sacrificing so much efficiency for the sake of portability that doesn't live up to its purpose.
In order for a web browser suitable for document-oriented usage with limited interactivity to be secure and offer reasonable protections for your privacy, it has to take measures that preclude it from being a good applications platform. That conflict seems irreconcilable.
There are myriad reasons beyond programmers being "too lazy" to port applications to more than one OS for the web evolving into an application platform. Your rant is on par with suggesting we go back to horses and buggies because high-speed car accidents can be deadly. It vastly overstates the danger posed by JS specifically, completely discounts its superiority in filling the niche that it does compared to the previous status quo, and advocates a non-sequitur, anachronistic resolution.
> Your rant is on par with suggesting we go back to horses and buggies because high-speed car accidents can be deadly.
Don't be ridiculous. I'm not saying we shouldn't have cross-platform apps at all. I'm just saying we need a clear distinction between the WWW and any applications platform, just like we need a clear distinction between sidewalks and freeways. The web browser cannot do both jobs well.
> I'm just saying we need a clear distinction between the WWW and any applications platform, just like we need a clear distinction between sidewalks and freeways.
One problem is, no one seems to be able to define "web application" in a way that makes the distinction clear. Most sites that use javascript still use it in the context of displaying documents, so if you define any site that uses javascript at all as an "application," then most sites, including Hacker News, count as applications. But that doesn't actually fix any problems.
WebGL is such high performance that most examples posted on HN struggle on my phone, where I routinely play OpenGL ES 3.x games without any visible jank.
That ship sailed the moment JavaScript arrived on the scene, and Flash rode in on its coat-tails. The writing was on the wall: people wanted more from the web, envisioned more for it, than simply interactive documents, and they still do.
Granted, not everybody: some are more than happy with interactive documents. Great, that's awesome, and I hope you're happy, but you don't get to dictate the behaviour of the 99%.
And that figure is not an exaggeration: for my own sites about 1% of people have JS disabled. This post from Yell, over a much larger sample of data, suggests that 0.07% of their visitors disable JavaScript: https://blog.yell.com/2016/04/just-many-web-users-disable-co....
Granted, a large portion of that 99% won't know or care what JavaScript is, but they'll care if they lose the additional functionality it brings to the sites they use (though, honestly, I doubt they'd miss the ads, for which it's so often misused).
The problem I have is that 99% of my web use is the web as an interactive document - news and articles in particular - but those web sites still load an absolute ton of code.
Yeah, I'm certainly on board with that: vast amounts of ad-related assets and tracking code need to become a thing of the past. Still, for that I have uBlock, and I'm increasingly avoiding the worst offenders on that front anyway.
Still, sites I use on a regular basis that benefit from JS: GitHub (most evident with real-time updates to projects, issues, PRs); Office365; GMail, Google Drive, Google Analytics, Google Docs, the Adsense and Webmaster portals, and - of course - YouTube; Clubhouse (project management, heavier than Trello, lighter than JIRA); Azure management portal; pipeline apps such as TeamCity and Octopus Deploy.
Yeah, no dispute from me on that count, there are a lot of valid uses.
I am getting sick of what I presume is coin-mining code. I was reading an article about keyboards on some site the other day and after a few seconds my laptop started to sound like an aircraft taking off.
Yeah, removing high-precision timers isn't a fix-all. Imagine a loop where you test a single bit of some protected memory. If it takes microseconds to determine the value of that bit, that's really bad, but even if it takes milliseconds, seconds, or even hours, that's still really bad. This is equivalent to how fast an attacker can download your system's memory during an attack. If that rate is MB/s, that's catastrophic; if it's KB/s, that's still super bad; if it's bits per day, that might be enough to mean that only the highest-value systems will be targeted, but that's a far cry from actual security.
Plenty of legacy OSes will not receive any patches, plenty of current OS instances will not be updated by technically incompetent owners, and either way attacks using this exploit will continue to be profitable.
Except that the smallpox virus doesn't have the brains to team up with a thousand other viruses, create an exploit kit, test your immune system, and then deploy only the one specific tool most likely to bring you down. The immunology metaphor breaks down once we realize that electronic pathogens are deployed not by innocent rats but sentient human vectors.
Smallpox also doesn't know the difference between Joe Blow and Mark Zuckerberg. Humans have the ability to pick the most valuable targets and concentrate resources on exploiting them.
That's not entirely without precedent. Look at macOS: there's still comparatively little malware for it compared to Windows, even though macOS has had quite a few more vulnerabilities in the OS itself than Windows.
But everyone here is already thinking about the string of recent macOS vulnerabilities. Even just in the past few months, Apple has been hit by a bunch of fairly major vulnerabilities while Windows hasn't been in the news for a while. What bugs me, though, is that there's a handful of Apple vulnerabilities that weren't caused by an implementation error but by a flawed design.
In particular, the most notable example of this would be the API used by System Preferences that would let any user create arbitrary files with arbitrary permissions owned by root. Obviously the first choice with that is to create a setuid binary, but even if the permissions weren't user-provided, this is something that should jump out during the design process as a bad idea.
As for a comparison of just recent newsworthy macOS vulnerabilities compared to Windows I'll just cite Hacker News as far smarter people than I have commented on this subject to death.
I just used the following query and only looked at the links on the first page.
The macOS results span the last 3 months, the Windows results span the last 8 years. Mac sales only provide about 8% of Apple's revenue, which is overwhelmingly dominated by iPhone sales and App Store revenue. macOS has become the red-headed stepchild, and it's really starting to show.
Yes, from what I've read on the LKML, IBRS has to be enabled and disabled by the kernel at the appropriate moments, and there will be a kernel command-line flag to disable it (or leave it enabled all the time).
Spectre (and maybe Meltdown? I'm not sure) can be done in a browser with JavaScript.
Imagine if a random bad advertisement or injected script on a popular website could silently steal all of your stored session information for other sites.
It requires a lot of CPU so there isn't anything silent about it. If some bad advertisement absolutely kills the tab you're looking at, I hope you'd notice. If you're using a multi-process browser (like Chrome) the impact is pretty minimal.
That's not to say it's not bad -- the exploit works remarkably well -- but it's not fast or easy.
Imagine: you open the browser, leave it open on some page, and then leave your computer for a few hours or overnight. You have no way of noticing what a rogue ad did while you were away. Your passwords may be gone.
Besides the "hiding" strategy (bots hide themselves under the windows clock. It's a wonderful world out there), I'm afraid it's unlikely you'll notice. Reliability of modern consumer software ain't all that great.
I never liked those "if" statements! "IF"? Why is there an "if"? It implies you're uncertain, or have hesitations, doubts even? That's rookie shit. I never do. Straight and happy path for this cowboy, always.
If you have conditionals in your program, it means that you're trying to do too many things at once. Follow the UNIX philosophy: do one thing, and do it well!
No branches if you movfuscate[1] your code! You do end up with that happy path, that straight path, that clean path that's wholly immune to those pesky branching bugs.
I've heard back from AMD... At least this PR person is saying: "'Disabling branch prediction' is definitely not an accurate description and we are working to address with SUSE now."
114 changed files with 12,403 additions and 23 deletions.
That's a pretty big patch (although some of those lines are .patch file pre- and post-amble). I feel bad for the people that had to develop/qa/signoff all of that in such a short timeframe. There's got to be at least one mistake in there somewhere.
Almost all of this commit is .patch files. Going through them, it seems to be about 10 from AMD, a few from IBM (for S390 processors), a bunch from Intel, and a few from kernel core maintainers. It's a giant workload, but seems to have been distributed pretty widely.
This related RHEL article contains some information about the various mitigations they use (PTI, IBRS, IBPB, i.e. Page Table Isolation, Indirect Branch Restricted Speculation, and Indirect Branch Prediction Barriers) and their relation to the Intel and AMD microcode updates: https://access.redhat.com/articles/3311301
It does not really answer all my questions about the microcode updates, but I have not seen any more comprehensive writeups yet.
> AMD ships microcode update to disable branch prediction
Mods, please fix the title. The patch adds the ability to disable indirect branch prediction in certain circumstances. This is not nearly the same as disabling all branch prediction, always.
Surprisingly, due to architectural limitations, it isn't affected by either of the issues in spite of having some speculative-execution capability. The G5 might be, but it appears the G3 and G4 are good to go, based on this detailed writeup I found.
"...the G3 and the G4, because of their limitations on indirect branching, are at least somewhat more resistant to Spectre-based attacks because it is harder to cause their speculative execution pathways to operate in an attacker-controllable fashion (particularly the G3 and the 7400, which do not have a link stack cache). So, if you're really paranoid, dust that old G3 or Sawtooth G4 off. You just might have the Yosemite that manages to survive the computing apocalypse." [1]
The way I read it, it's not immune, just harder to attack. Furthermore, Meltdown and Spectre are just two specific ways to attempt a whole class of attacks and could be improved upon especially now that the whole world's attention has been brought upon it.
So yeah, you're probably not going to have issues in reality with a G3, especially given its small presence in modern computing, but I wouldn't choose it thinking it's immune either.
I wonder (my evil devil's advocate) if there are satellites or other hard-to-patch vital facilities that could be compromised with either of these vulnerabilities.
It's not that hard. I don't browse the web from my computer. To look at pages, I send mail to a daemon which runs wget and mails the page back to me. It is a very efficient use of my time, but it is slow in real time.
I just finished up a senior Computer Architecture class last term and a flavor of this was a popular question for past exams.
For a more fun answer than an exam, consider that the real estate on a CPU is typically dominated by two things: cache and out-of-order/speculation hardware (predictors and such). So turning off speculative execution would be like turning off a third of your CPU. As to the actual performance impact, it'd be massive. There's a reason why no modern processor is in-order.
> There's a reason why no modern processor is in-order.
No modern performance-oriented processor, that is. Plenty of low-power, area-constrained embedded ones still are. The Intel Quark series of SoCs, for example. Those are basically a very fast 486, with the same 5-stage in-order pipeline. (For some amusement, get Quark datasheet and compare it with the 486 datasheet --- some of the microarchitecture diagrams are identical, and some clearly had only "Intel486" replaced roughly with "IntelQuark".)
The Mill CPU, which is currently being developed, is in-order, and its performance target is higher than current Intel CPUs while using a tenth of the power.
Out-of-order execution is just one way that a hardware vendor can choose to increase performance. Mill chose another way.
I hope that's related to some specific branch prediction type, and not disabling branch prediction completely. If it's not a mistake in the SUSE comment, it could be painful: branch prediction is fundamental to modern out-of-order CPUs delivering their potential.
From what I understand of this set of bugs, code executed speculatively by the branch-prediction logic can access privileged memory that it normally wouldn't have access to, and the CPU can't generate a protection fault because it isn't known at that time whether the code really would have tried to access that memory.
So would it be possible (through a microcode update) to mask the memory values, so that ANY read from privileged memory gets a null value returned?
Or could that cause issues, due to the fact that a branch that doesn't currently have access to a memory segment, may be granted legitimate access to it by the time it would normally be executed?
The bug where speculatively-executed code can access privileged memory is Meltdown, and it doesn't affect AMD systems since apparently they did the sensible thing and blocked even speculative access to unauthorised memory addresses. Unfortunately, Meltdown apparently can't be fixed in microcode on affected Intel CPUs, so they're stuck with a performance-killing workaround that AMD CPUs don't need.
I don't see where the contrary is stated; in a nutshell, their architecture is not sensible and is deeply flawed with regard to security if they can't do that check without throwing all the performance out the window.
No, my point was, AMD performance in this situation may simply be worse because they didn't have time to implement this efficiency. Not because AMD was somehow prescient about the security issue.
A fence of some kind that stops speculation on accesses to protected address space would be better. Trying to stub out values would mean you have to throw the speculation out anyway, and a fence could be set up to prevent the access. This might still allow an attack on KASLR, since you could use the timing of the speculation to determine whether a fence got hit or not.
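For what it's worth, the software-level version of that idea is roughly what the vendor guidance for the bounds-check variant of Spectre amounts to: put an LFENCE between the check and the dependent load so the load can't issue speculatively. A rough sketch (the array and function here are made up for illustration):

    #include <stddef.h>
    #include <stdint.h>
    #include <emmintrin.h>   /* _mm_lfence */

    static uint8_t array[256];
    static size_t  array_size = 256;

    uint8_t read_element(size_t untrusted_index)
    {
        if (untrusted_index < array_size) {
            /* Without a barrier the CPU may speculatively issue the load
               below before the bounds check has resolved.  LFENCE keeps
               later loads from executing until prior instructions retire. */
            _mm_lfence();
            return array[untrusted_index];
        }
        return 0;
    }

    int main(void)
    {
        return read_element(0);
    }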
It's just one of many timing attacks possible on x86. Cache-miss timing has been used in the past to create covert channels and exfiltrate keys. Memory controller issues have been explored too. Modern general purpose CPUs sacrificed some security for performance.
There are few security or performance guarantees. It is not an architecture geared to realtime and fully deterministic operation.
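To illustrate how little machinery a cache covert channel needs, here's a minimal user-space flush+reload sketch: the "sender" encodes a byte by touching one of 256 cache lines, and the "receiver" recovers it by timing loads. The transient-execution half of Meltdown/Spectre is deliberately left out, and the names are made up; this only shows the exfiltration side:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

    /* One page (and thus one distinct cache line) per possible byte value. */
    static uint8_t probe[256 * 4096];

    static uint64_t time_load(volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;                      /* the load being timed */
        return __rdtscp(&aux) - start;
    }

    int main(void)
    {
        uint8_t secret = 42;           /* stands in for a transiently-read value */

        memset(probe, 1, sizeof probe);      /* make sure the pages are backed */

        /* "Sender": flush all candidate lines, then touch exactly one. */
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * 4096]);
        *(volatile uint8_t *)&probe[secret * 4096];

        /* "Receiver": the fastest load reveals the byte.  (Real exploits
           permute the scan order to dodge the hardware prefetcher.) */
        int best = -1;
        uint64_t best_time = UINT64_MAX;
        for (int i = 0; i < 256; i++) {
            uint64_t t = time_load(&probe[i * 4096]);
            if (t < best_time) { best_time = t; best = i; }
        }
        printf("recovered byte: %d\n", best);
        return 0;
    }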
> Modern general purpose CPUs sacrificed some security for performance.
I don't think that's a fair way of stating it. I think it's a case of independent performance improvements in branch prediction, cache architecture and out-of-order instruction retirement that on their own have negligible security impact but when combined in a single system reveal emergent behavior.
Privileged memory access only happens in Meltdown. Spectre is much more difficult to prevent (it seems impossible short of killing speculation altogether), but it can only read arbitrary data within its own address space.
Spectre is simple to prevent: just don't modify the cache while speculating, and the CPU can still keep all the same branch prediction and speculative-execution pipelines that exist today.
It doesn't even have to be that drastic: the CPU can still fetch any memory whose address isn't calculated. Both sides of a non-computed jump, for instance, or the next line of an array that's being scanned in order.
Some memory references would be sequential instead of overlapped, but it wouldn't be the end of the world.
What I'm saying is that speculative execution without modifying the cache is safe from a Spectre reading arbitrary memory, the main concern.
So start from there. That's not a terrible starting position, with all of today's predictive logic and speculative pipelines working at full speed on the fastest code and partially working as code uses more uncached memory accesses.
Then you add back in things like speculatively prefetching code and global variables (addresses hardcoded in the instruction not calculated) and you're nearly back to 2017 performance. The problem is widespread and severe, but we're not going back to some dark ages of computing.
This is all getting rather silly, right? I was the first one to mock Richard Stallman for being too radical, dramatic and paranoid, but I can now confirm he wasn't. Every word he's said in the last 15 or so years is true.
This all simply makes our IT-related work look like kids playing make-believe security.
I am literally trying to figure out what other processors exist that can be used in everyday BSD/Linux work. Sparc? Some ARM branch? Any suggestions welcome.
Being free (or open) source has nothing to do with security: Linux/BSD still have 0-day rootkits, yet all the source is available; modern JS software has all the tracking in the world, yet all the source is available (VSCode telemetry). Even RISC-V, being fully open, will most likely be susceptible to Spectre attacks.
Instead of false promises we need to develop better engineering: verification, computational math, etc. And also laws: no free/open license in the world will stop companies from tracking, but a fine as big as 4% of total revenue (EU GDPR) will make them comply.
It seems like in the case of Meltdown and Spectre, FOSS may have /everything/ to do with security. FOSS with the addendum that you don't execute arbitrary javascript, anyway.
In a world where personal computers only ran code the user could safely trust, these exploits would only be full attack vectors when applied to multi-tenant computing environments.
I'm not arguing that FOSS systems are inherently more secure than closed systems, but when it comes to exploits that require arbitrary code execution to be effective, Stallman really was/is right. Between a NoScript browser extension and a system whose source code is entirely available to you, you've basically got Meltdown/Spectre immunity.
As much as I would love to believe that I personally can sanely inspect all the code I run, I can't. One day I run apt-get install git and I have to trust all the code that is being pulled onto my machine (and there is a lot of code there), simply because my lifespan is probably shorter than the time required to audit all the code in git's dependencies (ssl, crt, kernel, etc.). So given I can't possibly audit everything, I need to trust, and if I need to trust, then being FOSS might make me feel a bit more sure about the safety of the code, but in reality it might not even make such a difference: apt-get pulls in binaries, and who knows what/how/where these binaries are made... the trust chain is simply not there.
I personally trust FOSS software more not to do dumb things (like, hopefully my password manager doesn't report all my passwords to NSA), but simply being FOSS doesn't make it any more secure :)
Reproducible builds (https://reproducible-builds.org/) is an initiative to fix that part of the problem. With reproducible builds, a third party with the same source code and compiler will get an identical binary, so we can have independent entities certifying that the code you download with apt-get was built from the corresponding source code.
RISC-V is certainly not vulnerable to Spectre, as all the real-world chips are in-order designs, meaning post-branch instructions never execute until after the branch is decided.
Out-of-order variants are still in the design stage, so they may come up with a way to isolate or invalidate speculation effects on branch misprediction.
> Being free (or open) source has nothing to do with security
My Windows 10 machine got the Meltdown patch yesterday, in an out-of-band, undelayable update. Then it restarted itself during the night and this morning it's secure. My Ubuntu laptop is still unpatched and I'm not sure when the 4.15 upgrade will come to 16.04.
What happened to all the theorem-proving software that's supposed to absolutely guarantee the correctness of code? I guess it's only as accurate as the assumptions fed into it?
The code conforms to the specification. It's not buggy.
The specification does not conform to the user's expectations or documentation. It's flawed.
Theorem provers can only prove that the code conforms to the specification, not to the user's expectations or documentation. They can be very helpful, but they aren't magic and can't interpret non-formal specifications.
Ironically enough, RMS could probably be using the most side-channel-leaking CPU on his computer and it wouldn't matter... because he is the sole user and runs only his carefully-chosen free software on it.
His thoughts on cloud computing, however, are very much worth listening to.
He is quite unusual in his habits, but considering that he is probably a more high-value target than the majority of us, I'd say his approach has served him well.
I don't think that was the point of the parent comment. Obviously Stallman's system is not perfectly secure, but it may well be immune to exploits that require arbitrary code execution like Meltdown and Spectre.
To be absolutely safe, you just need a processor that is simple enough not to use any of these performance enhancing techniques (ie out-of-order execution, speculation, branch prediction, caching.) Of course it might be much slower than you are used to but if security is your primary concern it makes sense.
Only caching is a problem. Indirect branch prediction can be "flattened" at a cost by checking the TLB on context switches, presuming the kernel sets it up nicely per process. The other techniques are fine. As usual, secure code needs to be written with timing attacks in mind, and they don't change much there.
Apart from the usual 'open source' and 'who controls the software' dogma, I find his endless harping on Libreboot and Intel ME very relevant to the general closed-box-that-we-trust problem. Trouble is, now I think he wasn't paranoid enough.
That’s a different issue. An important one, I agree, but kind of orthogonal to the introduction of a hardware bug, which could equally happen in an open architecture.
His "origin story" for free software relates to a printer, and being unable to see the code (I think the driver rather than the firmware). His whole point is that software can be changed, and if only a vendor can touch the software for hardware you've purchased then you're entirely beholden to them.
Of course on some occasions the software isn't going to be able to compensate, but Stallman's ideas about why free software is important are grounded in some very practical realities.
Having the source for anything in the cpu here won't help when the architecture design itself is at issue. If you have to respin new silicon, all the software in the world won't help you.
Ever see a wire on a PCB going from one end of the board to the other? That's the hardware equivalent of: "we can't fix this properly, but we can hack a fix on until we can fix it right in the next revision."
We're talking about microcode here - this isn't a question of hardware.
If AMD can fix something like this, Stallman would say the end user should be able to as well. He has a point, and the number of useless IoT devices we're already seeing is testament to that fact.
These CVEs have been exploitable for a long time. This is the first public discovery of this golden key to every single computer, everywhere... but there has been ample time for this to have been catastrophic.
If Intel were open about, yes, even the microcode, maybe we would've seen this issue sooner and more openly. What we have is paltry in comparison to the way it would be if, in a Stallman universe, software were 100% open, always.
That's just not true. Having access to the microcode isn't going to help you here; there won't be a line of code you can point to and say 'this is a bug!' Rather, it's the assumptions that all chip-makers made that were wrong, and those choices have been out in the open and known about all this time. No-one was demanding that speculative execution be hardened against timing attacks because no-one had discovered that such timing attacks were even possible.
Without wanting to sidetrack the conversation too much, I'm guessing I'm right in my assumption that implementing x86 cores on an FPGA (so we can add wire traces to the CPU after the fact) is still way too slow and expensive to even be close to a reality?
Correct. FPGA simulation of a 3.0 GHz processor means you'd be waiting on the order of weeks to boot into Linux.
I work for a company that works with Intel on new things. Let's just say the FPGAs they use are... really expensive, and despite being able to simulate a design relatively quickly, are still very slow.
By expensive, I mean I found out that $20 million per FPGA simulator was on the low end. And I think that would simulate at a rate of about 10 MHz for the number of transistors/internal setup present.
FPGAs are cool, but a poor choice for fixing this issue.
You could probably do it for up to an 80386 with some $400-ish FPGAs.
With some encouraging sentences to keep on making the world a better place.
He felt honored. He asked me to post this for him.
"Now that you recognize these problems are real, how about joining in the work to fix them? See gnu.org/help for a list of many different kinds of work that we need (programming is just one of many), then pick one and help!"
That’s true to some extent, but it’s not like open source software is free from security issues. Open source might make it easier to find and fix issues, but it can’t avoid them entirely.
IIRC, first generation Intel Atom CPUs are safe from Spectre due to their nature of being an in-order execution design. Those are the most modern/powerful in-order chips I can think of and now I regret selling the old Dell netbook I had in college.
I still have my old MSI Wind U100 from 2010-ish. I was lately wondering what to do with it. It’s already running Mint, so I guess I’ll make it my main machine. :-)
Stallman's argument is a good one against the Intel ME issues, but I don't think it applies to Meltdown.
This is a specific side-channel attack in a super complicated system. Even if we had OSS x86/64 processors, this is more similar to the protocol issues found in OpenSSL.
A guy living in a cardboard box in an alley rambling on is just as likely to hit upon a semblance of relevance from time to time. Doesn't make them any less of a nutjob or any more worth listening to.
That's not really relevant though. rms is not "some nutjob living in a cardboard box" and his thoughts have proven to be extremely prescient and insightful over and over again. He's definitely "worth listening to" even if you don't agree with him on every single point.
I agree, in principle, but when the nutjob in question correctly predicts a dozen things in a row with no misses, you have to wonder if you might be missing something important.
Disabling branch prediction isn't mitigation; it's effectively a recall without having to worry about the logistics, because if you disable branch prediction completely you might as well throw Zen-based CPUs in the trash.
I really hope that this is just poor wording and that branch prediction is either disabled only in extremely rare circumstances, or that they've just exposed some control mechanism, like Intel did, which allows you to control the branch speculation.
EDIT:
AMD's response was that mitigation from their side isn't needed for SPECTRE either:
I really wonder what has changed, and if they are now vulnerable to variant 2, have they underestimated Meltdown as well?
I'm also looking forward to people looking into the victim cache that AMD implemented with Zen, and to seeing whether Infinity Fabric and the added latency between CCXs can be exploited for side-channel attacks.
I might be reading that link wrong, but it looks like it says that Variant 2 is a possibility, just not demonstrated yet. This may be preemptive on their end.. or maybe they were able to replicate it in-house.
That's because those previous assertions were incorrect.
AMD is not affected by Meltdown, true, but Meltdown is also easier to fix. The other issues do affect AMD and are MUCH harder to fix (and also harder to exploit, it should be pointed out).
Meltdown was disclosed first, so everyone was excited to bash Intel. But that was premature.