Without knowing any additional details of the agreement, I don't see how Intel will let this slide. The terms of their agreement are most likely violated at least in spirit, if not in letter. Not only that, but it will have to get past export controls that have very recently stopped exports of Intel processors bound for China; I have no doubt regulators will find a joint venture to develop similar processors an easy target. If it gets past that, I would expect a lengthy lawsuit from Intel. Seems very risky from AMD's perspective.
All of their cross licensing agreements are private so everything is speculation. That being said...
In 2009 AMD divested itself of its manufacturing arm by spinning it off into GlobalFoundries (GF), a joint venture with the Advanced Technology Investment Company (ATIC). Intel sued AMD, GF, and ATIC for violating the terms of AMD and Intel's prior cross-licensing agreements.
Later that year Intel and AMD entered into a Settlement Agreement to halt several ongoing lawsuits the two parties had against each other, including the AMD/GF/ATIC lawsuit. Section 4 of the settlement agreement contains the mutual releases each company agreed to. Section 4.2 is Intel's release, and it states the following:
4.2 Intel Release. Except for the rights and obligations expressly created or reserved by this Agreement and by the agreements described in Section 3.7, Intel does hereby irrevocably release, acquit and forever discharge AMD, GF and ATIC from any and all Claims that Intel ever had, now has or hereafter may acquire against AMD, GF and ATIC, whether known or unknown, on account of any action, inaction, matter, thing or event, that occurred or failed to occur at any time in the past, from the beginning of time through and including the Effective Date, including, without limitation, any and all Claims based on or arising out of, in whole or in part, the Actions or the facts underlying the Actions and any claims that could have been raised in the Actions up to the Effective Date. All third parties included within the scope of the preceding release, pursuant to Section 1.4, are expressly agreed to be third-party beneficiaries of this Agreement.
It seems somewhat relevant to today's announcement, as it appears to release AMD from litigation over any future joint ventures it might undertake. However, I'm not a lawyer, and it's entirely possible that I'm misreading this.
Good info! This seems to confirm what some other sources are saying about the license agreement being null as of last year, and opens the floodgates to this sort of deal, as well as real buyouts of AMD. I'm very surprised this hasn't been more widely reported, though I can see why both parties wouldn't want this to necessarily be in the public eye until it benefited them most. It is also possible I missed the reporting on it when it came out. In any case, thanks!
> on account of any action, inaction, matter, thing or event, that occurred or failed to occur at any time in the past, from the beginning of time through and including the Effective Date
I think this is the key part here - Intel is basically agreeing to never sue about anything that happened through and including the Effective Date but it doesn't say anything about things that happen after the effective date. I really doubt Intel would agree to such a thing.
So unless AMD made the licensing agreement 7 years ago and kept it secret, it wouldn't be covered by that agreement. That would be an interesting fuck-you the next time one of these settlements happens, though.
For reference, here's the full text of the settlement (at least what was filed publicly with the SEC) [1]
It appears that during the settlement they made a new patent cross-license agreement, which of course is not public. The new agreement is what sources allege to have expired in 2015, which is what would allow AMD to license its IP to third parties. It could also be that the patent agreement is more lenient than before, since it expanded to include at least one third party in the form of GlobalFoundries. If that is the case, though, one wonders why this hasn't already happened.
Someone should really work towards turning a legal document into a testing framework, against which a user can send a query describing an action and get back whether that action is allowed by the document. :)
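A minimal sketch of what that could look like, with entirely hypothetical clause names and rules (real contracts would be vastly harder to encode): each clause is a predicate that can permit an action, forbid it, or stay silent.

```python
# Toy sketch: encode contract clauses as predicates and query whether an
# action is permitted. All clause names and rules here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "license_ip", "acquire"
    counterparty: str  # e.g. "joint_venture", "third_party"
    date: int          # year the action occurs

# Each "clause" maps Action -> True (allowed) / False (forbidden) / None (silent).
def clause_release_before_effective_date(action, effective_date=2009):
    # A release only covers conduct up to the effective date.
    if action.date <= effective_date:
        return True
    return None  # silent about future conduct

def clause_no_transfer_to_third_party(action):
    if action.kind == "license_ip" and action.counterparty == "third_party":
        return False
    return None

CLAUSES = [clause_release_before_effective_date, clause_no_transfer_to_third_party]

def is_allowed(action):
    """An action is allowed unless some clause forbids it."""
    verdicts = [clause(action) for clause in CLAUSES]
    return False not in verdicts
```

For example, `is_allowed(Action("license_ip", "joint_venture", 2016))` comes back permitted, while the same license to a `"third_party"` is forbidden by the second clause.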
It has nothing to do with the ROSS link, but it shows just how great Watson could be at most things that involve natural language processing and more or less simple questions whose answers could easily be cited from various sources.
It wouldn't surprise me if you could ask ROSS, or any other legal aid based on Watson, to go over a contract and ask "if I do X, what would happen?" Since it has access to both legislation and case law, it might even be able to present you with a probabilistic outcome based on previous lawsuits involving similar contracts and circumstances.
Another follow-up, the actual cross-license agreement is here[1]. From my reading of it, it can expire, among other times, when the last of the patents it covers expires. That implies to me that it is probably still in effect, since any patent covering AMD64 most likely is still valid in 2016. I don't know what or who to believe anymore.
Also, another thing that everyone points out when there's a discussion about AMD being acquired, is that "Intel wouldn't allow it" because of that clause in the license agreement. I think that argument makes no sense.
The only reason Intel hasn't completely crushed AMD in the market so far (which it could have done by keeping prices low until it eliminated AMD) is that it doesn't want to become a monopoly in the PC market. You think Google has it bad in the EU right now? It would be far worse for Intel (which has already been fined $1.5 billion in the EU for anti-competitive practices against AMD).
The second reason why it doesn't make sense is that AMD owns the 64-bit ISA rights. I'm guessing Intel would still need that to operate in all markets except IoT...
So to end that argument once and for all - of course Intel would allow AMD to be acquired, one way or another. Worst case scenario, whoever acquires AMD would have to pay slightly higher royalties to Intel for the next 5 years.
Getting back on point, as the ExtremeTech post mentions, it's likely this is only IP that AMD alone owns anyway.
>> The second reason why it doesn't make sense is that AMD owns the 64-bit ISA rights. I'm guessing Intel would still need that to operate in all markets except IoT...
It would be interesting to see a chip that dropped support for the legacy instruction sets. If a server is running software fully built as 64-bit, how much IP could be dropped from the underlying design? There may be a bit of performance on the table there as well, but probably not much, and certainly no significant die-area savings these days.
The long mode 64-bit instruction set is similar enough to the normal 16/32-bit i386 ISA that dropping the 16/32-bit part would make no significant difference, either in terms of IP or in hardware resources.
On a similar note, AMD's first 64-bit chips had some intentional limitations in backward compatibility with the i386 ISA (that's why Microsoft says it is not possible to run 16-bit programs on 64-bit Windows due to hardware limitations), but newer chips do not have these limitations, probably because any cost savings were insignificant (which makes perfect sense given that most of the die area today is caches, not logic).
I'm not sure anymore that EM64T actually relies on any AMD IP, if the articles are correct and the Intel-AMD cross-licensing agreement expired last year.
Intel has basically had a de-facto monopoly on the PC and server space for the past 5+ years so I don't really buy that argument either. The buyout pressure was real enough when AMD was splitting into AMD + GloFo that Abu Dhabi couldn't buy a controlling stake in AMD, much to their chagrin. That might be null now if there really is no more agreement, in which case I would be surprised if there was NOT a buyout offer in the next few years.
Edit: Also, as far as I can see from googling, the Rockchip deal doesn't actually exchange any IP, it simply allows them to customize existing Atom cores into SoCs.
>Intel might challenge the AMD deal, however the China partner apparently already has access to the technology. The Intel/AMD cross license was a five year deal that the companies failed to renew in 2015.
>The license granted rights “intended to cover only the products of the Licensed Parties.” The heavily redacted document lists a dozen exceptions to its terms, most of them not made public. Intel declined to comment on details of the patent agreement.
>“All the technologies licensed [to the China joint venture] are AMD technologies and there are no encumbrances,” said AMD’s Su. “We have closed the deal and have started execution on it,” she added noting she also doesn’t see any regulatory issues. (Recently, U.S. regulators have increased their vigilance in turning down high tech investment proposed by China.)
I have seen speculation that many of the relevant x86 patents expired in 2015.
Well according to this other article, Intel and AMD's patent cross-license expired in 2015[1]. I'm not sure if that was the only agreement they had relating to x86 IP, but if so that makes this deal more likely to go through, if they can get past the export controls.
I am slightly curious how much actual innovation (how many new patents) there has been since 1996 that would need to be cross-licensed by either company... the bulk of legacy x86 (not x86-64) is expired, excluding certain extensions.
Yeah, I have no idea and I also suspect that very few new patents by either company are directly related to x86. Intel is still one of the most prolific patent recipients though, so one might have to dig through them to see. I suspect they are mostly related to fab design and lithography, though.
It would be microarchitecture, SOC, or commercial application patents. Even one or two critical ones could be good ammo. I doubt Intel or AMD stopped at 1 or 2, though. ;)
Indeed. Here's how the author of the article thinks AMD might skate:
Also not immediately obvious is where this falls under the Intel/AMD x86 cross-licensing agreement. AMD has of course done their own research.... In the meantime and at first glance, because this is a joint venture, it would appear that AMD is in the clear here as they aren’t giving the technology to another business, but rather are using it as part of a new line of products they are developing, albeit in conjunction with an outside firm.
I have this dystopian Foxconn style vision of sweat shops where they force people to unwind stacks in binaries compiled without frame pointers because they're using a register starved architecture and they think that using SP as a general purpose register will make their code run faster.
Now that I think of it, I'm beginning to understand the suicide nets.
Whenever people mention the Foxconn suicide nets, I remember how nobody noted that the measured suicide rate of Foxconn workers is and was lower than the general population, and I laugh at humanity's prospect of ever rising above its animal nature.
I ran the numbers based on the public data and it came out that the Foxconn factory had a suicide rate similar to a Caribbean island country such as the Bahamas.
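For what it's worth, the arithmetic behind that kind of claim is simple; the figures below (roughly 14 deaths among ~400,000 workers at the Shenzhen campus in 2010) are commonly cited estimates, not verified data:

```python
# Back-of-the-envelope suicide-rate arithmetic. The inputs are commonly
# cited 2010 estimates for the Foxconn Shenzhen campus, not verified data.

def rate_per_100k(deaths, population):
    return deaths / population * 100_000

foxconn = rate_per_100k(14, 400_000)
print(round(foxconn, 1))  # 3.5 per 100,000
```

A rate in the low single digits per 100,000 is indeed in the range reported for some Caribbean countries, and well below most published general-population figures.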
I am curious whether this was caused by the export restrictions on Xeon & Xeon Phi chips to the Chinese government, due to concerns about their supercomputers.
If that is the cause, it could be interesting, as we may end up with AMD (and the rest of us) getting the benefits of heavy research into upcoming processors, simply because they have become a strategic material for the Chinese government.
x86 boxes are about to get a lot cheaper, since China has cheap RTL engineers, fabs, and assembly workers. Hopefully it gets as interesting as ARM SoCs and boards have in Shenzhen.
China has caught up big time in the fab game over the past few years, though SMIC doesn't have very much capacity for cutting-edge production and it is basically only manufacturing Qualcomm products for 28nm afaik. You'll also notice they didn't mention yields at all there, which are probably not very good for 28nm, but then again no one has good yields at first, they'll catch up in time.
Haha good catch on the yields. Yeah, they probably suck and are strongly encouraging Design for Manufacturing flows to keep losses lower. That's always true for new fabs.
Contrary to popular beliefs, though, most design starts are still in 110-350nm range with a good chunk on 45/65nm and now interest in 28nm. China has all that covered. So, they're only missing a small chunk of market. That said, the cost of 28nm fab investments means they really need more projects on 28nm.
Personally, I thought they were doing all the semiconductor stuff as anti-subversion and economic imperialism. So, they'll probably keep it and work on lower nodes going even at a loss. Just a hunch. ;)
So when nVidia wanted an x86 license for the Tegra K1, should they have gone to AMD? That would probably have backfired on their relationship with Intel. Do they have one?
It's worth noting that a lot of 32-bit x86 is well beyond patent protection at this point... the Pentium was released over 20 years ago now. The only licensing needed is for certain innovations since 1996, which are mostly x86-64.
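The back-of-the-envelope arithmetic behind that: a US utility patent generally runs 20 years from its filing date (older filings used 17 years from grant; term adjustments and extensions are ignored here), so anything filed around the original Pentium era has lapsed.

```python
# Rough patent-expiry arithmetic. US utility patents filed on or after
# June 8, 1995 run 20 years from filing; older ones ran 17 years from
# grant. We ignore transition rules and term extensions for this sketch.

PATENT_TERM_YEARS = 20

def expiry_year(filing_year):
    return filing_year + PATENT_TERM_YEARS

# The original Pentium shipped in 1993, so patents filed around then
# lapsed by about 2013; anything filed through 1996 was gone by 2016.
print(expiry_year(1993))  # 2013
print(expiry_year(1996))  # 2016
```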
> For servers, really? I don't get it, they could just use RISC-V.
x86 is a mature, established architecture; RISC-V is not. Indeed, I think the privileged mode architecture is still under development (i.e., what you need to run an actual OS rather than just embedded applications). I'm sure there are many other features x86 possesses that RISC-V lacks, too.
I would have thought commercial RISC-V would go into IoT/embedded applications first: fewer features required, lower costs (less to lose if it doesn't work out), and fairly static software supplied by a single vendor or a small group of vendors.
Ah yes, because everyone runs servers based on RISC-V and so -- oh wait, they don't.
I'm as big a lover of RISC-V as anyone (and am excited about the possibility of having a completely open CPU which you can load into a free FPGA). But it's delusional to think that x86 is "legacy" at this point.
x86 is dominant in the server marketplace because Intel makes the best server chips for the vast majority of server use cases. Full stop.
If somebody else comes out with a chip that's substantially better than x86 for servers, I expect the industry to transition quite quickly, just like it abandoned SPARC in the 90's.
The industry doesn't produce readily available SPARC systems, except for NEC, that European space chip consortium, and what's left of Sun inside Larry's dungeon, but SPARC is still evolving and is in a similar space to POWER. There are certain workloads that benefit very much from POWER and SPARC, but they aren't your general-purpose systems that work well in a common-denominator kind of way. There just aren't enough SPARC and POWER systems around to be economically viable as an x86 alternative, but technologically they're good choices for many scenarios.
The issue is that manufacturing CPUs at scale is very costly, and short of large scale, the per-unit cost to cover tooling is higher: lower margins, less profit, higher cost to the customer. Per-unit cost and relative throughput are why Intel rules the roost for the most part, even in servers.
I've looked at some of the costs of ARM servers, and they're way more expensive than they should be (compared to the cost going into phones)... I would think that may eventually become competitive... though Intel has a bit of room to lower pricing and still make money to compete.
But the chip you get in a smart phone is not what you get in a server. For one, I/O is very different and it includes things like 40Gb network silicon.
You mean you want x86 to only have a "legacy software user [sic] case." Your claim is not backed up by reality whatsoever--just ask every Windows user.
Well, Windows has existed for other platforms, especially in the server world. And for end users, you have things like Windows RT, which is still supported.
The serious problem of emulating x86/x86-64 is that it has a strong memory model, while most other platforms (ARM, PowerPC, Itanium, Alpha) have a weak memory model. Only SPARC (SPARC TSO) and zSeries have a similarly strong memory model as x86/x86-64.
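A toy way to see why this matters for emulation is the classic "message passing" litmus test. This sketch is an abstract model, not real hardware semantics: it enumerates the observable results with program order preserved (strong model) versus with per-thread reordering allowed (weak model, no barriers inserted).

```python
# Message-passing litmus test, modeled abstractly.
# Thread 1: data = 1; flag = 1     Thread 2: r1 = flag; r2 = data
# Under a strong model like x86 TSO, the stores retire in order and the
# loads execute in order here, so observing flag == 1 implies data == 1.
# A weak model (ARM, POWER, Alpha) may reorder either pair, so an x86
# emulator must insert barriers to forbid the outcome r1 == 1, r2 == 0.

from itertools import permutations

def interleave_and_run(o1, o2, mem=None, regs=None):
    """Enumerate final (r1, r2) over all interleavings of two op streams."""
    if mem is None:
        mem, regs = {"data": 0, "flag": 0}, {}
    if not o1 and not o2:
        return {(regs.get("r1"), regs.get("r2"))}
    results = set()
    if o1:
        m, r = dict(mem), dict(regs)
        o1[0](m, r)
        results |= interleave_and_run(o1[1:], o2, m, r)
    if o2:
        m, r = dict(mem), dict(regs)
        o2[0](m, r)
        results |= interleave_and_run(o1, o2[1:], m, r)
    return results

def outcomes(t1, t2, reorder_allowed):
    """If reorder_allowed, also consider per-thread reorderings (weak model)."""
    orders1 = list(permutations(t1)) if reorder_allowed else [tuple(t1)]
    orders2 = list(permutations(t2)) if reorder_allowed else [tuple(t2)]
    results = set()
    for a in orders1:
        for b in orders2:
            results |= interleave_and_run(a, b)
    return results

t1 = [lambda m, r: m.__setitem__("data", 1),
      lambda m, r: m.__setitem__("flag", 1)]
t2 = [lambda m, r: r.__setitem__("r1", m["flag"]),
      lambda m, r: r.__setitem__("r2", m["data"])]

# Program order preserved: the "bad" outcome (1, 0) never appears.
assert (1, 0) not in outcomes(t1, t2, reorder_allowed=False)
# Per-thread reordering allowed, no barriers: (1, 0) becomes observable.
assert (1, 0) in outcomes(t1, t2, reorder_allowed=True)
```

Ruling out that extra outcome is exactly what the inserted barriers pay for, which is why translating x86 binaries to a weakly ordered target is expensive.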
Exactly; binary compatibility is a hard issue, and it's especially non-viable in systems with no awesome unified package managers a la Linux.
Case in point: despite an ARM processor, running conventional desktop programs on a Raspberry Pi is mostly fine (barring performance issues). Yes, all operating systems' build stack can emit ARM binaries, but it's useless unless the developer supports it well (not gonna happen), or if there is really nice automation (like Debian).
Edit: unless you're trying to say that binary compatibility wasn't a pain in the butt. They solved it (IIRC, I'm not a long-time OS X user) by bundling binaries for both architectures together for a while. Not viable if you have 10 different architectures.
Around OS X 10.4, there was a transition from the PowerPC to the x86 architecture. It was handled by bundling a dynamic binary translator called Rosetta [0]. It ran at pretty much the same speed as if you ran it on the original architecture.
They had already done something similar in their first arch transition, 68k => PowerPC [1].
Point is, binary compatibility is possible. Far from easy, sure, but it's been done before. The question is: did CPUs evolve so much that it became impossible to translate from one arch to another?
>The question is: did CPUs evolve so much that it became impossible to translate from one arch to another?
I'd say software got a bit more complex compared to then.
Furthermore, that works if you have controlled hardware (which means easier testing and fewer edge cases to worry about) and a single transition to worry about (from A to B, not {A,B,C,D,E} -> {A,B,C,D,E}).
Can you imagine how insane it would be if Windows shipped a compatibility layer that translates x86 software to ARM, to RISC-V, to MIPS, to whatever? You'd need to test compatibility for not one but three architectures. No way people are going to do that; the RoI is almost nonexistent.
So the only solution is to recompile, which is annoying if you don't have the great software infrastructure to do so.
It's not nearly as hard as you think to translate between ISAs. Some things won't directly translate, like, say, the matrix multiply register in some MIPS supercomputers, but you can easily just swap that out for a more mundane approach.
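A toy sketch of that point, with entirely made-up instruction names: ops that exist on both ISAs map one-to-one, while a compound op with no target equivalent (here, a hypothetical fused multiply-add) gets expanded into a mundane multi-instruction sequence.

```python
# Toy binary-translation sketch. All instruction names are hypothetical;
# "tmp" stands in for a scratch register a real translator would allocate.

TRANSLATION_TABLE = {
    # Ops with direct target equivalents translate one-to-one.
    "LOAD":  lambda args: [("ldr",) + args],
    "STORE": lambda args: [("str",) + args],
    "ADD":   lambda args: [("add",) + args],
    # Compound op with no direct target equivalent: dst = dst + a * b
    # is lowered to a mundane multiply-then-add sequence.
    "FMADD": lambda args: [
        ("mul", "tmp", args[1], args[2]),
        ("add", args[0], args[0], "tmp"),
    ],
}

def translate(program):
    """Translate a list of (op, *operands) tuples into target-ISA tuples."""
    out = []
    for op, *args in program:
        out.extend(TRANSLATION_TABLE[op](tuple(args)))
    return out
```

So `translate([("FMADD", "r1", "r2", "r3")])` yields a two-instruction `mul`/`add` sequence. A real translator also has to handle flags, memory ordering, and register allocation; this only illustrates the shape of the mapping.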
I'm hoping that this is really good news for Zen. Because you're right, the server market is a lot less tied to x86 than the desktop market is.
So why would they sign this deal when it would probably be easier to use RISC-V or ARM or MIPS or ...? Perhaps AMD gave them a private demo of Zen and they realized that it was better than all of their other alternatives, even including other architectures.
But if they had their hearts set on x86, unless they could get Intel to play ball (highly unlikely), AMD was their only option even if Zen sucks.
I agree and think the only way they made this deal is if AMD showed that Zen is better than any of the other alternatives. That probably isn't saying much since most of the alternatives are pretty bad, but they probably don't have another Bulldozer on their hands either.
Different Chinese companies are trying every possible path to a "homegrown" server processor, and x86 is only one of them. I'm sure somebody's trying RISC-V.
Google is at least testing OpenPOWER in its data centers. I don't know how serious they are, but people are starting to explore x86 alternatives more.
I highly doubt they care about the specific ISA in this case. The simple fact is, at this point, the only players with proven high-IPC, high-performance CPU designs are Intel(x86), AMD(x86), and IBM(POWER/POWERPC), and that is highly unlikely to change in the near future. IBM already has a licensing deal for POWER in China, and has been moving towards that with OpenPOWER for a long time. Intel would never contemplate such a thing, but AMD is desperate enough at this juncture to try it, even though it could cannibalize their core business in the future.
From what I've seen of Cavium, I wouldn't describe them as general-purpose CPUs, though they are nice for their own niche. I don't think anyone in the west has gotten hands on anything except press releases of ShenWei systems, so it is unproven as far as I can tell. If it is so great why was the Chinese National Supercomputer Center trying to get their hands on Xeons so badly to upgrade Tianhe? :P
Edit: Along the lines of Cavium, you might want to check out Tilera. They aren't exactly general-purpose either but they are more so than Octeons, and have respectable single-thread performance from what I've seen.
It's a multi-core MIPS64 processor with PCI, USB, and so on. Remember that MIPS has been used in everything from embedded systems to SGI Origin supercomputers. Quite general purpose. Cavium's additions are the SoC integration, lower power, and accelerators.
Regarding ShenWei, it was used in supercomputers, so it's probably real. What jumped out at me is that it's probably stolen IP from the Alpha CPU and ran at high wattage. So, not as power-efficient or legal as they'd like, was my guess.
Re Tilera
I read the MIT RAW workstation papers where all that began. ;) Yeah, it's pretty cool, but that's the one that's limited-purpose. It's like an overlap between vector processors, FPGAs, and multicores. I don't know who all uses them, but I did find a 100Gbps network tap and NIDS that used 3 Tileras for its muscle.
I knew of a company that was testing Tilera for 4G basestation/RNC equipment, not sure if that ever panned out (I think they use Cavium + FPGAs now :P). I think Tilera is basically dead as a standalone product now after all the acquisitions; they'll probably be folded into Mellanox's interconnect acceleration wasteland and never be heard of again.
Sad as there was so much hype back in the RAW days of where it would be applied. Then, it became a commercial activity of a stingy company that didn't foster a strong ecosystem. That effectively sealed its fate given DSP's, FPGA's, and GPU's were killing everything in their path if we're just talking energy/performance/price. Your prediction might come true.
I don't know about x86 itself minus the legacy parts, as that's defined by the lawyers. However, Intel tried to clean-slate it at least four times. The first, the i432, was revolutionary and forward-thinking, and overdid the hell out of the hardware side. Hundreds of man-years were lost due to low raw performance. The market only cared about the latter, so it rejected it.
Second was for BiiN parallel and high-availability system. Its i960 was a brilliant combo of RISC, HA, and security aspects of i432. Rejected by market due to no backward compatibility although used in F-35, some storage controllers, and so on. Still available for embedded but not the good version in the links. :( Especially see the manual and part with object mechanisms for containment/addressing. Cost them and Siemens a billion dollars.
Third time, in parallel with i960, was i860 RISC core for high-performance supercomputing and embedded systems. Had performance issues and just wasn't popular in general. Another loss.
Itanium combined RISC, VLIW, PA-RISC-style security, and reliability features. EPIC/VLIW is probably what did it in, most unfortunately, because the reliability, speed, and security combo was good. The link below covers its security features, as they alone justified it, IMHO, over x86, and you probably never saw them in comparisons: just dollars, GHz, and GFLOPS, as if that's all that matters. Used in appliances, supercomputers, and workstations, but going away, probably at a cost of hundreds of millions of dollars.
So, Intel has tried to give us something better than x86 four times already, at a cost of billions of dollars. To their credit, they tried and produced a few great designs, with some flaws for sure, but great in key ways. The market rejected them in favor of raw price/performance and backward compatibility with shit software. So, we're stuck with that, given that Itanium will go into legacy mode, VIA is losing money on its x86 business, Transmeta was bought, and Loongson chips with x86 emulation are a shaky investment.
Gabriel's Worse is Better is in full effect here...
This is mostly about custom SoCs, which means that, like ARM SoCs, it's a couple of CPU cores with a few special-purpose blocks all in the same package. Think networking, databases, AI, image processing...