> GPU vendors have quietly deployed all of this technology
Citation or technical details needed.
Obviously it "makes sense" that for 4K HD content you "probably" want to offload the decoding into the GPU, but this is the first time I see this mentioned and there are no links to technical details.
In contrast, TEE / TrustZone and even the recent AVF with pVM - these are well documented technologies.
The PlayReady docs make it clear the implementation lives either in a TEE or in GPU hardware, and x86 has no TEE, so GPU it is. You can easily find driver changelogs describing it being enabled for different hardware generations.
Not really; AMD have PSP (which, okay, isn’t x86, but it’s on the die) and Intel, as you mention in your post, had SGX and have ME. Google use PSP TrustZone to run Widevine on Chromebooks, for example. PowerDVD used SGX to decrypt BluRay, which led to BluRay 4K content keys being extracted via the sgx.fail exploit.
You’re right though that PlayReady is usually GPU-based on x86; on AMD GPUs PlayReady runs in the GPU PSP’s TrustZone. On Intel iGPUs I think it runs in the ME.
The lower-trust (1080p-only) software version of PlayReady uses WarBird (Microsoft’s obfuscating compiler), but that is of course fundamentally weak and has definitely been bypassed.
Anyway, none of this takes away from your post, which I agree with. The FSF (and many HN commenters) have been whining about TPM in unfounded ways since the 2000s.
Not in general; Intel briefly had a program allowing vendors to deploy apps on the ME but closed it years ago. But yes, the ME is involved in this for Intel iGPUs.
It was a big deal when Vista was released, which coincided with a lot of generational change in home computers (watching Blu-ray on a computer still seemed like something to expect, HDMI with HDCP was being introduced, etc.).
There was a lot of talk about the Protected Media Path in Vista, how it tied in with HDCP, how it killed hardware-accelerated audio (dealing a considerable death blow to the promises made by OpenAL), etc.
Even game consoles moved to software-accelerated audio; as it turns out, doing it in software with CPU vector instructions is fast enough, while being more flexible.
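As a rough illustration of that point (my own sketch, not from the thread): mixing a pile of voices is just a multiply-accumulate over float buffers, which SIMD instructions handle comfortably. The function and buffer names below are hypothetical.

```c
/* Hypothetical sketch: mix n_voices source streams into an output
 * buffer with per-voice gain, using SSE intrinsics (4 floats at a time).
 * Real engines add resampling, filters, reverb, etc., but the hot loop
 * looks roughly like this. */
#include <immintrin.h>
#include <stddef.h>

void mix_voices(float *out, const float *const *voices,
                const float *gains, size_t n_voices, size_t n_samples)
{
    for (size_t i = 0; i + 4 <= n_samples; i += 4) {
        __m128 acc = _mm_setzero_ps();
        for (size_t v = 0; v < n_voices; v++) {
            __m128 s = _mm_loadu_ps(&voices[v][i]);   /* 4 samples of voice v */
            __m128 g = _mm_set1_ps(gains[v]);         /* broadcast its gain   */
            acc = _mm_add_ps(acc, _mm_mul_ps(s, g));  /* accumulate           */
        }
        _mm_storeu_ps(&out[i], acc);
    }
    /* Tail handling (n_samples not a multiple of 4) omitted for brevity. */
}
```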
This is also the way of the future for graphics: do away with any kind of hardware pipeline and go back to software rendering, but have it accelerated on the GPU as a general-purpose accelerator device.
EAX and the like were actually that: software components running on a DSP inside the sound card, and the assumption was that in the future you would program them much like GPUs are programmed.
However, while audio accelerators came back, the protected media path business means they aren't "generally programmable" from major OS APIs, even though AMD and Intel essentially ended up settling on a common architecture, including the ISA (Xtensa with DSP extensions, IIRC). They are mainly handled through device-specific blobs with the occasional special feature (like sonar-style presence detection).
Integrated GPUs exist. Wouldn't it make more sense for the "high value" content not to be exposed to any external GPU at all? Then we could treat the integrated ones as part of the "TEE". That's my speculation; waiting for details.
This is the question I had about this. The reason this design works per the article is that the GPU memory is inaccessible to the OS, so the decrypted content cannot be stolen.
With a unified memory architecture, is the shared GPU memory inaccessible to the CPU?
With the proper MMU settings, yes, the CPU can definitely be denied access to some memory area. This is why devices like the Raspberry Pi have that weird boot process (the GPU boots up, then brings up the CPU); it's a direct consequence of the SoC's set-top-box lineage.
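For intuition, here's a user-space analogy (my own, and only an analogy: the real carve-outs happen in firmware or a hypervisor at a lower translation stage): once the translation tables for a region say "no access", any CPU load or store to it faults, regardless of what the bytes in that region are.

```c
/* Analogy only: mprotect(..., PROT_NONE) makes the MMU deny this
 * process any access to the region; touching it raises SIGSEGV.
 * GPU/firmware carve-outs rely on the same basic mechanism, applied
 * before or beneath the OS (my assumption, not a spec reference). */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    p[0] = 42;                    /* allowed: mapping permits access  */
    mprotect(p, len, PROT_NONE);  /* now the MMU says: hands off      */
    printf("region is now unreadable from this CPU context\n");
    /* p[0];  <- dereferencing now would fault with SIGSEGV           */
    munmap(p, len);
    return 0;
}
```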