Our VR surgical training startup has been working for the last few months towards a big medical conference this week where we're showing multiple training procedures for multiple customers on Oculus Rift, as well as having our own booth. The headsets all stopped working the morning of the conference.
Fortunately, one of our engineers figured out we could get our demo rigs working by setting the clock back a few days. This could have been a huge disaster for our company if we hadn't found that workaround, though. Pretty annoyed with Oculus about this.
This does not bode well for real VR surgery. Imagine if this were surgery day for someone, and because of an expiring certificate the Rift shuts down ...
It's not like a cert is necessary for it to function. A VR headset is basically a monitor you wear on your face. This is their own poor design choice, one that just ensures they're going to lose the business of anyone who needs reliability in their headset.
The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today.
I say this not to criticize you or to excuse the mistake by Oculus (they really needed to countersign their cert with a timestamp server), but to educate. These are non-obvious issues to people who don't follow the VR sector.
Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.
Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.
But there's much more. Here is a paste of a comment I made elsewhere:
Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.
For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset; there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.
Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.
Not to mention, the premise that monitors don't have drivers is also mistaken. They may not be necessary, but they are available[1]. And, the decision to sign kernel drivers is not a poor choice by Oculus, but a mandate from Microsoft for Windows 10 build 1607 and above.[2] A cert is, indeed, necessary to function.
You said a cert is required, but the footnote quote says drivers must be signed. Being signed doesn't expire. Could you rectify the discrepancy and explain why an expiring cert is a requirement for VR? Your analysis (though clearly highly informed) seems spurious to me.
Good question. An expiring cert is not required for VR. It was a massive screw-up by Oculus.
Most (I won't say all) certificates expire. However, there's a huge difference between an expired certificate and one which renders a driver invalid - and this is one of the two places Oculus erred.
When you sign a driver, you want it countersigned by a timeserver. This cryptographically assures that the cert used was valid at the time of signing, so the signature on the driver remains valid even if the signing cert expires (the crypto ensures a hacker can't just change the metadata with a hex editor). It allows the OS to confirm that the code was signed by a cert that was valid at the time of signature (even though now expired). Without it, the OS can only assume that the code was signed the same day as the validity check. Two days ago that was fine, but yesterday the signing cert expired and everything broke.
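For the curious, the fix is a single flag at signing time. A sketch (file names here are placeholders; any RFC 3161 timestamp authority works):

    signtool sign /fd SHA256 /f signing_cert.pfx ^
        /tr http://timestamp.digicert.com /td SHA256 OculusDriver.sys

Leave off /tr and you get exactly this failure mode: the signature is honored only while the cert itself is unexpired.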
This was screw-up number one. Apparently, during the build process from Oculus's v.1.22 to 1.23 release, the timeserver countersignature was removed. This is obviously a mistake, because that took place about 30 days ago. No sane person would assume that they intentionally did something that would bring down their user base in a month.[1]
Obviously the second mistake was letting their certificate lapse. This was compounded by the fact that their update app was signed by the same cert, so they couldn't just push a quick fix (because the updater didn't work).
So in short, signatures don't expire, but the certificate used to do the signature does. With a timeserver countersignature the code would have kept running but no new code could be signed from the old (expired) cert.
Oculus missed some pretty big devops gaps, and suffered a big black eye for it.
But it had nothing to do with DRM, planned obsolescence, needing to connect to the internet, or Facebook data capture.
[1] Other commenters have mentioned that if a timeserver is down at the time of a build, it can fail to add the countersignature. Maybe that's what happened?
I've not looked at the MS requirements; it seems good to expect signed drivers. But a signature shows that the company made that driver at that time - that should never expire.
Sure, also have a mechanism of certification that shows whether a company currently vouches for a piece of software; but using that mechanism to override an [admin-level] user and forcibly disable software has got to be always wrong.
Rereading your question, I realize I may not have actually answered an underlying topic: what is the difference between a certificate and a signature?
The short answer is:
- a "certificate" contains a number of things: a portion of an asymmetric key (either public or private), and a ton of metadata[1] to give information about that key: validity period, algorithms used, version, etc.
- a "signature" is the result of a crypto operation on data that proves the data (a) has not changed since the operation, and (b) the person doing the signing owns the private portion of that asymmetric key.
As I said in my other message, a signature doesn't expire, but it's directly related to (and generated by) the certificate used to create it. So if that creation certificate expires (or is revoked) it calls into question the validity of the signature(s) created from that certificate.
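If you want to see the distinction concretely, here's a rough sketch using Python's cryptography package (file names made up, RSA key assumed): the validity window is metadata on the certificate, while verifying a signature is pure math that never consults the clock.

    # pip install cryptography -- a sketch, not production validation code
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    cert = x509.load_pem_x509_certificate(open("signer.pem", "rb").read())
    # the expiration dates live here, in the certificate's metadata...
    print(cert.not_valid_before, cert.not_valid_after)

    data = open("driver.bin", "rb").read()
    sig = open("driver.sig", "rb").read()
    # ...but verify() only checks the math; it has no idea what day it is.
    # Deciding whether to honor an expired cert is policy layered on top.
    cert.public_key().verify(sig, data, padding.PKCS1v15(), hashes.SHA256())

(That last call raises InvalidSignature if the data was tampered with; it succeeds or fails identically before and after the cert expires.)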
Let me know if you're interested in more background on asymmetric cryptography and the relationship between public keys and crypto, private keys and signatures, and the role of certificate authorities vs. a PGP-oriented 'web of trust'.
> So if that creation certificate expires (or is revoked) it calls into question the validity of the signature(s) created from that certificate.
Are you arguing that already-installed drivers should no longer be trusted? I can't tell.
If a cert expires at time T, the usual assumption is that forging signatures before T is not feasible (otherwise the expiration was poorly chosen), while forging signatures after T might be feasible.
If it's after T and we see a new update, we don't know whether the signature was crafted before or after T, so we should assume the latter and reject it.
But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.
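In toy code, the rule I have in mind looks like this (names are mine; I'm not claiming this is how Windows actually decides):

    from datetime import datetime

    def accept(installed_at: datetime | None, now: datetime, T: datetime) -> bool:
        if installed_at is not None:
            # already installed: we verified the signature while the cert
            # was valid, so expiry shouldn't retroactively untrust it
            return installed_at < T
        # new arrival: we can't date the signature ourselves, so the cert
        # must still be valid today (or carry a trusted timestamp)
        return now < T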
To be clear, I'm not arguing that old, already-installed drivers should fail if not countersigned. This seems like an extreme, and customer-unfriendly failure case. However, I am saying that this appears to be the default implementation of Windows 10 build 1607+.
I won't argue it's right or wrong, actually. It's a choice, with different threat models driving different conclusions. Defining the failure modes with respect to security risks is a fraught business, and I hope Microsoft put a great deal of thought into it and has far more visibility into the risks than I. But it's what they appear to do, and we live in their world.
I argued elsewhere (in a late, top-level comment somewhere) that - if this is Windows's failure mode - MS should provide tools for devs to integrate into their build process that flags risky or mis-configured signature scenarios. This is too complicated, and used by too many non-security experts, with extreme failure modes, for it to be half-ass-able or easily done wrong.
> But if we've already installed a driver, then we must have received its signature before T, otherwise we wouldn't have installed it at the time. So we should still continue to trust it after T.
And now you leave open an attack surface of "forge a signature off an old, expired cert and then fool the OS into thinking it's been installed all along."
> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
Wait, is this new? I haven't used my Oculus in over 6 months because of how hard it was to interact with the desktop and a few other things while in-game. Is this a standard feature now in Oculus's framework?
This is why half of the blame lies with Microsoft for following the rest of the industry into making software for grandma's protection, to the detriment of software freedoms.
An enterprising user can turn off these driver-signing enforcement settings, but it's quite a song and dance, and first you have to even be aware of it.
I'm not going to blame the world's largest desktop operating system, primarily used by the least technical users, for optimizing security over developer ease-of-use.
Besides, this is a false dichotomy - on your own computer you can self-sign the driver cert! The CA just has to be in a driver trust store.
The only people who lose out are those trying to distribute drivers to computers they have no control over and who cannot convince the user to install a certificate.
So, it's basically a specialised-hacks-required-because-operating-systems-weren't-designed-with-it-in-mind-which-requires-driver-signing low-latency monitor for your face?
I believe it is possible, though I doubt many people look at the raw data due to its limited usefulness. Without sensor fusion between the IMU (inertial measurement unit: gyroscope, accelerometer, and often magnetometer) and various other inputs (including absolute position references from external sensors), drift error rapidly accumulates.
So, the SDK takes all the information in directly, does its calculations, and exposes only the resulting positions and orientations for hands and head. This resulting info is what developers typically use.
Here's an excerpt from a blog post[1] regarding the IMU and sensor fusion:
> With the new Oculus VR™ sensor, we support sampling rates up to 1000hz, which minimizes the time between the player’s head movement and the game engine receiving the sensor data to roughly 2 milliseconds.
> <snip interesting info about sensor fusion>
> In addition to raw data, the Oculus SDK provides a SensorFusion class that takes care of the details, returning orientation data as either rotation matrices, quaternions, or Euler angles.
Note that this blog is from back in dev kit 2 days. It's possible that Oculus removed the ability to retrieve raw data; in my hobbyist efforts I only use Unity's integration and don't work directly against the SDK.
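If it helps intuition, here's a toy complementary filter showing why the fusion matters: integrating the gyro alone drifts, so you continuously bleed in an absolute reference. (The weighting constant is illustrative, not anything Oculus actually uses.)

    ALPHA = 0.98  # trust the gyro short-term, the absolute reference long-term

    def fuse(prev_angle, gyro_rate, dt, absolute_angle):
        integrated = prev_angle + gyro_rate * dt  # responsive, but drifts
        return ALPHA * integrated + (1 - ALPHA) * absolute_angle  # drift correction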
Well, I am sorry to have to disagree on this. This is not rocket science, and the software support isn't that different from any standard monitor/gamepad combo. That's for the architecture, at least. Of course, latency requirements are higher. But the differences stop there.
Face it, today's VR headsets simply are monitors that you wear on your face (Head Mounted Displays). Anyone thinking otherwise is simply lying to himself to make it sound more complicated than it is.
Those include a few input peripherals as well, none of which is particularly complex (Valve's Lighthouse system is probably as complex as it gets).
And lastly, none of these points should require a certificate. Every computation can be done locally, without the need for an internet connection.
To be a bit more specific, let's break down the arguments (I have nothing against you, I am just interested in those):
> Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.
This is true... Somewhat. For now, the only integration that has been done in the Linux kernel is DRM (direct rendering manager) leasing [1], which allows an application to borrow full control of the peripheral, to bypass compositing. That, and making sure that compositors don't detect HMDs as normal displays (so that they don't try to display your desktop on them). Please note that none of these are actually needed if the compositor is designed to support HMDs from the ground up. Those are just niceties, and the HMD is just considered like a regular device.
> Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.
Even if those monitors are physically separate, this is likely something handled by the HMD board itself. The monitors DON'T return positional information, they just display stuff (accelerometer, gyro, compass, etc. are just other peripherals that happen to sit on the same board).
> Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.
Just like every peripheral under the sun, isn't it?
> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset
Believe it or not, frequency and latency are probably not the most complicated things about the Lighthouse system; these specs are actually not uncommon for USB devices (I admit that I don't have a good example in mind, though).
> there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.
We are NOT talking about HMDs anymore at this point, and these feats have been accomplished countless times already, in various systems.
The first one (HRTF) already exists in multiple forms all over the place, including OpenAL, and would probably be a lot more common if Creative didn't try to sue everyone into the ground as soon as they do something interesting. The second (distortion correction) is not really complicated, and was done in Palmer Luckey's first proof of concept (or was it John Carmack who implemented it?). Interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
Again, this has nothing to do with HMDs. But, congratulations, you just wrote another compositor and reinvented multitasking. This has been done countless times, and VR compositors have been made by multiple teams. Here is a nice open source one: [2].
> All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.
Well, so does: controller support and graphics API support (whoops, actually the only two things needed), but also language support, processor architecture support, sound system support, operating system support, etc.
Everyone needs a bit of code to support new architectures. Supporting the display portion of a HMD is relatively straightforward, and actually uses off-the-shelf APIs. Well, you have to correct for distortion, but I would be surprised if some APIs didn't come out [3] to support small variations between devices.
--
To conclude: yes, it's an impressive technology stack, but you could literally pick any other device in your computer and you would get comparable complexity. I am not trying to undermine the amount of work that went into HMDs and their stack, just pointing out that it's relatively common and straightforward.
And a HMD is by definition a monitor on your face :)
--
On the other hand, I just read the explanation (after writing this), and I agree that having your own kernel module makes sense for some of this (especially on Windows, on Linux you would just mainline support), if you want to make it happen faster. Yet, most of the above arguments do not serve the discussion ;)
I can get kernel drivers needing to be signed, but requiring the cert to remain valid after installation is a bit of a reach, isn't it?
Edit: thank you for the detailed explanation below.
Argument: A modern VR stack is much more complex, and does much more, than just displaying images on two screens.
Counterargument: The 16 things that happen other than just displaying images on the screen aren't relevant, have been done before, or have equivalent complexity to other systems.
Well OK. I just can't argue with that.
"A modern CPU SOC is no more than a souped up 6502."
That's true, if you ignore the integrated video, complex cache management, integration of networking/sound/northbridge/southbridge, secure enclaves, and significantly higher performance characteristics that result in subtle changes driving unexpected complexity. All of those things have been done elsewhere.
So if that's your perspective then we'll just have to agree to disagree.
Though I will point out the fact that all of those non-monitor components that you described also require custom drivers, which require their code to be signed, which was ultimately the item the OP took issue with. I'm frankly surprised that after acknowledging the amount of re-implementation VR requires, across numerous non-monitor disciplines, fusing the data in 11ms, for total motion-to-photon latency of 20ms or less, you still feel this is "common and straightforward."
But OK. I don't know your coding skill level, so this may be true.
And per this point:
> interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
Valve has still not released an equivalent to Oculus's Asynchronous Spacewarp. If you feel it is "pretty doable" you would do a huge service to the SteamVR community if you could implement it and provide the code to Valve.
I would like to apologize for my previous post, I feel that it is unnecessarily long, and a bit inaccurate/exaggerated.
Let me be clear: I pretty much agree with everything you said. It was only your original statement that I felt was a bit of a stretch:
> The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today
After reading a bit more into it, I feel that Oculus took the correct software approach to bring up its hardware on Windows. What happened appears to have been more of an oversight, one that most people can probably sympathize with.
Custom (in-kernel) drivers are indeed probably a necessity to achieve the best possible experience, with the lowest attainable latency. However, they are not actually needed for basic support [1], which is where I think our misunderstanding comes from.
I realize that a tremendous amount of work has gone into making VR as realistic as it could get, and I am not trying to lessen it at all, which is what I think you wanted to point out with your original remark.
As much as I would like to have a go at implementing that kind of feature (and experiment with VR headsets in general), I don't really have the hardware nor the time to do so, unfortunately :)
--
[1] I don't know the latency involved with userspace-based USB libraries, but it seems to be low enough that Valve is using it to support the Vive, at least on Linux (and for now).
Thanks, no apologies needed. I didn't mean to come off snarky either. And I obviously am not averse to unnecessarily long messages.
As an aside, Valve's tracking solution is much less USB-intensive than Oculus's.
In Valve's Lighthouse system, sensors on the HMD and controllers use the angle of a laser sweep to calculate their absolute position in a room, providing the absolute reference needed to correct the IMU's dead-reckoning drift. As a result, the only data being sent over USB is the stream of sensor data and position (I believe sensor fusion still occurs in the SDK, not on device).
Oculus's Constellation system uses IR cameras, synchronized to an IR LED array on the HMD and controllers. The entire 1080p (or 720p, if used over USB 2) video image from each camera (2 to 4 cameras, depending on configuration) is sent via USB to the PC. This is in addition to the IMU data coming from the controllers. The SDK performs image processing to recognize the position of the LEDs in the images, triangulate their position, perform sensor fusion, and produce an absolute position.
The net result is roughly equivalent tracking between the two systems, but the USB and CPU overhead for Rift is greater (it's estimated that 1%-2% of CPU is used for image processing per sensor, but the Oculus SDK appears to have some performance advantages that allow equivalent performance on apps despite this overhead).
There is great debate over which is the more "advanced" solution. Lighthouse is wickedly clever, allowing a performant solution over larger tracking volumes with fewer cables and sensors.
Constellation is pretty brute-force, but requires highly accurate image recognition algorithms that (some say) give Oculus a leg-up in next generation tracking with no external sensors (see the Santa Cruz prototype[1] which is a PC-free device that uses 4 cameras on the HMD and on-board image processing to determine absolute position using only real-world cues). It also opens the door to full-body tracking using similar outside-in sensors.
But overall, the Valve solution definitely lends itself to a Linux implementation better than Oculus's, simply due to the lower I/O requirements. It also helps that Valve has published the Lighthouse calculations (which is just basic math), while Oculus has kept its image recognition algorithms as trade secrets.
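Since Valve published the math, here's roughly how small the core of it is (a toy version; the numbers are illustrative): the rotor spins at a known rate, so the delay between the sync pulse and the laser hitting a photodiode encodes that sensor's angle from the base station.

    import math

    SWEEP_HZ = 60.0  # each rotor sweeps the room 60 times per second

    def sweep_angle(t_sync: float, t_hit: float) -> float:
        # angle of the sensor relative to the base station, in radians
        return 2 * math.pi * SWEEP_HZ * (t_hit - t_sync)

    # given horizontal and vertical sweep angles from stations with known
    # poses, you intersect the rays to recover each sensor's 3D position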
> "Each year, the FDA receives several hundred thousand medical device reports of suspected device-associated deaths, serious injuries and malfunctions."
It is also specious to argue as if a consumer product would be used for live surgeries without FDA approval.
This does not excuse the mistake, nor does it change the fact that the error will make people question the reliability of the product - as they should.
However, mistakes do happen, even big ones. Rockets blow up. Airbags have defects that make them not work. McAfee pushed out an antivirus update that deleted a Windows system file, crashing hundreds of thousands of PCs.
The important questions are: how does the vendor respond, what procedures do they put into place to prevent it from happening again, and are those procedures enough to give future buyers confidence that the issues are addressed?
Saying "that shouldn't have happened," while perhaps true, is simply not constructive.
As a medical device, I would expect that this possibility would have been caught very early on in one of any number of Failure Analysis meetings and mitigated by the time the device made it to the (FDA) certification process.
I’m going to assume you haven’t used many bits of medical equipment, because doing it for a job leads me to conclude that the software is more flakey than standard commercial software used day to day. Low sales volumes do not make for budgets high enough to support good debugging and development I guess.
I haven't worked in a medical lab (where our instruments were generally used). But I was a software developer for various medical devices for over 15 years and my conclusion was exactly the opposite: the software was far, far more robust than most commercial software.
It wasn't radiology then (which would be the rough limit of my knowledge). PACS, MRI, CT, RIS, angio gear, image intensifiers, etc. All used over many years, with weird glitches and reproducible errors, including complete system crashes that take hours to recover from, across several vendors.
Our product is a training aid for medical professionals and is not regulated as a medical device, in the same way that a flight simulator is not regulated as an aircraft.
The engineer went on to figure out that if he set management's clocks back a few days, he could take those days off, since management clearly remembered him being on premises for them.
A day off? That's all? Give that man a raise! Something to look back upon each month. He might have saved the company and even if not, probably a lot of money anyway... it's only fair to give something back.
A day off is nice, but doesn't mean that much in the scheme of things.
I would like to see more companies write a Thank You letter from the CEO, signed by his managers. Something that he could use during his performance evaluations at the company, or attach to his resume for any other jobs.
It's hard to get concrete evidence like that, which shows your value to the company. It would be great to have documentation that could never be forgotten.
Yeah, but if he's the guy who Googled it, he should get the day off anyway if it wasn't really in his realm of responsibility.
It seems ridiculous in this modern age, but there are a huge number of people who will never bother to look into their problems on their own before asking someone else. Then this other person does a simple Google search and becomes the hero expert.
This all too often results in further dependence, with no real reward for the guy who took this basic step except more requests in the future. If this one guy can get a day off in this instance, it'll be a victory for every person who has ever said "Oh, if you google that, you'll see one of the first results with instructions to do x, y, z." to a time-draining coworker.
Googling for computer problem solutions (or just generally) is a surprisingly nuanced skill. Sometimes one person finds in minutes what another fails to find in days, only because of slightly better search terms and faster (or more accurate) assessment of hit teasers.
Also, a lot of problems have their search engine results "poisoned" by solutions for lesser, but superficially similar problems that are worked to death by SEO content farms competing for attention.
Anyone who had an interest in computers in the mid '90s to early 2000s will remember trial software that was good for 30 days but which, if you set the clock back, would give you the corresponding amount of extra time.
I even seem to recall one that, when I set the clock back much more than 30 days, gave me as many extra days beyond the 30 as I had set it back.
Then there were a couple of pieces of software that would detect such trickery and punish you by taking away the time you had left if you set the date back before the 30 days were up.
Anyway, with this in mind, the first thing I thought when I read the headlines was, “I wonder if one can get around this by setting the clock back”, and I doubt I was alone in that, so to say that it “probably wasn’t his idea”... I dunno man.
With Macromedia Flash you had to write down the time when you stopped using it, then set it to a minute after it before running it again - because it remembered the last time it was running and refused to start if the new time was lower. Fun times. I could never have afforded Flash back then.
I've done it for the games free for a weekend on Steam, when I want to continue playing but I don't want to pay for the game. Of course it only works for single-player games.
But changing the date of the computer quickly makes browsing the internet unusable, due to certificate checks failing.
Those things aren't cheap for simulators, either - not to mention knock-on costs. "What do you mean - I got the doctors in, which alone took a month of herding cats, and now it won't work, just because?"
How low has the SW development bar gone, if "it's okay" now means "at least it's not directly killing people"?
There has always been a tradeoff between reliability and development time. There wouldn't be a games industry if every video game had the same level of software assurance as a mars lander, because Tetris would cost $200m to develop. A medical simulator lies somewhere between a mars lander and a video game - it needs to provide accurate simulation, but the odd crash isn't a complete dealbreaker.
The bug is now patched, so the downtime appears to be less than 24 hours from discovery to fix. The original error is clearly a major blunder, but Oculus have responded properly.
The GP was suggesting that this could kill people. I simply implied that it wouldn't, and compared to killing people, I would say a lost day is "okay".
I'll try that for my next programming blunder: "Sure, I've set back hundreds of people one day, but hey, didn't kill them! No big deal, they should even be grateful!"
In other words, comparing to the worst possible outcome is, by definition, not a very high bar.
Yes, but it's an interesting question to ponder. For the last decade or two, government and military procurement has been leaning more heavily towards COTS (commercial off-the-shelf) hardware and software. The thought was that it's cheaper and possibly more reliable than the bespoke solutions that vendors were delivering in the past. Now we see that it isn't a guarantee of anything. Though still probably cheaper, at least initially.
Something like this probably will happen with computer assisted surgery or medical procedures, or an aircraft in flight. Just a matter of time.
Wow! I'm glad you were able to get it figured out. Sounds like a nightmare scenario.
By the way, do you have any links to your surgical training startup? I'm doing some research into VR/AR for surgical telementoring and training and would be interested in seeing how it's being used.
Why are you basing a medical appliance on such a walled-garden technology you aren't in control of, when there are more accessible alternatives? Oculus was already known for lock-up fiascos; this really shouldn't be a surprise for you.
Our product is a training aid for medical professionals not a medical appliance. We're not fundamentally tied to any particular VR device but Oculus has been our primary platform due to better ergonomics and an easier setup experience for a portable demo rig than the Vive.
This is fairly common practice in the medical device field. Volumes for specialized equipment are far too low to justify the NRE (non-recurring engineering) on custom solutions, so many low-volume medical devices integrate COTS solutions wherever possible.
It doesn't help that vendors are generally nervous about liability in medical equipment (this fear is often unfounded, but persistent). As a result, vendors of commercial and industrial equipment generally don't want to engage medical device OEMs with engineering and customization support. If there had been that sort of support in this case, Oculus might have made a custom build without the cert check, just as a de-risking measure.
This vendor reluctance is especially present at the FDA Class III (high risk device) level - most vendors outright prohibit use of their devices in these products. It's an open secret that this still happens anyway in a wink-wink nudge-nudge fashion, just without vendor support - which is arguably worse, but it keeps the lawyers happy.
Anything based on OpenVR or OSVR? It's not like Oculus is a monopolist in this space, it's just one of the popular options and it's known to be the most locked up.
Hardly original - kids have been using this to extend the trial periods of shareware since forever. Most Rift users have been using RunAsDate, which intercepts a single process's calls to the Windows time APIs.
Yup, hardly original to think of a non-obvious security workaround the morning of a conference that could potentially make or break the company's future success, while dealing with what must have been insane pressure from everyone there to figure it out.
This is not how Windows code signing is supposed to work. Normally you'd get a countersignature from a timestamp server so that the verification process can prove that the certificate was valid at the time of signing. It would appear that Oculus signed their binaries without using a timestamp server, so without a way to verify when signing happened they become invalid as soon as the cert expires.
Something like that. Certificates aren't supposed to stop working just because they've expired! That would destroy all abandoned or poorly maintained software within a couple of years.
This problem is deeper than forgetting to update it. It should never have caused a failure in the first place. Just the fact that the device apparently can't function at all without the internet is a problem too.
Well, either it shouldn't have stopped working, or it never should have worked in the first place. It's arguable that no signed software should run without the code signature being timestamped/signed by a trusted timestamp server. Otherwise, simple developer laziness causes 99% of software to stop running a couple years after being published.
On the other hand, maybe this is really a lazy feature. It's probably a good idea for the system to disallow both incoming and outgoing network traffic to any program written in a non-memory-safe language that hasn't been signed in the past couple of years. The lazy version of this feature is just not to run any program not signed in the past couple of years.
Edit: Requiring a timestamped signature on the signature also makes it pretty easy to add auditing functionality to the timestamp server whereby the publisher can detect unauthorized signatures due to their private key being leaked/stolen by criminals or governments. If the timestamp server's logs show a signature by your key that you don't recognize, then something has gone wrong. On the attacker's side, they need to either steal the timestamp server's private key or publish their malicious signatures for scrutiny.
Wonderful, let's just auto-kill all abandoned software out of laziness. I can think of multiple programs I use that haven't been updated in years, sometimes because there is nobody left to develop them (the project was cancelled / the company ceased to exist / the sole developer got fed up and quit / whatever). What are my options? Get a crappier but newer alternative, or nothing - just because someone thinks "old == bad". (Meanwhile, new and drool-proof programs tend to exhibit the same bugs of old, despite having been sprinkled with magic memory-safe dust and blessed by a current signature.)
My browser refuses to connect to a large number of websites, because they're still following the SSL best practices from last week. Apparently this is the reality we've decided to live in.
Firefox and Chrome give me that warning page, but I just click on "Advanced" and it will let me continue to the website. At least for me, it's just a huge warning to be careful but I still have ultimate control.
Watch Google decide that the advanced option is a security problem, and remove it, and Mozilla gladly playing along because "security" and "users are dumb".
The "owner" is no longer in control, and has not been ever since the web became "app-ified".
It's not that 'users are dumb' it's that the only way to keep users and lazy IT staff from telling people to just click through the warnings is to make it difficult to do so. How else can you fight the 'click through until it works because I have work to do' mentality?
Browsers could have bright red flashing lights telling users that they're currently being phished and users would still enter their credentials because doing nothing isn't seen as a meaningful alternative action.
But there's no UI difference between "you're currently being phished", and "there's been a proof-of-concept white paper, that shows a nation-state level actor could theoretically decrypt this communication by spending a few hundred million".
If the license that is presented to the user for acceptance doesn't include specific termination dates, or (as is common in most consumer software) specifically states the license is non-terminating, then any product that stops because a certificate expires is a flat-out violation of the agreement, and every single user should promptly sue the publisher for 100% of their money back plus any damages. And, in my opinion, this sort of thing is not accidental, so it should pierce the corporate veil that protects individuals from liability.
Currently trying to work with the HTC Vive on Linux. Which means I need SteamVR installed, which you only get from Steam. Steam of course nukes a perfectly fine installation with updates the moment you start it, so you need a Linux with just the right versions of the packages used by Steam.
Maybe I should have just given up the day Oculus dropped Linux support.
TLS certificate expiration tells your browser to stop downloading new pages from that site. It doesn't tell your browser to close the page that's already been rendered and is still on-screen.
But you are making new requests to the Oculus API, and that part is failing. You wouldn't expect the AJAX requests to continue to work just because you left your browser open for years.
It's more like you had your browser tab open, the certificate expired and now the page is completely and totally unresponsive and the browser has killed all javascript running on it.
It also won't let you refresh the page or close it since the certificate is expired.
> Certificates aren't supposed to stop working just because they've expired!
That's exactly how they are supposed to work. In the public sector we rely heavily on certificates for inter-sector communication, for instance; if certificates kept working despite being invalid, it would put security at risk.
You're supposed to build your software with an enterprise certificate store in mind, though, meaning you can auto-renew and distribute certificates when needed.
I really don’t see the point of adding a certificate to your television though, even if it is a tv that you wear on your head.
Code signing certificates are different from website certificates. When you use a code signing certificate - when doing it right, anyway - you also loop in a timestamp server. That way, you're signing it with your currently valid certificate and a third party is proving it was signed at a time the certificate is valid. This is so that when your code signing certificate expires in a few months, the binary you signed while the certificate was valid can still be used. Without timestamp servers, you couldn't now go back and install an older version of Firefox, for instance, because the certificate it was signed with last year is now expired.
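(If you want to check whether a given binary carries such a countersignature, Windows's signtool can tell you; something like

    signtool verify /pa /v SomeInstaller.exe

should print the timestamp details when one is present - the file name here is just a placeholder.)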
What makes you think this was a code-signing certificate? The fact that the error involves the inability to reach a server suggests strongly that it was a TLS cert or somesuch
The parent of the comment I replied to was talking about a timestamp server. I was explaining what that meant and why it was used. I don't know what the actual issue is with Oculus's handling of certificates and whether it was a code signing certificate or not.
I've actually run into this issue with public sector documents such as PDFs. They don't timestamp (what a PDF calls long-term validation). I had no idea this was by intent!
What's weird is sometimes they still link or require these docs to be downloaded and completed.
A bit worse are the constant password change requirements - thankfully the password helpdesks in the public sector are so used to doing password resets that you can usually get a reset very easily just by giving a username if you are directly accessing a system (i.e., internally, so you have access to the helpdesk). The passwords can be crazy long, though, with 90-day expiration (senior folks write them down or give them to assistants). Some actually expire even without a login (they email you and say that unless you log in and change it, it will expire).
And for this to work at all, the signature needs a timestamp so that the OS can know that the certificate was valid at the time of signing.
But for some reason the tooling (signtool.exe, etc.) makes it really hard to get this done properly.
This is especially true in a CI setting. This is one of those areas where signing and timestamping essentially turn a reliable and deterministic process (compiling code) into an unreliable and non-deterministic one, because builds can now fail randomly based on the state of an online timestamping service.
Getting this done properly is a lot more work than you at first would think.
I can see why lots of developers shy away from learning about this, even more so implementing it, when they can spend their time delivering value... And new builds which won't expire for another 2 years.
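For what it's worth, the mitigation we settled on is to treat any single timestamp authority as unreliable and rotate through several before failing the build. A sketch (the server list and file names are just examples):

    # hypothetical CI helper: try several RFC 3161 servers before giving up
    import subprocess, sys

    TSA_URLS = [
        "http://timestamp.digicert.com",
        "http://timestamp.sectigo.com",
    ]

    def sign_with_timestamp(path: str) -> None:
        for tsa in TSA_URLS:
            rc = subprocess.run(["signtool", "sign", "/fd", "SHA256",
                                 "/f", "release.pfx", "/tr", tsa,
                                 "/td", "SHA256", path]).returncode
            if rc == 0:
                return
        sys.exit("no timestamp server reachable; failing the build rather "
                 "than shipping an untimestamped binary")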
>This is especially true in a CI setting. This is one of those areas where signing and timestamping essentially turn a reliable and deterministic process (compiling code) into an unreliable and non-deterministic one, because builds can now fail randomly based on the state of an online timestamping service.
Why are you signing your drivers during development? If not, how is the "unreliability" of timestamping services an issue? You probably can't push your stuff to prod without timestamps anyway.
I was referring to Windows code-signing in general, not drivers in particular.
This kind of problem affects regular customer-facing applications too, and that's where I've worked hard to minimize the issues caused by the need for timestamping, while still doing things properly.
(That is, if timestamping fails, the build SHOULD fail)
To me it hardly sounds like a problem for code-signing in general either.
You presumably don't need to sign your binaries during development, so you'll only be signing them when pushing updates to prod. I don't know how often and how urgently you usually do that, but it sounds like a small delay in pushing out prod builds caused by a timestamp server issue wouldn't be much of a problem to most orgs.
It’s often a nesting/dependency problem which is hard to solve after the fact.
A website may depend on libraries, which depends on other libraries. They must all be signed. They are all interlinked against a known version during build, and signing them after linking may not be an option.
The website may offer a download, which is probably a setup file of sorts, which must be signed. It will of course contain binaries, which must be signed too.
The website itself may also be something packaged in another setup file, which too must be signed.
Now how do you “just” sign something like that before releasing to prod? You don’t.
Signing (and timestamping!) must be an integrated part of every step of the build process.
It takes more work than you would expect to get this done properly, systematically and reliably for every part of your build process. It does take effort and expertise.
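To illustrate the nesting with a sketch (sign the leaves first, then every container that wraps them; the file names are placeholders):

    rem sign the innermost binaries first
    signtool sign /f release.pfx /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 app.exe helper.dll
    rem build the setup file from the already-signed binaries (your packager here)
    rem then sign the wrapper itself
    signtool sign /f release.pfx /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 setup.exe

And that's the simple case, before library signing and website packaging enter the picture.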
That said, for certain build-types (like pure CI) we disable things like timestamping. We’re not crazy :)
> Getting this done properly is a lot more work than you at first would think.
It doesn't take a lot more work than I think.
> But for some reason the tooling (signtool.exe, etc.) makes it really hard to get this done properly.
In our experience, signtool doesn't make this "really hard to get this done properly". The CI for our primary product uses a remote server for timestamping at signing. While that server doesn't go down constantly, it does go down at least once a month. This is not an artifact of signtool but the vendor.
For security, it is far better to have something fail and not sign than to sign incorrectly. The opposite, a get-it-done-at-all-costs attitude, was the likely cause of the incident this article was written about. In this case the cost was signing something incorrectly, due to a misunderstanding of the very basics of certificates, and bricking the company's already-working primary product.
Microsoft has a mode for loading unsigned drivers. Every Windows developer should already know about this. If a junior developer without this knowledge is in control of a critical build process, that's the problem not signtool.
I've met far too many people who treat a lack of security knowledge as a positive or a badge of honor of some kind. It's not a positive; it's something undesirable and a liability for employers.
I don't see enough info in the article to conclude what happened. But I have a hypothesis.
I bet they use certificate pinning.
Process A launches process B and checks against a pinned certificate. This is even more secure than just using the windows code signing stuff.
Problem is, when their cert expired, they were supposed to renew the same cert. Instead, somebody got a new one and signed the build of process B.
The device automatically downloads process B, but then the certificate pin check fails when it tries to launch it.
All the security guides that tell you to do certificate pinning need flashing neon signs explaining this problem. You can't pin certs if you intend to ever change certs.
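If you must pin, the usual way out is to pin something that survives certificate rotation. A sketch (hash values and names are placeholders): pin the public-key (SPKI) hash rather than the whole cert, and always ship a backup pin for a spare key kept offline, so you can rotate without bricking the check.

    # pip install cryptography -- illustrative only
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    PINNED_SPKI_SHA256 = {
        "aaaa...",  # hash of the key inside today's cert
        "bbbb...",  # hash of an offline backup key, for rotation
    }

    def pin_ok(cert_der: bytes) -> bool:
        cert = x509.load_der_x509_certificate(cert_der)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest() in PINNED_SPKI_SHA256

Renewing a cert with the same key then still passes the pin check; only a planned rotation to the backup key changes anything.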
According to one thing I read (which I will admit was third party comments) there was a countersignature present up until version 1.22 and then in 1.23 it disappeared. I am not familiar with Oculus' binaries though so I don't know how far back that would be.
The Windows signature tool, signtool.exe, makes the timestamp an optional parameter. There are no warnings if you don't supply it. I'd suggest this is a poor design - do most signatures not want the timestamp by default?
One wonders if we've made technology unnecessarily complicated. In order to build something like the Oculus Rift, they obviously needed expertise in hardware design, optics, display technology, manufacturing, user interface design, etc etc. Also, they apparently needed expertise in managing the ins-and-outs of the Windows driver security system. Adding one more subject to their already crowded curriculum wasn't very nice of Microsoft.
A lot of applications and environments seem to be built with the assumption that they can add arbitrary complexity to their interface, since they're only going to be used by "experts" who can be expected to know everything of relevance and work through a thick documentation to understand the system. In truth, the "experts" who use your programs are going to also be using a dozen other applications, each with their own piles of documentation (or equal amounts of lack-of-documentation,) and have little brain-space left for the intricacies of your framework. So, they're going to use your system while knowing the minimum possible amount about it; if that system contains traps that cause problems for this kind of user, that's bad design.
I was just reading something here about Cairo and how it's easy to fall into slow code paths with it, and how if you happened to fall into a slow code path, then somewhere along the line "you fucked up."
When I read the comment I was immediately flabbergasted: no, someone else fucked up. It's not my fault someone wrote software that sets up undocumented traps for me to fall into. Or provided three ways to do something and two of them are not recommended OOTB. Or is primarily documented by third parties.
The problem in this case is much deeper than their fault / your fault. The problem is that in this industry we do (have to?) lean too much on the power of abstraction.
Whether you are writing SQL or graphics code you are constantly told "just express what you want to express directly, and the system is smart enough to do things as efficiently as possible".
But that might not be very efficient at all. The people who write "the system" have to write software that does specific things in specific situations and there will be endless cases which cannot be dealt with efficiently. And the more the interface hides the implementation, the less likely it is that those cases will be obvious.
Yes, you’re talking about the proliferation of declarative APIs over explicit, imperative ones.
My pet theory is that because we typically understand our needs before we understand the code paths required to fulfill them, our V1 APIs are usually a declarative “this is what should be” interface. Then we spend days or months making it happen and by the time we understand the required code paths, we’ve totally baked our expectations about the “make it so” API into our architecture.
Getting to a good, explicit, imperative API requires a whole nother step, and often a major refactor. You have to step back and ask “what is really happening now and can I conduct it more directly?”
... but by that time your code works and it hardly seems worth the effort.
But it is worth the effort. The declarative API will just get uglier as you add to it, and provide no real guidance about where future additions should go. Just throw another key in the config. Add another conditional. An imperative one will constrain your future choices about how additions can work, and therefore helps you clarify your domain model as you add to it.
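A contrived sketch of the two styles (every name here is invented):

    # declarative "make it so": hand the system a description and hope
    # it can figure out an efficient way to realize it
    render({"window": {"size": (800, 600), "pinned": True}})

    # imperative "conduct it yourself": more calls, but the seams are
    # visible and future additions have an obvious place to go
    w = create_window(800, 600)
    w.pin()
    w.draw()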
I think your theory and seeming aversion to declarative interfaces is a bit too general.
Many declarative interfaces are the culmination of decades of work on re-inventing the wheel at the imperative level. The relational model and SQL databases are a great example of this - “we’ve solved these problems below this line mostly, please move on and focus up the stack closer to the customer/user”. It became a multi billion dollar industry as it nailed the 80/20 rule for data management.
And for a large class of problems this remains solid, despite a perpetual subset of engineers who think they can do better than the declarative interface and engine and build an alternative. Sometimes it makes sense to be adventurous and drop back to the imperative level - where SQL databases fell over in recent years was the scalability of the underlying engine for the largest uses. But a Postgres or MySQL endures as a great declarative abstraction over a very complicated set of issues.
No, I agree with you. There are some great declarative interfaces and there are many problems best solved by one.
SQL is a great example. But it’s also an example of how difficult it is to make a great declarative API. Look how much effort went into designing and then refining and then adapting SQL to changing needs. It’s been an enormous effort across a huge community.
Same with CSS. Those aren’t things some developer just wrote and they got refined in an application over time. There are entire conferences about the regular redefinition of those interfaces.
My point is that concept doesn’t scale to some large number of abstractions. SQL and CSS are big manifolds that operate on the boundary of your application, and often separate you from an entire other group of developers working to maintain the other side of that contract.
If you try to apply that same setup to every problem, with thousands of declarative manifolds all intersecting within your application, you get chaos. Which is what many modern frameworks try to do. Declarative APIs only work when they are rare and standard and everyone knows them.
Whether you are writing SQL or graphics code you are constantly told "just express what you want to express directly, and the system is smart enough to do things as efficiently as possible".
That's just an excuse to stop mediocre engineers who would fiddle endlessly with pointless micro optimizations.
You very much need to understand the performance of your code if the product has performance requirements that fail unless you do.
I presume most coding is just CRUDing small strings from UI to database, where performance issues don't kick in, and hence it would be wasteful to care about them.
I've watched production builds slow to a crawl and become sluggish as molasses because someone on the dev team was indoctrinated in this creed (for many reasons, including a Cartesian explosion in complexity due to some innocuous LINQ calls).
"...or graphics code you are constantly told "just express what you want to express directly, and the system is smart enough to do things as efficiently as possible"."
Who is saying that? I've worked in video games for 18 years and never heard that. In fact, anyone who said that would get puzzled and suspicious looks. We generally use a very C-like variant of C++.
Your comment sounds accurate from what little I know of the game community. But the reason it is fun to read about game programming is precisely because you guys are so willing to dive into the details.
I was responding to a comment about Cairo graphics, which I assume is not much used in games. The kind of graphics I was thinking about is things like Cairo, Windows WPF, the Web.
That said: once upon a time, OpenGL was supposed to achieve efficiency by being high-level, so that the rendering stack could optimize your drawing better than you could. That idea didn't work so well, and modern OpenGL seems to take the opposite approach.
i don't think it's a problem of abstraction itself. it's a problem of poor abstraction. most APIs and things of this sort aren't some beautifully crafted abstraction that even tries to take care of things nicely for you.
and efficiency of computation is the least of my worries. i would like things to just work.
This one will hopefully be solved quickly by the company, but think of what would have happened if this were a piece of technology sold in the hundreds of thousands by a company now out of business: instant tons of electronic junk that would instead be perfectly usable if there were a law mandating that all software/hardware details be released when any of these conditions are met: the IP owner going out of business; the company declaring the product obsolete and stopping any technical support or upgrades; or product sales plummeting due to competing or newer models. The first two are obvious, while the third would allow some of the devices to be repurposed instead of thrown away.
I've saved a good number of old access points / routers from the landfill by installing OpenWrt/LEDE where possible, or their latest available firmware, pairing them together, and adding homemade external antennas (a small WiFi antenna enclosed in white PVC pipe plus self-bonding tape, silicone sealant and heatshrink, RF240 cable, and an RP-SMA or N connector => years exposed to sun, rain and snow with zero problems). I install them at really low prices for customers who need a cheap wifi bridge from point A to B. I would love to do a similar "afterlife" service for old cellphones, but none of them could host a true native Linux install because of how tightly closed the underlying hardware is, and all of them sooner or later are doomed to be thrown away.
The problem lies in the IP. It's considered such a vital asset that when a company goes belly up, the IP survives, kept for years or decades in a safe by law firms in the hope that someone will buy it, or just to make profits through litigation against infringers. Unfortunately this has a deleterious effect on products derived from that IP, on the people who bought them, and on the people living where the unusable products get trashed.
They let their certificate expire, essentially bricking all of their devices. And now the app running it won't start, so they can't push an update.
Just recently picked up a Rift. I love the hardware and their exclusives are top notch, but this confirms my suspicions that their backend is super goofy.
They sell Rifts at Best Buy and want to pretend that it's a consumer-ready product, but here's why I am recommending people stay away for now:
- Non-existent repair or service out of warranty.
- Basic things in the platform like changing your name or photo don't exist.
- Lots of non-response over other basic features requested by the community.
- Questionable future investment in the platform or hardware. It sounds like they are moving their efforts towards "lighter" experiences.
In short, it feels like being a legacy customer for a new product.
I see the exclusives as a negative. I don't want to support their efforts to build a closed ecosystem around what should be an open API that any headmounted display + tracking can expose.
Would anyone with a straight face say the same about Nintendo's exclusives?
I don't see anyone else in the market funding great things like Medium, Quill, Lone Echo, or Robo Recall. It feels like cutting off your nose to spite your face to complain about this.
No, because the Switch is a game console whose hardware is produced by Nintendo to run Nintendo approved software. It doesn't allow homebrew by default and is a closed platform, and pretty much everyone buying one knows this.
An Oculus is not a standalone product; it is a peripheral that relies on an open PC platform for its processing. If software the Oculus uses can run on any PC and the Oculus does not have a unique hardware capacity to operate the software, then the exclusivity is an arbitrary constraint.
Oculus wants all the perks of being its own platform without the responsibility or technical merit.
Very well put. Can you imagine if say Logitech funded a flight simulator that only supported their joysticks, or Samsung funded a game that would only run on their monitors?
Google has funded Tiltbrush and Blocks and released it for every VR platform. We need to grow the industry. But this isn't their biggest problem - that would be their extremely walled-off app store. Oculus must still see their platform as a kind of gaming console and not the future of computing? What a shame.
You realize that this is the only way to fund AAA content for the time being, though, right?
Oculus is producing these at a loss given the current size of the market, hoping it will pay off in the long term by growing a healthy ecosystem... it's the only way.
"Built ground up for VR AAA titles" != "Adapted VR AAA titles"
I refuse to use Revive to play any of these games because I don't want to support the exclusives, but I've got to admit that something like Robo Recall looks better than anything that's available on Steam.
SteamVR (OpenVR) platform, which Fallout VR uses, supports Oculus Rift just fine, and so does OSVR. It's the other way around that needs hacky unofficial wrappers.
> They let their certificate expire, essentially bricking all of their devices.
This also suggests that if they decide to stop supporting it, eventually the software will stop working due to these certificate errors, for which there will then be no fix.
It may appear to suggest that, but it doesn't. These are code signing certificates, not TLS. The certificate isn't for the executable but for the ability to sign an executable. You would only encounter the issue again if they shipped a new executable signed with a certificate that had already expired at the time of signing.
Call me when it's a wireless & self-contained unit. Until then, I just cannot honestly see it taking off in the commercial space. Industrial & enterprise-ish use maybe, but to regular consumers hell no. It's still a mess of wires and sensor installation, not to mention you still need that high-end gaming PC (and with the prices of GPUs being what they are it's a no-go for the vast majority of people).
HTC is close to releasing a wireless module for the Vive. Combine that with their next-generation headset, with the screen-door effect mostly gone and much higher resolution, and I think we'll have something.
Still need the sensors, but those aren't that large.
It will be as useful as a brick if nothing can connect to the headset because it requires a signed communication DLL with a built-in obsolescence timer that must be continually refreshed by the company.
I feel like kicking out the actual founder of the company may have been a bad idea. Politics or whatever, I get it, but it doesn't feel like there's a vision guiding what VR is supposed to become.
Not the above commenter, but Google Earth VR is like something out of a sci-fi film from 20 years ago. I was sincerely impressed and enthralled. Of course, the more street views they can capture, the better it will get.
There are other education experiences like BBC Home which is one of my favourites. Another, Mission:ISS allows you to explore the ISS and control the Canadarm to dock a module. Highly recommend them.
edit: I almost forgot, Go For Launch:Mercury is also very worth it. You can choose manual mode which puts you in charge of setting the launch procedures. The graphics aren't as good as I'd like, but the experience is good.
It seems inline with the BBC charter (it's a non-service activity which supports learning for people of all ages [one of the BBC's stated purposes], and helps fulfil the requirement that the BBC must promote technological innovation), so it would seem to be a reasonable use of license fee income. It could be seen in a similar vein as the BBC Micro, which was designed and built by Acorn.
IIRC the BBC has long had some pretty great in-house developers as a rule. It sounds like it was more of a joint project, i.e., the BBC wanted to do the project and recruited a partner to help see it through quickly enough. That's how it appears to me, anyway.
But if you get a chance to try it, do! I will warn you though—the first time through is a bit intense. I was new to the platform and had to take the headset off for a few minutes mid-game. After trying it again I found my legs.
Also, it might seem counter-intuitive, but I've found alcohol helps desensitize oneself to the experience.
But when it comes to kids/education, etc. Just be careful. I've invited others to try out some of the more intense experiences and I've even had people get mad at me for it.
Cool, I wonder if they can gather movement data and recognise you in camera footage even when they can't see your face. I'm sure it will come, if it's not possible yet.
> - Non-existant repair or service out of Warranty.
That.
My Rift has been scratched since first use (I swear it came scratched), and I've never been able to get them to acknowledge that the device can be scratched into a blurry mess. "Oh, that's only regular wear and tear" ...
The lenses cannot be replaced or repaired (and they could be, back in the Rift DK1 era).
It's more complex than a display; it's a display plus a collection of USB sensors and some low-level hooks into display management. This requires kernel-mode drivers, for good technical reasons, where a normal monitor wouldn't.
That's a reasonable answer to why the drivers need to be signed in order to be installed. That's not the question. The question is why should the already-installed drivers that you've been trusting all along suddenly stop working.
It needs to exist in an untrusted environment. It verifies that the driver, which has low-level access to the computer, hasn't been modified by a third party. MS-signed system binaries are the same way; it's a safeguard against malicious entities.
I'm not arguing against the entire concept of driver signing, just one specific nuance of Microsoft's driver trust model. There is a place for driver signing and signature checking, but de-trusting a driver that you were perfectly happy to give kernel-level access yesterday doesn't make sense.
It's still Oculus' fault because they didn't use a timestamped signature.
A timestamped signature on the binary would have kept it working, and that's how MS intends it to work. You can leave it out if you have a desperate longing for your software to break suddenly and without reason, like Oculus's just did.
And that's where MS is at fault: drivers without timestamped signatures should be treated as faulty. This would prevent these errors in the first place.
I guess I can see reasons why some companies may want to be able to produce time-limited drivers:
Maybe they want beta versions to stop working, forcing users to upgrade.
For offline computers, it might be that some companies would see this as a way to enforce contract periods (customers would have to install an update to continue using the product when their contract is renewed).
Of course, disabling driver signature verification is still a way to bypass that, but often times the companies that do things like this probably aren't thinking about that.
Maybe the API should then explicitly ask for a 'timestamp-none' in case the driver needs to be time-limited, forcing the developer to at least think about it.
There are use cases for signatures without a timestamp.
Besides, literally every codesigning blog post/tutorial/guide I found tells you to use a timestamping server, so the guys and girls at Oculus must have skipped the critical parts of whatever they used.
As you might have guessed, when you don't want someone to use a binary beyond a certain date.
Security solutions could benefit from this: the customer will have to update, or disable the signature check, if their version of the solution becomes too old. Old versions could open them up to vulnerabilities.
Another might be when you distribute beta or testing versions of your software. The customers can safely test the version and the lack of timestamp prevents them from running it in production permanently. They have to update to the release version.
It could also be useful when you sell software to a business and want them to test it first. You send them the program without a timestamped signature and limit the validity of the certificate; that way they can't just run the test version forever.
Really, anywhere where all parties involved, user and producer, agree that a binary shouldn't run forever, but the producer might not fully trust the user on that point.
The customer can always re-sign the binary if they want to and replace the existing signature. A time limit inside the program code would be more secure.
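Something like this, conceptually. (A toy Python sketch; a real driver would do this in C, and the names and date here are made up for illustration.)

    # Toy sketch of an in-code expiry check (hypothetical names and date)
    from datetime import date

    LEASE_END = date(2018, 6, 1)  # example cutoff, not a real product date

    def check_license():
        if date.today() > LEASE_END:
            raise RuntimeError("evaluation period over; install the release build")

Unlike a certificate's Not After field, this can't be removed just by re-signing the binary.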
Does that really not sound ridiculous to you? Microsoft needs to be blamed for their certificate validation implementation because people might use it to make their software expire? Instead of just writing code that does so?
Microsoft's driver signing model has a mode that is a giant footgun with no redeeming value. Oculus is a victim of Microsoft's bad design. They weren't trying to build in a self-destruct timer for their whole product stack, and if they were, they wouldn't have used the driver signing certificate as the lynchpin.
I find this reasoning ("footgun", Microsoft's fault) interesting when compared to the prevalent HN opinions when it comes to, for example, (unsecured) redis and memcached servers being used in DDoS attacks, or even AWS S3 buckets (with confidential or even highly classified files) being -- inadvertently -- left wide open to the public.
In those cases, "we" (as a "community", in general) often blame the people responsible for running those services instead of the developers (or Amazon) being blamed for choosing convenience/ease-of-use over security. That is, we're often quick to say that the people running those wide open memcached servers are at fault for not properly configuring and/or securing them -- and not blaming the developers for creating "a giant footgun".
"You shouldn't be running servers on the Internet if you don't know how to properly configure them" (paraphrasing) is often stated. Yet, in this case, we're not blaming Oculus for their screwup and instead blaming Microsoft -- even though there's zero evidence (AFAIK) that Oculus even used any Microsoft tools to sign their application. (N.B.: I don't know the first thing about code signing on Windows so it may well be that using a Microsoft utility is required and, thus, just assumed by those of you who are familiar with the process. If that's the case, sorry.)
I'm having trouble trying to reconcile these two seemingly opposing viewpoints. Why is Microsoft's utility "a giant footgun" but a (OOTB) completely insecure by default, wide open by default memcached server (for example) isn't?
There's a use case for redis and memcached being open to the network, and a failure mode if you don't properly separate your internal network from the public Internet. There's a use case for S3 buckets that are publicly readable, if they don't contain sensitive/private information. Those features have reason to exist, even though there's potential for misuse. Secure defaults would be nice, but can't eliminate these risks.
There's no reason for drivers to have an expiration date. There's no scenario where it makes sense for the configuration that Oculus stumbled into to be possible.
> There's no reason for drivers to have an expiration date
If you can license software with a definite expiration date, why can't you license hardware with a definite expiration date? And have your license enforced by the operating system? Imagine that I'm a company with a hardware product, and instead of selling that hardware at large expense, I rent it out, and provide drivers with an expiration date to enforce the terms of the hardware lease. If the lease is renewed, I'll provide new drivers with a new lease expiration.
Not that I'm arguing for hardware licensing, or arguing that it was what Oculus was trying to achieve and screwed up somehow. But there's a difference between "Microsoft built a feature some of their customers didn't know how to use" and "Microsoft built an anti-feature".
The driver signing system is not an effective way to implement an expiration date, if that is your goal. Driver signing enforcement can be disabled rather easily by skilled users. Licensing restrictions written into the code of the driver itself are harder to bypass. It also does not seem at all likely that Microsoft intended for the driver signing system to be usable as a time-based DRM mechanism like this.
I'm not being facetious, FWIW. I know a fair amount about PKI, in general (probably in more depth and intricate detail than the average HN'er, actually), but I'm not a developer and I know very little about code signing in particular (and even less when it comes to the Microsoft world of code signing).
I do find it kinda hard to believe that there's no use case whatsoever for this particular configuration (code signing w/o an included timestamp from a TTP), though. I certainly understand why a timestamp can be valuable (as it would in this case) but what isn't clear is that there is "no scenario" whatsoever where the lack of a timestamp might be acceptable or perhaps even desired.
As I said, though, I don't know enough about code signing specifically to know what these scenarios might be but I can't imagine there isn't even one of them.
> but what isn't clear is that there is "no scenario" whatsoever where the lack of a timestamp might be acceptable or perhaps even desired.
I can readily imagine a scenario where a driver with a signature but no separately attested timestamp should be acceptable. What I cannot imagine is a scenario where it is useful to treat a driver signed in such manner the way Windows currently treats the driver.
Sure, there are plenty of different opinions here on HN. If there wasn't, these discussion threads would be boring and useless.
My point was: in most threads, there's a common opinion or viewpoint shared by most, along with a few "detractors". In general, though, the overwhelming "predominant" opinions (within/on a particular subject) are pretty consistent from one thread to the next.
For example, the "it's the end user's fault, not the developers" thing I mentioned earlier. That seems to be, pretty consistently, the "belief of the majority". Here, though, it's the complete opposite. Instead of saying "the end user (Oculus) screwed up" (which, IMO, they certainly did, FWIW), it's "Microsoft made a footgun which caused this".
That said, I have now made it through the rest of the comments in this thread and it seems that this viewpoint isn't as widespread as it first appeared. Perhaps I just jumped to a conclusion much too quickly; there's obviously plenty of fingers pointing at Oculus as well.
Interesting. I would think that opening unsecured services on to the Internet at large is a big no-no; and that whoever sets the default-allow is the one who's setting the trap here. Yes, the admin should inspect any installation for traps, but that's, as you note, secondary to "don't ship software which has a highly convenient trap set up". Most software traditionally exposed to the Internet did manage to do that, back in the early 2000s (by shipping default config `interfaces=lo` or somesuch), nobody should get a free pass on that, MS or not.
That's not how APIs work. You don't eliminate stupidity by thinking of each and every way the user could screw up. You simplify your API so there aren't so many ways to use it in the first place.
So instead of trying to conform to the x.509 spec MS should have just developed their own certificate validation scheme, because that would totally be less of a "footgun" than conforming to the spec.
Am I getting this right?
Why aren't we blaming the people behind RFC5280, after all it was them who came up with this awful idea that certificates should expire.
> giant footgun

Oh dear god, how are you generating your certificates? This is not a footgun unless you are doing something immeasurably stupid before even involving MS products.
Besides, if you insist on going ahead and setting the Not After field, wouldn't it be a bigger footgun to ignore that?
However, I'd argue that disregarding the Validity section would be an unusually big departure from the spec, not comparable to the typical silliness surrounding x.509.
How about we stop blaming people who accidentally pressed the figurative "system self-destruct" button and start asking why there are so many of those buttons everywhere? Nowadays this is a recurring theme: simple mistakes leading to catastrophic failures at grand scale. "Just be more careful" doesn't cut it anymore, because in the software world there are just too many things to be careful about.
Because when you edit code, basically every character you type has the potential to become a big red button. It's not that developers add them for fun.
And each time you develop new functionality, you have to figure out how to build a very solid glass box around the red buttons you just created. The default is none.
Alternatively you could offer no user-configurable functionality whatsoever, and rarely and carefully upgrade any dependencies (including the OS), and you will create a very robust program.
But then don't expect to end up with a wireless VR headset with an online game catalog and multiplayer capability. You can end up with a very nice banking application developed in COBOL, for sure.
They presumably found instructions for signing their code, read the command, and signed their code as instructed.
Today, the certificate they signed a driver with expired, and because the signature wasn't timestamped, Windows can't know whether the driver was signed before or after the certificate expired. So the signature is now treated as expired as well, and Windows doesn't trust the driver.
Why wasn't it timestamped? Probably because instructions like the link above treat that as a subject separate from signing your code, and when you sign your code it looks and works like it's fully, correctly signed.
Your same link also explains how to sign a file with a timestamp, and contains a link on how to add a timestamp after the fact. It doesn't pretend to know what the best practice for your specific use of the signing tool is.
> Your same link also explains how to sign a file with a timestamp as well
Sure, further down it explains how to use signtool to timestamp something, but why would someone trying to get an app signed care about using the tool for timestamping?
> where Microsoft does talk about when you should be timestamping
If someone finds that article first, and reads to the bottom of it, it explains that timestamping is related and important, and describes the times you should be timestamping as "you should definitely do this". So perhaps a design that isn't a footgun would have "do this" as the default, with a --force option and alarmist warnings for anybody who has a reason to have their signed executables expire one day.
Design oversights and user mistakes like these will happen, but it doesn't mean influences and causes can't be identified and improved.
(most of the builds at the company I work for had not been timestamped either)
The problem isn't that Windows requires drivers to be signed. The problem is that Windows allows drivers to have an expiration date. If Windows verifies a driver's signature at the time the driver is installed, the driver should be considered trustworthy for as long as it remains installed on that system. There's no reason to re-verify the signature every time the driver is used.
Recommended practice is to timestamp windows drivers (and software) when they are signed. Without a timestamp, the driver is not trusted after the signing cert expires, which I guess is what happened here.
With a timestamp, as long as the signing date was within the signing cert's validity period, the signed driver continues to be trusted beyond the signing certificate expiration.
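Concretely, the difference is one flag to signtool. (A sketch, not Oculus's actual build command; the cert file and password are placeholders, and the DigiCert URL is just one example of a public timestamping service.)

    :: untimestamped: signature becomes invalid when the cert expires
    signtool sign /fd SHA256 /f company.pfx /p <password> driver.sys

    :: countersigned by a timestamp server: signature outlives the cert
    signtool sign /fd SHA256 /f company.pfx /p <password> /t http://timestamp.digicert.com driver.sys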
That seems silly. Presumably a cert has an expiration date after which we might assume it's been compromised. If it has been compromised, then it could have been used to backdate a driver signed with it. In other words, if you don't trust the cert, you should not trust anything signed by it. Or is there another layer in this somewhere?
The timestamp server is a separate trusted entity that signs the signature asserting the date and time. It's not just metadata, it's effectively a separate signature.
> Then you would need an internet connection just to install a driver.
If you think I'm proposing any changes to how drivers are installed, then you have misread me. I'm proposing a change to how already-installed drivers are handled: absent any new information, the code that was trusted yesterday should be trusted today, and be allowed to keep running.
Imagine a scenario where a driver is installed during a network outage and with an incorrect clock. Because you need to be able to install a network driver, the system will allow this security flaw. However, when the system knows better, it's reasonable to limit the damage by stopping the driver.
You could say that any damage has already been done, which is most likely true. But I can't fault them for mitigating it as much as possible.
I suppose you could modify the system to get external attestation of the time when the driver is installed and use that as a sticky bit - but it's a big complication, and it's much better if the driver is securely timestamped in the first place.
> Because you need to be able to install a network driver, the system will allow this security flaw. However, when the system knows better, it's reasonable to limit the damage by stopping the driver.
The only way that the system "knows better" is by acquiring something like a certificate revocation list. The system does not know whether it was powered down for five minutes while the network outage was fixed, or for five years. When the system is powered back on with a working internet connection, it does not have any reliable way to tell whether the offline installation of the network driver occurred prior to the expiration, or after the expiration with a properly backdated driver and backdated system clock. There is no way to justify suddenly de-trusting a driver that's already been running simply by observing that you're in the future.
Even then you only need to verify that once, and can save a timestamp in case the cert is revoked afterwards. Breaking a system that has already been verified is still unjustified.
Can’t Microsoft give you an error report when they do this, to let you know what you are doing is probably very dumb?
I guess I don't know whether there's a point when Microsoft has their code and their contact information and does some kind of preflight check, or if that ever actually happens, and there are already so many ways to be very dumb with drivers...
Part of what driver expiry does is to prevent attackers from trivially banking older vulnerable versions of drivers and using them to bypass kernel protections.
Checking at install time is effectively useless. The whole point of running signed code is that you can't just load some rootkit. Secure Boot only loads a signed bootloader which only loads a signed kernel which only loads signed kernel modules. You can't do what you're suggesting without fundamentally breaking this chain of trust. What's to stop a rootkit from just spoofing that it was installed months ago?
> What's to stop a rootkit from just spoofing that it was installed months ago?
The fact that if a rootkit is in a position to perform that spoofing, it doesn't need to, because it already has the power to make arbitrary modifications to the system image.
The whole point of signing everything from the bootloader on down is to make sure that even ring 0 control over the computer can't persist through a reboot. Allowing signatures to work the way it was suggested would break any hope of something like Secure Boot ever working. As it is you're already trusting timestamping certificates to effectively live forever.
The signed kernel keeps track of when it first saw a certificate. That record is kept by the signed kernel itself, so a rootkit can't spoof it unless the system is already compromised.
Even the kernel can't modify its own code and persist through a reboot. The kernel only loads signed code that isn't malicious, the bootloader only loads signed kernels that aren't malicious and don't allow you to run malicious code as ring 0, and the BIOS only loads signed bootloaders, etc. There's a root of trust from the hardware on down that makes sure that you cannot run unsigned code as ring 0 and if there's a compromise it can't persist through a reboot. Allowing the kernel to mark certain modules as "signed" like you're suggesting would allow a rootkit to install itself via some exploit. This would render moot the whole point of Secure Boot in the first place.
I think that's better handled through a Certificate Revocation List (CRL), especially in this case, where it's fairly easy to enforce and keep up to date.
CRLs are pretty difficult to scale resiliently, though, for a number of applications. Same problem that led to OCSP stapling after OCSP became a thing. With CRLs you can at least take advantage of a CDN of some kind, but there are tradeoffs with your ability to operate a CRL securely doing that, too.
CRL in the driver install flow implies being online (at some point) to install drivers too. As we move into the future it’s hard to imagine not having Internet access, but we also don’t design Windows. It’s definitely a case they’ve considered, though I did see mention of a timestamp server in this thread (I don’t know much about Windows signing, just X.509 PKI in general).
If by "malware defense" you mean preventing stolen expired certificates from being used to sign code, then yes. If you mean only allowing code to be "signed" for the duration of the certificate, then no.
Is that true though? Could a malicious driver be signed with a compromised key and distributed? Seems like a useful feature to be able to mark drivers as compromised.
> There's no reason to re-verify the signature every time the driver is used.
I was replying to this part of your comment. It does seem worthwhile to validate the signature of the driver every time the driver is used if that check would reveal when a certificate has been revoked for having been compromised.
Agreed that the expiration time is not particularly useful for this purpose.
> It does seem worthwhile to validate the signature of the driver every time the driver is used if that check would reveal when a certificate has been revoked for having been compromised.
It would be much more efficient to scan the list of installed drivers every time a certificate revocation list is updated, because certificates are revoked much less often than operating systems are booted.
And there's nothing gained by just checking timestamps if you don't have a new certificate revocation list. If the driver is already installed and was trusted and running yesterday, you gain no security by deciding to not load and run that driver today, unless overnight you acquired new information that the driver is insecure or malicious. The ticking of a clock does not convey any such information.
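In pseudocode, the model I'm arguing for is roughly this. (A Python-flavored sketch; every name here is hypothetical, this is not a real Windows API.)

    # Re-check installed drivers only when new revocation data arrives,
    # instead of re-validating every signature on every boot.
    def on_crl_update(crl, installed_drivers):
        for driver in installed_drivers:
            if driver.cert_serial in crl.revoked_serials:
                quarantine(driver)  # refuse to load it from now on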
Not all e-books (or readers) have DRM. I understand and share your concern, but you can shift to only DRM-free e-books with your own backups and under your own control.
That's the only kind of ebooks (non-DRM) ones that I buy. Some publishers (like O'Reilly or Tor dot com) exclusively publish non-DRM books.
I politely ask authors or publishers to release non-DRM ebooks when I can. Apart from best-sellers (or time sensitive books), I believe most books are stuck in the long tail of obscurity and releasing them as non-DRM won't affect their sales.
This also means I don't visit pirate websites that claim to have ebooks. It takes two to tango. :-)
This kind of stuff worries me as well. Security experts generally don't seem to give a fuck about things being unusable due to security systems "Working as Designed". It just doesn't factor into their analysis. As long as systems are not "compromised" in some narrowly defined sense, everything is considered fine.
Generally this attitude doesn't backfire, because individual users losing access to their data, their accounts or their software can simply be dismissed. But in this case it happened to everyone at once, so suddenly it's a big deal.
This isn't my experience at all. Most "security experts" I know are familiar with the so-called "CIA triad" and understand quite well that the "availability" part is just as important as the "confidentiality" and "integrity" (i.e., the "not compromised") parts.
If one doesn't, well, she isn't much of a "security expert" after all, is she? Firewalling off TCP port 80/443 at your perimeter firewalls isn't a very good solution if you're an e-commerce company selling your product on your web site -- and the "security experts" know this.
That's weird. The blog post that answer links to implies that even if the timestamp certificate AND signing certificates are revoked on a date following the purported timestamp, then the signature is still trusted.
That doesn't make sense to me: If the certificates are compromised, an attacker could backdate the timestamp to whatever he wanted and sign anything.
> implies that even if the timestamp certificate AND signing certificates are revoked on a date following the purported timestamp, then the signature is still trusted.
That's not how I read it. The "lifecycle table" doesn't mention the case of both signing and timestamping certificates going bad at the same time, only what happens if one of them expires or is revoked. The only mention about both certificates going bad is in the text below the table:
> But timestamped signature remains considered as a valid even if all certificates in the signing and timestamping chains are expired.
Note that it doesn't say anything about certificates being revoked.
As an outsider my guess is that by having a signer and counter-signer you are accepting that the chance of both certs being compromised is minimal enough.
Of course this sorta falls apart if you consider large bad actors could have legit timestamp certs over a period of time and then use those historic certs on backdated servers to counter-sign a stolen signing cert. It would appear as a legit signed and counter-signed cert done when the signing certs were valid.
Saw this, opened Oculus Home, there's a message in the Updates tab saying "An update may not have installed correctly", and indeed, VR apps didn't work.
Nate Mitchell of Oculus posted on Reddit saying "We're working on resolving this issue right now. We'll keep everyone posted on progress here." https://www.reddit.com/r/oculus/comments/82nuzi/cant_reach_o... . Top-level of that thread has a workaround involving setting the clock back or using a utility called RunAsDate to fake the clock for a single application.
What a horrible design decision. Instead of making a system that simply works or doesn't work Microsoft allowed everyone to produce apps which break at random times in the future. It's one of those "what could possibly go wrong?" cases.
This, and many incidents like it, makes me think that running tests 1/10/100 years in the future should be a standard feature of test runners and CI systems. (on by default)
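You can get surprisingly far today with libfaketime, without touching the system clock. (A sketch, assuming a Unix-ish CI box with libfaketime installed and a test entry point like this one.)

    # run the whole suite as if it were ten years from now
    faketime '2028-03-07 12:00:00' ./run_tests.sh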
I work with time a lot and have always advocated running practical simulations (especially over year changes, leap years, leap days, etc.) with the junior engineer I mentor, as well as with the hardware company that partners with us. It's only recently, after we got bitten by a time-based bug, that people have started listening.
That's the nature of the beast. I have to be the noisy guy about testing and also be the guy who doesn't say "I told you so," but instead continues pushing testing.
I mean, they would have noticed when a test actually runs into a problem, but yeah, it's not nearly as visible as something actually going wrong in production.
The kicker of course is that if you set the date to something in the future, you may have a pile of other services fail before your code is even run: basically any /other/ services that happen over TLS may fail (correctly, as their certs have expired ;) ).
I had a bunch of tests fail at the start of this year because someone had hard coded 2017 into the tests.
Fortunately it was a problem with the tests, rather than the code itself, but these things do happen.
At my old job, we had a bunch of tests fail when daylight saving ticked over. For some reason, some things were using local time, rather than UTC. We also had a test that would fail if the minute was the same as the hour.
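The usual fix is to keep everything timezone-aware UTC internally; in Python, for example:

    from datetime import datetime, timezone

    stamp = datetime.now()              # brittle: naive local time, shifts with DST
    stamp = datetime.now(timezone.utc)  # robust: timezone-aware UTC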
I borrow the office Rift every couple of months to play around for a weekend and see how the field is progressing. Unfortunately what I've mostly seen is a bunch of regressions, technical and ux, as they update their platform.
The new Home 2.0 and especially Dash are leaps and bounds above what Home was like before, I think. You do have to enable it (it's still in beta), but I think it'll be really nice once it's finally released.
I had to leave the beta channel to get the device functional after a recent update. Some kind of error loop about "a recent update has not finished installing."
It sounds like the same expired certificate is also used to sign their autoupdater's exe, so they can't just roll out an update using a new certificate.
I'm -constantly- seeing 'certificate expired' in my browser. This certificate stuff is so hard that they can't pay some Chief Certificate Officer $15/hr to -do nothing else- but ensure that stuff is renewed in a timely fashion?
We furry 'self-reproducing' (YMMV) mammals are simply not ready for all of this.
This seems to be a somewhat common type of problem. I wonder if companies should routinely test on machines with the clock set one year into the future to catch them before they hit customers.
I think you'd run into other problems then, for example if your test machine needs to communicate with https sites powered by letsencrypt, all those sites will appear to use certs that "expired" at least 9 months ago.
It's a mess. At one point we had a backup domain controller that had been incorrectly set up as a time server, and was out of sync with the rest of the world, with a slight amount of drift. Our test servers would randomly end up syncing time from that server and wind up slightly off. When the time got slightly more than five or ten minutes off, connections (over TLS) from those boxes to our Lync IM servers would start failing, and weirdness would ensue. Reboot the box, or sometimes just sign in and out, and things would straighten out for a while. Very spooky.
This was all years ago, so my recollection may be fuzzy, but I spent entirely too much time futzing with SIP traces and certs. Weird, weird things can result from time inconsistencies is my takeaway, however.
It's not even necessary to test. Once you've done it a few times, codesigning is a piece of cake. But there are a few flags you absolutely must pay attention to, or else it's going to bite you in the ass way down the line.
Yes, but the test should be "does this key have a valid timestamp?", not spinning up a VM and setting the clock 10 years in the future.
BTW, most commercial installer programs will apply a valid timestamp if codesigning is enabled. So to save ~$1000, someone decided the tools built into Visual Studio were good enough. Anyone who ships commercial software that does more than a basic install into C:\Program Files will know to spend the money; it's worth it.
The last time I listened to a vendor and turned off my anti-virus to install something, this happened: [Flight Sim Company Embeds Malware to Steal Pirates’ Passwords] https://news.ycombinator.com/item?id=16418837
You are likely to annoy many of your customers even further.
Including the word 'may' shifts the sentence from apologising for causing actual inconvenience, to apologising for causing a minor risk of possible inconvenience, which is not what you are trying to convey after selling someone something for a lot of money and then remotely breaking it at no notice.
In many customers, it is likely to elicit a response something along the lines of:
"Any inconvenience this may be causing? I'll give them may be causing. The fucking thing won't boot. May be fucking causing. I wasn't using the damn thing as a doorstop."
Has anyone got a good way of managing certificates in the wild? With no real management and staff turnover I've seen a bunch of expired certificate problems.
EDIT: presumably you need your client apps/libraries in the field to write back when they use a cert that is <X months away from expiry.
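A client-side check like that is only a few lines; e.g., a Python sketch (the hostname and threshold are placeholders):

    import socket, ssl
    from datetime import datetime, timedelta

    # Does this host's TLS certificate expire within `days` days?
    def cert_expires_soon(host, port=443, days=90):
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(socket.create_connection((host, port)),
                             server_hostname=host) as s:
            cert = s.getpeercert()
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return not_after - datetime.utcnow() < timedelta(days=days)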
I'd say someone very high up and "tied to the company", probably the CTO, should make sure a signing certificate is renewed when needed and rotated every time it's about to hit expiration. For a company as big as Oculus, with the backing of Facebook, this is a pretty big issue.
I think the CTO's role here would probably be to make sure a process or team is in place to do this, but not to actually do it themselves? (For a company the size of Oculus.)
I use simple Nagios checks for keeping an eye on certificate expiration. It's simple to set up checks for new hosts/services and I have them set to trigger an e-mail alert 30 days before expiration (20 days for certificates from Let's Encrypt). It does the job; I have yet to wake up one day to an expired certificate.
Apparently, this ("send me an e-mail before my certificate expires") is also the sole reason some companies even exist (i.e., it is their only product/service). It amazes me that this is something folks will pay for.
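For the curious, the Nagios-side check is basically one plugin flag (example host; warns when the cert has fewer than 30 days left):

    check_http -H www.example.com -C 30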
What happens when you leave for a different job or get hit by a bus? More than one cert has expired because the guy who was maintaining them moved on and nobody else knew how to keep them up to date.
Obviously it should be part of a handoff process when you leave, but companies aren't always good at smoothly handling transitions like this.
I've been hit by a bus! Well, a Jeep Cherokee, technically, but I was on a motorcycle so it felt like a bus. I was off work for ~4 months -- couldn't even walk for the first three and could barely feed myself. We managed, though. We got by and didn't have any SHTF moments in my absence.
What happens if I leave, though? I don't know and, TBH, I won't really care; it won't be my problem at that point. How -- if -- things are handed off or transferred off to someone else when I leave will be up to my boss, I suppose.
Never trust somebody else to make sure your certificate is renewed.
Case in point: even Azure had a huge outage due to cert issues (albeit quite a bit more complicated than a simple expiration, but my point here is that certificates are hard).
Rotation due to expiring keys should be frequent enough to pretty much require automated methods to handle the changes. (One of the many great things about Let's Encrypt.)
If it’s a much longer time scale, people start to forget that it’s even possible for stuff to expire.
If my fridge filter can display a little reminder light on a timer every few months, cryptography-dependent devices might need something similar. That way, your customers could know in advance and be asking you for an update.
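On the server side, Let's Encrypt turns that reminder light into a cron line (a sketch; certbot only actually renews certs that are close to expiry):

    # attempt twice a day; renewals only happen near expiry
    0 */12 * * * certbot renew --quiet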
In 2091, an overworked developer will accidentally let the certificate expire for the Planetary Shield Defense Matrix, and the Zylorts will finally conquer Earth.
OK. The issue arose because the expired certificate wasn't countersigned by a timestamp server.
So many comments agree that (a) security is hard, (b) countersigning with a timestamp server is easy to miss, (c) countersigning makes build processes difficult, and (d) they've done or seen similar things in other apps/companies.
This sounds like a classic UI/UX issue for developers around a literally mandated and mission-critical requirement of the OS.
At the least, MS should provide a validation tool to surface errors or risks before production. Better, signtool.exe should make omissions (like skipping the timestamp server) difficult, treating them as an explicit override rather than the default. Best, they would do both.
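For what it's worth, signtool can already surface the omission after the fact: a verbose verify prints the countersignature timestamp, if one is present (sketch):

    signtool verify /pa /v driver.sys

But nothing in the default signing flow warns you that it's missing.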
I don't agree that the OS should reject non-timestamped signatures as faulty per se (and throw an error), as that puts the burden on the user to understand a developer's mistake. Sometimes running without a timestamp may be desirable - ultimately that's the dev's choice.
It's not phoning home (or at least, if it is, that's not the issue here). The cert used to sign the actual binaries expired, and Oculus signed the binaries in a half-assed fashion that tells Windows not to run the code with an expired cert. What they should have done is timestamp[0] the signature when it was created; since the binaries were signed before the cert expired, nothing should've broken. This is one of those cases that required a perfect storm of multiple mistakes/oversights.
So you're saying Rifts and Windows 10 drivers do not work offline? That basically Windows 10 will be functional only while Microsoft keeps the update servers on?
Edit: I don't follow Windows, I'm really curious what the consequences for stuff like this can be generally.
No. The certificate in question is a code signing certificate, not a TLS certificate. Oculus incorrectly shipped an untimestamped signature here; if the signature had been timestamped, the Rift driver would keep working offline forever.
Oculus says you will receive $15 store credit if you used Oculus between Feb 1st and when it went kaput.
I don't see the credit on my Oculus account. Am I supposed to have received it already? Or is it maybe because I don't have a payment method added to my account?
At the core of the issue, yeah, they just need to publish the same driver with a different signature.
It looks like their auto-updater used the same cert though, so they can't distribute it as a normal update. They're probably figuring out the least sketchy/most automated way to distribute it right now.
When this is all said and done, there will be a handful of people who will never, ever forget to use the /t flag in signtool.
There's still a chance of human error here, as it's very difficult to automate issuing new certificates (unless you're using Let's Encrypt, which they are not).
Something similar just happened to me. I have a Windows computer I only use for gaming. After the last update, my Samsung display is no longer usable: it has a polarized effect now, but only when using the Windows computer. The computer works fine connected to another brand of monitor. So much money, yet Windows still sucks at the most basic things.
It seems like the commenters here are still shrugging it off. At least Facebook and the Oculus division are still around to fix the issue; imagine if this were a company that was now defunct, which Oculus could easily have been if Facebook hadn't purchased them. Your hardware would now be bricked and you would have no recourse, because no one would be left to create a new certificate. Or imagine if the Oculus 2 were out and they decided they no longer wished to support the old one: this is the ultimate vehicle of planned obsolescence.
It's not just people shrugging it off, many are defending this as being a perfectly fine state of affairs.
Besides the fact that you should be concerned about whether the controlling company goes out of business or sells your data, here stands yet another reason to never trust devices that require an internet connection to activate in the first place, or that phone home periodically to remain active.
This includes phones, cars, self-driving cars, watches, farm equipment, computing devices and anything marketed as an IoT appliance.
One glitch, as minor as an improper system time, and you’re dead in the water.
The problem here is that kernel drivers have to be signed, and drivers will stop working if the signature expires because the vendor didn't use a timestamp server during the signing process. The drivers were clearly intended to keep working, so I assume this happened by accident.
The big question is why on earth drivers that have been verified and are already installed on your system can suddenly stop working. If this mechanism is intended to protect against malware disguised as drivers, then it's already too late; the malware had several years to exploit your system.
Expiration after installation simply doesn't make sense for code signing. The signed executable won't change unlike a website. The driver is always going to have the same file hash, forever.
Expiration after installation makes sense from the perspective of planned obsolescence, and in anticipation of long-term-support sunsets.
It makes absolutely no sense to the end user, as the possessor and potential reseller of an object, since the very premise implies that an owner should not be given total control over their device, that it's never really "theirs", and that a vendor should retain the capacity to take a "sold good" away from the owner, under the guise of expected, as-designed behavior, effectively converting a sale into a rental over time, perhaps after statutes expire.
It's effectively a back door for manufacturers, so that they can count on well-made products not lasting forever, not in museums, not for resale, not for nostalgia.