
It's more complex than a display; it's a display plus a collection of USB sensors and some low-level hooks into display management. This requires kernel-mode drivers, for good technical reasons, where a normal monitor wouldn't.


That's a reasonable answer to why the drivers need to be signed in order to be installed. That's not the question. The question is why should the already-installed drivers that you've been trusting all along suddenly stop working.


Apparently, they signed them incorrectly.


That puts the blame on Oculus, but the blame really should rest on Microsoft for enabling and enforcing a signature mode that shouldn't exist at all.


It needs to exist in an untrusted environment: it verifies that the driver, which has low-level access to the computer, hasn't been modified by a third party. MS-signed system binaries are the same way; it's a safeguard against malicious entities.


I'm not arguing against the entire concept of driver signing, just one specific nuance of Microsoft's driver trust model. There is a place for driver signing and signature checking, but de-trusting a driver that you were perfectly happy to give kernel-level access yesterday doesn't make sense.


It's still Oculus' fault because they didn't use a timestamped signature.

A timestamped signature on the binary would have kept it working, and that's how MS intends it to work. You can leave it out if you have a desperate longing for your software to break suddenly and without warning, like Oculus' just did.
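To illustrate the rule being described, here's a minimal sketch in Python (the function and parameter names are made up for illustration, not actual Windows APIs): with a timestamp countersignature, validity is judged at signing time, so the signature outlives the certificate; without one, validity is judged at verification time.

```python
from datetime import datetime

# Hypothetical sketch of the check Windows effectively applies to a
# driver's Authenticode signature (names are illustrative only).
def signature_trusted(cert_not_after, verify_time, signing_timestamp=None):
    # With a trusted timestamp, validity is judged at signing time;
    # without one, it's judged at verification time.
    effective_time = signing_timestamp if signing_timestamp else verify_time
    return effective_time <= cert_not_after

expiry = datetime(2018, 3, 5)   # certificate's Not After date
today  = datetime(2018, 3, 8)   # certificate has since expired
signed = datetime(2017, 6, 1)   # driver was signed while cert was valid

print(signature_trusted(expiry, today, signed))  # timestamped: True
print(signature_trusted(expiry, today))          # untimestamped: False
```

Under this model, the same signed binary flips from trusted to untrusted the moment the certificate expires, which is exactly what the thread is describing.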


And that's where MS is at fault: drivers without timestamped signatures should be treated as faulty. This would prevent these errors in the first place.


I guess I can see reasons why some companies may want to be able to produce time-limited drivers:

Maybe they want beta versions to stop working, forcing users to upgrade.

For offline computers, it might be that some companies would see this as a way to enforce contract periods (customers would have to install an update to continue using the product when their contract is renewed).

Of course, disabling driver signature verification is still a way to bypass that, but often the companies that do things like this probably aren't thinking about that.


Maybe the API should then explicitly ask for a 'timestamp-none' in case the driver needs to be time-limited, forcing the developer to at least think about it.


Do you have any reason that is good from the point of view of the computer owner? (You know, the one sending money to Microsoft.)


Then they can write it into the driver: "Stop working on Jan 1, 2019".


There are usecases for signatures without timestamp.

Besides, literally every code-signing blog post/tutorial/guide I found tells you to use a timestamping server, so the guys and girls at Oculus must have skipped the critical parts of whatever they used.


What is a legitimate use case for a binary deliverable without a timestamped signature?


As you might have guessed, when you don't want someone to use a binary beyond a certain date.

Security solutions could benefit from this: the customer will have to update, or disable the signature check, if their version of the solution becomes too old. Old versions could open them up to vulnerabilities.

Another might be when you distribute beta or testing versions of your software. The customers can safely test the version and the lack of timestamp prevents them from running it in production permanently. They have to update to the release version.

It could also be useful when you sell software to a business and want them to test it first. So you send them the program without a timestamped signature and limit the validity of the certificate. That way they can't just run the test version forever.

Really, anywhere that all parties involved, user and producer, agree a binary shouldn't run forever, but the producer doesn't fully trust the user to stop on their own.


The customer can always re-sign the binary if they want to and replace the existing signature. A time limit inside the program code would be more secure.


For what it's worth, I think this is a very sane perspective.


Does that really not sound ridiculous to you? Microsoft needs to be blamed for their certificate validation implementation because people might use it to make their software expire? Instead of just writing code that does so?


Microsoft's driver signing model has a mode that is a giant footgun with no redeeming value. Oculus is a victim of Microsoft's bad design. They weren't trying to build in a self-destruct timer for their whole product stack, and if they were, they wouldn't have used the driver signing certificate as the lynchpin.


I find this reasoning ("footgun", Microsoft's fault) interesting when compared to the prevalent HN opinions when it comes to, for example, (unsecured) redis and memcached servers being used in DDoS attacks, or even AWS S3 buckets (with confidential or even highly classified files) being -- inadvertently -- left wide open to the public.

In those cases, "we" (as a "community", in general) often blame the people responsible for running those services instead of the developers (or Amazon) being blamed for choosing convenience/ease-of-use over security. That is, we're often quick to say that the people running those wide open memcached servers are at fault for not properly configuring and/or securing them -- and not blaming the developers for creating "a giant footgun".

"You shouldn't be running servers on the Internet if you don't know how to properly configure them" (paraphrasing) is often stated. Yet, in this case, we're not blaming Oculus for their screwup and instead blaming Microsoft -- even though there's zero evidence (AFAIK) that Oculus even used any Microsoft tools to sign their application. (N.B.: I don't know the first thing about code signing on Windows so it may well be that using a Microsoft utility is required and, thus, just assumed by those of you who are familiar with the process. If that's the case, sorry.)

I'm having trouble trying to reconcile these two seemingly opposing viewpoints. Why is Microsoft's utility "a giant footgun", while a memcached server (for example) that ships out of the box completely insecure and wide open by default isn't?


There's a use case for redis and memcached being open to the network, and a failure mode if you don't properly separate your internal network from the public Internet. There's a use case for S3 buckets that are publicly readable, if they don't contain sensitive/private information. Those features have reason to exist, even though there's potential for misuse. Secure defaults would be nice, but can't eliminate these risks.

There's no reason for drivers to have an expiration date. There's no scenario where it makes sense for the configuration that Oculus stumbled into to be possible.


> There's no reason for drivers to have an expiration date

If you can license software with a definite expiration date, why can't you license hardware with a definite expiration date? And have your license enforced by the operating system? Imagine that I'm a company with a hardware product, and instead of selling that hardware at large expense, I rent it out, and provide drivers with an expiration date to enforce the terms of the hardware lease. If the lease is renewed, I'll provide new drivers with a new lease expiration.

Not that I'm arguing for hardware licensing, or arguing that it was what Oculus was trying to achieve and screwed up somehow. But there's a difference between "Microsoft built a feature some of their customers didn't know how to use" and "Microsoft built an anti-feature".


The driver signing system is not an effective way to implement an expiration date, if that is your goal. Driver signing enforcement can be disabled rather easily by skilled users. Licensing restrictions written into the code of the driver itself are harder to bypass. It also does not seem at all likely that Microsoft intended for the driver signing system to be usable as a time-based DRM mechanism like this.
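To make the distinction concrete, here's an illustrative sketch (in Python, with hypothetical names) of the alternative the comment above describes: an expiry enforced by the driver's own code, rather than by the signing certificate's Not After date.

```python
from datetime import date

# Hypothetical lease expiry, hard-coded into the driver's own logic
# rather than inherited from the signing certificate's validity window.
LEASE_END = date(2019, 1, 1)

def driver_should_load(today):
    # Refuse to operate once the lease period has ended.
    return today < LEASE_END

print(driver_should_load(date(2018, 3, 8)))  # True: lease still active
print(driver_should_load(date(2019, 6, 1)))  # False: lease expired
```

Unlike a certificate-expiry side effect, this check survives even if the user disables driver signature enforcement, and it fails in a way the vendor actually designed and can communicate to the user.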


Really? No reason whatsoever?

I'm not being facetious, FWIW. I know a fair amount about PKI, in general (probably in more depth and intricate detail than the average HN'er, actually), but I'm not a developer and I know very little about code signing in particular (and even less when it comes to the Microsoft world of code signing).

I do find it kinda hard to believe that there's no use case whatsoever for this particular configuration (code signing w/o an included timestamp from a TTP), though. I certainly understand why a timestamp can be valuable (as it would in this case) but what isn't clear is that there is "no scenario" whatsoever where the lack of a timestamp might be acceptable or perhaps even desired.

As I said, though, I don't know enough about code signing specifically to know what these scenarios might be but I can't imagine there isn't even one of them.


> but what isn't clear is that there is "no scenario" whatsoever where the lack of a timestamp might be acceptable or perhaps even desired.

I can readily imagine a scenario where a driver with a signature but no separately attested timestamp should be acceptable. What I cannot imagine is a scenario where it is useful to treat a driver signed in such manner the way Windows currently treats the driver.


> I'm having trouble trying to reconcile these two seemingly opposing viewpoints.

I mean this with all respect... but why? You're talking about different opinions expressed by completely different people. HN isn't a monolith.


Sure, there are plenty of different opinions here on HN. If there wasn't, these discussion threads would be boring and useless.

My point was: in most threads, there's a common opinion or viewpoint shared by most, along with a few "detractors". In general, though, the overwhelming "predominant" opinions (within/on a particular subject) are pretty consistent from one thread to the next.

For example, the "it's the end user's fault, not the developers" thing I mentioned earlier. That seems to be, pretty consistently, the "belief of the majority". Here, though, it's the complete opposite. Instead of saying "the end user (Oculus) screwed up" (which, IMO, they certainly did, FWIW), it's "Microsoft made a footgun which caused this".

That said, I have now made it through the rest of the comments in this thread and it seems that this viewpoint isn't as widespread as it first appeared. Perhaps I just jumped to a conclusion much too quickly; there's obviously plenty of fingers pointing at Oculus as well.


Interesting. I would think that opening unsecured services on to the Internet at large is a big no-no; and that whoever sets the default-allow is the one who's setting the trap here. Yes, the admin should inspect any installation for traps, but that's, as you note, secondary to "don't ship software which has a highly convenient trap set up". Most software traditionally exposed to the Internet did manage to do that, back in the early 2000s (by shipping default config `interfaces=lo` or somesuch), nobody should get a free pass on that, MS or not.


You can't guard against every imaginable stupidity. You try, but users will inevitably find a hole.


That's not how APIs work. You don't eliminate stupidity by thinking of each and every way the user could screw up. You simplify your API so there aren't so many ways to use it in the first place.


So instead of trying to conform to the x.509 spec MS should have just developed their own certificate validation scheme, because that would totally be less of a "footgun" than conforming to the spec.

Am I getting this right?

Why aren't we blaming the people behind RFC5280, after all it was them who came up with this awful idea that certificates should expire.

>giant footgun

oh dear god how are you generating your certificates? This is not a footgun unless you are doing something immeasurably stupid before even involving MS products.

Besides, if you insist on going ahead and setting the Not After field, wouldn't it be a bigger footgun to ignore that?


FWIW, there are basically no common implementations that fully conform to the x.509 spec. That thing is a bundle of unimplemented features.


Hence "trying to" :)

However, I'd argue that disregarding the Validity section would be an unusually big departure from the spec, not comparable to the typical silliness surrounding x.509.


How about we stop blaming people who accidentally pressed figurative "system self-destruct" button and start asking why there are so many of those buttons everywhere? Nowadays this is a recurring theme. Simple mistakes leading to catastrophic failures at grand scale. "Just be more careful" doesn't cut it anymore, because in the software world there are just too many things to be careful about.


Because when you edit code, basically every character you type has the potential to become a big red button. It's not that developers add them for fun. And each time you develop new functionality, you have to figure out how to build a very solid glass box around the red buttons you just created. The default is none.

Alternatively, you could propose no user-configurable functionality whatsoever, rarely and carefully upgrade any dependencies (including the OS), and you will create a very robust program.

But then don't expect to end up with a wireless VR headset with an online game catalog and multiplayer capability. You can, however, end up with a very nice banking application developed in COBOL.


You can't possibly make this argument in the context of driver development. Everything is a self-destruct button.


Why should the Rift fail when a certificate expires?


I imagine someone went to a site like this:

https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...

read the command for signing their code, and signed their code as instructed.

Today, the certificate they signed a driver with expired, and because the signature wasn't timestamped it means Windows can't know if the driver was signed with the certificate after it expired, so the signature is now treated as expired as well, so Windows doesn't trust the driver.

Why wasn't it timestamped? Probably because instructions like the link above treat that as a separate subject to signing your code, and when you sign your code it looks and works like it's fully correctly signed.
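In concrete terms, the difference is one flag. A hedged sketch of the two invocations (the certificate file and timestamp URL here are placeholders; the flags are from Microsoft's signtool documentation):

```bat
:: Signs, but the signature becomes invalid when the certificate expires:
signtool sign /f mycert.pfx /fd SHA256 driver.sys

:: Signs with an RFC 3161 timestamp, so the signature outlives the cert:
signtool sign /f mycert.pfx /fd SHA256 /tr http://timestamp.example.com /td SHA256 driver.sys
```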

or, as wtallis puts it (https://news.ycombinator.com/item?id=16542204), someone left a foot-gun lying around that didn't have much value except to cause incidents like this.

...and if your own company makes Windows apps, go check they are timestamped ;)


Your same link also explains how to sign a file with a timestamp, and contains a link on how to add a timestamp after the fact. It doesn't pretend to know what the best practice for your specific use of the signing tool is.

But you can also find places where Microsoft does talk about when you should be timestamping: https://blogs.msdn.microsoft.com/ieinternals/2011/03/22/ever...


> Your same link also explains how to sign a file with a timestamp as well

Sure, further down it explains how to use signtool to timestamp something, but why would someone trying to get an app signed care about using the tool for timestamping?

> where Microsoft does talk about when you should be timestamping

If someone finds that article first and reads to the bottom, it explains that timestamping is related and important, and the occasions when you should be timestamping amount to "you should definitely do this". So perhaps a design that isn't a foot-gun would have "do this" as the default, with a --force option and alarmist warnings for anybody who has a reason to have their signed executables expire one day.

Design oversights and user mistakes like these will happen, but it doesn't mean influences and causes can't be identified and improved.

(most of the builds at the company I work for had not been timestamped either)


Any reason why most of the sensors couldn't be handled by generic HID drivers, which even have a dedicated VR page?



