No matter how great security labeling may be, I fear the incentives are completely and utterly in the wrong place.
An individual consumer who purchases a poorly protected network device is unlikely to suffer any meaningful individual harm, like having their computer ransomwared.
Rather, it makes things like botnets possible that can be used for all sorts of things, e.g. DoS attacks against a third party.
So why should a consumer do anything but ignore the label? It's the rational choice if the less-secure product is cheaper.
If we want security standards, they need to be legislated democratically and applied to all devices -- not left up to consumer choice.
Now whether a legislature is capable of doing that effectively is certainly an open question. But I'm afraid labeling may be no more than an ineffective band-aid.
> An individual consumer who purchases a poorly protected network device is unlikely to suffer any meaningful individual harm
It opens the door to liability for companies who purchase insecure network devices. If your peers are buying good hardware while you're buying self-identifying garbage, someone harmed by a botnet running on your metal has a better argument, now, that you were knowingly reckless.
The only sales you can control are the ones that happen in your own country. I'm sure you can buy seat belts from Ali Baba at a fraction of the price, they'll probably be hilariously non-compliant to your country's safety standards, whether or not they work can be modeled by a fair dice roll, and I'm sure your insurer will deny any claims you make after installing them. But you can certainly buy them.
It's likely that if you literally fly out, buy them, pack them in a suitcase and fly home they'd make it, but if you try to buy a crate of obviously non-compliant Product X and it arrives at a port there's a reasonable chance somebody says "This Product X is non-compliant, so, why the hell is that here?" and you're not going to receive it.
You might think, well, surely they don't look in most crates. And they don't. They don't look in the forty identical crates of compliant seatbelts going to Ford, because why would Ford be like "Hey, let's order 39 crates of compliant ones, but order crate #40 non-compliant to kill a few customers as a joke"?
They're going to look in your crate because you never ordered any crates of seatbelts before, and "Bo Yang Belts" never sent anybody in your country a crate of anything before. Because their products aren't compliant to anybody's standards and so you're their first foreign sale.
But actually you may never even get to buy them. The huge first world economies like the EU and US order such enormous volumes of stuff and require compliance to their standards that it just often doesn't make sense to make Product A for them and then also Product B that's much worse but a bit cheaper for domestic use. I wouldn't like to guess if seatbelts are such a product.
Your answer seems logical, but this is a real problem; see this article about Amazon being repeatedly called out for selling deathtrap infant seats:
https://www.bbc.co.uk/news/technology-51497010. They really do exist and really do make it across the fairly strict borders in the UK regularly.
But if there are a hundred million compromised TVs, toasters, refrigerators, and thermostats, liability for those few enterprises is largely a moot point.
I don't understand what you're trying to say here. The fact that companies will now be liable means that even if only a single person is affected, there is clear liability. The kinds of offenses that aren't sued over right now, because the payoff is too low to cover the court costs, suddenly become perfectly viable class action suits for amounts in the hundreds of millions of dollars against single manufacturers.
That's a huge shift, and about as far from "moot" as you can get.
I think what is being discussed here is liability for companies purchasing insecure devices, rather than for the manufacturers of those devices.
It is reasonable to say that, even if companies are discouraged from purchasing insecure devices, that won't necessarily deter consumers from purchasing insecure devices for their households. The threat from devices in households is perhaps even greater than in businesses, if the number of households in question is great enough.
> If your peers are buying good hardware while you're buying self-identifying garbage, someone harmed by a botnet running on your metal has a better argument, now, that you were knowingly reckless.
If every piece of hardware has the same label, that argument dries up and blows away.
If some piece of hardware doesn't have the label and later gets owned, the manufacturer will be held accountable. It would have to be, or else this is toothless. Since no manufacturer can predict which vulnerabilities may be discovered, and since legal teams are a cowardly and superstitious lot, every manufacturer will put the label on now to avoid any potential problems later.
But if we're holding companies liable for dangerous products... shouldn't we be holding the manufacturers liable?
What's the point in holding companies which purchase products liable for the quality of those products? That's a step removed, for literally zero benefit I can see.
Just hold the manufacturers liable directly. In other words: standards, not labels.
The point, in general, of holding purchasers liable is traditionally maintenance: 'wear and tear' as opposed to defects.
To use a car analogy, if your car gets into an accident because the brake pads should have been replaced 10,000 miles ago, that is your fault. If it is because the brake pads disintegrate when they get wet, that is the manufacturer's fault.
These aren't cars, however, but it does bring to mind a hypothetical consistent set of standards involving patches. So if, say, the product was perfectly fine at launch on 32-bit platforms but has a bug when run on 64-bit platforms, it would become the user's problem.
It obviously wouldn't be a very good system: it isn't realistic in its expectations, nor easy to judge or administer given all of the nuances and fine details involved.
> An individual consumer who purchases a poorly protected network device is unlikely to suffer any meaningful individual harm, like having their computer ransomwared.
The number of stories I've read about poorly secured connected devices aimed at children is striking: flaws so basic that it would be trivially easy for an attacker to get the child's location and send them messages posing as a parent.
Individual consumers will be very concerned about devices that could potentially allow their child to be lured to some random location and attacked.
I agree that security holes can have externalized costs. But I also think labels can make a difference, especially scary ones with words like "security".
Look at anti-virus software. While some of it is legit, a lot of it is garbage that doesn't do anything. But people will happily buy (or subscribe) because it promises to improve security. And fear sells.
I don't know specifically what the label is going to look like. But let's suppose that the government set 5 years of security updates as the minimum standard. And suppose that if a company only promised to provide 3 years, their product would have to bear a label saying, "WARNING: Does not meet minimum required government computer security standards. May lack software updates that protect from hacking." I think that would discourage a lot of people from buying it.
And conversely, if meeting certain security standards allowed the manufacturer to label their product as officially scoring "Very Good" or "Excellent", they'd want to put that on their label. Manufacturers always like to maximize the number of good-sounding things on the box. To the point that they'll invent useless bullet items to fill the space if they can't think of anything else to say.
> If we want security standards, they need to be legislated democratically and applied to all devices -- not left up to consumer choice.
A more important piece of legislation would be to require governmental security agencies to inform companies of the security flaws in their products and to require the companies to fix them. Organisations like the NSA stockpile security flaws in secret in order to exploit the flaws for their own ends.
The WannaCry malware caused worldwide economic damage and was a direct result of the NSA losing control of its EternalBlue exploit. Had the NSA reported the flaw to Microsoft, it could have been fixed before it ever became a problem.
It's unacceptable that these organisations are permitted to act like cowboys with our common infrastructure. These are not messes I want to spend my days cleaning up.
>If we want security standards, they need to be legislated democratically and applied to all devices -- not left up to consumer choice.
I get where you are coming from, and forgive me for going all libertarian but... I have less than zero trust in governments (especially mine in the UK). They don't understand tech. They don't want or try to understand tech. They have zero interest in personal freedom or autonomy.
If the UK government did this, I'd go out of my way to find a "non secure" phone as anything they licensed would just have massive insecure backdoors and probably wouldn't actually work as a phone...
Sorry for the rant. I'd honestly like more security in my devices...
Isn’t this the healthcare argument but for security? Because it becomes an international problem when millions of EOL’d devices have a wormable flaw and can send enormous DDoS traffic stressing networks and taking sites offline?
Wouldn’t that be more of a problem if security is standardized though? If everyone has the same security, the same flaw makes everyone vulnerable. Multiple competing security types diversify the pool and prevent one flaw from causing all devices being susceptible to the same attack.
I fail to see how standardizing how long products are supported and how vulnerability reports are processed would cause everyone to have less security.
That goes both ways though. GM/Ford/Tesla aren’t allowed to sell Bob a “less safe” version of their cars to undercut their competition.
I suspect the long term answer here might look a lot like the auto industry. You won’t be allowed to sell network-connecting devices that don’t meet certain minimum security standards. Manufacturers will need to commit to a minimum security update period (like car manufacturers need to commit to spare parts availability - for at least 10 years after the sale here in .au), and purchasers will be required to accept responsibility for the device’s operation, some of which will be mitigable by insuring against it, but irresponsible use will not be covered by insurance and will become the owner’s responsibility. (Admin while Under Influence? Speed limits on pushing patches?)
I don’t see a clear path to that kind of regulatory control over $15 devices sourced directly out of China by ever-vanishing retailers/manufacturers though, and there’s a whole raft of genuinely useful use cases for inexpensive net connected hardware that’d be impossible or illegal if the sort of regulatory burden placed on car drivers was imposed on people with smart powerpoints or dash cams...
Isn’t that the same argument that can be applied to health? Vaccines, clean water, fluoride etc promote your well-being, and protect you from various bacteria and viruses.
Why shouldn’t you protect your “digital” self as well?
The internet, by its core design, allows anyone to send as much data as they like, with any content, while pretending to be anyone.
I don't think mandatory security requirements for webcams is going to do much about that...
Instead, we should be thinking about how packets can be source and destination signed, and how unsigned packets can be dropped in the network rather than clogging up their destination.
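To make the "signed packets" idea a bit more concrete, here's a minimal sketch (Python, using the cryptography package; the send/forward framing and the key distribution are made-up assumptions for illustration, not any existing protocol). The point is just that an intermediary holding a sender's public key could drop anything that doesn't verify:

    # Illustrative only: sign each payload so an intermediary can verify
    # the claimed source and drop anything that doesn't check out.
    # Assumes the Python "cryptography" package; key distribution is hand-waved.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    sender_key = Ed25519PrivateKey.generate()      # held by the legitimate sender
    sender_public = sender_key.public_key()        # known to the network/receiver

    def send(payload: bytes):
        """Attach a signature so the packet can be attributed to its source."""
        return payload, sender_key.sign(payload)

    def forward(payload: bytes, signature: bytes) -> bool:
        """A router/receiver drops packets that don't verify against a known sender."""
        try:
            sender_public.verify(signature, payload)
            return True    # deliver
        except InvalidSignature:
            return False   # drop

    packet, sig = send(b"hello")
    assert forward(packet, sig)          # legitimate traffic passes
    assert not forward(b"spoofed", sig)  # forged payload gets dropped

Obviously real source authentication at the network layer is far harder than this, but it shows where the trust would have to live.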
> A basic checklist of best practice for internal self-audit (SQL injection, plaintext data, enumeration attacks)
I think this is a massive ask/knowledge expectation for the average person. A simple warning label about changing the device password from the default would be a major step in the right direction for consumers.
The average consumer probably has no idea what a growth hormone is either, but it's all over food labeling. It might be enough if there is a label that security experts know and understand, that consumers can learn to say yes/no about without having to know what it really means.
- federated login support (i.e. login with Google/Facebook/etc buttons)
- some sort of indication of encryption in-flight and at-rest, and who handles the keys (e.g. is there a per-user key that tech support can't even access without user grant, or is there a single hard-coded AES key in the APK etc that everyone knows; a rough sketch of the difference is below)
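For what it's worth, here's a rough sketch of the two key-handling models I mean (Python, using the cryptography package; all names and parameters are illustrative assumptions, not anyone's actual implementation):

    # Sketch of the two key-handling models above; illustrative assumptions only.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    # Model 1: per-user key derived from a secret only the user holds.
    # Tech support can't decrypt this without the user's passphrase.
    def per_user_key(passphrase: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    salt = os.urandom(16)                    # stored next to the ciphertext
    key = per_user_key("correct horse battery staple", salt)
    user_blob = Fernet(key).encrypt(b"user data")

    # Model 2: one key baked into every copy of the app. Anyone who unpacks
    # the binary can decrypt every user's data.
    HARDCODED_KEY = Fernet.generate_key()    # in practice: a constant in the APK
    shared_blob = Fernet(HARDCODED_KEY).encrypt(b"user data")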
The 3rd one makes sense, but the first two are system questions rather than device questions. In an open system there may be multiple service providers whose security should be judged separately from the security of a device.
- All consumer internet-connected device passwords must be unique and not resettable to any universal factory setting
- Manufacturers of consumer IoT devices must provide a public point of contact so anyone can report a vulnerability and it will be acted on in a timely manner
- Manufacturers of consumer IoT devices must explicitly state the minimum length of time for which the device will receive security updates at the point of sale, either in store or online
I don’t see a time limit on that second point. For how long will companies be expected to act upon vulnerability reports? What’s a reasonable end of life?
My guess is that this is covered by the third point - if you EOL security patches for a device I am guessing you are no longer expected to act on vulnerability reports.
Mostly reasonable, but shortsighted it seems to me.
How do they expect to enforce these requirements on the manufacture of the IoT crap sold by the vendor “Best Security Happiness Store” on AliBaba, and the unnamed (or outright counterfeit named) Chinese manufacturer they bought it from?
And conversely, they could obviously easily apply this to the UK based Raspberry Pi foundation, but who’s responsible for enforcing the “no way to reset to a known factory password” requirement for the pi:raspberry login from a stock Raspbian install? (Or do we just hand wave that away and say “that’s not a consumer device, even though we’ve shipped over 30 million of them!”?)
I suspect the Raspberry Pi issue is avoided because technically, it doesn't do squat until you install software on it. You might get a preflashed Raspbian card in the box, but I could just as easily be running RiscOS
The question to me is: how do we avoid another FIPS-like disaster, where the government standards fall behind the times and lead to worse security than you'd otherwise get?
We can’t. That’s pretty much how government works in the best case. In the worst case we’ll get both mandatory worse security _and_ a rentseeking monopoly granted to donors and ex politicians to supply/enforce it as well.
I fail to see how this really improves anything for the average consumer. Government getting involved in this sort of thing just feels like more of the same TSA-style security theater nonsense. I'd prefer my network device manufacturers focus their efforts on the actual hard stuff rather than spending time and money getting certified for some bullshit box label.
They are extremely good at focusing on “the hard stuff”, of shaving tenths of a cent off production costs. To a first approximation, nobody cares about anything except price in the low end of gadgets.
Exactly. It's going to be a list of check-boxes that the manufacturer will do the bare minimum to meet. Or they twist their process and wording to make it look like they are meeting the requirement.
I often wonder why IOT devices aren’t regulated more analogous to cars, since the Internet is a bit analogous to a road system [0], i.e. a shared resource where mistakes and misbehaviour impact other participants.
A couple of car analogies might be, that car manufacturers are required to have cars repairable for x years, and that recalls to repair dangerous defects are mandatory. In the case of IOT, the recalls could just be mandatory updates.
Because technology progresses faster than laws and by the time the laws catch up there are already powerful corporations established based on the lack of those laws.
For example, it's an obvious public and environmental benefit to require that all phones have a user-replaceable battery, but until recently they almost all did, and now it's too late because every phone maker would lobby against it.
> Because technology progresses faster than laws and by the time the laws catch up there are already powerful corporations established based on the lack of those laws.
I see another aspect of this. Societies have allowed tech companies to run unregulated as a trade-off between safety and technological advance.
Medical equipment, cars, and planes are examples where regulations were put in place because safety failures have more dangerous consequences.
As devices are more ubiquitous and the economy and lives depend more on them, further regulation will be pushed forward.
> and now it's too late because every phone maker would lobby against it.
I agree that it will take political will to regulate the tech industry. But, in the same way that phone manufacturers do not want replaceable batteries, other industries will see their costs reduced by such a regulation. So there are also opposing forces that want big tech to play nicer with the rest of the industry ecosystem. And, in democratic countries, the population will also push for change as their lives are disrupted by the lack of regulation.
Wouldn't that also have precedent in the history of the automobile? It was pretty much a free-for-all while development was advancing fast, with large and powerful corporations dominating the industry and good chunks of the economy - and then a Ralph Nader[0] comes along, at the right place and the right time with the right tenacity, and things change.
> The idea is that similar to how bluetooth and wifi labels help consumers feel confident their products will work with these wireless communication protocols, a Security label will instill confidence in consumers that their device is safe and secure according to standards.
I would like a warning label if the device requires an internet connection for normal operation or features that don't really need it, so I can decide not to buy it if the requirement is unreasonable.
This is a good point to remind citizens to keep an eye on the Government consultations that come out from time to time - at least in the UK, we all have the opportunity to contribute to this type of regulation through responding to the relevant consultations.
I got competition6155.primeluck2.live redirecting to mobile-app-market-here1.info redirecting to updatelive.yourultimatesafevideoplayer.info. Which is obviously a malware download.
Fun stuff. Gives me tons of confidence that TrustableTech can be trusted to certify device security globally. Trusted Technology Mark? To me this will mean "unsafe".
> Both the United Kingdom and Singapore have aligned their IoT security plans and programs with the draft European Standard EN 303 645 ‘Cyber Security for Consumer Internet of Things’.
It's a start of sorts; I seriously hope it develops into a wider set of reasonable policies and practices. The UK gov't does a lot right when it comes to IT and security, but it also gets a lot wrong — I'm hoping this develops sensibly.
I think a reasonable basic set of requirements would be the following:
- There is no non-free firmware or other software on the device.
- The consumer is provided full source code to the software and can effectively replace the preinstalled version with a version they have compiled themselves.
- The manufacturer provides updated versions of any software or firmware (again, including full source code) to patch any discovered security vulnerability for the expected life of the device: at least three years for most devices, but perhaps as long as 30 to 60 years for some devices. This lifetime is disclosed.
- The device does not transmit any personally identifiable information back to the manufacturer in its default configuration; for example, audio recordings, power usage measurements, accelerometer readings, temperature readings, or customer login names or account numbers.
Unfortunately, I don't think such requirements are viable in the current political situation. That doesn't change the fact that any device that fails to comply with them introduces a serious security vulnerability: there is no way for the users to defend themselves against malicious actors who penetrate the manufacturer. The Dieselgate scandal and the Huawei prohibition are only the mildest taste of what we are in for.
Of course it is not practical for every person to audit the source code of the firmware for every TV remote control and power brick they use, but it is possible for people to organize consumer watchdog agencies that do perform such audits.
Definitely not our intention to make it hard to read! Now that it's been pointed out, we've changed the text color on the blogs. Hopefully that will make it easier to read for everybody.
I would even read it, were it not for the light grey text on a white background. I'm declaring a personal vendetta against visual design decisions that ignore any common sense.
I presume the idea is that your Apple Foozle is safe, and so is this Famous Brand Foozle and this Obviously Rebadged Generic Foozle that's half the price of the Apple product, but the foozle your mate got from the geezer who used to get him pirate DVDs doesn't have the sticker. No surprise when your mate gets ransomware a few months later. They saw him coming.
I'll kindly say this: while it might be technically bikeshedding, accessibility on the web is important and it only gets better when we call it out, respectfully, every chance we get. OP should have chosen better words, but the sentiment is valid.
It's very likely that as a high ranking HN article the owners of mender.io will read these comments and improve their blog. I don't have sight accessibility issues and I struggled to read this content.
I would be totally fine with the comment section being split in two, the main section being on-topic and a bottom section for related but less core discussion. I doubt many on HN would like the idea though.
I would, as long as both sections are on the same page. Or even three sections: on topic, tangents, on medium - in that order (or even fourth section at the end, meta).
Point being, I love me the occasional rant about a page's bad design decisions, or some vaguely on-topic meta angles. I want to read them all, but preferably in order, and not mixed up together.
On-topic / Meta would simply be enough. Like you I get some value from the occasional side discussion. I can appreciate why people would want to keep the main discussion on topic though, I just think it would be best to keep the meta discussion and just section it off.