Hacker News | Msurrow's comments

I have no knowledge of this kind of software dev/hw production, so can you please explain why the units can't just be born with a default password and then have the setup process (which is always there) force the owner to set a new one?

Knowledge or not, this...

> It's not impossible, it's just extra work that usually goes unrewarded.

... is just not an acceptable way for a business to think and operate in 2026, especially not when it comes to internet-connected, video-enabled devices.


I'll answer your question with a question: how often do you see people complaining about needing setup processes vs the old way of just plug and play? There's no perfect answer that placates all sides. Things can certainly be better, but when those people win and you no longer need to have a setup process, then what?

While true that in $current_year it would be nice if things were more secure, the sad truth is that most people don't care.


I agree that yes, most just want PnP and basically don’t care about security. But it seemed from the posts above that there was some engineering complexity, and a robot vacuum needs local WiFi, so there will be a setup flow. What’s preventing password selection from just being part of that?
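
The check being asked for here is tiny in practice. A minimal sketch (hypothetical names and rules, not any vendor's actual firmware) of a first-boot flow that refuses to finish setup until the factory default is replaced:

```python
# Hypothetical first-boot check: the device ships with a factory-default
# password but will not leave setup mode until the owner replaces it.
FACTORY_DEFAULT = "admin"

class SetupError(ValueError):
    """Raised when the owner's chosen password is unacceptable."""

def choose_password(new_password: str) -> str:
    # Reject the factory default outright, so no unit leaves setup
    # still answering to the well-known credential.
    if new_password == FACTORY_DEFAULT:
        raise SetupError("new password must differ from the factory default")
    # An arbitrary illustrative minimum; real policies vary.
    if len(new_password) < 8:
        raise SetupError("password must be at least 8 characters")
    return new_password
```

Because setup cannot complete until `choose_password` returns, a shipped default would never survive past first boot.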

> a robot vacuum needs local WiFi

No, it doesn't. Unless it's supposed to spy on you (or "harvest training data") there's no reason it needs to phone home at all (c.f. Roombas).


Well it needs to talk to either a web frontend (internet) or app (bluetooth or wifi). If you're worried about it spying, well, the app could always relay data for it.

Anyway, regardless of WiFi, Bluetooth, or something else, there will be a setup process.


You're begging the question. Why does it need to talk to a web front end or app? Why does any appliance need this? (I know they all claim to need it, but it isn't at all clear why this (supposedly) needs to be the case.)

For that matter, I'm unclear why there needs to be a setup process. I understand that this may be key to the vendor's business model, but that's their need, not something the product needs, and certainly nothing I need.


I'm not begging the question although I am implicitly assuming that the vast majority of consumers will want to control a robot vacuum via their phone. I suppose including a touchscreen on the unit itself is not entirely unreasonable but I expect that would be an uphill battle for various disparate reasons (expense, durability, and ease of use at minimum).

Once you introduce control via phone, the most straightforward approach is either WiFi or Bluetooth, which requires a setup process.


I am shocked, really. I think this is actually the law in China.

This is just people working 24/7 for 50 dollars a month? Because we want cheap shit.

Yeah, I was thinking the same thing. I wonder if the author didn’t know that the passport chip == fingerprint.

And FP is a much worse modality to have registered because, as opposed to a face image, a fingerprint is not affected by age. So it will match you 99.999999% forever. Faces change.


I naively assumed fingerprints were trivial to change, but on further reading they are a remarkable biomarker.

That is exactly why [more] regulation is necessary!

Regulation is not done with the purpose of preventing companies from making profits. It is done because companies cannot be expected to act in society’s best interest, so society has to make demands of companies, i.e. regulation.


But doesn’t your argument that the principal risk [with SSH] is vulnerabilities also apply to the alternatives you say are best practice? Firewalling off SSH (but not HTTP(S)) carries the risk of vulns in the firewall software. Tailscale, WireGuard etc. also carry the risk of vulns in that software?

So what’s the difference in risk of ssh software vulns and other software vulns?

Also, another point of view is that vulnerabilities are not very high on the risk ladder. Weak passwords, password reuse etc. are far greater risks. So, the alternatives to SSH you suggest are all reliant on passwords, but SSH, in this case, is based on secure keys and no passwords. Should “best practices” not include this perspective?


Good defense is layered.

For vulnerabilities, complexity usually equals surface area. WireGuard was created with simplicity in mind.

>So, the alternatives to SSH you suggest are all reliant on passwords, but SSH, in this case, is based on secure keys and no passwords.

WireGuard is key-based. I highly suggest reading its whitepaper:

https://www.wireguard.com/papers/wireguard.pdf


Sure, no one said it wasn’t layered.

But saying SSH is a risk “on principle” due to possible vulnerabilities, and then implying that if WireGuard is used then that risk isn’t there, is wrong. WireGuard, and any other software, has the same vuln risk “on principle”.

> For vulnerabilities, complexity usually equals surface area. WireGuard was created with simplicity in mind.

That is such consultant distraction-speak. Simple software can have plenty of vulns, and complex software can be well tested. WireGuard being “created with simplicity in mind” doesn’t make it a better alternative to SSH, since it doesn’t mean SSH wasn’t created with simplicity in mind.

I don’t disagree that adding a VPN layer is an extra layer of security, which can be good. But that does not make SSH bad and VPN good. Further, they serve two different purposes, so it’s comparing apples to oranges in the first place.


>That is such consultant distraction-speak.

Or how large companies actually think about this risk in the real world. Expose SSH ports to the public internet willy-nilly and count the seconds until their ops and security teams come knocking wondering what the heck. YMMV of course, but that's generally how it goes.

Are critical SSH vulns few and far between, as far as anyone knows? Yes.

Do large companies want to protect against APT-style threats with nation-state level resources? Yep.

Does seeing hundreds if not thousands of failed login attempts a day directly on their infrastructure maybe worry some people, for that reason? Yup.

You call it consultant distraction speak, I call it educating you about what Wireguard actually is, because in your original reply you suggested it was password-based.

>Further, they serve two different purposes so its comparing Apples to oranges in the first place.

Not when both can be used to protect authentication flows.

One is chatty and handshakes with unauthenticated requests, also yielding a server version number. The other simply doesn't reply and stays silent.
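
The "chatty" half of that is easy to observe for yourself: an SSH server volunteers an identification string (including a version number, per RFC 4253) to any unauthenticated client, while a WireGuard endpoint sends nothing until it receives a packet authenticated with a known peer key. A small sketch of the banner grab, using only the Python standard library (`grab_ssh_banner` is a hypothetical helper name):

```python
import socket

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Return the identification string an SSH server sends to any client
    that connects, before any authentication, e.g. 'SSH-2.0-OpenSSH_9.6'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        # The server speaks first; one read is enough for the banner line.
        return s.recv(256).decode("ascii", errors="replace").strip()
```

Pointed at a WireGuard UDP port, the equivalent probe simply times out: nothing answers an unauthenticated packet.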

>Simple software can have plenty vulns, and complex software can be well tested.

In this case, both are among some of the most highly audited pieces of software on the planet.


I’m calling it consultant speak because your response to an argument is to bring up something else, instead of actually responding.

The same with this last reply; you can keep throwing out new points all you want, but that’s not going to make you correct on the original question.

Saying or implying that one software has a “principle” risk of vulnerabilities that another software doesn’t is plain and simply wrong.

And that has nothing to do with all the other stuff about layered defence, VPNs, enterprise security, chatty protocols or whatever else you want to pile on the discussion.


Your question was this:

>So what’s the difference in risk of ssh software vulns and other software vulns?

I proceeded to explain how large companies think about the issue and what their rationale is for not exposing SSH endpoints to the public internet. On the technical side, I compared SSH to WireGuard.

For that comparison, the chattiness of their respective protocols was directly relevant.

Likewise complexity: between two highly-audited pieces of software, the silent one that's vastly simpler tends to win from a security perspective.

All of those points seem highly relevant to your question.

>... but thats not going to make you correct in the original question.

If you can elucidate what I said that was incorrect, I'm all ears.


You are still implying that WireGuard is somehow different from SSH in its susceptibility to vulnerabilities existing or being introduced into its codebase. And it simply is not.

Edit: the codebases of SSH/WireGuard implementations, just to be clear


Yes, the two are very different in that regard.

WireGuard is 4k LoC and is very intentional about its choice of using a single, static crypto implementation to drastically reduce its complexity. Technically speaking, it has a lower attack surface for that reason.

That said, I've been on your side of the argument before, and practically speaking you can expose OpenSSH on the public internet with a proper key setup and almost certainly nothing will happen because it's a highly-audited, proven piece of software. Even though it's technically very complex.

But, that still doesn't mean it isn't best practice to avoid exposing it to the public internet. Especially when you can put things in front of it (such as WireGuard) that have a much lower technical complexity, and thus a reduced attack surface.
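
That arrangement can be sketched roughly as follows (illustrative addresses and placeholder keys, not a drop-in config): the only thing the public internet sees is a single silent UDP port, and sshd binds only to the tunnel address.

```ini
# /etc/wireguard/wg0.conf on the server (placeholder keys)
[Interface]
PrivateKey = <server-private-key>   ; generated with `wg genkey`
Address = 10.0.0.1/24
ListenPort = 51820                  ; the only exposed port; UDP, answers nothing unauthenticated

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32

; And in /etc/ssh/sshd_config, make SSH reachable only through the tunnel:
; ListenAddress 10.0.0.1
```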


No, they are not. It doesn’t matter how many LoC; it only takes 1 LoC to introduce a vulnerability.

Wireguard is a protocol. So what implementation is “very intentional about its choice of …”? Are you talking about my own WG client implementation? Or the one made by this other Chinese vendor?

I don’t care what software we are talking about, or who made it. All software carries the risk of undiscovered/undisclosed vulnerabilities already existing, or of new ones being introduced with an update.

If you really want to make this argument we can talk about the implementing organisations’ SDLCs, including the SW supply chain, and compare those.

But back to the OP/point above: it’s false to state that one piece of software has a “principle risk” of vulnerabilities that another piece does not. At least not when both are internet-exposed and accepting incoming data.

Lastly, remember that I never disagreed with your point that a VPN solution is often a better solution, but that was never what I was arguing about. Simply that all code always has a risk of vulnerabilities. No piece of software is exempt from that.


>No, they are not. Doesn’t matter how many LoC; it only take 1 LoC to introduce a vulnerability.

So according to you, the concept of attack surface doesn't exist. A 100MB binary is equivalent in risk to a 1KB binary. Got it.

If both are highly-audited, their risk is equal despite their size and protocol complexity. Got it.

>...it’s false to state that one piece of software has a “principle risk” of vulnerabilities that another piece does not.

That's like the third or fourth time you've scare-quoted the word principle. You're aware that principle and principal are two different words with different meanings?

The word I used, principal, in that context means the foremost or primary risk.

Anyways, I'm just telling you how major corporations think about it. Their underlying rationale is exactly what I've explained thus far, and hence why it's best practice.

Keep shooting the messenger I guess.


You can't.


Depends on environment variable P=NP


> If you don't want your LG TV quietly snooping on what you watch and using it to serve you ads, here's how to turn Live Plus off.

If LG makes money from snooping on you, what makes you think the “off” button actually turns it off? People have no way of verifying this.

To me this is the worst part of TVs (and cars, and fridges, and so on) even being allowed to have these features[1]: non-technical customers have no understanding that “smart” hardware is capable of doing whatever it wants - and hiding it from customers. You have no way of knowing what your “smart” thing is doing behind the scenes.

[1]: any feature that’s sending data back to company servers, meaning you lose control of your data. Features that are 100% on-device are not what I’m talking about.


The construction of some of these windmill farms started years ago. Before that, permits & legal work had been underway for a long time. This surely included security clearances.

The orange shrimp pulling the “national security” card now, on the same day as he also creates a new Greenland debacle, is very clearly simply an attempt to strong-arm the Danish govt into Greenland concessions (in turn simply to please his fragile little ego).


They were approved before the invasion of Ukraine and before our politicians could see how devastating drones can be. Just because the orange dictator did something does not mean it necessarily was wrong. Even a broken clock is right two times per day.


>"Even a broken clock is right two times per day."

That is incorrect. There are any number of ways in which a clock might be broken such that its hands are not in the correct position even once per day.


Not incorrect so much as underspecified?

The phrase more commonly starts with a ‘stopped’ clock, which works more clearly.


Should be “a stopped clock is right twice a day”


> dictator

Can we stop overusing this term? It has already lost it's significance. Every political leader you don't agree with is a dictator nowadays. What kind of shitty dictator he is anyways if he is being shut down by courts left and right, and has to shut down the government waiting for the Congress to approve budget? You do know that dictators don't give a fuck about courts and parliaments?


This reply doesn't address any core point.

When these wind farms were permitted many years ago, shipborne drones were not part of the threat matrix. It was considered purely hypothetical even a decade ago because it was not an imminent capability for any country even though e.g. the US DoD had studied it. In the last few years shipborne drones have emerged very quickly as a substantial practical threat, largely due to the Russia/Ukraine war. Governments around the world are struggling to adapt to this new reality because none of their naval systems are designed under this assumption.

Whether or not this is convenient for Trump doesn't take away from the reality of the security implications.


Yes, it does.

First of all: Occam's razor. Political theatrics seems simpler than the US defence/intelligence forces suddenly realizing that drones can be launched from ships. Esp. with the timing involved.

Second: established/traditional radar systems cannot spot drones. Take it from someone living in a country that recently had its airspace violated by (presumably) Russian drones, affecting national infrastructure. It was considered an attack at the time. I don’t think that’s the word we use any more, for political reasons.

Third: Trump already shut down one of these windmill farms once this year, until the Danish company building the park sued, got the court’s word that the shutdown was illegal, and resumed construction. The current shutdown has a much larger impact for many multi-national companies. Usually a political process is expected between allied countries before such a drastic move. We haven’t seen that, i.e. no attempt to solve a concrete (security) issue before punching the red button, probably because there was no motivation for a solution, i.e. the security issue was probably not an actual issue.

Fourth: earlier this week the Danish intelligence services released a new security assessment of the USA (one that takes Trump’s behaviour on the international scene into account). That probably hurt the little man’s ego, and now we see a retaliation. This provides yet another motivation for Trump’s action, besides factual, real security concerns.

Looking at this purely from the security aspect is naive, and fails to consider the context of the real world.


Before Ukraine, everyone thought drones were easy to counter. Now that has proven false.

Granted, Trump probably isn't thinking that, but the concern should be real. We need better drone defense before someone (Russia, Iran...) starts anonymously shooting down airplanes.


That's nonsense. Many countries have used drones before. (Starting with Nazi Germany during WW2.)


We have learned counters for them over the years.

Ukraine makes drones vastly cheaper than the current counters, and so we can be bankrupted by trying the current counters.


> We have learned counters for them over the years.

Using $1m-apiece missiles


Man, I got my rope out for this..


Yes. GDPR covers all handling of PII that a company does. And it’s sort of default-deny, meaning that a company is not allowed to handle (process and/or store) your data UNLESS it has a reason that makes it legal. This is where it becomes more blurry: figuring out if the company has a valid reason. Some are simple, e.g. if required by law => valid reason.

GDPR does not care how the data got “in the hands of” the company; the same rules apply. Another important thing is the principles of GDPR. They sort of underlie everything. One principle to consider here is that of data minimization. This basically means that IF you have a valid reason to handle an individual’s PII, you must limit the data points you handle to exactly what you need and not more.

So - company proxy breaking TLS and logging everything? Well, the company obviously has a valid reason to handle some employee data. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without a valid reason is not GDPR compliant.

Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.

Edit: Also, “automatic” or “implicit” consent is not valid. So the company cannot say something like “if you access private info on your work PC then you automatically consent to $company handling your data”. All consent must be specific, explicit and retractable.


What if your employer says “don’t access your health records on our machine”? If you put private health information in your Twitter bio, Twitter is not obligated to suddenly treat it as if they were collecting private health information. Otherwise every single user-provided field would be maximally radioactive under GDPR.


Many programmers tend to treat the legal system as if it was a computer program: if(form.is_public && form.contains(private_health_records)) move(form.owner, get_nearest_jail()); - but this is not how the legal system actually works. Not even in excessively-bureaucratic-and-wording-of-rules-based Germany.


Yeah, that’s my point. I don’t understand why the fact that you could access a bunch of personal data via your work laptop in express violation of the laptop owner’s wishes would mean that your company has the same responsibilities to protect it that your doctor’s office does. That’s definitely not how it works in general.


The legal default assumption seems to be that you can use your work laptop for personal things that don't interfere with your work. Because that's a normal thing people do.


I suspect they should say "this machine is not confidential" and have good reasons for that - you can't just impose extra restrictions on your employees just because you want to.

The law (as executed) will weigh the normal interest in employee privacy, versus your legitimate interest in doing whatever you want to do on their computers. Antivirus is probably okay, even if it involves TLS interception. Having a human watch all the traffic is probably not, even if you didn't have to intercept TLS. Unless you work for the BND (German Mossad) maybe? They'd have a good reason to watch traffic like a hawk. It's all about balancing and the law is never as clear-cut as programmers want, so we might as well get used to it being this way.


If the employer says so and I do so anyway, then that’s an employment issue. I still have to follow company rules. But the point is that the company needs to delete the collected data as soon as possible. They are still not allowed to store it.


I’ll give an example I’m more familiar with. In the US, HIPAA has a bunch of rules about how private health information can be handled by everyone in the supply chain, from doctors’ offices to medical record SaaS systems. But if I’m running a SaaS note-taking app and some doctor’s office puts PHI in there without an express contract with me saying they could, I’m not suddenly subject to enforcement. It all falls on them.

I’m trying to understand the GDPR equivalent of this, which seems to exist, since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).

