
As a developer, I was shocked when, at a friend's house, I saw that Alexa operated not only lights but also door locks and blinds. Or when another friend takes his hands off the steering wheel to test lane keep assist.

I don't think I can trust any software at that level.



The biggest tech enthusiasts I know are tech workers, though. It's the normies in my life that couldn't care less about this stuff. I've been a developer for 30 years and I love my home automation toys.

But I do make sure that all of it fails gracefully to 'normal' when something interrupts HA. My software can (and does, every day) lock the door. But the deadbolt still has a key on the outside and a standard twist knob on the inside. All the light switches work just fine without their wireless connection. The thermostat is indistinguishable from an old-school Honeywell if you ignore the Z-Wave logo. Etc.

I do keep my hands on the wheel, though ;-). The whole 'dying in a ball of twisted metal' failure mode is pretty convincing.


> But I do make sure that all of it fails gracefully to 'normal' when something interrupts HA.

This is why interest and trust don't have to go together. Sometimes the people who are most adventurous are simply the ones who are comfortable with a lower level of trust, because they can manage their risk.

I'm comfortable playing with new or niche technology because I'm able to keep it low stakes. My most tech-phobic family members, on the other hand, act like they have complete faith in the software they use even when they say they have zero faith. They don't have the savoir-faire to do things that limit their risk, like withholding sensitive data and backing up important data in different places, so they resent technology as chronically treacherous and agonize over whether they can "trust" individual pieces of tech in an absolute yes/no way.


I wouldn't mind blinds so much, but it'd have to be via a button; I dislike talking on a good day, let alone to a machine.

That said, my lights are operated via RF remotes (two systems, one working on plugs, the other from IKEA), which is pretty convenient without needing an internet connection. Just an awkwardly sized battery.


Why not just automatic? Maybe plus a button for privacy/needing darkness for a film or whatever depending on the room, but mostly I just want curtains & blinds to open in the morning and close in the evening.


Because they provide privacy in addition to light blocking. I don't have a 100% reliable schedule on when I'm going to need my windows to be blocked.


At least with Alexa, you don’t need to talk to a device to manipulate smart-home devices. The mobile app has controls in it for you to use.


Pretty sure that's the case with all voice-assisted smart home tech


I agree with the IoT stuff, would never trust or want that...

Taking your hands off the steering wheel? It depends on where, but on a straight or gently curving highway with no significant traffic around, sure. Why not try how well it works? Just be ready to grab the wheel and correct if needed.

I even test how well my car holds a straight line under similar conditions...


I presume the Alexa argument is against the service and not against automation. Running your whole life on self-hosted home automation is still awesome and the way to go. Just ensure that data never leaves your home premises and that you have enough resiliency.


My problem was not with Alexa specifically; it was a general feeling of "if software controls both locks and blinds, it can lock you not only outside but inside as well". Of course, I was immediately told that there's a physical workaround in case anything goes wrong.

I cited this example not as something that is objectively bad, but as something subjective, more about my gut reaction.

Funny thing is, both my friend and I have electrical engineering educations. I went on to build software for a living; he more or less moved to sales/management. His job is to hype tech. My job is to make the sausage. There's a popular meme which I heard some time after the anecdote above:

Tech enthusiasts: My entire house is smart.

Tech workers: The only piece of technology in my house is a printer and I keep a gun next to it so I can shoot it if it makes a noise I don't recognize.


I would say trusting any kind of software with door locks(!) is a terrible idea. Yes, if you use home automation you should self host. No, if you secure your home you should not needlessly expand the attack surface.


In what little experience I have with burglars, they aren't exactly the type who will be hacking your smart locks to get into your house. They're just going to break the door or window.

I like being able to check the status of my door locks remotely, and lock them if necessary. I like being able to make sure all locks and garage doors are secure automatically before bedtime. I'm not at all worried about someone sitting out in front of my house hacking the Z-Wave lock. By the time they figure it out, the neighbors will have already called the police anyway.
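In Home Assistant terms, the bedtime check is just a handful of REST calls. A rough sketch (the host, token, and entity IDs below are placeholders for whatever your own setup uses):

  # Sketch of a "make sure everything is secure before bed" script
  # against Home Assistant's REST API.
  import requests

  HA = "http://homeassistant.local:8123/api"
  HEADERS = {"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"}

  def state(entity_id):
      return requests.get(f"{HA}/states/{entity_id}", headers=HEADERS).json()["state"]

  for lock in ["lock.front_door", "lock.back_door"]:
      if state(lock) != "locked":
          # lock.lock is the standard HA service for Z-Wave (and other) locks
          requests.post(f"{HA}/services/lock/lock", headers=HEADERS,
                        json={"entity_id": lock})

  for door in ["cover.garage_door"]:
      if state(door) == "open":
          requests.post(f"{HA}/services/cover/close_cover", headers=HEADERS,
                        json={"entity_id": door})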

Connecting all my door locks (and garage doors too) to HA has measurably increased the security of my home.


Unless you have a door specifically designed to withstand burglary, a crowbar will always be an easier door hack than figuring out a backdoor to your automation.

There is no perfect security, only risk profiles. I would argue the risk profile of door lock automation is low enough not to be worthy of concern in most cases.


> Unless you have a door specifically designed to withstand burglary, a crowbar will always be an easier door hack than figuring out a backdoor to your automation.

If your home is targeted specifically, yes. However, if a group hacks a cloud lock provider and then hits several homes using that, without having to do something that looks suspicious like physically breaking the door, then it could be a lot safer and thus more profitable for them.


And so they what, rob a few houses which statistically probably won't be yours before their hack gets discovered? Sounds like way more effort than just breaking the door or picking the lock.


I use a "smart" lock, even with BT/wifi/GPS based automatic unlocking which seems to freak my fiancee out. I see it as stealing my phone is not different from stealing my key. The more sinister ability is to unlock remotely via the "cloud" which I am still a bit shocked that it just seems to work, and no complaints AFAIK from insurance companies. I just happen to know that locks aren't really a deterrent if you really want to get inside a house....

On the other hand, in reality I am not going to be targeted by competent hackers (who would own me at once), but rather a crude gang of misfits roaming around for an easily burglarised home.


> On the other hand, in reality I am not going to be targeted by competent hackers (who would own me at once), but rather a crude gang of misfits roaming around for an easily burglarised home.

The threat isn't a competent hacker targeting you directly.

The threat is a competent hacker targeting the lock's vendor, getting their whole database, then selling address, entry code and occupancy times for $20 per home on the darknet.


It boils down to assessing the threat level and the risk level.

Threat: hacker owning the vendor and selling info on the darknet. Threat likelihood: 1%.

Risk: all valuables being stolen. Risk value: high if you keep a few kilograms of gold in your home, but it basically depends on your possessions.

When we were burgled in our sleep a few years back, the perp entered through a window. It was the last window we had not yet replaced with a more secure one. It took him around 15 seconds to get in. He left with the DSLR, a TV, clothes, a few valuables lying around, and my SO's purse. And because her car keys traditionally sat by the door on the ground level, he could grab them, load everything into her car, and leave.

Nowadays our purses and keys never stay downstairs when we go to sleep.

I think the risk profile for traditional break ins is way higher than the risk from a smart lock (not that we have any).
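To put toy numbers on that (every figure below is invented, purely to show the likelihood-times-impact arithmetic):

  # Toy expected-annual-loss comparison; all numbers are made up.
  p_vendor_hack = 0.01    # chance per year the vendor is owned AND your home gets hit
  p_forced_entry = 0.05   # chance per year of a plain old window/door break-in
  loss = 10_000           # rough value of what a burglar could carry out

  print("smart-lock channel:", p_vendor_hack * loss)     # 100.0 per year
  print("forced-entry channel:", p_forced_entry * loss)  # 500.0 per year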


But then what? Somebody still has to take the risk of actually burgling your house, getting past the dog, alarm systems, lights, neighbors, etc.

And if they're going to do that, it's quite likely easier to pop a window lock than it is to buy a darknet database of API keys.


Schlage doesn't know where I live, though, nor do they get any status updates from my home assistant. It isn't software that is the problem you're referring to, it's involving a third party unnecessarily.


Maybe for your smart lock - but sandos has "GPS based automatic unlocking" so his lock vendor probably knows exactly where he lives.


That doesn't require your lock vendor to know anything about your location. HomeKit, for example, can do this with your bridge doing the work, and any cloud-required data being e2e encrypted.
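The geofence decision itself is trivial to compute locally, which is the whole point: the only thing that ever needs to leave the house is an encrypted lock/unlock command. A sketch (the home coordinates and radius are made up):

  # Purely local geofence check: the phone/bridge decides "am I home?"
  # itself, so no vendor ever sees your coordinates.
  from math import radians, sin, cos, asin, sqrt

  HOME = (59.3293, 18.0686)  # stored only on-device
  RADIUS_M = 50              # "home" means within 50 meters

  def haversine_m(a, b):
      # great-circle distance in meters between two (lat, lon) points
      lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
      h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
      return 2 * 6_371_000 * asin(sqrt(h))

  def is_home(fix):
      return haversine_m(fix, HOME) <= RADIUS_M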


> then selling address, entry code and occupancy times

What locks are you referring to where locally configured entry codes are stored in a cloud DB? That's not the case for any of the ones I've installed.


That you are savvy enough to know this suggests that "locks you've installed" might not be a representative sample.


> I am not going to be targeted by [...]

This reminds me strongly of an ex-employer of mine, whose response to a security vulnerability was inevitably "our customers won't do that". It's one thing to have a threat model, it is quite another to dismiss low-hanging fruit (and trust me, whatever "smart" lock you have is very low-hanging) as not a big concern.


You should probably update your threat model.


Why, is it out of date? Am I missing the latest security updates?


My generic answer (not being the guy who replied to you) is to look at probabilities. AFAIK in most places theft from homes is opportunistic. Garage doors left open, for example, are extremely common. Thieves just drive around looking for them; it's such a reliable occurrence. Other times, they just try the door and walk in if it's unlocked and they think nobody is home.

I.e. automating your door locks may very well increase your actual security rather than reduce it, if it causes your doors to be locked more often than you'd remember otherwise.
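The classic version of that is an auto-relock rule: if the door has sat closed but unlocked for a few minutes, lock it. A sketch, where get_door_state and lock_door are stand-ins for whatever your hub actually exposes:

  import time

  GRACE = 300  # seconds the door may stay closed-but-unlocked

  def get_door_state():
      # placeholder: query your hub; returns (closed, locked)
      return True, False

  def lock_door():
      # placeholder: issue the lock command through your hub
      print("locking door")

  unlocked_since = None
  while True:
      closed, locked = get_door_state()
      if closed and not locked:
          unlocked_since = unlocked_since or time.monotonic()
          if time.monotonic() - unlocked_since > GRACE:
              lock_door()
              unlocked_since = None
      else:
          unlocked_since = None
      time.sleep(10)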


Why would you be even a little bit concerned over software door locks?

Most traditional locks can be picked open in seconds. There's no meaningful way to do any worse than that. The only fit-for-purpose way to use locks is in keeping honest people honest. They are utter rubbish at keeping bad guys out of your house.


I use a voice assistant for lighting control; it's amazing when I'm walking in with my hands full, or when I'm lying down and want the lights off.

I occasionally use it for the weather, but it's annoying with the suggestions it tacks on.

I should know better but in this case the convenience is worth it.


It's not just about trusting the software, but also the implicit policy around its implementation and use, and the even more implicit social norms around what it is expected to do, which develop and evolve in interesting ways as products and services entrench further and further into the woodwork.

Say an attacker captures enough of a homeowner's voiceprint to build a voice model good enough to get past Google's or Amazon's voiceprint matching, then pipes "<hotword>, open the front door" through a wall or window (possibly ultrasonically).

What happens now?

Who is to blame?

Is it the homeowner's fault for using a product according to the use cases included in the instructions?

"Oh, no, that was just a serving suggestion! You shouldn't use <product> to lock your doors unless the doors were an effectively unnecessary decoration to begin with."

So then you have Buy N' Large and Big Brother getting into the home and contents insurance industry.

No. No you don't. The only reason this use-case got cleared by Legal is because someone figured out the circle of plausible deniability (weaponized portmanteau of "circle of competence") is currently wide enough to fit a lawsuit defence through. :(

It's like with Tesla and the whole self-driving thing. The implementation *happens* to be able to drive asleep "drivers" down highways at 100 miles an hour, and the company has somehow been able to corral itself into the evil-genius legal position of being 50% "always keep hands on the wheel" and 50% "look self-driving AI go brrrt" and somehow combine the liability protections of the former with the societal expectations of the latter. Solely in terms of overpromising and underdelivering and then wrangling the legal status quo to declare that okay, I do think it's a bit... not great.

Crucially, this problem is most definitely not limited to Tesla, Google or Amazon; it's an endemic permeability in the legal systems of ostensibly-developed countries that makes a joke of the proof-of-work foundation of good-faith technical investment, and institutes a fraudulent social contract that the infrastructure we increasingly depend on is fundamentally reliable and trustworthy ("set in stone") when it is not. To adopt these technologies is to adopt the ever-changing tech stacks that underpin them, and the risks associated with upgrading the proverbial engines on ever-larger digital airplanes while they're in flight. I think it's dangerous and irresponsible to dispel the tenuous fabric of social perception around these risks and give unsuspecting passersby the impression things are safer than they are.

Hmm.

I'm reminded of http://bash.org/?4753:

  <xterm> The problem with America is stupidity. I'm not saying there should be a capital punishment for stupidity, but why don't we just take the safety labels off of everything and let the problem solve itself?

Maybe that's what's happening here?



