Hacker News | z3ugma's comments

This is one of the things their latest clinical trials are pursuing: whether the side effect of insomnia ends up outweighing the benefit of preventing the apnea.

It's exciting to see open-source sleep models catch up with the industry. In case you're interested in a deep dive, a lot of commercial software like this falls under the FDA medical device product codes "MNR" and "OLZ":

https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfTPLC/tp...

https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfTPLC/tp...


EnsoData | healthcare, sleep, machine learning | Hybrid or Remote (Madison, WI) | Product Manager - Software Medical Devices SaaS | Full Time | $110k-$130k + bonus, stock options | https://apply.workable.com/ensodata/j/5FD6175FEB/

Sleep is vital to recharging mentally, restoring our bodies physically, and fighting off disease. Sleep is so core to our health that it's the activity we all spend most of our life doing. Yet sleep disorders are some of the most underdiagnosed health conditions on the planet, with an estimated 80% of obstructive sleep apnea patients undiagnosed and untreated.

EnsoData uses machine learning, integration with pulse oximeter hardware, mobile apps, and clinician review UIs to automate sleep study scoring and give sleep clinicians tools to find and help patients with sleep disorders.

We're hiring a product manager to uncover what features our customers crave most, conduct discovery research in the sleep industry, define and prioritize product requirements, partner with engineers to ship features, and work with marketers to tell compelling product stories. You'll manage the product development process from whiteboard mockups tested with real users through to detailed specs and launch, breaking down ambiguous problems into releasable slices that bring AI-powered sleep diagnosis to more patients.


and yet:

When you ask an AI like ChatGPT a question, what is it actually doing?

Survey of 2,301 American adults (August 1-6, 2025)

- Looking up the exact answer in a database: 45%

- Predicting what words come next based on learned patterns: 28%

- Running a script full of prewritten chat responses: 21%

- Having a human in the background write an answer: 6%

Source: Searchlight Institute

most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm


> most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm

Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?

I don't understand how PFAS [1] work, but I know I don't want them in my drinking water.

[1] https://www.niehs.nih.gov/health/topics/agents/pfc


> Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?

Because otherwise you might not actually attribute the harm you're seeing to the right thing. Lots of people in the US think current problems are left/right or socialist/authoritarian, while it's obviously a class issue. But if you're unable to take a step back and see things clearly, you'll misattribute the reasons why you're suffering.

I walked around on this earth for decades thinking Teflon is a really harmful material, until this year, for some reason, I learned that Teflon is actually a very inert polymer that doesn't react with anything in our bodies. I've avoided Teflon pans and such just because of my misunderstanding of whether this thing is dangerous to my body or not. Sure, this is a relatively trivial example, but I'm sure your imagination can see how this concept has broader implications.


I'm fond of pointing out that in the 1980s, people raised the same kinds of alarms about databases.


You seem to be raising this as a "just so" kind of argument ad absurdum, but we have extant examples of databases and information technology enabling villainy like oppression and genocide by making correlations easier to track, making tracking more efficient, and making it less cost-prohibitive.


It's a pity we never regulated MySQL. The good we could have done!


Honestly, to me that is starting to sound like a very, very good idea. Regulate what you can store, how you store it, how you modify it, who can access it, how access is controlled, what sort of trail should be left, and how mistakes can be corrected; require that those whose information is stored can get a full log of actions taken on data relating to them.

Sounds like over-regulation to many. But it is pretty clear companies and developers have failed. So maybe strict regulation is needed.


We absolutely should, some companies cannot ever be trusted with certain information. There is no reason why companies like Meta or Google should be entrusted with so much user data. The government should force a divestment from it and allow the public to own it (which should include public job guarantees that allow the public to maintain said data) or allow for smaller companies to be the handlers of such data.

Google, Meta, and the rest of big tech have proven they should never be trusted.


It is a pity we never regulated the consumer surveillance industry out of existence.

See, the original question isn't really about the technology per se. Rather it's about how it will be used. Do you have confidence in the track record (and trajectory) of our current regulatory approach when it comes to reining in the scaling up of novel types of harm?

The way I see it, the American approach has been to simply write off those who end up on the business end of the technological chainsaws as losers, and tell them they should have tried harder to be on the other side doing the damage. So why would we think "AI" will be any different?


Citation needed


None of these answers are correct btw.


This is a fallacious argument. You don't need to understand the inner workings of a thing to see examples of harm and evaluate that harm as bad. For example, you don't need to understand how electric motors differ from internal combustion engines to understand that a mishandled car can very easily kill multiple people.


The problem, following your analogy, is seeing the consequences from the mishandled car but blaming the electric motor, in this case.


It also neglects that car companies purposely made cars extremely unsafe while chasing profits.

The only reason we have any regulations and safety standards for cars is because of one person leading the charge: Ralph Nader. You know what companies like Ford, GM, and Chrysler tried to do after he released "Unsafe at Any Speed"? Smear his name in a public campaign that backfired.

Car companies had to be dragged kicking and screaming to include basic features like seatbelts, airbags, and crumple zones.


No surprise, many of these are fawning, big visions for AI infrastructure. I would love to see success from some of the less ambitious ideas that have a real chance of landing in the next 18 months.

The "creative tools go multimodal" strikes me as one of those


Really excited for this, it might be the model that gets me to fully commit to a Starbook


Making the Nest 2nd gen thermostat that Google recently bricked compatible with local setups like Home Assistant.

I'm involved in 3 projects that are solving this problem from different angles:

https://sett.homes/

https://github.com/codykociemba/NoLongerEvil-Thermostat

https://github.com/cuckoo-nest


I'm reasonably excited about the prospect of this replacement firmware being authored.

I think that putting MQTT on it would be an important step toward local control and connecting it to Home Assistant.
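For a sense of what that integration would look like: Home Assistant can auto-discover MQTT climate devices from a retained config message. A minimal sketch of building that discovery payload — the topic names and device IDs below are hypothetical examples I've made up, not anything from an actual Nest firmware project:

```python
import json

# Home Assistant's MQTT discovery convention looks for a retained JSON
# config on homeassistant/climate/<object_id>/config. Everything below
# (object_id, topic names) is a hypothetical illustration.
DISCOVERY_TOPIC = "homeassistant/climate/nest_2g/config"

config = {
    "name": "Nest 2nd Gen (local)",
    "unique_id": "nest_2g_local",
    # Topics the replacement firmware would publish to / subscribe on:
    "current_temperature_topic": "nest_2g/current_temp",
    "temperature_state_topic": "nest_2g/target_temp",
    "temperature_command_topic": "nest_2g/target_temp/set",
    "mode_state_topic": "nest_2g/mode",
    "mode_command_topic": "nest_2g/mode/set",
    "modes": ["off", "heat", "cool"],
    "temperature_unit": "F",
}

# This payload would be published retained to DISCOVERY_TOPIC with any
# MQTT client; after that, Home Assistant creates the climate entity.
payload = json.dumps(config)
print(payload)
```

The nice property of this approach is that the thermostat never needs a cloud endpoint at all: the broker runs on the LAN, and Home Assistant picks the device up automatically.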


After you flash the exploit and SSH into the thermostat, you can see it at https://github.com/codykociemba/NoLongerEvil-Thermostat/issu...

It's a boot script called /bin/nolongerevil.sh that supplies its own trust material and redirects traffic intended for frontdoor.nest.com to a hard-coded IP, 15.204.110.215. 99.9% of this image is the original copyrighted Nest image; maybe it's enough for the bounty, though. And I suppose you could change that IP to point at a local server: if you wanted to publish the server-side Nest API discovered through Wireshark, you could just stand up your own HTTP REST server.
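Standing up such a local stand-in is cheap to prototype with Python's stdlib. A minimal sketch — the route and response shape here are hypothetical placeholders, since the real endpoints would be whatever the Wireshark captures reveal:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LocalNestHandler(BaseHTTPRequestHandler):
    """Toy local stand-in for the Nest cloud endpoint.

    Any GET path gets a JSON acknowledgement; a real replacement server
    would implement the routes observed in the Wireshark captures.
    """

    def do_GET(self):
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def serve(port=8080):
    # Blocks forever, answering thermostat traffic redirected to this host.
    HTTPServer(("0.0.0.0", port), LocalNestHandler).serve_forever()
```

With the boot script's redirect pointed at the machine running `serve()`, the thermostat's cloud calls land here instead, which is exactly the "change that IP to a local server" idea.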


Presumably it's the reverse-engineered server that has most of the work put into it, and one would hope that's what will be released if the developer decides to.


I'm working on https://sett.homes which is in this spirit. Instead of an Atmel it's an ESP32.

It uses MQTT over Wi-Fi, as you requested :)

