> Getting at logs in something like azure functions is a great example of this.
This is the least of the problems I've experienced with Azure Functions. You'd have to try very hard to NOT end up with useful logs in Application Insights if you use any of the standard Functions project templates. I'm wondering how this went wrong for you?
What's confusing is that it's not just a small e-reader. It has a microphone, SD card, the Google Play store, and normal smartphone apps. Everything except cell data / SIM card.
I've often wondered why there aren't devices with all network operations bundled into a removable module; that way you could serve both people who want some level of disconnection and those who want a more thorough disconnect.
Not sure why they fail in the marketplace. Maybe look at other similar market failures, like power tools with interchangeable heads? I suspect the sticker price of the individual modules scares consumers.
Yeah, good point. The iPod touch was also kind of awkward as a product, but it made more sense as a transitional device back when smartphone penetration wasn't fully ubiquitous.
In 2025, the Palma seems like feature creep, since I wouldn't expect smartphone apps to appeal to anyone looking for less distraction. That's how most people use tablets, which are dedicated distraction devices stuck on wifi.
These are extremely obviously different. A company that has to take specific measures to prevent the suicide of its workers should invite a much different level of scrutiny than the fact that a massive bridge available to millions of people is used to commit suicide.
The company had just shy of a million people in it at the time, about the entire population of South Dakota (which had 139 suicides that year), or 121% of the population of San Francisco (32 people jumped specifically from the Golden Gate Bridge in 2010, Foxconn's worst year[0], and that doesn't count any of the other suicides in SF that year, just jumps from that one bridge). And it's nowhere near the only example of this in the USA.
This university had three students jump to their deaths between 2003 and 2009, out of about 26k students, compared to 15 at Foxconn in its worst year out of 980,000 employees:
"""In late 2003, the library was the site of two suicides. In separate incidents, students jumped from the open-air crosswalks inside the library and fell to the stereogram-patterned marble floor below.
After the second suicide, the university installed Plexiglas barricades on each level and along the stairways to prevent further jumping. In 2009, a third student jumped to his death from the tenth floor, apparently scaling the plexiglas barricade.[7]
The library has since added floor-to-ceiling metal barriers to prevent any future suicide attempts. The barrier is made of randomly perforated aluminum screens that evoke the zeros and ones of a digital waterfall.[8]"""
2 out of 59,144 students would be equivalent to 33 out of the 980k Foxconn employees, double the number who actually jumped.
Why should a company face stricter scrutiny than, say, a public bridge? There are many reasons, of course, but specifically in the case of addressing suicide: if a bridge is being used to commit suicide, then perhaps the problems causing the suicides should be addressed instead of (or in addition to) just preventing the act itself.
It really doesn't have to be. Your standard zigbee smart lightbulb, for example, behaves exactly the same as a non-smart lightbulb: you flip a switch to turn it on and off. You also have the option of integrating it with automations and services fully under your control, even disconnected from the broader internet, if you like.
Building up a "smart" home where you have control of everything is radically different from off-the-shelf IoT crapware that turns into a paperweight when some company decides to pull the plug.
To do this in Home Assistant, you'd probably want to run Music Assistant and integrate it. It looks like they manage to support some streaming providers, though I'm not entirely sure how: https://music-assistant.io/music-providers/
This is one advantage of a system with a constrained set of commands/grammars, as opposed to the Alexa/Siri model of trying to process all arbitrary text while in active mode. It can simply ignore/discard any invocation that doesn't match those specific grammars (with no need to wait to confirm that the device is awake).
"Computer, turn lights to 50%" -> "turn lights to fifty percent" -> {action: "lights", value: 50}
"My new computer has a really beefy graphics card" -> "has a really beefy graphics card" -> {action: null}
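The two examples above can be sketched as a tiny matcher. This is just an illustrative toy under assumptions of mine (the grammar list, the "lights" action name, and digit-based numbers rather than spelled-out words are all made up), not how any real assistant implements it:

```python
import re

# Hypothetical constrained-command grammar: each entry maps a pattern to a
# structured action. A real system would have many more, plus number-word
# normalization ("fifty" -> 50).
GRAMMARS = [
    (re.compile(r"turn lights to (\d+) ?(?:percent|%)"),
     lambda m: {"action": "lights", "value": int(m.group(1))}),
]

def parse_utterance(text: str):
    """Return a structured action if the text matches a known grammar,
    otherwise None -- unmatched speech is simply discarded."""
    text = text.lower().strip()
    for pattern, build in GRAMMARS:
        m = pattern.search(text)
        if m:
            return build(m)
    return None  # no grammar matched: ignore it, nothing to confirm

print(parse_utterance("Computer, turn lights to 50%"))
# {'action': 'lights', 'value': 50}
print(parse_utterance("My new computer has a really beefy graphics card"))
# None
```

The key property is that the second utterance falls through every grammar and is dropped on the spot, rather than being shipped off for open-ended interpretation.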
I have a coworker who set up an Alexa a year or so ago. I don't know what the issue was, but it would jump into his Teams meetings after every noise in his house.
Sure, if the system is set up to only respond to very specific commands that humans would not respond to, I guess that could work. I was thinking more about the other way around, where a person might speak to someone else in the room and be overheard and acted upon - "turn on the lights!" could be a command for the computer controlling the room, or the human standing next to the Christmas tree, for example.
I’ve never had Alexa control a device via a TV show’s audio, but playing back a video of me testing my home automation (“Alexa, do X”) triggered my lights.
I’d love a no-wake-word world where something locally was always chewing on what you said but I’m not sure how well it would work in practice.
I think it would only take one or two instances of it hearing “Hey, who turned off the lights?” in a show and turning off my lights for real (and scaring the crap out of me). Doctor Who isn’t particularly scary, but if I were watching Silence in the Library and that line turned off my lights, I’d be spooked, and it would take me a hot minute to realize what happened.
The nerve center would be your Home Assistant instance, which is not this device. You can run Home Assistant on whatever hardware you like, including options sold by Nabu Casa.
This device provides the microphone, speaker, and WiFi to do wake-word detection, capture your input, send it off to your HA instance, and reply to you with HA’s processed response. Whether your HA instance phones out to the internet to produce the response is up to you and how you’ve configured it.