
I'm curious when this starts to matter. I would guess most normal, non-tech people just don't ask much of these voice devices right now: mostly countdown timers, the weather, playing a song, and maybe a few other things. Not because the devices can't do more, but because customers aren't expecting them to.



Well, even for the more "simple" things, federated learning can improve the experience: lower power usage, less latency, and in some cases improved privacy.
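To make the privacy angle concrete, here's a minimal sketch of federated averaging (FedAvg), the core idea behind federated learning. Each device trains on its own private data and only model parameters leave the device, never the raw audio. The model here is just a weight vector for a toy linear fit; all names are illustrative, not any vendor's actual API.

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a device's private (x, y) pairs
    for a simple linear model y ~ w.x with squared-error loss."""
    grad = [0.0] * len(weights)
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def federated_average(global_weights, device_datasets):
    """Each device updates locally; the server averages the resulting
    weights. Only weights cross the network, not the training data."""
    updates = [local_update(global_weights, d) for d in device_datasets]
    return [sum(ws) / len(updates) for ws in zip(*updates)]
```

Real deployments add secure aggregation, differential privacy, and client sampling on top of this basic loop, but the data-stays-on-device property comes from this structure.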

And some kind of "smart" parsing of voice is still required. I don't like having to say things in a specific way to get the machine to understand them, and as the "smart" stuff gets smarter, it gets easier to use even for simple tasks.

Before, I could ask "what's the weather" and get a generic answer; now I can ask "what's the weather tonight", "what time will it rain", "will it be really cold tomorrow morning", or even "what temperature is it in here" (asking specifically for my indoor temperature).
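A toy sketch of the kind of intent parsing those examples need. Production assistants use trained NLU models rather than regexes, and the intent names here are made up, but it shows why "what's the weather tonight" has to be handled differently from a bare "what's the weather":

```python
import re

# Hypothetical rule-based intent matcher for weather-style queries.
# Each entry maps a pattern to (intent_name, capture group holding the
# time slot, or None). Earlier, more specific patterns win.
PATTERNS = [
    (re.compile(r"\bwhat temperature is it in here\b"),
     ("indoor_temperature", None)),
    (re.compile(r"\bwhat time will it (rain|snow)\b"),
     ("precipitation_time", None)),
    (re.compile(r"\b(?:cold|hot|warm)\b.*\b(tonight|tomorrow(?: morning)?)\b"),
     ("temperature_forecast", 1)),
    (re.compile(r"\bweather\b.*\b(tonight|tomorrow)\b"),
     ("weather_forecast", 1)),
    (re.compile(r"\bweather\b"),
     ("weather_current", None)),
]

def parse_intent(utterance):
    """Return (intent, time_slot) for an utterance, or ("unknown", None)."""
    text = utterance.lower()
    for pattern, (intent, slot_group) in PATTERNS:
        m = pattern.search(text)
        if m:
            slot = m.group(slot_group) if slot_group else None
            return intent, slot
    return "unknown", None
```

Every new phrasing users come up with means another pattern (or, realistically, more training data), which is exactly the invisible work the next paragraph is talking about.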

There is a TON of work that goes into getting all of that to work, but it's not really a "feature" you can easily point to, even though it makes the product significantly better than the alternatives. And being able to do all of that without the extra latency of a round trip to a Google server would be an improvement.




