This doesn't make any sense to me.

Why would voice recognition software be interpreting ultrasonic (or near-ultrasonic) signals at all?

First, it doesn't make sense that models would be trained on ultrasonic audio. So why would they interpret these signals as speech at all?

And second, it doesn't make sense that such signals would make it from the microphone to the recognition engine -- surely there's a low-pass filter in there to remove extraneous noise above the vocal range?

I don't get it.

(Edit: could it be some kind of downsampling aliasing artifact that gets interpreted as a normal vocal frequency, precisely because they skip the low-pass filter that would prevent it?)
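The aliasing hypothesis is easy to check numerically. A minimal sketch (the 16 kHz sample rate and 21 kHz tone are illustrative assumptions, not from any specific device): a tone above the Nyquist frequency, sampled without an anti-aliasing filter, produces exactly the same sample values as an in-band tone, so a downstream recognizer literally cannot tell them apart.

```python
import math

fs = 16000       # assumed sample rate, typical for speech pipelines
f_ultra = 21000  # hypothetical near-ultrasonic tone, above Nyquist (fs/2 = 8000)

# With no anti-aliasing filter, f_ultra folds down to f_ultra % fs = 5000 Hz,
# which sits squarely in the vocal band.
f_alias = f_ultra % fs

n = range(200)
ultra   = [math.sin(2 * math.pi * f_ultra * i / fs) for i in n]
aliased = [math.sin(2 * math.pi * f_alias * i / fs) for i in n]

# The two sampled signals are identical to within floating-point error.
max_diff = max(abs(a - b) for a, b in zip(ultra, aliased))
print(f_alias, max_diff)
```

Since sin(2π·21000·i/16000) = sin(2πi + 2π·5000·i/16000), the sampled sequences coincide exactly; a proper low-pass filter before the ADC is the only place the two signals can be distinguished.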
