Heya,

I'm sorry for the confusion. My issue with it quoting webpages is not that it quotes them -- that's fine -- but that it does so in a very verbose manner. This leads to information overload. For example:

Me: What is the boiling point of water at an altitude of 1km?

Google: At sea level, water boils at 212 °F. With each 500-feet increase in elevation, the boiling point of water is lowered by just under 1 °F. At 7,500 feet, for example, water boils at about 198 °F. Because water boils at a lower temperature at higher elevations, foods that are prepared by boiling or simmering will cook at

The problem is the sheer quantity of information, and the fact that some of it is irrelevant and some of it is truncated. The last sentence is cut off mid-thought, yet as a listener I have no way of knowing that. I will thus try to remember it, at the expense of the facts that came before it.
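Incidentally, the rule of thumb buried in that snippet is enough to answer the question that was actually asked, if you do the arithmetic yourself. A rough sketch (the 212 °F sea-level figure and the "just under 1 °F per 500 feet" rate are taken from the quote; the linear rule is only an approximation):

    # Rough estimate using the rule of thumb from the quoted snippet:
    # boiling point drops roughly 1 degF per 500 ft of elevation.
    FEET_PER_KM = 3280.84        # 1 km expressed in feet
    SEA_LEVEL_BOIL_F = 212.0     # sea-level boiling point, from the quote
    DROP_F_PER_500_FT = 1.0      # "just under 1 degF" per 500 ft (approximation)

    def boiling_point_f(altitude_km):
        """Estimate the boiling point of water (degF) at a given altitude."""
        altitude_ft = altitude_km * FEET_PER_KM
        return SEA_LEVEL_BOIL_F - (altitude_ft / 500.0) * DROP_F_PER_500_FT

    print(round(boiling_point_f(1.0), 1))                 # ~205.4 degF at 1 km
    print(round(boiling_point_f(7500 / FEET_PER_KM), 1))  # ~197 degF at 7,500 ft (quote says ~198)

So the direct answer to the original question would have fit in a single sentence: roughly 205 °F at 1 km.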

When it reads a webpage, the important thing is that it reads only the specific parts of interest and doesn't overload the user. If it can't do that, it risks providing irrelevant or, quite frankly, confusing data (such as the odd answer to how much a Dreamliner weighs). I don't know if that's better than not providing an answer at all.


Sometimes it'll read a page, e.g. it read a passage from Wikipedia a few times. But if it really can't decide, it'll say something like "I'm sorry, I don't know that one".


Heya,

My first Hacker News inclusion. I feel like there should be some rite of passage. Well, other than the sudden and unanticipated login attempts.

My testing device for Google was the Google Home speaker, which appears to have a different tolerance for reading search results. I've had it rattle off several sentences from web pages for other keywords in the list (see, for example, the boiling point of water), but for the Bill Murray question there seems to be some kind of limiter. I just re-checked, using the exact phrasing I had before, and it still says that it doesn't know, but it's learning all the time.

I'm guessing there is some kind of a relevance check for the speaker version compared to the phone version. The phone is probably happier to return any result (a la Siri), whereas the speaker appears to be making some attempt to understand what I'm asking for before reading search results.

This particular question appears to trigger the speaker not to read the search results. We can only speculate as to why: does it not find the results relevant enough? Is there a reserved path for "Did xyz" questions when they're sent to Google Home? Am I unknowingly in the A/B testing group that doesn't get the answer? There are few ways of knowing from outside the black box short of massive-scale testing, but it is curious.
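For what it's worth, that kind of testing would look something like the sketch below: feed the speaker many phrasings of the same question and log which ones get a read-out versus an "I don't know". Everything here is hypothetical -- ask_assistant() is just a stand-in for whatever audio-in/transcript-out harness you'd rig up; I'm not assuming any public API for this.

    # Hypothetical black-box probe: generate many phrasings of the same
    # question, ask each one, and record whether the speaker reads a search
    # result or gives up. The question templates below are placeholders.
    import csv
    import itertools

    PREFIXES = ["Did", "Has", "Do you know whether"]
    QUESTION_BODIES = ["xyz happen", "xyz ever happen"]   # placeholder bodies

    def ask_assistant(question):
        """Stand-in: put the question to the speaker and return its spoken reply as text."""
        raise NotImplementedError("replace with your own capture harness")

    def probe(out_path="responses.csv"):
        with open(out_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["question", "response", "read_a_result"])
            for prefix, body in itertools.product(PREFIXES, QUESTION_BODIES):
                question = prefix + " " + body + "?"
                response = ask_assistant(question)
                writer.writerow([question, response, "I don't know" not in response])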


Welcome! I don't know of any rite of passage, but maybe I gave you your first upvote? ;)

> I'm guessing there is some kind of a relevance check for the speaker version compared to the phone version.

I would bet on that & expect it too... I'm sure all these voice search products are experimenting with how voice search needs to be tuned differently than text search.

