
I have given Zillow a lot of non-public information over years of searching.

How many people search on bedrooms but not bathrooms? When people search on both, what’s the pattern they use? If we highlight prices and BRs on the map does that give more clicks than just prices? How important are photos (times 50 different questions there)? How strong a signal is repeat views spaced over time? Saving a house to favorites? Sending a link to a friend? Clicking on comps in the neighborhood? Which comps do people zero in on (as evidenced by spending more time on the page)? How strong a signal is sending a message to the real estate agent on the listing? What areas of the country are seeing an uptick in search traffic? How long between claiming a house as an owner on the site, updating the information, and listing it for sale?

They are sitting on a (well-earned) treasure trove of data and it’s not unreasonable to think they could use that to be better informed than another buyer without that information.

Where they seem to have failed is in not augmenting that advantageous data with regular old boots-on-the-ground observations.



I think the opposite: there is not much valuable data, it is just noise.

It is very difficult to go from what users browse to what they actually buy. People very often say one thing, then do something completely different.

And sometimes they browse stuff just to make sure that their current decision is correct, so they will look at a lot of items they're not going to buy.

(oh, and everybody and their mother knows photos are important. No need for ML to find that out)


Do you think that A/B or multi-variate testing works in general?


In some cases A/B testing works very well and in other cases not at all.

So it is easy to test UI changes, but difficult to find out why people do what they do.
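The UI-testing case is the well-understood one: with two variants and a measurable click-through or conversion rate, significance reduces to a standard two-proportion z-test. A minimal sketch, with entirely hypothetical traffic numbers:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test for an A/B experiment.

    conv_* = number of conversions, n_* = number of visitors.
    Returns the z statistic; |z| > 1.96 is significant at the 5% level.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: variant B lifts click-through from 10% to 12%
z = two_proportion_z(500, 5000, 600, 5000)
print(round(z, 2))  # ~3.2, well past the 1.96 threshold
```

This tells you *that* variant B performed better, but, as the comment notes, nothing about *why* users preferred it.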



