
Check that you're not routing unnamed SNI requests to your web content. If someone sets up a reverse proxy under a different domain name, Google will index both domains and freak out when it sees the duplicate content. Also make sure you're setting canonical tags properly. Edit: I'd also consider using absolute URLs in links rather than relative paths.
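For that catch-all, here's a minimal sketch assuming nginx (the fallback server block is illustrative, not a drop-in config):

  # Default catch-all vhost: refuse TLS handshakes whose SNI doesn't
  # match any configured server_name, so stray domains pointed at
  # this box never get served your content.
  server {
      listen 443 ssl default_server;
      server_name _;
      ssl_reject_handshake on;  # nginx 1.19.4+; no certificate needed
  }

On older nginx versions you'd instead attach a placeholder self-signed certificate to the default server and use return 444; to close the connection without a response.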


Canonical tags are done correctly. I've never changed them, and the blog is quite old too. I found a pattern where Google considers a page a duplicate because of the URL structure. For example:

www.xyz.com/blog/keyword-term/
www.xyz.com/services/keyword-term/
www.xyz.com/tag/keyword-term/

So, for a given topic, if I have two of the above pages, Google will pick one of them as canonical despite their different keyword focus and intent. And the worst part is that it picks the worse page as canonical, e.g., the tag page over the blog post, or the blog post over the service page.


I guess a workaround could be to tell Googlebot to keep off /tag/?
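A sketch of that robots.txt rule, using the tag path from the example above:

  User-agent: Googlebot
  Disallow: /tag/

Worth noting: Disallow only blocks crawling, not indexing; a noindex robots meta tag on the tag pages (which must stay crawlable for Google to see it) is the more reliable way to keep them out of the index.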


I did that. But I'm not sure what to do with the blog and service pages; I can't tell Google to keep off one of them. For now, I have prioritized the service page as canonical.
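For reference, a sketch of what that looks like in the blog page's head, assuming the example URLs above and that the service page is the preferred version:

  <link rel="canonical" href="https://www.xyz.com/services/keyword-term/" />

Keep in mind rel=canonical signals that the two pages are duplicates; if they really target different intents, differentiating their content is usually the better long-term fix than a cross-page canonical.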



