I understand the article walks through the technical aspects. However, having seen a lot of non-savvy users use the internet, I am firmly with www. Probably because all the major "established" companies use www, people associate it with strength. I've come to see it as important in much the same way a .com extension is.
If a site didn't have www, most people assumed it was probably made by kids who didn't have www yet. See, most people don't understand that www isn't something like .com that you have to buy. So for the average Joe consumer, it signals strength. For an enterprise customer, it probably does too. So unless your product is for savvy users or zen-like designers, I'd stick with www.
A lot of people think the naked domain is cleaner. It actually isn't, since the average mind is conditioned to read www.x.com, and here you have https://x.com. It's cleaner in the sense that a face is cleaner without a nose.
>having seen a lot of non-savvy users use Internet, I am firmly with WWW
As an anecdotal counterpoint: at our family Christmas, I mentioned a few different URLs, spelling out "www" each time; one of my nieces said "why do you keep saying 'www'?", and one of the other kids said "that's how old people find websites." After some discussion, it turned out that nearly everyone over 30 habitually wrote URLs with "www" at the front, and everyone under 18 always omitted it.
Or they just parse the URL differently. One possibility is to treat mit.edu as equivalent to www.mit.edu, and web.mit.edu to www.web.mit.edu. This is how I did it when I was a kid (and knew absolutely nothing about programming etc.)
here's a weird thing about "www": in the netherlands, many people say the "www" out loud when mentioning a domain name, but then omit the dot after the "www". they do mention the dot before the TLD. so a domain like "www.example.nl" would be pronounced as "wwwexample.nl".
to be clear, this is not something incidental. you hear it in radio and tv commercials. there might be a relation with age, with older people forgetting the "." after "www" more often.
This is interesting; I never realized it's the same in France.
Sometimes every "dot" is omitted. I'll take a walk around the office tomorrow and have people read www.example.fr and www.example.nl out loud. This must be seriously researched :)
10-15 years ago, it was popular to see "www.example.com" everywhere (television ads, etc.). Nowadays, though, I can't think of the last time I saw "www" in ads, printed materials, etc.
I think the younger generation, in particular, is more accustomed to the lack of www.
> If the site did not have www, most people assumed it is probably made by kids, who do not have www yet.
This doesn't make any sense to me. There's literally no correlation between "made by kids" and "doesn't have www", and no reason why anyone would ever make that association. You've provided what appears to be a bullshit justification for a completely unsubstantiated and fairly outlandish claim.
It's anecdotal. However, it shouldn't be hard to test with 10 non-tech friends. I also don't recall having my credit card on file with any website that uses a naked domain. That's pure correlation, which goes to show how valid any hard evidence on this issue would be.
Performing your own little user study is the best way to take a stance, I believe.
I agree and not sure why you've been downvoted originally. I've also seen plenty of non-tech-savvy users use the internet who seem almost confused if a website isn't 'using www'.
It also depends on your site's target audience. For a programming site or other IT-technical site it probably wouldn't matter.
For others, as recently as 2014 I still witnessed the following.
A: What was the site address ?
B: something.com
A: OK. [starts typing www.something.com]
For the majority, i.e. those who don't know HN and don't know jack about computers, WWW means website. While in recent years this has become much less of a problem, because Google has become our gateway to the internet and apps are taking over, WWW is still ingrained in their mindset.
After having tried both I'm very much in the WWW camp. Even though the naked domain looks nicer, it's just not worth the hassle.
> End users save an extra DNS lookup
Most intermediate resolvers will return both the CNAME and the A record in one response anyway.
Another issue with naked domains is that all the cookies are automatically served on subdomains as well. It's just another hassle to worry about when trying to keep the CDN clean, or when wondering why a session works only in specific cases.
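To make this point concrete: under RFC 6265, a cookie set with `Domain=example.com` is sent to every subdomain, while a cookie scoped to `www.example.com` stays on www. Here's a minimal sketch of the RFC 6265 domain-matching rule (hostnames are hypothetical):

```python
def domain_matches(request_host: str, cookie_domain: str) -> bool:
    """Simplified RFC 6265 (section 5.1.3) domain matching: a cookie
    scoped to cookie_domain is sent to request_host if the names are
    identical or request_host is a subdomain of cookie_domain."""
    request_host = request_host.lower()
    cookie_domain = cookie_domain.lower().lstrip(".")
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

# A cookie with Domain=example.com follows you to every subdomain,
# including the static-content one you wanted cookie-free:
print(domain_matches("static.example.com", "example.com"))      # True
# A cookie scoped to www.example.com does not leak to siblings:
print(domain_matches("static.example.com", "www.example.com"))  # False
```

This is why a cookie-heavy app on the naked domain ends up flooding every subdomain, while the same app on www keeps its cookies to itself.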
Connections to a CDN from a naked domain might also be limited. Can anyone confirm? E.g. if the browser limit is 2 connections per host, bare + sub will only give 2 connections, while sub + sub will give 4.
If you are using Websockets or AJAX the connection limit can become an issue.
Also, some users always type www in front of the domain, though some browsers will just redirect that to a Google search. So if you check the browser referrer, a lot of traffic will come from www.google.com?q=www.yourdomain.com
Yes basically this is the main reason. I could change it but I have better things to do.
Also, my www recommendation is for commercial websites, which tend to have multiple subdomains, more traffic with HA requirements, and a more complex setup than a static website.
The way you word the statement and then append "Curious." makes it somewhat hostile. It's a callout, not just an observation.
"I'm curious why you say you're 'very much' in the WWW camp but you don't use it on your own site." would not have the same tendency to register as an attack.
That's just a comment written in a second or two, not a verse from the Bible. No need to dissect it this deeply. If this is hostile, asking "how are you" is too, since it implies one may be in bad shape.
If it's a comment written in a second or two, is it worth posting? It was definitely a call-out as indicated by the finality of "Curious"; it could have been asked very matter-of-factly if he was actually curious.
As a contradicting opinion, by my conversational standards, the comment reads as just an observation that this is curious, and an implication that the poster would be interested in more information. It does not read at all like an attack to me.
Yes, that's why I was explicit that I was discussing my personal conversational standards that reflect my personal history, which is different from yours. As you point out, there are people from a wide variety of backgrounds on the internet, and part of participating in the internet is understanding this and expecting it.
When you see something that you might think is hostile, it's often more productive to give them the benefit of the doubt and presume that they intended good faith but have different conversational norms from you. In the best case, they meant well, and everything proceeds nicely. In the slightly worse case, they didn't mean well, but you've helped de-escalate, and things are back on track for a nice discussion. In the worst case, they get more blatant in their attacks, and at least you'll find out that productive discussion wasn't possible anyway. Not much downside, and lots of upside, in my experience.
Depends on your audience too. If you're targeting non-savvy web users then www is a clear indication that a string is a web domain. Not so important for a .com but if your site is example.io or some such ...
* Cookies for the root domain get sent to all subdomains, so a subdomain for static content still gets flooded with cookies, slowing down requests. Subdomains will also receive cookies you may not want them getting, complicating site design. You can end up sending dozens of kilobytes of cookies with each request because of this. The way around it is buying a whole new domain name just for static content, and then duplicating SSL and all the other requirements for that new domain. Or hoping RFC 2965/6265 won't break anything using your site.
* There is a security boost from the same-origin policy not allowing a subdomain to hijack cookies for the root domain ("forums.foobar.com" could be made to set a cookie that "foobar.com" interprets, which can be used to hijack user sessions; this would not happen on www). This problem affected GitHub, and they had to implement complicated workarounds.
* It is easier and more flexible to configure a round robin of frontend hosts with a CNAME (on www) than by A records on the root domain. If your cloud hosting provider's IP changes, they can change their hostname records without needing to modify your DNS records - less work for you and them. And if you think a single static anycast address could never have a routing problem, think again.
* Google will (or did in the past) ding you for duplicate content. The same content on foobar.com/ and www.foobar.com/ will appear as duplicate. Providing the content only on www separates it from other content and makes it easier to search subdomain-specific content. (This won't happen if one of them is 301 redirected to the other, however)
Reasons not to use www:
* "It looks cleaner."
People, you can 301 redirect your www-less site to www, gain all the advantages of using www, and the only "hassle" will be in how the address bar looks.
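For instance, the redirect is a one-liner in most web servers. A hypothetical nginx server block for it might look like this (domain names are placeholders, and the SSL certificate directives are elided):

```nginx
# Redirect the bare domain to www, preserving path and query string.
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key for example.com go here
    return 301 https://www.example.com$request_uri;
}
```

The 301 is cached by browsers and honored by search engines, so the bare domain still "works" for users while all the canonical traffic lands on www.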
> If you want to be able to receive email on your domain, you’ll need to set MX records at the apex domain. With a CNAME, no other records can be set.
> Want to validate your domain for webmaster tools? Or for the DNS validation required for some domain validated SSL certificates? Now you have to add a TXT record to the apex domain. If you already have a CNAME, again, that’s not allowed.
It’s actually worse than that. All domains have, for technical DNS reasons, both a SOA record and at least one NS record in them at the “apex” domain. This would conflict with an apex CNAME record. Therefore, you can’t have a CNAME on an apex domain, even if it would otherwise be empty.
(There is a technical, and very theoretical, way around this limitation: The administrators of the top-level .com domain could, for example, add a CNAME record directly into the top-level domain zone. This would be valid, technically, but good luck convincing the various parties involved to do this.)
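A zone-file sketch of the conflict (all names and values are made up): the apex already carries SOA and NS records, so a CNAME there is forbidden, while a subdomain is free to use one.

```
; example.com zone (illustrative only)
example.com.      IN  SOA    ns1.example.com. hostmaster.example.com. (
                             2017010101 7200 900 1209600 300 )
example.com.      IN  NS     ns1.example.com.
example.com.      IN  MX     10 mail.example.com.
; example.com.    IN  CNAME  target.example.net.   ; INVALID: a CNAME may
;                                                  ; not coexist with the
;                                                  ; records above
www.example.com.  IN  CNAME  example.netlify.com.  ; fine on a subdomain
```

Most DNS servers will refuse to load a zone with the commented-out line enabled, which is exactly the limitation the article is working around.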
Don't make the mistake I made: hosting different content on www.gitlab.com (a static site) than on GitLab.com (the application). People expect them to be the same. We ended up moving the static site to about.GitLab.com
I always alias my www.* domains to the site without the www by default. While you assumed people would almost always think they're different, I'm assuming people will almost always think they're the same.
I wonder how many people get frustrated by being redirected from the www to its naked counterpart before spamming refresh and leaving in defeated frustration... Uh oh
The main problem, in my opinion, is that CNAME is broken for the root domain, and that can hardly be fixed in such an ancient protocol without some pain.
What Cloudflare and DNSimple are doing is the right thing. I hope that CNAME flattening or ALIAS records become some kind of standard.
That would be great to see and would solve a real issue for many users of services like ours (or Heroku, GitHub pages, etc, etc, etc).
There are gotchas, however, since you now depend on two levels of DNS-based traffic direction, and we have sometimes run into issues where DNS providers offering ALIAS records simply cached one DNS response and sent all DNS lookups to the same CDN PoP regardless of their location :/
It's explained in the article, but the TL;DR is that CF and DNSimple are simply pretending that a CNAME on the root domain is the corresponding A or AAAA record instead.
It breaks geographically-distributed CDNs a bit, but it mostly works.
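A toy sketch of what flattening does server-side (zone contents and IPs are invented): the authoritative server chases the alias itself and answers the client with plain A records, so the apex never exposes a CNAME.

```python
# Toy zone: the apex holds an ALIAS to a CDN hostname, which in turn
# has ordinary A records. All names and addresses are hypothetical.
ZONE = {
    "example.com.": ("ALIAS", "cdn.example-provider.net."),
    "cdn.example-provider.net.": ("A", ["198.51.100.7", "198.51.100.8"]),
}

def resolve(name: str, zone=ZONE, depth: int = 0) -> list:
    """Flattening: follow ALIAS/CNAME targets server-side and return
    the final A records, so the client only ever sees A records."""
    if depth > 8:  # guard against alias loops
        raise RuntimeError("alias chain too long")
    rtype, data = zone[name]
    if rtype in ("ALIAS", "CNAME"):
        return resolve(data, zone, depth + 1)
    return data

print(resolve("example.com."))  # ['198.51.100.7', '198.51.100.8']
```

This also shows the downside mentioned upthread: because the authoritative server does the chasing, the answer reflects that server's vantage point rather than the end user's, which is what hurts geo-targeted CDNs.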
I don't think there is a reason it would have to be tied to HTTP 2, and also not much to gain by explicitly including it. Proposals for using SRV records for HTTP have been around a long time, seems like there have been some open questions and not all that much interest.
The HTTP/2 standard must include provisions for SRV records to be used, since that is part of how clients should follow a URL. Additionally, the SRV specification itself says that a protocol specification must state that SRV records should be used before any client of that protocol takes it upon itself to use them.
[...] If host is a registered name, the registered name is an indirect identifier for use with a name resolution service, such as DNS, to find an address for that origin server.
[...]
When an "http" URI is used within a context that calls for access to the indicated resource, a client MAY attempt access by resolving the host to an IP address, establishing a TCP connection to that address on the indicated port, and sending an HTTP request message (Section 3) containing the URI's identifying data (Section 5) to the server.
I don't think that excludes SRV-based name resolution. Some sort of standardization would of course be helpful, even if just for reference, but in my mind that could be an independent document recommending SRV for HTTP, without any detail about the version (since HTTP/2 has no property that makes it more or less fit for use with SRV records than 1.1). Adding something to HTTP/2 that may well never see any use, just because, seems worse.
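For reference, RFC 2782 defines the owner-name convention such a proposal would presumably use; a hypothetical HTTP-over-SRV client would construct and query a name like this:

```python
def srv_name(service: str, proto: str, domain: str) -> str:
    """Build the SRV owner name per RFC 2782: _Service._Proto.Name.
    The SRV answer then carries priority, weight, port, and target,
    which is what would let the apex delegate to arbitrary hosts."""
    return f"_{service}._{proto}.{domain}"

print(srv_name("http", "tcp", "example.com"))  # _http._tcp.example.com
```

Because the SRV target is itself a hostname with its own port, it would sidestep the apex-CNAME restriction entirely, which is why it keeps coming up in these discussions.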
Also, DNS round robin won't work with IPv6. At all. Round robin DNS depends on a client connecting to the first address record it receives in the DNS reply, and on the DNS server rotating which address comes first in each response. But with IPv6, a client host is required to connect to the address closest to its own (as determined by the longest run of common leading bits), regardless of the address's position in the DNS reply.
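The "closest to its own" rule is the longest-matching-prefix tie-breaker from RFC 6724's destination address selection. A small sketch of that comparison (addresses are from the 2001:db8::/32 documentation range):

```python
import ipaddress

def common_prefix_len(a: str, b: str) -> int:
    """Number of leading bits two IPv6 addresses share -- the metric
    RFC 6724 (rule 9) uses when sorting candidate destinations."""
    x = int(ipaddress.IPv6Address(a))
    y = int(ipaddress.IPv6Address(b))
    return 128 - (x ^ y).bit_length()

# A client at 2001:db8:a::1 will prefer the destination sharing the
# longer prefix, regardless of record order in the DNS reply:
print(common_prefix_len("2001:db8:a::1", "2001:db8:a::42"))  # 121
print(common_prefix_len("2001:db8:a::1", "2001:db8:b::42"))  # 47
```

So if your round-robin pool happens to contain one address numerically "near" the client, that client will pick it every time, defeating the rotation.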
I'm surprised that the article doesn't mention anycast, which is more or less the "correct" way of using a CDN on an apex domain, since for the user's purposes it's just a static IP address.
I find anycast to be convenient even for subdomains, since it isn't affected by things like DNS caching, (although things like edns-client-subnet apparently help with that).
I'm actually looking for a CDN for my website right now. I don't like www (just personal preference), so anycast is pretty important to me, but there don't seem to be a lot of providers offering anycast at a decent price. The closest I've seen is Google's Cloud CDN, which out of all the CDNs I've tried (a lot) is one of the best, but for a small site like mine I tend to get more cache misses than hits (simply due to eviction).
Maybe I'll write up a blog post about this issue :)
It's odd to hear a CDN complaining about this limitation when it has already been solved for well over a decade by other leading CDNs.
Akamai can serve your apex domain from their edge servers. They do it by giving different answers for the A record to different users, based on where each user is coming from. All that's required is that you use them as your NS.
At SunSed, we use Google HTTP(S) Load Balancer which allows us to load balance our entire infrastructure via a single IP.
Our users don't need to worry about CNAME vs A records; they can do whatever they want with the IP. Since we never need to change this IP, there is no benefit to using a CNAME.
On top of that SSL handshake for HTTPS happens at Google front ends which reduces the load on our servers. Also we can send traffic to different sets of VMs based on the URL! How cool is that?
I really think that Google's HTTP Load balancer is the hidden gem of Google Cloud.
> On top of that SSL handshake for HTTPS happens at Google front ends which reduces the load on our servers. Also we can send traffic to different sets of VMs based on the URL! How cool is that?
Very cool - let's just hope Google is better at hiding contents of some random memory than Cloudflare.
I host my own sites and simply use A records.
> When it looks up example.netlify.com, it connects to our advanced traffic director, that returns an A record with an IP address of the server from our pool of currently available CDN nodes that’s geographically closest to the end user.
It looks like the way their DNS redirection/load balancing works is the reason they don't simply allow A records to a static IP.
This gets into the whole "you could be redirected to other servers based on your geographical location" issue; and not necessarily your location but the location of your DNS server! I'm not sure if Netlify does this, but Akamai does work with ISPs' DNS servers around the world to return different results to get users to the closest CDN nodes. This is why using Google DNS (8.8.8.8) resulted in slower loads for Akamai customers.
Author here. We do actually offer all of these options.
We offer a public IP address for A records pointing to our main load balancer. This will send all traffic to a single origin instead of serving your HTML pages out of our global CDN.
We also offer DNS hosting for pro plans and up. When you move your DNS to Netlify, the caveat about naked domains doesn't apply (as mentioned in the first paragraph), since we hook the domain record straight into our global traffic director.
For enterprise customers we also offer an anycasted IP address that lets you use our CDN with a normal A record, but we still recommend either using our DNS hosting or a www domain since the DNS based traffic direction is faster at responding to localized issues and offers more precise traffic distribution.
Wouldn't a simpler (for the end customer, not for you) solution be to use anycast on an IP address (or block of addresses) and then let folks always use A records as intended? That solves the ANAME non-local caching issue and also handles people using DNS servers that aren't near them.
We do run an anycast CDN network, but there's a lot of limitations on BGP routing compared to CDN based traffic direction.
We can only route BGP requests to hardware we control, whereas we can add PoPs in all the major cloud providers on our DNS-based network. We can then use tools like Cedexis or Dyn's internet intelligence to identify where the different cloud providers have the best networking and peering agreements, and piggyback on that plus their DDoS mitigation. This means we get a combination of the best that AWS, Google Cloud, Rackspace, DO, etc. have to offer in that respect.
On the DNS based traffic director we can also do very quick traffic decisions (20s TTL, instant changes) whereas on our BGP routed anycast IP we have to be more conservative and force 10 minute intervals between any up/down changes for a PoP.
I did GeoDNS + Unicast IPs for a while. I had a really rough time making it work, and we ended up building our own anycast network (https://status.neocities.org)
Aside from the root domain issues (and fewer options for market-priced bandwidth), "GeoDNS + Cloud" pushes your traffic into someone else's ASN, which means complaints end up being sent to them, and your hosting is effectively governed not just by one, but by two different ToSes.
This isn't a big deal for a couple thousand sites (unless they're huge), but once you start getting into the hundreds of thousands, you'll see a significant spike in issues (phishing, malware, spam, DMCA, legal threats, etc.) that get sent to whoever owns that IP address. After getting too many of these complaints, those other providers can decide you're just not worth the effort and boot you off their servers.
Crazy hypothesis? Sounds like it would be, but it happens: https://twitter.com/surge_sh/status/685164708861624325. DO did the same thing to us when we tried to use them for part of our CDN early on. After that, I tried three other cloud services that either did the same thing or threatened to do the same thing (to say nothing about the ridiculously overpriced bandwidth).
The choice we were left with: Get our own AS, or die. Mind you, this was over < 30 abuse reports per month, not thousands. Most of these providers are designed for a single company or a wordpress blog, they're not designed (and not really equipped) for usage as infrastructure for a web hosting provider with hundreds of thousands (or millions) of customers.
Building out the anycast CDN was a "drinking from the firehose" experience and had some upfront costs I would have rather not paid, but it solved this existential problem for us permanently, and probably saved our life. From experience, I do think you'll have to do this eventually (or at least do GeoDNS + unicast with your own IPs and AS).
Does this anycasted IP actually serve the HTML page/assets, or does it reply with a redirect to a "stable" IP announced from the same PoP for actually serving the assets?
Noob question: what companies would have to jump onboard to get a new record up and running? Could it not just be one company like DNSimple who first adopts it?
I feel like the biggest problem would be all the ISPs' DNS servers. ISPs are notorious for breaking all kinds of stuff, and this would probably be just another thing they break.
Technically, one company (like DNSimple) could add a new record and start using it themselves. For it to be universally supported, however, it would need to go through the standards process and become part of the DNS standards.
Using 'computer' is a bit disingenuous there. Upgrading a computer sounds like you're buying a new laptop or replacing hardware to deal with a software protocol change.
The software installed on my computer doesn't understand your new record type, so if you want me to see your site, you're going to have to wait until I upgrade it.
Adding www doesn't make any sense for URL shorteners, for example.
The same goes for media like Twitter, where characters are counted and "precious": using www. adds 4 chars to the message (in theory, since URL shorteners help with that anyway).
Another detail I've noticed since the wide adoption of browsers with a single combined URL/search field: most people don't even care about the exact URL. They just enter the name they believe the website has and let the search engine do the job if it's mistyped or nonexistent. (That opens the door to phishing attacks.)
Because it lets different organizations/organizational units control different parts of the resolution. For example, you don't want to give Heroku control of your whole DNS (and they don't want to be in the DNS business), but you do want to let Heroku change the actual network IP addresses that handle your app on their own; you don't even want to have to know what they are.
This is slightly off topic, but can anyone elaborate a little on why/where/how Netlify differs from Heroku? It's a little more expensive and you can't host your back end, so I'm a little confused about the value provided.
It very much depends on the age of your target market. I'd say there's a cutoff around age 30 where people simply omit the www. when talking about addresses and assume everything is just whatever.com.
104 comments say that you're wrong. There has been a years-long discussion about www vs non-www, and this is a continuation of it. It served the purpose of sparking the conversation; that was its value.