To www or not www (netlify.com)
208 points by jacobwg on March 18, 2017 | 125 comments


I understand the article walks through the technical aspects. However, having seen a lot of non-savvy users use the Internet, I am firmly with WWW. Probably because of association, and because all the major "established" companies use www, people associate it with strength. I have learned it is important in a similar way that a .com extension is.

If a site did not have www, most people assumed it was probably made by kids who do not have www yet. See, most people do not understand that www is not something like .com that you have to buy. So for the average Joe consumer, it signals strength. For an enterprise customer, it probably does too. So unless your product is for savvy users or zen-like designers, I'd stick with www.

A lot of people think that a naked domain is cleaner. It actually is not, since the average mind is conditioned to read www.x.com, and instead you give it https://x.com. It's cleaner in the sense that a face is cleaner without a nose.


>having seen a lot of non-savvy users use the Internet, I am firmly with WWW

As an anecdotal counterpoint, at our family Christmas, I mentioned a few different urls, spelling out "www" each time; one of my nieces said "why do you keep saying 'www'", and one of the other kids said "that's how old people find websites." After some discussion, it turned out that nearly everyone over 30 habitually wrote urls with "www" at the front, and everyone under 18 always omitted it.


Wow yeah I bet those kids were not born back when web.mit.edu and www.mit.edu were still different things (plus see e.g. http://web.archive.org/web/19990208005346/http://mit.edu/ ).


Or they just parse the URL differently. One possibility is to treat mit.edu as equivalent to www.mit.edu, and web.mit.edu to www.web.mit.edu. This is how I did it when I was a kid (and knew absolutely nothing about programming etc.)


At least we've mostly gotten away from "h-t-t-p-colon-slash-slash..."


I always thought it was odd when they go with the whole "http://www." voodoo ritual, but then they skip the final slash after the domain.


AFAIK the lack of a path part is allowed in an HTTP URL per RFC 2616.

       http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
> If the abs_path is not present in the URL, it MUST be given as "/" when used as a Request-URI for a resource (section 5.1.2).

Browsers may differ on whether they auto-fix schemaless URIs, but they're all required to fix path-less URIs. So it's much "safer" to use them.
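
As a quick illustration of that rule, here is a minimal Python sketch (mine, not from the thread; the helper name is made up) of the normalization a client has to do:

    from urllib.parse import urlsplit

    def request_target(url: str) -> str:
        """Path to put on the HTTP request line for a given absolute URL."""
        parts = urlsplit(url)
        # RFC 2616 section 5.1.2: if abs_path is absent, send "/" instead.
        return parts.path or "/"

    assert request_target("http://example.com") == "/"
    assert request_target("http://example.com/foo?bar=1") == "/foo"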


Or even worse, "h-t-t-p-colon-backslash-backslash..." from people who were used to saying "c-colon-backslash"


Or worse yet still "forward slash forward slash."


How is forward slash worse? At least that's the correct direction.


Because it's a made up term. It's just called "slash."


More like: solidus and reverse solidus!


At least it unambiguously refers to the correct character / as opposed to the incorrect character \


Or worse yet, "h-t-t-p colon reverse-backslash reverse-backslash..."


It's awesome that URLs can have different protocols, although most users/apps will assume http://www.


I heard https mentioned in a spoken url on the radio recently. And they said "that's with secure encryption"


My anecdote confirms your counterpoint. I too was once ridiculed for spelling out www in a conversation with some millennials. Made me feel old.


here's a weird thing about "www": in the netherlands, many people say the "www" out loud when mentioning a domain name, but then omit the dot after the "www". they do mention the dot before the TLD. so a domain like "www.example.nl" would be pronounced as "wwwexample.nl".

to be clear, this is not something incidental. you hear it in radio and tv commercials. there might be a relation with age, with older people forgetting the "." after "www" more often.


This is interesting; I never realized it is the same in France. Sometimes all of the "dots" are omitted. I will take a walk around the office tomorrow and have people read www.example.fr and www.example.nl out loud. This must be seriously researched :)


10-15 years ago, it was popular to see "www.example.com" everywhere (television ads, etc.). Nowadays, though, I can't think of the last time I saw "www" in ads, printed materials, etc.

I think the younger generation, in particular, is more accustomed to the lack of www.


> If the site did not have www, most people assumed it is probably made by kids, who do not have www yet.

This doesn't make any sense to me. There's literally no correlation between "made by kids" and "doesn't have www", and no reason why anyone would ever make that association. You've provided what appears to be a bullshit justification for a completely unsubstantiated and fairly outlandish claim.


I have trouble believing this to be honest. Any hard evidence?


It's anecdotal. However, it should not be hard to test it out with 10 non-tech friends. I also don't recall having my credit card on file with any website that uses a naked domain. That's pure correlation, which goes to show how valid any hard evidence on this issue would be.

Performing your own little user study is the best way to take a stance, I believe.


I agree, and I'm not sure why you were downvoted originally. I've also seen plenty of non-tech-savvy users use the internet who seem almost confused if a website isn't 'using www'.


Exactly. We used a naked domain for a while and had to switch because a percentage of people kept getting "worried" about the missing www.

I think if you have a tech target audience this doesn't matter but for general consumers, it's still an issue.


Some browsers hide the www from the URL (even if it's actually there), so I think customers notice it less.


This.

It also depends on your site's target audience. For a programming site or other technical IT site it probably wouldn't matter.

For others, as recently as 2014 I still witnessed the following. A: "What was the site address?" B: "something.com" A: "OK." [Starts typing www.something.com]

For the majority, i.e. those who don't know HN and don't know jack about computers, WWW means website. While in recent years this is much less of a problem, because Google has become our gateway to the Internet and apps are taking over, WWW is still ingrained in their mindset.

And agree on the .com.


>It's cleaner in the sense that a face is cleaner without a nose.

Otoh, our noses (and balls) are ugly as hell. We just got used to them during evolution.


After having tried both I'm very much in the WWW camp. Even though the naked domain looks nicer, it's just not worth the hassle.

> End users save an extra DNS lookup

Most intermediate resolvers will return both the CNAME and the A record in one response anyway.

Another issue with naked domains is that all the cookies are automatically served on subdomains as well. It's just another hassle to worry about when trying to keep the CDN clean, or wondering why a session only works in specific cases.
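
A quick way to check that resolver behaviour (a hedged sketch; it assumes the dnspython package, and that www.github.com is still a CNAME, which it was at the time of writing):

    import dns.resolver

    # One A query; the response usually carries the CNAME RRset and the
    # final A RRset together.
    answer = dns.resolver.resolve("www.github.com", "A")
    for rrset in answer.response.answer:
        print(rrset)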


Connection limits to a CDN from a naked domain might also be an issue. Can anyone confirm? E.g. if the browser limit is 2 connections per host, bare + sub will only give 2 connections, while sub + sub will give 4 connections.

If you are using Websockets or AJAX the connection limit can become an issue.

Also, some users always write www in front of the domain, although some browsers will just redirect to Google. So if you check the browser referer, a lot will be from www.google.com?q=www.yourdomain.com


With HTTP/2 support this is no longer really a problem, since HTTP/2 multiplexes requests over one connection.


I think few web apps (i.e. software-driven sites, not just static content) use HTTP/2 yet.


I believe Chrome's connection limit per host is now 6 (even under old-school HTTP). Not sure about the other browsers.


I believe most are 6 and have been for at least a few years.


So you're "very much" in the WWW camp but you don't even use it on your own site. Curious.


Sometimes you have to make the mistake to know it was one. I wouldn't trust his opinion if he had never tried non-www.


Yes basically this is the main reason. I could change it but I have better things to do.

Also, my www recommendation is for commercial websites, which tend to have multiple subdomains, more traffic with HA requirements, and a more complex setup than a static website.


That's a slightly strange personal attack. Their site may be one of the ones where they tried the naked domains.


It wasn't a personal attack, literally only an observation.


The way you word the statement and then append "Curious." makes it somewhat hostile. It's a callout, not just an observation.

"I'm curious why you say you're 'very much' in the WWW camp but you don't use it on your own site." would not have the same tendency to register as an attack.


That's just a comment written in a second or two, not a verse from the bible. No need to dissect it this deep. If this is hostile, asking "how are you" is too, implying one may be in bad shape.


If it's a comment written in a second or two, is it worth posting? It was definitely a call-out as indicated by the finality of "Curious"; it could have been asked very matter-of-factly if he was actually curious.


As a contradicting opinion, by my conversational standards, the comment reads as just an observation that this is curious, and an implication that the poster would be interested in more information. It does not read at all like an attack to me.


Some/Many posters here are not native speakers. There's no English conversational standard on the Internet.


Yes, that's why I was explicit that I was discussing my personal conversational standards that reflect my personal history, which is different from yours. As you point out, there are people from a wide variety of backgrounds on the internet, and part of participating in the internet is understanding this and expecting it.

When you see something that you might think is hostile, it's often more productive to give them the benefit of the doubt and presume that they intended good faith but have different conversational norms from you. In the best case, they meant well, and everything proceeds nicely. In the slightly worse case, they didn't mean well, but you've helped de-escalate, and things are back on track for a nice discussion. In the worst case, they get more blatant in their attacks, and at least you'll find out that productive discussion wasn't possible anyway. Not much downside, and lots of upside, in my experience.


Some domains just sound much better without the www. You should weigh the advantages vs. disadvantages ...


Depends on your audience too. If you're targeting non-savvy web users then www is a clear indication that a string is a web domain. Not so important for a .com but if your site is example.io or some such ...


Interesting point on .com vs others for www.


Reasons to use www:

* Cookies for the root domain get sent to all subdomains, so a subdomain for static content still gets flooded with cookies, slowing down requests. Subdomains will also get cookies you may not want them getting, complicating site design. You can end up sending dozens of kilobytes of cookies with each request due to the www-less cookies. The way around this is buying whole new domain names just for static content, and then duplicating SSL and all the other requirements for this new domain. Or hoping RFC 2965/6265 won't break anything using your site. (See the cookie-scoping sketch after this list.)

* There is a security boost from same-origin rules not allowing a subdomain to hijack cookies for the root domain ("forums.foobar.com" could be made to set a cookie that "foobar.com" interprets, which can be used to hijack user sessions; this would not happen on www). This problem affected GitHub and they had to implement complicated workarounds.

* It is easier and more flexible to configure a round robin of frontend hosts with a CNAME (on www) than by A records on the root domain. If your cloud hosting provider's IP changes, they can change their hostname records without needing to modify your DNS records - less work for you and them. And if you think a single static anycast address could never have a routing problem, think again.

* Google will (or did in the past) ding you for duplicate content. The same content on foobar.com/ and www.foobar.com/ will appear as duplicate. Providing the content only on www separates it from other content and makes it easier to search subdomain-specific content. (This won't happen if one of them is 301 redirected to the other, however)

Reasons not to use www:

* "It looks cleaner."

People, you can 301 redirect your www-less site to www, gain all the advantages of using www, and the only "hassle" will be in how the address bar looks.
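
To make the cookie points concrete, here is a small Python sketch of (simplified) RFC 6265 domain matching; the foobar.com hosts are hypothetical and the helper is purely illustrative:

    def cookie_sent_to(request_host: str, cookie_domain: str, host_only: bool) -> bool:
        """True if a cookie scoped to cookie_domain is sent to request_host."""
        if host_only:
            return request_host == cookie_domain
        return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

    # A cookie set by the apex (Domain=foobar.com) is sent to every subdomain:
    assert cookie_sent_to("static.foobar.com", "foobar.com", host_only=False)
    assert cookie_sent_to("forums.foobar.com", "foobar.com", host_only=False)

    # A host-only cookie set by www stays on www:
    assert not cookie_sent_to("foobar.com", "www.foobar.com", host_only=True)
    assert not cookie_sent_to("static.foobar.com", "www.foobar.com", host_only=True)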



This comment convinced me to switch to www while article and other comments didn't. Thank you for a detailed reply.


> If you want to be able to receive email on your domain, you’ll need to set MX records at the apex domain. With a CNAME, no other records can be set.

> Want to validate your domain for webmaster tools? Or for the DNS validation required for some domain validated SSL certificates? Now you have to add a TXT record to the apex domain. If you already have a CNAME, again, that’s not allowed.

It’s actually worse than that. All domains have, for technical DNS reasons, both a SOA record and at least one NS record in them at the “apex” domain. This would conflict with an apex CNAME record. Therefore, you can’t have a CNAME on an apex domain, even if it would otherwise be empty.

(There is a technical, and very theoretical, way around this limitation: The administrators of the top-level .com domain could, for example, add a CNAME record directly into the top-level domain zone. This would be valid, technically, but good luck convincing the various parties involved to do this.)
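
For anyone who wants to see those apex records for themselves, here is a hedged sketch using the dnspython package (an assumption on my part, not something from the thread):

    import dns.resolver

    # The SOA and NS records that every zone has at its apex:
    for rdtype in ("SOA", "NS"):
        for record in dns.resolver.resolve("example.com", rdtype):
            print(rdtype, record)

    # A CNAME query at the apex of a correctly configured zone should come
    # back empty, since a CNAME may not coexist with the records above.
    try:
        dns.resolver.resolve("example.com", "CNAME")
    except dns.resolver.NoAnswer:
        print("no CNAME at the apex, as expected")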


Don't make the mistake I made: hosting different content on www.gitlab.com (a static site) than on GitLab.com (the application). People expect them to be the same. We ended up moving the static site to about.GitLab.com


Oh my

I always have my www.* domains alias to the site without the www by default. While you assumed people will almost always think they're different, I'm assuming people will almost always think they're the same.

I wonder how many people get frustrated by being redirected from the www to its naked counterpart before spamming refresh and leaving in defeated frustration... Uh oh


Yeah, big mistake on my part. I don't understand why you're mentioning a redirect, that was not something we were doing.


The main problem, in my opinion, is that CNAME is broken for the root domain, but that can hardly be fixed in such an ancient protocol without some pain.

What Cloudflare and DNSimple are doing is the right thing. I hope that CNAME flattening or ALIAS records become some kind of standard.


That would be great to see and would solve a real issue for many users of services like ours (or Heroku, GitHub pages, etc, etc, etc).

There are gotchas, however, since you now depend on two levels of DNS-based traffic direction, and we have sometimes run into issues where DNS providers offering ALIAS records simply cached one DNS response and sent all DNS lookups to the same CDN PoP regardless of their location :/


Could you explain more about what Cloudflare/DNSimple are doing to workaround this, from a technical standpoint?


It's explained in the article, but the TL;DR is that CF and DNSimple are simply pretending that a CNAME on the root domain is the corresponding A or AAAA record instead.

It breaks geographic CDN routing a bit, but it works somewhat.
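
Here is a toy sketch of that flattening idea (my illustration, not CloudFlare's or DNSimple's actual code; it assumes dnspython, and the alias target name is made up): when a client asks for an A record at the apex, the server chases the configured alias itself and answers with the resulting addresses.

    import dns.resolver

    ALIAS_TARGET = "example-site.netlify.com"  # hypothetical ALIAS/ANAME target

    def flattened_a_records(apex_name: str) -> list:
        """Return the alias target's A records as if they were set on apex_name."""
        return [r.address for r in dns.resolver.resolve(ALIAS_TARGET, "A")]

    print(flattened_a_records("example.com"))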


This is why I wish that as part of HTTP 2 they had allowed the use of SRV records and gotten it built into the browsers / clients etc.

SRV records are far superior: they're a prioritized and weighted list of hosts for a protocol, which could really cut down on load-balancing complexity.
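
For reference, here is a hedged sketch of how a client would pick a host from SRV records per RFC 2782 (the record data below is made up for illustration): the lowest-priority group is tried first, and weights split traffic within that group.

    import random

    # (priority, weight, port, target) tuples, e.g. from _http._tcp.example.com
    srv_records = [
        (10, 60, 443, "a.example.net."),
        (10, 40, 443, "b.example.net."),
        (20, 100, 443, "backup.example.net."),
    ]

    def pick_target(records):
        """Lowest priority wins; weights split traffic within that group."""
        best_priority = min(r[0] for r in records)
        candidates = [r for r in records if r[0] == best_priority]
        weights = [r[1] for r in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    print(pick_target(srv_records))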


I don't think there is a reason it would have to be tied to HTTP 2, and also not much to gain by explicitly including it. Proposals for using SRV records for HTTP have been around a long time, seems like there have been some open questions and not all that much interest.

The Mozilla Bug is old enough that it is a Mozilla bug (Firefox didn't exist when it was filed): https://bugzilla.mozilla.org/show_bug.cgi?id=14328

Chromium's bug is from 2009: https://bugs.chromium.org/p/chromium/issues/detail?id=22423 (which has some interesting comments regarding DNS fallback behavior and the latency penalties incurred)


The HTTP/2 standard must include provisions for SRV records to be used, since that is part of how clients should follow a URL. Additionally, the SRV specification itself says that a protocol specification must state that SRV records should be used before any client of that protocol takes it upon itself to use them.


The most explicit reference to name resolution I know of in any of the HTTP standards is RFC 7230 Section 2.7.1 (https://tools.ietf.org/html/rfc7230#section-2.7.1), which is still quite vague:

[...] If host is a registered name, the registered name is an indirect identifier for use with a name resolution service, such as DNS, to find an address for that origin server.

[...]

When an "http" URI is used within a context that calls for access to the indicated resource, a client MAY attempt access by resolving the host to an IP address, establishing a TCP connection to that address on the indicated port, and sending an HTTP request message (Section 3) containing the URI's identifying data (Section 5) to the server.

I don't think that excludes SRV-based name resolution. Some sort of standardization would of course be helpful, even if just for reference, but in my mind that could be an independent document recommending SRV for HTTP, without any detail about the version (since HTTP/2 has no property that makes it more or less fit for use with SRV records than 1.1). Adding something to HTTP/2 that may never see any use, just because, seems worse.


Yes; a thousand times yes.

I have written about this here before:

https://news.ycombinator.com/item?id=8404612

https://news.ycombinator.com/item?id=8850251


Is that relevant to HTTP/2, though? Does HTTP/2 say anything about DNS at all?


Isn't that the same as doing DNS round robin?

Also, with software-defined networking load balancing you can have one user-facing IP be backed by many servers.


No, DNS round robin has ... issues.

With SRV, you can set a priority, so it goes to a set of hosts first, then falls back.

You can also weight traffic so that some hosts get more than others.

(Also, the host is actually an A (or AAAA) record, so it could have multiple values as well.)

> Also with software defined networking load balancing you can have one user facing ip be backed by many servers.

Sure - but software-defined (or normal) load balancing is usually inside the datacenter or region. It's another tier behind the DNS layer.


Also, DNS round robin won't work with IPv6. At all. Round-robin DNS depends on a client connecting to the first address record it receives in the DNS reply, and the DNS server altering its responses to set a different address as the first one each time it sends a response. But with IPv6, a client host is required to connect to the address closest to its own (as determined by the longest range of common bits), regardless of the address's position in the DNS reply.
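
The "longest range of common bits" rule the parent refers to (destination address selection, RFC 6724) is easy to sketch; this is a toy illustration with made-up documentation-prefix addresses, not a full implementation:

    import ipaddress

    def common_prefix_len(a: str, b: str) -> int:
        """Number of leading bits two IPv6 addresses share."""
        xa = int(ipaddress.IPv6Address(a))
        xb = int(ipaddress.IPv6Address(b))
        return 128 if xa == xb else 128 - (xa ^ xb).bit_length()

    source = "2001:db8:aaaa::1"
    candidates = ["2001:db8:aaaa::53", "2001:db8:bbbb::53"]

    # The candidate sharing the most leading bits with the source wins,
    # whatever order the DNS reply listed them in.
    print(max(candidates, key=lambda d: common_prefix_len(source, d)))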


Tragic missed opportunity.


I'm surprised that the article doesn't mention anycast, which is more or less the "correct" way of using a CDN on an apex domain, since for the user's purposes it's just a static IP address.

I find anycast to be convenient even for subdomains, since it isn't affected by things like DNS caching (although things like edns-client-subnet apparently help with that).

I'm actually currently looking for a CDN for my website. I don't like www (just personal preference), so anycast is pretty important to me, but there don't seem to be a lot of providers offering anycast for a decent price. The closest I've seen is Google's Cloud CDN, which out of all the CDNs I've tried (a lot) is one of the best, but for a small site like mine I tend to get more cache misses than hits (simply due to eviction).

Maybe I'll write up a blog post about this issue :)


It's odd to hear a CDN complaining about this limitation when it has already been solved for well over a decade by other leading CDNs.

Akamai can serve your apex domain from their edge servers. They do it by giving different answers for the A record to different users, based on where each user is coming from. All that's required is that you use them as your NS.


If you read the start of the article you'll see we do that as well. This only applies to people that don't use Netlify for DNS.


At SunSed, we use Google HTTP(S) Load Balancer which allows us to load balance our entire infrastructure via a single IP.

Our users don't need to worry about CNAME vs A records; they can do whatever they want with the IP. Since we don't need to change this IP, there is no benefit to using a CNAME.

On top of that SSL handshake for HTTPS happens at Google front ends which reduces the load on our servers. Also we can send traffic to different sets of VMs based on the URL! How cool is that?

I really think that Google's HTTP Load balancer is the hidden gem of Google Cloud.


> On top of that SSL handshake for HTTPS happens at Google front ends which reduces the load on our servers. Also we can send traffic to different sets of VMs based on the URL! How cool is that?

Very cool - let's just hope Google is better at hiding the contents of random memory than Cloudflare.


The person who found the Cloudflare bug in the first place is a Google employee.


Are their motives similarly aligned?


Do you operate your own CDN?

If not, then basically on any CDN you need to trust them with the SSL certificates unless you serve your content over HTTP.

Unless you don't use a CDN at all!


I can't tell what your point is. That's why we don't want them exposing random bits of their memory.


You could give them a certificate that is only valid for cdn.example.com, no?


Yes. But it's best to serve your entire website (including the HTML pages) via a CDN to reduce latency.


Am I reading this wrong, or does this only apply to people who are Netlify customers?


It applies to any service where having them host your domain is done by publishing a CNAME record.

But, that's not the only way to do that sort of thing. Firebase, for example, allows you to use A records pointing at their IP addresses.

Cloudflare and WordPress.com allow you to make them the authoritative server for all your records, then they provide an edit interface.

Netlify doesn't mention these as good options, probably because they don't have them to offer.

Edit: Apparently they do offer these options, but have their own reasons for preferring the CNAME approach


I host my own sites and simply use A records.

> When it looks up example.netlify.com, it connects to our advanced traffic director, that returns an A record with an IP address of the server from our pool of currently available CDN nodes that’s geographically closest to the end user.

It looks like the way their DNS redirects/load balancing work is the reason they don't simply allow A records to a static IP.

This gets into the whole "you could be redirected to other servers based on your geographical location" issue; and not necessarily your location, but the location of your DNS server! I'm not sure if Netlify does this, but Akamai does work with ISPs' DNS servers around the world to return different results to get to the closest CDNs. This is why using Google DNS (8.8.8.8) resulted in slower loads for Akamai customers.
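
A toy sketch of that DNS-based traffic direction (my illustration, not Netlify's or Akamai's code; the PoP names and documentation-range IPs are made up): the authoritative server answers the A query with the node closest to wherever the query appears to come from, which is usually the resolver rather than the end user.

    POPS = {
        "us-east": "192.0.2.10",
        "eu-west": "198.51.100.10",
    }

    def a_record_for(client_region: str) -> str:
        """Pick the CDN node for the client's (or, really, their resolver's) region."""
        return POPS.get(client_region, POPS["us-east"])

    print(a_record_for("eu-west"))  # -> 198.51.100.10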


Author here. We do actually offer all of these options.

We offer a public IP address for A records pointing to our main load balancer. This will send all traffic to a single origin instead of serving your HTML pages out of our global CDN.

We also offer DNS hosting for pro plans and up. When you move your DNS to Netlify, the caveat about naked domains doesn't apply (as mentioned in the first paragraph), since we hook the domain record straight into our global traffic director.

For enterprise customers we also offer an anycasted IP address that lets you use our CDN with a normal A record, but we still recommend either using our DNS hosting or a www domain since the DNS based traffic direction is faster at responding to localized issues and offers more precise traffic distribution.


Wouldn't a simpler (for the end customer, not for you) solution be to use anycast on an IP address (or block of addresses) and then let folks always use A records as intended? That solves the ANAME non-local caching issue and also handles people using DNS servers that aren't nearby.


We do run an anycast CDN network, but there's a lot of limitations on BGP routing compared to CDN based traffic direction.

We can only route BGP requests to hardware we control, whereas we can add PoPs in all the major cloud providers on our DNS-based network. We can then use tools like Cedexis or Dyn's internet intelligence to identify where the different cloud providers have the best networking and peering agreements and piggyback on that, plus their DDoS mitigation. This means we get a combination of the best that AWS, Google Cloud, Rackspace, DO, etc., have to offer in that respect.

On the DNS based traffic director we can also do very quick traffic decisions (20s TTL, instant changes) whereas on our BGP routed anycast IP we have to be more conservative and force 10 minute intervals between any up/down changes for a PoP.


I did GeoDNS + Unicast IPs for a while. I had a really rough time making it work, and we ended up building our own anycast network (https://status.neocities.org)

Aside from the root domain issues (and fewer options for market-priced bandwidth), "GeoDNS + Cloud" pushes your traffic into someone else's ASN, which means complaints end up being sent to them, and your hosting is effectively governed not just by one, but by two different ToSes.

This isn't a big deal for a couple thousand sites (unless they're huge), but once you start getting into the hundreds of thousands, you'll see a significant spike in issues (phishing, malware, spam, DMCA, legal threats, etc.) that get sent to whomever owns that IP address. After getting too many of these complaints, those other providers can decide you're just not worth the effort and boot you off their servers.

Crazy hypothesis? Sounds like it would be, but it happens: https://twitter.com/surge_sh/status/685164708861624325. DO did the same thing to us when we tried to use them for part of our CDN early on. After that, I tried three other cloud services that either did the same thing or threatened to do the same thing (to say nothing about the ridiculously overpriced bandwidth).

The choice we were left with: Get our own AS, or die. Mind you, this was over < 30 abuse reports per month, not thousands. Most of these providers are designed for a single company or a wordpress blog, they're not designed (and not really equipped) for usage as infrastructure for a web hosting provider with hundreds of thousands (or millions) of customers.

Building out the anycast CDN was a "drinking from the firehose" experience and had some upfront costs I would have rather not paid, but it solved this existential problem for us permanently, and probably saved our life. From experience, I do think you'll have to do this eventually (or at least do GeoDNS + unicast with your own IPs and AS).


Have you written up your experience with building out the anycast CDN? That would be extremely interesting!


I'd be interested in reading that too


Does this anycasted IP actually serve the HTML page/assets, or does it reply with a redirect to a "stable" IP announced from the same PoP for actually serving the assets?


It serves the HTML or assets directly


Don't you have to worry about the route changing during a TCP connection, causing the destination physical server to change, severing the connection?


Yes, that is also my concern. I hope Matt will share his experience here.


You might want to fix the "two benefits: 1. [...] 1." in the article.


This could be solved by a new record, of course, but how many years exactly would that take? So many companies would have to jump on board.

I'm thinking a record like `DELEGATE <comma-delimited list of record types> <priority> <name server>` or _something_.


> So many companies would have to jump on board.

Noob question: what companies would have to jump onboard to get a new record up and running? Could it not just be one company like DNSimple who first adopts it?


It would require extensive support from browser vendors, so if google got behind a proposal like that, it could probably be pulled off.

Most servers would likely use both protocols for quite a long time before one could be discarded.


I feel like the biggest problem would be all the ISPs' DNS servers; ISPs are notorious for breaking all kinds of stuff, and this would probably be just another thing they break.


Technically, one company (like DNSimple) could add a new record and start using it themselves. For it to be universally supported, however, it would need to go through the standards process and become part of the DNS standards.


My computer doesn't understand your new record type, so if you want me to see your site, you're going to have to wait until I upgrade.


Using 'computer' is a bit disingenuous there. Upgrading a computer sounds like you're buying a new laptop or replacing hardware to deal with a software protocol change.


The software installed on my computer doesn't understand your new record type, so if you want me to see your site, you're going to have to wait until I upgrade it.


I have a hunch you upgrade at least once every six months or so, often to music (;

My point was that your wording made it sound much more like a hardware upgrade, which is on a slower cycle for sure.

And I think you'd probably be good. I hear your preferred OS has a good track record for implementing new networking stuff quickly.


Well, yeah, but more generally there are other people too. :)


Adding www doesn't make any sense for URL shorteners, for example. The same goes for media like Twitter, where characters are counted and "precious": using www. adds 4 characters to the message (in theory, since URL shorteners help there anyway).

Another detail I've noticed since the wide adoption of browsers with a single combined URL/search field: most people don't even care about the exact URL. They just enter what they believe the website's name is and let the search engine do the job if it's mistyped or nonexistent. (That can lead to phishing attacks.)


Do (informed) people even use URL shorteners anymore, given that they've become a malware vector?


I agree, but unfortunately most major corporations/websites do shorten URLs...


Why use CNAME at all? You can put the same IP address into as many A records as floats your boat. Bonus: saves a round trip to the DNS server.


Because it lets different organizations/organizational units control different parts of the resolution. For example, you don't want to give Heroku control of your whole DNS (and they don't want to be in the DNS business), but you want to let Heroku change the actual network IP addresses that handle your app on their own; you don't even want to have to know what they are.

CNAMEs are what make 'the cloud' work.


Yes!

CNAME is a solution looking for a problem.

Just use A records, everything is just better that way.


Surely AAAA these days :-)


This is slightly off topic, but can anyone elaborate a little bit on why/where/how Netlify differs from Heroku? It's a little more expensive and you can't host your back end, so I'm a little confused about the value provided.


I find it perfect for hosting static pages generated with Hugo.


hugo looks pretty cool, thanks!


It very much depends on the age of your target market. I'd say there's a cutoff around age 30 where people simply omit the www. when talking about addresses and assume everything is just whatever.com.


Short answer: Don't www. Long answer: Do www.


At least Facebook doesn't use WWW.


They do


I tested it earlier and it was shown as facebook.com, but now it shows www.facebook.com. Perhaps they changed something after this discussion.


www is a relic of the early Internet. There's really no point in it today.


This is an ad... Why is it on the front page?

The article brings absolutely no value.


104 comments say that you're wrong. There's a years-long discussion about www vs non-www, and this is a continuation of it. It served the purpose of sparking the conversation; that was its value.


www is dead.


Long live www.


A few months ago, we built https://www.forcewww.com/ to make our lives, and that of our customers, and everyone else, easier.



