Compare the cost of operating multiple servers, on one hand, with the lost revenue from having weekly or monthly maintenance windows during which you just put up a Fail Whale page. Most people overestimate the latter by a huge margin.
That's fine if your service is really local - you can do it at night. Not really an option for a global site, though. Imagine if Twitter went down for a few hours every month. People are addicted to Twitter, and the downtime might come at a critical time for an entire country (e.g. the Queen dies). Even worse, you can't guarantee how long the upgrade will take.
You'd definitely need at least two servers. But I think simple master/slave replication, with a switchover between the two, would be enough.
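The switchover logic for that two-server setup can be sketched in a few lines. This is a hypothetical illustration, not from the thread: it assumes you have some health check for each server and just need to decide where traffic should go, preferring the master and promoting the slave only when the master is down.

```python
def choose_active(master_up: bool, slave_up: bool) -> str:
    """Decide which of the two replicated servers should serve traffic.

    Prefer the master; fall back to the replica only when the master's
    health check fails. With both down there is nothing left to route to.
    """
    if master_up:
        return "master"
    if slave_up:
        # Promote the replica while the master is down for maintenance
        # or has failed outright.
        return "slave"
    raise RuntimeError("both servers are down")


# Scheduled maintenance window: take the master down, traffic moves over.
assert choose_active(master_up=False, slave_up=True) == "slave"
# Normal operation: the master keeps serving.
assert choose_active(master_up=True, slave_up=True) == "master"
```

In practice the hard part isn't this decision but everything around it: detecting failure reliably, avoiding split-brain when both nodes think they're master, and replaying any writes the replica hadn't received yet.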
Yeah, I was just responding to the "how about OS updates" part of the parent comment, for which scheduled downtime is a reasonable option. To protect your service from unscheduled downtime though, like a failing RAID array, you would need at least two servers.
Personally I wouldn't run a critical service on only one server, but two servers? Definitely doable. I actually have a service running on two servers in different DCs 700 miles apart. Zero downtime in 9 years. :)