$0.02/hr works out to roughly $14/mo if you leave it on ($0.02 x 24 x 30 ≈ $14.40).
Is this really $6/mo cheaper than a linode 512? That might be nice for personal projects.
I'm trying to teach myself some things that really need more than my local dev machine (Puppet, backup strategies/more resilient code, learning Cassandra). I've been running a bunch of VMs on my laptop, but my dev machine is a weakling and can hardly handle it.
It almost seems like I could just spin up a dozen of these little instances for $2.88 per waking day (12 instances x 12 hours x $0.02/hr) and teach myself under substantially more "real" conditions on the cheap. That's something I'd love to have as an option on Linode, given that teaching myself is a large part of what I use it for.
Is there any reason that wouldn't work? Is this too complicated in practice?
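For concreteness, here's roughly what that would look like scripted with boto (a minimal sketch, assuming boto's EC2 API; the AMI ID and keypair name below are placeholders):

    # Sketch: spin up a dozen t1.micro instances for a day of
    # experimentation, then terminate them so the meter stops.
    import boto.ec2

    AMI_ID = 'ami-xxxxxxxx'    # placeholder Ubuntu AMI
    KEY_NAME = 'my-keypair'    # placeholder SSH keypair

    conn = boto.ec2.connect_to_region('us-east-1')
    reservation = conn.run_instances(AMI_ID,
                                     min_count=12, max_count=12,
                                     instance_type='t1.micro',
                                     key_name=KEY_NAME)

    # ... a day of puppet/cassandra experiments ...

    conn.terminate_instances([i.id for i in reservation.instances])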
One thing to note is that even EC2's "small" instance pales in comparison CPU-wise to even the low-end Linodes. When playing with EC2, it blew me away how ridiculously slow they are until you get to the higher tiers (oddly, memory speed also seemed very poor - I had to wonder if even the memory was networked somehow).
I took a few minutes to google a few terms, like "cassandra EC2" and so forth.
I get the general impression that doing anything architecture-y or deployment-y requires Amazon-specific steps, steps that I don't have to take on a barebones Ubuntu install (of the 'Do this to get Thrift working on EC2' variety).
That sucks some activation energy away.
The ease (and hurdles) of VMs on my dev machine, even if performance is a bit of a bear, still beats doing anything Amazon-specific, because it's still all just Unix.
I don't want to learn "Amazon", I want to learn [puppet, Cassandra, et al].
Hmm. I run Cassandra on EC2 and I can't think of any EC2-specific setup that I need to do. It's just plain Ubuntu. I do install my own custom Cassandra build to stay on top of releases, but I would do that on any Ubuntu install.
I think he's talking about making an AMI for his particular packages.
This is an issue for me too. It looks like a hassle, the instructions look vague, and all I want to do is just set up a VM on my machine and then run it on Amazon.
Turnkey Linux seems to make this better.
I'm guessing you're just using an off the shelf Ubuntu AMI, right?
These instances are different though, they are able to burst CPU usage, just like Linode. AFAIK, none of the other EC2 instance sizes are able to do this.
Linode doesn't "burst" CPU usage. Processor time is shared fairly among Linodes on a host, and you can use any time that isn't used by others. It's worth noting that each Linode has access to four cores, so you can go up to 400% CPU utilization. For a good idea of how Linode's CPU performance routinely exceeds that of competitors, try this review: http://journal.uggedal.com/vps-performance-comparison
Semantics. The end result is the same: there is almost always a large amount of spare CPU cycles on each physical box, which can be utilized by instances to "burst" above their allocated capacity.
It's more than semantics. The term "burst" carries the implication that you only get something for a short time. That's really not a good reflection of reality in this situation.
Words have meanings and carry implications; by your logic, we might as well use the term "banana" instead. The point is that the "burst" concept doesn't reflect reality here, and thus your point has no merit.
I haven't seen anyone mention the cost of IO requests associated with EBS. Quoted from http://aws.amazon.com/ebs/ :
As an example, a medium sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second x $0.10 per million I/O).
I'm running 4 m1.small EBS-backed LAMP servers across all 4 regions. I pay about $1/mo for EBS IO (8-11 million IO requests per instance per month). The servers get a few thousand hits per day. The cost of EBS IO should be minimal in most cases. The benefit of EBS versus the local storage you'll get with Rackspace or Linode is greater durability, because it is off-instance and AWS automatically duplicates the data. If the VM host system fails (e.g. a power supply goes bad), you can bring your instance back online on another host within a very short period of time, in the exact state it was in at the time of failure.
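For reference, a backup along those lines is roughly a one-liner per volume with boto (a sketch; the volume ID is a placeholder):

    # Sketch: snapshot an EBS volume (stored in S3) for durability.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    snap = conn.create_snapshot('vol-xxxxxxxx',   # placeholder volume
                                description='nightly backup')
    print(snap.id)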
One word of caution: sometimes when an instance becomes unavailable (think network partitioning), all its EBS volumes will become unavailable as well. You can't snapshot, you can't detach. My most frustrating experience on EC2 yet.
If you use a 1-year reserved instance, it'll cost you about $115/year ($54 upfront plus $0.007/hr usage, roughly $61 over the year), which is substantially cheaper than the lowest-priced Linode. I'll be switching http://mbusreloaded.com/ after its Linode subscription runs out, as there's absolutely no way I use more than 10 GB of bandwidth per year.
Yup - you could - this is a fantastic move by Amazon. This is the one thing that's been missing from my experimentation world. Linode is fantastic for the servers I run permanently there - I have no plans to remove them - but the commitment to an instance for a certain period of time is what keeps me from using it to work on deployment scenarios and whatnot. Amazon is fantastic for this (and has different constraints, of course).
This just brought AWS down into the realm of competition with the likes of Linode and others, and we can now figure out how to spin up a dozen nodes and work with all the innards of AWS without worrying about getting accidentally hosed on forgotten instances... works for me.
Yes, but only if the nodes are in the same region and availability zone. From amazon's pricing page [http://aws.amazon.com/ec2/pricing/]:
There is no Data Transfer charge between Amazon EC2 and other Amazon Web Services within the same region (i.e. between Amazon EC2 US West and Amazon S3 in US West). Data transferred between Amazon EC2 instances located in different Availability Zones in the same Region will be charged Regional Data Transfer. Data transferred between AWS services in different regions will be charged as Internet Data Transfer on both sides of the transfer.
None that I can see. I'm going to buy one reserved instance, even though I have 7 hardware servers. I need to get better with the platform, and at $54/year the price is right.
One thing that is useful: if you like to keep several instances configured and 'ready to go' for your learning experiments, when you are done, don't terminate them, just stop them. You don't pay for stopped instances (but then you do pay 24 cents a day per unattached Elastic IP address).
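In boto terms the stop/terminate distinction looks like this (a sketch; the instance ID is a placeholder, and this only applies to EBS-backed instances):

    # Sketch: stop an EBS-backed instance between experiments; while
    # stopped you pay for its EBS storage, not for instance hours.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    conn.stop_instances(['i-xxxxxxxx'])    # placeholder instance id
    # ... later, resume exactly where you left off ...
    conn.start_instances(['i-xxxxxxxx'])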
Linode is a VPS provider, not a cloud provider like EC2. Instances are always on, not on-demand. If they are ever killed, it would be because of some kind of outage, which in my experience has been extremely rare. Slicehost has also been good for VPS.
Aside: How far off are we from renting our PCs in the cloud, and just having a local terminal? I know it's an old Failed Dream (mainframe-terminal, client-server, settop box etc), but maybe we're getting closer...
It seems a bit ridiculous, because you still need a bit of local power for display and fast reactions, and current iPhones/netbooks could do with more power. But desktop PCs have been fast enough in that respect for a while. An advantage of the cloud is that as RAM, cycle prices, etc. drop, you get the benefit (more of them, or cheaper) without the hassle of physically upgrading. And bursty usage is available too, e.g. when compiling.
There's solid economics here: it's a sort of timesharing idea, instead of cycles being wasted while you type, someone else uses them to compile. Even more compelling globally - someone else uses them while you sleep. The same argument works for sharing your desktop's own cycles, p2p, but a centralized cloud has admin advantages and other economies of scale.
I regularly use a cheap netbook as a terminal to an m1.large instance for development work. My scripts use spot instances to keep the price low--typically around $0.13/hr. Not for everyone, but it's saved me maybe $500 over buying a fast laptop. Unlike a laptop I can easily let others log into the system, leave it on for long compute jobs when I'm out of town, and don't have to worry as much about losing important data.
The "failed dream" been successfull but never for long. As disk bandwidth and network bandwidth increase irregularly and leapfrog one another, it goes in and out of fashion.
The user experience is a very sensitive variable. "Cheaper" matters to Enterprise folks, but lag will make the rest of us pay a little more.
In fact, escaping local maintenance is the driving force behind web apps already. They generally lag like a mother, have lame controls, and update in whole-page flashes, but I see more every day.
If you take a look at the sad state of this country's broadband infrastructure, we are a long way off. I live in a "tech" city, I have two internet connections (Time Warner and ClearWire) patched together on a high-end router, and I still wait for things I shouldn't have to.
OnLive sounds cool, but it's a little hard to believe, since latency is a common issue in regular gaming already. Demos are invariably carefully orchestrated. The other thing that gives me pause is that it is such a compelling vision for publishers. It's an absolutely magic idea. However, this enthusiasm is not technology-driven: it's not about what has become possible, but what would be really amazingly cool if it was possible.
But latency isn't such an issue for compiling etc. Someone was saying in the Google Instant thread that 250ms is imperceptible for search (and I've heard 200ms for the command line, from Rob Pike on Go). That's much lower-hanging fruit than gaming, where 250ms of latency isn't just far from instant, it's unplayable.
I thought there was no way it would work, got a beta invite a few months back and was blown away. My brain still insists that the game had to be installed on my local system to be able to be working so well.
How is it hard to believe? They are essentially doing what Akamai did and locating server farms very "close" to end users on the network. I have a 20ms ping from the closest Akamai CDN server.
My pictures are all stored "in the cloud" (on facebook, or flickr, or photobucket). "My" music doesn't even really exist anymore, it is playlists on grooveshark, or stations on pandora.
Documents? Google Docs.
I do all of my web development in Vim, running on VPSs.
The only stuff that doesn't happen "in the cloud" is specialized, media-creation-type work, an activity that I would be surprised if more than 5% of the population participates in.
There isn't really any data to be backed up here, except for the web dev things. For that, there are three servers that all exchange rsyncs every night. One is prod, one dev, and one that is just using some spare disk space on one of the arrays at work.
For the photos, the online stuff is the backup. The originals still live either on their negatives, or on SD cards.
Online is not a backup. There is a reason we back up databases to magnetic tape, store them in remote locations, rotate tapes, and test their integrity.
That CPU performance is consistent with what they give as the burst performance: up to 2 EC2 Compute Units, which should be 2x as much as the small instances' 1 ECU. Would be interesting to know how often/long you can burst, and what the non-burst baseline is, though.
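One crude way to probe that yourself (a sketch, nothing authoritative): time a fixed chunk of CPU work in a loop and watch for the per-iteration times to jump once the burst allowance runs out:

    # Sketch: crude CPU-throttling probe. On a bursting instance the
    # per-iteration time should rise sharply when the burst ends.
    import time

    def burn(n=10 ** 7):
        x = i = 0
        while i < n:
            x += i * i
            i += 1
        return x

    for k in range(120):    # a couple of minutes of samples
        t0 = time.time()
        burn()
        print('%3d %.2fs' % (k, time.time() - t0))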
Aside: I looked into AWS a couple weeks ago, to play with a simple webapp idea, but the myriad choices, acronyms, and signups confused me, and there seemed to be no free option for getting started and gaining initial traction. It seems focused on sophisticated enterprise users (nothing wrong with that). So I went with Google's App Engine, which was much simpler, and has been great. These micro instances seem the same.
If you can make your app fit within the significant limitations of App Engine, then it is a great service. I've used it for several projects from basic CMS to AJAX chat.
That said, sooner or later you'll want to do something that should seem possible on App Engine (e.g., image transformations with BufferedImage) and you'll hit a brick wall.
That's when I turn to a generic Ubuntu image running on EC2. It's not free, but with spot pricing it's awfully cheap. I expect spot pricing for this newest micro size to stabilize at around $10 a month.
Maybe not as huge a deal for Linux instances, but this is HUGE for Windows users. There's nothing comparable elsewhere. The cheapest Rackspace Cloud instance is $0.08/hour for 1GB. There is no faster, cheaper way to spin up a Windows server than AWS now.
Yeah, they're not going to run SQL Server. It's plenty for just serving up some simple stuff. I use the small instances mostly for testing stuff out with different configurations and running small short-term side projects. These will work perfectly for that.
Could you pay $14 and have a remote IE instance that the whole team can access no matter where they are, instead of using up your precious local memory on every machine?
A good strategy I've used with EC2 for the past 6 months is to always purchase spot instances using a bid price that is just slightly above the on-demand price. Generally spot pricing stays around the much lower reserve pricing but will occasionally spike. By doing this, you basically get reserve pricing without having to pay the upfront reserve fee, and can keep your spot instance online long term. This site provides some useful historical data on spot pricing: http://cloudexchange.org
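The same strategy in boto (a sketch, assuming boto's spot API; the AMI ID is a placeholder and $0.035 stands in for "slightly above on-demand" for an m1.small):

    # Sketch: bid just above on-demand so the request only gets
    # outbid during unusual price spikes.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    requests = conn.request_spot_instances(
        price='0.035',              # just above m1.small on-demand
        image_id='ami-xxxxxxxx',    # placeholder AMI
        count=1,
        instance_type='m1.small')
    print(requests[0].id)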
I have been horribly burned and disfigured following this advice. It's important to keep in mind that just because it's unlikely the spot price will go above the on-demand price, there's nothing preventing that. It has happened in the past, and it will again.
The important takeaway: don't use spot instances unless you are 100% okay with the idea that your machines will disappear without warning at some point. Let that sink in.
Presumably, there are people out there using it for the things it was (likely) made for -- short computational tasks, temporary processing, or jobs that require lots of computation but aren't time-sensitive -- meaning there will always be people who aren't using the instance 24x7.
Still, I wonder the same thing -- actually, whether or not it's possible for the spot pricing to go higher than the on-demand pricing, etc.
However, they can be great to help fill out your server cluster with things that don't always need to stay up (like Mongrel servers). Buy some reserved instances, then put out several spot bids as well for things you can load-balance across.
That said, keep in mind that if you need to keep at least one instance up, you need to put them in multiple availability zones, or vary the max spot prices - if there's a load spike that leads to your spot instances being terminated, it'll likely hit all of the spot instances at a given price in the same AZ.
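A sketch of that hedge (zone names and price ceilings are illustrative; assumes boto's spot API):

    # Sketch: spread spot bids across AZs at varied max prices, so a
    # single spike can't terminate every instance at once.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    for zone, price in [('us-east-1a', '0.035'),
                        ('us-east-1b', '0.040'),
                        ('us-east-1c', '0.045')]:
        conn.request_spot_instances(price=price,
                                    image_id='ami-xxxxxxxx',  # placeholder
                                    instance_type='m1.small',
                                    placement=zone)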
This might just make me replace the Slicehost instance I use for Mercurial and build server. Elastic IP + EBS + micro instance makes a pretty nice low level machine.
It always bothered me that for a development server you are basically overpaying for bandwidth. Who cares that I have 450GB of bandwidth when I use maybe 30GB per month?
Try Rackspace Cloud Servers (Rackspace owns Slicehost too). Pretty much the same configs as slicehost except no bundled bandwidth so you only pay $11 a month.
No affiliation with them, but I do have VPSs at both (Slicehost for things that require lots of bandwidth).
It seems there is no local storage included in the price.
I haven't tried it, but that probably means complicated setup, which is a pity: the price could appeal to people launching side projects at minimal cost, like me, but being a side project also means that not much time can be devoted to sysadminery.
Go to the AWS console and launch one. It's pretty much as simple as it can be. They automatically use EBS so they're persistent. No special configuration required.
Then it is really interesting. I have a few possible use cases in mind:
- we are hosting a little software load balancer for web services, and it definitely does not need more than that
- thanks to Amazon's web service API, it is relatively easy to set up automated recovery plans (see the sketch below), and the idea is very attractive, but until now I was deterred by the price
- for small web applications with low bandwidth, the price is good. For a one-year reserved instance, you pay $54 up front, then $87.60 for usage over the whole year, for a total of $141.60. That's a lot less than renting a server at linode.com for a year (~$220).
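A rough sketch of what such a recovery plan can look like with boto (all IDs are placeholders, and the 'role' tag is a hypothetical convention):

    # Sketch: naive recovery plan. If the watched instance is gone,
    # launch a replacement and move the Elastic IP over to it.
    import time
    import boto.ec2

    AMI_ID = 'ami-xxxxxxxx'       # placeholder
    ELASTIC_IP = '203.0.113.10'   # placeholder

    conn = boto.ec2.connect_to_region('us-east-1')
    res = conn.get_all_instances(filters={'tag:role': 'balancer'})
    running = [i for r in res for i in r.instances
               if i.state == 'running']
    if not running:
        new = conn.run_instances(AMI_ID,
                                 instance_type='t1.micro').instances[0]
        while new.state != 'running':
            time.sleep(5)
            new.update()
        conn.associate_address(new.id, ELASTIC_IP)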
Don't forget that static IP is extra. All bandwidth is extra. Memory is at a fixed limit (some VPSes will let you burst above your assigned memory). No software like Plesk to help configure the box. Those things can add up.
Edit 1: I mistakenly said that a static IP is extra; it's not, as long as it's in use.
Static IPs only cost money if you're not using them. For most small VPSes utilization is practically nothing. I'm not saying it will replace VPSes, but it's a valid competitor.
Yeah, the storage doesn't seem like a huge deal, but the bandwidth might be. A $12/mo VPS from prgmr.com gives you 80 gigs/month free transfer, which Amazon would charge you another $12 for.
If you try to construct a standard VPS or dedicated server plan out of EC2, you're always going to find that the bandwidth makes EC2 more expensive -- but that ignores the fact that most people don't even come close to using all their allocated bandwidth. The fact that AWS only charges for actual bandwidth used makes a big difference.
(The same applies with Tarsnap's $0.30/GB storage cost vs. fixed-plan backup pricing -- $10 for 50 GB sounds cheaper, but if people only use 5 GB of that on average, it turns out to be far more expensive.)
Perhaps you don't understand. This is EC2 we're talking about, not prgmr. People run massive production websites on EC2 because it is reliable, predictable, and secure. It provides features that allow you to recover from outages and failures. For instance, being able to periodically snapshot your EBS volume to S3 and recover from a total datacenter failure within a few minutes by reconstituting the volume in a separate datacenter. However, this just scratches the surface on what AWS offers over a traditional VPS provider.
Reserved instances (assuming you're going to use it for the full year, which of course is the big caveat) also bring the cost down to $0.007 an hour with an up-front investment of $54, so:
(($0.007 * 24 * 30)*12 + 54)/12 = $9.54 a month assuming you use it for the full year.
Edit: Also, as mentioned by _delirium, Linux instances are $0.02 an hour on demand. I did the Linux calculation above; the same one for Windows uses $0.012 per hour.
Same here. I am sure EngineYard will use this opportunity to provide services on micro instances. It's going to be a big boost for EngineYard. Heroku, not as much I guess, since they run multiple dynos on a single machine anyway.
It's my understanding that the indices would be sharded as well, so wouldn't you be able to just fire up more instances if the indices started to approach some kind of 80% figure?
If you book a reserved instance, the price for Linux gets as low as $0.01/hr ($54/yr upfront).
It's a bit premature, but for spot instances, at the moment Windows ones are around $0.0135 (Linux history is not yet available). As with other instance types, it looks like spot instances get you the usual ~60% off the on-demand price.
Sure. They probably followed a similar reasoning (example with fictional numbers, but probably close to reality; the arithmetic is checked in the sketch after the list):
* They have 64GB RAM hosts.
* They want to dedicate only up to 85% of the RAM to the Xen instances (keep 15% for the host OS, buffercache, etc).
* The Operations/Management team decides to target an overall rate of $1.82/hr per host to achieve desirable profits.
* The AWS marketing department has a requirement that instances be priced $.xx/hr (no fractional cents) to evoke "simplicity".
* At a first pricing attempt, they see they have the choice of charging $.02/hr and assigning 65536 * .85 / (1.82 / .02) = 612MB per instance
* ...or charging $.03/hr and assigning 65536 * 0.85 / (1.82 / .03) = 918MB per instance
* They select the first option (612MB/instance) because it is deemed sufficiently smaller than the existing "small" 1.7GB instance offering, whereas 918MB was not small enough.
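The arithmetic checks out (a few lines of Python, using the same fictional numbers):

    # Check: per-instance RAM at each candidate price point.
    host_ram_mb = 64 * 1024        # 64GB host
    usable = host_ram_mb * 0.85    # 85% left for guests
    target_rate = 1.82             # fictional $/hr per host

    for price in (0.02, 0.03):
        per_instance = usable / (target_rate / price)
        print('$%.2f/hr -> %dMB per instance' % (price, per_instance))
    # $0.02/hr -> 612MB per instance
    # $0.03/hr -> 918MB per instance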
That makes almost no sense. S3 is a purely network based file delivery service over HTTP, and pre-dates EC2 by a significant amount of time.
A Xen hypervisor needs a fair amount of memory for its own operations, plus it can buffer the physical disks in the machine as well as any network-attached storage. If these servers were also hosting S3 in their "spare time", it would degrade performance and expose the system to potential vulnerabilities.
Primarily, the fact that Amazon really has its act together with respect to security. That silly HMAC canonicalization bug notwithstanding, they've made a whole lot of good design decisions. I currently use Linode. A year ago, I reported two vulnerabilities in their control panel to them, both elementary in nature, one of them with a PoC exploit. Last I checked, neither has been fixed.
So how often do EC2 instances go down? Is it at hardware-failure rates, or more often? Can I use this as a VPS replacement, and not have to worry about monitoring and fast restoration (for non-important projects)?
I think you'd need a pretty large sample set before you came to any reasonable conclusion.
I've been running ~10 nodes for AdGrok for the last 3 months, and we've already had one node fail (in that it wouldn't respond to any ec2 cli command to shut down or even terminate).
That hardware failure rate is about what I'd expect if it was our own colo and our own machines. Stuff always breaks.
It's difficult to say. I've worked at two companies that have used EC2 for various purposes. One had an instance that was used for dev work get "corrupted" on three separate occasions (I couldn't get the specifics, but there were definitely I/O issues with the instance storage), and the other has been using the same production box for more than a year.
The bottom line is that EC2 installations need to be designed as semi-permanent. My preferred strategy is similar to how Google talks about their hardware (when they do), that any one server can go down at any time, but the overall setup is resilient to failure.
We've been running hundreds of instances on EC2 for a couple years, and have never seen one just "go down." However, we will get notifications of "degraded instances." When an instance is degraded, you have some window (generally a couple days) to move the services running on that instance to another one. Even at the aforementioned scale, this happens maybe once every three to four months.
Can you use this as a VPS replacement? Probably. My guess is that your uptime will be no worse than some VPS provider. However, if you're storing information on the ephemeral storage, the onus is on you to get it to the new instance. I imagine that isn't generally the case on a VPS.
You may be able to mitigate this by using EBS (required in the case of these micro instances), but I've only used EBS a handful of times, and am no expert on the subject. If I understand their layout for these micro instances, it would simply be a matter of spinning up a new instance and spinning down the degraded node.
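If I've got that right, the volume move itself is two boto calls (a sketch; the IDs and device name are placeholders):

    # Sketch: move an EBS volume from a degraded instance to a
    # replacement instance.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    conn.detach_volume('vol-xxxxxxxx', instance_id='i-aaaaaaaa')
    conn.attach_volume('vol-xxxxxxxx', 'i-bbbbbbbb', '/dev/sdf')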
I've been using the high-cpu medium instances (c1.medium) for our rails nodes, just to avoid the slothy m1.small CPU. It seems like these are tailor-made for running either haproxy or your web tier!
I notice that, in a sense, AWS proves that the Total Cost of Ownership of Windows infrastructures is higher than the TCO of Linux infrastructures.
Amazon charges more for Windows instances across their entire offering. A Windows micro instance costs 50% more than a Linux micro instance ($.03/hr vs $.02/hr). This likely reflects Amazon's statistical studies on their EC2 datacenters that a Windows stack (OS + apps) uses on average more resources than a Linux stack, therefore more power costs, cooling costs, etc.
I imagine the price difference also relates to licensing costs for Windows instances. Also, TCO is more than just compute efficiency, although the latter is important in its own right.
Who said Amazon charges according to what it costs them? In fact, I CAN say that, according to the market, Windows is better than Linux, and hence Amazon is charging more for Windows machines.
Amazon EC2 is sooo slow. I spun up an instance on both Amazon and Rackspace to see how long it would take to render a frame in Blender. It is shocking how big the difference was. I didn't do an apples-to-apples comparison, but on Rackspace, 64-bit Blender 2.49b rendered the frame in 47 seconds; on Amazon, Blender 2.48 on Linux took 17 minutes!
I don't think that argument is superficial. At scale, Amazon is actually quite pricey. Currently, Amazon makes most sense if your site does have large variability in usage and if it makes use of the ability to spin up/down instances on demand. If you're an event-related site where usage goes up by a factor of 10-100 for a few hours every week, for example, Amazon makes a whole lot of sense.
However, if your usage is way up there all week long, it seems to me there are significantly cheaper alternatives, e.g. Hetzner servers.
One of the biggest advantages of using EC2 is its scaling capabilities. EC2 offers 10 different instance sizes from m1.small to cc1.4xlarge (with 10 Gbps clustering capabilities), 4 different regions, auto-scaling, load balancing, high availability via off-instance storage, durability via replication, GigE uplinks, and much more. You can't get that level of features from any other IaaS cloud I am aware of. Yes, you might pay more than co-locating yourself or leasing some dedicated servers... but that isn't exactly an apples-to-apples comparison with EC2.
Do you have a testing environment, or multiple testing environments? Do they need to run all the time? If not, you save there. Do you like being able to spin up an entire duplicate of your environment to do environmental tests? You can't do that in a normal server environment without ridiculous expenditure.
The best option if you wanted to do this would be to install SQL Server Express, since you're not going anywhere near the memory limit that product is bound by.
That way you're not going to incur any licensing cost for SQL Server, provided you can live with the database size limit of 4 GB.
Tried it with SQL Server Express on Server 2008, pushed RAM usage right up to ~530/613 MB before even starting SQL Server.
Regardless, I'm glad to see the offering. I was looking for something similar, and with the bar to entry constantly getting lower as competition rises, I'm sure it'll find a niche.
This is awesome, now it's super affordable for me to run beanstalkd and a few queue workers as necessary and communicate with my Heroku app over the Amazon private network.
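For flavor, a minimal worker on one of these micros might look like this with the beanstalkc client (a sketch; the host and tube name are placeholders):

    # Sketch: minimal beanstalkd worker loop via beanstalkc.
    import beanstalkc

    def process(body):
        print('got job: %s' % body)   # stand-in for real work

    q = beanstalkc.Connection(host='10.0.0.5', port=11300)  # placeholder
    q.watch('jobs')                   # hypothetical tube name
    while True:
        job = q.reserve()             # blocks until a job arrives
        process(job.body)
        job.delete()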
That is exactly the position I'm in right now. Was going to use GAE but now I'm rethinking that decision.
A micro instance of their relational database service would be perfect for my use, but I guess the ram would be too small.
Hm... I don't see AWS and GAE playing in the same ballpark at the moment.
I'm sure AWS will continue to increase the convenience and ease-of-use of their services, and GAE will continue to increase the breadth of theirs, but right now they seem to be targeted at quite different app-dev areas.
I'm an AWS user, and I also use Rackspace some, so it's interesting to find this article indicating you're better off with a small Rackspace instance than a medium AWS instance.
This study was sponsored by Rackspace... I think the end results are questionable. Here is a study I wrote comparing AWS, Rackspace and some other cloud providers using some more standard benchmarking methods: http://blog.cloudharmony.com/2010/05/what-is-ecu-cpu-benchma...