Avotons are very slow; an 8C SoC will typically be slower than an 8-year-old 2C desktop CPU (I ran Go builds as a benchmark on my own 2.4 GHz C2750 vs. a 2008 iMac with a 2.8 GHz Core 2 Duo).
As for Scaleway, some people seem to like it very much, but I found their policy of spamming their users problematic. They (Online.net) mock you at registration with a sleazy pre-checked and disabled box for receiving spam ("product news", etc.), so I would consider their offers "ad-supported".
The C2 is advertised as "bare-metal", but since they offer a 4C variant I doubt that (there is a 4C variant, the C2550, but that doesn't seem like a sane choice for a dedicated box). The C2L might be a full dedicated box (or not), but the C2S and C2M seem very much VPS/shared. It's likely based on Supermicro MicroBlades: http://www.supermicro.nl/products/MicroBlade/module/MBI-6418... (4 nodes in 1 3U blade!).
I just tried all three of them for a few minutes. The C2S is a C2550, and the C2M and C2L are C2750s. They each appear to be bare metal and not virtualized. The VPS offering, which is advertised as a VPS, appears to be virtualized on top of a C2750. The C1 is a Marvell Armada 370/XP, a bare-metal ARMv7l board.
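(If anyone wants to repeat that check on the x86 boxes, here is a minimal sketch of one way to do it; it's my own approach, not necessarily what the parent did. On a KVM/Xen/VMware guest the Linux kernel exposes a "hypervisor" flag in /proc/cpuinfo, while on bare metal it's absent. It won't catch every setup, and it doesn't apply to the ARM C1.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Looks for the CPUID "hypervisor" flag that Linux exposes in /proc/cpuinfo
    // on x86 guests. Absence of the flag suggests bare metal.
    func main() {
        data, err := os.ReadFile("/proc/cpuinfo")
        if err != nil {
            fmt.Fprintln(os.Stderr, "could not read /proc/cpuinfo:", err)
            os.Exit(1)
        }
        if strings.Contains(string(data), "hypervisor") {
            fmt.Println("hypervisor flag present: looks like a VM")
        } else {
            fmt.Println("no hypervisor flag: likely bare metal (or an unusual hypervisor)")
        }
    }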
> I would consider their offers "ad-supported".
This also is not my experience. I haven't ever received spam from them, nor received advertising.
> Avotons are very slow
This part of what you wrote is true, unfortunately. :(
Please run nbench [1]. I ran it on the C1 in April 2015 and got the following results:
CPU : 4 CPU
L2 Cache :
OS : Linux 3.2.34-29
C compiler : gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1)
libc : libc-2.19.so
MEMORY INDEX : 5.859
INTEGER INDEX : 8.164
FLOATING-POINT INDEX: 5.770
For comparison, here are an Intel Atom N450, a Core 2 Duo L7500, and a Raspberry Pi 1 Model B:

Atom N450:
CPU : Dual GenuineIntel Intel(R) Atom(TM) CPU N450 @ 1.66GHz 1667MHz
L2 Cache : 512 KB
OS : Linux 3.2.0-23-generic
C compiler : gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
libc : libc-2.15.so
MEMORY INDEX : 10.845
INTEGER INDEX : 9.315
FLOATING-POINT INDEX: 8.748

Core 2 Duo L7500:
CPU : Dual GenuineIntel Intel(R) Core(TM)2 Duo CPU L7500 @ 1.60GHz 1601MHz
L2 Cache : 4096 KB
OS : Linux 3.5.0-26-generic
C compiler : gcc version 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
libc : libc-2.13.so
MEMORY INDEX : 18.734
INTEGER INDEX : 14.318
FLOATING-POINT INDEX: 23.178

Raspberry Pi 1 Model B:
CPU :
L2 Cache :
OS : Linux 3.6.11+
C compiler : gcc version 4.6.3 (Debian 4.6.3-14+rpi1)
libc : libc-2.13.so
MEMORY INDEX : 2.536
INTEGER INDEX : 3.159
FLOATING-POINT INDEX: 2.157
How "reliable" are those numbers in estimating real application performance?
I've run nbench for a 1 CPU VM in EC2, and a 2 CPU VM in DO, and the former is a lot faster ( almost 2x ) than the latter!
EC2 Ubuntu (1 CPU)
==============================LINUX DATA BELOW===============================
CPU : GenuineIntel Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz 2400MHz
L2 Cache : 30720 KB
OS : Linux 3.13.0-74-generic
C compiler : gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
libc : libc-2.19.so
MEMORY INDEX : 39.370
INTEGER INDEX : 35.426
FLOATING-POINT INDEX: 53.665
DO CoreOS (2 CPU)
==============================LINUX DATA BELOW===============================
CPU : Dual GenuineIntel Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz 2000MHz
L2 Cache : 15360 KB
OS : Linux 4.2.2-coreos-r2
C compiler : gcc version 4.9.2 (Debian 4.9.2-10)
libc : libc-2.19.so
MEMORY INDEX : 20.668
INTEGER INDEX : 19.277
FLOATING-POINT INDEX: 28.370
Edit:
Here's also a 2 CPU machine on Azure
==============================LINUX DATA BELOW===============================
CPU : Dual GenuineIntel Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz 2397MHz
L2 Cache : 30720 KB
OS : Linux 4.2.2-coreos-r2
C compiler : gcc version 4.9.2 (Debian 4.9.2-10)
libc : libc-2.19.so
MEMORY INDEX : 28.667
INTEGER INDEX : 24.351
FLOATING-POINT INDEX: 39.950
This is helpful, but you're not mentioning what instance type at EC2. t2 and t1 instances do heavy throttling after you use up your CPU credits[1], so it's likely that you were just using all or most of a full Xeon CPU. I got similar results on even a t2.nano... until I used up the CPU credits. Then it slowed to a crawl!
Also keep in mind that nbench only tests a single CPU[2].
Apples to apples would probably be a small m3 or m4 against a comparable DO instance (but let's not go down that cost-disparity rabbit hole... especially EBS vs. SSD or bandwidth!).
I don't think the comparison is wrong per se; this actually makes it a bit easier, since you're just comparing one core to another.
The only thing that I think is problematic is comparing a t2 to anything else, since its CPU performance is not sustainable the way it might be elsewhere. m3 and m4 (and other instance types at AWS) are not explicitly throttled. (Source: AWS Solutions Architect)
We run a portion of our workload at Userify[1] on AWS, though not the biggest portion, and that's actually for bandwidth cost reasons, not CPU (even though our workload is almost entirely CPU and bandwidth, with almost zero disk; our infrastructure at AWS would cost 8x more!).
I assumed nbench uses all the available cores; that's why I said I was wrong.
On a single-core basis, yes, it's useful, and I get that this t2 instance is not a stable baseline. It's just what I have available right now on my free credit.
CPU : 4 CPU GenuineIntel Intel(R) Atom(TM) CPU C2550 @ 2.40GHz 2394MHz
L2 Cache : 1024 KB
OS : Linux 4.4.4-std-3
C compiler : gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
libc : libc-2.19.so
MEMORY INDEX : 22.134
INTEGER INDEX : 17.899
FLOATING-POINT INDEX: 21.522
Here are the results for a Kimsufi KS-1 server, which has an Intel Atom N2800 CPU (1.86 GHz, dual core, 4 threads), 2 GB RAM and a 500 GB HDD for 6€/month. I'm posting them here because they compete in the low-end dedicated server space:
nbench:
CPU : 4 CPU GenuineIntel Intel(R) Atom(TM) CPU N2800 @ 1.86GHz 798MHz
L2 Cache : 512 KB
OS : Linux 4.2.0-25-generic
C compiler : gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
libc : libc-2.19.so
MEMORY INDEX : 11.834
INTEGER INDEX : 11.540
FLOATING-POINT INDEX: 8.730
C1 root $ sysbench --test=cpu --cpu-max-prime=20000 run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Doing CPU performance benchmark
Threads started!
Done.
Maximum prime number checked in CPU test: 20000
Test execution summary:
total time: 686.1683s
total number of events: 10000
total time taken by event execution: 686.1557
per-request statistics:
min: 68.59ms
avg: 68.62ms
max: 70.83ms
approx. 95 percentile: 68.62ms
Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 686.1557/0.00
vs the VPS (VC1):
VC1 root $ sysbench --test=cpu --cpu-max-prime=20000 run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Doing CPU performance benchmark
Threads started!
Done.
Maximum prime number checked in CPU test: 20000
Test execution summary:
total time: 45.9858s
total number of events: 10000
total time taken by event execution: 45.9822
per-request statistics:
min: 4.59ms
avg: 4.60ms
max: 4.98ms
approx. 95 percentile: 4.61ms
Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 45.9822/0.00
Just for the lulz, here is nbench running on my Asus Chromebook Flip.
==============================LINUX DATA BELOW===============================
CPU : 4 CPU ARMv7 Processor rev 1 (v7l)
L2 Cache :
OS : Linux 3.14.0
C compiler : gcc version 5.2.1 20151028 (Debian 5.2.1-23)
libc : libc-2.21.so
MEMORY INDEX : 15.111
INTEGER INDEX : 14.374
FLOATING-POINT INDEX: 18.785
==============================LINUX DATA BELOW===============================
CPU : 4 CPU GenuineIntel Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz 3059MHz
L2 Cache : 4096 KB
OS : Linux 4.3.0-1-amd64
C compiler : gcc version 5.3.1 20160224 (Debian 5.3.1-10)
libc :
MEMORY INDEX : 37.273
INTEGER INDEX : 25.801
FLOATING-POINT INDEX: 47.882
TL;DR: For the Blowfish test suite (multi-threaded) it's almost on par with an m3.2xlarge, and for the majority of other multi-threaded tests it's a little quicker than an m3.xlarge, but for anything single-threaded it's pretty slow.
I'm not sure why I'm replying to what look like sockpuppets, but that photo looks nothing like Supermicro equipment. (Of course, "100% designed by our R&D teams" can mean "we sent the ODM a Powerpoint with some diagrams in it".)
I can confirm that. I've also tested the 8-core one; it's an Avoton, a very slow CPU, which I also tested some time ago on their normal offer. But for 8.30 euros more you get 24 GB of RAM and 120 GB more SSD than their normal dedicated Avoton offer.
Prices seem really great but a few paragraphs down they say the servers are based on Avoton SoCs. Intel Avoton is an Atom chip (Silvermont core), so CPU-bound performance will be somewhat lower than the usual Sandy Bridge/Haswell/whatever core that you get on AWS or Google Compute Engine. It's a server SoC though so I/O throughput is probably pretty decent...
That's what scares me a bit. The multi-core performance is great (considering their pricing), but single-threaded performance is quite a bit below what the competition gets you. If you're running a web server with something single-threaded (like PHP), requests might start taking a bit longer than you're used to.
Well, that is the whole point: you are supposed to have a whole bunch of small, inexpensive, power-efficient cores.
If the software you normally use doesn't take advantage of at least multi-core hardware, then you can get more value from another hosting provider...
On the other hand, if your software can take advantage of multi-core hardware, or, even better, a multi-node architecture, then Scaleway is likely the best option.
Nearly no web framework uses multiple cores per request (e.g. multi-core HTML/JSON rendering, assuming backend operations are already async). So each request will be slow.
[Edit:] Not sure about the downvote; I'd be interested to hear what's wrong with my comment.
Almost all modern frameworks allow a request to be executed on multiple cores, though if all you're doing is HTML/JSON rendering there would very rarely be any performance advantage to doing so (though if there's an async point it will probably happen, i.e. one core will execute the part up until the call to the backend, and a different core may well pick up the continuation when the result comes back). The actual compute time to render HTML/JSON is utterly minimal (even if you're doing it in a super-slow language like Ruby or Python that requires a hashtable lookup for every function call). If you're doing linear algebra in your web frontend then you'll notice slowness (especially as a lot of SoCs may not have much of an FPU), but for typical frontend workloads the CPU usage is utterly irrelevant compared to the cost of the backend I/O.
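(For anyone who wants to sanity-check the "rendering cost is minimal" claim on whatever box they're renting, here's a minimal Go micro-benchmark sketch; the payload shape is made up for illustration and isn't from this thread. Run it with "go test -bench=."; even on a slow core the per-iteration time should come out far below the cost of a backend round trip.)

    // jsonrender_test.go
    package render

    import (
        "encoding/json"
        "testing"
    )

    // A made-up, realistic-ish page payload: a title plus 100 list items.
    type page struct {
        Title string   `json:"title"`
        Items []string `json:"items"`
    }

    func BenchmarkRenderJSON(b *testing.B) {
        p := page{Title: "listing", Items: make([]string, 100)}
        for i := range p.Items {
            p.Items[i] = "some moderately long list item text for the payload"
        }
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
            if _, err := json.Marshal(&p); err != nil {
                b.Fatal(err)
            }
        }
    }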
I understand how to use async for backends; I've written quite a bit about it [1].
"one core will execute the part up until the call to the backend and then a different core may well pick up the continuation when the result comes back"
This surely helps for one request if your backend or all your microservices are on one machine and you have several cores. But if your microservices are on different machines, multiple cores will not speed up a single request unless you break the rendering of a page into chunks, distribute them across cores, and e.g. combine them with something like Facebook's BigPipe (2010 tech).
And yes, multiple cores help with SEDA architectures, but request and URL parsing (which might be a SEDA stage) is too fast to have any real impact.
So what is it that you think is going to make a web app perform poorly on these cheap servers? They're slow, but they're nowhere near slow enough that the time taken to render HTML for a realistic page on one of these cores is going to be a bottleneck. Each individual core has poor throughput, but there are a lot of cores. Doing a bunch of backend calls in series for a single page will make your webapp slow, but that's always true; don't do that (likewise with microservices). If you're doing heavy compute for a single request then yes, your system will perform poorly on these servers, but that's not usual for a web workload.
No. There's a certain amount of basic serialism in a single HTTP request. Go, Erlang, Haskell, a few others make it really easy to write handlers that may themselves be running on multiple cores, but the HTTP handling itself is essentially serialized by the fact that you have to get the request, then send the headers, then send the body, which itself probably has order constraints (such as HTML, which certainly does, at least unless you really go out of your way to write yourself a CSS framework that would render chunks of the HTML order-independent). Most of the required bits of handling a request, like header parsing, header generation, etc. have been made so efficient in the implementations that care about that that any attempt to multithread that would lose on coordination costs to a single-threaded implementation, I think... at least, I'd need to see a benchmark to the contrary before I'd believe in a win.
(You can play a lot of games in how a lot of requests are handled, even going to the HTTP2 extreme of interleaving responses to different requests on the underlying network stream, but what I said will be true on a per-request basis.)
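(To make the distinction concrete: the HTTP framing is serial, but the expensive part of a typical request, the backend calls, can still be overlapped within a single Go handler. A minimal sketch; fetchUser/fetchSidebar and the 30 ms latencies are placeholders, not anything from this thread.)

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    // Stand-ins for backend/microservice calls; in real code these would be
    // RPCs or HTTP requests. Names and latencies are invented for illustration.
    func fetchUser() string    { time.Sleep(30 * time.Millisecond); return "user box" }
    func fetchSidebar() string { time.Sleep(30 * time.Millisecond); return "sidebar" }

    func handler(w http.ResponseWriter, r *http.Request) {
        userCh := make(chan string, 1)
        sideCh := make(chan string, 1)

        // Fan the two backend calls out onto goroutines so their waits overlap:
        // the request takes ~30 ms instead of ~60 ms. Writing the response at
        // the end is still serial, exactly as described above.
        go func() { userCh <- fetchUser() }()
        go func() { sideCh <- fetchSidebar() }()

        fmt.Fprintf(w, "page: %s + %s\n", <-userCh, <-sideCh)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }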
That means you can process more requests at once (across several threads/cores), but per-request timings will be slower, because each core is slower than the cores in the competition's CPUs.
Modern PHP is much more performant than it used to be, and more so than other similar high-level dynamic web languages. Unless you're doing something wrong (like using WordPress) or something unusual, network, DB, file I/O, etc. will dwarf the PHP time.
Compared with a Xeon D-1520 (the current hot chip of low-cost cloud computing, and actually very nice), single-core speed is less than half that of the Xeon D (at about the same 2.4 GHz clock rate); multi-threaded speed (8 threads running LAME encodes) is about 66% of the Xeon D.
> take their "storage snapshot", and turn off the server
Or rather, turn off the server and then take the storage snapshot, which is the order you have to do it in. And you have to snapshot and reincarnate just to attach or detach a volume, which means adding some more storage space requires several hours of downtime. That's a nuisance.
Although apart from that, I have found them very good.
For this use case several hours of downtime is not much of a problem. In fact, the server will be off for more time than it's on. If you want to upgrade, wait until the backups are done and you should have 12 hours until your next backup.
I'd hope that's more than enough time to expand a snapshot.
Also, how does pricing work for hourly servers? Is it still a quid a month?
I've been very pleased with http://packet.net and their Type 0 server. It's bare metal, but much more performant than here. They have an amazing network, it's billed by the hour, and it's < $40/mo. Bandwidth is not included: $0.05/GB, but that's half of AWS's price.
Also they're based in the US, so less latency for those of us with primary customer bases here.
We've been using them at DNSFilter in NYC as part of our anycast network for the last two months. Looking forward to their San Jose and Amsterdam datacenter expansions coming soon.
> Type 0 server. It's bare metal, but much more performant than here
Packet's Type 0 server is an Atom C2550 and costs $36/month.
Scaleway's C2S server is the same Atom C2550, but it costs ~$13/month.
Scaleway's C2M is a C2750, which has twice as many cores and costs ~$20/month, so I don't see how Packet's Type 0 server can be "much more performant" when in fact Packet's offering is so much weaker.
If you need something US-side, one of my ventures, microservers.io, offers the C2750 using Supermicro MicroBlades starting at $25/mo (with coupon code LAUNCH) out of Seattle. It includes full KVM-over-IP access so you can install any OS you like. The most popular distros are available via local Samba.
We soft launched over a year ago and have a number of hosting industry insiders for customers, but are still working on the main, public facing website (domain just goes directly to an ordering page) as well as automation on the back end for near-instant provisioning. We'll be doing our official launch after that.
$0.05/GB may be cheaper than Amazon, but it's rather expensive for bandwidth these days (it would work out to approximately $10/Mbps at the 95th percentile, using a 200 GB : 1 Mbps conversion ratio). With us, 2 TB of bandwidth is included, and additional bandwidth is $0.002/GB, or $0.40/Mbps, using a GTT/Tinet, Hurricane Electric and Cogent bandwidth blend provided by my main hosting company's secondary/value network (AS63213).
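(For anyone checking that arithmetic, here's a rough sketch. The 200 GB : 1 Mbps ratio is the rule of thumb quoted above, not mine; the 324 GB figure is just what 1 Mbps sustained for a 30-day month could move.)

    package main

    import "fmt"

    // Rough sanity check of the GB-to-Mbps conversion used above.
    func main() {
        const secondsPerMonth = 30 * 24 * 3600                    // 2,592,000 s
        rawGB := 1e6 * secondsPerMonth / 8 / 1e9                  // 1 Mbps flat out, in GB/month
        fmt.Printf("1 Mbps sustained  ~= %.0f GB/month\n", rawGB) // ~324 GB

        const ratio = 200.0 // GB per Mbps, the 95th-percentile rule of thumb quoted above
        fmt.Printf("$0.05/GB  -> ~$%.2f per Mbps\n", 0.05*ratio)  // ~$10.00
        fmt.Printf("$0.002/GB -> ~$%.2f per Mbps\n", 0.002*ratio) // ~$0.40
    }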
I don't work for them, just a customer. It just happened to be what I typed for their URL. I'm not intimately familiar with how every site I visit does their redirects.
Their support isn't great, and they're only in Canada and the EU, but Kimsufi[1] is my go-to for beefy but cheap dedicated servers. Their cheapest offering is $5/month for an Atom, 2 GB RAM and a 500 GB hard drive. But where the value really lies is a step up to about $25, where you start getting non-Atom processors, 16+ GB of RAM and 1+ TB hard drives. Also free bandwidth. It's a real dedicated server, so you can install whatever you like on it, but KVM access is expensive if you break it to the point where it won't boot. You can deploy a fairly good set of distros through their built-in wizard for free.
Somebody correct me if I'm wrong, but I got the impression that KVM access on Kimsufi or So You Start requires somebody to walk over to your machine and plug a USB key into it. That would explain why it costs $30.
Not really; in a data center they have embedded firmware for KVM, e.g. Dell's DRAC, HP's iLO, Oracle's ILOM, Intel's RMM2, and IPMI. Plugging and unplugging things on a server can be operationally hazardous and is definitely tedious.
OVH's more expensive servers do use IPMI and include free KVM. Their cheaper Kimsufi and So You Start offerings use a USB KVM to Ethernet device, and they charge for its usage.
In my experience, Online.net/Scaleway support is better than OVH/Kimsufi's. Also, the Scaleway offers have some sort of API at https://developer.scaleway.com in addition to the web UI.
If you install a custom kernel and it crashes or hangs before it gets far enough to write the logs to the disk, the rescue image doesn't help figure out why the custom kernel isn't booting up.
Charging for KVM access... sounds extremely unpleasant. Why would you want a provider like that, who is going to kick you in the genitals when you're down?
Just to put it into perspective, both Azure and AWS don't allow KVM access at all, at any price, period. If your VM doesn't boot, then only THEIR support can restore it (and support at both of them is expensive).
Is it even possible to have a KVM for a VM? Don't you need it to be bare metal for that? In which case, it would make sense why neither AWS nor Azure offer it, since they don't offer bare metal hosting.
Yes? It isn't called a KVM, but most hypervisors have a management tool which allows you to act as if you're sitting in front of a physical machine. This is how most allow you to install operating systems (aside from images).
> Don't you need it to be bare metal for that? In which case, it would make sense why neither AWS nor Azure offer it, since they don't offer bare metal hosting.
It is more a logistical problem. The hypervisor viewer isn't security-aware yet, so in order to provide "KVM"-like access to a single VPS they'd need to alter how the viewer and interface work.
You can attach to a KVM or Xen instance with VNC, generally speaking, which allows access to the BIOS, GRUB, and other stages of the boot. It's generally how I would troubleshoot a non-booting VM.
Yes, a number of VPS providers offer it. Well not precisely KVM, since that's for bare-metal machines, but a similar kind of console access. For example https://prgmr.com/ does.
The blade servers at Delimiter (https://www.delimiter.com/) are even more affordable. I pay $20 a month for a dual-Xeon blade with 16 GB of RAM.
They did have a very long downtime this year with no service credit, but uptime has been reasonable over the past year. If you're looking for a hobby box it's a pretty good deal.
If you're looking for affordable North American servers, OVH has the Kimsufi outlet, which is cheaper and has better hardware. They've got a data center in Canada and will give you American IPs from their AS in Newark, NJ:
The parent company of Scaleway, Online.net, also has some inexpensive offerings, but the servers are in France, which might be a deal breaker for you. (Delimiter seems to be in Atlanta.)
That's correct, we offer largely older gear. We had a stack of E3-1225v3's at $39 recently though. We primarily offer services in Atlanta right now, but we're expanding to Los Angeles & NYC this month.
We have a few customers who purchase servers by the chassis (16 blades) to use for crawlers and other applications where they can scale horizontally. When you compare the E5420's at $20/month to even a lot of cloud hosts, you're getting quite a bit of dedicated resources vs. smaller allocations on newer shared gear.
I'm running a pair of L5639's in my main server. Perfectly usable. Yes, the cores are relatively slow, but you get plenty of them.
> OVH has the Kimsufi outlet, which is cheaper and has better hardware
I see one dual socket Westmere Xeon that's actually slower than the two you're complaining about (1/3rd as much cache, 5-20% slower clock), and a bunch of similarly ancient consumer-level junk.
I even spy an i7 920 in there. A 2008 first-gen Nehalem. That's a 130W CPU with terrible power management; how on Earth is that economical for them to deploy?!
> I'm running a pair of L5639's in my main server. Perfectly usable.
Disclaimer: I run Delimiter
That's absolutely what we're going for with our offerings. Gear that's off lease and long paid for, which allows us to slap some new drives in it and rack it at a low monthly price. But we only use proper server gear - so that means HP blade servers, ECC memory, dedicated ILO/KVM with each box, etc.
To be fair though: I'm a big fan of Kimsufi stuff, along with OVH's SYS lineup. Great for backups/testing/etc. The guys at OVH are great.
Kimsufi is good (I used to be with them), but there are a few drawbacks:
1. Only one IP allowed. No way to request more.
2. No KVM access; Delimiter has HP iLO.
You can choose how many months you pay for (1-12), and sometime before that runs out they email you. Either you pay for some more months or the server goes away.
~50% of customers were back online within 14 hours. 96% were online within 24-28 hours. A few stretched to approximately 3.5 days due to disk/PSU failures, etc. It was our first (and hopefully only) large outage.
We released a full RFO to our customers, but it was a really unfortunate series of events: losing a phase, burning out compressors/pumps for the HVAC equipment, and overnighting replacement parts from up and down the East Coast and some from LA.
We had to keep ~10 racks of blades (which doesn't sound like much, but that's over 500 customers) offline to manage heat as we brought the HVAC equipment back up and added more spot coolers.
Baptism by fire, huh? :) I guess I was one of the unlucky ones, but it hasn't put me off. Hoping you guys expand to the UK (datacenter-wise) eventually!
Amazon bandwidth prices are utterly insane. Pretty much every reasonable hosting provider out there will be below 20% of Amazon's bandwidth prices even with metered bandwidth, going down to around 2% of AWS bandwidth prices (still metered).
Basically, if you're shifting lots of data out of AWS, it very often pays to rent some cheap boxes elsewhere to use as caching proxies. Every TB of AWS bandwidth you can cut trivially finances an extra server and still leaves you with savings.
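(A rough back-of-the-envelope version of that, assuming AWS's first-tier egress rate of about $0.09/GB at the time; the server prices are the ones being discussed elsewhere in this thread.)

    package main

    import "fmt"

    // What a cached terabyte of egress is roughly worth vs. a cheap proxy box.
    func main() {
        const awsEgressPerGB = 0.09 // USD/GB, assumed first-tier AWS rate
        savedPerTB := awsEgressPerGB * 1000
        fmt.Printf("1 TB of avoided AWS egress ~= $%.0f/month\n", savedPerTB) // ~$90
        fmt.Println("vs. a ~$13-$20/month dedicated box used as a caching proxy")
    }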
Yes, and it's already the case for the C1s. It's best effort; you're not guaranteed sustained I/O if your neighbors are just as greedy. But I have a few workloads at OVH and Scaleway that are bandwidth-bound, and cloud pricing is just 10 to 100 times more expensive. That's how it is.
Their parent company (Online/Iliad/Free) does this to balance out their incoming/outgoing traffic in order to avoid paying for transit, which is why they can provide these services at/near cost.
Nope, our network (Online.net/Scaleway) is AS12876 and Free/Iliad is AS12322. Neither of them re-announces the other AS to the internet; they are both independent and separate.
I have a quad core i7, 64 GB RAM, 3 x 4TB hard drives, and a 1 Gbps pipe with OVH for around $100/mo. Unfortunately it is not in the US. The closest they have is near Montreal. They are expanding to the West Coast though I hear.
The main reason is vendor lock-in. As long as you stay within AWS you pay barely anything, but if you want to use both AWS and Google Cloud then it starts getting expensive.
"unmetered" is a popular dedicated hosting company marketing trick. things are generally oversubscribed to horrifically oversubscribed and you'll be very hard pressed to sustain 100Mbps 24x7 for a full day, let alone a full month.
Dammit. Cheap - very cheap - for everything but storage.
It seems impossible to find a low-cost server with some big, slow spinning disks on it at the moment. I'm really not sure why.
Anyone got any recommendations there? Where would I look, if anywhere, for, say, 4 TB of storage attached to a low-cost virtual or dedicated server, for less than Google Nearline or equivalent?
I'm really happy with TransIP. You can get a cheap VPS with an SSD and attach a multi-TB network disk to it. That said, it's not the same as having the disks locally attached; I get ~35 Mbps on writes.
There is So You Start[1], which offers 2x3TB with software RAID. It's a dedicated server, so you might be able to pay some amount and get them to add more space to it.
CPU:model name : Intel(R) Atom(TM) CPU C2750 @ 2.40GHz
Network:
root@scw-4e7977:~# ./speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Free SAS (212.47.234.38)...
Selecting best server based on latency...
Hosted by NEOTELECOMS (Paris) [1.59 km]: 2.652 ms
Testing download speed........................................
Download: 881.83 Mbit/s
Testing upload speed..................................................
Upload: 513.56 Mbit/s
I'm not surprised to read in the comments that they're a subsidiary of Online.net. Ever since they introduced ultra-cheap dedicated boxes ten years ago, it seems everybody's been sub-renting either them or OVH. I wonder why there aren't more similar offers worldwide, or at least in Europe (I can only think of Leaseweb, and they're data-capped). The network backbone isn't much different in Britain, Holland or Germany. And the server units aren't really custom.
However, last time I checked they still had only IPv4, and very few addresses at that. It happened to me that I wanted to spin up an instance for a quick experiment but it wasn't possible to get a public IP to connect to. Mine was just a simple experiment, nothing important, but still...
I don't think there's much to compare there. Hetzner's servers are a completely different price range. Unless you get a used one here: https://robot.your-server.de/order/market (URL looks suspicious, but it's Hetzner, I promise).
These are servers that are fairly old and outdated and whose users have discontinued their contracts; now they're being auctioned off so Hetzner doesn't have to throw them out. But these, I'd say, are also difficult to compare, because of the old hardware vs. the new but low-cost hardware of the Scaleway ones.
I know, but the parent comment was explicitly asking about root servers; I agree, though, that I should've mentioned these, as they fit more into the price range (and I imagine they're quite good, as I've mostly heard positive things about Hetzner).
People should care; the current clients of this service should care. It shouldn't be at the top of HN for hours, IMHO; by far most people cannot do anything with this, which makes it indeed cheap advertising. Or it should say 'invite only' or 'only for current members'. Then I probably wouldn't have read it and would've checked back when it said 'Signups now open'.
Can you please send me an invite? We are building something that's going to be heavy on the server side and this looks like a really good deal.
My email is visible on my profile.
Thanks!
Invitations won't get you very far:
"Send an invitation to your friend to try Scaleway! Your invitation will get queued and sent when enough capacity is available."
The C1 is a good idea and works well, especially while ARM distros are maturing.
IMHO the C1's performance is quite good for the price (700-900 TPS on pgbench, 1500 req/s on RabbitMQ on Debian).
We are running an ELK server which performs well enough for us, even with 2 GB of RAM.
I was just expecting a low-cost ($6/month) 4-core ARM64 server with 8 GB of RAM; I think that would have been more exciting!
> I was just expecting a low-cost ($6/month) 4-core ARM64 server with 8 GB of RAM; I think that would have been more exciting!
In case anyone doesn't spot it the closest is this:
> Our Starter VPS comes with 2GB of ram, 2 x86-64 Cores,
> 50GB of LSSD and 200 Mbit/s of unmetered bandwidth.
> It's available at the insane price of €2.99 per month
> or €0.02 per hour.
Where are Scaleway's data centers located? I found mention on their site that they're a Paris-based company, but is this the only geographical location for their offerings?
"The service is hosted in our own datacenters, Iliad’s datacenters, DC2 and DC3. Both are located near Paris, France."
I have to say this is quite an interesting offer, especially for private/side projects. Currently I rent a virtual server at Hetzner, which costs me 23€/month and is way below the specs mentioned on the Scaleway page, so I am compelled to switch, provided reliability is good.
Interesting. I'm currently using servers by OVH, which compete in the same low-end segment using their Kimsufi[1] offering, and have datacenters in France, Canada and Germany.
Paris is close to me, so these might be interesting for VNC or other low-latency applications.
$13/mo for 8 GB of RAM is the big thing for me here; most of my servers are not CPU-bound, but they require a lot of tuning to fit in the smaller VM instances.
They are a subsidiary company of Online.net, itself a subsidiary of Iliad/Free, one of the leading ISPs in France.
I have had over 50 servers at Online for more than a year now, and they have been pretty much flawless. Support is good and reachable by phone. I haven't worked with Scaleway, though.
For what it's worth: I created an account to try it out in January, played a bit with a cheap instance, and deleted the instance. Got billed in February (about 1€) while there was strictly no activity going on. Got to wait until end of March for the account to close, and with everything turned off I can see another bill coming in for 0.50€. Cannot remove my credit card details.
Don't care much about the undue handful of euros, but I would not trust them for billing.
Account was closed a while back. Now I can still log in, but only thing I can do is watch my bill increase and my attached credit card stay attached until they charge it. Online fun!
Does anybody have experience with Scaleway, especially compared to DigitalOcean? This is the first time I've seen them. It seems to me the biggest difference is that their servers are bare metal and not VMs?!
I've been using their ARM servers for a while as build/test servers for some embedded Linux stuff. Overall it's been a positive experience and things have 'just worked', with a DigitalOcean-style interface. That said, I've not been hosting critical websites or anything similar.
And unlike DigitalOcean, they let you add flexible storage volumes, which I like.
Our BigV service has done this since 2012, on more traditional KVM-based virtualisation - up to 8 SSD or spinning discs which you can add & remove (as well as RAM & cores). We think vertical scaling has been pretty underserved...
A real x86-64 bare-metal server for ~US$3.30/mo all inclusive; wow, it seems like only yesterday people were drooling over $99/mo 300 MHz Cobalt RaQs at EV1Servers (it was more like 15 years ago). I was able to spin one up in <1 min, though I was previously registered at Scaleway thanks to their last offering.
If you do CPU-intensive work on these servers, might not the electricity cost surpass the €0.02/hour income? Example: if a kWh costs €0.10, then something burning 200 W costs €0.02 per hour to run. (Someone substitute real-ish numbers.)
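(Some real-ish numbers, since the comment asks for them: the Atom C2750's TDP is 20 W per Intel's spec sheet; the whole-node draw and the EUR 0.10/kWh electricity price below are my own assumptions, not anything Scaleway has published.)

    package main

    import "fmt"

    // Back-of-the-envelope electricity cost per server-hour.
    func main() {
        const pricePerKWh = 0.10 // EUR/kWh, assumed
        cpuTDP := 20.0           // W, Atom C2750 TDP (Intel spec)
        nodeDraw := 40.0         // W, rough guess for a whole node (RAM, SSD, fan share)

        fmt.Printf("CPU flat out: ~EUR %.4f/hour\n", cpuTDP/1000*pricePerKWh)   // ~0.0020
        fmt.Printf("Whole node:   ~EUR %.4f/hour\n", nodeDraw/1000*pricePerKWh) // ~0.0040
        fmt.Println("vs. EUR 0.02/hour of revenue, so electricity alone is unlikely to eat the margin.")
    }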
Right now I'm developing on Amazon EC2, because of the low ping times and because I have a range of options for being hosted around the world to minimize ping times for potential customers. I would like to know what options I would have at scaleway.com, but I can't find that information easily.
All I found was: How are my servers positioned in terms of network proximity and resiliency? -- The ability to group your servers to create placement preferences is already integrated in the core of our system. We will expose it to you in the coming months.
Just yesterday I tried to install GitLab CE on a C1, then realized that there is no ARM build for it. I went to DigitalOcean just for x86. And today they introduce x86. Great news!
I already have a Scaleway account. I just finished some benchmarks on their starter VPS, and the price/performance looks very interesting.
Cores are dedicated, based on an Intel C2750 (per /proc/cpuinfo).
I get approximately 23% more performance on CPU and IOPS than the DO $20 plan. At the equivalent $2.70 price, it's 8-9 times less expensive for more performance.
It's time for me to think about migration :-)
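(A quick check of that claim using only the numbers above; the ~23% figure is the parent's own measurement.)

    package main

    import "fmt"

    // Price/performance comparison from the figures quoted above.
    func main() {
        priceRatio := 20.0 / 2.70 // DO $20 plan vs. ~$2.70 starter VPS: ~7.4x cheaper
        perfRatio := 1.23         // ~23% more CPU/IOPS throughput, per the parent's benchmark
        fmt.Printf("price ratio: %.1fx, price/performance advantage: ~%.1fx\n",
            priceRatio, priceRatio*perfRatio) // ~7.4x and ~9.1x
    }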
I can't actually test it, but considering nbench shows the Avoton is faster on regular CPU operations, and that on top of that it comes with native AES-NI support (unlike the ARM CPU used by the Pi), it's probably much faster at Tor encryption.
Interesting. I have an invite, so I have been thinking about buying Scaleway for a long time. Their storage space fits what I need at an unbeatable price. Maybe the better choice for me is the VC1 over the C1, as the CPU (an Atom C2750) seems to be much better there. The cores will be shared, however, right? What kind of virtualization are they using, hardware-based? I'd be very happy if you could respond. This server looks really promising.
I can't wait for the day when it's actually cheaper to host in the cloud than to host the servers yourself, with electricity, bandwidth and storage costs included. This is a step in the right direction!
I can see nothing about where the servers are, which can impact latency dramatically in some cases. I have to serve a chunk of our users in China, and I need to have at least one server in Asia.
I'd still like to know where I can buy the new Xeon D variants at an affordable price. The Avoton variants are okay, but availability is not good, certainly in the UK.
I've just tried it, but I could not find out how to resize/scale... And I'm a bit disappointed about snapshots: you have to power off your server before taking a snapshot/backup.
I've tried the C1 to run JSPWiki on Tomcat, but it didn't work. I guess there's too little RAM. I'm sorry to see that there is nothing in between a C1 and a C2.
I would be careful with them. I've bought a couple of VPSes there and there were always issues. In one instance the provider vanished after 3 months; in another the VPS is still live but it's terribly slow (high ping latency). I'm not complaining much because I paid ~$7/year for them (yes, that's per year), but still: the same companies offer more expensive options, and I wouldn't risk data loss. Use DigitalOcean; I think it's a very reliable and fairly inexpensive solution. Just avoid the London datacenter if you are in the UK; for some reason Amsterdam is way faster.
I don't have any experience with Scaleway, but Vultr allows you to upload your own ISO and install it (even Windows Server). However, it's purely a VPS at competitive prices, not bare metal. Their bare-metal servers start at US$60/month.
I tried Scaleway last year. I was very disappointed with the usability of their offering (I was trying to run a simple Ubuntu Minecraft server; it was unnecessarily complicated to set up everything from SSH keys to snapshots), and I closed the account quickly. The very next day, and for the first time ever, my card picked up a fraudulent $3000 bar bill in Las Vegas. I don't think it was a coincidence.
Not parent poster, but the implication was that his card details had been sold.
Geography has nothing to do with it; I'm reliably told that one can cheaply purchase card details on Tor hidden services, paying in BTC.
I've not tried it myself, and the only time I've had my card cloned was the other way around. It was cloned IRL, and was then used online for gambling.