Prices seem really great but a few paragraphs down they say the servers are based on Avoton SoCs. Intel Avoton is an Atom chip (Silvermont core), so CPU-bound performance will be somewhat lower than the usual Sandy Bridge/Haswell/whatever core that you get on AWS or Google Compute Engine. It's a server SoC though so I/O throughput is probably pretty decent...
That's what scares me a bit. The multi-core performance is great (considering their pricing), but single-threaded performance is quite a bit below what the competition gets you. If you're running a web server with something single-threaded per request (like PHP), requests might start taking a bit longer than you're used to.
Well, that's the whole point: you're supposed to have a whole bunch of small, inexpensive, power-efficient cores.
If the software you normally use doesn't take advantage of at least multi-core hardware, then you can get more value from another hosting provider...
On the other hand, if your software can take advantage of multi-core hardware, or -- even better -- a multi-node architecture, then Scaleway is likely the best option.
Almost no web framework parallelizes a single request across cores (e.g. multi-core HTML/JSON rendering, assuming backend operations are already async). So each request will be slow.
[Edit:] Not sure about the downvote; I'd be interested to know what's wrong with my comment.
Almost all modern frameworks allow a request to be executed on multiple cores, though if all you're doing is HTML/JSON rendering there would very rarely be any performance advantage to doing so. (If there's an async point it will probably happen anyway: one core executes the part up until the call to the backend, and a different core may well pick up the continuation when the result comes back.)

The actual compute time to render HTML/JSON is utterly minimal, even in a super-slow language like Ruby or Python that requires a hashtable lookup for every function call. If you're doing linear algebra in your web frontend then you'll notice the slowness (especially as a lot of SoCs may not have much FPU), but for typical frontend workloads the CPU usage is utterly irrelevant compared to the cost of the backend I/O.
I understand how to use async for backends; I've written quite a bit about it [1]
"one core will execute the part up until the call to the backend and then a different core may well pick up the continuation when the result comes back"
This certainly helps for one request if your backend or all your microservices are on one machine and you have several cores. But if your microservices are on different machines, multiple cores will not speed up a single request unless the framework breaks rendering of the page into chunks and distributes them across cores (combining them with something like Facebook's BigPipe (2010 tech)).
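To make the chunked-rendering idea concrete, here's a minimal Python sketch (all names hypothetical) of the BigPipe-style approach: kick off every page chunk concurrently, then combine them in page order. A real BigPipe also flushes each pagelet to the browser progressively as it completes, which this leaves out.

```python
import asyncio

async def render_chunk(name: str) -> str:
    # Hypothetical per-chunk renderer; a real one would do template work
    # or its own backend calls here.
    await asyncio.sleep(0)  # stand-in for backend I/O or offloaded render
    return f"<div id='{name}'>...</div>"

async def render_page(chunk_names: list[str]) -> str:
    # Render all chunks concurrently, then stitch them together in page
    # order -- the core BigPipe idea, minus the progressive flushing.
    parts = await asyncio.gather(*(render_chunk(n) for n in chunk_names))
    return "".join(parts)

page = asyncio.run(render_page(["header", "feed", "sidebar"]))
```

Whether the concurrent chunks actually land on different cores depends on the runtime; the structural point is that the page is no longer one serial render.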
And yes, multiple cores help with SEDA architectures, but request and URL parsing (which might be a SEDA stage) is too fast to have any real impact.
So what is it that you think is going to make a web app perform poorly on these cheap servers? They're slow, but they're nowhere near slow enough that the time taken to render HTML for a realistic page on one of these cores is going to be a bottleneck. Each individual core has poor throughput, but there are a lot of cores. Doing a bunch of backend calls in series for a single page will make your webapp slow, but that's always true; don't do that (likewise with microservices). If you're doing heavy compute for a single request then yes, your system will perform poorly on these servers, but that's not usual for a web workload.
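The "don't do backend calls in series" point is easy to show in a sketch (service names and latencies are made up): with serial awaits the page waits for the sum of the backend latencies, while with a gather it waits only for the slowest one.

```python
import asyncio

async def call_backend(service: str) -> str:
    # Hypothetical backend call; stands in for ~50 ms of network wait.
    await asyncio.sleep(0.05)
    return f"{service}-data"

SERVICES = ("users", "posts", "ads")

async def handle_serial() -> list[str]:
    # Serial: total wait is the *sum* of the backend latencies (~150 ms).
    return [await call_backend(s) for s in SERVICES]

async def handle_parallel() -> list[str]:
    # Parallel: total wait is the *slowest* backend (~50 ms).
    return list(await asyncio.gather(*(call_backend(s) for s in SERVICES)))
```

Both produce the same data; only the wall-clock time differs, and that difference has nothing to do with how fast the cores are.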
No. There's a certain amount of basic serialism in a single HTTP request. Go, Erlang, Haskell, and a few others make it really easy to write handlers that may themselves be running on multiple cores, but the HTTP handling itself is essentially serialized by the fact that you have to read the request, then send the headers, then send the body, which itself probably has ordering constraints (HTML certainly does, unless you really go out of your way to write yourself a CSS framework that renders chunks of the HTML order-independent). The required bits of handling a request, like header parsing and header generation, have been made so efficient in the implementations that care that any attempt to multithread them would lose to a single-threaded implementation on coordination costs, I think... at least, I'd need to see a benchmark to the contrary before I'd believe in a win.
(You can play a lot of games in how a lot of requests are handled, even going to the HTTP2 extreme of interleaving responses to different requests on the underlying network stream, but what I said will be true on a per-request basis.)
That means you can process more requests at once (across several threads/cores), but per-request timings will be slower because each core is slower than the cores in the competition's CPUs.
Modern PHP is much more performant than it used to be, and more so than comparable high-level dynamic web languages. Unless you're doing something wrong (like using Wordpress) or something unusual, network, DB, file I/O, etc. will dwarf PHP time.