The documentation has the following to say on the subject:
> Each server has access to a pool of local drives. These drives are exported to servers via the NBD protocol, which effectively makes them network drives. However, the network between these drives and C1 nodes consists of dedicated PCB tracks, which ensures minimal latency and avoids network congestion.
> There is no redundancy on these volumes; you need to handle redundancy on your side! They are archived to permanent storage when you start and stop your server.
> Local volumes are 100% SSD drives that can deliver a lot of IOPS and are perfect for random read/write patterns. The maximum size of LSSD volumes is 150 GB.
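On the no-redundancy caveat, the straightforward workaround seems to be attaching two LSSD volumes and mirroring them in software. A minimal sketch, assuming the two volumes show up in the guest as /dev/vdb and /dev/vdc (hypothetical names; check lsblk first) and that mdadm is installed:

```python
import subprocess

# Hypothetical device names for the two attached LSSD volumes;
# verify with `lsblk` before running anything destructive.
DEVICES = ["/dev/vdb", "/dev/vdc"]
ARRAY = "/dev/md0"

# Build a RAID-1 mirror so either volume can fail while the
# server is running without losing the data set.
# --run skips mdadm's interactive confirmation prompt.
subprocess.run(
    ["mdadm", "--create", "--run", ARRAY, "--level=1",
     f"--raid-devices={len(DEVICES)}", *DEVICES],
    check=True,
)

# Format and mount the mirror as a single filesystem.
subprocess.run(["mkfs.ext4", ARRAY], check=True)
subprocess.run(["mount", ARRAY, "/mnt/data"], check=True)
```

Note this only protects against a single drive failing while the server is up; it does nothing about the archive-on-stop behaviour described above.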
Do you think that would be acceptable for a database under heavy load, compared to a true dedicated server?
Has anyone tested this product and can share some benchmarks?
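In case anyone wants to reproduce: fio is the usual tool, but here is a minimal self-contained Python microbenchmark sketch for 4 KiB random reads, assuming a Linux guest and a large pre-created test file (the path is hypothetical; reading the block device directly would need root):

```python
import mmap
import os
import random
import time

PATH = "/mnt/data/testfile"  # hypothetical path on the LSSD volume
BLOCK = 4096                 # 4 KiB, the classic random-I/O block size
ITERATIONS = 10_000

# O_DIRECT bypasses the page cache so we measure the drive, not RAM.
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)

# O_DIRECT requires an aligned buffer; anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)

start = time.monotonic()
for _ in range(ITERATIONS):
    offset = random.randrange(size // BLOCK) * BLOCK
    os.preadv(fd, [buf], offset)
elapsed = time.monotonic() - start
os.close(fd)

print(f"{ITERATIONS / elapsed:,.0f} random-read IOPS at {BLOCK} B")
```

If the NBD-over-dedicated-PCB-tracks claim holds, the random-read numbers should look closer to a local SSD than to typical network storage.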
For 3x the price, you can get a real Xeon processor [which will likely perform as well as or better than 3x Scaleway nodes] and 8x the RAM. At this price point, I don't see the appeal outside of the object storage.
The quoted documentation is from https://www.scaleway.com/faq/server#local_volumes