> If CF offered support for running x64 Linux .NET 6+ binaries on these edge workers, I'd probably block off the next 3-4 weekends to play around with the stack.
The trouble with "containers on the edge" is that if we just literally put your container in 300+ locations it's going to be quite expensive.
Cloudflare Workers today actually runs your Worker in 300+ locations, and manages to be cost effective at that because it's based on isolates rather than containers.
We'll probably offer some sort of containers eventually, but it probably won't be oriented around trying to run your container in every location. Instead, I'm imagining containers would come into play specifically for running batch jobs or back-end infrastructure that's OK to concentrate in fewer locations.

(I'm the tech lead for Workers.)
What does the roadmap look like around establishing some sort of multi-tier architecture within the CF product stack?
I imagine I could hack something together today by combining CF Workers and another hyperscaler to run my .NET workload (TCP connection support definitely helps with this!), but I think there would still be a lot of friction with operations, networking, etc. at scale. Ideally, Workers and the backend would be automagically latency-optimized and scaled relative to each other.
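For concreteness, here's a minimal sketch of the kind of hack being described: a Worker as the edge tier speaking raw TCP to a backend hosted elsewhere, using `connect()` from `cloudflare:sockets`. The host `backend.example.com:9000` and the write-then-stream-back exchange are hypothetical placeholders, not a prescribed pattern.

```ts
// Sketch: Worker as the edge tier in front of a backend hosted on
// another provider, reached over a raw TCP connection.
// "backend.example.com:9000" is a hypothetical origin.
import { connect } from "cloudflare:sockets";

export default {
  async fetch(request: Request): Promise<Response> {
    // Open a TCP connection from the edge to the backend tier.
    const socket = connect({ hostname: "backend.example.com", port: 9000 });

    // Forward the request body to the backend (whatever wire protocol
    // the backend speaks would go here)...
    const writer = socket.writable.getWriter();
    await writer.write(new TextEncoder().encode(await request.text()));
    await writer.close();

    // ...and stream whatever the backend sends back to the client.
    return new Response(socket.readable);
  },
};
```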
We don't have this in any public example yet, but here's a simple trick we use with Workers to avoid paying for Amazon API Gateway or CloudFront while still getting routed to the nearest AWS location (a rough sketch in Worker code follows the steps):
1. Add the Lambda Function URLs as latency-based records under a single name in Route 53. (Lambda Function URLs do not support custom domains, so you cannot use this record directly.)
2. Have the Worker do a fetch to `https://cloudflare-dns.com/dns-query` for that Route 53 record to discover which Lambda Function URL hostname has the lowest latency.
3. The Worker can then fetch the Lambda Function URL using the discovered hostname.
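A minimal sketch of steps 2 and 3 as a Worker, assuming a hypothetical latency-based record named `api.example.com` in Route 53 whose targets are the per-region Lambda Function URL hostnames; the lookup uses Cloudflare's DNS-over-HTTPS JSON API (`accept: application/dns-json`):

```ts
// Sketch of the routing trick: resolve the Route 53 latency-based record
// from the Worker, then proxy to the nearest Lambda Function URL.
// "api.example.com" is a hypothetical record name.
export default {
  async fetch(request: Request): Promise<Response> {
    // Step 2: query Cloudflare's DNS-over-HTTPS JSON API. Route 53 answers
    // with the Lambda Function URL hostname closest to the resolver, which
    // here is the Cloudflare edge location running this Worker.
    const doh = new URL("https://cloudflare-dns.com/dns-query");
    doh.searchParams.set("name", "api.example.com");
    doh.searchParams.set("type", "CNAME");
    const answer = (await (
      await fetch(doh, { headers: { accept: "application/dns-json" } })
    ).json()) as { Answer?: { data: string }[] };

    // Strip the trailing dot from the DNS answer, if any.
    const lambdaHost = answer.Answer?.[0]?.data.replace(/\.$/, "");
    if (!lambdaHost) {
      return new Response("Could not resolve a Lambda Function URL", { status: 502 });
    }

    // Step 3: forward the original request to the discovered hostname.
    const upstream = new URL(request.url);
    upstream.hostname = lambdaHost;
    return fetch(new Request(upstream, request));
  },
};
```

In practice you would likely cache the DNS answer (it only changes when regions are added or removed) rather than doing a lookup on every request.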
What do you think about running it via Wasm? The .NET runtime can be compiled to WebAssembly. That makes cold starts slow, but a pre-initialized snapshot of the WebAssembly VM, along the lines of AWS SnapStart, should bring them back down.