I'm using tsx for a project to achieve the same effect. As you said, it saves you from having to set up a build/transpilation step, which is very useful for development. tsx has a --watch feature built in as well, which lets me run a server from the TypeScript source files and automatically restart on changes. Maybe with nodemon and this new Node improvement this can now be done without tsx.
To check types at runtime (if that can even be done in a useful way?), the checks would have to be built into V8, and I suppose that would be a whole rewrite.
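For completeness: since the engine never sees the type annotations, runtime checking is done in userland today. Here's a minimal hand-rolled type guard as a sketch (libraries like zod generalize this pattern; the `User` shape is just an example I made up):

```typescript
// V8 strips/ignores TypeScript types, so runtime validation lives in userland.
// A hand-rolled type guard: checks the shape at runtime, narrows the type
// for the compiler.
interface User { name: string; age: number }

function isUser(value: unknown): value is User {
  return (
    typeof value === "object" && value !== null &&
    typeof (value as User).name === "string" &&
    typeof (value as User).age === "number"
  );
}

const parsed: unknown = JSON.parse('{"name":"Ada","age":36}');
if (isUser(parsed)) {
  console.log(parsed.name); // statically narrowed to User inside this branch
}
```

This is essentially what schema-validation libraries automate, which is probably as close to "runtime type checking" as you get without engine support.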
I've switched to opencode. I use it with Sonnet for targeted refactoring tasks and Gemini to do things that touch a lot of files, which otherwise can get expensive quickly.
My heart stopped for a moment when reading the title. I'm glad they haven't decided to axe GPUs, because fly GPU machines are FANTASTIC!
Extremely fast to start on demand, reliable, and although a little pricey, not unreasonably so considering the alternatives.
And the DX is amazing! It's just like any other fly machine, no new set of commands to learn. Deploy, logs, metrics, everything just works out of the box.
Regarding the price: we tried a well-known cheaper alternative, and every once in a while, after a restart, inference performance dropped by 90%. We never figured out why, but we never had any such problems on fly.
If I'm using a cheaper "Marketplace" to run our AI workloads, I'm also not really clear on who has access to our customers' data. No such issues with fly GPUs.
All that to say, fly GPUs are a game changer for us. My only wishes are lower prices and more regions; otherwise the product is already perfect.
I used the fly.io GPUs as development machines.
For that, I generally launch a machine when I need it and scale it to 0 when I am finished. And this is what's really fantastic about fly.io - setting this up takes an hour... and the Dockerfile created in the process can also be used on any other machine.
Here's a project where I used this setup:
https://github.com/li-il-li/rl-enzyme-engineering
This is in stark contrast to all other options I tried (AWS, GCP, LambdaLabs). The fly.io config really felt like something worth having in every project of mine, and on a few occasions I was able to tell people to sign up at fly.io and just run it right there. (Btw., signing up for GPUs always involved writing them an email, which I think was a bit momentum-killing for some people.)
In my experience, the only real (if minor) flaw was the already mentioned embedding of the whole CUDA stack into your container, which easily produces containers approaching 8 GB. That in turn runs into some fly.io limits and makes builds slow.
My last position was at Atlassian working on various backend systems; specifically, I developed (in Clojure) the OT-based synchronization engine behind Confluence's collaborative editing feature.
I'm looking to join a company in Japan and would need visa sponsorship.
>Although I am being shown a bunch of accounts to choose from, I am unable to get photos from Google Photos to show on my tv. Not even my own.
The tweet (like almost any similar tweet, in my limited understanding) is unclear and devoid of an actual "proper" description of the issue, but it seems the problem is "random" images appearing in the "ambient mode screensaver" (whatever that is):
>Private @googlephotos of strangers are being shown to me in the ambient mode screensaver.
What's interesting is that even though the value of, for example, Ethereum is going down, the usage statistics look much healthier, specifically transaction volume [1] and address growth [2].
Google wave is an example of what works well with a central server: you have many documents that can be sharded across many instances because each document only needs to be consistent with itself, and your throughput limit is therefore only per-document which is more likely to be bounded by the number of participants that can practically interact in a single document.
There is still the issue of a node becoming hot because there is an unusually active document on that node, which would usually not happen in a completely decentralized model.
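The per-document sharding idea can be sketched in a few lines (illustrative names and hash function, not Wave's actual routing code): hash the document id to pick a shard, so every operation on a document lands on the same instance and consistency is only needed within that one document.

```typescript
// Minimal sketch of per-document sharding: consistency is scoped to one
// document, so throughput scales with the number of shards.
function shardFor(docId: string, shardCount: number): number {
  let h = 0;
  for (const ch of docId) {
    h = (h * 31 + ch.codePointAt(0)!) >>> 0; // simple 32-bit rolling hash
  }
  return h % shardCount;
}

// All operations on the same document route to the same shard, preserving
// per-document ordering without any cross-shard coordination.
console.log(shardFor("wave:doc-123", 16));
```

Note the hot-node caveat above still applies: an unusually active document pins all of its load to a single shard, since the routing is purely by document id.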
What you describe looks a lot like a Google Wave-style OT system. Wave-style OT is eventually consistent, like CRDTs, but you need a central server to give the event history a total order. This is necessary because Wave-style OT is a 1-1 model: clients are 1-1 connected with the server, but not with each other (which would be n-n, which is what CRDTs can do).
The total order of the central server can make the system simpler and more efficient, but by itself it doesn't solve the problem that Wave has, which is allowing a client to edit his text/message without being interrupted by network latency/interruptions -- imagine typing a letter and having to wait for the server to acknowledge that keypress with >100ms latencies. To solve this problem, you still need some form of xform/merge algorithm that OT and CRDT systems provide.
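To make the xform idea concrete, here's a hedged sketch of the simplest case, transforming two concurrent inserts (illustrative code, not Wave's actual algorithm; the `site` field is an assumed tie-breaker I added so both replicas agree on insert order at equal positions):

```typescript
// Transform a local insert against a concurrent remote insert so that both
// replicas converge regardless of which order the two ops are applied in.
interface Insert { pos: number; text: string; site: number }

function transformInsert(local: Insert, remote: Insert): Insert {
  // Shift the local position if the remote insert landed before it;
  // ties at equal positions are broken deterministically by site id.
  if (remote.pos < local.pos || (remote.pos === local.pos && remote.site < local.site)) {
    return { ...local, pos: local.pos + remote.text.length };
  }
  return local;
}

function apply(doc: string, op: Insert): string {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Convergence (TP1, informally): both application orders give the same doc,
// which is what lets a client type locally without waiting for the server.
const a: Insert = { pos: 1, text: "X", site: 1 };
const b: Insert = { pos: 3, text: "Y", site: 2 };
const one = apply(apply("abcd", a), transformInsert(b, a));
const two = apply(apply("abcd", b), transformInsert(a, b));
console.log(one, two); // both "aXbcYd"
```

This is what lets the client apply its own keypress immediately and reconcile remote ops whenever they arrive, rather than blocking on a round trip.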
EDIT: I assumed you were not familiar with OT systems since you didn't mention it in your post, but now that I followed your link I can see that you are. In that light, it seems your comment is more a question about what the tradeoffs are between OT and CRDT systems rather than whether a central server can solve all problems without xform/merge logic.
One tradeoff that comes to mind when thinking about OT and CRDT systems is in the way operations track locations in the datastructure. In OT systems you have offsets (small), in CRDTs you have uuids (large) or dynamically growing identifiers (usually small but possibly large). This has implications for the byte-size of operations or the in-memory datastructure.
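The size difference is easy to see side by side. A sketch (illustrative shapes, not taken from any specific library; the (site, counter) pair is one common CRDT identifier scheme, as in RGA-style CRDTs):

```typescript
// An OT op addresses a location with a small integer offset; a CRDT op
// addresses it with unique identifiers that must survive concurrent edits.
interface OtInsert { pos: number; text: string }

interface CrdtId { site: string; counter: number }
interface CrdtInsert { id: CrdtId; after: CrdtId | null; text: string }

const otOp: OtInsert = { pos: 42, text: "x" };
const crdtOp: CrdtInsert = {
  id: { site: "3f8a61c2-node-b", counter: 7 },    // hypothetical site ids
  after: { site: "91d04e7a-node-a", counter: 3 },
  text: "x",
};

// The id-addressed op is several times larger on the wire, and the same
// identifiers have to be kept in the in-memory structure as well.
console.log(JSON.stringify(otOp).length, JSON.stringify(crdtOp).length);
```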
Another is that CRDTs have a pruning problem. It has been some time since I looked at CRDTs, but I remember that Wave-style OT didn't have the same problem due to the central server. The pruning problem can cause a CRDT to grow larger than it needs to by forcing it to keep historic data around just in case an old operation it hasn't seen yet arrives. The central server solves this by guaranteeing that it will have sent you all old operations before sending you a newer one. If you know all actors in an n-n system you can also solve this issue, but in an unbounded n-n system I didn't see any way to solve it when I was researching it.
EDIT2: Just want to add that there are lots of other problems that are more practical than theoretical. For example, authorization, an authoritative copy of the data, a REST API, things like that, but that would depend more on the exact use case.
With CRDTs log entries can be purged once they synchronize with everyone, no need to keep them just in case. Although it's rather implementation specific.
Thanks, I did address that when I said you can solve it if you know all actors in an n-n system, but I should have been clearer by spelling out the solution, which is (as you already said) every known actor acknowledging the operation.
In an unbounded n-n system I still don't see a solution.
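The known-actors case can be sketched like this (illustrative names, not from any particular CRDT library): keep an ack counter per actor, and drop every log entry at or below the minimum acknowledged sequence number, since everyone has seen it.

```typescript
// Prune a replicated op log once every known actor has acknowledged an op.
// Only works when the actor set is bounded and known, as discussed above.
interface Op { seq: number; payload: string }

function pruneLog(log: Op[], actors: string[], acked: Map<string, number>): Op[] {
  // Everything at or below the minimum ack across all actors has been
  // seen by everyone and can safely be discarded.
  const minAcked = Math.min(...actors.map((a) => acked.get(a) ?? 0));
  return log.filter((op) => op.seq > minAcked);
}

const log: Op[] = [
  { seq: 1, payload: "a" },
  { seq: 2, payload: "b" },
  { seq: 3, payload: "c" },
];
const acked = new Map([["alice", 3], ["bob", 2], ["carol", 2]]);
console.log(pruneLog(log, ["alice", "bob", "carol"], acked)); // keeps only seq 3
```

With an unbounded actor set there is no finite list to take the minimum over, which is exactly where this scheme breaks down.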
Thanks for this! I think I'm still digesting, but I do find the contrast you're drawing between OT and CRDTs interesting. I don't think I had ever seen the fundamental difference between them to be the network topology, but I can't quite claim to understand either in sufficient depth.
Btw, Wave-style OT needs a central server, but there are other forms of OT that do not; your link mentions this as well. If your OT system satisfies the TP2 property, you can do n-n synchronization and still have what you'd call an OT system.