Thank god they preserved the one-time purchase. I bought all of these apps back in ~2013 and have been using them, with all updates, for literally 13 years (FCP, Compressor, Motion).
It's rare for a company not only to offer one-time purchases and keep updating them, but also to avoid rebranding/renaming/version-cutoff charging at some point.
It helps that you have to keep buying their hardware to run said software. I guess they could be greedy and make me pay for Logic every few years, so I'm happy they don't, but they're still making money off my initial purchase of Logic, just in a different way.
definitely, but to be fair, beyond that it's just Linux. Most people would need Claude Code to get whatever they want to use Linux for running reliably (systemd service, etc.)
That's probably just "whatever the OS does". The client only sees the IP used to connect to the proxy, and the proxy just says "please dial TCP using this IP", so it's up to the OS.
On another note, are you an LLM? You just made an account to post something that looks LLM-generated, and that one repo has contributors @pd8030938 and @aj9704845-code only
you have to tell me if you are, those are the rules
Sure. But if I listen to some song, copy it, and release it, I could get sued. Even if I claim that I'm merely inspired by the original content, the court doesn't care.
You don't need to redistribute the original material, it's enough that you just copied it.
Works fine on every browser I've thrown at it. Even Gnome Web (WebKit) allows scrolling just fine. Sounds like a Safari bug? Maybe a content blocker interfering with the page?
Will you have higher tier pricing plans in the future? I don't see a way to sleep them (if you mean other than idle), and the max plan has 10 running concurrently
I thought fly.io snapshots weren't guaranteed to stick around? I can't find the docs mentioning it now, but I checked within the last few months... maybe they changed it?
Alright, nerd-snipe snooping research post happening now!
Seems like they are using JuiceFS under the hood, with an overlay root for the CoW semantics. JuiceFS gives them instant clone (because they're not cloning the whole rootfs), while the changes to the overlay are done as an overlayfs and probably synced back to S3 via a custom block device they have mounted into Firecracker.
You can also see they are using JuiceFS directly for the "policy" (which I'm assuming is the network policy functionality). iirc JuiceFS has support for block devices too, so maybe they are using that to back the rootfs overlay.
One concerning thing is the `/var/lib/docker` mount - I ran this in an Ubuntu container; did they... attach it? Maybe that's a coincidence, but Docker is not installed on the sprite by default. (The terminal is also super busted when used through an Ubuntu container.)
I played with a similar stack recently, my guess is they are:
1. making some base vm, snapshotting it
2. when you create a vm, they just restore a copy and push metadata to it (probably via one of the mounts)
3. any changes that you make to the rootfs are stored on the JuiceFS block device (the overlay), which is relatively minimal compared to the base OS. JuiceFS also supports snapshotting, so that's probably how they support memory + filesystem snapshot and restore so quickly
interestingly, seems they provision maybe a max disk size of 100GB for total checkpoints?
```
NAME   TYPE  SIZE  FSTYPE  MOUNTPOINTS
loop0  loop  100G          /.sprite/checkpoints/active
```
FUSE is definitely being used within the VMM; I can see a FUSE mount and an ID being assigned. They're probably using JuiceFS directly for the policy mount because that doesn't need to be local-NVMe-cached, just consistent. The local NVMe -> S3 write-through runs on the hypervisor through a custom block device they attach to the Firecracker VMM. This might just be the --cache-dir + --writeback cache option in JuiceFS. Wild guess is just 1 file per block.
guessing the "s3" here is Tigris, since fly.io seems to have a relationship with them, and that probably keeps latency down for the filesystem
You’ll probably be waiting a long time, since Rust very explicitly doesn’t have “leak safety” as a constructive property. Safe Rust programs are allowed to leak memory, because memory leaks themselves don’t cause safety issues.
There’s even a standard, non-unsafe API for leaking memory[1].
(What Rust does do is make it harder to construct programs that leak memory unintentionally. It’s possible but not guaranteed that a similar leak would be difficult to express idiomatically in Rust.)
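For instance, here's a tiny program (nothing fancy, just the standard library) that leaks without a single `unsafe` block:

```
fn main() {
    // Box::leak is a safe, documented API: the allocation is handed back
    // as a &'static mut and is simply never freed.
    let leaked: &'static mut Vec<u8> = Box::leak(Box::new(vec![1u8; 1024]));
    println!("leaked {} bytes", leaked.len());

    // mem::forget is also safe: the destructor never runs, the memory is
    // never reclaimed, and the compiler is perfectly happy with it.
    let v = vec![2u8; 1024];
    std::mem::forget(v);
}
```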
The specific language feature you want if you insist that you don't want this kind of leak is Linear Types.
Rust has Affine Types. This means Rust cares that for any value V of type T, Rust can see that we did not destroy V twice (or more often).
With Linear Types the compiler checks that you destroyed V exactly once, not less and not more.
However, one reason I don't end up caring about Leak Safety of this sort is that in fact users do not care that you didn't "leak" data in this nerd sense. In this nerd sense what matters is only leaks where we lost all reference to the heap data. But from a user's perspective it's just as bad if we did have the reference but we forgot - or even decided explicitly not - to throw it away and get back the RAM.
The obvious way to make this mistake "by accident" in Rust is to have two things which keep each other alive via reference counting and yet have been disconnected and forgotten by the rest of the system. A typical garbage collected language would notice that these are garbage and destroy them both, but Rust isn't a GC language of course. Calling Box::leak isn't likely to happen by accident (though you might mistakenly believe you will call it only once but actually use it much more often)
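A minimal sketch of that accidental leak, no unsafe and no Box::leak anywhere, just two reference-counted values keeping each other alive:

```
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(a.clone())) });
    *a.other.borrow_mut() = Some(b.clone());
    // `a` and `b` go out of scope here, but each Rc's count stays above
    // zero because of the cycle, so neither destructor ever runs and the
    // memory is never reclaimed.
}
```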
I think the main part of Ghostty's design mentioned here that - as a Rust programmer - I think is probably a mistake is the choice to use a linked list. To me this looks exactly like it needs VecDeque, a circular buffer backed by a growable array type. Their "clever" typical case where you emit more text and so your oldest page is scrapped and re-used to form your newest page, works very nicely in VecDeque, and it seems like they never want the esoteric fast things a linked list can do, nor do they need multi-writer concurrency like the guts of an OS kernel, they want O(1) pop & push from opposite ends. Zig's Deque is probably that same thing but in Zig.
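Roughly what I have in mind (toy types, not Ghostty's actual ones), showing the O(1) recycle-the-oldest-page pattern on a VecDeque:

```
use std::collections::VecDeque;

struct Page {
    bytes: Vec<u8>,
}

fn main() {
    let mut pages: VecDeque<Page> = VecDeque::with_capacity(8);
    pages.push_back(Page { bytes: vec![0; 4096] });

    // The "clever" typical case: evict the oldest page and reuse its
    // allocation as the newest one. Both operations are O(1).
    if let Some(mut oldest) = pages.pop_front() {
        oldest.bytes.fill(0); // reuse the buffer instead of freeing it
        pages.push_back(oldest);
    }
}
```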
The issue isn't linked list vs deque but type confusion about what was in the container. They didn't forget to drop it - they got confused about which type was in the list when popping and returned it to the pool instead of munmapping it.
The way to solve this in Rust would be to put this logic in the drop and hide each page type in an enum. That way you can’t ever confuse the types or what happens when you drop.
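Something like this sketch (hypothetical types, not Ghostty's actual code): the variant records where the page came from, and Drop on each variant's interior does the right kind of release.

```
enum Page {
    Pooled(PooledPage), // goes back to the pool on drop
    Mapped(MappedPage), // must be munmap'd on drop
}

struct PooledPage { /* buffer, pool handle, ... */ }
struct MappedPage { /* pointer, length, ... */ }

impl Drop for PooledPage {
    fn drop(&mut self) {
        // return the buffer to the pool here
    }
}

impl Drop for MappedPage {
    fn drop(&mut self) {
        // munmap the region here
    }
}

fn release(page: Page) {
    // Dropping the enum runs the right variant's Drop; there is no way to
    // "return a mapped page to the pool" by mistake.
    drop(page);
}
```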
Was going to say this, but I don't think anyone actually wants to hear that Rust actually would have helped here.
As you're saying, the bug was the equivalent of an incorrectly written Drop implementation.
Nothing against Zig, and people not using Rust is just fine, but this is what happens when you want C-like feel for your language. You miss out on useful abstractions along with the superfluous ones.
"We don't need destructors, defer/errdefer is enough" is Zig's stance, and it was mostly OK.
Impossible to predict this kind of issue when choosing a project language (and it's already been discussed why Zig was chosen over Rust for Ghostty, which is fine!), so it's not a reason to always choose Rust over Zig, but sometimes that slightly annoying ceremony is useful!
Maybe some day I'll be smart enough to write Zig as a default over Rust, but until that day I'm going to pay the complexity price to get more safety and keep more safety mechanisms on the shotgun aimed at my foot. I've got plenty of other bugs I can spend time writing.
Another good example is the type vs type alias vs wrapper type debate. It's probably not reasonable to use a wrapper type every single time (e.g. num_seconds can probably be a u32 and not a Seconds type), but it's really a Rorschach test because some people lean towards one end versus the other for whatever reason, and the pluses/minuses are different depending on where you land on the spectrum.
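To make that spectrum concrete (hypothetical names): an alias is just documentation, while a newtype actually stops you from mixing up units.

```
type SecondsAlias = u32; // interchangeable with any other u32

#[derive(Clone, Copy, Debug)]
struct Seconds(u32); // a distinct type; conversions must be explicit

fn sleep_for(s: Seconds) {
    let _ = s;
}

fn main() {
    let timeout: SecondsAlias = 30;
    sleep_for(Seconds(timeout)); // fine
    // sleep_for(timeout);       // would not compile: u32 is not Seconds
}
```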
> "We don't need destructors, defer/errdefer is enough" is Zig's stance, and it was mostly OK.
There's more than that. Zig has leak detecting memory allocators as well, but they only detect the leak if it happens. Nobody had a reliable reproduction method until recently.
If you wanted to match Ghostty's performance in Rust, you'd need to use unsafe in order to use these memory mapping APIs, then you'd be in the exact same boat. Actually you'd be in a worse boat because Zig is safer than unsafe Rust.
> If you wanted to match Ghostty's performance in Rust, you'd need to use unsafe in order to use these memory mapping APIs, then you'd be in the exact same boat.
Yea, but not for all the parts — being able to isolate the unsafe and build abstractions that enforce correct usage of the unsafe stuff is a key part of high quality Rust code that uses unsafe.
In this case though I think the emphasis is on the fact that there is a place where that code should have been in Rust land, and writing that function would have made it clear and likely avoided the confusion.
Less about unsafe and more about the resulting structure of code.
> Actually you'd be in a worse boat because Zig is safer than unsafe Rust
Other people have mentioned it but I disagree with this assertion.
It's a bit simplistic but I view it this way — every line of C/Zig is unsafe (lots of quibbling to do about what "unsafe" means of course) while only some lines of Rust are unsafe. Really hard for that assertion to make sense under that world view.
That said, I’m not gonna miss this chance to thank you and the Zig foundation and ecosystem for creating and continuously improving Zig! Thanks for all the hard work and thoughtful API design that has sparked conversation and progress.
This is trivially false... for instance here's a line:
const pi = 3.14;
It's actually a pretty small subset of the language that can cause unchecked illegal behavior.
Also IMO the word "safety" should include integer overflow. I don't agree that those kind of bugs are so unimportant as to not be checked in safe builds.
> This is trivially false... for instance here's a line:
Yep, that was really wrongly stated on my part -- what I meant is that the kind of protections that "safe" Rust provides are not available anywhere in average lines of Zig code (though they can be detected with tooling, etc).
What I should have written is that I could easily write unsafe code anywhere in Zig (as in C). In practice of course most people don't because they're not trying to destroy their own computers, and most code is benign. Rust will at least save me from myself some of the time.
> Also IMO the word "safety" should include integer overflow. I don't agree that those kind of bugs are so unimportant as to not be checked in safe builds.
Rust does do some work to catch trivial overflows, but you're right that it does not catch any slightly more complex overflows, and that is certainly unsafe in a sense. I don't think any reasonable person would disagree with that.
Rust's answer to this of course is the checked_{op}/wrapping_{op}/etc. options, and that's what I often see in high quality codebases where it matters. Of course, this is a footgun that could have had a safety applied to it, and it's too late now (AFAIK) to change the default to be always wrapping or something (also, I think people may oppose always-checked for perf reasons).
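To make those options concrete, a tiny example of the explicit variants, which behave the same way in debug and release builds (unlike plain `+`, which panics in debug and wraps in release):

```
fn main() {
    let x: u8 = 250;

    assert_eq!(x.checked_add(10), None);          // overflow reported as None
    assert_eq!(x.wrapping_add(10), 4);            // wraps around modulo 256
    assert_eq!(x.saturating_add(10), u8::MAX);    // clamps at the maximum
    assert_eq!(x.overflowing_add(10), (4, true)); // wrapped value plus a flag
}
```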
[EDIT] Just to compare/make this more concrete, playgrounds:
It's just that little bit of safety that makes it easy for me (personally) to default to Rust. Very possible that someday that won't be true.
[EDIT2] Also, somewhat under-discussed, but if Zig supported a bolt-on "safety check compile mode" that ran with some stricter (maybe not quite borrow-checking level) semantics, that would be pretty dope. Of course not something anyone should devote any real time to for a long time (or ever?) BUT it would trivialize a lot of these discussions maybe.
But in the mean time people just using what they're comfortable with/the feel they want is obviously fine.
Although Rust provides Wrapping<i32> if you want that, in practice you don't want it. Wrapping unsigned integers are occasionally useful, and I've written code with Wrapping<u8> and Wrapping<u32> types, but wrapping signed integers basically never come up. However, wrapping is significantly faster and remains well defined, so that's why it was chosen for release builds.
Those are great points, thanks for mentioning this, re-enabling overflow checks for release builds would indeed make the code safer with only a config change.
It's great that there are lots of options other than wrapping as well (checked, saturating, etc.) that, at the cost of a little inefficiency, make code that is robust to such failures really obvious.
I don't buy your theory that a Rust terminal would need to directly use mmap to deliver matching performance. In fact I doubt Ghostty's author would endorse this claim either; they never tried any alternatives, they tried this and it works for their purpose, which is a long way from saying other approaches wouldn't work or would all be slower or whatever.
Please don’t get defensive and spread silly FUD. You can be proud of what you’ve accomplished without feeling sad that a different language has strengths that yours doesn’t.
Calling unsafe mmap APIs is unlikely to run into the corner cases where unsafe Rust is tricky to get right, and there are "millions" of crates that offer safe APIs for it; it's fundamentally not hard to write it safely (it would be very hard to write it in a way that has any issues).
And fundamentally I think Rust is much more likely to make high performance easier, because the vast majority of safe code you write is amenable to the compiler performing optimizations around pointer aliasing that Zig just can't do (or, if it does, it brings all the risks of unsafe Rust when the user annotates something incorrectly).
I don't think this is silly FUD. The article describes a scenario where the low-level abstraction itself was buggy in a subtle way, so the comparison to "unsafe" Rust seems entirely fair to me. (edited for typos)
With Rust you always could unsafely do whatever went wrong in somebody's C or Zig or whatever, but the question is whether you would. Rust's technical design reinforces a culture where the answer is usually "No".
I don't find the claim that weird low level mmap tricks here are perf critical at all persuasive. The page recycling makes sense - I can see why that's helping performance, but the bare metal mmap calls smell to me like somebody wanted to learn about mmap and this was their excuse. Which is fine - I need to be clear about that - but it's not actually crucial to end users being happy with this software.
I think we can agree that Mitchell knows what he’s doing and isn’t playing around with mmap just because. It’s probably quite important to ensure a low memory footprint. But mmap in rust is not extra risky in some weird mystical way. It’s just a normal FFI function to get a pointer back and you can trivially build safe abstractions around it to ensure the lifetime of a slice doesn’t exceed the lifetime of the underlying map. It’s rust 101 and there’s nothing weird here that can cause the unsafe bits here to be extra dangerous (in general unsafe rust can be difficult to get right with certain constructs, but it doesn’t apply here).
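A rough sketch of what that kind of wrapper can look like, assuming the libc crate and an anonymous private mapping (real code would also handle file-backed maps, page-size rounding, error reporting, etc.):

```
use std::slice;

struct Mmap {
    ptr: *mut u8,
    len: usize,
}

impl Mmap {
    fn new(len: usize) -> Option<Mmap> {
        // SAFETY: null hint, valid length and flags; the result is checked
        // against MAP_FAILED before it is ever used.
        let ptr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                len,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
                -1,
                0,
            )
        };
        if ptr == libc::MAP_FAILED {
            None
        } else {
            Some(Mmap { ptr: ptr as *mut u8, len })
        }
    }

    // The borrow checker ties the returned slice's lifetime to the Mmap,
    // so safe callers cannot touch the memory after it has been unmapped.
    fn as_mut_slice(&mut self) -> &mut [u8] {
        unsafe { slice::from_raw_parts_mut(self.ptr, self.len) }
    }
}

impl Drop for Mmap {
    fn drop(&mut self) {
        // SAFETY: ptr/len are exactly what mmap returned.
        unsafe { libc::munmap(self.ptr as *mut libc::c_void, self.len) };
    }
}
```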
I actually don't think I agree about mmap. Reading around, it seems as though Mitchell had clever ideas for abusing mmap and those didn't work out. Now he's got mmap for the pages and it works, so why replace it? But that does not mean you need mmap to deliver this performance, and in fact I'd be extremely surprised if that were true, as somebody who spent about a decade of his life mostly writing close-to-metal database code in C using mmap...
If you want a whole lot of bytes and you ask your allocator, do you know what almost any popular general purpose allocator will do on a vaguely decent modern Unix? Call mmap to get them for you. So at most you're cutting out a few CPU instructions worth of middle man.
Also in C or Zig you do not need to create your own memory management using mmap. Whether this is necessary in this case or not is a different question.
In the end, if the Rust advantage is that "Rust's technical design reinforces a culture" where one tries to avoid this, then this is a rather weak argument. We will see how this turns out in the long run though.
The long run has already spoken. Go look at the reports out of Microsoft and Android. It's screamingly clear that the philosophy of Rust, that most code can be written as safe code with small bits in unsafe, is inherently safer. The defect rate plummets by one or two orders of magnitude if I recall correctly. C is an absolute failure (since it's the baseline) and Zig has no similar adoption studies. You could argue it will be similar if you always compile ReleaseSafe, but then performance will be worse than C or Rust due to all the checks, and it's unclear how big a risk the places that aren't dynamically checked are.
Oh and of course Rust is inherently slightly faster because its no-aliasing rules for references are automatically annotated everywhere, which allows significantly more aggressive compiler optimizations that neither C nor Zig can do automatically and that are risky to do by hand.
More FUD and guilt by association. Microsoft and Google are also major contributors to the C and C++ standards bodies. Microsoft also has C# and Google has Kotlin. I think claiming they control Rust is weak given the community organization structure within the project and claiming the studies are inherently biased because they provide some funding is exceedingly weak.
IMHO the onus is on you to present any contrary studies showing Rust's safety profile isn't as good as the studies indicate when compared with C++ or to demonstrate where Zig's safety profile in real world complex environments stacks up.
We can disagree on opinions, but you can't discard all experimental evidence in favor of no evidence, especially when the safety profile of Rust is backed by solid theoretical models as to why it would be safer.
To that point, AWS and Cloudflare have also adopted the Rust language for all new projects. I think that says something about the recognition that it really is much harder to write trivial memory vulnerabilities.
I don't put too much weight on the self-reporting by Microsoft or Google. I agree though that the strategy to write safe bits and abstractions is good. What I know not to be true is the idea that similar strategies would not also work in C.
> What I know not to be true is the idea that similar strategies would not work also in C.
Is your argument that developers at MS and Google haven’t been trying to employ these strategies for existing C codebases? It’s a bold position to take and one I’d say devoid of evidence; all the evidence suggests it’s really hard to reason about ownership in complex systems and abstractions only help you do so error free up to a very limited point.
I know for sure that Microsoft does not, because they are not interested in C (and their compiler does not even fully support recent standards), and I assume the same thing about Google. In general, I do not think they write much C in the first place. I also think their use cases and priorities are different from others'.
> We will see how this turns out in the long run though.
Rust 1.0 was in 2015. This is the long run. And I disagree that safety culture is a "weak argument". It's foundational, this is where you must start, adding it afterwards is a Herculean task, so no surprise that people aren't really trying.
I am not saying that safety culture is irrelevant, not at all. I am saying that if the advantage of Rust is the culture that emphasizes safety (or rather memory safety, if the Rust community cared about safety in general cargo would not exist in this form) then that is a weak argument.
I don't think 10 years ago there was a lot of Rust used, so I am not sure how relevant it is that 1.0 was released at this time.
The culture of Rust is pretty uniform both in terms of convention (lots of good examples to learn from) and automated tooling (eg cargo clippy can fix many constructs into cleaner versions).
But sure, ultimately any code you see is limited by the talent of the author. However the safety of that code is not - it’s limited by how many unsafe blocks they wrote which you can actually grep for.
This is a naive and dangerous view of "unsafe". The safety of surrounding code depends on the unsafe blocks not violating invariants of safe Rust, and the safety of "unsafe" blocks may rely on assumptions about the safe part. Also it relates only to memory safety, so if your code review is to grep for "unsafe" blocks you are doing it wrong anyway.
Of course, culture and technical design are both important for any language, but be specific. Despite the prevalence of tools that improve C's safety, writing C safely generally requires a culture of using those tools and other techniques. For better or worse, Rust's borrow checker is a clear demonstration of where Rust lies on the safety-freedom spectrum.
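A toy illustration of why grepping for unsafe isn't enough on its own (the bug here is deliberate): the mistake lives inside the `unsafe` block, but the damage shows up in entirely safe code.

```
fn as_bytes_with_bug(v: &[u8]) -> &[u8] {
    // BUG: the length is off by one, so the returned slice extends past the
    // allocation. This violates an invariant that all safe code relies on.
    unsafe { std::slice::from_raw_parts(v.as_ptr(), v.len() + 1) }
}

fn main() {
    let data = vec![1u8, 2, 3];
    let view = as_bytes_with_bug(&data);
    // No `unsafe` anywhere here, yet this read is out of bounds.
    println!("{}", view[view.len() - 1]);
}
```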
The low level abstraction was buggy because they confused types and so failed to free the memory, not because of mmap.
That's completely orthogonal to the question, and less likely in Rust because you would generally use an enum with Drop implemented for the interior of the variants to guarantee correct release.
And mmap is no more difficult to call in Rust nor more magically unsafe - that's the FUD. The vast majority of Ghostty wouldn't even need unsafe, meaning the vast majority of the code gets optimized more because no-aliasing is automatic everywhere, and that's why the argument that "zig is safer than unsafe rust" is disingenuous about the performance or safety of the overall program.
I don't know if this particular error would have been findable with zig-clr, but you don't need RAII. Errdefer/defer is enough, if you have an algorithm checking your work.
> I think the main part of Ghostty's design mentioned here that - as a Rust programmer - I think is probably a mistake is the choice to use a linked list. To me this looks exactly like it needs VecDeque, a circular buffer backed by a growable array type.
This comment [0] by mitchellh on the corresponding lobste.rs submission discusses the choice of data structure a bit more:
> Circular buffer is a pretty standard approach to this problem. I think it's what most terminal emulators do.
> The reason I went with this doubly linked list approach with Ghostty is because architecturally it makes it easier for us to support some other features that either exist or are planned.
> As an example of planned, one of the most upvoted feature requests is the ability for Ghostty to persist scroll back across relaunch (macOS built-in terminal does this and maybe iTerm2). By using a paged linked list architecture, we can take pages that no longer contain the active area (and therefore are read-only) and archive them off the IO thread during destroy when we need to prune scroll back. We don't need to ever worry that the IO thread might circle around and produce a read/write data race.
> Or another example that we don't do yet, we can convert the format of scroll back history into a much more compressed form (maybe literally compressed memory using something like zstd) so we can trade off memory for cpu if users are willing to pay a [small, probably imperceptible] CPU time cost when you scroll up.
good on them