Yes, there are. It is well documented in countries such as Venezuela or Argentina, and some vendors even prefer cryptocurrency because it is more stable than their national currency. In addition, a significant amount of remittance and cross-border payment volume is done in crypto where banking or FX controls make dollars hard to access, in countries like Venezuela and certain regions of Africa.
Day-to-day transactions at the street level may not be dominated by crypto, or even majority crypto, but it is a growing, nontrivial minority in a lot of places, especially emerging markets.
> countries like Venezuela and certain regions in Africa
So approximately none of the world's money and only in the most desperate situations.
I'm all in favor of some kind of vehicle like this, a supplemental currency system that is decentralized and safe. What I don't give a shit about, and what is obviously useless, are the "use cases" people keep coming up with beyond currency. The absolute #1 best use case is ticket NFTs - proving to the outside world that you did something on the network. As soon as you need to prove to the network something you did in the outside world - 99.99% of the ideas people have - you are back to uselessland.
I think the parent has a valid point. The actual README says "inspired by Apple’s Private Cloud Compute".
I think it's fairer to say it implements the same idea, but it is not an open-source implementation of Apple's Private Cloud Compute the way e.g. minio is an implementation of S3, so the HN title is misleading.
> though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?
Based on the descriptions, it's not the integer overflows that are the issue in themselves; it's that the overflows can lead to later buffer overflows. Rust's default release behavior is indeed to wrap on overflow, but bounds checks on buffer accesses remain in place by default, so barring the use of unsafe I don't think there would have been corresponding vulnerabilities in Rust.
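To illustrate the point, here's a minimal sketch with made-up values (not code from the actual write-up): the overflow wraps silently in a release build, but the resulting bogus index still hits a bounds check rather than corrupting memory.

    fn main() {
        // In a release build (overflow-checks off by default) this addition
        // wraps silently instead of panicking as it would in a debug build.
        let reported_len: u32 = u32::MAX;
        let total = reported_len.wrapping_add(8); // what `reported_len + 8` does in release mode
        println!("wrapped total: {total}"); // 7, far smaller than intended

        // A buffer sized from the wrapped value is too small, but the later
        // access is still bounds-checked: Rust panics instead of writing
        // past the end of the allocation.
        let mut buf = vec![0u8; total as usize];
        buf[reported_len as usize] = 0x41; // panics: index out of bounds
    }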
It is always harder, because it always takes more time.
We don't know the ratio (how many more bugs would have been found if VMware were open source).
We can agree to disagree. I just don’t think it’s the high-order bit in determining the rate of vulnerability discovery - in my opinion the commercial utility (white / black / grey) of the exploits is a more important factor in how quickly they are found.
Kind of odd that the blog states that "The architect for ZFS at Apple had left" and links to the LinkedIn profile of someone who doesn't have any Apple work experience listed on their resume. I assume the author linked to the wrong profile?
Also can confirm Don is one of the kindest, nicest principal-engineer-level people I’ve worked with in my career. Always had time to mentor and assist.
Not sure how I fat-fingered Don's LinkedIn, but I'm updating that 9-year-old typo. Agreed that Don is a delight. In the years after this article I got to collaborate more with him, but left Delphix before he joined to work on ZFS.
My theory is that Wayland happened, as SPICE doesn't work that well through it. I would assume it's another case of a "niche" the Wayland protocol didn't account for.
Interesting theory. Any idea why SPICE wouldn't work well through it? I don't recall running into any issues with it.
A more Wayland-oriented remote desktop protocol would probably make for an even better VNC alternative, but I don't really know why SPICE never got the uptake it deserved.
Wasn't aware SPICE was deprecated. However, I think it addresses a different use case than RDP: SPICE is primarily designed for accessing virtual machines by connecting to their hypervisor. Thus it's designed to operate without VM guest awareness or cooperation, going purely from a framebuffer.
This approach is fundamentally limited in terms of performance/responsiveness, as you're ultimately just trying to "guess" what's happening and apply best-effort techniques to speed things up, falling back to just using a video stream.
A proper remote desktop solution like RDP on Windows works in cooperation with the OS's GUI subsystem in a way that makes the RDP server aware of the GUI events, so it doesn't have to guess, and can offload compositing and some 2D operations directly to the client, rather than sending rendered bitmaps.
Thus it didn't catch on: it focuses on a narrow use case nobody should be relying on except in emergency/break-glass situations (you should instead be remoting into the VM directly, for the reasons explained above), and even for such situations it didn't offer anything substantial over VNC. Meanwhile, everyone and their dog has a VNC client by now, but good luck finding a functional SPICE client for anything but Linux.
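To make the bitmap-versus-commands distinction concrete, here is a toy sketch (hypothetical types of my own, not SPICE's or RDP's actual wire format) contrasting a framebuffer-style update with a command-style update that offloads the drawing to the client:

    // Toy illustration only: hypothetical types, not any real protocol's wire format.

    /// Framebuffer-style update: the server rasterizes everything and ships
    /// pixels, at best compressed or encoded as video.
    struct FramebufferUpdate {
        x: u32,
        y: u32,
        width: u32,
        height: u32,
        pixels: Vec<u8>,
    }

    /// Command-style update: the server describes what the GUI subsystem did
    /// and lets the client composite and draw it itself.
    enum DrawCommand {
        FillRect { x: u32, y: u32, width: u32, height: u32, color: u32 },
        CopyRect { src_x: u32, src_y: u32, dst_x: u32, dst_y: u32, width: u32, height: u32 },
    }

    fn main() {
        // Scrolling a 1280x720 window: the framebuffer approach resends the
        // whole region, the command approach is a few dozen bytes.
        let bitmap = FramebufferUpdate { x: 0, y: 0, width: 1280, height: 720, pixels: vec![0; 1280 * 720 * 4] };
        let command = DrawCommand::CopyRect { src_x: 0, src_y: 100, dst_x: 0, dst_y: 0, width: 1280, height: 620 };
        println!("bitmap update: ~{} bytes", bitmap.pixels.len());
        println!("command update: ~{} bytes", std::mem::size_of_val(&command));
    }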