rrdharan's comments | Hacker News

No, there actually aren’t. People just assume there are, but the majority of desperate people turn to the barter system before crypto helps them.


Yes, there are. It is well documented in countries such as Venezuela and Argentina, where some vendors even prefer cryptocurrency because it is more stable than the national currency. In addition, significant remittance and cross-border payment volume moves over crypto where banking or FX controls make dollars hard to access, as in Venezuela and certain regions of Africa.

Day-to-day transactions at the street level may not be dominated by crypto, but it accounts for a growing, nontrivial minority of payments in many places, especially emerging markets.

[1] https://www.trmlabs.com/reports-and-whitepapers/2025-crypto-...

[2] https://www.chainalysis.com/blog/2025-global-crypto-adoption...

[3] https://www.statista.com/statistics/1362104/cryptocurrency-a...


> countries like Venezuela and certain regions in Africa

So approximately none of the world's money and only in the most desperate situations.

I'm all in favor of some kind of vehicle like this, a supplemental currency system that is decentralized and safe. What I don't give a shit about, and what is obviously useless, are the "use cases" people keep coming up with beyond currency. The absolute #1 best use case is ticket NFTs: proving you did something on the network to the outside world. As soon as you need to prove to the network something you did in the outside world, which is 99.99% of the ideas people have, you are back in uselessland.


Please tell me about a better system that would let me send money from the EU to Russia to help my mother buy her medicine.


I’m shocked it’s even that high



Correct, thanks for the links.


I think the parent has a valid point. The actual README says "inspired by Apple’s Private Cloud Compute".

I think it's fairer to say it implements the same idea, but it is not an open-source implementation of Apple's Private Cloud Compute the way e.g. MinIO is an implementation of S3, so the HN title is misleading.


https://access.redhat.com/articles/2201201 and https://github.com/git/git/security/advisories/GHSA-4v56-3xv... are interesting examples to consider (though I'm curious whether Rust's integer overflow behavior in release builds would definitely have fared better).

> Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.

I would assume that's the case.


> though I'm curious whether Rust's integer overflow behavior in release builds would definitely have fared better

Based on the descriptions, it's not the integer overflows that are the issue in themselves; it's that the overflows can lead to later buffer overflows. Rust's default release behavior is indeed to wrap on overflow, but bounds checks remain by default, so barring the use of unsafe I don't think there would have been corresponding vulnerabilities in Rust.
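
To make that concrete, here's a minimal Rust sketch (the attacker-controlled length and the buffer layout are invented for illustration, not taken from either advisory). In a default release build the multiplication wraps silently, but the indexing stays bounds-checked, so the wrapped value produces a panic rather than an out-of-bounds read:

    // Toy sketch; "reported_len" stands in for a hypothetical
    // attacker-controlled value, not anything from the advisories.
    fn lookup(buf: &[u8], reported_len: u32) -> u8 {
        // Wraps silently in release builds (overflow-checks are off
        // by default there); panics in debug builds.
        let offset = reported_len * 4;
        // The bounds check is always present: an out-of-range index
        // panics instead of reading past the end of the buffer.
        buf[offset as usize]
    }

    fn main() {
        // Hypothetical attacker-controlled input from the command line.
        let reported_len: u32 = std::env::args()
            .nth(1)
            .and_then(|s| s.parse().ok())
            .unwrap_or(u32::MAX);
        let buf = vec![0u8; 16];
        // With the default above: wraps to 4294967292, then panics with
        // "index out of bounds" instead of corrupting memory.
        println!("{}", lookup(&buf, reported_len));
    }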


Actually Microsoft started this in the 1990s, long before FAANG was a thing. They just all adopted it.


“Why are manhole covers round?” and “How many dry cleaners are in the city of Seattle?” were both infamous Microsoft interview questions.


Previously known as "Fermi questions", commonly asked of undergrads and grad students in quantitative fields.
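
For anyone who hasn't run into one: the point is the decomposition, not the answer. A toy version of the Seattle dry-cleaner estimate, where every input is a made-up guess:

    // Toy Fermi estimate; every constant here is a made-up guess.
    fn main() {
        let population = 750_000.0_f64;   // rough Seattle population
        let per_household = 2.0;          // people per household
        let regular_users = 0.10;         // fraction that dry-cleans regularly
        let customers_per_shop = 1_500.0; // customers needed to sustain one shop

        let shops = population / per_household * regular_users / customers_per_shop;
        println!("~{shops:.0} dry cleaners"); // ~25; the order of magnitude is the goal
    }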


What is c9k short for?


I guess it's "clusterfuck" :)


A c9k problem is much worse than a y2k problem (yuck).


It’s really not harder for the folks with this skill set, and plenty of these vulnerabilities have been found in VMware too over the years.

https://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY...

https://www.darkreading.com/vulnerabilities-threats/vmware-z...

https://cloud.google.com/blog/topics/threat-intelligence/vmw...


It is always harder, because it always takes more time. We don't know the ratio (how many more bugs would have been found if VMware were open source).


We can agree to disagree. I just don’t think it’s the high order bit in determining the rate of vulnerability discovery - in my opinion the commercial utility (white / black / grey) of the exploits is a more important factor in determining how quickly they are found.


Kind of odd that the blog states that "The architect for ZFS at Apple had left" and links to the LinkedIn profile of someone who doesn't have any Apple work experience listed on their resume. I assume the author linked to the wrong profile?


Ex-Apple File System engineer here who shared an office with the other ZFS lead at the time. Can confirm they link to the wrong profile for Don Brady.

This is the correct person: https://github.com/don-brady

Also, can confirm Don is one of the kindest, nicest principal-engineer-level people I’ve worked with in my career. Always had time to mentor and assist.


Not sure how I fat-fingered Don's LinkedIn, but I'm updating that 9-year-old typo. Agreed that Don is a delight. In the years after this article I got to collaborate more with him, but left Delphix before he joined to work on ZFS.


Given your expertise, any chance you can comment on the risk of data corruption on APFS, since it only checksums metadata?


I moved out of the kernel in 2008 and never went back, so I don’t have a wise opinion here that would be current.


I don't quite understand what happened to SPICE. I know Red Hat deprecated it, but I can't tell whether it was ever fully open-sourced.

https://www.spice-space.org/developers.html


My theory is that Wayland happened, as SPICE doesn't work that well through it. I would assume it's another case of a "niche" the Wayland protocol didn't account for.


Wayland. Move slow and break everything.


Interesting theory. Any idea why SPICE wouldn't work well through it? I don't recall running into any issues with it.

A more Wayland-oriented remote desktop protocol would probably make for an even better VNC alternative, but I don't really know why SPICE never got the uptake it deserved.


Wasn't aware SPICE was deprecated. However, I think it addresses a different use case than RDP: SPICE is primarily designed for accessing virtual machines by connecting to their hypervisor. Thus it's designed to operate without guest awareness or cooperation, going purely from a framebuffer.

This approach is fundamentally limited in terms of performance/responsiveness, as you're ultimately just trying to "guess" what's happening and apply best-effort techniques to speed things up, falling back to just using a video stream.

A proper remote desktop solution like RDP on Windows works in cooperation with the OS's GUI subsystem in a way that makes the RDP server aware of the GUI events, so it doesn't have to guess, and can offload compositing and some 2D operations directly to the client, rather than sending rendered bitmaps.

Thus it didn't catch on: it focuses on a narrow use case nobody should rely on except in emergency/break-glass situations (you should instead be remoting into the VM directly, for the reasons explained above). And even for such situations it didn't offer anything substantial over VNC, except that everyone and their dog has a VNC client by now, while good luck finding a functional SPICE client for anything but Linux.
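
To illustrate the "guessing": a framebuffer scraper only ever sees pixels, so the best it can do is diff successive frames for dirty regions and re-encode those, whereas a compositor-integrated protocol is told directly which surface was damaged and why. A toy sketch of the scraping side, with the frame size and tiling scheme invented for illustration:

    // Toy framebuffer scraper: it can only report *where* pixels
    // changed, never *why*. All sizes here are made up.
    const W: usize = 64;
    const H: usize = 64;
    const TILE: usize = 16;

    fn dirty_tiles(prev: &[u8], curr: &[u8]) -> Vec<(usize, usize)> {
        let mut dirty = Vec::new();
        for ty in (0..H).step_by(TILE) {
            for tx in (0..W).step_by(TILE) {
                // A tile is "dirty" if any row segment in it changed.
                let changed = (0..TILE).any(|dy| {
                    let start = (ty + dy) * W + tx;
                    prev[start..start + TILE] != curr[start..start + TILE]
                });
                if changed {
                    dirty.push((tx, ty));
                }
            }
        }
        dirty
    }

    fn main() {
        let prev = vec![0u8; W * H];
        let mut curr = prev.clone();
        curr[5 * W + 40] = 255; // simulate a one-pixel change (e.g. a cursor blink)

        // The scraper must now guess whether this is a cursor, a video,
        // or a window move; an event-aware server would simply be told.
        println!("dirty tiles: {:?}", dirty_tiles(&prev, &curr)); // [(32, 0)]
    }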


