willtemperley's comments

I initially liked the sentiment, but the offering doesn’t appear to add up. Unfortunately, the real private cloud, if it exists, is bare metal and can’t really be sold as a subscription.


Arrow has a different use case, I think. Lite3 / TRON is effectively more efficient JSON, whereas Arrow uses an array per property. This allows zero-copy, per-property access across terabyte-scale datasets, among other useful features - it’s more like the core of a database.

A closer comparison would be FlatBuffers, which is used by Arrow IPC; a major difference is that TRON is schemaless.
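To make the layout difference concrete, here's a rough sketch; the types are made up for illustration and are not Arrow's actual API:

    // Row-oriented (JSON-like): every record carries all of its
    // properties, so reading one property still walks every record.
    struct TradeRow {
        let symbol: String
        let price: Double
    }

    // Column-oriented (Arrow-like): one contiguous array per property.
    // Scanning a single property touches only that buffer, which is
    // what makes zero-copy, per-property access practical at scale.
    struct TradeColumns {
        var symbols: [String] = []
        var prices: [Double] = []
    }

    // Only the `prices` buffer is read; `symbols` is never touched.
    func averagePrice(_ table: TradeColumns) -> Double {
        guard !table.prices.isEmpty else { return 0 }
        return table.prices.reduce(0, +) / Double(table.prices.count)
    }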


Write the code yourself until you get stuck on something. Post the code, the context, and the problem, and the LLM will usually suggest something very sensible. I find these things excellent at smaller but tricky-for-a-human problems.


I think it’s going to be great for smaller shops that want an on-premises private cloud. I’m hoping this will be a win for in-memory analytics on macOS.


I don’t think the effect Instagram and TikTok have on this attention market can be ignored. Living in a big Asian city, I will check those first.


Isn’t a memory arena an application-level issue? With Arrow, for example, I can memory-map a file and expose a known range to an array as a buffer.
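Something like this minimal sketch; the file layout (raw little-endian Float64s at a known byte offset) is an assumption for illustration, not Arrow's real IPC format:

    import Foundation

    // Memory-map a file and read a known range as a typed column
    // without copying it into application memory.
    // (Bounds checking against data.count omitted for brevity.)
    func sumColumn(at url: URL, byteOffset: Int, count: Int) throws -> Double {
        // .alwaysMapped asks Foundation to mmap the file rather than read it.
        let data = try Data(contentsOf: url, options: .alwaysMapped)
        return data.withUnsafeBytes { raw -> Double in
            var total = 0.0
            for i in 0..<count {
                // loadUnaligned avoids making alignment assumptions
                // about where the column starts in the file.
                total += raw.loadUnaligned(
                    fromByteOffset: byteOffset + i * MemoryLayout<Double>.stride,
                    as: Double.self)
            }
            return total
        }
    }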


Sure, but I think the problem is that there is an existing paradigm of libraries allocating their own memory. So you would need to pass allocators around all over the place to make it work. If there were a paradigm of libraries not doing allocations and requiring the caller to allocate, this wouldn't be such an issue.
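Roughly, the two paradigms look like this; a Swift sketch with made-up names, since the pattern is the same in any language (the thread itself is about Go):

    // Library allocates: the caller has no say in where the memory lives.
    func decodeOwned(_ input: [UInt8]) -> [UInt8] {
        input.map { $0 &+ 1 }  // stand-in for real decoding work
    }

    // Caller allocates: the library writes into memory the caller
    // provides, so the caller can back the buffer with an arena,
    // a pool, or the stack.
    @discardableResult
    func decode(into out: UnsafeMutableBufferPointer<UInt8>,
                from input: [UInt8]) -> Int {
        let n = min(out.count, input.count)
        for i in 0..<n {
            out[i] = input[i] &+ 1
        }
        return n  // number of bytes written
    }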


> I think the problem is that there is an existing paradigm of libraries allocating their own memory.

That is a problem, and the biggest reason why the arenas proposal was abandoned. But if you were willing to accept that tradeoff in order to use the Go built-in arenas, why wouldn't you also be willing to do so for your own arenas implementation?
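To be fair, a bespoke arena doesn't have to be much code. A minimal bump-allocator sketch (illustrative only, and in Swift rather than Go):

    // A minimal bump-allocator arena: each allocation is a pointer
    // bump, and everything is freed at once when the arena goes away.
    final class Arena {
        private let block: UnsafeMutableRawPointer
        private let capacity: Int
        private var offset = 0

        init(capacity: Int) {
            self.capacity = capacity
            self.block = UnsafeMutableRawPointer.allocate(
                byteCount: capacity, alignment: 16)
        }

        // Hand out an aligned slice of the block, or nil when full.
        func allocate<T>(_ type: T.Type, count: Int) -> UnsafeMutableBufferPointer<T>? {
            let alignment = MemoryLayout<T>.alignment
            let aligned = (offset + alignment - 1) & ~(alignment - 1)
            let size = MemoryLayout<T>.stride * count
            guard aligned + size <= capacity else { return nil }
            offset = aligned + size
            let start = block.advanced(by: aligned)
                .bindMemory(to: T.self, capacity: count)
            return UnsafeMutableBufferPointer(start: start, count: count)
        }

        deinit { block.deallocate() }
    }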

> If there were a paradigm of libraries not doing allocations and requiring the caller to allocate, this wouldn't be such an issue.

I suppose that is what was at the heart of trying out arenas in an "official" capacity: to see whether everyone with bespoke implementations would update them to use a single Go-blessed way to share around. But there was no sign of anyone doing that, so maybe it wasn't a "big miss" after all. There doesn't seem to have been much interest in collaborating on libraries using the same interface. If you're going to keep your code private, you can do whatever you want.


Is there a world in which OpenSwiftUI will be usable on Android?


I don't know enough about OpenSwiftUI to answer. Are they gone? With Skip Tools, anything is possible, probably.


This is the equivalent of a force unwrap in Swift, which is strongly discouraged. swift-format will reject this anti-pattern. The code running the internet probably should not force-unwrap either.
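For those who don't write Swift, the pattern in question:

    let values: [String: Int] = ["a": 1]

    // The anti-pattern: `!` traps at runtime if the lookup returns nil.
    // let crash = values["b"]!

    // Preferred: make the nil case explicit...
    if let v = values["b"] {
        print(v)
    } else {
        print("missing key")
    }

    // ...or supply a default.
    let withDefault = values["b"] ?? 0
    print(withDefault)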


Yes, Claude is down with a 500 (Cloudflare).


The largest number factored by Shor's algorithm is 21.

https://en.wikipedia.org/wiki/Integer_factorization_records


Even 21 was only possible by cheating (optimizing away the difficult part using prior knowledge of the results) [1]. Craig Gidney has a blog post showing the actual quantum circuit for factoring 21, which is far beyond the capabilities of current quantum computers [2].

[1] https://www.nature.com/articles/nature12290

[2] https://algassert.com/post/2500


And it was done in 2012. I admit I’m surprised there hasn’t been more progress since.


An overview of QC factoring records, the sleight-of-hand tricks applied, and their replication using a VIC-20 8-bit home computer from 1981, an abacus, and a dog:

https://eprint.iacr.org/2025/1237.pdf

