delifue's comments | Hacker News

If you use Box to refer to the parent, then the parent cannot own the child (unless you use things like Arc<Mutex<>>).

The "if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc" is not accurate. Rust's Arc involves atomic operations, and its performance can degrade greatly when the reference count is mutated by many threads. See https://pkolaczk.github.io/server-slower-than-a-laptop/

Java, C# and Go don't use atomic reference counting, so they don't have this overhead.


Not allowing taking references avoids interior pointers. Interior pointers make memory safety harder and also make GC harder.


If I keep removing one element from the front and adding one at the back, a normal ring-buffer deque involves no copying, but this will keep copying elements into the empty space, so its performance could be much worse than a deque's if the queue is large.


From: https://x.com/zzlccc/status/1903162768083259703

DeepSeek-V3-Base already exhibits "Aha moment" before RL-tuning

The ever-increasing output length in RL-tuning might be due to a BIAS in GRPO


This kind of explains why LLMs are good at making demos and good at writing code under a clear specification, but often break existing features in a large codebase. When the codebase becomes large, the signal-to-noise ratio of the context drops.


The RSA problem in the article doesn't mention that RSA's difficulty rests on factoring the MODULUS (a product of two primes), not on prime factorization in general.

https://en.wikipedia.org/wiki/RSA_(cryptosystem)


Are there any known attack methods that don't involve factoring the public key into two primes?


Yeah, but they usually require RSA to be used in some rather unusual and bad way.

For example, encrypting one message with many different public keys can be broken with the Chinese remainder theorem and Nth roots. This reveals the message without factoring any key. This is why randomized padding (among other things) is a must with RSA.


There's a footnote along these lines that links to the actual algorithm.


Software can get a free ride on hardware improvements. GPT wrappers can likewise get a free ride on foundation model improvements.


I always roll my eyes when someone makes a "Show HN" post claiming their wrapper app has amazing new capabilities. All they did was push a commit where they typed "gpt4o-ultra-fancy-1234" into some array.


Could you share some example UI and code where HTMX doesn't work well?


ARM already has a special instruction `FJCVTZS` to accelerate JavaScript. If WebAssembly gets popular enough, there will probably be hardware acceleration for it.

https://community.arm.com/arm-community-blogs/b/architecture...

https://stackoverflow.com/questions/50966676/why-do-arm-chip...


JavaScript is in the name, but really it's just a way to convert floats to ints with the kind of rounding that x86 does. The impetus might've been to run JS faster because JS specifies x86 semantics, but it's not like it's some wild "JavaScript acceleration instruction". I don't really get why they put JavaScript in the name to be honest.
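Concretely, the conversion the JS spec requires (ToInt32) truncates toward zero and wraps modulo 2^32; a software sketch of those semantics in Rust (illustrative, not the actual instruction sequence a JIT emits):

```rust
// JS ToInt32 semantics: truncate toward zero, then wrap modulo 2^32
// into a signed 32-bit value. Without hardware support this takes a
// range check and several instructions; FJCVTZS does it in one.
fn to_int32(x: f64) -> i32 {
    if !x.is_finite() {
        return 0; // NaN and +/- infinity map to 0
    }
    // fmod is exact for f64, so this value is an exact integer.
    let m = x.trunc().rem_euclid(4_294_967_296.0); // in [0, 2^32)
    m as u32 as i32
}

fn main() {
    assert_eq!(to_int32(3.9), 3);
    assert_eq!(to_int32(-3.9), -3); // truncation, not flooring
    assert_eq!(to_int32(-1.0), -1);
    assert_eq!(to_int32(4_294_967_301.0), 5); // wraps modulo 2^32
    assert_eq!(to_int32(2_147_483_648.0), i32::MIN);
    assert_eq!(to_int32(f64::NAN), 0);
}
```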


Except FJCVTZS is not exclusively useful for JavaScript. Its behaviour is that of x86 rounding, which is what the JS spec encodes. So it's also useful for x86 emulation / compatibility in general.

> If WebAssembly gets popular enough there will probably be hardware acceleration for it.

ARM already tried that back in the day with Jazelle. Plus, much of the point of WASM is that you can compile it to machine code during loading.

