Hacker News | jamincan's comments

I have definitely noticed this when I've tried doing Advent of Code in Rust - by the time my code compiles, it typically spits out the right answer. It doesn't help when I don't know which algorithm I need to reach for in order to solve the problem before the heat death of the universe, but it is a somewhat magical feeling while it lasts.

As a hypothetical example, when building a regex, I call `Regex::new(r"\d+")`, which returns a `Result` because my pattern could be malformed and fail to compile. It is entirely reasonable to unwrap this, though, as I will find out pretty quickly whether it works the first time I test the program.
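A minimal sketch of the same Result-then-unwrap pattern, using `str::parse` from the standard library as a stand-in for `Regex::new` (the `regex` crate is external, but both return a `Result` you can reasonably unwrap once the input is known to be well-formed):

```rust
fn main() {
    // A well-formed value: unwrapping the Result is reasonable because
    // a failure would surface the first time the program runs.
    let n: u32 = "42".parse().unwrap();
    assert_eq!(n, 42);

    // A malformed value shows up as an Err rather than a silent failure,
    // so a quick test run catches it immediately.
    let bad: Result<u32, _> = "4x".parse();
    assert!(bad.is_err());
}
```

The design point is the same either way: the `Result` forces you to decide, at the call site, whether a failure is recoverable or a programmer error worth panicking on.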

People form parasocial relationships with AI already with content restrictions in place. It seems to me that that is a separate issue entirely.

I avoid the automated checkouts in part because it takes jobs away from robots. Am I a bad person for creating jobs for humans?

I confess I am a hypocrite though, as I'm one of those job-stealing people that return the cart to the corral.


All of the memory safety stuff is independent of the trait system, to the best of my knowledge, but the data race protection is implemented through the Send and Sync traits. I'm unsure if there is an obvious alternative approach to this same feature, but I think it may be one innovation that is still novel to Rust and would not have existed in earlier decades.
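A small illustration of how the Send and Sync traits surface in practice (a sketch, not a full treatment): `Arc<Mutex<i32>>` is Send + Sync, so the compiler accepts sharing it across threads; swapping `Arc` for `Rc`, which is not Send, would be a compile-time error, and that refusal is exactly the data-race protection described above.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter. The shared
// value crosses thread boundaries only because its type is Send + Sync.
fn spawn_and_count(n: i32) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(spawn_and_count(4), 4);
}
```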

I would contend that 5 bills are more bulky than 5 coins. The only upside of dealing with US bills when travelling in the US is that you feel like a millionaire when you pull out the massive wad of bills from your pocket.

I wonder if it's simply that there isn't anything driving them to get their kids vaccinated, rather than a particular religious conviction. In Ontario, the old-order Mennonite and Amish groups have separate schooling for their kids and aren't integrated into the medical system here (they aren't even part of our public health insurance system). For the vast majority of people, the family doctor and the public health agency (through the schools) are the main avenues to vaccination, so being apart from both, old-order families would need to make a special effort to get vaccinated, above and beyond what most people need to do.

Graydon's post was about as full-throated an endorsement of Fil-C as you can get, including noting where its innovations could be used to improve Rust's safety. The fact that you see undertones of some deep-set Rust agenda to unseat C and C++ is, I think, more a reflection of just how far down the rabbit hole some Rust critics have gone, seeing so-called Rust zealots hiding in every shadow of the internet.


If anything, Rust zealots sure aren't hiding, their agenda is deep-set and out in the open, and they are generally obnoxious. They're pushing the language far harder than it deserves, and harder than I've ever seen any language pushed. They are scrambling to rewrite everything in Rust whether there is any benefit to doing so or not. The inclusion of Rust in the Linux kernel is a prime example. So is the deployment of broken coreutils replacement tools in Ubuntu. If you challenge the obvious campaign, they'll call you a dinosaur or something.

This post is borderline or lowkey Rust propaganda IMO. You might disagree with that but you're not going to convince me there is no campaign.

It also seems reasonable that Rust programmers would feel threatened by anything that makes C and C++ safer and more usable. While there is some benefit to comparing and contrasting different solutions to memory safety, this guy is clearly biased.


I don't think you understand that the post was written by the creator of Rust. That he is writing positive things about Fil-C says more than enough.

> If anything, Rust zealots sure aren't hiding, their agenda is deep-set and out in the open, and they are generally obnoxious.

You really need to drop the paranoia.


It's not paranoia, I see what has been taking place as I said. I did not realize that this dude is the author of Rust. I can forgive the creator of a language for stumping for his own product. But he is clearly doing it in this post and making claims that Rust does everything better even as he is saying nice things about Fil-C.

If you think my outlook is paranoid or whatever, you should take it up with the Rust community, not me.


Likely just that the fastest implementations in the benchmarks game are using those features and so aren't really a good reflection of the language as it is normally used. This is a problem for any language on the list, really; the fastest implementations are probably not going to reflect idiomatic coding practices.


Here are naive, un-optimised, single-threaded programs transliterated line-by-line, literal style, into different programming languages from the same original.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


For what it's worth, when I ask ChatGPT 5, it gets the correct answer every time. The response varies, but the answer is always three.


Now try a different language. My take is that it's hard RL tuning to fix these "gotchas", since the underlying model can't do it on its own.

OpenAI is working on ChatGPT the application and ecosystem. They have transitioned from model building to software engineering, with RL tuning and integration of various services to solve the problems the model can't handle on its own. Make it feel smart rather than be smart.

This means that as soon as you find a problem where you step out of the guided experience you get the raw model again which fails when encountering these "gotchas".

Edit - Here's an example where we see a heavily RL-tuned experience in English, where a whole load of context is added on how to solve the problem, while the Swedish prompt for the same word fails.

https://imgur.com/a/SlD84Ih


You can tell it "be careful about the tokenizer issues" in Swedish and see how that changes the behavior.

The only thing that this stupid test demonstrates is that LLM metacognitive skills are still lacking. Which shouldn't be a surprise to anyone. The only surprising thing is that they have metacognitive skills, despite the base model training doing very little to encourage their development.


LLMs were not designed to count letters[0] since they work with tokens, so whatever trick they are now doing behind the scenes to handle this case, can probably only handle this particular case. I wonder if it's now included in the system prompt. I asked ChatGPT and it said it's now using len(str) and some other python scripts to do the counting, but who knows what's actually happening behind the scenes.

[0] https://arxiv.org/pdf/2502.16705


There's no "trick behind the scenes" there. You can actually see the entire trick being performed right in front of you. You're just not paying attention.

That trick? The LLM has succeeded by spelling the entire word out letter by letter first.

It's much easier for an LLM to perform "tokenized word -> letters -> letter counts" than it is to perform "tokenized word -> letter counts" in one pass. But it doesn't know that! It copies human behavior from human text, and humans never had to deal with tokenizer issues in text!
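The "spell it out first" trick is easy to see in plain code: once the word is a sequence of characters rather than opaque tokens, counting a letter is trivial. The token split below is hypothetical (real tokenizers vary), but it shows why the two-step expansion is the part that does the work:

```rust
// Count occurrences of a letter in an already-spelled-out word.
fn count_letter(word: &str, letter: char) -> usize {
    word.chars().filter(|&c| c == letter).count()
}

fn main() {
    // A hypothetical tokenization of "strawberry": at the token level,
    // the individual letters (and their counts) are not directly visible.
    let tokens = ["straw", "berry"];

    // Step 1: expand tokens into a flat sequence of letters.
    // Step 2: count. This mirrors the two-pass behaviour in the
    // successful transcript, versus guessing straight from tokens.
    let spelled: String = tokens.concat();
    assert_eq!(count_letter(&spelled, 'r'), 3);
}
```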

You can either teach the LLM that explicitly, or just do RLVR on diverse tasks and hope it learns the tricks like this by itself.

