Hacker News | zerofan's comments

'everybody knows' red meat is bad?


Rastafari surely do.


If you're going to put words in his mouth, you should make them much stronger words. It's not a valid argument either way, but it'll seem more dramatic. (He didn't say either of your quotes...)


I downvoted you initially, but changed to an upvote to hopefully ungrey your comment.

The use of quotation marks on the Internet (especially on Internet discussion forums) has become non-standard, and I can see how it could be confusing. I think on HN that we tend to use italics or email-style

> block-quoting

to indicate direct quotations of posts or user comments.

Quotation marks on forums like HN tend to be used either to mark dialogue (things spoken out loud) or to mark paraphrased or "hypothetical" thoughts. This is different from the use of quotation marks in formal English writing, as described by Wikipedia [1]. Here, the quotation marks are used to separate the "paraphrased thought" from the rest of the sentence.

I'm actually finding it hard to describe exactly how quotation marks are used this way on the Internet; it's something I've just developed a "feel" for.

There's more discussion of this phenomenon here. http://metatalk.metafilter.com/23184/Should-we-keep-quotatio...

[1]: https://en.wikipedia.org/wiki/Quotation_marks_in_English


Sorry, next time I'll say something like "If you're going to misrepresent his intention" so as not to confuse a quoted sentence. And I won't use the word "say", because clearly nobody says anything in a text forum. /s

I find it very obnoxious when people exaggerate what someone else said so as to make it easier to contradict. I gather you don't have any problem with that? Yet you do have a problem with people calling it out as bad behavior? Are you sure you know why you're policing anything?


He didn't misrepresent anything. You're the one doing all the misrepresenting, exaggerating, and being obnoxious.


Have you done much skydiving? I used to go three days a week, for a couple years, between 4-10 jumps a day at a place that had world class experts. My experience is that only beginning skydivers are constantly preaching safety. They go around (vocally) judging everything they see, and I think they do it because it alleviates their own fear. Instructors would teach safety, but really only to their own students.

I think the safety aspect of Rust appeals to a lot of beginning programmers. They can feel safer looking down their nose at us dangerous C or C++ programmers.

> Rust is a parachute that always opens at exactly the right altitude

This isn't a good metaphor. Frequently it's safer to pull higher, and on some occasions, you're safer opening lower than you had planned... I think a canopy that always opened at the prescribed height would cause many unnecessary deaths. That doesn't say anything about Rust, one way or the other.


> I think the safety aspect of Rust appeals to a lot of beginning programmers.

Maybe, but it also appeals a lot to many of us experienced programmers who know how hard things can bite us. It's not so much that we can't get things right. It's that it's really expensive to revisit old assumptions when circumstances change, and it's phenomenal to be able to document more of these in a machine-checked way.


Please don't get me wrong - I would take safe over non-safe if everything else were equal. It's just that Rust made many other choices that are worse for me than what's in C++. Also, I think it would be very painful trying to explain some of Rust's features to my coworkers (who are generally very smart, but generally not interested in clever programming languages).

> It's that it's really expensive to revisit old assumptions when circumstances change, [...]

That's very dependent on the type of work you do. Over the last 23 years, my job has been to write many small programs to solve new problems. It's not expensive for me because I've aggressively avoided making monolithic baselines. I have medium sized libraries that I drag from project to project, but I can fix or rewrite parts of those as needed without breaking the old projects.


> That's very dependent on the type of work you do.

True, if your code never gets big or old, you can keep all of it in mind and write correct code without too much worry. Though in my experience, it really doesn't need to be very old or very big before tooling starts paying big dividends.

> I have medium sized libraries that I drag from project to project

I'd wonder in particular about those libraries. Certainly you know more about your context. But I expect that there are both contexts where it wouldn't be helpful, and also contexts where it would be substantially helpful but authors don't know what they're missing. I don't have a way of distinguishing the two here.


I think it's a misconception to classify type-safety and memory-safety techniques as 'clever', they should be seen as the bread-and-butter of day-to-day coding. To put it another way, Rust's memory safety is no more clever than C++'s smart pointers, the only difference is what people mistakenly believe about the two.


> I think it's a misconception to classify type-safety and memory-safety techniques as 'clever'

I didn't call Rust's type-safety or memory-safety clever. The clever stuff is lifetime specifications, a multitude of string types, traits as indications to the compiler for move vs copy, Box, Ref, Cell, RefCell, RefMut, UnsafeCell, arbitrary restrictions on generics, needing to use unsafe code to create basic data structures, and many other things.

If I tried to advocate Rust in my office, many of my coworkers would simply say, "I didn't have to do that in Fortran, and Fortran runs just as fast. Why are you wasting my time?!"


Almost everything you mentioned as 'clever' is trying to achieve either type-safety or memory-safety. To your coworkers I would reply: would you like the compiler to handle error-prone memory deallocations? Or do you want to keep doing it manually and wait till runtime to find potential mistakes?


I don't believe those clever things are necessary for safety or performance. I think many of them are incidental and caused by a lack of taste or just a disregard for the value of simplicity. Rust deserves credit for its good ideas, but these aren't those, and I believe there will be other high performance (non-GC) languages that are more accessible to non Computer Scientists [1].

> To your coworkers I would reply: would you like the compiler to handle error-prone memory deallocations? Or do you want to keep doing it manually and wait till runtime to find potential mistakes?

They don't really care about memory deallocations - the program will finish soon anyways, and the operating system will cleanup the mess. Sorry, they've already excused you from the office and have gotten back to getting their work done.

Btw, modern C++ programmers don't worry about memory deallocations either. You should find a better bogeyman.

[1] http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... (yes, most people disregard benchmarks, but you need some way to discuss performance)


> I don't believe those clever things are necessary for safety or performance. I think many of them are incidental and caused by a lack of taste or just a disregard for the value of simplicity.

Well, I just re-read your list of 'clever' features, and can't really see how any of them is incidental, or in fact how some of them are worse than the exact same features in Swift, which you mentioned.

> ... I believe there will be other high performance (non-GC) languages that are more accessible to non Computer Scientists....

Not sure what to make of this comparison, given that Rust beat Swift in the majority of the benchmark tasks. Also, you have to look at the quality of the compilers themselves. Rust's compiler is universally acknowledged to be high quality, while Swift's (especially together with Xcode) is often bemoaned as buggy and crashy.

> They don't really care about memory deallocations ... the operating system will cleanup the mess.

Well, I have to say they are an extremely lucky bunch. Most systems programmers don't have the luxury of writing script-sized programs which use the OS as their garbage collector.

> Btw, modern C++ programmers don't worry about memory deallocations either. You should find a better bogeyman.

I was specifically replying to your Fortran example, but for the sake of argument, to C++ programmers I'd ask, 'Would you like to do high-performance concurrency with statically guaranteed no data races?'


> a multitude of string types

There are two string types in Rust, `String` (growable and heap-allocated) and `&str` (a reference to string data). Anything else is just a shim for FFI.
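
A minimal sketch of the distinction, using nothing beyond the standard library:

    fn main() {
        let owned: String = String::from("hello"); // growable, heap-allocated
        let borrowed: &str = &owned[..];           // a borrowed view into that data
        let literal: &str = "world";               // string literals are &str too
        println!("{} {}", borrowed, literal);
    }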


> There are two string types in Rust, `String` [...] Anything else is just a shim for FFI.

I guess I don't have to worry about the non-Scotsman strings then... You've heard the criticisms about Rust's strings before, and I'm unlikely to tell you anything you don't know.


I think it's especially hasty to criticize Rust's string types in the context of C++, given the standardization of string_view in C++ as an analogue of Rust's &str :P


To me, Rust's &str seems a lot more like const char* (with a size tacked on for bounds checking). But you're the expert, so if I did agree they were the same, then C++ adopting it in the STL is practically proof it's a mistake in Rust.

You never addressed my other "too clever" items in Rust. Does that mean, other than strings, we agree?


> Does that mean, other than strings, we agree?

Not necessarily. :P Features exist, and I'm not about to dictate where others draw the cleverness line.


No, I haven't done any skydiving. After writing my comment, I suspected my analogy might not hold up if I knew more about skydiving. I guess just imagine an abstract form of skydiving where you just need to have fun in the sky and then open your chute at the right altitude, and you'd like to wait as long as you can before opening it.

I still think the point is valid, though, even if the analogy isn't.


Fair enough. I'm sorry about calling you out on the metaphor. Instead I'll call you out on the point itself :-)

> Having safety features that you know you can rely on allows you to take risks that you normally wouldn't in order to accomplish some really awesome things.

Unless you rush to publish a public facing version of your code, I can't see why you'd be afraid to take risks in any language. What's so scary about a buffer overflow on your home workstation running data from a source that's never even seen your program? It will just segfault, which is no worse than a Rust panic. If I could exploit your new code, it means I've already gotten so far into your workstation or server that I could just run my own code. Where does the fear come from?


I think it's more like: since you know the compiler won't let you write a buffer overflow, use-after-free, data race, etc., you no longer have to waste time worrying about whether your code might contain such problems, which frees up more mental bandwidth for other concerns. But unlike other languages, you still have confidence that your code will compile to equally performant machine code (e.g. No GC overhead).

The scary thing isn't causing a segfault on your local machine. The scary thing is writing code that could segfault but doesn't do in testing until after you've deployed it publicly. If your compiler rejects code that can segfault, this is no longer a concern. (Or replace segfault with a buffer overflow that leaks your private keys or something equally bad.)

I guess the analogy would be that you can have more fun cavorting across the sky if you knew with 100% confidence that your parachute automatically would deploy itself at the appropriate time (and not a moment sooner).


> I think it's more like: since you know the compiler won't let you write a buffer overflow, use-after-free, data race, etc., you no longer have to waste time worrying about whether your code might contain such problems, which frees up more mental bandwidth for other concerns.

I've spent a lot of time figuring out how to do completely mundane things in Rust. At this point, buffer overflows and use-after-frees are not my biggest concerns in C++.

> The scary thing isn't causing a segfault on your local machine. The scary thing is writing code that could segfault but doesn't do in testing until after you've deployed it publicly. If your compiler rejects code that can segfault, this is no longer a concern.

If your testing didn't catch the problem (which I can fully understand), a panic at runtime is not much different than a segfault.

> (Or replace segfault with a buffer overflow that leaks your private keys or something equally bad.)

I firmly believe the OpenSSL team would've used unsafe blocks in Rust to disable the performance overhead of bounds checking. That whole exploit was caused by sloppy optimizations, and Rust is not immune to that.


> I've spent a lot of time figuring out how to do completely mundane things in Rust. At this point, buffer overflows and use-after-frees are not my biggest concerns in C++.

I could visit a country with a completely different set of laws regarding driving and a different road marking system. After a few days of driving, I might also feel like I've spent a lot of time trying to figure out how to navigate the rules of the road rather than actually getting to my destination, compared to driving in my native land. I would also be unable to accurately judge whether one system was better than the other, because of my inadequate experience with the new one; it would be a mistake to assume I could become proficient enough in such a complex system, in such a short period of time, to make that call.

To put it another way, I don't feel like avoiding bicyclists is my biggest problem when driving, but having a dedicated bike lane at all times would probably be a good idea anyways. Sure, maybe you've never hit a cyclist, and never will. That doesn't mean it doesn't happen often enough to be worth doing something about, because it does.

> If your testing didn't catch the problem (which I can fully understand), a panic at runtime is not much different than a segfault.

No, a segfault at runtime is something that is possibly exploitable. A Panic is not.

> I firmly believe the OpenSSL team would've used unsafe blocks in Rust to disable the performance overhead of bounds checking.

Even if they did, that would still reduce the portion of the code that needs to be audited to those blocks. Effort could be made to reduce the size and scope of those blocks. There is something to be said for having the ability to categorize and enforce different safety levels in your codebase, when the alternative is no categorization or enforcement.


> I could visit a country with a completely different set of laws regarding driving [...]

Arguments by metaphor aren't my thing. It's very likely I would become more proficient at Rust if I programmed in it more. It's also very likely the poster above would worry less about memory errors if s/he programmed in C or C++ more. Yes, Rust is safer in some ways, but I still can't understand where all the fear of other languages comes from.

> having a dedicated bike lane at all times would probably be a good idea anyways.

I used to live in a city with a lot of dedicated bike lanes. I commuted to work on a particularly long stretch that was very popular for cycling. The majority of the cyclists refused to ride in the lane. It turns out that cars naturally blow the dust and small pebbles out of the main roadway, but bikes don't do that in the bike lane. Cars also smooth out the pavement in their tire tracks. The result was a road that's five feet narrower for cars (speed limit 45 mph) with bicyclists in it (not moving 45 mph), a generally unused bike lane, lots of uncomfortable passing, and a lot of indignation from cyclists who claimed an equal right of way despite having a separate lane designated for them.

> Sure, maybe you've never hit a cyclist, and never will. That doesn't mean it doesn't happen often enough to be worth doing something about, because it does.

The city I live in now has many bike paths, completely separate from major roads. It's also a different climate, so there are fewer pebbles and they have street sweepers clean the road after snow season to remove the sand. There really doesn't seem to be much interaction between the cyclists and the cars. So should I choose a programming language with bike lanes on major roads or separate paths through the parkways? :-)

> No, a segfault at runtime is something that is possibly exploitable. A Panic is not.

Anything is possible, but it's very unlikely. I will write a program and intentionally put a buffer overflow in it. Can you send me some data that will exploit it?

Here's a metaphor that also isn't one: I'm not afraid of terrorists despite some high profile events in the last 20 years. I certainly wouldn't optimize my life around avoiding terrorist attacks because the empirical evidence shows me the probability is very low.


> It's very likely that I would become more proficient at Rust if I programmed in it more. It's also very likely that the poster above would worry less about memory errors if s/he programmed in C or C++ more.

> The city I live in now has many bike paths, completely separate from major roads.

Which wasn't the point of that at all. It was to point out that your assessment of how much time is wasted working around problems in each case is irrelevant given your vastly different experience levels. There are plenty of people here with quite a bit of C and C++ experience that have weighed in about this, not just the person above who you assess as not having much experience in C or C++.

A bike path is a dedicated bike lane, just not necessarily parallel to the road. You're taking the metaphor too literally to be useful. A metaphor is as useful as you allow it to be. They can be extremely useful in pointing out somewhat parallel situations where people may find their beliefs are different. When that is so, it allows the people involved to examine what is different about the situations that leads to a different opinion, if anything. Sometimes we fall prey to our cognitive biases, and a metaphor can be a shortcut out of that bias if it exists, and you allow it to be that shortcut. Driving it into irrelevancy through focusing on minutiae is a useful rhetorical trick, but doesn't actually advance the conversation, and at the extreme end if done purposefully is not acting in good faith.

> Anything is possible, but it's very unlikely. I will write a program and intentionally put a buffer overflow in it. Can you send me some data that will exploit it?

Depending on the segfault? I could. It would take me a lot of work, because it's been nearly 15 years since I paid much attention to that, but I have done it before.

> Here's a metaphor that also isn't one: I'm not afraid of terrorists despite some high profile events in the last 20 years. I certainly wouldn't optimize my life around avoiding terrorist attacks because the empirical evidence shows me the probability is very low.

No, you don't optimize your life around them, but you might also support checking of identities on international flights to prevent access to your nation from known terrorists.

Here's the thing. It's not about you. At any point in time, some percentage of C and C++ programmers are neophytes that may not be as proficient as you at avoiding the pitfalls possible in those languages. Divide the average amount of time it takes someone to become proficient in C or C++ by the average career length of a programmer of those languages, and you'll have a rough estimate of what percentage of programmers of those languages are, at any given moment, not yet adequate for the job they are assigned and likely to cause problems. I think that reducing this has such a large impact, that this is of vast benefit to society at large (given the botnets we are currently seeing), and would total billions of dollars.


"not acting in good faith"

An accurate diagnosis, I think. You'll never get anywhere with people like that ... or where you get is not anywhere you want to be. In this case, you have someone arguing against Rust because a) his coworkers don't bother to free memory because their programs will finish soon and b) because he doesn't care whether toy programs that he writes for his home computer are subject to buffer overflow exploits.

And on top of that was missing the point of your analogies that, if not willful, was certainly convenient. To use another one: some people are like quicksand.


> > The city I live in now has many bike paths, completely separate from major roads.

> Which wasn't the point of that at all. It was to point out that your assessment of how much time is wasted working around problems in each case is irrelevant given your vastly different experience levels.

That was almost my exact point, and it's odd you're repeating it back to me. I guess I could've laid it out more plainly.

> [Metaphors] Driving it into irrelevancy through focusing on minutiae is a useful rhetorical trick. [...] at the extreme end if done purposefully is not acting in good faith

Using a metaphor is a rhetorical trick. If you want to explain something to a non-technical audience, maybe analogies "get the hay down to the horses" so they can have at least a limited understanding. However, we both seem to understand programming languages so talking about roads obfuscates the discussion, leaving me to wonder whether there really is a parallel between the two topics. I know more about programming languages than I do about bike paths.

> Depending on the segfault? I could. It would take me a lot of work, because it's been nearly 15 years since I paid much attention to that, but I have done it before.

Even if I offer to run malicious data, it sounds to me like a low probability event - probably lower than my being in an airplane crash or shot by a cop. It's not something I should fear today. Over the last 25 years, I've had lots of segfaults, but I think I've done the most damage by accidentally overwriting files. I'm a little afraid of that.

> No, you don't optimize your life around them, but you might also support checking of identities on international flights to prevent access to your nation from known terrorists.

No, I definitely would not. It's very easy to get into this country, and an organized (dangerous) group would have no more difficulty than the drug dealers do smuggling cocaine. There is no benefit to harassing millions of citizens if you can't actually stop the problem.

> Here's the thing. It's not about you.

Are you suggesting the only people allowed to share their experiences in a thread like this are new programmers and the people pushing their language? I was new once, and I survived lots and lots of segfaults. Don't you think neophytes should hear that? They're definitely getting a large dose of doom and gloom about the bad old days.

> Some percentage of C and C++ programmers are neophytes. [...] I think that reducing this has such a large impact, that this is of vast benefit to society at large (given the botnets we are currently seeing), and would total billions of dollars.

In one of your other comments, you indicated you haven't tried Rust yet. You should - you sound interested. It definitely has its nice parts. However, I don't think you will find the safety features to be a big productivity gain, and you will have to use unsafe code to accomplish tasks from a freshman level computer science book. Think about that - you can't cleanly use the safe subset of Rust to teach computer science to beginners... (you could do it with a lot of compromises)


> Using a metaphor is a rhetorical trick.

Rhetorical tricks can be used to deepen the conversation, or to dismiss points out of hand. The first is useful to the discussion, the second is useful for winning, but to the detriment of the discussion.

> However, we both seem to understand programming languages so talking about roads obfuscates the discussion

I provided an example where it may provide value even if two people are experts in the area being discussed. Metaphors can help explain someone's underlying reasoning and motivation in a way that is hard to express technically. People talk past each other enough in discussions by slightly misinterpreting what is trying to be expressed that I find metaphors a valuable tool. I find many disagreements in text are rooted in people assuming a comment is countering a point of theirs or of someone they agree with, and interpreting it in that light when often they are saying very close to the same thing. Thus I believe expressing a point in multiple ways, even if it's through metaphor, has merit.

> Even if I offer to run malicious data, it sounds to me like a low probability event - probably lower than my being in an airplane crash or shot by a cop.

First, airplane crashes are extremely rare. Second, being shot by a cop is rare too, depending on your vocation and behavior. Third, remote code execution exploits are not rare, given the relatively small amount of public facing software compared to airplane flights and all police interactions.[1] Were you to author or contribute to any non-trivial size C or C++ project that was publicly available, I would put better than even money on there being an exploit findable in it. There's a vast difference between how much software is written and how much is public facing, but that doesn't mean things that were originally private don't sometimes make their way public years later, for example internal libraries that a company open sources or even just includes in another project that ends up being public facing.

> No, I definitely would not. It's very easy to get into this country, and an organized (dangerous) group would have no more difficulty than the drug dealers do smuggling cocaine. There is no benefit to harassing millions of citizens if you can't actually stop the problem.

So, again, just because you can render a metaphor in more detail to make it irrelevant in context doesn't mean that's appropriate. So, in more generic terms, "do you support keeping known detrimental people out of a defined area to facilitate the usefulness of that area"? If you can do so, and it's not too cumbersome on those that are not detrimental, then depending on the problems caused by the people in question, at some point it becomes worth it. There are parallels that can be drawn here, if you're willing to entertain the thought. It appears you aren't.

> Are you suggesting the only people allowed to share their experiences in a thread like this are new programmers and the people pushing their language?

No, I'm expressing that a single person's ability to avoid negative behavior has little bearing on an argument regarding community norms and herd behavior, which is what I'm getting at. Whether you are a perfect programmer and never make a single mistake in any language you use doesn't matter when discussing the merits of enforced safety in general as in this discussion regarding C and C++. What does matter is whether other programmers in general do, and what percentage of them, which you've also made a point of expressing. I think that is worth discussing, because I think we either disagree on the proportion of those programmers that can code with adequate safety, or some other facet of them that results in them yielding far more problematic code every year than you think they are producing.

> In one of your other comments, you indicated you haven't tried Rust yet.

I've tried it. I haven't done more than dabble though, while playing with futures-rs. I understand the borrow checker is cumbersome at my level of understanding, and I fought with it. I don't think I have sufficient experience to make an assessment of the language personally based on my level of experience with it, and especially not with how it feels to write in comparison to C or C++, because I strive to avoid using those languages.

> However, I don't think you will find the safety features to be a big productivity gain, and you will have to use unsafe code to accomplish tasks from a freshman level computer science book.

I believe having the ability to define safe and unsafe portions of code is in itself laudable and useful. Allowing me to categorize possibly problematic portions of code is a benefit. In any case, I could essentially write the entire program in an unsafe block and have a C/C++-alike with a different syntax. I'm not sure how "unsafe" can be presented as a downside, when it's strictly a way to enforce separation of a feature that C and C++ don't have.
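
As a minimal illustration of that separation (a sketch, not anyone's production code), the unsafe block marks exactly where the compiler's guarantees are suspended:

    fn main() {
        let x: u64 = 42;
        let p = &x as *const u64; // taking a raw pointer is safe
        let y = unsafe { *p };    // dereferencing it is not, so it must be marked
        assert_eq!(y, 42);
    }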

> Think about that - you can't cleanly use the safe subset of Rust to teach computer science to beginners... (you could do it with a lot of compromises)

What, you can't use that explicit separation of what is known safe and known unsafe to point out computational problems and ways they can be solved? I find that hard to believe. Unless you think unsafe is Rust but "lesser, not really". It isn't. It's part of the language. It exists as a concession that sometimes things are needed that can't be proven safe by the compiler, but you may be able to prove to yourself it is.

1: https://www.exploit-db.com/remote/


You seem like a forthright person, but with or without metaphors, we're still talking past each other.

My point about remote exploits, airplane crashes, and cops is not about me. Yes, public facing software needs to be careful, but (fun metaphor) that's like saying prostitutes should use condoms. Web servers, browsers, firewalls, and the like are built specifically to communicate with untrusted entities. That's some of the most promiscuous software out there, and yes it gets exploited. But most people don't need to use condoms with their wives, and nobody is going to exploit software a newbie wrote and runs on his home computer. Safety should not be the fundamental criteria for a newbie programmer to choose a language and learn how to write fibonacci or hello world. When they're ready to write nginx, then they should be careful.

My point about the questionable productivity gain and safety was a reply to your estimate of the billions of dollars lost. If you're not more productive, and you aren't really safe, then you aren't going to save those billions.

> What, you can't use that explicit separation of what is known safe and known unsafe to point out computational problems and ways they can be solved? I find that hard to believe.

I didn't say anything like that. We're talking past each other.

> Unless you think unsafe is Rust but "lesser, not really". It isn't. It's part of the language.

(Metaphor time again) I've got a really safe bicycle. When the safety is on, children can't get hurt while riding it. If you care about the safety of the world's children, they should use my new safer bicycle. Oh, but you can't pedal it on paths I don't provide unless you disable the safety. Is my bike really that safe?

> 1: https://www.exploit-db.com/remote/

I have no idea how many people compiled and ran a program today. It's probably millions. Bayes's theorem might be a useful way to normalize that long list you linked. I don't see a single program from a home programmer on that list.


"I didn't say anything like that."

No one said that you said anything like that. Of course you didn't. But what you said necessarily implied that.

"We're talking past each other."

No, you willfully ignored and misrepresented all his points.

"Oh, but you can't pedal it on paths I don't provide unless you disable the safety."

That's a grossly dishonest misrepresentation of the situation with Rust.


> Unless you think unsafe is Rust but "lesser, not really". ... It's part of the language ... sometimes things are needed that can't be proven safe by the compiler, but you may be able to prove to yourself it is.

This. Unsafe is to the borrow checker as 'Any' is to the typechecker.


Perhaps a better analogy would be the way that much of modern medicine is enabled by access to antibiotics. Without antibiotics, the risk of post-operation death by infection would be so high as to rule out many of the procedures that we now consider safe and routine.


I would prefer a surgeon who washed his hands over one who didn't but gave me antibiotics. I've had stitches a few times, but only one real surgery. I never got antibiotics for any of those. Maybe we could skip the analogies? I don't think they help the discussion.


They help in a discussion with someone intellectually honest.


> I think the safety aspect of Rust appeals to a lot of beginning programmers.

Is that a bad thing? All programmers start as beginners, and if C is too painful to begin with then they'll learn via an easier language, and then comfortably spend their whole careers using those easier languages. If we want to expand the field of systems programmers organically, then we need to make tools that don't punish beginning programmers.

> They can feel safer looking down their nose at us dangerous C or C++ programmers.

What makes you feel like anyone's looking down their noses at you? Every language in history has been made to address the perceived flaws of some prior language. Safety is a crucial selling point for a huge chunk of people, and C and C++ have failed to appeal to this market. Just because safety isn't a priority for you doesn't mean that the people for whom it is a priority are suddenly pretentious.


> > I think the safety aspect of Rust appeals to a lot of beginning programmers.

> Is that a bad thing?

The appeal to beginners is fine, maybe even a good thing, but the condescending comments from beginners are a lot like listening to a teenager who thinks they know everything.

> What makes you feel like anyone's looking down their noses at you?

There's no shortage of obnoxious comments from beginning Rust users here and on Reddit. If you can't see them, it might be because you're aligned with that point of view.

A recent one implied the whole world is going to end because of Heartbleed-like exploits. Don't they realize that despite the occasional high profile exploits, the world is generally running just fine? Don't they realize that the OpenSSL developers would've probably used pools of dirty memory to avoid allocation costs and unsafe blocks to avoid bounds checking had they developed that code in Rust? They got bit by sloppy optimization, and Rust isn't immune to that. I really wish people weren't so afraid of everything that achieving safety is their primary goal.

> Just because safety isn't a priority for you doesn't mean that the people for whom it is a priority are suddenly pretentious.

It's not pretentious if you make your own decision for your own project. It's not even pretentious to spread the good word and say how much you like Rust. It is very pretentious and condescending when you say something like in Graydon's article: """When someone says they "don't have safety problems" in C++, I am astonished: a statement that must be made in ignorance, if not outright negligence."""

Are you going to stand by that sentence? You probably should, because the newbies will love you for it, and it might help increase adoption of your language. It really shouldn't matter if you alienate a few of us old-timers who really don't have safety problems in C++.

To be clear, I like Rust. I've been following it for years, and I'm disappointed that it's not an adequate replacement for C++ (which I really don't like).


"the condescending comments from beginners"

You like to make stuff up.

"If you can't see them, it might be because you're aligned with that point of view."

Or it might not. It might be that you're just being abusive and dishonest.


> There's no shortage of obnoxious comments from beginning Rust users here and on Reddit. If you can't see them, it might be because you're aligned with that point of view.

Can you give me an example of a comment in this thread that you find to be from a pretentious beginner? Alternatively, if you're calling the author of this article a beginner, I can assure you that he isn't.


The guy's a troll.


> To be clear, I like Rust. I've been following it for years, and I'm disappointed that it's not an adequate replacement for C++

Just out of curiosity, what is it about Rust that means it's an inadequate replacement for C++?


There are many things you could dismiss as style issues, but here is one relating to performance. Rust does not (yet) have integer generics. If I use Eigen (the C++ library), I can declare a matrix of size 6x9 and have the allocation live (cheaply) on the stack. I do this kind of thing frequently (not always 6x9), and in Rust I would pay for heap allocated matrices. The cost in performance can be huge. Maybe this will get fixed in the near future.
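
For what it's worth, the feature being asked for looks roughly like this once integer (const) generics are available; this is a sketch of the idea, not a claim about the Rust version under discussion:

    // Dimensions are part of the type, so the storage is a plain fixed-size
    // array that can live on the stack, much like a fixed-size Eigen matrix.
    struct Matrix<const R: usize, const C: usize> {
        data: [[f64; C]; R],
    }

    fn main() {
        let m: Matrix<6, 9> = Matrix { data: [[0.0; 9]; 6] }; // no heap allocation
        println!("{}", m.data[5][8]);
    }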


Humans are prone to error (fine), therefore you are prone to error (condescension, not fine). Post-Aristotelian logic?

I'm not completely serious; it's more complex than this.


"Is that a bad thing?"

Regardless, it's a complete misrepresentation of Rust, which is all that zero has to offer.


I've written this several times in several ways using C++ (and I'll probably write it at least once more to make it better). In an early version I used an integer level as you came to, but it made the compiler very unhappy as it recursively tried to expand nested types at compile time (it couldn't figure out the recursive types would terminate in practice). Max template recursion of 256 if I remember correctly, even though you'd never instantiate past 45 or so on any machine in the world.

In a later version, I implemented the specializations as inheritance on the abstract FingerTree base class (verbose, but it works), and I added Leafs and Nodes. Leafs are FingerTrees that hold your data, and Nodes are FingerTrees that point to other FingerTree instances. This dodges the recursive types problem. I don't know much Haskell, but I think it would be the C++ equivalent of:

    data FingerTree a = EmptyLeaf
                      | SingleLeaf a
                      | DoubleLeaf a a
                      | TripleLeaf a a a
                      | EmptyNode
                      | SingleNode (FingerTree a)
                      | DoubleNode (FingerTree a) (FingerTree a)
                      | TripleNode (FingerTree a) (FingerTree a) (FingerTree a)
                      | FingerSpine (FingerTree a) (FingerTree a) (FingerTree a)
Virtual methods on each specialization took the place of pattern matching. Not super elegant.
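
For comparison, here's roughly the same shape as a Rust enum, with a match expression standing in for those virtual methods. This is just a sketch of the correspondence, not the C++ or Haskell being discussed:

    #[allow(dead_code)]
    enum FingerTree<A> {
        EmptyLeaf,
        SingleLeaf(A),
        DoubleLeaf(A, A),
        TripleLeaf(A, A, A),
        EmptyNode,
        SingleNode(Box<FingerTree<A>>),
        DoubleNode(Box<FingerTree<A>>, Box<FingerTree<A>>),
        TripleNode(Box<FingerTree<A>>, Box<FingerTree<A>>, Box<FingerTree<A>>),
        FingerSpine(Box<FingerTree<A>>, Box<FingerTree<A>>, Box<FingerTree<A>>),
    }

    // Pattern matching takes the place of virtual dispatch on each specialization.
    fn count_leaves<A>(t: &FingerTree<A>) -> usize {
        match t {
            FingerTree::EmptyLeaf | FingerTree::EmptyNode => 0,
            FingerTree::SingleLeaf(_) => 1,
            FingerTree::DoubleLeaf(..) => 2,
            FingerTree::TripleLeaf(..) => 3,
            FingerTree::SingleNode(a) => count_leaves(a),
            FingerTree::DoubleNode(a, b) => count_leaves(a) + count_leaves(b),
            FingerTree::TripleNode(a, b, c) | FingerTree::FingerSpine(a, b, c) => {
                count_leaves(a) + count_leaves(b) + count_leaves(c)
            }
        }
    }

    fn main() {
        let t = FingerTree::DoubleLeaf(1, 2);
        assert_eq!(count_leaves(&t), 2);
    }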


I'm interested too. I've been thinking about the idea, and it seems like you'd have to be sure to re-use the hell out of your file descriptor on /dev/zero (otherwise, you could easily run out of file descriptors). It also seems like you're trading cache misses for system calls if you have to re-map pages a lot. Maybe it's a clear win, but I'd like to understand it better.


No need for /dev/zero: Linux has memfd[1] and OSX has vm_remap[2]. You only need one file descriptor per heap because Linux lets you poke holes with fallocate[3].

I'll define objects with a header that looks roughly like this:

    struct header {
      off_t phys;       /* offset of this object in the heap file descriptor */
      size_t len;       /* number of units; unit size is looked up via `type` */
      int localrefs;    /* in-process reference count (an optimisation, see below) */
      short flags;
      char mstrategy,   /* low 5 bits pick the power-of-two bucket; the rest picks the heap */
           type;        /* index into an array of unit sizes */
    };
phys is the offset in the heap file descriptor. len is the number of units, and type indexes into an array of unit sizes.

mstrategy is used to select the bucket size (allocated range is a power of two, so 1<<(mstrategy&31)) and the heap number.

localrefs is an optimisation which I'll get to.

If I want to allocate an array of type t, size n, I can use a BSR[4] to identify the bucket that it needs, and see if there's anything on the free list. If there isn't, I can see if I can split a larger bucket into two parts (this is effectively Knuth's buddy allocator).
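
A sketch of that bucket computation in Rust, purely for illustration (the names and the rounding-up are mine, not the code described above):

    // Exponent of the power-of-two bucket needed for `n` units of `unit_size`
    // bytes: round the byte count up to a power of two, then take its log2
    // (the BSR step mentioned above).
    fn bucket_exponent(n: usize, unit_size: usize) -> u32 {
        let bytes = n * unit_size;
        bytes.next_power_of_two().trailing_zeros()
    }

    fn main() {
        assert_eq!(bucket_exponent(10, 8), 7);   // 80 bytes fits in a 128-byte bucket
        assert_eq!(bucket_exponent(512, 8), 12); // 4096 bytes: exactly one 4 KiB page
    }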

I know that (mstrategy&31) < BSR(page size) needs to be moved just like the classical allocator for appending or prepending, but when (mstrategy&31) >= BSR(page size) I can take a virtual address of a bigger region, then either mmap the memfd (using phys) or vm_remap the region into the bigger region. Instead of copying the contents of the pages, the operating system will simply copy the page table (which is 3 levels deep[5], hence the log log log, although using 1G pages means log log with a lower coefficient). This is a tremendous win for problems that need to deal with several large arrays.

Now localrefs gives me a further optimisation: In-process, I can track the reference count of objects, and if my functions always consume their arguments I know inside the grow/append/prepend routine if this is the only holder of the object. If it is, I can potentially reuse this virtual address immediately, saving 3-4 syscalls.

When it's time to deallocate, I can put small objects on my free list, and garbage collect any big objects by calling fallocate() on the physical address to poke a hole (freeing system memory). OSX doesn't need fallocate() because mach has vm_unmap.

[1]: https://dvdhrm.wordpress.com/2014/06/10/memfd_create2/

[2]: http://web.mit.edu/darwin/src/modules/xnu/osfmk/man/vm_remap... because the osx manual page is pants

[3]: http://man7.org/linux/man-pages/man2/fallocate.2.html

[4]: http://x86.renejeschke.de/html/file_module_x86_id_20.html

[5]: http://wiki.osdev.org/Page_Tables#Long_mode_.2864-bit.29_pag...

> It also seems like you're trading cache misses for system calls if you have to re-map pages a lot

Cache misses aren't the dominant force here.

The real trade off is illustrated with benchmarking: memcpy one page, versus 8 bytes (page table entry). How many pages do you need to copy before it is faster to pay the fixed (<100ns) cost of the system call and just copy the page table entries?

Memory streams at a rate of around 10GB/sec, but the TLB flush is ~100ns and the memory latency is only around 10ns, so it's easy to see how quick the gains add up when you're using 1GB pages.


I'd like to see if Rust is still around in 10 years, and then take another look, before we declare a winner on this... All software has bugs.


It's been around for roughly six or eight already, depending on how you count. Of course, pre-1.0 was a different thing, but still. Ten years is not that long a time.

(I still agree that results at that point will be more interesting than speculating today.)


> It's been around for roughly six or eight already, depending on how you count.

I don't think you get to play on both sides of that fence. Some of your team has been working on Rust for that long, but I doubt any code with sigils compiles, and I doubt there were many large projects using it then. I started my stopwatch at May 2015.

> Ten years is not that long a time.

Totally agree.

> I still agree that results at that point will be more interesting then speculating today.

We'll see on May 2025. :-)


Yeah, that's why I said it depends. :) Periodization is tough. 18 months, four years, six years, and 9ish years are all valid, depending on how you count. What I mean to say is that it's already been quite a while, and now that it's making its way into distros and required for building Firefox and all that, I think it has even more of a chance of sticking around for a long time, given that it was around for quite a while when it wasn't even a viable "real" language. I don't mean to insinuate that today's Rust is mega mature because those old Rusts exist.


Rust did a nice job with algebraic data types. Haxe has a very similar approach, and I've loved it since I first saw it. I think this is a very elegant way to define data structures. I listed plenty of complaints about Rust in my other comment, but this one stands out as a "single interesting thing" worthy of praise.


To me it's almost the opposite. I like C a lot, but I ended up compromising on C++11 because in some programs I need more features to keep the implementation clean. I pay the cost of a messy language (C++) when implementing my libraries in order to have simpler applications that use those libraries.

I've written C-like programs in Rust, and that goes very well. But I really like function overloading, generic operators that I can overload from the left and the right, integer parameters for my templates/generics, copy semantics as the default (opt-in for moves), placement new, explicit destructors, and maybe a few other niceties. Rust does none of these things the way I want, and so I'm stuck with C++.

There are a few other things where I think Rust just made the wrong choices and didn't improve over C++. I've always hated the dichotomy in C++ strings between const char* pointers and an actual string class. Rust had the chance to make strings feel as comfortable as integers, but instead they introduced their own dichotomy with String and &str.

Speaking of integers, Rust's choice of 32 bit integers as the default for literals is painful given that every machine most of us will ever care about is now 64 bit. I routinely deal with arrays and files that are too large for a 32 bit integer. This means I have to remember to suffix all of my integers in for loops etc... C++ has this wrong too (backwards compatibility), but Rust had a chance to make a clean start and did not.


> Rust had the chance to make strings feel as comfortable as integers, but instead they introduced their own dichotomy with String and &str.

It makes perfect sense when you understand the differences and reasoning behind it. A `String` is a heap-allocated string that can grow in size. On the other hand, a `str` is basically a fixed-size string array, but you'll never interact directly with this type because there's no point in it. It's tucked away inside the `String` that created it.

Meanwhile, an `&str` is a slice of that `str` data hidden within the `String`. You can think of a `str` as a `[u8]` that's guaranteed to hold valid UTF-8, with its own special `str` methods, and an `&str` as a slice of those bytes which has access to all of the `str` methods as well.
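
A small illustration of that relationship, using only the standard library:

    fn main() {
        let s: String = String::from("hello world"); // owns heap-allocated UTF-8 bytes
        let word: &str = &s[0..5];                   // a borrowed slice of those bytes
        assert_eq!(word, "hello");
        assert!(word.starts_with("he")); // str methods are available on the &str
    }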

> Speaking of integers, Rust's choice of 32 bit integers as the default for literals is painful given that every machine most of us will ever care about is now 64 bit. I routinely deal with arrays and files that are too large for a 32 bit integer.

I'm pretty sure the default integer is a `usize`, which is 32-bit on 32-bit systems and 64-bit on 64-bit systems.


No, the default integer is u32. If type inference can't provide a better type, Rust will pick u32. If type inference can pick a better type it will use it. So if you create an unsuffixed int literal and use it for indexing, that literal will be a usize.

Rust will type error if you try to use u32s with an array so it's all good though. Just means that you need to specifically `: usize` things.

If you need an integer type for counting stuff, u32 is fine. If you actually need to talk about memory, use usize. The compiler will force you to do it.
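
A small sketch of that fallback behavior (the variable names are just for illustration):

    fn main() {
        let a = 1;             // no other constraint, so `a` falls back to i32
        let xs = [10, 20, 30];
        let i = 2;             // used as an index below, so inference makes it usize
        println!("{}", std::mem::size_of_val(&a)); // 4
        println!("{}", std::mem::size_of_val(&i)); // 8 on a 64-bit target
        println!("{}", xs[i]);                     // 30
    }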


> No, the default integer is u32

Maybe I'm missing something, but it seems to be i32: https://play.rust-lang.org/?gist=c79d4afef7fa20c81ba14de61f5...


That makes sense. I've never noticed what the default size was, but because it always worked with indexing, I assumed that it was always usize.


I believe it used to be uint (usize) back in pre-1.0, and IIRC inference didn't work well or wasn't supposed to work at all. I recall needing explicit prefixes on my int literals back then. No longer the case.


How would you have gone about making Strings be "as comfortable as integers"?

Arrays are indexed by usize, so if you're on a 64-bit machine, then you shouldn't need a cast. It's _unconstrained_ numbers that default to i32, not anything without a suffix.


> How would you have gone about making Strings be "as comfortable as integers"?

I would prefer a str be a str be a str, regardless of how you got it. Lowercase type-name and fundamental like an integer.

I'm fairly certain I understand why Rust made the choice they did. I've read the forum threads at HN, Reddit, and users.rust-lang, and I've seen previous replies by you and other Rusties, so I hope you won't try to educate me about the performance advantages of having slices as references and another string type as an ownership class or why we need OsStr and friends.

If strings are the fundamental processing concern in your application, then I think you should be able to opt-in to that kind of micro-optimization and complexity, but it would've been better to spare the rest of us who have different concerns. I don't want to become a string expert to build a filename, and the default implementation could (at least conceptually) be always on the heap for all I care. Go one step further and implement the "small string optimization" (Alexandrescu's fbstring), and you'd probably get back most of the performance without nearly the complexity.

> Arrays are indexed by usize

I've spent the last half hour trying to troubleshoot why this line hangs:

    let aa = vec![1u8; 10e9 as usize];
However, I'm at home using the Ubuntu under Windows thing, so maybe there's some bad mojo between Rust and my less than usual setup. (If you're interested, I have 32 Gigs of memory, and the equivalent C malloc and memset code runs just fine, so I don't think Rust is doing the right thing here...).

Anyways, I wanted to test the commented out line, but I'm stuck for now. If that line would work, I'll admit I was wrong, but I think the uncommented line shows a similar complaint.

    for ss in 0..63 {
        //print!("{}\n", aa[1<<ss]);
        print!("{}\n", 1<<ss);
    }
> It's _unconstrained_ numbers that default to i32, not anything without a suffix.

Looking at the present and the future, why is that a sensible default? Both x64 and ARM are going to use a 64 bit integer register for the operations, and many of those operations are going to be 1-clock throughput. You can probably find a counter example, but 32 bit integers aren't generally faster than 64 bit ones.


> I would prefer a str be a str be a str, regardless of how you got it. Lowercase type-name and fundamental like an integer.

The lowercase type names are reserved for primitives. The String type is not a primitive but a comprehensive data structure, hence the capital S. The String type contains a `str` primitive though, along with size information.

> I don't want to become a string expert to build a filename, and the default implementation could (at least conceptually) be always on the heap for all I care.

Is it that hard to understand that when you create a string, you will create it as either a `String` or `PathBuf`? File methods are designed to automatically convert input parameters into a `&Path` so it doesn't matter what string structure you provide.

There is also no way (currently) to create a stack-allocated string with the standard library out of the box. You can do this with crates like `arrayvec` though. It's very much opt-in for that performance.

    let path = String::from("/tmp/file");
    let mut file = File::open(&path).unwrap();

> Looking at the present and the future, why is that a sensible default? Both x64 and ARM are going to use a 64 bit integer register for the operations, and many of those operations are going to be 1-clock throughput. You can probably find a counter example, but 32 bit integers aren't generally faster than 64 bit ones.

No need to use a 64-bit integer when you only need a 32-bit integer. With vector instructions you can also fit twice as many 32-bit integers as 64-bit ones into a register and operate on all of them in a single cycle. There's also no need to pay the extra memory cost.


> so I hope you won't try to educate me

No, I'm interested in what tradeoffs you would have made differently. I now understand. Thanks! (I disagree, but at least I understand.)

> why this line hangs:

It compiles and runs effectively instantaneously for me on Ubuntu under Windows as well, so that's very strange. Maybe file a bug?

> I think the uncommented line shows a similar complaint.

Yes, there's no constraint on that literal, so it's going to be an i32. When we made this decision, we did some analysis; basically, unconstrained numbers almost never showed up in real-world programs, mostly just in tests, toy programs, and documentation. It should be a rare thing. YMMV.

> why is that a sensible default?

Your assertion about the speed was the opposite of what was asserted while we had the discussion, basically. And not everybody is running on 64-bit hardware, so it's a broader default.


> It compiles and runs effectively instantaneously for me on Ubuntu under Windows as well, so that's very strange. Maybe file a bug?

Follow-up: I tried it with -O (don't know why I didn't think of that earlier), and it runs fine. So maybe the debug version is just generating terrible code, initializing by iterating through 10 billion bounds checks or something?

Anyways, more importantly, it works as I would like and does not behave as a 32 bit integer. I think I understand what you mean by "constrained" now. And clearly, I was wrong.

However, if most un-suffixed integers in real-world programs will become constrained (as you claimed), this further confuses me why i32 is the unconstrained choice. It doesn't seem like something so rare could be enough of a performance problem to justify being anything but the largest supported size.


Weird, it was fine for me with and without -O. Nothing about that code should be doing bounds checks, as it's just allocating an array.


Version: rustc 1.13.0 (2c6933acc 2016-11-07)

I remember installing it by cut and pasting one of the "curl ... | sh" commands there.

> Nothing about that code should be doing bounds checks, as it's just allocating an array.

I didn't dive into the macro definition for vec!, but I assume there is a loop in there to Copy the initialization element 10 billion times. I think you guys do bounds checking on the lower level reference to a slice that Vec uses. But I really don't know. If it's not that, then it was hanging or spinning doing something else. (Debug version of your memory allocator?)


> basically, unconstrained numbers almost never showed up in real-world programs, mostly just in tests, toy programs

The fact that C and C++ compilers generally chose to leave int at 32 bit on 64 bit platforms, combined with the standards requiring "usual promotions" for smaller types to go to int bites me all the time. I'm very happy that Rust dodges the promotions problem altogether, and I'm sorry if I'm wrong about the array subscripting thing (does the snippet I provided panic at 1<<32 or 1<<34?).

> And not everybody is running on 64-bit hardware, so it's a broader default.

That argument could be used to justify 8 or 16 bit integers... :-)


> (does the snippet I provided panic at 1<<32 or 1<<34?).

Overflow is a "program error", and in debug builds, is required to panic. In other builds, if it does not panic, it's required to do two's complement overflow. Rustc currently just overflows, but in the future, we'll see.
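
A tiny illustration of that behavior, using current standard-library methods:

    fn main() {
        let x: i32 = i32::MAX;
        assert_eq!(x.checked_add(1), None);      // overflow reported explicitly
        assert_eq!(x.wrapping_add(1), i32::MIN); // two's complement wraparound
        // A bare `x + 1` would panic in a debug build and wrap in a release build.
    }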

That's true except our 16 bit support is nonexistent at the moment :)


> That's true except our 16 bit support is nonexistent at the moment :)

If you pay attention to your usage statistics, I'll bet you drop 32 bit support before the May 2025 deadline we were discussing in the other thread.


We'll see!


> But I really like function overloading, generic operators that I can overload from the left and the right, integer parameters for my templates/generics, copy semantics as the default [etc.]

I do too, and I'd like to think that rust needs all those things to be a replacement, but realistically I think that the only killer feature that rust is still missing is reasonable interoperability with C++ (which admittedly might require implementing a few of those features).

Disclaimer: I have been following rust since Graydon initial announcement, but I have yet to write a single line of code in it.


What do you mean by "reasonable?" There are several crates that provide inline C++ macros on both nightly (rustcxx) and stable (rust-cpp). I haven't used the latter but the former works really well (albeit using a gcc specific feature). I usually break out the C++ code into a wrapper (unless its a few lines) to make it more idiomatic Rust but at the end of the day, there aren't that many hoops to jump through.


> and so I'm stuck with C++

If you're stuck with C++, might as well do it safely:

(shameless plug) https://github.com/duneroadrunner/SaferCPlusPlus


I looked briefly at your code. I don't believe it is possible to be compatible with the STL and safe in the way the Rust guys intend.

I might be wrong though. What happens with your vector in this code?

    using namespace mse::mstd;
    vector<double> data(10);
    double& dangling = data[0];
    data.resize(100000);
    double crashing = dangling;
You use a lot of typedefs, so I couldn't tell for sure, but I think your operator[] returns a C++ reference right?

The problem here is there is only one operator[] for both reading and writing. This is a simple contrived example, and taking a reference like that looks artificial, but there are a lot of other ways in real programs to stumble on to this. (I don't think it's as bad as the Rusties do, but I stumble into this bug once or twice a year...)

The Rust folks seem to believe you need a borrow checker to solve this problem, but I think that a different container library in C++ could do the trick. For instance, favoring value copies instead of references, and returning a proxy object from operator[] instead of a reference.


Native C++ references are technically unsafe, so code that uses them would not qualify as "strict" SaferCPlusPlus code. In the case of your example, the "double&" is technically not kosher. The easiest way to make it safe would probably be to use a (safe) iterator instead of a native reference. So instead of

    double& dangling = data[0];
you could make it

    auto not_dangling_iter = data.begin();
    // not_dangling_iter += 0;
C++ references are the one unsafe element that does not have a "compatible" safe replacement. Unfortunately, you have to convert your references to pointers (or iterators). I don't think there is a way to create a "safe" reference with an interface compatible with native references. Apparently C++ will at some point add the ability to overload the dot operator, but I'm not sure that will be enough to be able to emulate C++ references.

And while I can overload the & (address of) operator to "prevent" you from getting a native pointer to a "safe" object, I don't know if there's a way to prevent you from getting a native reference. If you wanted to somehow enforce a prohibition on the use of unsafe C++ elements (like references), that would probably require some sort of static tool that is not yet available. But should be fairly straightforward to implement, I think.

But if you just want some confidence in the safety of the code you write, it doesn't take much effort to reliably avoid using C++'s unsafe elements.


> returning a proxy object from operator[] instead of a reference.

Oh yeah, maybe. But that would still require the ability to overload the dot operator, wouldn't it? And how would you know when to deallocate the proxy object? And presumably there would be some run-time overhead. Hmm, I don't know if it wouldn't be more practical to create a static tool (or "precompiler") to automatically convert references to (safe) pointers (or iterators).


Yes, the dot operator is a headache. As soon as you march down the road of a precompiler, you're off to building a new language. I think C++'s grammar is too much of a mess to just tweak the parse tree reliably. I suspect there really isn't a way to win at this - every workaround is partial and involves compromise.


You know, while C++ references are technically unsafe, there is TRegisteredRefWrapper<> [1]. It's a safe version of std::reference_wrapper. Which kind of acts like a reference. So, if you don't mind me using std::strings instead of doubles, your example could be rewritten like

    mse::mstd::vector<mse::TRegisteredObj<std::string>> data(10);
    data[0] = "some text";
    mse::TRegisteredRefWrapper<std::string> dangling = data[0];
    data.resize(100000);
    try {
    	std::string crashing = dangling;
    }
    catch (...) {
    	// expected exception (not a segfault)
    	int q = 3;
    }
Does that work for you? I'm not an expert on std::reference_wrapper, so I'm not sure when it can and cannot substitute for a reference. (Btw, if it's a little verbose for you, there are shorter aliases available. Just search for "shorter aliases" in the header files.)

[1] https://github.com/duneroadrunner/SaferCPlusPlus#tregistered...


Rush hour traffic is due to human drivers making bad decisions. They knee-jerk the brakes just in case, they rubber neck by accidents, they take too long to accelerate, they try to cut other people off from merging, etc etc etc...

Computer drivers could make much better use of the existing bandwidth on freeways. I'm guessing 6X better, since my 15-minute drive takes 1.5 hours during rush hour.

It could be a long time before we ever have enough automated cars to realize this benefit. But when that day comes, we could safely increase speed limits too (computers generally have better reaction times).


Taxis are usually disgusting too. I won't take a cab because I frequently feel like I need to shower after sitting in one. I rate Uber drivers almost exclusively by their cleanliness, which is usually very good.


Taxis can be dirty, but there is nothing necessarily forcing this state of affairs, and as you say, at least one taxi company maintains a higher standard.

Still, it is not clear that making the car self-driving will make the taxi cleaner on average.

