
> I think the reason you don't see this as a real language problem is because of status quo bias.

If we could go back and revise C before it became widespread, it'd probably be wise to do so. In a cross assembler and scripting language I built, I enforced {}.

> What a terribly stupid thing for your language not to help you with!

Agreed! Anything that is an error should be caught. I even think it'd be nice if the compiler alerted you to dead code, which would have caught this bug.

> We can extend that to literally every error-mitigation tool we have.

Or we can take a balanced approach. Recognize that C is a low-level language. That it has very compelling benefits, but that they come with tremendous risks. We can approach the language with reverence. We can fix its glaring bugs without neutering it of its power or terse expressiveness.

Some people can't handle it. And they should stick to higher-level languages, or at least stay away from backbone security libraries.

> From the other direction, look at Haskell.

I can't really evaluate it, as I've never come in contact with a single application written in the language. It's a wonderfully philosophical language, but real-world usage indicates it's not at all practical.

> making your language less safe for the express purpose of making it less safe is just crazy

Yes it is.

> That's what you're doing when you keep around gotchas to keep out the noobs.

I am not suggesting we allow single-statement if's to bar beginners from using C. I am suggesting that if you want to program in the language, you should learn the rules. Preferably before you start working on a crypto library. I know there are countless insane, esoteric edge cases, especially if you go on to C++. But this is something I learned on the very first day I started programming in C.

How much expressivity and/or power should we take away from the language before we finally blame something on the programmer instead? I'm not saying "always blame the programmer", I am saying there has to be a point where you say, "maybe it's not C, maybe it's the author who is to blame here."

> Don't be the guy who thinks shaving with a straight razor makes him a real man.

I don't program in assembler :P (well, at least not on modern systems where C is an option.)

> Oh, but she used an uncommented fallthrough. Fire her!

It would be more, "Oh, but she forgot a break; statement, leading to an unintentional fallthrough that exposed millions of users' credit card details to MitM attacks. She didn't review her code, or run it through an analyzer, or run it through the test suite. Write her up. If it keeps happening, fire her."

I'm actually arguing in favor of allowing the uncommented fallthrough. Not saying it's best practice (if I wrote the language, case labels would default to breaking without an explicit fallthrough keyword), but a C programmer should damn well know when they see that code that it is going to fall through. Because that's how switch/case works.

> And next time you discover a baffling bug that, in the final analysis, should have been really obvious, I hope your peers cut you more slack than you cut this person.

I think it should scale based on the importance and consequences of your actions, or lack thereof. I screw up a lot in the little gaming apps I write. I also don't get paid for it in any way. If my screw-up brought down production ordering for a day, I'd expect to be disciplined for that.



I agree you should learn the rules, because that will make the software work better. But if a rule turns out to be easy to accidentally break, we should change the rules. (And we don't have to change the rules of C per se; we can just use a static code analyzer or something.) Languages and compilers aren't static things to take as constants; they're just pieces of software. In terms of whose fault it is, I think it's not so much that anyone thinks the developer is completely free from blame in some abstract sense, just that if you want to fix the problem, you'd get more mileage out of fixing the tools than out of disciplining the coder. Blame is a complicated, squishy thing, so in a sense it's hard to argue about how to allocate it, but to the degree that you want to take concrete action to prevent the issue here, the tools seem like the right lever. I think that might be the core of our disagreement.

> I think it should scale based on the importance and consequences of your actions, or lack thereof.

I was worried you thought that. I'm not denying there's some truth to it; the bar for carefulness should indeed be higher when you're working on critical code. But it's tricky. Think of it from the perspective of someone whose full-time job is working on an SSL library. First, you now have a huge downside risk, and have to be worried about every action you take. Are you rewarded proportionally, or do you just have a worse job than your carefree peers? Even if we do compensate these developers for that, we're now attracting risk-tolerant people to work on risk-intolerant code, which seems precisely backwards. Second, working with a constant fear that you're going to break things is exhausting and unsustainable. I don't know if you've done it or not, but I can tell you it's kinda brutal, and it's exactly why having a comprehensive set of unit tests makes for happier developers. Adding the stress of potential punishment for casual errors would make the working environment untenable, and if you could convince anyone to actually do the job, you would quickly burn them out. They might even produce more errors. For both those reasons, the way most organizations handle this is not by throwing lots of personal responsibility for errors on the developer, but by adding process: code reviews, extra testing, and of course, better (or at least additional) tools. It's the recognition that humans make mistakes and don't magically stop making them when the stakes are high.

Finally, it's not clear to me that being more careful would prevent this kind of slipup. Surely the developer in this case knows the relevant syntax rules. They just didn't notice. By their nature it's hard to notice things you don't notice, even if you walk around with a mental alarm bell telling you to notice things. I'm not sure it's really any less likely than it is in your games. (Aside: I'm tempted to look up the literature on error rates, because I suspect this kind of thing is well studied.) So how would disciplining them help?

If there's one thing that just seems silly and broken on Apple's part, it's why they didn't use a static code analyzer to catch stuff like this, which to me is an easy, obvious step. I could certainly see the person responsible for that kind of thing being held personally responsible for this. Much more obvious than nailing the committer for a copy-paste bug.



