Hacker News

I suppose Joel does a better job explaining it than I do, so I'll just quote him verbatim.

"In other words, the more information about what code is doing is located right in front of your eyes, the better a job you’ll do at finding the mistakes. When you have code that says

    dosomething();
    cleanup();
"… your eyes tell you, what’s wrong with that? We always clean up! But the possibility that dosomething might throw an exception means that cleanup might not get called. And that’s easily fixable, using finally or whatnot, but that’s not my point: my point is that the only way to know that cleanup is definitely called is to investigate the entire call tree of dosomething to see if there’s anything in there, anywhere, which can throw an exception, and that’s ok, and there are things like checked exceptions to make it less painful, but the real point is that exceptions eliminate collocation. You have to look somewhere else to answer a question of whether code is doing the right thing, so you’re not able to take advantage of your eye’s built-in ability to learn to see wrong code, because there’s nothing to see."

https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...



> dosomething();
> cleanup();

This scenario is contrived. Use RAII and you never have to worry about whether cleanup() is called. You can even write, with some macros, something like

    FINALLY(cleanup());
    dosomething();
And the right thing happens all the time automatically. But you shouldn't even have to use something like this, because cleanup should be implicit in the types you're using, e.g., unique_ptr.

> But the possibility that dosomething might throw an exception means that cleanup might not get called.

You shouldn't care whether dosomething() throws: you should be using the language's scoped cleanup facilities to clean up on scope exit anyway, and doing that improves code clarity whether or not you use exceptions.

You're doing the wrong thing and then complaining that exceptions make code unclear. It's like complaining that a car doesn't float after you've driven off a dock.

Besides: exactly the same bug can occur in Rust, because panics. On the panic path, your cleanup() won't be called. And with "?", it's also easy to forget to call cleanup(), because "?" acts just like rethrowing an exception.

Exceptions, in practice, produce clear code. All the confusion scenarios people sometimes claim arise from the use of exceptions are just contrived.


Question about proper destructor/RAII use:

C++ unfortunately falls over when you throw an exception from a destructor, and it also doesn't let us return error codes from destructors (not that any other language does).

How do you deal with this exceptional destructor problem in practice? Or does it not come up enough to warrant concern?


Destructors shouldn't fail: that's why they're implicitly noexcept in C++11. (You can, however, throw inside a destructor so long as you catch before returning.)

In practice, it's not a problem. If you really want to do something fallible on every success path, you can use an IIFE or a named function to isolate everything before the thing you want to do on the success path.

What cases do you have in mind?


I ran into something like this (very simplified):

  MyRAIIFile temp_file = ...;
  AddSomeStuffToFile(&temp_file); // threw something
where MyRAIIFile has a destructor:

  ~MyRAIIFile() {
    underlying_file.close(); // also threw something
  }
So, AddSomeStuffToFile threw an exception, and the destructor also threw an exception, meaning I had two exceptions in flight, which the runtime handles by calling std::terminate, and which caused some weirdness in our case. It took many hours to track down this particular problem...

I can see that one correct answer would be to put a try-catch around the .close() call, but that's the wrong place for that logic in my case; I want the caller of the destructor to decide what to do to recover. Even Java's checked exceptions would cause chaos here. Only returning an error in the destructor's return type (with a must-use annotation of course) would force me to handle this situation at compile time... but C++ can't do that.

Any advice for this situation?


The problem is that your close is fallible in the first place. In general, resource reclaim code should always be infallible. When you deallocate a resource, be it a file descriptor or a chunk of memory, you're returning something to the system. You're providing a gift. The kernel should never refuse this gift.

Linux confuses the issue somewhat. close(2) can report errors, and I'm guessing that when your close() throws an error, it's just propagating something it got from the operating system.

Thing is, close(2) errors aren't really errors. There are three cases: 1) close(2) succeeds; 2) close(2) fails with EBADF; and 3) close(2) fails with some other error. In case #1, there's no problem. In case #2, your program has a logic bug and you should abort, not throw. In case #3, the close operation itself actually succeeded, and the kernel is just reporting some error that occurred during file writeback in the meantime.

Errors in case #3 should be ignored. If you care about file durability, call fsync(2) before close. Catching and propagating IO errors from close(2) ensures nothing, since the kernel is allowed to defer potentially-failing IO operations until after the close!


For case #2, isn't it a bit presumptuous of the MyRAIIFile to make the decision to abort the entire program? It would be nice if the destructor could report the error upward to whoever called it, so they can decide whether to log or abort.

When you say "In general, resource reclaim code should always be infallible", that sounds kind of optimistic (as this example shows, cleanup code is fallible); the question is just where we handle it. So, should I instead read this statement as "destructors shouldn't report errors"? And if so, is that because of the C++ limitation that destructors can't return values, or is it fundamentally a best practice unrelated to the language?


> isn't it a bit presumptuous of the MyRAIIFile to make the decision to abort the entire program?

No. Closing an invalid file descriptor is a logic bug. It's just as bad as an invalid pointer. When you notice one of these, you crash, because to continue means operating in some unknown and potentially dangerous state.


The usual advice is to add a TryClose method to your MyRAIIFile class that can signal failure, and that also keeps the object around in some well-defined state. This doesn't force you to handle the situation properly, but at least it makes it possible to do so.


> Besides: exactly the same bug can occur in Rust, because panics. On the panic path, your cleanup() won't be called. And with "?", it's also easy to forget to call cleanup(), because "?" acts just like rethrowing an exception.

It's harder. If you're typing ?, you're aware that you're dealing with a result or option (with the values of the error case clearly documented in the type system), and aware of the fact that you might be returning early, and have forced your own function's signature to also be a result or option.

In contrast, I've had code where finger->Position.X occasionally threw, because a finger release invalidated the old finger ID before I even had the chance to process the finger-up event and realize I shouldn't query that finger. Did you know you need to wrap every finger position query in a try/catch? I didn't, so this was just a rare crash bug for a while. What exception did I catch? I don't remember (not a null reference nor an access violation exception), and it's not documented, so in new code I guess I'd just write try { ... } catch (Platform::Exception^) {}, despite the fact that we all know catch-all do-nothing statements are terrible. At least I can minimize the scope!

For bonus points, the exception handling overhead was heavy enough that the framerate of the game we were working on would stutter if you pawed at the touch screen. Better than crashing, but still terrible. Without source access to the throwing code, and with touch release events being delayed, this was still the least horrible option available. Yaaaay.

Was this a nasty edge case and not indicative of all use of exceptions? Yes. Do I eventually encounter such a nasty edge case in most, if not all, large scale projects involving exceptions in practice? Also yes. Often enough that it influences my preferred error handling mechanisms, even.

Do I encounter such nasty edge cases in return-based error handling? Not in the wild. The use of error codes makes it much clearer what can fail. On the rare occasion I've encountered something similar, it's been while harshly stress-testing cross-platform API abstractions in contrived tests, when disambiguating between multiple error codes. Occasionally the underlying API does something strange and returns an unexpected error code (either a unique code, or one of the usual codes in unusual circumstances) and I take a suboptimal error handling path.

Usually, at worst, the end user would've needed to retry something, even if I hadn't caught and worked around the underlying API weirdness.

> Exceptions, in practice, produce clear code.

Do you catch NullReferenceExceptions instead of writing basic if conditionals to check values for null? Probably not. For such unexceptional cases it's less clear (which reference, exactly, was null?), and the performance overhead is often unacceptable.

For more exceptional uses of exceptions, you can sometimes get clearer code. It's often brittle and mishandles rare edge cases, but it's clearer for the happy path at least. But I will happily sacrifice a little of that clarity to make that code handle the edge cases properly instead of crashing - because those exceptional cases aren't really all that exceptional after all.


If you're wrapping lots of things in try/catch, you're doing something very wrong in the first place. People who think you need to do that are using exceptions wrong. It's no wonder that they come to dislike them.

Your example sounds like a badly structured piece of code. If your finger-query code is racing against your event processing code, that's a bug. You violated the function's contract. The exception was telling you about the bug. Don't shoot the messenger.

Maybe you wanted to write the moral equivalent of optional<Coordinate> getFingerPosition(FingerID finger). Nothing stops you from using the optional-value pattern in exception-using code.

I'm sympathetic. Should contracts be more explicit in code? Sure. In C++, the default should be noexcept, and it should be a compiler error to call a non-noexcept function from a noexcept one. But that's an argument for doing exceptions better, not an argument for abolishing them.

> Do you catch NullReferenceException s instead of writing basic if conditionals to check values for null?

If a pointer is null where it's not supposed to be null, you crash. That's a contract violation, and failing fast in the face of contract violation is the right thing to do. Are you one of those people who writes out null checks at the start of every function? I dislike code like that very much.


> If you're wrapping lots of things in try/catch, you're doing something very wrong in the first place. People who think you need to do that are using exceptions wrong. It's no wonder that they come to dislike them.

I agree something has gone terribly wrong. But in this case, it's the initial API design. Not my fault!

> Your example sounds like a badly structured piece of code. If your finger-query code is racing against your event processing code, that's a bug. You violated the function's contract. The exception was telling you about the bug. Don't shoot the messenger.

That's what it sounds like, and if the original API was sane you'd be correct. The original API was not sane. As I recall, it was while handling finger-moved events that the position query threw, because a not-yet-received finger-up event had already invalidated the finger.

One could say that I violated the contract by checking the position of released fingers. But that contract was designed in such a way as to be impossible to consistently fulfill. It's terrible API design, and arguably a bug, but not my bug - I just wrote the workarounds.

One can share blame with the API authors, but this is a recurring pattern with APIs that use exceptions, so I'm willing to share the blame with exceptions too: they seem prone to misuse.

> Maybe you wanted to write the moral equivalent of optional<Coordinate> getFingerPosition(FingerID finger). Nothing stops your using the optional value pattern in exceptional code.

The API I exposed (wrapping the underlying system API) was similar. Well, I didn't use optional, because my API wasn't prone to race conditions despite the underlying system API being prone to race conditions. (Returning a position that's been stale for a few milliseconds seemed acceptable in that case.)

Of course, this had the performance problems I mentioned earlier, but those were fundamentally unfixable.

> I'm sympathetic. Should contracts be more explicit in code? Sure. In C++, the default should be noexcept, and it should be a compiler error to call a non-noexcept function from a noexcept one. But that's an argument for doing exceptions better, not an argument for abolishing them.

Except I've yet to see exceptions done well. Java tried to go a step further with checked exceptions, but that turned out pretty poorly too. Until someone actually does do them better, that's an argument for abolishing them.

EDIT: Particularly topical in this thread - almost every single exception system I've ever used has caused me grief trying to translate errors across C ABI boundaries at some point or another. Per https://doc.rust-lang.org/nomicon/unwinding.html, unwinding across FFI boundaries is undefined behavior - and I've seen some really nasty bugs from C++ exceptions, C longjmps, C# exceptions, Ruby exceptions, etc. all trying to unwind over ABI boundaries too. And then I get to try and sanitize a whole slew of call sites to not invoke undefined behavior. Yuck.

> If a pointer is null where it's not supposed to be null, you crash. That's a contract violation, and failing fast in the face of contract violation is the right thing to do. Are you one of those people who writes out null checks at the start of every function? I dislike code like that very much.

There are plenty of cases where optional-and-missing values are represented with null in most null supporting programming languages. I'll typically lean towards the null object pattern to get rid of the null checks where sane and possible, but sometimes null checks are the sane and simple solution. This is the case I was asking about.

But sure, let's cover cases where it's not supposed to be null too. I won't just check/bail/ignore, that's terrible for the debugging and bug finding experience. But I probably don't want megs of minidump and a confused end user's error report, just because they installed a mod pack that was missing a sound asset resulting in a null pointer somewhere, either. Just crashing is also unacceptable - I want the error report, which I might not get, and the gamer is probably happier with a missing sound effect instead of a crash.

Instead, I'd rather insert a null check. For the dev side, you can have your check macros insert an automatic breakpoint, log (for the mod makers), report the error via sentry.io (for catching released bugs in your unmodded game), or even explicitly crash for your internal builds (so you and QA can find bugs). Just as easy to debug as a crash (if not easier thanks to smart error messages), much less end user angst.


> Instead, I'd rather insert a null check. For the dev side, you can have your check macros insert an automatic breakpoint, log (for the mod makers), report the error via sentry.io (for catching released bugs in your unmodded game), or even explicitly crash for your internal builds (so you and QA can find bugs). Just as easy to debug as a crash (if not easier thanks to smart error messages), much less end user angst.

I really wish it were feasible to write more code as out-of-process components. The Right Way to handle unreliable mods is to just run them in their own process where they can't hurt anything. I really like that COM tried to make this approach easy. I feel like we've regressed since COM's heyday.


This is my reason for hating exceptions. And impure functions. But I frequently use impure functions for interop or performance reasons. I haven't found a reason to throw exceptions.


Don’t write C++ like C and cleanup will be guaranteed by RAII.


Yep, RAII is an eminently reasonable way to address the narrow question of cleanup. In fact, that's how idiomatic Rust code would handle it, too.

That still leaves the general question of how to tell what errors a function may alert you to. Sanity suggests including it in a type signature, either in the return type or with checked exceptions. From what I've seen, the much more common solution in exception-using code is to ignore or forget the possibility of error.


> Sanity suggests including it in a type signature, either in the return type or with checked exceptions

Yet basically all of Rust's std::io propagates just io::Result<T>, which is Result<T, io::Error>, where io::Error is a generic error that could be anything vaguely related to the outside world. It's about as useful as writing "throws Exception" in Java. C# gets along fine in practice using unchecked exceptions everywhere.

Your point would be stronger if, say, File::create specifically said it could fail only with things like disk-full, file-exists, and so on. But all it actually says is "this can fail". The information richness you're describing does not exist in practice.


The problem with File::create is that it must interoperate with the existing OSes. And it turns out, File::create can fail for many many reasons. Your file can be on disk, it can be remotely accessible on the network, it can be virtual, etc... And the OS can return just about any error. Linux lists the errors returned on each operation, but they're really just advisory and not exhaustive.

So for io::Error, we're basically constrained by what the OS provides. Not much Rust can do about it, I'm afraid.


that just means the language isn't being understood, in a very basic and very dangerous way.

you don't need to check the call tree. without `finally`, it is not "always", period. we can debate at length about if that leads to desirable outcomes or not, but the behavior is unambiguous.


By that same logic we should not write multithreaded code, ever. It is still a good idea to strive for code where wrong things look wrong (which largely requires collocation), but you can't put that above everything else.


I'm not sure I follow. Sure, we shouldn't just have threads blindly working from the same memory, but it seems perfectly reasonable to use tools like Go channels or MPSC queues to write sane code.


Everything can fail. Anyone who knows Java knows the difference between “a; b” (b depends on a) and “try {a} finally {b}” (b always runs). It’s not even worth the effort to prove that dosomething can fail today, just assume it will after someone changes something.


This is precisely the power of the Rust model of errors: nothing can fail unless either a) the type signature says it can, or b) it's not sane to try to recover from the failure.



