
A long overdue feature.

Though I do wonder what the chances are that the C subset of C++ will ever add this feature. I use my own homespun "scope exit" which runs a lambda in a destructor quite a bit, but every time I use it I wish I could just "defer" instead.


Never, you can already do this with RAII, and naturally it would be yet another thing to complain about C++ adding features.

Then again, if someone is willing to push it through WG21 no matter what, maybe.


C++ implementations of defer are either really ugly, thanks to using lambdas and explicitly named variables which only exist to provide a scoped object, or they depend on macros which need to have either a long manually namespaced name or you risk stepping on the toes of a library. I had to rename my defer macro from DEFER to MYPROGRAM_DEFER in a project due to a macro collision.

C++ would be a nicer language with native defer. Working directly with C APIs (which is one of the main reasons to use C++ over Rust or Zig these days) would greatly benefit from it.


Because they are all the consequence of holding it wrong, avoiding RAII solutions.

Working with native C APIs in C++ is akin to using unsafe in Rust, C#, Swift..., it should be wrapped in type safe functions or classes/structs, never used directly outside implementation code.

If folks actually followed this more often, there would be far fewer CVE reports in C++ code caused by calling into C.


> Because they are all the consequence of holding it wrong, avoiding RAII solutions.

The reason why C++ is as popular as it is is in large part due to how easy it is to upgrade an existing C codebase in-place. Doing a complete RAII rewrite is at best a long term objective, if not often completely out of the question.

Acknowledging this reality means giving affordances like `defer` that allow upgrading C codebases and C++ code written in a C style easier without having to rewrite the universe. Because if you're asking me to rewrite code in a C++ style all in one go, I might not pick C++.

EDIT: It also occurs to me that destructors also have limitations. They can't throw, which means that if you encounter an issue in a dtor you often have to ignore it and hope it wasn't important.

I ran into this particular annoyance when I was writing my own stream abstractions - I had to hope that closing the stream in the dtor didn't run into trouble.


You can use a function try block on the destructor; additionally, thanks to C++ metaprogramming capabilities, many of these handler classes can be written only once and reused across multiple scenarios.

Yes, unfortunately that compatibility is also the Achilles heel of C++: so many C++ libraries are plain old C libraries with extern "C" { ... } added in when using a C++ compiler, and that is also why so many CVEs keep happening in C++ code.


If I'm gonna write RAII wrappers around every tiny little thing that I happen to need to call once... I might as well just use Rust and make the wrappers do FFI.

If I'm constructing a particular C object once in my entire code base, calling a couple functions on it, then freeing it, I'm not much more likely to get it right in the RAII wrapper than in the one place in my code base I do it manually. At least if I have tools like defer to help me.


if you do it once - why do you care about "ugly" scope_exit? btw, writing such wrappers is easy and does not require a lot of code.

What do you mean with '"ugly" scope_exit'?

Do you mean why I care that I have to call the free function at every exit point of the scope? That's easy: because it's error prone. Defer is much less error prone.


your words... > C++ implementations of defer are either really ugly

I agree with @pjmlp - you need to write wrappers around the C api.

But if you.. > If I'm gonna write RAII wrappers around every tiny little thing that I happen to need to call once

use them just once.. so, why care about ugliness, just write ugly code just once? Code can't be perfect.


Not to mention that the `scope_success` and `scope_failure` variants have to use `std::uncaught_exceptions()`, which is hostile to codegen and also has other problems, especially in coroutines. C++ could get exception-aware variants of language defer.

What C++ really needs is an automated way to handle exceptions in destructors, similar to how Java does in its try-with-resources finally blocks.

While not automated, you can make use of function-try-blocks, e.g.:

    struct Example {
        Example() = default;

        ~Example()
        try {
            // release resources for this instance
        } catch (...) {
            // take care of what went wrong in the whole destructor call chain
        }
    };
-- https://cpp.godbolt.org/z/55oMarbqY

Now with C++26 reflection, one could eventually generate such boilerplate.


What I’m thinking of is that the C++ exception runtime would attach exceptions from destructors to any in-flight exception, forming an exception tree, instead of calling std::terminate. (And also provide an API to access that tree.) C++ already has to handle a potentially unlimited amount of simultaneous in-flight exceptions (nested destructor calls), so from a resource perspective having such a tree isn’t a completely new quality. In case of resource exhaustion, the latest exception to be attached can be replaced by a statically allocated resources_exhausted exception. Callbacks like the old std::unexpected could be added to customize the behavior.

The mechanism in Java I was alluding to is really the Throwable::addSuppressed method; it isn’t tied to the use of a try-block. Since Java doesn’t have destructors, it’s just that the try-with-resources statement is the canonical example of taking advantage of that mechanism.


I see, however I don't envision many folks on WG21 voting that in.

Various macro tricks have existed for a long time but nobody has been able to wrap the return statement yet. The lack of RAII-style automatic cleanups was one of the root causes for the legendary goto fail;[1] bug.

[1] https://gotofail.com/


I do not see how defer would have helped in this case.

People manually doing resource cleanup by using goto.

I'm assuming that using defer would have prevented the gotos in the first place, and the bug.


To be fair, there were multiple wrongs in that piece of code: avoiding typing with the forward goto cleanup pattern; not using braces; not using autoformatting that would have popped out that second goto statement; ignoring compiler warnings and IDE coloring of dead code or not having those warnings enabled in the first place.

C is hard enough as is to get right and every tool and development pattern that helps avoid common pitfalls is welcome.


The forward goto cleanup pattern is not something "wrong" that was done to "avoid typing". Goto cleanup is the only reasonable way I know to semi-reliably clean up resources in C, and is widely used among most of the large C code bases out there. It's the main way resource cleanup is done in Linux.

By putting all the cleanup code at the end of the function after a cleanup label, you have reduced the complexity of resource management: you have one place where the resource is acquired, and one place where the resource is freed. This is actually manageable. Before you return, you check every resource you might have acquired, and if your handle (pointer, file descriptor, PID, whatever) is not in its null state (null pointer, -1, whatever), you call the free function.

By comparison, if you try to put the correct cleanup functions at every exit point, the problem explodes in complexity. Whereas correctly adding a new resource using the 'goto cleanup' pattern requires adding a single 'if (my_resource is not its null value) { cleanup(my_resource) }' at the end of the function, correctly adding a new resource using the 'cleanup at every exit point' pattern requires going through every single exit point in the function, considering whether or not the resource will be acquired at that time, and if it is, adding the cleanup code. Adding a new exit point similarly requires going through all resources used by the function and determining which ones need to be cleaned up.

C is hard enough as it is to get right when you only need to remember to clean up resources in one place. It gets infinitely harder when you need to match up cleanup code with returns.


In theory, for straight-line code only, the If Statement Ladder of Doom is an alternative:

  int ret;
  FILE *fp;
  if ((fp = fopen("hello.txt", "w")) == NULL) {
      perror("fopen");
      ret = -1;
  } else {
      const char message[] = "hello world\n";
      if (fwrite(message, 1, sizeof message - 1, fp) != sizeof message - 1) {
          perror("fwrite");
          ret = -1;
      } else {
          ret = 0;
      }
  
      /* fallible cleanup is unpleasant: */
      if (fclose(fp) < 0) {
          perror("fclose");
          ret = -1;
      }   
  }
  return ret;
It is near-universal in Microsoft documentation (but notably not actual Microsoft code; e.g. https://github.com/dotnet/runtime has plenty of cleanup gotos).

In practice, well, the “of doom” part applies: two fallible functions on the main path is (I think) about as many as you can use it for and still have the code look reasonable. A well-known unreasonable example is the official usage sample for IFileDialog: https://learn.microsoft.com/en-us/windows/win32/shell/common....


I don't see this. The problem was a duplicate "goto fail" statement where the second one caused an incorrect return value to be returned. A duplicate defer statement could directly cause a double free. A duplicate "return err;" statement would have the same problem as the "goto fail" code. Potentially, a defer based solution could eliminate the variable for the return code, but this is not the only way to address this problem.

Is that true though?

Using defer, the code would be:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        return err;
        return err;
This has the exact same bug: the function exits with a successful return code as long as the SHA hash update succeeds, skipping further certificate validity checks. The fact that resource cleanup has been relegated to defer so that 'goto fail;' can be replaced with 'return err;' fixes nothing.

It would have resulted in an uninitialized variable access warning, though.

I don't think so. The value is set in the assignment in the if statement even for the success path. With and without defer you nowadays get only a warning due to the misleading indentation: https://godbolt.org/z/3G4jzrTTr (updated)

No it wouldn't. 'err' is declared and initialized at the start of the function. Even if it wasn't initialized at the start, it would've been initialized by some earlier fallible function call which is also written as 'if ((err = something()) != 0)'

In many cases that's preferred as you want the ability to cancel the deferred lambda.

Just hope those lambdas aren't throwing exceptions ;)

I'm already using Matrix for a number of open source communities. It's fine as far as it goes.

However, from a community space point of view, it seems to be more similar to IRC than Discord. Much like an IRC server, there's a public list of channels and you join them individually. There is voice and video chat, but as far as I can tell it's only person to person without any voice and video conferencing like Discord has, and I've never actually tried them to see how well they work.

Some organizations, such as Mozilla, run their own Matrix server in order to corral the community into one place, but more often than not I see communities creating a single channel on the primary matrix.org server, and "expand" by adding more channels, IRC style.

As an IRC replacement, I think it's perfect. But if you are expecting something more similar to Discord in terms of functionality, you'll likely be looking elsewhere. Stoat is what I see most frequently touted, but I can't speak to it personally.


> Discord does have some user capture, but nothing like twitter's

More importantly, Discord's communities are siloed, private by default, and administered and moderated by human beings with almost no oversight from Discord proper.

There is no equivalent on Twitter. On Reddit, going dark makes you subject to administrative subreddit takeover. But if someone runs a Discord community that they want to migrate to another platform, they could easily lock the entire server to posting and post a link to the alternative community. Done.


It isn't siloed though, not truly - not in the way Teamspeak or Mumble used to be, at least. Discord's global friends list is what will keep people from abandoning it in droves, unfortunately, and until Teamspeak et al sort that out it isn't changing.

EDIT: Maybe I completely forgot how Teamspeak works. It seems like there is a global friends list, but I can't remember it being a thing back in the day (10+ years ago).


The friends list is inconsequential. It's for sending private messages to people you already know and met from a Discord server. Long running group chats are an aberration, people just start up micro-discords instead.

And that is what Discord alternatives will have to solve - the ease of setting up a new Discord "server" by any old random user is hard to beat in terms of convenience. Matrix is the only real alternative on that front.

However, if you have an established community and have at least a little hosting knowledge among the staff, the moat is shallow to nonexistent, and it's just a matter of how much of a pain in the neck Discord decides to be.


The discord servers my friends and I use are just for shit posting and using voice among like 10 of us. If it becomes annoying we can move to the next thing. We're all millennials. We can run whatever server if needed it's not a big deal.

If you meet somebody mid match in a game like Valorant or Overwatch, it's simple to give them a username and they can add you and you then choose to group voice call vs inviting them to a private server, especially before you know them very well.

Teamspeak, as far as I know, doesn't have a way to solve this.


I'll admit that this use case didn't really occur to me, because the signal to noise ratio is so damn bad in matchmade games these days. If I want to play a game on voice call with strangers, I go to the community space first and then organize a team there.

That being said, after thinking about it, I actually have done what you're talking about before - just not on Discord. When I find someone, I simply add them on Steam, PSN, or whichever account the game uses.


There's also really nothing to a community beyond its mods, its users, and maybe some bots. Reddit creates a record of EVERYTHING and in many ways those years of discussion are the sub more than the current users or mods alone. Discord is nothing like that, if you could get everyone on the same page a Discord clone would work just as well, and relatively seamlessly.

tl;dr Discord has a moat, but it's not very wide or deep.


That's not true. Plenty of Discord communities have dozens of channels with long-running post histories, pictures, FAQ content, beginner guides; server roles and titles, permissions, custom emoji, stickers, etc.

Migrating all of that stuff to a new service (which may not even support it all) would be a huge pain.


You can still find the essence of community on the traditional internet in places like invite-only discords, smaller mastodon instances, traditional forums, and spaces similar to Lobsters and Tildes.

...and not on Hacker News. Too many pseudo-anonymous jerks, too many throwaways, too much faith placed in gamified moderation tools.


Potentially, but those areas are also more and more getting leveraged to further identify and profile people for targeting - see the latest Discord scandal for example.


> But where are the AI features?? Gonna get left behind!

Obviously vim doesn't need AI, but one feature I really wish vim had was native support for multiple cursors.

It's the feature that lured me away to Sublime Text in the first place many years ago, and it's a pre-requisite for pretty much every editor I use these days, from VSCode to Zed.

There are plugins, but multicursor is such a powerful force-multiplier that I think a native implementation would be worthwhile.


The canonical answer to this request is as follows: if you need multi-cursor (or, worse, multi-cursor with mouse support) then you are doing something non-Vim way (aka: wrong way) and there is a better way to do it.

If you need multi-cursor to do manual search and replace in text, then don't, just do automatic search and replace, maybe scoped to a block. If you need multi-cursor for refactoring or renaming a variable across entire source file, then don't, use LSP plugin (or switch to Neovim) and do the proper refactoring action.

Sure, there are legit cases of using multi-cursor in Vim, but they are rare. So it's not worth putting it into Vim itself.


personally, I know I can use search and replace, but <ctrl-n>-n-n-c-replacement[0] is easier on my mind than the search&replace alternative

[0] I've been using vim-multiple-cursors for years, it's abandoned but still works ok most of the time.


Do you know about *? It puts the word currently under the cursor into the search history.

So you can do *ciw, type your replacement, then n.n.n. to do the rest.

Obviously LSPs are more powerful though.


Multi cursor is on the neovim roadmap https://neovim.io/roadmap/


Funny, I used multiple cursor a lot back when I used Sublime Text, but stopped needing them when I switched to Vim.


Vim (kind of) has it though it doesn’t render the cursors:

Ctrl-V, then move down the lines you want to edit, Shift-I to insert text on multiple lines at once.


There are plenty of ways to achieve workflows that can be done with multiple cursors even in plain Vim: macros, :norm, visual blocks, :s, etc.


I'm curious to know what kind of editing you do that you need this so much?


It's a pretty useful feature when writing code.


How does it work?


> Game studios, and everyone that works in the games industry providing tooling for AAA studios.

You know what else is common in the games industry? C# and NDAs.

C# means that game development is no longer a C/C++ monoculture, and if someone can make their engine or middleware usable with C# through an API shim, Native AOT, or some other integration, there are similar paths forward for using Rust, Zig, or whatever else.

NDAs mean that making source available isn't as much of a concern. Quite a bit of the modern game development stack is actually source-available, especially when you're talking about game engines.


Do you know what C# has and Rust doesn't? A binary distribution package for libraries with a defined ABI.


C? Never. I feel like that ship has sailed, it's too primordial and tied to too many system ABIs to ever truly go away. I think we'll see a lot of Rust or Zig replacing certain popular C programs and libraries, but I don't think C will ever go away.

C++ on the other hand? Possibly, though I think that it's just as much because of the own-goals of the C++ standards committee as it is the successes of Rust. I don't really consider Zig a competitor in this space because if you're reaching for C++, you are reaching for a level of abstraction that Zig is unwilling to provide.


They said the same thing about Fortran, COBOL, etc. Still "around" but not the de facto.


> Compiler vendors are free to chose what ABI stability their C++ implementations provide.

In theory. In practice the standards committee, consisting of compiler vendors and some of their users, shape the standard, and thus the standard just so happens to conspire to avoid ABI breakages.

This is in part why Google bowed out of C++ standardization years ago.


I know, but still go try to push for ABI breaks on Android.


For reference, here's where Zig's documentation lives:

https://ziglang.org/learn/

I remember when learning Zig, the documentation for the language itself was extensive, complete, and easily greppable due to being all on one page.

The standard library was a lot less intuitive, but I suspect that has more to do with the amount of churn it's still going through.

The build system also needs more extensive documentation in the same way that the stdlib does, but it has a guide that got me reasonably far with what came out of the box.


People do underestimate how nice it is when the language ref or framework/tool documentation is all on one web page. I can easily print it to PDF and push it to my iPad for reading.


> If you assume off the bat that someone is explicitly acting in bad faith, then yes, it's true that engaging won't work.

Writing a hitpiece with AI because your AI pull request got rejected seems to be the definition of bad faith.

Why should anyone put any more effort into a response than what it took to generate?


> Writing a hitpiece with AI because your AI pull request got rejected seems to be the definition of bad faith.

Well, for one thing, it seems like the AI did that autonomously. Regardless, the author of the message said that it was for others - it's not like it was a DM, this was a public message.

> Why should anyone put any more effort into a response than what it took to generate?

For all of the reasons I've brought up already. If your goal is to convince someone of a position then the effort you put in isn't tightly coupled to the effort that your interlocutor put in.


> For all of the reasons I've brought up already. If your goal is to convince someone of a position then the effort you put in isn't tightly coupled to the effort that your interlocutor put in.

If someone is demonstrating bad faith, the goal is no longer to convince them of anything, but to convince onlookers. You don't necessarily need to put in a ton of effort to do so, and sometimes - such as in this case - the crowd is already on your side.

Winning the attention economy against an internet troll is a strategy almost as old as the existence of internet trolls themselves.


I feel like we're talking in circles here. I'll just restate that I think that attempting to convince people of your position is better than not attempting to convince people of your position when your goal is to convince people of your position.


The point that we disagree on is what the shape of an appropriate and persuasive response would be. I suspect we might also disagree on who the target of persuasion should be.


Interesting. I didn't really pick up on that. It seemed to me like the advocacy was to not try to be persuasive. The reasons I was led to that are comments like:

> I don't appreciate his politeness and hedging. [..] That just legitimizes AI and basically continues the race to the bottom. Rob Pike had the correct response when spammed by a clanker.

> The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince about the legitimacy of your boundaries. They just are.

> When has engaging with trolls ever worked? When has "talking to an LLM" or human bot ever made it stop talking to you lol?

> Why should anyone put any more effort into a response than what it took to generate?

And others.

To me, these are all clear cases of "the correct response is not one that tries to persuade but that dismisses/ isolates".

If the question is how best to persuade, well, presumably "fuck off" isn't right? But we could disagree, maybe you think that ostracizing/ isolating people somehow convinces them that you're right.


> To me, these are all clear cases of "the correct response is not one that tries to persuade but that dismisses/ isolates".

I believe it is possible to make an argument that is dismissive of them, but is persuasive to the crowd.

"Fuck off clanker" doesn't really accomplish the latter, but if I were in the maintainer's shoes, my response would be closer to that than trying to reason with the bad faith AI user.


I see. I guess it seems like at that point you're trying to balance something against maximizing who the response might appeal to/ convince. I suppose that's fine, it just seems like the initial argument (certainly upthread from the initial user I responded to) is that anything beyond "Fuck off clanker" is actually actively harmful, which I would still disagree with.

If you want to say "there's a middle ground" or something, or "you should tailor your response to the specific people who can be convinced", sure, that's fine. I feel like the maintainer did that, personally, and I don't think "fuck off clanker" is anywhere close to compelling to anyone who's even slightly sympathetic to use of AI, and it would almost certainly not be helpful as context for future agents, etc, but I guess if we agree on the core concept here - that expressing why someone should hold a belief is good if you want to convince someone of a belief, then that's something.


I don't think you can claim a middle ground here, because I still largely agree with the sentiment:

> The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince about the legitimacy of your boundaries. They just are.

Sometimes, an appropriate response or argument isn't some sort of addressing of whatever nonsense the AI spat out, but simply pointing out the unprofessionalism and absurdity of using AI to try and cancel a maintainer for rejecting their AI pull request.

"Fuck off, clanker" is not enough by itself merely because it's too terse, too ambiguous.


To be clear I'm not saying that Pike's response is appropriate in a professional setting.

"This project does not accept fully generated contributions, so this contribution is not respecting the contribution rules and is rejected." would be.

That's pretty much the maintainer's initial reaction, and I think it is sufficient.

What I'm getting at is that it shouldn't be expected from the maintainer to have to persuade anyone. Neither the offender nor the onlookers.

Rejecting code generated under these conditions might be a bad choice, but it is their choice. They make the rules for the software they maintain. We are not entitled to an explanation and much less justification, lest we reframe the rule violation in the terms of the abuser.


> I don't think you can claim a middle ground here, because I still largely agree with the sentiment:

FWIW I am not claiming any middle ground. I was suggesting that maybe you were.

> Sometimes, an appropriate response or argument isn't some sort of addressing of whatever nonsense the AI spat out, but simply pointing out the unprofessionalism and absurdity of using AI to try and cancel a maintainer for rejecting their AI pull request.

Okay but we're talking about a concrete case here too. That's what was being criticized by the initial post I responded to.

> "Fuck off, clanker" is not enough by itself merely because it's too terse, too ambiguous.

This is why I was suggesting you might be appealing to a middle ground. This feels exactly like a middle ground? You're saying "is not enough", implying more, but also you're suggesting that it doesn't have to be as far as the maintainer went. This is... the middle?

(We may be at the limit of HN discussion, I think thread depth is capped)

