Hacker News

Rust is a community of very nice and helpful people, much like Elixir but with more money behind it. However, I certainly enjoy Elixir's unwillingness to break working code with every release, or to force users to build against nightly releases just to use any features from the last year. Rust is improving in this regard, but isn't quite there yet.


What stuff are you running into with this? The vast majority of the ecosystem is on stable, but there are some holdouts. Working on them! I always like to hear people’s pain points, it helps with prioritization.


Just yesterday our build broke because a cargo update pulled in some dependencies that suddenly required experimental features. I think it was crossbeam via hyper or tokio.

Also, while I recognize it's a third party project, the hyper API keeps changing faster than I can adapt my code. If you are aware of a more stable HTTP server, ideally one that has proper TLS support and support for unix domain sockets, I would be very interested.

EDIT: on the other hand, after working with Rust daily for ~1.5 years, I have only run into one compiler bug and had zero actual bugs in my application due to language/compiler updates so far. There have been a few occasions where essential features were missing from the standard library/language though, and several times where we had to backport some code to the older, stable Rust version that we're on (when a developer had mistakenly tested their code locally with a newer version).

The highest up on my wish list would be that - as I understand it - you can not currently exit the process cleanly (EDIT: early) with a non-zero exit code in stable rust. That makes it pretty hard to implement good command line utilities. Happy to be corrected if I am wrong on this one though.


Interesting, I'll check it out: sounds like a bug in crossbeam. Thanks for letting me know. It's also true that the language and the ecosystem are different things; that's one reason why we're focusing on stabilizing stuff this year, to help move people off of nightly. 1.26 contained a really huge stabilization in this regard. It is a downside of the "put everything in the ecosystem" approach, though.

You're writing a server with hyper directly? Is there a particular reason you're not using one of the frameworks written on top? I may have some advice there, depending.


We started using Iron, but later switched to just using hyper because that seemed easier and like a smaller API surface to target. I'm not really working on a web application so I need zero of the routing/web features of these frameworks.

I do need a lot of pretty specific other features though. Unix domain sockets are currently a must (luckily that works with hyperlocal).

The code runs on a pretty resource-constrained system, so I like the low level of control that hyper gives me over the request/response chunking behaviour (I can not afford to buffer large requests in memory, and what exactly I do with the body payload differs between API calls).

One thing that would be lovely is proper TLS/HTTPS support. I need support for x509 client certificates and I need to get access to the full client certificate data, ideally including the full chain that signed it from the request handler. I have to admit that I currently run a hacked together nginx in front of my app and put this stuff into HTTP headers (hence the unix domain sockets) because I could not get it to work with rust ecosystem libraries.

EDIT: The TLS stuff is also true for the HTTP client case. Currently using the libcurl rust binding because it's the only thing that implements all the features I need (--cainfo, --cert, --key, --resolv). Also it needs to run on OpenSSL or something else that supports x509v3 extended attributes (subtree constraints).


Cool cool, that makes sense. I probably don't have good advice for you then; what I will say is that things should end up much more stable in a few months, but there's a massive async/await/impl Trait/futures/tokio/hyper upgrade going on, so stuff is a bit more chaotic than usual. Once that's over though, stuff should be solid for quite a while.

I don't know if rustls supports your use-case for TLS, but you may want to check it out.

Thanks again. It's really invaluable to hear about these kinds of things.


A common pattern is something like:

    fn main() {
      if let Err(e) = real_main() {
        let status = status_from_error(e);
        // print the error message, or whatever
        std::process::exit(status);
      }
    }

    fn real_main() -> Result<(), TopErrorType> {
      // ...
    }
And simply returning Result<_, SomethingConvertibleToTopErrorType> from functions called by real_main that might need to exit the program.
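
To flesh that pattern out, here's a minimal sketch of what "SomethingConvertibleToTopErrorType" can look like in practice (`TopErrorType` and `might_fail` are hypothetical names): a `From` impl lets the `?` operator convert lower-level errors into the top-level type automatically.

```rust
use std::{fs, io};

// Hypothetical top-level error type for the whole program.
#[derive(Debug)]
enum TopErrorType {
    Io(io::Error),
}

// With this From impl, `?` converts io::Error into TopErrorType automatically.
impl From<io::Error> for TopErrorType {
    fn from(e: io::Error) -> Self {
        TopErrorType::Io(e)
    }
}

// A function called (directly or indirectly) by real_main.
fn might_fail(path: &str) -> Result<(), TopErrorType> {
    let _meta = fs::metadata(path)?; // io::Error auto-converted on failure
    Ok(())
}

fn main() {
    // In the real pattern this would bubble up through real_main instead.
    if might_fail("/definitely/not/here").is_err() {
        println!("got a TopErrorType");
    }
}
```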

I believe that in a recent stable version of Rust, you can make this even simpler and have `main()` return a Result directly.

edit: not quite as simple as I described, but maybe more powerful because of the Termination trait, see: https://github.com/rust-lang/rfcs/blob/master/text/1937-ques...

    use std::io::{self, BufRead, Write};

    fn main() -> Result<(), io::Error> {
      let stdin = io::stdin();
      let raw_stdout = io::stdout();
      let mut stdout = raw_stdout.lock();
      for line in stdin.lock().lines() {
        stdout.write_all(line?.trim().as_bytes())?;
        stdout.write_all(b"\n")?;
      }
      stdout.flush()
    }


Yes, currently doing something similar to that, but I'd much rather have it as a first-class feature. Returning a Result directly from main sounds very interesting! I will have to look into that, thanks.

EDIT: I assume you mean RFC 1937? Looks like it isn't implemented yet, so we will probably have to wait at least another year before we can get it in stable Rust. But yes - without having read the entire RFC - I think that's what I was looking for!


Part of it is implemented; see https://play.rust-lang.org/?gist=d442c47833587ddeff2158492b0...

The rest of it makes it even better; it's a bit limited in ways right now.


Oh nice! Will start using that as soon as we get to 1.26 :) -- currently stuck on 1.24 (upgrading takes a bunch of work since we're building under openembedded/bitbake, so I wait until the new version hits the upstream layer)


Great! :D


> The highest up on my wish list would be that - as I understand it - you can not currently exit the process cleanly (EDIT: early) with a non-zero exit code in stable rust.

You don't need any fancy features to do this. Every single one of my Rust CLI programs has done this on stable Rust for years. All you need to do is bubble your errors up to main.

If you have destructors that you want to run, then put those in a function that isn't main.
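
For example (a minimal sketch; `run` is a hypothetical name), keep everything with a destructor inside an inner function, and only call process::exit once that function has returned:

```rust
use std::process;

// All values with destructors live in run(), not main(); they are
// dropped when run() returns, before process::exit is ever called.
fn run() -> i32 {
    let _resources = vec!["things", "relying", "on", "Drop"];
    // ... do the real work; return a non-zero code on failure ...
    0
}

fn main() {
    let code = run(); // everything inside run() has been dropped by here
    process::exit(code);
}
```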


I would not consider setting an exit code to be a fancy feature, but I guess we are living on the bleeding edge here :)

EDIT: I should have said explicitly in my initial comment that I knew about std::process::exit and panic!, but did not consider them to be a clean solution for exiting the program under normal circumstances -- more of an abort() mechanism.


I don't see why. process::exit is perfectly clean from my perspective. You don't need to be on the bleeding edge. This has been possible since Rust 1.0.


Unless I'm misunderstanding you, [`std::process::exit`](https://doc.rust-lang.org/nightly/std/process/fn.exit.html) does what you want.


std::process::exit does not run Drop implementations (at least the last time I tried -- maybe I am doing it wrong?). So for example, if you're relying on Drop to clean up temporary files on the system, that will not happen. Providing no way to both run destructors AND exit non-zero feels like a bug or missing feature.

EDIT: to expand on this: a suggested workaround is to only call std::process::exit at the very end of the main function. But consider stuff like "env_logger::init". What about that? Ok, that doesn't use Drop, and if it did you could put it into its own scope I guess - so there are workarounds - but in my opinion that gets pretty ugly. Compared to C++, std::process::exit or panic! is like abort(), but what I want is a "return 1" from main.


You're not wrong. The issue is, it's hard to make a good API for this; anything else is basically dealing with global state, and so you don't have a guarantee that something else doesn't re-set it to something else while unwinding is happening, etc. I do have one more thing to say on this, but it fits better in a reply elsewhere in the thread :)

panic! does return a non-zero exit code, but isn't really designed for good end-user output.


If you don't care about the specific error code as long as it's non-zero, just returning `Result<(), impl Debug>` does what you want on stable now, right?


That's true, but then you have to juggle Results as your return type everywhere. It's probably a good idea overall, but some people don't want to do that. And yes, you don't get to pick the code. Yet!
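
For reference, a minimal sketch of that on stable Rust (`AppError` is a hypothetical error type): when main returns Err, the runtime prints the error's Debug representation and exits with a non-zero status.

```rust
// Hypothetical error type; anything implementing Debug works here.
#[derive(Debug)]
struct AppError(String);

fn main() -> Result<(), AppError> {
    let fail = false; // flip to true to see the non-zero exit
    if fail {
        return Err(AppError("something went wrong".into()));
    }
    Ok(())
}
```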


Calling `std::process::exit(1)` at the end of main is identical to `return 1`. I'm not sure what your point is with `env_logger::init`. It sounds like you think destructors are run on static items? They aren't. In fact, until very recently you weren't even allowed to have destructors on consts/statics.


> I'm not sure what your point is with `env_logger::init`

That most of my programs have some global (i.e. "for the runtime of the program") stuff that is setup at the beginning of main. And that some of that might want to Drop when the program exits, for example to delete a temporary directory. Now, if I want to return a non-zero exit code I can not do so while still getting all this global stuff destructed correctly (or use a workaround like having a wrapper-main).

> Calling `std::process::exit(1)` at the end of main is identical to `return 1`.

The thing is that it is actually not the same with respect to destructors -- the documentation explicitly calls that out. See also the C++ comparison in my other comment.
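
A minimal illustration of the difference (`TempDir` here is a stand-in type, not a real library): returning normally from main runs the destructor, while std::process::exit would skip it.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct TempDir;

impl Drop for TempDir {
    // Stand-in for real cleanup work, e.g. removing a temporary directory.
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
        println!("TempDir cleaned up");
    }
}

fn main() {
    {
        let _t = TempDir;
        // _t is dropped at the end of this scope, as it would be on a
        // normal return from main.
    }
    assert!(DROPPED.load(Ordering::SeqCst));
    // Calling std::process::exit(1) while _t was still alive would have
    // skipped the destructor entirely.
}
```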


Ah, I see. You're wanting to have destructors run on local variables assigned in `main`.


I'm just a dabbler and most of my experience is a few months old -- I have no specifics for you, I'm afraid.


If you're talking about Rocket, that seems to be the largest thing that requires nightly. That said, I can count on my hands the number of times I've used nightly over the last 2-ish years. It's pretty rare that you need it.

I'll also add that I've been really impressed with how much care Rust takes to not break stable packages, between crater[1] and how aggressively they've cut point releases to fix any issues that (rarely) happen to slip by.

[1] https://github.com/rust-lang-nursery/crater


Bindgen also wants nightly so if you're doing any FFI stuff you may be tethered to nightly.


You sure about that?

I've been using bindgen for ages on stable and just grabbed the latest version on an empty project and it still works for me.


I'm sitting next to bindgen's author in meatspace right now and he says yes, it has been on stable for a long time.


Ah sorry, was thinking of rustfmt-nightly, not the toolchain.


No worries!

You shouldn't be using rustfmt-nightly at this point either; install the rustfmt-preview component through rustup. It works on stable!


Does rustfmt-preview work with bindgen though?


I don’t know what this sentence means.


Bindgen relies on rustfmt. With the codebase I'm using rustfmt-nightly works, but the stable version segfaults. Known issue but that's why I've stuck with rustfmt-nightly.


Interesting! I was not aware of that. So, no idea :)


https://github.com/rust-lang-nursery/rust-bindgen/issues/104...

So yeah looks like I'm tethered to the nightly toolchain after all. :/


I happen to be in the same physical location as the maintainers of both. This should be re-opened. I’ll chat with them. But first... I’m also confused. The component is the rustfmt-nightly codebase. You’re still seeing problems with the component?


I'm using bindgen to generate bindings for radare2. But I'm also using TryFrom, so I'm not likely to move off of the nightly toolchain any time soon.

Before I ended up going down the TryFrom rabbit hole I ran into two issues with bindgen:

1.) Bindgen segfaults with the ancient LLVM on OSX 10.9. Issue #1006. Solution: use LLVM >= 5 binaries from the LLVM site.

2.) Stable (deprecated) rustfmt causes bindgen to segfault. Solution: cargo install -f rustfmt-nightly. It could be as simple as "stable rustfmt requires --force to run because it's been deprecated", but more helpful error messages would be a great thing here.


What happens if you uninstall the rustfmt-nightly and use the component instead?

(TryFrom is being stabilized fairly soon, it got caught up in some silly stuff but is basically good to go)


So on the mac, something's going on and even with LIBCLANG_PATH set I'm still getting a segfault in some clang stuff. On the BSD box, rustfmt-nightly allows everything to work. Using `cargo install --force rustfmt` will provoke bindgen into reporting an internal error:

Custom { kind: Other, error: StringError("Internal rustfmt error") }

I get the same results if I call the bindgen executable or have build.rs call the bindgen API. I've stuck with having build.rs call the bindgen executable because if I use the bindgen API, compilation is SLOW.


Not cargo install, the

    rustup component add rustfmt-preview


Results:

  $ cargo uninstall rustfmt && rustup component add rustfmt-preview
      Removing /home/alex/.cargo/bin/cargo-fmt
      Removing /home/alex/.cargo/bin/rustfmt
  info: downloading component 'rustfmt-preview'
  info: installing component 'rustfmt-preview'
  $ bindgen bindings.h -- -I/usr/local/include/libr > /dev/null
  Custom { kind: Other, error: StringError("Cannot find binary path") }
  $ rustc --version
  rustc 1.28.0-nightly (29f48ccf3 2018-06-03)
  $ cat bindings.h 
  #include <r_lib.h>
  #include <r_asm.h>
  #include <r_reg.h>
  #include <r_anal.h>
  #include <r_bin.h>


Okay. Let’s get that bug re-opened!


It's all good; I appreciate it anyway.


Things do break if you use nightly – and if you ask for help in the community, "switch to nightly" is very, very common advice.


That's true. I do think that some people are too enthusiastic to recommend nightly; it happens when you have people who are really into where the language is going and want you to see the future rather than use something that works in the present.


There is a good interop story for Elixir/Rust. I would not be so sure about the money bit either.


Just some random background, for those reading, about why Erlang/Elixir<->Rust interop is A Big Thing.

The whole point of Erlang and Elixir is robustness in the face of high concurrency. All other design choices (eg functional programming, immutable data) follow from that goal. The core idea is that if one green thread ("process" in Erl/Ex lingo) crashes for whatever reason, the rest keep on running. Interop with native code is the sole exception here, which makes interop very scary. If a C function that's called into from Erl/Ex crashes, the entire program crashes. Boom, gone. Sure, that holds for most other languages too, but Erlang/Elixir people get extra nervous because they're more accustomed to thinking about error scenarios, and because they don't always have the same seven restart/recover layers outside the VM process that good devopsers wrap eg Node processes with - because hey, no need, we have Erlang, we never crash.

As a result, the idea of a technology that allows writing native code with a tremendously small likelihood of crashing is very appealing. Until recently, no such technology was available, but Rust changed that. Rust allows writing native functions that can be called from Erlang/Elixir with far fewer worries than C/C++ native extensions ever did, because of all the safety guarantees provided by the compiler. This is a big thing, and I hope that as the interop story improves, Erlang/Elixir will in practice become much faster because more libraries will be rewritten in native code.


Or you can just make a remote node out of your C/Java/Rust/whatever code and not worry too much about it crashing your Erlang system.

It's not just crashing either, it's taking a long time to execute a given function in the native code - that can also cause problems for the Erlang system running it.


Remote nodes have overhead; if you can handle that overhead, then you definitely should use a remote node or Port over a NIF. However, NIFs taking a long time to run is not so much of an issue anymore now that the dirty NIF scheduler is enabled by default in the BEAM.


The danger of crashing the whole VM only applies to NIFs (native implemented functions), which is a very tight integration of the external code. There are also Ports, which don't carry the same risk of fatal crashes.


> There are also Ports, which don't carry the same risk of fatal crashes.

Oh, this is a cute statement xD No, ports also carry the same risk as NIFs if you use port drivers, because it's still the same way of running the code, just with a different API. You need ports backed by an external process to be isolated from crashes, though there's still the problem of such ports eagerly reading all the available data without any backpressure whatsoever, so your BEAM can trigger the OOM killer. This is much easier to avoid, fortunately.


> [...] the idea of [...] writing native code that has a tremendously small likelihood of crashing is very appealing. Until recently, no such technology was available but Rust changed that.

No such technology was available, barring maybe Ada or some other languages.


There's a fine interop story, I've used Rustler before. The problem is when Rust nightly is sufficiently broken that even the auto-generated Rustler boilerplate won't compile. :)


Has rust ever broken working code in a stable release?


Depends on how you define “working”. We have broken some code that used to compile due to soundness issues.

We’ve also made some small changes that in theory can break code but weren’t observed in the wild. The tolerance for that has dropped over time, of course.



