Rust Cryptography Should Be Written in Rust (briansmith.org)
116 points by bigfish24 on Aug 26, 2023 | 105 comments



I'd love this to be the case, but ring, which the author of the post created, is unfortunately not really maintained. It doesn't build on Windows ARM, which in turn inhibits rustls. It's a shame because I'd prefer to not depend on OpenSSL. Not that it's the author's fault. We shouldn't be reliant on a single person's contributions to have a working Rust cryptography toolchain.


That's Brian's point. rustls/webpki/ring do not have funding in line with projects like Go's crypto or BoringSSL:

> ... ARM, Amazon Web Services, Google, and Microsoft [...] should support the Rust community by letting their experts help the Rust community create FIPS-validated cryptography libraries written entirely in safe Rust that expose safe and idiomatic Rust APIs.

Rust's large corporate sponsors need to step up to make these crates more broadly suitable for production.


Those corporate sponsors are currently still using C and C++ for this kind of stuff and there is no change in sight.

Just look at how much Azure Sphere gets sold as a high-security device, and yet the SDK is C only. There is new preview support for Rust, but nothing has been announced for the stable product; it's C only on that one.


Last I tried, it also had issues building on platforms like MIPS or PPC. If hardware acceleration is not available it should fall back to software, not fail to build.


> If hardware acceleration is not available it should fall back to software, not fail to build.

It's not always possible to make the same security guarantees for these implementations. Software implementations of AES are frequently vulnerable to cache timing attacks, for example (e.g. [1]); even simple operations like integer multiplication may not be constant-time on some architectures.

[1]: https://cr.yp.to/antiforgery/cachetiming-20050414.pdf
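To make the cache-timing hazard concrete: the classic software-AES pattern is a table lookup indexed by secret data, so which cache lines get touched depends on the secret. An illustrative Rust sketch (not taken from any particular library):

    // Illustrative only: the address accessed depends on the secret byte,
    // so an attacker sharing the cache can learn something about it.
    fn sbox_lookup(sbox: &[u8; 256], secret_byte: u8) -> u8 {
        sbox[secret_byte as usize]
    }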


Bit sliced AES is about as good as you can do. ARX ciphers like ChaCha are theoretically better but it’s also not possible to be 100% certain of constant time on every conceivable architecture. And what if the architecture is being emulated? All bets are off then.

At some point this gets pedantic. Just not supporting anything but x64 and ARM64 means any project using ring or a dependency that uses ring can’t build elsewhere which turns ring into a land mine dependency.


The problem with making it too easy to fall back is that trivial misconfigurations end up using the fallback instead of being corrected.


Rust has pretty clear architecture conditionals. Not hard to just specify software for architectures without HW acceleration or where it has not been implemented yet.
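Something roughly like this, where the module names are made up just to show the cfg shape:

    // Pick an implementation at compile time; unsupported targets get the
    // portable software path instead of a build failure.
    #[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
    use crate::aes_hw as aes;

    #[cfg(not(any(target_arch = "x86_64", target_arch = "aarch64")))]
    use crate::aes_soft as aes;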


The problem is wanting both fast and constant-time. That's so machine level that it is hard to even talk about in a high level language.

On the other hand, most of the security problems found in OpenSSL are not in the core cryptographic functions. They're in the networking and certificate management machinery. All that should definitely be in Rust.


I feel like, yes, the basic crypto functions should be in hardware, hopefully done right, and exposed in the language as builtin functions, not compiled at the whim of the compiler. Although I will point out that the hardware guys have been mucking up and creating security holes too.


That would make holes in the core unfixable.


I reckon https://github.com/RustCrypto is an effort in this space.


FWIW, RustCrypto is neither written in safe Rust nor only in Rust. It uses inline assembly, unsafe byte manipulation, and unsafe intrinsics.


> unsafe intrinsics

You mean like AESENC which you should always be using if available?

The rust fanclub obsession with calling things like that unsafe simply because it has the same keyword in front of it is fairly ridiculous.

> inline assembly

Sometimes it's the only way to get some sort of constant time guarantees.


> You mean like AESENC which you should always be using if available?

Yes. And make it not unsafe

> Sometimes it's the only way to get some sort of constant time guarantees

I don't think you understood the article then. Brian wants the constant time guarantees as an intrinsic in the std library, guaranteed by the compiler, and exposed as safe rust.

> The rust fanclub obsession with calling things like that unsafe simply because it has the same keyword in front of it is fairly ridiculous.

If you have to use the unsafe keyword, you're creating the possibility of a memory safety vulnerability. It would be amazing if you could immediately see that all code in a library does not use unsafe and know that an entire class of vulnerabilities does not exist.


> Yes. And make it not unsafe

That's simply not possible; all intrinsics are unsafe by definition, since the Rust compiler can't check them for the same guarantees.
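For concreteness, this is roughly what a single AESENC call looks like on stable Rust today (a sketch; the unsafe is the caller promising the aes feature is actually present, which is exactly what the compiler can't verify):

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "aes")]
    unsafe fn aes_round(state: core::arch::x86_64::__m128i,
                        key: core::arch::x86_64::__m128i)
                        -> core::arch::x86_64::__m128i {
        // One AES round. The caller must have checked support first,
        // e.g. with is_x86_feature_detected!("aes").
        core::arch::x86_64::_mm_aesenc_si128(state, key)
    }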


You're not going to get good performance any other way.


I thought it weird that the post didn’t mention RustCrypto. It’s legit.


Agree. The problem with rustcrypto at the moment (or by design) is that there’s no TLS


What are the language/tooling gaps specifically that prevent this today, and have there been RFCs to close them? Are the gaps primarily "in-language" or missing tooling for formal verification?


Probably SIMD support and constant time support. Crypto libraries tend to use SIMD a lot to be fast.

You can write constant time code in Rust by carefully making sure your code only compiles to constant time instructions without branches, but you'd really want some kind of annotation on the code to enforce that.

That's mostly a guess though.
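For what it's worth, the usual pattern today looks something like this branchless comparison (a sketch; the catch is exactly that nothing forces the optimizer to keep it branchless, which is the missing annotation):

    // Accumulate differences instead of returning early, so the work done
    // doesn't depend on where the inputs differ.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false; // lengths are assumed to be public
        }
        let mut diff = 0u8;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }

Crates like subtle package this kind of thing up, but they are still ultimately relying on the optimizer behaving.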


Rust has simd support, so maybe the latter is the issue.


not in stable yet


This is incorrect. Rust has had x86_64 (up to and including AVX2) intrinsics stable since Rust 1.26. Wasm32 simd128 and aarch64 neon intrinsics are also stable.


core::arch::x86_64::_mm256_shuffle_epi8 and such seem to be in stable.
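For example, on stable you can dispatch to it behind runtime feature detection, roughly (a sketch):

    #[cfg(target_arch = "x86_64")]
    fn shuffle(a: core::arch::x86_64::__m256i,
               mask: core::arch::x86_64::__m256i)
               -> Option<core::arch::x86_64::__m256i> {
        if is_x86_feature_detected!("avx2") {
            // Safety: we just checked that the CPU supports AVX2.
            Some(unsafe { core::arch::x86_64::_mm256_shuffle_epi8(a, mask) })
        } else {
            None // caller takes a scalar fallback path
        }
    }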


AFAIK intrinsics are stable. Idk about auto-vectorizers and the like.


other commenters, call me when this issue is closed.

https://github.com/rust-lang/rust/issues/48556


Isn't constant time orthogonal to whether there are branches?


No. Constant wall clock time involves not using branches. Maybe you're thinking of "asymptotic constant time" or "runtime bounded above by a constant." These are not what is needed, because what is needed is to not expose any information via timing.


What if there are branches but both paths result in the same number of cycles being required to execute the instructions?

Is it correct to say: "all branchless code runs in constant time, but not all constant time code is branchless"?


The subtlety is that eliminating branching isn't sufficient to have constant time code. A simple example is using trigonometric and transcendental opcodes. They don't branch (at the assembly level), but on x86 take variable amounts of time depending on the input operand. Very few algorithms actually use these opcodes though, so a more relevant concern is memory access due to variable latency. Even if you have that nailed down, integer operations like multiplication and especially division can take variable amounts of time depending on the input.

Writing truly constant-time code on modern processors is difficult at best, and usually less efficient than variable-time code.


Really interesting information, thank you.


Add cache misses, bus interference, SMT woes and it quickly becomes harder and harder to write (and check) constant-time (or WCET) properties. Even modern micro-benchmarks are a huge labyrinth of architectural traps.


Because of speculative execution, branchy code with equal-runtime branches will still take different amounts of time if it is called repeatedly, usually in ways that reveal information about the input.


Timing attacks are common everywhere, by the way. Simplest example, perhaps a bit too contrived:

I'm an attacker doing targeted research. I want to see if a multi-auth system has an association between two email addresses tied to the same account.

Pulling a database record or in-memory record (e.g. via LFU/LRU cache) in some cases may cache the account record, which means a subsequent record might be warm when fetched with the second email.

I run a time analysis against the endpoint with garbage addresses, known addresses (that I've set up) and the two target addresses to check subsequent fetch speeds.

In some cases, this will cause enough of a time difference to tell me if there's a connection.

Timing attacks are hard, and even a well-architected system can expose information indirectly. Encryption is a big one if the inputs are static (e.g. keys or the like), and it is a common way to target endpoints.


The issue is speculative execution. Whenever there is a branch, the CPU makes a guess. If it guesses wrong, it has to go back to the correct path which introduces a delay. So any branching code has the possibility of revealing information through the branch predictor.


> all branchless code runs in constant time

No - e.g. division is not constant time.

You have to have branchless code and only use certain instructions.

E.g. here is the list for RISC-V.

https://github.com/rvkrypto/riscv-zkt-list/blob/main/zkt-lis...

Most things except div/rem, branches and floating point are ok. Oh and obviously store/load.
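The flip side is that you can still select between values without branching, using only the boring ALU ops on that list. A sketch:

    // Constant-time select: `flag` must be exactly 0 or 1.
    fn ct_select(flag: u64, a: u64, b: u64) -> u64 {
        let mask = flag.wrapping_neg(); // 0 -> all zeros, 1 -> all ones
        (a & mask) | (b & !mask)
    }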


Thanks, this is really interesting.


Because branch prediction exists: sometimes yes, often no. Among other reasons.


Deterministic builds, and inability to ensure constant-time operations are the two that come to mind. The first is a build security / supply chain issue, and the latter is a real vulnerability if the rust compiler "helpfully" optimizes away no-op operations in alternate code paths.


Whenever I'm wearing my tinfoil hat, I wonder if all the advice to never implement your own crypto is a conspiracy to reduce independent implementations of cryptography algorithms.

I know constant time operation is important for these algorithms, but couldn't I do this with a timer? Call the algorithm, store the result, return the result exactly one second (an eternity in CPU time) after it was called. Basically put a timer wrapper around the actual cryptography algorithm. It would harm latency, but not throughput.

This is an honest question I'm hoping to have answered.


"Constant time" algorithms isn't really about the time it takes. It's more important that they exhibit no observable side effects of a branch. This can be power usage, memory usage as well as time.

For instance, a multiply might take slightly more power than an add instruction and that can be monitored.

If you think these attacks are unreasonable: recently there was a post on HN about using the LED of a smart card reader to detect the fluctuations in power usage and gain information about the secret key. These attacks are real.


People keep finding serious architectural leaks in CPUs like Spectre/Meltdown, which makes me question whether any constant-time implementation can really be without observable side effects.


Some CPUs do have non-constant-time multiplies.


And one protection against that is to map the key into another space using a random (or close enough) key for that transformation, perform the calculation homomorphically, then transform back.

This is often too expensive, but it does come up as a possibility in some zero knowledge protocols.


No. Timing attacks look at statistical distributions given a set of inputs. Adding a flat 1 to all the times does not flatten the distributions. Likewise, adding a random jitter has the same problem, where it increases the variance but doesn't remove it - you need more samples to get the same confidence interval, which is very different from preventing timing attacks entirely.


They’re not talking (I don’t think) about adding a flat 1, they’re talking about something like this pseudocode I believe:

   timer = SomeScaffoldThatTimesAFunctionAccurately()
   result = timer(Do Crypto Thing())
   wait(1-timer.elapsed)
   return result
   
So the idea in this scenario is that every operation would take 1 second.

The first obvious drawback of this from a practical point of view is speed. You need crypto operations to be extremely fast, so although you could reduce the constant time somewhat, it would still be very difficult to calibrate the timing in such a way as not to totally nerf performance. The delay you're introducing here is required (as I understand it) on every branch and possibly other places, so there are thousands of these required in order to do each user-visible crypto operation.

Secondly if you think about the way the attack works, the jitter the attackers are observing to obtain the leaked information is very small so you would need an extremely accurate timer in order to be precise enough not to still leak the jitter.

As I understand it the actual defense against timing attacks is not that different from the above, just that the timing is precomputed and then the delays introduced to be exactly correct. My understanding is it’s generally something like

    If some_conditional:
        Do some operation that takes 2 cycles
        Delay for precisely 1 cycle
    Else:
        Do some operation that takes 3 cycles

    …
    Continue
For example, say you are testing a password. You could do

    def brokenPasswordChecker(password, attempt):
        for i in range(len(attempt)):
            if attempt[i] != password[i]:
                return False
        return True
But if you do it that way, then attackers could observe that you return immediately after an incorrect character and use that timing to gradually deduce the prefix of the password until they have the whole thing. So instead you do:

    def maybeLessBadPasswordChecker(password, attempt):
        result = True
        for i in range(len(attempt)):
            if attempt[i] != password[i]:
                result = False
        return result
…which does the exact same number of comparisons every time so this same attack doesn’t work.

This is why you need help from the language/compiler, or to break into inline assembly - lots of compiler optimisations will "helpfully" elide the delays you are introducing specifically to make the branches take the same time, so they are once again vulnerable to timing attacks. So you need something that can force that not to happen.
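The closest thing stable Rust gives you right now is std::hint::black_box, which is explicitly only a best-effort hint rather than a guarantee - which is the gap being discussed. A sketch of the checker above using it:

    fn password_eq(password: &[u8], attempt: &[u8]) -> bool {
        // Accumulate all differences; black_box discourages the optimizer
        // from noticing the result is already known and short-circuiting.
        let mut diff = (password.len() != attempt.len()) as u8;
        for (p, a) in password.iter().zip(attempt.iter()) {
            diff |= std::hint::black_box(p ^ a);
        }
        diff == 0
    }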


All that will do is slow an attacker down by a very small amount, it doesn't change the basic nature of the attack at all, if it worked before it will still work, just slightly slower. It's like adding a little bit of noise to a signal.


It depends. If your threat model involved an attacker being able to monitor your power supply (as in some kind of embedded system), they’d be able to see the real work done and separate that from the fake delay.


Except if your 'wait' operation is doing the same computation (add, multiply...) but won't use the result. I often do that in some real-time/low-latency things where you want things constant-time (or can't easily define the worst-case execution time, just make all paths the worst). Then you still need to blind the speculative execution mechanisms so that they don't 'see' you're not using the result.


Related is this effort to write a PGP backend in Rust: https://sequoia-pgp.org/


So far, the only solid use case for Rust that I have seen is in applications where security is extremely important.

No wonder it is becoming the de-facto language for building applications in the blockchain space.

Does anyone else use Rust outside the blockchain/cryptography space? What are you working on?


At my $BIGCORP, we have a multi-mode (discrete/continuous) simulation engine written in Rust. It has to be extremely fast, and interface with both firmware written in C, and scripts written in Python. C++ was the only other viable option, but none of us were confident writing safe and performant C++, so we picked rust instead.

Now that I've been using rust for ~5 years, I reach for rust when I have to do almost anything else. I write small backend services in rust[1], and small utilities that would previously have been python scripts[2]. The only things I still use other languages for are python for writing test scripts + interactive data exploration, and typescript for front end stuff. We still use a lot of C for firmware because the Rust target support isn't as good as C yet.

[1] you might argue that Go might be better suited for this, but I disagree. The lack of null safety and less strict typing would come back to bite me a lot. A typical backend service we write is 10x simpler than our simulation engine, so 1) compile times are approx the same as the project is small, and 2) we usually don't care that much about performance for these services, so we can just clone or Rc away any problems we might have with the borrow checker. Swift might be nice, but the ecosystem isn't there for backend apps, last time I looked.

[2] Total LoC for small scripts ends up being about the same for python/rust in my experience. Development time is approx the same, but I much prefer maintaining small rust apps when I come back to them after 6 months. You _can_ set up safe and maintainable python projects with external libraries for validation, but it's too much effort when rust has it all out of the box. Also, python deployment/venv management is still immensely frustrating.


> Does anyone else use Rust outside the blockchain/cryptography space? What are you working on?

I work at Materialize which is building database software in Rust. One of my coworkers blogged about our experience with the language here: https://materialize.com/blog/our-experience-with-rust/


I'm using it for a highly multithreaded real-time web app. Also, I disagree with your assessment; I think Rust can really be used for everything.

For the business I'm building, the frontend is written in JS but everything on the backend is Rust. I needed one highly multithreaded realtime server and Rust was the right choice for that. For everything else (API, file server) I decided to just use Rust too; rather than maintaining a Node backend and a Rust backend and dealing with TypeScript build systems, just using Rust for everything made my life easier.

What I've found is that open source authors interested in releasing libraries for Python and Node.js choose to write them once in Rust, then release Python and JS libraries using Rust FFI.

See polars, yrs, rapierjs, lancedb, candle, etc.


It's become rather popular in the graphics (e.g. https://wgpu.rs/) and game dev (e.g. https://bevyengine.org/ ) spaces.

It's pretty popular for AWS Lambda functions.

Pretty popular for terminal / shell applications.

Definitely a great way to write wasm for compute or graphics intensive browser/web apps.


I use it for a desktop file transfer app [0]. I chose Rust because my primary language is Python and I just wanted to learn something new and really different for this project. Go would have been easier, but Rust just feels bullet proof. It's so strict. If it compiles, it works! It's been a very interesting journey.

[0] https://github.com/transmitic/transmitic


Linkerd's sidecar proxy (https://github.com/linkerd/linkerd2-proxy) is implemented in Rust. It implements transparent mTLS, HTTP load balancing, telemetry, etc. Rust gives us safety and security with a minimal resource footprint.


Rust is just a nice general purpose language. It's good more or less everywhere.

There are a few places where I wouldn't recommend it - for beginners, when compile time is really important (e.g. as a scripting engine in games), or where you need a repl (science).

But otherwise it's a better choice than most languages for most tasks.


For our client, we've developed a desktop application in Rust – with a thin GUI frontend in C++/Qt – to filter, process, and visualize sensor data streams (3D point cloud data from a Lidar). Each data stream was about 400-500 Mbit/s in 55k UDP packets per second, with support for at least 4 simultaneous streams. The focus was on high performance and development speed, security didn't matter at all.

We chose Rust for its unique position in the performance/productivity tradeoff space, and didn't regret it even for a second. There's no way we could have pulled this off with C++ in the same time, especially the bug-free parallelization.


Why use C++ for the front-end though?


We selected Qt as a cross-platform solution. The C++/Rust interface is the clunkiest and ugliest part of the application, and rather complex because some state is shared between several windows in the GUI and several threads in the backend, and any component might modify that state at any time, and updates have to be transmitted to the other components without introducing inconsistencies. However, using cxx [1] helped a little.

The project began in 2020, and I'm not sure what I'd choose as a GUI framework today – definitely not Qt Widgets, though.

[1] https://cxx.rs/


Yes, in fact I see hardly any cryptocurrency stuff with rust.

I'm building an OS in rust, and write most firmware with rust too.


> Yes, in fact I see hardly any cryptocurrency stuff with rust.

Really?

Solana [0], a top 10 market cap cryptocurrency, is written in Rust, and so are its smart contracts.

NEAR Protocol is another [1] one as well as Sui [2] and Polkadot [3] written and using Rust.

[0] https://github.com/solana-labs/solana

[1] https://github.com/near/nearcore

[2] https://github.com/MystenLabs/sui

[3] https://github.com/paritytech/polkadot-sdk


I didn't say it didn't exist. I said I don't personally ever see any cryptocurrency stuff in my Rust experience aside from coming across a few crates here or there related to it.


As an outsider with some good feelings towards Rust and some negative ones towards crypto, almost all Rust jobs I've seen have been in crypto.


That's a fair point, my experience as well. I think I was mostly talking about the code I see on e.g. GitHub.


All of our core code is Rust, and we even use it in the web backend and k8s tooling at http://www.ditto.live

It's been a great choice, even with trade-offs like needing to train some hires and the compilation speeds.


I'm building a 2D simulation game compiled to WASM using Bevy!

It's nicer writing in Rust than JS, no GC makes Rust a good WASM candidate, theoretically the performance can be better if you're careful, people take 2D game engines more seriously in lower level languages, and it keeps the door open for shipping on Steam later.

I've found it much slower to develop in, though. Compilation times are an issue and after working in JS for decades I realize how big the JS ecosystem is compared to something like Rust.

I doubt the decision was a good choice overall in terms of trying to ship an MVP, but it might still pay dividends for a finished product, if the game gets there, due to having a higher cap on performance


The default language in the blockchain space is still JS/TS. Not sure you should be relying on their signals for security assurances.


I am talking about building nodes that execute the underlying consensus protocols. Not the front end client side applications.


> So far, the only solid use case for Rust that I have seen is in applications where security is extremely important.

Playing devil's advocate, when is security not extremely important, except maybe in throwaway bash script-type applications?


* Toy one-off scripts, not even shell scripts necessarily

* Small things for one's personal use or local network use only

* Quickly prototyped experiments, as long as they're not used in production


* academic/scientific research


Games. The attack surface for third-party input is very limited, and you can just secure those bits rather than adopting a new language for the entire program.

Like, you need to be able to sandbox mods if they're a thing, and Rust's memory safety only handles a tiny part of that.


Multiplayer games have huge attack surfaces.


Single player games maybe, cheating in multiplayer games can be big business.


Only hobby coding, leetcode stuff and such.

Work is all about Java, .NET, the web, and mobile OSes; Rust hardly brings anything to the table there, other than less capable tooling, friction with the existing ecosystem, and the complexity of designing borrow-checker-friendly algorithms versus automatic memory management.


> So far, the only solid use case for Rust that I have seen is in applications where security is extremely important.

But that's basically anything that touches the internet.

Not having buffer overflow vulnerabilities in your communications code is huge.


> Does anyone else use Rust outside the blockchain/cryptography space?

Cross-platform desktop app development, systems programming and authorization which is still cryptography related, I guess.


I use it for one off stuff I used to use Ruby for at the hobby level. The learning curve was high but it is very easy to reason about correctness.


I've used Rust for years in embedded Linux devices and more recently in signal processing.


there are some data processing projects, www.pola.rs for example


> Rust should be improved to provide the necessary building blocks that are needed to write cryptography code that is free from timing side channels and similar hazards

I misread that at first as saying it already did and was rushing to the comments to say "like hell it does!"-- but this is a difficult situation given that it doesn't really even exist in C where it would be easier to provide.

Technically, since Intel and AMD won't guarantee that operations like multiplies won't have data-dependent timing, no language on these popular systems provides what is needed, at least in theory. (In practice things are somewhat better.)

Ignoring the processor interface issues, it would be totally rad if there were types in rust for secrets that were guaranteed to get suitable handling. But doing so would probably require architectural changes to LLVM...
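The library-level approximation people reach for today is a newtype that at least scrubs itself on drop (crates like zeroize do this). A rough sketch, with the caveat that it says nothing about copies the compiler may have left in registers or on the stack:

    struct Secret([u8; 32]);

    impl Drop for Secret {
        fn drop(&mut self) {
            for b in self.0.iter_mut() {
                // Volatile write so the zeroing isn't removed as a dead store.
                unsafe { core::ptr::write_volatile(b as *mut u8, 0) };
            }
        }
    }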


Nope. Cryptography code should be written using proof assistants, proving the correctness of the code.

Like the Everest project.


I like the part where he says companies should spend money to give him something that'll be less secure (because it'll be a redundant implementation) to satisfy an aesthetic request.


Forget the naysayers, I for one pray that the crypto graybeards will learn rust and grant the author his wish.


I can't seem to fathom the why in this. Why is Rust different from, say, Python?


Well, let's put it this way. The python cryptography package contains rust code. The rust cryptography libraries are certainly not going to contain python code.


I wasn't clear. I meant: why should Rust have its own implementation, with the cost, the worries about correctness, and the maintenance burden, versus using an existing library, as other languages (e.g. Python) typically do in this situation?

I don't see the benefit, that's what I was wondering about.


What library? Who maintains the library?

For some background, ring/rustls are Rust libraries that replace OpenSSL, because OpenSSL doesn't have a good track record for vulnerabilities - especially those caused by memory safety issues.


Unlike Python, Rust is efficient enough and suitable for low level bit twiddling to write fast crypto libraries without using an unsafe language like C.


So it's a safety concern, is that what this is about? Safety as in thread safety, that sort of thing? I personally would be more worried about correctness, so I'm not sure what the win is over wrapping an existing library, like every other language does (ok, there are bound to be exceptions).


Well yes? Heartbleed? The majority of significant security bugs are memory corruption and data races, both of which safe Rust prevents by design.


20x performance increase when using rust over python.

Plus the syntax is not that different, you can hand transpose python -> rust.


Most of Rust is written in Rust; it targets being able to write low-level code.


Security is a weak-link problem. Once you decide to solve the bootstrapping challenge, you can use Rust for everything else.


Once we've used Rust for everything else, we can solve the bootstrapping challenge.

You tend to the large bleeds before the small theoretical ones.


But which version of rust?


Crypto code should be written in assembly. Zero ambiguity, zero undefined behavior, 100% verifiable.


ISAs regularly leave all kinds of behavior undefined when they think it doesn't matter (such as the state of the arithmetic flags after operations that shouldn't need to be tested).

(But this is also irrelevant: assembly can be completely wrong and exploitable while also being perfectly well defined.)


Readability suffers though, with negative impacts on maintainability and even verifiability (fewer people able/willing to examine the source code).


goto fail wasn't caused by ambiguity or undefined behavior. C's rules here are crystal clear, and conditional branches in assembly also do not make the following instructions conditional (unless you're using delay slots à la SPARC).

Heartbleed also wasn't caused by ambiguity or undefined behavior, if you believe compiler writers.


Wouldn’t the low readability make it easier to slip a vulnerability in by splicing it out into several changes?


There's plenty of undefined behaviour at that level; just look at the Spectre and Meltdown vulnerabilities, for example.



