The biggest difference between Scala and Rust here is ".iter()", which is necessary mainly because you need to specify whether the iterator yields references (".iter()") or moves the elements out by value (".into_iter()"). Because Java (and hence Scala on the JVM) doesn't have value types and has no concept of move semantics, it has no need for the distinction, while Rust does.
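Roughly, the distinction looks like this (toy example, names made up):

    fn main() {
        let names = vec![String::from("a"), String::from("b")];

        // `.iter()` borrows: it yields `&String`, and `names` stays usable afterwards.
        let lens: Vec<usize> = names.iter().map(|s| s.len()).collect();
        println!("{:?} {:?}", names, lens);

        // `.into_iter()` consumes: it yields `String` by value (a move),
        // so `names` cannot be used again after this line.
        let shouted: Vec<String> = names.into_iter().map(|s| s + "!").collect();
        println!("{:?}", shouted);
    }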
In effect, the "elegance" of Scala here is really the product of its heavy GC dependence. So the comparison isn't as meaningful as it might seem.
Wouldn't the type guarantees provided by Rust's type system allow for different implementations based on whether it was called on value types vs reference types? I thought an auto-specialization feature was added a while ago that did something like that.
It allows changing the implementation depending on what you pass it, but it cannot change how you pass things. The issue is that if you just call zip(code_lengths), ownership of code_lengths is moved into the function and it can no longer be used in its parent scope.
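Something like this (toy names, not the article's code):

    fn main() {
        let symbols = vec!['a', 'b', 'c'];
        let code_lengths = vec![1u8, 2, 3];

        // Zipping over a borrow: `code_lengths` is still usable afterwards.
        let pairs: Vec<(&char, &u8)> = symbols.iter().zip(code_lengths.iter()).collect();
        println!("{:?}, max length {:?}", pairs, code_lengths.iter().max());

        // Zipping by value moves the Vec into the iterator chain...
        let owned: Vec<(char, u8)> = symbols.into_iter().zip(code_lengths).collect();
        println!("{:?}", owned);
        // ...so using `code_lengths` here would be a compile error ("value moved").
        // println!("{}", code_lengths.len());
    }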
Looking at the example, I can't find a thing in there that is not required (except the obvious `:Vec<_>` type annotation, which indeed isn't required). We need `.iter` to say that the iterator is over immutable references (there is a mutable version), and we need `.collect` to actually run the iteration, since iterators are lazy. I also don't think objects should have default constructor functions.
It may be possible to implement `.zipped` on tuples though.
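Something like this should work (untested sketch; assumes a Rust recent enough for `impl Trait` in trait return position, and `Zipped3` is a made-up name, not a real library API):

    trait Zipped3 {
        type Item;
        fn zipped(self) -> impl Iterator<Item = Self::Item>;
    }

    impl<A, B, C> Zipped3 for (A, B, C)
    where
        A: IntoIterator,
        B: IntoIterator,
        C: IntoIterator,
    {
        type Item = (A::Item, B::Item, C::Item);

        fn zipped(self) -> impl Iterator<Item = Self::Item> {
            let (a, b, c) = self;
            // Lazy: nothing runs until a consumer like `collect` drives the iterator.
            a.into_iter().zip(b).zip(c).map(|((x, y), z)| (x, y, z))
        }
    }

    fn main() {
        let triples: Vec<_> = (vec![1, 2], vec!["a", "b"], vec![true, false]).zipped().collect();
        println!("{:?}", triples); // [(1, "a", true), (2, "b", false)]
    }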
Yep, but it's still an unfortunately big difference in expressiveness between the two languages, where one might think that they would be more or less on par, given the languages' similarities (both are more or less contemporary in their design, statically typed, with similarly powerful type systems, etc).
My question is then: is there something fundamentally preventing Rust from achieving similar levels of expressiveness to this Scala example without incurring unnecessary runtime overhead? Given that both languages are statically typed, my inclination would be to say no: the information is there, the compiler should be able to figure it out. But, alas, I know very little about these things, hence my question :)
Update: pcwalton gave some nice insight on why Rust needs some of these constructs in an uncle comment.
Actually, Rust already has izip! (from itertools) and IntoIterator, which cover the earlier differences. The type annotation `Vec<_>` is also optional, since it can be inferred from the context.
The `map` is uglier in Rust because
* the comparison was against a constructor with positional, rather than named, arguments
* the Scala code didn't need to dereference any arguments, and
* Scala's `Zipped` is a special type with a special `map` function that takes three arguments, unlike a normal iterator.
The first and last points could be easily copied in Rust: you'd build a constructor for HuffmanCode and augment iterators of tuples with a starmap method (that can be done in a library). The middle point can be handled before the zipping. The result would be
    let codes =
        izip!(data_table.iter().cloned(), code_lengths, code_table)
            .starmap(HuffmanCode::new)
            .collect();
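For completeness, here's one way a library could provide that hypothetical starmap (and the HuffmanCode constructor). Untested sketch using only std; with itertools' izip!, which already yields flat 3-tuples, the hand-flattening `.map` below wouldn't be needed:

    // Illustrative only: a possible library-side `starmap` for iterators of 3-tuples,
    // written as a standard iterator-adapter struct plus an extension trait.
    struct StarMap<I, F> {
        iter: I,
        f: F,
    }

    impl<I, F, A, B, C, R> Iterator for StarMap<I, F>
    where
        I: Iterator<Item = (A, B, C)>,
        F: FnMut(A, B, C) -> R,
    {
        type Item = R;
        fn next(&mut self) -> Option<R> {
            let (a, b, c) = self.iter.next()?;
            Some((self.f)(a, b, c))
        }
    }

    trait StarMapExt<A, B, C>: Iterator<Item = (A, B, C)> + Sized {
        fn starmap<F, R>(self, f: F) -> StarMap<Self, F>
        where
            F: FnMut(A, B, C) -> R,
        {
            StarMap { iter: self, f }
        }
    }

    impl<A, B, C, I: Iterator<Item = (A, B, C)>> StarMapExt<A, B, C> for I {}

    // A constructor for the hypothetical HuffmanCode type from the example.
    struct HuffmanCode { symbol: u8, length: u8, code: u16 }

    impl HuffmanCode {
        fn new(symbol: u8, length: u8, code: u16) -> Self {
            HuffmanCode { symbol, length, code }
        }
    }

    fn main() {
        let data_table = vec![1u8, 2, 3];
        let code_lengths = vec![2u8, 3, 3];
        let code_table = vec![0u16, 4, 5];

        // Plain std `zip` nests tuples, so we flatten by hand before `starmap`.
        let codes: Vec<HuffmanCode> = data_table
            .into_iter()
            .zip(code_lengths)
            .zip(code_table)
            .map(|((s, l), c)| (s, l, c))
            .starmap(HuffmanCode::new)
            .collect();

        println!("{} codes", codes.len());
    }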
Rust's collect is never implicit the way Scala's CanBuildFrom machinery is. This prevents accidental collects, which helps with writing fast code, but in principle I don't see why it couldn't be implicit -- it would just require the whole standard library to be overhauled.
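For example, the target collection always has to be named at the call site (std-only sketch):

    use std::collections::{BTreeMap, HashSet};

    fn main() {
        let words = ["a", "b", "a"];

        // The result type is chosen explicitly, either by annotating the binding...
        let unique: HashSet<&str> = words.iter().copied().collect();

        // ...or with the "turbofish". Nothing is built until `collect` is called.
        let indexed = words
            .iter()
            .enumerate()
            .map(|(i, w)| (i, *w))
            .collect::<BTreeMap<usize, &str>>();

        println!("{:?} {:?}", unique, indexed);
    }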
Not at all -- there's a lot of commonality in the language designs.
Core language features: both have algebraic datatypes like ML ("case classes" in Scala, enums in Rust), and a 'match' statement that makes working with them easy. Both have pervasive destructuring that works with match arms, 'let' bindings, etc.
Data structures and mutation: both encourage a pragmatic version of immutability -- use values and provide functional interfaces primarily, but support mutation where needed. In Rust there are 'Cell' and 'RefCell' types for interior mutability, much like ML's 'ref' cells.
Library idioms: all three (Rust, Scala, and the MLs) have the standard set of functional-style higher-order transformation functions ('map', 'filter', etc. -- Rust in particular has a very rich Iterator trait API), and it's idiomatic to use these.
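A compact sketch of those shared idioms in Rust (illustrative names; an ADT with 'match' and destructuring, a 'Cell' in the spirit of an ML 'ref' cell, and the usual iterator pipeline):

    use std::cell::Cell;

    // An algebraic datatype, as in ML datatypes or Scala case classes.
    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        // `match` with destructuring, as in both ML and Scala.
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        // Destructuring in a `let` binding.
        let (label, shapes) = ("areas", vec![Shape::Circle { radius: 1.0 }, Shape::Rect { w: 2.0, h: 3.0 }]);

        // Interior mutability via `Cell`, in the spirit of an ML `ref` cell.
        let count = Cell::new(0);

        // The usual higher-order iterator pipeline.
        let total: f64 = shapes
            .iter()
            .map(|s| {
                count.set(count.get() + 1);
                area(s)
            })
            .sum();

        println!("{}: {} over {} shapes", label, total, count.get());
    }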
FWIW, Rust's first/bootstrap compiler was written in OCaml and there's a lot of obvious influence.
I'm aware that Rust's bootstrap compiler was written in OCaml. And yes, there are lots of ML inspirations in the language -- but not enough to make an ML language. Where are the module functors, for example? Many would consider that an essential element of an ML language, and it is a stretch to call a language an ML family member without them.
I don't want this to devolve into a No True Scotsman debate, I just feel that the parent post -- which decried a lack of similarity between Rust and Scala because they were both in the ML family -- was founded on a weak premise.
A module functor is nothing more than a higher-order module: one that takes as a parameter a structure (module) conforming to an interface (signature). Rust may not have constructs called module functors, but it definitely has constructs that are roughly equivalent in capability, as well as some that are even more powerful (a claim that applies to Scala as well).
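To make the correspondence concrete: a functor over an ORDERED-style signature roughly maps to a Rust generic bounded by a trait. A loose, untested sketch with made-up names:

    // Plays the role of an ML signature such as ORDERED.
    trait Ordered {
        fn lt(&self, other: &Self) -> bool;
    }

    // Plays the role of the functor's output: a structure that works for any
    // type satisfying the "signature".
    struct Interval<T: Ordered> {
        lo: T,
        hi: T,
    }

    impl<T: Ordered> Interval<T> {
        fn contains(&self, x: &T) -> bool {
            // lo <= x <= hi, expressed only through the trait's `lt`.
            !x.lt(&self.lo) && !self.hi.lt(x)
        }
    }

    // "Applying the functor" is just instantiating the generic at a type that
    // implements the trait.
    impl Ordered for i32 {
        fn lt(&self, other: &Self) -> bool {
            self < other
        }
    }

    fn main() {
        let iv = Interval { lo: 1, hi: 10 };
        println!("{}", iv.contains(&5)); // true
    }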
For me, type inference, strong typing, and ADTs with pattern matching puts a language into the ML family quite easily.
Module functors were an SML feature -- not really in the original 'ML' language. I would put Haskell in the ML family, and it doesn't have module functors. Rust has type inference, and a core type syntax and semantics that are heavily inspired by ML.
It's straightforward with dependent types, unless the language treats multiple arguments (or functions of multiple arguments) specially. In my Scala example in the sibling thread, the ().zipped is sort-of-language-level, but one could easily implement the same thing using an HList instead of a tuple. Turning the function into one that accepts an HList requires language-level support or a (standardized) macro, but it wouldn't in a Haskell-like language where functions of multiple arguments are curried by default.