
I am hopeful that Rust can achieve what Elm did not.

I really fell in love with Elm early on, back when it was an experimental language for functional reactive programming that just happened to compile to JavaScript. It was an outgrowth of failed experiments in FRP from the Haskell world. I thought it got so many things right -- and it totally did. But then, just as soon as it started gaining real traction, development on Elm went silent and became siloed, staggeringly slow, locked-down, and unresponsive to users. I understand why this happened and I don't even hold it against the Elm team, but it certainly stunted the language's growth and adoption.

Rust has a much more expressive type system than Elm. The Rust world is much more open, responsive, and caring about user concerns. Rust isn't afraid to offer unsafe escape hatches even if they're not pretty or elegant. With Rust you have the added advantage of being able to use the language for the entire stack, both front- and back-end. That's especially compelling because Rust is on its way to being one of the strongest languages for back-end development due to its combination of type safety, expressiveness, and performance.




Clojure has been there as a fullstack language for a lot longer, and it has been delivering the story Elm was going for far longer, too. And Reagent is a much-needed improvement on React.

The borrow-checker in Rust is kind of a silly tool to use in the context of a managed language (one that does GC) like JavaScript. I don't get it. It's like using a backhoe to plant a few geraniums. Am I just not getting this?


This is a complicated question, and ends up different for each individual. For me, I don't see the borrow checker as being more silly than GC, just an alternative, and one that speeds up my development process, not slows it down. I am also, of course, incredibly biased.

Rust also has many, many features that are not the borrow checker. Some people prefer Rust because of those features, in spite of the borrow checker.

https://without.boats/blog/notes-on-a-smaller-rust/ is also one of my favorite bits of writing on this topic.


That is a great link. The point at the end about making the language embeddable is actually one of the things I love about Rust the most. Rust is aggressively cross platform and I appreciate that a lot. Write a parser in Rust once, run it everywhere.

I’d use Swift a lot more (which has some of the features mentioned in the article) if the resulting code wasn’t limited to Apple OSes (the Linux support is crap). Or perhaps Kotlin if it wasn’t limited to the JVM, etc.


Kotlin isn't limited to the JVM, thanks to Kotlin/JS and Kotlin/Native.


Golang


Wow...there is really really good advice in that post. I was considering making my side project language into a simpler Rust and that post just gave some great suggestions. Especially since I’m writing the compiler in Rust to WASM.


That's a good link there. I think the answer to a smaller Rust is Go, however. The comparative compiler speed alone is worth the price of entry. Don't @ me ;)


Smaller Rust is more like OCaml than Go IMO. Go has an entirely different design philosophy.


Go may be smaller than Rust, but I really don't think it is a smaller Rust. There are too many differences in the philosophy, and they make Go just some other language.


Go is so much smaller that it's no longer anything like a smaller Rust, though.

Or, is Go bigger because the GC represents a ton of complexity?


I don't see clojure in the same realm here. Both rust and elm are geared more toward enforcing correctness through their type systems to tame complexity in large projects. In my experience, clojure, being dynamic, fits more as a comparison to vanilla JavaScript. In a language-to-language comparison, I think clojure is somewhat more appealing than JavaScript due to immutability by default and general functional niceties. However, the fullstack comparison is not quite apples-to-apples as clojure (backend) is JVM and clojurescript (frontend) is node. While it works for some people, clojure feels awkward to me as a fullstack language.


> clojure (backend) is JVM and clojurescript (frontend) is node.

Node is a backend JS implementation. ClojureScript is compile-to-JS and can run on the backend in Node or on the frontend in browsers.


Thanks, that's correct. I often make the mistake of interchanging Node and browser JS by accident...


Types provide only the simplest most trivial kinds of correctness, though. And people routinely make mistakes with types.


Things like null safety aren't trivial for practical purposes IME.


Most Optional implementations are kinda terrible, though. They cost more than they save. If I have a function with a param and I release it with the param required, but later change it to be optional (loosening a requirement), does Rust require everyone calling that function the old way to change their code? If so, that's not an improvement!


You're basically just regurgitating a recent Rich Hickey talk. Interested readers can probably find it on YouTube.

Rich is certainly a brighter individual than me, but on some points he is either missing the point or being intentionally misleading.

For example, he discusses how Either types aren't true sum types because Either<A, B> isn't the same type as Either<B, A>. So he disparages people who say that Rust/Scala/Whatever have sum types. He's missing the point because 1) All Either implementations I've seen have the ability to swap the arguments to match another Either with the types backwards, so it's a sum type in practice, and 2) Clojure has none of it, so why criticize the typed languages by saying their type systems aren't perfect when your language's type system isn't helpful at all? Throw the baby out with the bath water?

To your specific point (which is also one of Hickey's), yes, it does kind of stink that loosening a requirement forces consumers to update their code. However, that minor downside does not mean that Optional is "not an improvement". It's still a HUGE improvement over Java's absurd handling of null (IIRC, Clojure is the same as Java there).

Also, maybe changing something to optional isn't really "loosening" the requirements. It's just changing the requirement. If the parameter changed to optional, don't you want to be alerted to that? Why is it optional now? What will it do if I pass None/null? Maybe I actually would prefer that to the way I called the old version.

It just never struck me as offensive to have to change my code when I upgrade a dependency. I have trouble sympathizing with that mindset.

Edit: And what is the Clojure alternative? You can loosen requirements, but really, you never had enforceable requirements anyway. Is it apples to apples to talk about a typed language loosening its contract?


> "Either types aren't true sum types because Either<A, B> isn't the same type as Either<B, A>."

That's such a weird argument! Did he also complain that tuples aren't true product types because (A, B) isn't the same as (B, A)? Why would they be the same, and not just isomorphic?


I haven't watched the talk recently, but my feeling at the time was that he was just being pedantic about the definition of a sum type. Kotlin's nullable types would be an example of true sum types because they are symmetric. But you can only make a sum of `T + null` and not a more generic `T + U`.

His real point, I believe, was that the `Either` implementations weren't as good as true sum types because of ergonomics. It's part of his philosophy/bias that type systems get in the way and therefore cause more harm than good.

I don't really grok his point most of the time. It just feels foreign to me to not want as strong a type system as possible. But a lot of really smart guys feel that way: him, Alan Kay, etc. I suspect that they're able to track much more stuff in their heads at a time than I am.


The point is Hickey brings up important points about language design as it's experienced by devs actually using the language. Hardly anyone discusses this. Furthermore, you seem to be making my argument for me when you claim that Clojure doesn't have types, so why complain about types? In Clojure you could write a type system to do all that, probably in a dozen hours (the language is programmable, after all), but it would be an academic exercise to most, which is the point Rich is trying to make when he disparages other type systems.


I think it's worth noting that in Rust you can create a function that can take either Option or the value itself, if that is what you really want to do:

https://gist.github.com/rust-play/b28257cd7c48d0f9e9b1893181...
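Since the gist link is truncated, here is a minimal sketch of the pattern (the function name is made up for illustration): a parameter typed as `impl Into<Option<i32>>` accepts a bare value, a `Some`, or `None`.

```rust
// A function that accepts either a plain i32 or an Option<i32>.
// The std library provides `impl<T> From<T> for Option<T>`, so both
// `i32` and `Option<i32>` convert into `Option<i32>`.
fn describe(value: impl Into<Option<i32>>) -> String {
    match value.into() {
        Some(n) => format!("got {}", n),
        None => "got nothing".to_string(),
    }
}

fn main() {
    // Callers can pass a bare value, a Some, or None interchangeably.
    assert_eq!(describe(3), "got 3");
    assert_eq!(describe(Some(3)), "got 3");
    assert_eq!(describe(None), "got nothing");
}
```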


That is nice


If you change a type from T to Option<T>, yes, all the other code has to change to take the option into account.


Yeah, that's not good design. Loosening a requirement should not make callers complying with the stricter one have to change anything. But, ohhh... right, only the type changed. So everybody stop what you're doing and start over.


I strongly disagree, but this is why it's great we have a ton of languages! To me, forcing you to handle it is the exact point of an option type.

The type wasn't the only thing that changed, the possible values have changed. You may be getting more values than you were previously. This means that some of your assumptions may be incorrect. Of course your code needs to change.


I might be misunderstanding, but I think you are talking about slightly different points here. It seems to me that the critique of an explicit Option type (that acts sort of like a box, in contrast to Kotlin's T? vs T) applies to when you pass in the Option as a function parameter to a function that previously expected it to always be T instead of Option<T>. In that case you as a caller are never "getting more values than you were previously", but you can now certainly pass in more values than you could before.

Forcing callers to refactor their calls to use a new Option<T> type as a parameter simply amounts to a change in the type signature, but since the function is more liberal than before, it cannot break your assumptions (at least not assumptions based on the function type signature).

(For what it's worth, I do find Kotlin's T? to be more elegant than the Haskell/Rust-style Option/Some type. But then again, Kotlin is not fully sound, so there's that. Dart's implementation of T? will be fully sound though, so there are definitely examples of languages going that route.)


That is true! You're right that the perspective can be different.

You could write `<T: Into<Option<i32>>>` if you wanted, and your callers won't change.

Frankly, using options as parameters is just not generally good design, so this issue doesn't really come up very often, in my experience. There are exceptions, but it's exceedingly rare.


> Frankly, using options as parameters is just not generally good design

That's true. Even in annotated Python, I want to ensure all parameters are set before calling my functions/methods. Saves a lot of complications.


Ah, I see where the misunderstanding is. You can make it so you change only the function signature and behaviour or you can make it so you have to also change the function call site.

Ever since https://github.com/rust-lang/rust/pull/34828 you can transform any `f` that takes a `T` into an `f` that takes an `Option<T>` without any of the call sites changing.

For instance, look at this playground https://play.rust-lang.org/?version=stable&mode=debug&editio...

Your function `get_string_raw` which just handles `i32` can be transformed into a function `get_string` which handles `Option<i32>` without the thing calling changing how it calls the function. And the new `get_string` can accept `Some(i32)` or `None` or just `i32`.

Of course, this is slightly broad for brevity: you can now pass in anything that can become an `Option<i32>` but you can just define a trait to restrict that if you wanted.

You can get that sort of covariant effect that you wanted.


Well, yes, of course. The thing you could previously rely on being present can no longer be guaranteed to be present - that _should_ require code in the calling function to change.


Not if I loosened the requirement. Stricter adherents shouldn't have to change anything. This is poor language ergonomics.


To be clear, you're talking about the function signature changing from

    fn(a: i32)
to

    fn(a: Option<i32>)
?

Technically, yes, all callers would have to update, but practically, you'd just define

    fn(a: i32) { fn_2(Some(a)) }
to avoid the breakage. That is, you're essentially telling the compiler how to loosen the requirements. Ergonomically, this seems rather fine. Especially if this means you gain (some) protections from the much more problematic case of restricting the requirements.


There is also the Option of accepting Into<Option<T>>, which does cover both variants and is completely backwards compatible.


> does rust require everyone calling that function the old way to change their code? If so that not an improvement!

Can you share why that is bad? The compiler will tell you exactly where you need to make the changes.


Poor ergonomics: I loosened a requirement. Nobody should have to change their code. Kotlin does this right.


How often do you believe this really happens in practice? And does that truly outweigh the benefit of being able to define a precise contract on your APIs?

How many times have you written a function and a version later said "Oh, wait. I guess I don't actually need that required Foo parameter! I used to, but now I don't!"


> How often do you believe this really happens in practice?

Regularly if you're doing refactoring of code. Otherwise code becomes unchangeable because it's too big of a burden once it's clear it needs to change.

> And does that truly outweigh the benefit of being able to define a precise contract on your APIs?

I would point you to the XML standards which allowed people to do exactly that, and instead JSON won.


> Regularly if you're doing refactoring of code.

Are we talking about a published library or your internal-only code? If the former, I sympathize with the argument that relaxing a requirement should not force consumers to change their code. If the latter, then I find it much harder to sympathize. You're already refactoring your code, what is a few more trivial syntactic changes? You could almost do it with `sed`.

> I would point you to the XML standards which allowed people to do exactly that, and instead JSON won.

You know, this is an interesting point. And I guess I'm consistent, because I absolutely hate JSON. I've only had to work with XML APIs a few times, but every time, it was perfectly fine! I could test my output against a DTD spec automatically and see if I did it right. It was great. JSON has JSON Schema, but I haven't bumped into it in the wild at all. So it seems like "we" have definitely chosen to reject precision for... readability, I guess?


You might really enjoy going and reading about CORBA and SOAP -- two protocols that have tight contracts. I'm sure you can still find Java/JavaScript libs that will support both. And if you really, really want, you can put them into production -- CORBA like it's 1999, while singing along to the Spice Girls.

And what you'll find is that the tighter the contract, the more miserable the change you have to make when it changes. It's one thing if it's in one code base, it's another if it affects 10,000 systems.


I'll admit that I've never deployed a service with 10,000+ clients.

And CORBA (after looking it up) seems to include behavior (or allow it, anyway) in the messages. That's about much more than having a precise/tight contract on what you're sending. It's much more burdensome to ask someone to implement so much logic in order to communicate with you. I'm fine with the contracts only being about data messages.

SOAP is closer to what I'm talking about. Or even just regular REST with XML instead of JSON.

I'm asking genuinely, how would life be worse between a REST + XML and a REST + JSON implementation of some service? In either case, tightening a contract will cause clients to have to firm up their requests. In either case, loosening requirements (making a field optional, for example) would not require changes in clients, AFAIK.

The only difference that I see is that one can write JSON by hand. And that's fine for exploring an API via REPL, but you surely don't craft a bunch of `"{ \"foo\": 3 }"` in your code. You use libraries for both.

It just seems insane that we don't have basic stuff in JSON like "array of things that are the same shape".


> And CORBA (after looking it up) seems to include behavior (or allow it, anyway) in the messages. That's about much more than having a precise/tight contract on what you're sending.

The IDL (interface description language) for CORBA is a contract. It defines exactly what can or can't be done. It's effectively a DTD for a remote procedure call, including input and output data. (Yes it can do more than that, but realistically nobody ever used those features)

A WSDL for SOAP is similar. CORBA is basically a compressed "proprietary" bitstream. SOAP is XML at its core, with HTTP calls.

> I'm asking genuinely, how would life be worse between a REST + XML and a REST + JSON implementation of some service?

So REST+XML vs REST+JSON alone (no DTD/XSD/schema) would be very similar -- other than the typical XML vs JSON issues. (XML has two ways to encapsulate data -- inside tags as attributes and between tags. Also arrays in XML are just repeated tags. In JSON they are square brackets []).

But let's say you need to change the terms of that contract (a new feature, usually): will code changes be required on client systems?

* If you used a code generator in CORBA with IDL the answer is yes, there will be code changes required.

* If you used a WSDL and added a new HTTP endpoint, the answer was no. If you added a new field to an existing endpoint, the answer was yes. (See [2])

* If you used a DTD/XSD, the answer is usually yes, since new fields will fail DTD validation using an old DTD -- that is if you validate all your data upon receipt before you process it.

And this was fine for services that didn't change frequently or smallish deployments.

In large systems, schema version proliferation became a nightmare. Interop between systems became a pain of never-ending schema updates and deployments, hoping that you weren't going to break client systems. And orchestrating deployments across systems was painful. Basically everything had to go down at once to update -- that's a problem for banks, say.

What's sad to me is that this was well known back in 1975. [1] When SOAP was developed around 2000, they violated most aspects of this principle.

> but you surely don't craft a bunch of `"{ \"foo\": 3 }"` in your code. You use libraries for both.

In python, JSON+REST is:

     resp = requests.post(url, json={"field": "value"})
What I find really appealing in REST+JSON is that validation just happens on the server side, and that's usually good enough. Sure, there's Swagger, but that's a doc to code against on the client side.

I don't feel that schemas and the need for tight contracts are all bad. I think if your data is very complex, a schema becomes more necessary than not when documents are bigger than 1 MB, say. I also think it's fine if your schema changes rarely. And yeah, if you need a schema for tight validation, JSON kinda sucks.

But that's the question, do you really need tight validation, and therefore coupling, or is server-side validation good enough? And in most cases people tend to agree with that.

[0] https://en.wikipedia.org/wiki/Service-oriented_architecture

[1] The Practical Guide To Structured System Design (1st ed.), Page-Jones, Yourdon Press, (c) 1980, pp103, footnote at bottom of page.

[2] https://www.w3.org/TR/wsdl.html#_wsdl


> If you used a DTD/XSD, the answer is usually yes, since new fields will fail DTD validation using an old DTD -- that is if you validate all your data upon receipt before you process it.

I'm not sure I follow. DTD, as far as I know, allows both optional elements as well as attributes. If you add a feature, a client with the old version should continue to work correctly if you add optional elements. If they are NOT optional, then the client will fail regardless of whether you did XML+DTD or JSON, because your API needs that data and it simply won't be there.

What am I misunderstanding?

> What I find really appealing in REST+JSON is that validation just happens on the server side, and that's usually good enough. Sure there's swagger, but that's a doc to code against on the client side.

As a client, you don't have to validate your request before you send it. But it's nice (and probably preferable) that you can.

> In python, JSON+REST is:

> resp = requests.post(url, json={"field": "value"})

requests is not built-in to Python, right? So you are still using a library to JSONify your data. If you were to use urllib, then you'd have to take extra steps to put JSON in the body: https://stackoverflow.com/questions/3290522/urllib2-and-json

What's more, you still are not crafting the JSON yourself if you call json.dumps on a dictionary.

But, yes, crafting a dictionary with no typing or anything is still many fewer keystrokes than crafting an XML doc would be, even with an ergonomic library. But again, how much are you doing what you typed in your real code? That looks more like something I'd do at the REPL.


> If you add a feature, a client with the old version should continue to work correctly if you add optional elements. If they are NOT optional, then the client will fail regardless of whether you did XML+DTD or JSON, because your API needs that data and it simply wont be there.

Sure, but that begs the question: how is that better than JSON, exactly? Maybe strong typing? And why isn't just sending a 400 Bad Request enough if the server fails validation?

I mean you could say well, "I know the data is valid before I sent it". But you still don't know if it works until you do some integration testing against the server -- something you'd have to do with JSON, anyway. XML is only about syntax, not semantics.

From what I've seen, XSDs tend to promote the use of complex structures: nested, repeating, special attributes and elements. And if you give a dev a feature, s/he will use it. "Sure, boss, we can keep 10 versions of 10 different messages for our API in one XSD." But should you?

JSON seems to do the opposite: it forces people to think of data in terms of smaller chunks, say. Yes, you can make large JSON APIs that hold tons of nested structures, but they get unwieldy quickly. And most devs would just break that up into different APIs, since it's easier to test a few smaller messages than one large message.

> As a client, you don't have to validate your request before you send it. But it's nice (and probably preferable) that you can.

If you unit test your code, good unit tests serve as validation -- something you should be doing anyway. If you fail validation on your send, you have a bug anyway -- it's just you didn't get a 400 Bad Request message from the server. But to the user/dev, it's still a bug on the client side.

> requests is not built-in to Python, right?

Yes, requests is a third-party library. But there's a lot of stuff not in the standard library that should be. The point is normal day-to-day code can be just a one-liner using native Python data types.

> What's more, you still are not crafting the JSON yourself if you call json.dumps on a dictionary.

Sure, maybe a technicality here. If I type this, is it Python or JSON?

    { "field": [ 1, 2, 3 ]}
Well, the answer is that both will parse it. json.dumps() just converts it to a string. No offense here, but I see it as a distinction without a difference.


> Kotlin does this right.

TypeScript as well :-)


I think simple, trivial type systems offer simple, trivial correctness. Robust type systems can offer robust correctness (when used to their potential).


Types still exist in Clojure, of course; you just have to spend time reasoning about what they may be in any given place. The longer I've worked with Clojure, the less I've understood this argument.


Writing your own type system in a Lisp is an afternoon task, but nobody does that because...


In addition to the benefits listed in the other comments, exhaustive checks in sum types are useful for catching unhandled cases, especially when refactoring.
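A minimal Rust sketch of that: adding a new variant to the enum below (say, `Timeout`) would turn every `match` over it into a compile error until the new case is handled, which is exactly what you want during a refactor.

```rust
// Adding a variant here breaks every non-exhaustive `match` below
// at compile time, pointing you at each site that needs updating.
enum Status {
    Ok,
    NotFound,
}

fn message(s: Status) -> &'static str {
    // The compiler verifies that every variant is covered.
    match s {
        Status::Ok => "all good",
        Status::NotFound => "missing",
    }
}

fn main() {
    assert_eq!(message(Status::Ok), "all good");
    assert_eq!(message(Status::NotFound), "missing");
}
```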


> The borrow-checker in Rust is kind of silly tool to use in the context of a managed language

I suspect a lot of people are drawn to Rust more because of its type system and great tooling. I'd be perfectly happy with a version of Rust that swapped out the borrow checker for a GC, but such a language doesn't exist today, and Rust does exist.


I thought that F#, OCaml, Standard ML, Swift were such languages.


Yeah, the borrow checker is an interesting solution to a problem I don't have with a managed language. It's the main "safety" feature people coming to Rust are introduced to. I agree with a lot here about Rust: "one does not let one's friends skip leg day" -- there's a lot more to good language design than memory management. Is it worth it?


One of the things I like about non-managed languages is the ability to have true destructors. Releasing resources is awkward at best in Java, et al.
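In Rust, that looks roughly like the sketch below (illustrative only, with a counter standing in for a real resource): the `Drop` trait is a true destructor that runs deterministically at end of scope, with no try/finally or try-with-resources ceremony at the call site.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts destructor runs so the timing is observable.
static DROPPED: AtomicUsize = AtomicUsize::new(0);

struct Connection;

// Drop is Rust's destructor: it runs deterministically when the value
// goes out of scope, not at some later GC or finalizer pass.
impl Drop for Connection {
    fn drop(&mut self) {
        DROPPED.fetch_add(1, Ordering::SeqCst);
    }
}

fn dropped() -> usize {
    DROPPED.load(Ordering::SeqCst)
}

fn main() {
    {
        let _conn = Connection;
        assert_eq!(dropped(), 0); // still in scope, not released yet
    } // <- the destructor runs exactly here
    assert_eq!(dropped(), 1);
}
```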


Note that you can have linear types (which is what gives you what you're talking about) in managed languages. You can have them as an extension in Haskell, for example.

Rust is the only mainstream-ish language to really use them, though.


Rust uses a mixture of affine and ... "regular"? types. My understanding is that affine is a looser version of linear because the type doesn't have to be consumed.

You can have dtors in non-affine types (types that implement the Copy trait) in Rust as well. I'm really only talking about C++ style destructors. Those don't require linear or affine types. But, I agree, that in a managed language, having a linear type is one way to get predictable destructors to run.

Strangely, Swift has deinit{} for its class types (ref-counted), but not for its struct types (value types).


You cannot implement Drop for a Copy type.


Fair enough! I never wanted to implement Drop on a Copy type, but I assumed you could.


> "one does not let one's friends skip leg day"

That is such a silly meme. It's silly because it suggests that Rust is a one-trick pony, or that it only focuses on memory safety, which isn't true in the least.

Besides, the borrow checker has at least one other major upside: Show me another mainstream language in which safe and efficient shared-memory concurrency and parallelism is as easy as in Rust.


In many ways the borrow checker is a tool for enforcing safe mutation of values. As a side effect it happens to prevent entire classes of memory bugs, but it also prevents many bugs that aren't related to memory safety. Historically that side effect was the motivation for designing and implementing it, but in practice it's useful as a general guide for writing rigorous and correct software. As a result, I don't think the borrow checker as silly in this context as you're suggesting.


Rust is not running in the context of JavaScript. It's running on WebAssembly, which has no garbage collector.



One nice thing the borrow checker enables, besides GC stuff: You can guarantee that you never keep a reference to the thing a Mutex is protecting, after you unlock the Mutex.


Replying to my own comment, and very much related to that, another is thread safety. In most languages, if I have a type with some method that e.g. increments a member variable, that method is non-thread-safe by default. (Modifying internal data structures can be even worse, of course, depending on the language and the data structure.) In Rust, such a type is trivially thread safe by default, because the language just won't let you modify it from multiple threads without a Mutex or similar. When you want types that can be shared and modified without locks, the author of the type has to take steps to implement this (using atomics, lock-free data structures, etc.). That means that there's generally no need to trawl through docs or source code to find out whether a method is thread-safe or not. The type itself knows.
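A small sketch of that guarantee (the function name is made up): shared mutation across threads only compiles when it goes through a synchronization type like `Mutex`, so thread safety is visible in the types.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_sum(n_threads: usize, per_thread: usize) -> usize {
    // Arc shares ownership across threads; Mutex gates mutation.
    // Handing the threads a bare `&mut usize` would not compile.
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                // The lock guard is the only way to reach the data;
                // the lock is released when the guard goes out of scope.
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_sum(4, 1000), 4000);
}
```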


For anyone curious about Clojure, I'd recommend checking out the CircleCI frontend [0] for an example of a production ClojureScript app.

[0] https://github.com/circleci/frontend


One of the problems with Clojure is the same as with JS: it lacks a static typing system. It lets through a ton of bugs which the compiler normally catches, both trivial and nontrivial.

Yes, Typed Clojure addresses this problem to some extent, as does TypeScript.

But Elm, Rust, and the language formerly known as BuckleScript address the problem in a nicer and better-integrated way, to my mind.


I don't think anyone who uses Clojure in anger would put it on the same level as JavaScript when it comes to letting through a ton of runtime errors. The integrated tooling around editors, clojure.spec and the REPL is very good at curtailing the downsides of dynamic typing.


There is no javascript in a rust web application.


[flagged]


A lot of ad-hominem and not a lot of substance in your comment.

> due to some misguided agenda that it’s safer (code is data!!!)

What does that even mean? Rust is a lot safer than C or C++, although of course here we're talking about effectively replacing JS so it's a different discussion.

I would personally love to write front-end code in Rust, not because it's necessarily the most appropriate language for such a task but because I'm familiar with it and can reuse what I already know. Basically the same justification people have used to bring JS outside of the browser.


> What does that even mean?

Turing machines are not safe; code is data, and the machine has no way of telling the difference. For instance, if I were to compile valid Rust code, then open the binary in a hex editor and start changing it, Rust can/will do nothing to stop that, and so long as the code is valid it will run.


Clojure is nice for heavy data engineering projects that require robust/stable/mature tech like the JVM. But as a fullstack language for a web app, it just adds a ton of complexity over just using JS.

In ClojureScript, interacting with the JS ecosystem is painful because of its reliance on the Closure compiler.

In Clojure, it's almost the same: most Java libs are over-engineered and horrible to use, but you need to reach for them because Clojure lacks an ecosystem.

So while Clojure is a better/nicer language than JS, the tradeoffs are not worth it if you want only one language for your webapp (SPA and server).


> adds a tons of complexity over just using JS.

What "tons of complexity" are you talking about? Clojure is a much simpler language (in the decomplected sense) than JS. Less syntax, better build tools, uniform stdlib, no webpack/babel nonsense. This sentence makes no sense.

> interacting with the JS ecosystem is painful cause of its reliance on the closure compiler.

Again, what? With tools like shadow-cljs, requiring JS libs and using them in your project is trivial (just require and import like you would any cljs library).

> In Clojure, is almost the same, most Java libs are over-engineered and horrible to use but you need to reach for them because Clojure lacks an ecosystem.

I've been writing Clojure for 10 years and rarely have I had to reach for Java. This is absolute rubbish.

You seem to hold strong opinions about a language you barely understand.

Edit: reading your comment history, you seem to have an axe to grind with Clojure.


It is a valid point; I don't know why the downvotes.

I think Rich had to make a decision: write a completely new Lisp from scratch, or leverage an existing ecosystem and build a Lisp on top of it.

>> In Clojure, is almost the same, most Java libs are over-engineered and horrible to use but you need to reach for them because Clojure lacks an ecosystem.

The biggest problem with Java libs is the mixed quality. Many big data projects have this problem: if you peek under the hood, you'll be amazed at the abstractions leaking into different bits and pieces of the system. My favourite example is how ORC imports Hadoop FS:

https://github.com/apache/orc/blob/master/java/core/src/java...

There is one more problem with Clojure that I find annoying: the actual Java interop. I've run into issues with this many times, and some Java libs are almost unusable without a thin wrapper written in pure Java.

Other than that, I think Clojure is still one of the best options out there.


What you see as negatives, I see as Clojure's main selling points.

I usually avoid languages that keep reinventing the wheel of established libraries on the platform, "'cause it isn't idiomatic".


Don't disagree with you, but two things:

1. Clojure doesn't have interop as good as Kotlin's.

2. For some interop-heavy projects, it's simpler to write the thing in Java.


What interop does Kotlin have that's better than Clojure's when it comes to calling Java code?

And the other direction is just as fun: try calling Kotlin coroutines from Java without having to write wrappers for them, or Kotlin code without putting @Jvm... annotations everywhere.


It’s been years since I used clojurescript so ymmv, but imo integrating so tightly with the closure compiler was a big mistake. It adds tons of futziness and build complexity for what, in practice, are usually pretty marginal reductions in bundle size that end users won’t notice.


Never had a problem with the closure compiler vis-a-vis ClojureScript, is this really a thing?


I prefer to use languages in the domains they were designed for, and I think it's almost impossible for a general-purpose language, or a systems programming language, to be better than a domain-specific language designed with the right trade-offs in mind. Elm having less abstraction power and not offering full interop with JS was a design decision. Because of it, we have lots of high-quality packages that make sense for Elm, instead of bindings to JS libs. On the other side, I can't see any language being better at systems programming than Rust.


> “ I prefer to use languages in the domains they were designed for”

So for web development, do you use PHP?

Because neither Python, nor Ruby nor even JavaScript itself was designed for web development.


JavaScript was definitely designed for web development. Though, at the time web development meant a tiny bit of magic in an otherwise static html document.


I wouldn’t call “animating a piece of text” on an HTML page “web development”.

16 bit home computers could do this better 10 years before JS.


The crucial distinguishing factor is that JS was made to animate a bit of text... on every device in existence, past present or future. That's the biggest difference between the web and native.


The "every device in existence" claim fails with Node, for example, which has a different standard library, and every browser supports libraries and even syntax differently (which is why we backport stuff with tools like Babel).


I can only guess, but I think the parent means web development in the context of an HTTP server (GET/POST/etc.).


Most of Ruby's life has been dedicated to web development.

The original use-case at birth of a language matters less, as programming languages are living things that evolve and mature, sometimes morphing into very different things than they were at the beginning.

Likewise Rust's current lifespan has almost entirely been dedicated to taking on C/C++ and their use-cases in systems programming and server applications. That's all that really matters here.


No, because PHP was not designed at all; it was just a hack that solved some problems and started to evolve. Anyway, web development in '95 is not the same thing as it is today.


Your statement would pretty much also be true if you substituted Javascript for PHP.


Language design does not end with the release of the first version.


> nor even JavaScript itself was designed for web development

Javascript was designed for web development of the 90s. It evolved along with web development.


> I can't see any language being better at system programming than Rust.

Ada/SPARK, but it won't get a standing ovation from the current generation of devs adopting Rust.

Luckily it gets some headline exposure via NVidia and Genode OS adoption.


I actually see Ada/SPARK referenced pretty frequently in discussion with other Rust programmers, and I've been to at least one Rust talk in which learning from Ada was the main topic.

Anyway, it's hard to blame people for looking elsewhere when the best Ada compilers were proprietary for so long.


In my experience, not so much, beyond some occasional assertions that Rust is responsible for features Ada already had first.

As for compilers being expensive: while true, I learned Ada from books a couple of years before I was able to put my hands on a compiler.

Then again, maybe that is no longer a fashionable way of learning.


Ada seems very nice, but compared to Rust I think it lacks momentum and (some) ergonomics. It's sad, but a C-like syntax on top of Ada's semantics would probably be more popular.


I really struggle to see the appeal of Rust for frontend web development over TypeScript.

Perhaps there are some rare scenarios where you need to eke out every little bit of performance... But in normal circumstances, TypeScript offers a fantastic combination of familiarity, expressiveness and ... if not type safety, then at least some degree of type sanity.


I would choose Rust over TS not for any performance characteristics, but because I strongly prefer it as a language, due in large part to features like pattern matching, sum types, exhaustiveness checking, traits, etc.

I do quite like TS, but you can never really get away from the fact that you’re still limited by many historical JavaScript gotchas, and the type system is nowhere near as powerful as Rust’s.

Edit: That being said, as a business decision, TS will often be a better pick because it’s so much easier to find talent, and the learning curve is much smaller.


Sum types and exhaustiveness checking are doable in TypeScript, though it's not especially ergonomic:

https://www.typescriptlang.org/play?#code/C4TwDgpgBAygrgWygX...
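For anyone who doesn't want to follow the playground link, a minimal sketch of the pattern (type and function names invented for illustration): a discriminated ("tagged") union plus a `never`-typed default arm gives you compile-time exhaustiveness checking.

```typescript
// A tagged union: the `kind` field discriminates the variants.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

// If every variant is handled, `x` has type `never` and this
// call type-checks; add a variant without handling it and the
// `default` branch fails to compile.
function assertNever(x: never): never {
  throw new Error(`Unhandled variant: ${JSON.stringify(x)}`);
}

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    default:
      return assertNever(s);
  }
}

console.log(area({ kind: "rect", width: 2, height: 3 })); // 6
```

It works, but compared to Rust's `match` you're hand-rolling the exhaustiveness check with a helper function rather than getting it from the language.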


There are major benefits to using the same language on backend and frontend though, especially with typed languages where you can share types. In my experience, this makes a much bigger difference than the specific language you choose.

So the real question imo should be which is better considering both frontend and backend. I suppose it’s also a very project-specific question. Maybe a heavy frontend project = ts, while heavy backend = rust.


TypeScript's type system is actually pretty limited. I spent about two weeks trying to do something nice with it before giving up and trying Scala.js, where I was able to do everything I wanted within a couple of days.


How long ago was that?

TypeScript gets better all the time; in the last year or so I haven't run into anything I couldn't do in TypeScript (and I use it all the time, along with Scala).


I think it was last year. There was no nominal typing at all, the unsound variance of arrays meant I couldn't really trust my types anyway, I wasn't able to treat structures generically as records the way I'd like (i.e. no shapeless equivalent), and of course there were no HKTs.
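For readers unfamiliar with the first two complaints, here is a small sketch (all identifiers invented): arrays are treated covariantly, which is unsound, and the usual workaround for the lack of nominal typing is a "branded" phantom field.

```typescript
// 1. Unsound array variance: Dog[] is assignable to Animal[],
//    so a non-Dog can be smuggled into a Dog[] without a type error.
interface Animal { name: string }
interface Dog extends Animal { bark(): string }

const dogs: Dog[] = [];
const animals: Animal[] = dogs;  // accepted: arrays are covariant
animals.push({ name: "cat" });   // also accepted, yet dogs[0] has no bark()

// 2. No built-in nominal typing: structurally identical types are
//    interchangeable. The common workaround is a branded type.
type UserId = string & { readonly __brand: "UserId" };

function lookupUser(id: UserId): string {
  return `user:${id}`;
}

lookupUser("u-1" as UserId);     // fine: explicitly branded
// lookupUser("u-1");            // compile error: plain string lacks the brand
```

Branding recovers some nominal safety, but it's a convention layered on the structural system rather than a language feature.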


Ok. I probably don't use such advanced parts of the type system (I didn't know about Scala's shapeless until now), so I might not notice the differences.


Rust wins in familiarity... if you're already familiar with Rust!

If you have a Rust project that needs frontend code it's nice to be able to use the same language everywhere.


As a longtime user of both, Rust has a long way to go on the frontend before I would consider switching.


For anyone looking for an Elm-like experience without the limitations imposed by its creator, take a look at Elmish, a port of the Elm architecture to F#.

Advantages: You can use F# on both the front end and back end. Pretty good JS interop, with a lot of opportunities to shoot yourself in the foot.

Disadvantages: You will probably need to use the Elm docs/tutorials to learn about Elmish.


> It was an outgrowth of failed experiments in FRP from the Haskell world

I'd like to hear more about these failed experiments. Reflex is alive and active, and is perhaps the best tool out there for fullstack applications.


As someone who uses Reflex at work and has also written a bunch of Rust, I wish Rust the best but am highly skeptical it's going to be a productive tool, and not just nerd-fodder, for user interfaces.

Reflex is distilled mutation and weak-reference black magic, and the safe interface it provides relies heavily on higher-kinded types. Even if someone goes through all the trouble to implement it or something like it (and that would be cool!), I don't think the resulting algorithms can be packaged up in nice abstractions.

Rust is a great language, but it just shouldn't bother competing where GC will do. There's no point trying to win a race with that handicap.

------

I do believe in "one language for all tasks", actually. It's just that one language will have split ecosystems as it will support many idioms that don't interoperate well. Put another way, let's start with "multiple languages, perfect type-safe FFI", and then go for endosymbiosis.


Why not have tools that specialize and are strong in some areas instead of trying to have a Swiss Army Knife language? In other professions it's normal to have different tools for different tasks. Or is there something I'm missing?

I personally would be happy with many different languages that are focused on front end development, like how we already have many that are focused on back end development.


I noticed Elm is listed as an inspiration by a number of Rust front-end frameworks (seed and iced for instance). It seems a lot of people feel the same in the Rust community.


I don't understand why Rust is more compelling than ReasonML (OCaml).

Reason is more expressive, and on the web low-level memory management isn't a concern.


Isomorphic codebases. I would much rather use Rust on the back-end than ReasonML/OCaml.


Why's that? OCaml has excellent performance and is a lot higher-level than Rust. I personally see Rust as a language that should really only be used when utmost performance or hard latency requirements make GC'd languages unsuitable. Most backends for web apps would run only marginally better in Rust than in OCaml, and would incur a steep productivity and maintenance penalty.

A lot of people seem to be suggesting Rust as a GC language replacement instead of a C/C++/asm replacement, which I've never really got. Is the mutation/lifetime/pointer management really worth it for this kind of thing?


I think it stems from the root cause that many people equate GC'd languages with heap allocation everywhere, and never properly learned how to do value allocations in languages that offer language or library features for them.


I was wondering about that too. OCaml has a really good track record, and the whole BuckleScript ecosystem has also proven to be good for reliable frontend development.


> I am hopeful that Rust can achieve what Elm did not.

I am hopeful we can see the web ecosystem evolve into being capable of 'Single Language Web Applications', as Rust seems to be, and C# as well. Of course there's Flutter, but it seems to just paint on a canvas instead of reusing the DOM.


I'm not sure Rust provides a lot of value for web development. Instead of Rust, use something like OCaml, which has a nice frontend and backend story. It's certainly going to be a lot less work than using Rust. And it's more functional to boot.



