
> Are all the new languages such huge advances over older ones that they do that?

I don't know about other languages, but having personally migrated from Perl and Ruby to Erlang/Elixir, I at least believe that to be the case in that particular circumstance.

A big part of it is that parallel programming is catching on as mainstream rather than something to be shunned; whereas in the past folks were conditioned to avoid threading or forking because of having to think about thread safety, there are now languages like Erlang (and its kids Elixir and LFE), Rust, Go, Scala, Julia, etc. that are meant to make parallel computing easier to reason about, less dangerous, or sometimes even both.
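To give a flavor of what "easier to reason about" means in the Erlang/Elixir world, here's a minimal Elixir sketch (values made up for illustration): each process is isolated, so there's no shared state to lock, and all coordination happens via messages.

    # Spawn an isolated process; it shares no memory with the caller,
    # so there is no lock or thread-safety concern to reason about.
    parent = self()

    spawn(fn ->
      # All communication happens via immutable messages.
      send(parent, {:result, 21 * 2})
    end)

    receive do
      {:result, value} -> IO.puts("got #{value}")
    after
      1_000 -> IO.puts("timed out")
    end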

On top of this, though, the reliability features of Erlang and its kids let them achieve the legendary "nine nines" of uptime (99.9999999%) relatively easily. I say "relatively" because you certainly still have to think about it - particularly if you're trying to push a change to, say, how data is represented on the (D)ETS, Mnesia, CouchDB, or SQL side of things - but it's much easier than with a lot of other languages. That level of availability has huge implications for a company's bottom line, particularly for companies that measure downtime in "thousands of dollars per second" instead of just seconds. A well-written Erlang/OTP (or Elixir/OTP, or LFE/OTP, or anything/OTP) application can deliver this on the software side rather effectively. On the hardware side, you'd still want redundancy wherever possible/feasible, including on the network side of things; most well-run hospitals, for example, have connections with at least two different ISPs, so that if one goes out they still have the other, and will only lose connectivity in very extreme circumstances.
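The OTP building block behind that kind of uptime is supervision. Here's a minimal, illustrative Elixir/OTP sketch (not production-grade; the Worker module and :boom message are made up) showing a supervisor automatically restarting a worker that crashes:

    defmodule Worker do
      use GenServer

      def start_link(_arg), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
      def init(:ok), do: {:ok, %{}}

      # Deliberately crash on a bad message to show supervision in action.
      def handle_cast(:boom, _state), do: raise("crash!")
    end

    # The one_for_one strategy restarts Worker after a crash,
    # so the rest of the system keeps serving requests.
    children = [Worker]
    Supervisor.start_link(children, strategy: :one_for_one)

    GenServer.cast(Worker, :boom)   # Worker dies...
    Process.sleep(100)
    GenServer.cast(Worker, :boom)   # ...and has already been restarted.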

In the case of Elixir, it makes for a very solid migration target from, say, a Rails codebase; the syntax is a bit more modern (I personally prefer Erlang's in many cases, but Elixir's pipe operator is a godsend) and familiar to Rubyists ("do" blocks, more flexible pipelining with the |> operator, etc.), while being fully compatible with the existing Erlang ecosystem and the growing Elixir one.
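As a small illustration of that pipe operator (the input string is made up), compare reading nested calls inside-out with the piped version, which reads top to bottom:

    # Without the pipe: read inside-out.
    String.split(String.downcase(String.trim("  Hello World  ")))

    # With |>: each step feeds its result to the next.
    "  Hello World  "
    |> String.trim()
    |> String.downcase()
    |> String.split()
    #=> ["hello", "world"]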



Don't get me wrong, it would be great if all Ruby developers switched to Erlang/Elixir, but what comes next? I mean, does switching languages more often than once a decade result in a net benefit to the industry or a net loss? I fear it's the latter.


It depends on how drastic the switch is.

Most of these switches are driven by a shift to some new programming paradigm. We started with procedural programming, then made a big shift to object-oriented programming, and now we're starting to see a big shift toward functional/declarative programming. Who knows what'll be big next?

If this happens too frequently, then there are certainly going to be problems, yes. The current trend, however, indicates that it's not happening too frequently, at least not yet; the current shift from object-oriented imperative to functional/declarative programming (with some object orientation here and there, though this isn't very pronounced in Erlang and its family tree) seems to be driven by necessity rather than by adopting shiny for the sake of shiny, and I think that's what makes the difference here.

Basically, the more dramatic the switch is, the more likely it is to be a net gain rather than a net loss, since it indicates that the switch was truly necessary. Switching from Python to Ruby, for example, would be a net loss: while they have different syntax and in some cases different programming styles and ecosystems, they're both imperative scripting languages, and you're not really changing all that much besides a bit of syntax and semantics. Meanwhile, a switch from Python to Haskell, for another example, is more likely to be a net gain, since you wouldn't even consider that switch unless you felt the need for the massive paradigm shift it requires.

As a disclaimer, my claims above are mostly conjecture, though I think they correspond well with reality, at least by my own observations.


> Most of these switches are driven by a shift to some new programming paradigm.

Most of the switches should be driven by one thing, and one thing only: reducing the cost of software development (or increasing quality, which is really the other side of the same coin). Whether a new programming paradigm actually achieves that or merely advertises itself as achieving it is what accounts for the very high churn in fashion-driven SV and the low churn elsewhere (where people are actually interested only in cost reduction).

> the current shift from object-oriented imperative to functional/declarative programming

As someone familiar with the software industry, I can tell you that there is no such shift happening other than in the minds of some wishful thinkers. Are functional concepts being adopted by OO languages? Sure! (Although Gilad Bracha would tell you, correctly, that most of these concepts were already in some OO languages -- e.g. Smalltalk -- long before people in the industry started using the term FP.) But people aren't really switching languages outside those sectors that have made switching languages every few years a lifestyle choice.

> Basically, the more dramatic the switch is, the more likely it is to be a net gain rather than a net loss,

I completely agree with that.

> since it indicates that the switch was truly necessary.

But not with this. Dramatic changes do have the potential to make a dramatic impact, but 1/ they rarely do, and 2/ when they do, it's mostly in the runtime -- not the language. Erlang is certainly an excellent runtime that has a big impact (though personally, I prefer the JVM, which can do anything BEAM does, only better), but Haskell? Haskell is a very, very unproven language. I remember that about fifteen years ago everyone was talking about it, saying it was going to be the next big thing. Twenty years into its existence, though, the largest single Haskell program is still its own compiler, which is not particularly large. The truth is that nobody really knows Haskell's actual benefits (in terms of real impact on project development), because in twenty years no one has tried to actually use the language for anything big (and small projects are usually cost-effective in any language). Its impact could be great, but it may well be negative. Nobody knows. So if anyone tells me they've switched to Haskell out of necessity, I call BS.

But switching to BEAM is always a good idea (sooner or later) for large projects (unless you're already on the JVM), although if you're making the switch, I'd advise going with the JVM, as it's a safer bet (both in terms of its future and its likelihood of addressing the needs you'll actually face; BEAM is a slow runtime, and many Erlang shops rely on C for important parts of their code).


I agree with most of your points. However:

> though personally, I prefer the JVM, which can do anything BEAM does, only better

In my experience, this couldn't be farther from the truth. Java's threading model is bloated and slow compared to Erlang's process model, and that process model - the ability to spin up processes so lightweight that they make even Java threads look heavy in comparison - is the key to its success when it comes to parallel and distributed computing.
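For a rough sense of that lightness (a sketch, not a benchmark), here's an illustrative Elixir snippet spawning a hundred thousand processes, each waiting on a message; the same number of OS-backed Java threads would exhaust typical defaults:

    # BEAM processes start with a heap of a few hundred words,
    # so spawning 100_000 of them is routine.
    parent = self()

    pids =
      for _ <- 1..100_000 do
        spawn(fn ->
          receive do
            :ping -> send(parent, :pong)
          end
        end)
      end

    Enum.each(pids, &send(&1, :ping))

    # Collect all replies.
    for _ <- pids do
      receive do
        :pong -> :ok
      end
    end

    IO.puts("all #{length(pids)} processes replied")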

Now, this isn't to say that a JVM implementation can't do the things BEAM can (otherwise, projects like Erjang wouldn't exist), but I've found Erlang applications with BEAM to perform far better in that domain. This isn't particularly surprising, seeing as BEAM was designed to run declarative, distributed, concurrent languages (like Erlang, and eventually Elixir and LFE and the like) right from the start, whereas the JVM, .NET CLR, etc. were originally designed for imperative languages (like Java and C#, respectively) and have been gradually adapted for functional/declarative programming (like with Scala or F#, respectively).

> no one has tried to actually use [Haskell] for anything big

Maybe because the equivalent Haskell codebase to a "big" project in most non-declarative languages ends up being "small" in comparison? Or are you talking about importance?


> the ability to spin up processes so lightweight that they make even Java threads look heavy in comparison - is the key to its success when it comes to parallel and distributed computing.

Except you can do the same in Java -- see Quasar or Erjang. The JVM is so powerful that true lightweight threads -- just like Erlang's -- can be added as a library.

> I've found Erlang applications with BEAM to perform far better in that domain.

I've found the exact opposite. Java with Quasar fibers handily beat Erlang code. The more actual work being done, the bigger the difference (BEAM is excellent at scheduling, but pretty terrible at running user code; it's notoriously slow, which is why all important Erlang library functions are implemented in C, and why heavyweight Erlang shops do a lot of C coding; when you do Erlang, it's often Erlang and C if performance is important).

> whereas the JVM, .NET CLR, etc. were originally designed for imperative languages

So was your machine, yet BEAM runs on it just fine. You can like the imperative style or not, but it's more general than the functional one when it comes to implementations on real hardware. BEAM runs on an imperative, shared-state machine, and creates a nice abstraction. You can do the same on the JVM without loss of generality, and, as it turns out, with a nice boost to performance (because HotSpot's JIT and GCs are state-of-the-art). The only advantage BEAM has over the JVM (or at least HotSpot) is a better level of isolation between processes (i.e. it's a bit harder for one process to impact the performance of another, because they have sort-of separate heaps). But again, BEAM is a fine, beautiful runtime that is perfectly suitable if you need good concurrency but aren't worried about processing speed.

> Maybe because the equivalent Haskell codebase to a "big" project in most non-declarative languages ends up being "small" in comparison? Or are you talking about importance?

Well, both. Haskell has never been used to write a large ERP, an airport management system, manufacturing automation, an air-traffic control system, device drivers, an OS, a database, a banking system, or a large social network -- take your pick. It may have been used to a small extent for some specific projects in banking, but that's about it. The only large, complex program that's "interesting" from a cost-estimation perspective ever written in Haskell is its own compiler (and possibly other compilers). Now, I may have missed one or two specific projects, but their rarity only demonstrates the problem. Go, a much younger language, is already more battle-tested than Haskell, and even Go is far from being truly battle-tested, so this says a lot.


> (BEAM is excellent at scheduling, but pretty terrible at running user code; it's notoriously slow, which is why all important Erlang library functions are implemented in C, and why heavyweight Erlang shops do a lot of C coding; when you do Erlang, it's often Erlang and C if performance is important).

That's really useful to know - I'd not heard about this before, only that Erlang was fast!


Erlang is actually quite slow; it just responds quickly and has excellent concurrency. The programming language shootout places Erlang HiPE (High-Performance Erlang) at 2x-22x slower than Java (http://benchmarksgame.alioth.debian.org/u64q/erlang.html), and sometimes faster and sometimes slower than Python (http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...).

That's why Erlang is often used as the application control plane, routing requests and the like, while actual data processing is done in C.
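To sketch what that split looks like from the Erlang/Elixir side, the usual glue is a port (or a NIF) that hands the heavy lifting to native code. Here's a minimal, illustrative Elixir example using a port; "cat" stands in for what would be a purpose-built C worker in a real system:

    # Open a port to an external OS process; a real control plane
    # would spawn a dedicated C binary instead of "cat".
    port = Port.open({:spawn, "cat"}, [:binary])

    # Hand the data off to the native side...
    send(port, {self(), {:command, "crunch this\n"}})

    # ...and receive the processed result back as an ordinary message.
    receive do
      {^port, {:data, result}} -> IO.puts("native side returned: #{result}")
    after
      1_000 -> IO.puts("no reply")
    end

    Port.close(port)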




