Lovely Week with Elixir (ramblingcode.dev)
262 points by sgadimbayli on May 20, 2020 | 132 comments



I learned Elixir a few weeks ago as a quarantine self-improvement project and pretty much loved it. There are some warts, as in any language. As someone who has primarily worked in Go and Java and has also been learning Rust, I don't super love the optional typing thing, and I end up missing higher-level constructs like interfaces and traits.

But there are some great language features, like guards and pattern matching, that are hard to give up when you go back to other languages.
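For the curious, a tiny invented example of the kind of guard-plus-pattern-match dispatch that's hard to give up (module and numbers are made up):

```elixir
defmodule Shipping do
  # Matches only when the map has a :weight key bound to a number under 30.
  def cost(%{weight: kg}) when is_number(kg) and kg < 30, do: kg * 2
  # Heavier parcels fall through to a flat rate.
  def cost(%{weight: kg}) when is_number(kg), do: 60
end

Shipping.cost(%{weight: 10})  # => 20
Shipping.cost(%{weight: 50})  # => 60
```

The shape and value checks live in the function heads, so the bodies stay trivial.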

Plus it's great to have OTP goodies like GenServer at your fingertips if you run into performance bottlenecks (which you may not!). The OTP APIs are a bit weird coming from other languages but not too bad.

Other things I've liked:

1. Ecto is simply the best DB library I've encountered in any language. I'd almost recommend learning Elixir just to be able to use Ecto.

2. Plug provides great HTTP ergonomics highly reminiscent of Go's context-based middleware approach. Having direct access to the full request/response lifecycle is a win.

3. Phoenix is nice because it's essentially just Plug with some convenient helpers on top. Strikes a really nice balance between configuration and convention by letting you use only what you need. Haven't tried LiveView as I'd prefer to handcraft my own JS but probably worth a shot.

4. Absinthe is the best GraphQL framework I've encountered after many others in other languages left me completely cold.


> 1. Ecto is simply the best DB library I've encountered in any language. I'd almost recommend learning Elixir just to be able to use Ecto.

It's interesting, people always talk about Phoenix, which is nice, but Ecto is really special. I often recommend it to anyone who is feeling burned by Active Record implementations.


I thought so too, at first, but now I'm really down on it. Ecto Changesets work fine for flat data, but when you have to deal with nested data, it becomes a nightmare, because you have to use these special functions to read and update the data. Doing that in a nested context just gets really clunky, especially coming from Clojure, where I would just do something trivial, like

    (assoc-in changeset [:children 1 :title] "New title")
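To be fair, plain Elixir data supports this style too, via the Access module; the friction described here is specific to changesets. A runnable sketch of the plain-map equivalent:

```elixir
# Elixir's analogue of Clojure's assoc-in, on ordinary maps and lists:
data = %{children: [%{title: "A"}, %{title: "B"}]}

put_in(data, [:children, Access.at(1), :title], "New title")
# => %{children: [%{title: "A"}, %{title: "New title"}]}
```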


Clojure sets an unfairly high bar for data manipulation, with core functions, transducers, and libraries like supdate and specter.


That is indeed a bit of a pain point. From what I understand, it's a bit intentional and the library authors want you to prefer flatter data structures. Now of course not all structures CAN be flat so you can hit a bit of an impedance mismatch. However, I think Ecto is amazing for more common (for me at least) cases.


Having worked for years with both Ecto and Ruby's ActiveRecord, I really prefer ActiveRecord. It's less flexible but so much easier to use. Ecto is somewhat closer to SQL and still suffers from the "I have to relearn how to do SQL in yet another language" syndrome, and it's a particularly difficult case of it: a lot of boilerplate to represent database tables, and a lot of non-obvious code for easy queries. Definitely the part of Elixir I like least, by a large margin. Ecto and the deployment system are what stop me from using Elixir in my personal projects.


I have a strong dislike of ActiveRecord because of its high abuse rate.

Beginners will almost certainly get off to a highly inefficient start with it.

There are other libraries in the Ruby world that deserve more praise, such as Sequel and ROM (which is also what Hanami uses).

With Rails having a monopoly and most gems depending on AR, it's hard to see where the Ruby world will go regarding ORMs.


I can definitely see where this criticism comes from. There is a lot of extra data description, and an abstraction that leans towards SQL.

However, that "boilerplate" makes data access and return values super clear. It also allows you to segregate Ecto queries by domain, meaning in a User context you may only have access to certain fields, whereas in a data processing context other fields may be available. This is great for coarse table access control, and you get it almost for free. It also allows you to very easily validate data at the edges of the system.

As for the query language, there's certainly a learning curve, but it's close enough to SQL that your first guess is usually correct, in my experience, and it removes a lot of foot guns, like n+1 queries, by default.
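As a rough sketch of that "no n+1 by default" point (the User/Post schemas and Repo module here are assumed, not from the thread): Ecto never lazy-loads associations, so loading is always an explicit decision in the query.

```elixir
import Ecto.Query

# One query for users, one batched query for their posts; forgetting the
# preload surfaces as an %Ecto.Association.NotLoaded{} value rather than
# a flood of hidden per-row queries.
query =
  from u in User,
    where: u.active == true,
    preload: [:posts]

Repo.all(query)
```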

TL;DR there's some learning curve and extra LOC to write, but you get a lot in return.


Changesets are beautiful. And they're great for general structure validation outside of databases as well.


Oh yeah, absolutely. They're great for building API clients.
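For instance, Ecto supports "schemaless" changesets that validate an arbitrary map with no database involved. A sketch, with invented field names:

```elixir
import Ecto.Changeset

types = %{name: :string, age: :integer}

{%{}, types}
|> cast(%{"name" => "Ada", "age" => "36"}, Map.keys(types))
|> validate_required([:name])
|> validate_number(:age, greater_than: 0)
```

You get casting, validation, and structured errors at the edge of the system, exactly as you would for a database-backed schema.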


Check out the Elixir "protocol" feature, it's similar to interfaces: https://elixir-lang.org/getting-started/protocols.html
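A minimal example of a protocol with two implementations (the names are illustrative):

```elixir
defprotocol Size do
  @doc "Returns the number of elements in a data structure."
  def size(data)
end

defimpl Size, for: Map do
  def size(map), do: map_size(map)
end

defimpl Size, for: Tuple do
  def size(tuple), do: tuple_size(tuple)
end

Size.size(%{a: 1})   # => 1
Size.size({:ok, 2})  # => 2
```

Dispatch happens on the data type of the first argument, much like an interface method.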


Protocols + the Typeclass library [0] makes for some of the most extensible Elixir code I've worked with.

[0] https://github.com/witchcrafters/type_class


Hi Elixir friend! Have you done a blog post on this by chance? I'd be really interested in reading one.


I haven't, but the creator of the library (Brooklyn Zelenka / @expede) has given a few excellent talks that reference her libraries. This is a personal favourite: https://www.youtube.com/watch?v=JD0FJTmuV_Q


I've used protocols, and they are useful as a kind of bare minimum interface/implementation abstraction. But they're nowhere even close to something like Rust traits. Which is totally fine given Elixir's goals.


I'm pretty deep into Elixir and have been learning a bit of Rust on the side this year, so this is interesting. Could you elaborate a bit?


I was just about to say, protocols fill the interface slot pretty well


Interesting. I really like Python and SQLAlchemy. Changing objects in the database via changesets and applying them on top of each other feels kind of alien when I can't do object.property = something.

The thing that confuses me most about Phoenix and Elixir is how do I actually deploy the application to production? I have my own server where I can run whatever I want (running Ubuntu), so what do I have to do to get my Phoenix application working there? How do I get my code running as a systemd service?


Doing object.property = something is, I think, a mental anti-pattern, because it engenders uncertainty about which source of information holds the truth (local object or database), and now you have a needless (mental) distributed-state problem. At least in a system with bare structs and changesets, especially in an immutable language, there's no ambiguity: your data is stale the minute you drive it off the dealership lot, and your data synchronization events must pass through a single explicit point of entry. Explicit is better than implicit, after all.

To answer your second question: it's not entirely easy to answer, but I personally use the mix release command, ship the result to S3 as a tar.gz, pull it down onto the server, unzip it, and wire the generated "start" entry point (there are instructions when you run mix release) into a systemd service, and it's good to go. Couldn't be easier. It takes us several Ansible scripts, and annoying wrangling with virtualenvs when something goes wrong, to create a new Ubuntu Django server from scratch, and literally two commands (one of which is a 5-line shell script) to get Elixir up and running.
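For reference, a unit file for such a release might look roughly like this (every path, user, and app name below is a placeholder):

```ini
[Unit]
Description=My Phoenix app
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/opt/my_app
ExecStart=/opt/my_app/bin/my_app start
ExecStop=/opt/my_app/bin/my_app stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The `bin/my_app start` / `stop` commands are the entry points that `mix release` generates for you.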


1. It's a functional thing; it's quite nice once you've used it for a bit, and really powerful because you can use it for any data, not just when working with a DB.

2. There are several ways. You can use releases [1]; copy the code to the box, run mix release, then ./app start, and point whatever you want to that process. If you're running phoenix you could always compile it on the box and just run mix phx.server. I wouldn't do that though. You could also use a tool like distillery[2] which is kind of like releases. There is another more esoteric option where you build locally without the runtime, but I wouldn't recommend that.

[1] https://elixir-lang.org/getting-started/mix-otp/config-and-r...

[2] https://hexdocs.pm/distillery/tooling/cli.html


+1 for distillery, very convenient...

https://github.com/bitwalker/distillery


Once `mix release` was added natively to Elixir in 1.9, I haven't found the need to use distillery anymore. Is there still a use case?


Distillery has two strong use cases: it has config providers, which allow for a lot of config flexibility, and it supports hot code upgrades.


I like to deploy using Releases as detailed here (https://hexdocs.pm/phoenix/releases.html).

This gives you a folder packaged with everything you need to run the application on the architecture you compiled it on. Very straightforward.

If you want to set it up to be a systemd service I guess it would be like any other binary command for systemd. I usually run things under something like Supervisor (http://supervisord.org/)


We use distillery in a project I've been working on for a few years. I know other people that build a docker container and deploy it. Weird for the OTP ecosystem but it works.

By the way how do you deploy Python code? There isn't a standard AFAIK.

Finally, check how systemd handles rabbitmq. It's an Erlang application and you could run and stop your application in the same way.



I would strongly caution anyone new to the language against using edeliver. There are a lot of gotchas and edge cases with the scripts that can really bite you. It's super cool if it just works, but miserable if something breaks.


> I don't super love the optional typing thing

It might not be a plus if you're writing a web-app, or some other kind of easily-restarted "shared-nothing business layer" system. But get deep-enough into absorbing the Zen of Erlang (i.e. by trying to write a big-deal telecom system; or by working on a distributable Erlang DBMS like CouchDB), and you'll come to appreciate it. Specifically, you'll realize that it'd actually be impossible for the Erlang runtime system to support hot-code-reload plus automatic distribution plus static typing, all at once. One of them has to give. (I think this is one of those universal "choose only two properties" triangles, like CAP, though I don't think this one has a name.)

If you have a "living" distributed system—one that you don't bring down entirely for each upgrade—then inevitably you'll hit a situation where a node with the fresh new version of a module (V3, let's say), tries to use that module to send a V3 message to a node that is still only running V2 of the module. In a distributed system with static typing, that'll inevitably crash the V2 node—unless you did a whole extra "V2.5" rollout step to first teach V2 about the wire-format of V3 and how to handle it.

Much of the point of Erlang/Elixir's user-facing "data architecture" design—e.g. using partial destructuring pattern-matching (unwrapping tuples one layer at a time) instead of uniquely tagging messages with message-type UUIDs like COM/CORBA—is to "automatically" cope with this, allowing your "living" distributed system to interoperate in the face of cross-node module-version heterogeneity, without having to write explicitly backcompat "also recognize the previous/next version of the message" code during the rollout period.

Every Erlang term sent in a message is, in some sense, like a little extensible file-format; and the tools you're given to "parse" the term—when you use them idiomatically—give you forward-compatible "parsing." As long as you don't validate that a term has a specific "deep" structure corresponding to some static type, then it doesn't matter if your module-V2 actor has actually received—and is now holding onto—a module-V3 parameter in its state. It won't know what to do with it, but nor will that cause any problems as long as it doesn't poke too hard at it. It can even keep hold of it, storing it away for later in its state generically, without understanding exactly what it's received. It'll be there in the state for when the module upgrades to V3, and suddenly cares about that property.
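A toy illustration of that partial destructuring (the module and message shape are invented): the handler matches only the layers it understands, and fields added by a newer module version ride along untouched.

```elixir
defmodule Handler do
  # "V2" code: it only cares about the :user tag and the id. A "V3" payload
  # with extra fields still matches, and the whole payload can be kept in
  # state, uninterpreted, for when this node upgrades.
  def handle({:user, %{id: id} = payload}) do
    {:ok, id, payload}
  end
end

# A message from a newer node, carrying a field V2 never defined:
Handler.handle({:user, %{id: 7, avatar_url: "https://example.com/a.png"}})
# => {:ok, 7, %{id: 7, avatar_url: "https://example.com/a.png"}}
```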

(OTP's supervision system also comes into play here, ensuring that any unhandled edge-cases of this term "parsing" just become temporary restarts of the leaf-actors that receive them, without affecting the stability of the node as a whole. The network upgrade completes; the affected actors come back up running the new module version; the system continues.)


That’s all true, but I think a Typescript-like system would work really well here. Typescript is very tolerant of the data being different than the type definition, and it doesn’t know or care if your data actually matches the type definition at runtime.

It would be up to the developer to account for different versions of the data when writing out type definitions, and the worst case is that you try to access a property that doesn’t exist or is the wrong type, and it crashes and causes the process to get restarted like what already happens in Erlang.

So it wouldn’t be a guarantee of correctness, just like Typescript isn’t, but it could still offer a lot of safety, assuming you get the type definitions right.


I mean, that's already what Erlang/Elixir has, in the form of Dialyzer, an offline "type linter" (which is what the typespecs in the article feed into).

It's static typing that Erlang can't complete-the-triangle on; not typing generally.

The only problem is that "offline" type-checking like this does nothing to solve one of the main use-cases/pain-points where Erlangers want types (or, at least, think they want types): in the messages that actors receive. You can't make any sort of a type assertion about what other actors in the system are allowed to send this actor, and get that validated; because "other actors" in a distributed system necessarily include ones that aren't even part of your present codebase!

I have a philosophy about this—not sure where I picked it up, but I think it hews to the Zen of Erlang quite well:

If you already know the type of a message, then by definition, it's not a message any more, but just a regular data value. A message is an OOP concept (and Erlang is an OOP language, where processes are the "objects.") An OOP "message" is a piece of data the meaning of which is up to the interpretation of the recipient; where that interpretation can change as the recipient's internal state changes. The whole point of the "receiving a message" code that you write in an Erlang actor-process, is to allow you to do custom logic for that interpreting part. To use the value itself, in making the decision of what the value is.

In fact, I would extend that: the whole point of Erlang as a language is to do that "interpreting" part. Once you know what something is and have put it into a canonical validated structure, you may as well hand it off to native code (using e.g. https://github.com/rusterlium/rustler). If you think of native code as being a pure domain of strongly-typed values, then picture Erlang as the glue that lives in the ugly world of "not yet typed" values, making decisions under partial-information conditions on what types to try to conform received messages into, before they can enter that pure strongly-typed domain. That's Erlang's idiomatic use-case! (You can tell, because using it for that produces absolutely beautiful code; whereas using it to do e.g. crypto math, produces an abomination.)

Which is all to say: the interpretation (or, if you like, constraint) of a message into a typed value is a Turing-complete operation; and the logic for doing so is best represented as an Erlang program. Erlang doesn't need a type system for messages; Erlang is a type system for messages. :)


It’s a bit of a stretch, but your thoughts reminded me of one of Joe Armstrong’s interests, how to document/prove two components were adhering to the protocol they were using to communicate (he was describing the problems of interconnecting two hardware components in the informal talk I attended, but obviously he had a significant interest in software as well).

I miss Joe quite a bit. Such a keen mind with seemingly infinite curiosity.


> A message is an OOP concept (and Erlang is an OOP language, where processes are the "objects.")

Yes. If only GenServer had an OOP syntax instead of a handle-this/handle-that list of functions, which obfuscates what the process actually does. Elixir in particular lost a chance to make it easier to deal with.


It's not too late. You can write your own wrapper around the :gen module, and if it's really good maybe people will adopt it.

I will say one thing: if you are thinking of BEAM processes as actors/(Kay objects), you're missing the real meat of what makes BEAM processes special; what they really are is atomic units of failure domains. The other stuff is just a useful analogy that lets people grok the code structure by comparing it to something familiar. The trouble is that people then take habits from OO and apply them to BEAM, where blindly copying the organizational form leads to performance regressions. If you're treating BEAM processes like Python or Java or Ruby objects, you're going to bottleneck your system and turn it into a needlessly complex mess.

In short, not all function calls (cheap) should be message passing (expensive). Sorry Alan Kay.


We have very few GenServers in our system, with supervisors to keep them alive and read the initial state back from the DB in case of failure. They're not objects in the Ruby or Python or Java way; they really are servers that take care of a specific action. The vast majority of the code is function calls in the main process of the system. Still, I'd like to be able to program a GenServer in a more understandable way.


Yes. GenServer is quite frankly a mess. I just want to discourage pushing that object metaphor. Have you seen the ornery Dave Thomas video where he calls a GenServer a dog's breakfast?

https://youtu.be/6U7cLUygMeI


You’ve just summarized my thoughts on Elixir after using it(and still loving it) for years. Thanks :)


I've been meaning to do this for so long and I've started at least twice but other work or life have gotten in the way. And I haven't gotten any "quarantine-time" as my industry has been busy!

It's good to hear this, though. I might have to try and re-prioritize learning it again.

Do you have any good resources on hand that you used and would recommend?


Actually, I think the one place where the Elixir community is relatively lacking is online long-form tutorials. There is some fantastic content on YouTube; I quite like plangora for intro stuff and knowthen for more advanced content.


The BEAM is a huge win: having lightweight threads means you can often do away with things like Redis for job queues and PubSub stuff. I love this answer on StackOverflow by Elixir's creator:

https://stackoverflow.com/questions/32085258/how-can-i-sched...

So simple. Something that would require a job queue and a job runner fades away into a piece of the OTP application tree. When it crashes, it will even come right back up!
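The pattern in that answer boils down to a GenServer that reschedules itself with Process.send_after (the interval and the work itself are placeholders here):

```elixir
defmodule Periodic do
  use GenServer

  @interval :timer.hours(1)

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  def init(_opts) do
    schedule_work()
    {:ok, %{}}
  end

  def handle_info(:work, state) do
    # do the periodic work here, then reschedule the next run
    schedule_work()
    {:noreply, state}
  end

  defp schedule_work, do: Process.send_after(self(), :work, @interval)
end
```

Dropped under a supervisor, it restarts (and reschedules itself) automatically after a crash.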

Phoenix feels a little too heavyweight for really small projects—maybe I'm spoiled having used Mojolicious's [0] single-file web servers. (Example on the linked page.) But for anything slightly larger, Phoenix scales really well. I work on a decently-large application in Phoenix for work and it's been an absolute joy to work with this language.

Typing could be better. Though, Dialyzer does a decent job of catching type errors. That's saved my neck on more than one occasion.

[0]: https://mojolicious.org/


I don't really understand this sentiment of not needing a queue system. This is not much different from spawning a thread in Java to delay the email sending.

For any serious application you want a job like that to be persisted so you can guarantee it runs even if your application is restarted.

I know that Erlang/Elixir is designed for stateful applications and if you have a cluster and do hot deploys this is less of a problem, but who does that? Most Elixir applications I've seen are deployed as stateless systems just like any Ruby, Python, Node etc systems.


Slightly offtopic, but consider a pattern of persisting the state machine of a given task instead of the task queue itself. For example, for a daily email job, you might have three states: UNSENT, BEGIN(begin_time), SENT. Then you can just have a supervisor job that scans your table or whatever and schedules jobs, retries, etc. as needed. Now you don't have to worry about the queue state at all!


Having a queue means you have a distributed system. How do you handle network problems, errors/retries, back pressure? OTP has excellent idiomatic tools for all that and more.


Elixir developers who need a queue generally reach for Rabbit (which any language can use), or something backed in a database like rihanna[1] or honeydew[2]. Rolling your own distributed system is very much a last resort, and despite its excellent concurrency characteristics the BEAM still lacks basics such as a battle-tested raft implementation.

[1] https://github.com/samsondav/rihanna

[2] https://github.com/koudelka/honeydew


> and despite its excellent concurrency characteristics the BEAM still lacks basics such as a battle-tested raft implementation.

A couple of years ago the RabbitMQ team published a Raft library[1], which they use in their implementation of persistent queues. It has a flexible API, and implementing your own state machines is quite straightforward, as it follows the OTP gen_* behaviour paradigm.

And by now, I'd say it's pretty well battle-tested.

[1] https://github.com/rabbitmq/ra


I didn’t know about that - thank you for pointing me at it!


Wait, how does that answer prevent only one instance of the job across the whole system? That's what I'd use Redis or something for - for locking.


If there is only one process running this GenServer, there will only be one instance of the job in the system.

A BEAM process, and therefore a GenServer, has an atomic message queue built in. Only the process itself can pop messages off its queue, so there is no need for locking. It is not possible to get messages out of the queue concurrently. Messages can be processed concurrently, but only after they've already been removed from the queue.


What if you have 100 servers running the same application code and you just want to run this job, one time, for a customer? Usually I'd use a Cron with a distributed lock, which is served via Redis.

Just curious as to how the BEAM solves it as I imagine it can.


You can use tricks like the :global spawn option to make processes globally unique. It doesn't scale infinitely, but for an HA/load-balanced system of two to low-double-digit nodes it should be fine.


Oh wow, so this will work across multiple instances of the BEAM?


If they are clustered using erlang clustering, yes. https://erlang.org/doc/man/global.html

Note that this works by taking a transaction lock on the entire cluster, so it's not recommended if you're making requests frequently. It also doesn't respect netsplits, unless you instruct in code to fail with a deficient cluster membership.
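A single-node sketch of that :global registration trick (module name invented); on a connected cluster the same name is unique across all nodes:

```elixir
defmodule Singleton do
  use GenServer

  # Registering under {:global, name} makes the process cluster-unique.
  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: {:global, __MODULE__})
  end

  def init(state), do: {:ok, state}
end

{:ok, pid} = Singleton.start_link([])
# A second start, on this node or any connected node, is rejected:
{:error, {:already_started, ^pid}} = Singleton.start_link([])
```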


How is that stackoverflow answer good?

You lose everything about the scheduled jobs when your app crashes.

This doesn't even address the rampant memory leak issues that plague long-running GenServers.


Some jobs don’t need persistence, or might have everything about them stored in a Postgres DB. Examples: I have a project where I render lots of little PDFs as previews for a LaTeX editor. My use case means it’s good to keep them around for a short while; after an hour or so I want to delete them all so they don’t fill up my disk. Perfect case for this. The question mentions rebuilding a site map—that sounds like it doesn’t need to persist much state. Finally, at my work we have orders that sit in a Postgres DB that we regularly check to see if any are stuck. We do use the Quantum library for that, but a solution like this could theoretically work.

> Rampant memory leak issues

As long as you’re not sending this genserver bogus messages, I don’t see how memory would leak in this example. Erlang applications are known to run uninterrupted for years at a time, so I figure this is a solvable problem.

When you do need persistence for more robust job scheduling, yes, having a real queue system in place is much better. You are correct there. I think the answer is good because it is so small and fits these simple use cases—the fact that you can handle those cases without needing something heavier is pretty awesome.


Elixir is decent and I've worked with it a fair amount in production systems. Mostly Rubyists seem to really click with it, and Ruby idioms are all over it; you can taste its history and proximity to Ruby's ecosystem. As a Scala dev who ended up working with Elixir for a couple of years, my opinion is that a type-safe Elixir-like language would really bring BEAM back into the mainstream. Akka is alright but it's shoehorned onto the JVM. BEAM is good as long as you don't need to do heavy computation, but the lack of type safety (need to use Dialyzer?) means that shit breaks in prod that a compiler would have caught. And yes, you can mitigate this with boatloads of testing and data validations with Ecto or whatever. But every time we broke something a compiler would have caught, I cringed.

It's a great path for rubyists to move to Elixir/BEAM and every rubyist should give it a whirl! I'm back working on scala and akka.


I want to mention Gleam [0] as a project to watch out for. It's still very early, but it's evolving quickly, and has a big focus on providing ergonomic tooling.

It's an ML inspired, statically typed language that compiles down to Erlang, and supports interop with the existing ecosystem. This means that you get access to ADTs, type inference, etc, while still being able to lean on OTP for your concurrency primitives. There's also examples of calling it from Elixir, so there's the option of falling back to statically typed Gleam for an especially gnarly piece of code, and calling it from your Elixir application [1]. I wouldn't necessarily recommend this for commercial apps yet, but Gleam today is about as usable as early-Elm was, in my opinion.

The project is also very welcoming to new contributors, and Louis (the language's creator) does a great job of curating a list of beginner friendly issues to tackle in the compiler. I've been spending my evenings learning Rust by adding onto the language, and it's been a ton of fun. If you want to help out, there's a fairly active IRC channel on Freenode, in #gleam-lang :)

[0] https://gleam.run

[1] https://dev.to/contact-stack/mixing-gleam-elixir-3fe3


I’m rooting for Gleam as well. I do wish it adopted the syntax styling of Ruby/Elixir/Crystal though. Either way, it is very compelling.


The older I get the grumpier runtime errors make me.

I want ReasonML (language!) and Erlang (OTP!) to have a baby, and I want it birthed by the Go runtime. (Go? Yeah, Go. I don't love the language, but I am a lover of low latency and garbage collection, what can I say?) Yes, there's Gleam, but if something's based on BEAM, the throughput generally won't impress. :-( Would seem a shame to do all that static typing, and then not reap the speed benefits.

Relatedly, I think there's a sweet spot for a language that accepts mutability inside of actors, but only allows immutable objects to be sent as messages, with an escape hatch available if needed. (Pony explored this space, would love to see it evolve.) Combine that with OTP for happy-path programming, and an ML so you catch most of your errors at compile time, and you could end up with great throughput, low latency and great ergonomics, all at the same time.


> but if something's based on BEAM, the throughput generally won't impress

I'm not sure where you're getting that from; that's typically an area where it does well. It's bad at number crunching, but if the work is IO bound (say, a web application backend) it offers consistently low latency with high throughput.


We're defining throughput differently. I'm talking about CPU utilization, i.e. non-IO-bounded work. Sorry for the ambiguity.


I would be very surprised that Go had lower latency than BEAM based languages, higher throughput/better CPU utilisation OK, lower latency??

Do you have a benchmark showing this?

As for Pony, in theory it should be great but it looks very complex..


Not sure where I implied Go would beat BEAM on latency, didn't intend to. On the contrary, BEAM's had decades poured into keeping the long tail under control, while with Go it's a work in progress.

I singled out Go because it's the only reasonably mainstream multithreaded (need this for actors), statically typed (need this for CPU throughput), garbage collected (need this for ergonomics) language out there with an emphasis on keeping latency under control, and as such would be the only sensible target I know of to host the language I proposed.


I'll say one thing. I accidentally forkbombed my running Elixir system in prod (a failed call to an error-reporting service triggered two more error reports, and the error-reporting service 500'd during an outage), and it kept servicing user requests without much of a sweat.


I have assumed BEAM has similar latency to Go and is garbage collected?

Here are some benchmarks where Erlang beats Go in throughput:

https://timyang.net/programming/c-erlang-java-performance/

https://stressgrid.com/blog/benchmarking_go_vs_node_vs_elixi...


That first link is from 2009. A lot's changed since then, so I didn't read it.

And maybe I missed something in the second link, but Go showed very similar I/O performance to Elixir, while consuming a boatload less CPU doing it. That's what I'm after. Open to being told I missed something, though.


> Open to being told I missed something, though.

The BEAM keeps using the CPU even when there is no work to be done, to avoid context switches. So we can't compare CPU values directly.


> We can't compare about CPU values

Can you explain more why we can't compare? Looking at this chart...

https://stressgrid.com/blog/benchmarking_go_vs_node_vs_elixi...

...shows pretty clearly how the CPU utilization grows linearly (ignoring some sawtooth) as the load increases, plateaus once the load remains constant, and then comes down linearly as the load decreases on the other end. Looks like a very clear mapping between work done and CPU load to me.


It is explained in the blog post after that benchmark - https://stressgrid.com/blog/beam_cpu_usage/ .

Essentially, in order to optimise responsiveness, the BEAM uses busy waiting, so it is not actually utilising the CPU as much as the operating system reports, and the reported CPU usage is misleading.


"I think there's a sweet spot for a language that accepts mutability inside of actors, but only allows immutable objects to be sent as messages, with an escape hatch available if needed."

That's kind of what akka is on scala or java. Messages are immutable _BY CONVENTION_ but you can do whatever you want.


Isn't that the basis/point of the actor model? Actors can message each other and processing the message can trigger state mutation of the recipient, but they can't directly mutate each other.


All data is immutable on the BEAM with a few exceptions, so no mutation within actors.


No, that's not true. Actors in Elixir/Erlang mutate their state by passing the updated state through tail-recursive calls to the loop function.
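A bare process (no GenServer) makes that loop explicit; this is a toy example:

```elixir
defmodule Counter do
  def loop(count) do
    receive do
      :incr ->
        # The "mutated" state is simply the next call's argument.
        loop(count + 1)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end

pid = spawn(Counter, :loop, [0])
send(pid, :incr)
send(pid, {:get, self()})

receive do
  {:count, n} -> n  # n == 1
end
```

No binding is ever rebound; each iteration of the loop is a fresh call with fresh immutable data.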


I said data is immutable on the BEAM, actors can update their state.


"I think there's a sweet spot for a language that accepts mutability inside of actors, but only allows immutable objects to be sent as messages"

Absolutely agree. I had great hopes an "Actor" type would be created in Swift, where every public function would be thread-safe, accept only "pure" structs / value objects, and which would automatically run on its own coroutine.

Unfortunately the concurrency story seems completely abandoned for this language.


We may get our wish when multicore finally drops for OCaml.


The Kotlin KTOR framework is the most promising new web stack I’ve seen. It doesn’t have the higher level runtime features of Erlang but it checks most of your other boxes.


I like Kotlin but I'd say a strength of BEAM is that it doesn't allow for infinite loops, which means coroutines can't block others. This is a fundamental strength.

It allows the runtime to schedule coroutines effectively - they can't block for more than a function call (recursion is how you do "infinite" loops).

I think a future competitor to BEAM languages would need this feature.


This is the kind of thing I was referring to by higher level runtime features in Erlang. If you really need this sort of thing then you should probably be looking at an Erlang stack but I think for a lot of projects a less exotic and also much more rigorously typed language is going to be more productive.


What do you mean "more productive"? You'll get your code out to prod way faster in a BEAM language, and in my experience the only remaining errors are relatively minor and easy to "wait to fix", because the BEAM will keep on keeping on and there's no user-facing effect (maybe your error logs are a bit polluted with them). Whole classes of errors are not even possible because message passing copies data instead of sharing it. I recently fixed a code bug that tripped during a race condition entangled with a blocking call across two datacenters 1000 miles apart in about one hour, because you can introspect literally everything in the VM with very little hassle, and IO writes are atomic (if you call an IO write to screen it will never be interrupted by another IO write to screen).

I call that productivity.


As someone that uses elixir for webapps and ocaml/reasonml for frontend work, I hope that https://gleam.run gets traction, seems like a nice BEAM language with types that is evolving quickly.


I would argue that if you're looking for a simpler way to build CRUD apps, just skip the middleman and use PostgREST. I've been playing with Elixir and Phoenix, and while they're pretty great I like PostgREST much better. You eliminate an (in most cases) unnecessary abstraction, and backing onto the grown-up PostgreSQL authorization and authentication is invaluable for securing your app.

For non webapp stuff I've been happy using golang.


I'd love to hear your opinion on liveview if you've played with it at all. The two ways forward for the industry I see are something like hasura/postgrest with heavy js in the frontend, or something like liveview/blazor. I'd expect the more decoupled approach to win long term, but demos like this: https://github.com/moomerman/flappy-phoenix are extremely impressive.


I commented elsewhere, but I made a quick quirky game with full source code using LiveView.

[0]: https://hn.lddstudios.com/

[1]: https://github.com/ldd/hn_comments_game/

Not to share my views or anything, and I have nothing to sell about it, but I thought I would share it with people anyways


This 100%.

Dialyzer helps a bit but is difficult to work with due to really cryptic errors. Also the workflow of having the typechecker run as a separate process not part of the compiler feels really cumbersome; it's easy not to notice you have a type error somewhere (often miles away from where the error originates!).


Elixir and Phoenix are living proof that functional programming can be easy - most of the time you just play with structs and functions, and that's basically it. No need to pay attention to monads and type classes.

On the other hand, it also shows that functional programming is more straightforward than object-oriented programming: it's simpler to model just the data than to model the data and the procedures at the same time. If you have two people writing the same thing, there's a much larger chance they'll yield similar or identical results.
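The "structs and functions" style can be sketched in a few lines (module and field names here are invented): data is a plain struct, behaviour is free functions that pattern-match on it, and "changing" something returns a new struct.

```elixir
defmodule Invoice do
  defstruct total: 0, status: :open

  # Paying an open invoice returns a new struct; the original is untouched.
  def pay(%Invoice{status: :open} = invoice), do: %{invoice | status: :paid}
  # Paying an already-paid invoice is a no-op.
  def pay(%Invoice{status: :paid} = invoice), do: invoice
end
```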


I love Elixir. Phoenix makes web development a pleasure and LiveView is even more exciting.

I would love to find a job doing it.


Dude, LiveView is wild. It feels kind of magical, and I think people are gonna start adopting slowly, then all at once.

There's an awesome blog post on their website where Chris McCord, creator of Phoenix, builds a real-time Twitter clone in 15 minutes.

https://www.phoenixframework.org/blog/build-a-real-time-twit...


God I hope so. I loved Erlang and OTP when I tried it back in the late 00s and it never really caught on, despite many of its strong points being what most websites would look for.

Hopefully with the focus on the web and the fresh coat of paint Elixir has given the ecosystem it will eventually gain more adoption. But I still have my doubts because despite looking more mainstream than Erlang, it's still kind of a "funny language".


Yeah, it does have that stigma. It certainly did for me; "Why the hell are there colons before these variables..?"

It's funny how much the first languages you learn (for lots of people, Java/C/JS) have such an impact on what "feels weird," when every programming language is basically magic anyway.


Well, Ruby and Julia (and Scheme, I think?) all have those same colons too, so it's not totally crazy. It's the post-colon form that's strange (but Ruby has that too)


In a JSON world post colons are normal :-) and we've been living there for ages. Ruby added x: y as a short form for :x => y


I've been watching Elixir/Phoenix/LiveView from the sidelines, reading just enough to salivate a little but still left with the feeling that my use-cases wouldn't be adequately covered by it. Which kinds of SPAs wouldn't work well in LiveView? I'm working on an app in Ember right now where a lot of data is fetched by the client asynchronously and then either shown once it's fetched or only when the user pushes a button. How would the latter be possible with LiveView? Can you push data to the client that's only shown on client-side interaction? Do I still need a client->server->client roundtrip to toggle a "visible" flag to show said data? I just don't understand how "smart" the client side of LiveView is.


The SPAs that won't work are the ones that need to keep state while offline. Everything else works pretty fantastically, and arguably better than most SPAs, which need to do heavyweight XHR JSON or gRPC calls; though if you can keep a persistent gRPC connection over a WebSocket you might be competitive performance-wise.


I'm honestly not sure. I would ask on the Phoenix forums [1] -- they're pretty active, including a lot of responses from Phoenix's creator, Chris McCord.

[1] https://elixirforum.com/c/phoenix-forum


The biggest "homerun" case for LiveView is all of the forms where you'd typically do server side rendering then add a bit of JavaScript for the few dynamic pieces you need. Think about what you might typically see in a Ruby on Rails app. Lots of Ruby generating the views, a bit of JavaScript for client-side whatever.

LiveView can be used for way more than that, but I think replacing "glue" JavaScript is the most slam-dunk case.


I'd love to hear your opinion on this https://news.ycombinator.com/item?id=23252862 if you have any experience.


Controversial opinion: the "magic" is why I'm staying away. It's the same reason I stopped coding in Ruby/Rails: too much magic.

But I'm also a full-stack developer and I'm not afraid to write JavaScript. I realize not everyone is on the same boat, and LiveView might cater to them.


LiveView is a nightmare from a deployment point of view. LiveView stores state on the server associated with the web socket connection. If the web socket connection disappears which is what will happen during a deployment then the user's state will disappear.

The value of LiveView is not having to write javascript, but the only way to preserve any non-trivial state in LiveView when the web socket disappears is to write javascript. It is only recently that LiveView added support to recover form input data automatically on disconnects or crash recovery: https://github.com/phoenixframework/phoenix_live_view/blob/m... Prior to 0.7.0, if the web socket died while you were typing values into a form, the page would reload and all your input would be wiped out.


You are right that state is transient but JavaScript is not the only option. For transitions and screens, you can use live navigation. For forms there is automatic recovery (as you mentioned). Another option is to persist the state on the server.

For example, go back to the Twitter timeline example. Imagine that you want to do a banner that says "N tweets unread - click here to expand them". You could keep this counter in the LiveView but if you reconnect the counter will be lost. But you can also solve this by storing a pointer to the last shown tweet on the DB. Then if you reconnect, you can compute how many tweets are pending.

If you are doing a multi-step wizard you can persist each step on the DB too, etc. Depending on the use case, this could even yield further benefits, such as automatic synchronization between devices. I.e. if you read some tweets on your phone, you can automatically bump the "last shown index" on all devices. Or start the wizard in one place and finish it elsewhere.
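The "store a pointer on the server" idea reduces to something like this (pure functions standing in for the DB queries; module and names are invented):

```elixir
defmodule Timeline do
  # How many tweets arrived after the one the user last saw?
  # On reconnect the LiveView recomputes this instead of relying on a
  # counter held in (transient) process state.
  def unread_count(tweet_ids, last_seen_id) do
    Enum.count(tweet_ids, &(&1 > last_seen_id))
  end

  # When the user clicks "expand", bump the pointer to the newest id
  # (in a real app, persist this to the DB).
  def mark_seen(tweet_ids) do
    Enum.max(tweet_ids)
  end
end
```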

That's the point of LiveView, you can build interactive applications that are more server-centric. Some of those features are new but that should be no surprise, it is still pre-1.0 software. Even as we add new features, you may still have to (or want to) rely on JS on many scenarios, and that's totally fine.


My team at Synopsys is hiring. We were recently acquired by them, and we build dynamic security analysis tools in Elixir, and make heavy use of Phoenix and GraphQL through Absinthe. We're also open to remote work, and have offices in most large cities.

The job posting is a little bit generic, but it's specifically for my team: https://sjobs.brassring.com/TGnewUI/Search/Home/Home?partner...

Feel free to reach out to me (my website is in my profile) if you want more info!


LiveView will take some time to really get general acceptance and will probably always be somewhat controversial. Both for server roundtrips and because it is so oddly specific and doesn't have a comparable solution in most other languages. But it is fantastic where it fits, truly.


https://news.ycombinator.com/item?id=20383814 exists, so I'd expect it to catch on in other languages if it really is a killer app. I don't know how well such a thing would work without OTP though.


Yeah if you have a minor miscode/footgun in blazor you're really not gonna have a good time. If you have a minor miscode in liveview, OTP's got your back.


Kamanahealth.com is hiring, remote-first, and doing a lot to help combat covid. I don't work there, just got an email from a friend noting that they are looking for Elixir devs. https://www.notion.so/Kamana-is-Hiring-644e0dd0eb2d433daf7a7...


If you're interested in using Elixir, jump right into Phoenix with LiveView. It is fantastic for prototyping new tools. Once you learn the basics of LiveView, it's ridiculously easy to create the web UI that interacts with your Elixir back-end.

For a while I was using Phoenix<>Elm as my stack and I enjoy programming in Elm. But there's a lot of boilerplate you have to tediously connect for every input and output on both sides of the server and client.

LiveView eliminates all that boilerplate by letting you write templates in Elixir; the server tracks changes and pushes minimal diffs to the client over a WebSocket. Getting rid of that boilerplate eliminates the time spent writing it as well as the time spent debugging the extra complexity it introduces.


I really hope there's a book covering LiveView in more depth. I'm a bit of a noob at programming and mostly learn from books for the first few steps. A lot of my coworkers tell me to learn from the docs, as they seem more complete, but I find that a steeper climb compared to a book that holds your hand through each concept. That's how I learnt Go web programming about 8 months ago.


Have a look at the free Phoenix LiveView course by Pragmatic Studio: https://pragmaticstudio.com/phoenix-liveview (I'm not affiliated with them).

The first six of eventually 15 lessons are already available and they really helped me grasp the basics. I've already implemented a few small tools using LiveView - it's a real breath of fresh air!


Started a little CRUD-app project in Phoenix (Phoenix is to Elixir what Rails is to Ruby), and my expectations have been totally surpassed. It's pretty amazing.

Working with Elixir is really cool, too, coming from C++ & JS backgrounds. Pattern matching feels like a programming technique from the future.


Another person who liked Elixir, cool :)

Btw, if anyone is interested, I'm working on a Prettier plugin for HTML (l)eex files. Probably gonna publish it next week.


That sounds lovely. Thanks for building that! Followed you on twitter, pls let me know when it's live.


I wrote two blog posts before on Elixir:

The first piece is longer, on my journey learning Elixir and building machine learning libraries.

https://fredwu.me/blog/2016-07-24-i-accidentally-some-machin...

The second piece is shorter, on Elixir’s functional aspect and how doctest changes the way I code.

https://fredwu.me/blog/2017-08-07-elixir-and-doctest-help-wr...


The only downside I've found with Elixir is that because of the language and runtime internals, it doesn't really lend itself to power serverless/lambda functions (like Python, Ruby, Go, etc).


I had an excellent week with Elixir building a really silly game:

https://hn.lddstudios.com/

MIT Licensed. Source:

https://github.com/ldd/hn_comments_game


Awesome! If you are up for reviewing each other's code, ping me!


Welcome to the λ-side! We've been using Elixir and loving it for 3+ years.


I really like Elixir and Phoenix. I just wish I had more reasons to use them!


I built the API for a side project in Elixir/Phoenix a couple years ago making use of Phoenix WebSockets, and it was truly mindblowing how much you could get done with such a little amount of code.


Like the author, I'm really truly loving Elixir. I recently wrote up some of my experiences with Elixir/Phoenix coming from a Ruby/Rails perspective - https://medium.com/swlh/3-months-with-elixir-phoenix-2810f65....

Curiously, unlike most others here, I'm not completely loving Ecto. I appreciate the philosophy and get changesets, but I'm finding the query syntax clumsy.


I got really excited about discovering a new programming language, Phyton, for a second... but after some searching, I think that's just a typo


thanks!


Elixir: So good that your team simply won’t believe your claims

https://ekarak.com/2020/05/16/of-elixir-phoenix-and-analogie...


The job market for Elixir is pretty small. Sure, there are a couple of people here responding with specific listings, but still. I was hoping it would have ramped up by now but it seems that it's going to stay a niche player.


I think it's healthy but small, and growing.


Looks like elixir is gaining more and more traction :)


The fact that it's a dynamic language gives it problems similar to Node/Python: you can't dev serious backend services without a strongly typed language. Not saying you can't, but you will have a lot of issues over time that would have been caught at compile time.


As long as you've got some kind of strong sanity checks at I/O boundaries and your language & related library ecosystem's not batshit insane (ahem) it can be fine.

It's when you have JavaScript (not even TypeScript) sending and receiving JSON (not even bothering with JSON Schema) to/from Node/Ruby/whatever, saving everything to a permissive document store or to some poorly-normalized RDBMS schema that's missing half the constraints it ought to have and has a few columns that aren't the most correct reasonable type, that you end up with a brittle and slowly-advancing pile of shit pretty reliably. "Let's do a Rails REST API with Mongo and a JavaScript SPA & React Native clients!" = run far, far away.


Having used all three in backend systems, I definitely disagree that they have the same problems. Elixir is strongly typed but lacks static typing, in exchange it has a powerful pattern matching system that more than makes up for this problem in my experience.
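A sketch of what that pattern matching buys you at a boundary (module and names invented): only the expected shapes are accepted, so malformed data fails fast with a FunctionClauseError instead of flowing deeper into the system.

```elixir
defmodule Boundary do
  # Anything that doesn't match one of these heads raises immediately,
  # rather than propagating a bad value downstream.
  def handle({:ok, %{"id" => id}}) when is_integer(id), do: {:found, id}
  def handle({:error, reason}), do: {:failed, reason}
end
```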

But fundamentally, the issues caused by lacking static typing are different from the problems caused by being weakly typed like Python and Node systems are.


Python is strongly typed. I fully believe that Elixir lacks the same problems, but not because of strong vs weak typing. If I had to guess, I'd say it's because Python code is more likely to be mutable than Elixir code, but that's just a guess.


In my experience python is strongly typed as well - unless you meant something more nuanced?


Compilers aren’t a panacea of avoiding problems in production. You just experience different types of problems.

There’s entire classes of short term and long term issues that you get to avoid by working within the BEAM.


The BEAM VM, whether running Elixir, Erlang, or other languages, has been successfully used to implement telecommunication switches, chat services (WhatsApp), and plenty of other "serious backend services."


I don't endorse the parent's perspective, but he was very clear that he wasn't saying it was impossible to build serious backend services, only that you will run into problems that you wouldn't with statically typed languages.


Elixir is a relatively strongly typed language, but it's not statically typed.



