View-less meaning the Phoenix.View module is refactored/replaced. The amount of boilerplate code is still a challenge in Phoenix.
Worked on a large project using live views.
Pros:
- can achieve interactivity without using client side js frameworks
- fewer developers / less manpower needed compared to client side js frameworks
- evolving in right direction
- community / forums
Cons:
- still evolving and releases are breaking (no backward compatibility).
- the amount of boilerplate code which is generated is too high.
- forms are basic and need to evolve to handle the challenges that come with server-side rendering and interactivity
- there was leex, then heex, now components taking centre stage.
- most of the books, tutorials, videos, articles are stale due to frequent changes / evolution
The Elixir/Phoenix community is small and non-toxic on forums. But sometimes the size of the community is a drawback. Some questions get fewer replies and will be a puzzle for the developer who is working on the project (this does not happen frequently).
Anyone who wants to adopt Live View for production projects will have to overcome these challenges.
> - still evolving and releases are breaking (no backward compatibility)
To clarify, this is only true for LiveView which has not reached v1.0 yet. Even then, we have been careful with deprecations while exploring a new programming model with several new ideas.
In fact, Phoenix v1.7 is precisely the consolidation of those ideas. I’d say Phoenix v1.7 would be tagged as v2.0 in most web frameworks, but we kept it as v1.x exactly because it is still backwards compatible (and it has been so for 7.5 years!).
The last steps before LiveView v1.0 are likely forms (which you mentioned) and perhaps a better mechanism to stream with appending/prepending/replacing data.
Once it is out, we hope the remaining cons will be quickly addressed, except the boilerplate bits. The boilerplate is always a trade-off: if we hide too much, then the pieces become more coupled and it is harder to understand how they fit together. So I’d say Phoenix strikes the right balance here and if you have a highly formulaic application, projects like Ash may be to your taste. :)
All web frameworks fail with forms one way or the other :)
I think one point would be the docs, to outline the basic cases, best practices. Maybe something tying the form helpers and changeset together.
I have a side project [0] with a form which has inputs dependent on others and a dynamic section (assoc).
I struggled a bit with the data, checking what I would get inside a changeset struct, until I "discovered" get_field and put_change.
For the assoc I was also a bit lost until I read an article using a delete virtual field. In the end I made it work, but I'm still not sure I used all the right changeset functions.
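For anyone hitting the same wall, this is roughly the kind of dependent-field juggling I mean — just a sketch, with made-up schema and field names:

```elixir
defmodule MyApp.Booking do
  use Ecto.Schema
  import Ecto.Changeset

  schema "bookings" do
    field :country, :string
    field :region, :string
    # the "delete" trick for dynamic assoc rows
    field :delete, :boolean, virtual: true
  end

  def changeset(booking, attrs) do
    booking
    |> cast(attrs, [:country, :region, :delete])
    |> validate_required([:country])
    |> maybe_reset_region()
  end

  # get_field/2 reads a value whether it lives in the changes or in the
  # original data, which is what you want when one input depends on another;
  # put_change/3 lets the server adjust the dependent field.
  defp maybe_reset_region(changeset) do
    if get_field(changeset, :country) == "other" do
      put_change(changeset, :region, nil)
    else
      changeset
    end
  end
end
```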
To LiveView's credit, this app has been used all over the world and got very good feedback; nobody mentioned any lag issues etc. (well, someone with a crappy connection in Istanbul once got an incomplete uploaded file without getting an error).
That’s one aspect. But I would say the fact that existing Phoenix apps can migrate to v1.7 and use other features such as verified routes without going through a major migration is an important aspect that should not be downplayed, especially when other frameworks may have gone through more than one major version in those 7.5 years and broke existing code more than once.
Heex and components are essentially one and the same. Heex enabled components upon its release and now the framework is (rightfully) pushing components.
Forms I agree with, though I don't feel that pain too badly. Having to be more explicit than just passing in raw parameters to an update function actually has its benefits (though I'm sure better forms are coming along).
Agreed that the framework is moving faster than the available literature on it, though people are doing their best and are doing a pretty good job of keeping up.
What is it about the boilerplate that annoys you? It's quite a bit less than Rails, though that's my only point of comparison unless you're going to unfairly compare it to a microframework. There aren't any generated files that I'm not thankful I can easily edit if I need to.
W.r.t. boilerplate - I have no reference point/comparison to other frameworks.
When the project grows with more contexts, components and forms, one will end up with a bunch of files which differ only in Module.func_names.
In Phoenix < 1.7, the view files required for controllers are boilerplate containing 3-4 lines. In the end there will be a folder of view files with near-identical content unless any view-specific functions are added.
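To illustrate: a typical pre-1.7 view module is literally just this (module name made up), repeated once per controller:

```elixir
# lib/my_app_web/views/page_view.ex
defmodule MyAppWeb.PageView do
  use MyAppWeb, :view
end
```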
It seems like essential complexity to me. I think maybe there's some ceremony around explicitly casting, validating, etc. with Ecto, but it would be hard to hide that without 'magical' assumptions and making it more inflexible. But I agree View files always felt a little redundant for many use cases where the template doesn't really need to make any function calls – and from memory, function calls in LiveView templates are not performant and cause the whole DOM to be regenerated when values change.
I'm working on a medium sized LiveView project now.
The leex to heex change was 1 day for us and the benefits were immediately worth it.
(HTML auto formatting and validation, better syntax).
Components were a thing before heex. I wouldn't say they are taking center stage now, and there is no rewrite required there: you either use Component, LiveComponent or LiveView. When to use which is very simple, as their names indicate.
I'm hoping to see more patterns in the forms space. I spent 18 months building a product in LiveView (didn't work out in the end) and complex forms were one of the most difficult things.
I got really good at creating them in the end, but it was all homegrown patterns that I don't think I'd want to push on others.
My favorite (and fairly simple) pattern was emitting an event from server to client to get the client to send its entire form state to the server. That was my escape hatch because it worked inside of any component structure.
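A rough sketch of that escape hatch, with made-up event names (the client side is a JS hook that listens for the pushed event and replies with the serialized form):

```elixir
defmodule MyAppWeb.WizardLive do
  use Phoenix.LiveView

  # Ask the client to send back its full form state. A JS hook on the form
  # listens for "request_form_state" via handleEvent and answers with
  # this.pushEvent("form_state", serializedForm).
  def handle_event("refresh", _params, socket) do
    {:noreply, push_event(socket, "request_form_state", %{})}
  end

  # The client's reply lands here with the whole form's params,
  # no matter which component the inputs live in.
  def handle_event("form_state", params, socket) do
    {:noreply, assign(socket, :form_params, params)}
  end
end
```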
What kind of forms were you creating? How to handle forms ergonomically has been a central interest of mine since 2003 and I like hearing about what people try and how it works for them.
Ranging from very basic static forms to forms that had dynamic/polymorphic embedded objects in them. Like being able to hit a + button and a new row is added as a child of the form that has its own validations based on a type attribute.
Another thing I am concerned about is the removal of default integration with the FE build pipeline and npm libraries. I understand the complexity of the pipeline, and Rails is doing the same. But the difference is that Rails has a replacement in place (importmap), while Phoenix recommends vendoring the lib. In addition, Rails has a suitable replacement for the use case (hotwire) and a clear focus on the philosophy of productivity. I am not sure why Phoenix did the same things without a clear path forward.
Regarding import maps, I am still slightly skeptical. First, they don’t solve all needs of npm (for example, what if you need to precompile your FE code?). Second, I have a feeling that every import map management tool will eventually become a package manager through slow creep of requirements. At the same time, there is nothing stopping anyone from using it on their Phoenix apps if they so desire.
I would hope it to be long term but it is impossible to know given how fast the JS landscape can change.
But also keep in mind that Phoenix is not tied to any of these. The esbuild bits are only part of the generated app, which you can stick to or change altogether.
So when we moved to esbuild, we broke zero of the existing apps and no deprecations were emitted. The same would happen if we move away from esbuild.
As an example, I moved most of my apps to esbuild (and I have been very happy with it), but one still uses webpack because it uses the Monaco editor and we rely on certain plugins. So our approach has always been “sane defaults” but not getting in the way if you know better or prefer something else.
> - there was leex, then heex, now components taking centre stage.
> - most of the books, tutorials, videos, articles are stale due to frequent changes / evolution
Last two points should have been one. Got split while composing the post. My mistake.
- there was leex, then heex, now components taking centre stage. most of the books, tutorials, videos, articles are stale due to frequent changes / evolution
I am OK with evolution. From a project maintenance and developer perspective, it's a challenge to keep up with the things coming their way.
Regarding frequent changes: in my experience keeping 5 or 6 Phoenix/LiveView apps up to date, the pain I have seen with changes is far less than I have experienced with Rails, Django, and even the front end updates required to keep react and vue in line. All of that when LiveView is not even a 1.0 yet...
I do get that books and other rotting materials obviously fall behind quickly but have you seen hard uplifts to codebases to keep them up to date?
> the amount of boilerplate code which is generated is too high.
Relative to what? From my experience other MVC web frameworks (e.g., Django, .NET, Laravel) have the usual fragmented directory structure typical of MVC + ORM. Unless you're comparing to other LiveView frameworks? In which case, Phoenix is still the OG, and more mature and capable vs other platforms.
It comes down to whether you want to maintain a bunch of auto generated files which will never be automatically updated. I would rather maintain only the lines of code I write not ones that were generated by some tool and then immediately became legacy.
I understand that the generators are for educational purposes. Newcomers might find it cumbersome to get started if generators are absent. However, I think it should just be some guide docs. A generated context can mislead about how the context itself is meant to be used. It's easy to quickly write a dirty API in Phoenix, but practice guides lead to ceremony.
As opposed to the php, ruby, javascript culture, which is: make/build/ship stuff to the world. IMO, this is the growth stopper. I couldn't care less about sophisticated clean architecture, module boundaries, nice pooling http clients, principled goodness. The second growth stopper is jobs. Good luck finding future elixir developers if you do not help build the elixir army. Companies be like, let's choose birth control and adopt senior elixir devs instead.
What I said is... true. (Doesn't mean I don't appreciate the status quo of the ecosystem. Everything is good except it's not growing; probably sideways, up and down.)
I'm building a startup in my free hours using Phoenix Liveview. I can ship features in hours vs days thanks to the simplicity liveview gives me. ZERO react/JS nonsense.
I'm back to writing server-side rendered templates with for-free reactivity. Give it a try, it's very productive.
As someone who knows js really well, I love liveview's hooks system. The js interop is the bee's knees and lets me save my js chops for where I really need it. Also, I can emit js events that get picked up server side and vice versa, i.e. no having to write any ajax calls. My frontend can react to server events with very little boilerplate. That's a WAY better proposition than merely zero js.
PS: the frontend buildchain is esbuild so you get out of the box support for typescript.
I'm also not sure. I think it's an interesting point that you bring up.
My comment on value is mainly one of "all or nothing". I think manual typing at the edges is probably sufficient for most people.
That said, you could implement all of your forms and events as structs or schemas and rig up a piece of the dev process to convert them into types. I think many in Elixir would view it as overkill so it hasn't been done afaik. But I also think it could be implemented very easily and then you'd have end to end typing.
Yes, the downside is basically offline. When offline the LiveView app doesn't work because it can't update its DOM. A decent solution I've enjoyed is using Alpine.js. It fits very neatly into the LiveView philosophy and can do nearly all UI updating easily. It still minimizes the amount of JS you have to write but you get good client-side update behavior.
High latency isn't an issue if you do it correctly by using the loading classes and what not so that the user knows they are waiting for some data to load. It's pretty much identical to a typical React SPA that has to show loading indicators while waiting for a JSON call to complete.
You aren't supposed to use LiveView for every interaction, only the ones that require the server to be involved somehow, just like a regular SPA. For pure client-side interactions LiveView has some utilities like LiveView.JS, or you can use a lib like AlpineJS.
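For example, a purely client-side toggle can be done with LiveView.JS and never touches the server (element ids and names here are made up):

```elixir
defmodule MyAppWeb.MenuComponent do
  use Phoenix.Component
  alias Phoenix.LiveView.JS

  # JS.toggle/1 is executed entirely on the client,
  # so the dropdown opens and closes without a round trip.
  def menu(assigns) do
    ~H"""
    <button phx-click={JS.toggle(to: "#menu-items")}>Menu</button>
    <ul id="menu-items" class="hidden">
      <li>Item one</li>
    </ul>
    """
  end
end
```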
Live View might still be applicable in some latency sensitive contexts: one of the first Live View demos was a server-side rendered animation hosted in the US that played at a smooth 60 fps in Europe.
There are very few problems Live View isn't good for, and conversely, there are very few problems client-side single-page application are necessary for.
It handles crappy connections ok -- it will auto reconnect as needed.
As far as full offline -- it has interfaces to easily alert the user via UI that they are disconnected. However, just like 99.999% of non-LiveView apps, when you are offline things start breaking (load just about any static page site and submit a form or navigate when offline).
Just to point out that "ZERO react/JS nonsense" is based on looking down on mainstream languages even after they've improved and even though the platforms use them.
I mean... It's hard to find anyone who prefers Java to Kotlin for Android development. I know people who still have a JavaOne backpack, and they would rather write Kotlin over Java any day.
JavaScript...I didn't mind how quirky it is when it was SSR + a little jQuery for interactivity. nodejs, asset pipelines like webpack, React make me want to barf. They haven't improved at all.
LiveView looks very interesting and I wish Phoenix/Elixir was an option for me.
I've used them enough to have my own informed opinion on the languages, frameworks and mentality in the ecosystems I talk about. We're both right -- because we both are sharing our opinions.
> With Phoenix.LiveView, Phoenix.View has been replaced by Phoenix.Component. Phoenix.Component is capable of embedding templates on disk as functions components, using the embed_templates function.
I was going to come here and link directly to this - the migration from previous versions is always covered in detail and seems to always work which is great. A lot of frameworks seem to struggle with this quality of documentation when upgrading versions.
LiveView is evolving into a great piece of tech, but as others have noted elsewhere in the comments one of the challenging parts with LiveView right now (and to an extent Phoenix) is the outdated books & tutorials.
Bruce Tate and Sophie DeBenedetto have been authoring the book “Programming Phoenix LiveView” (https://pragprog.com/titles/liveview/programming-phoenix-liv...) which has the potential to be a great source for people that want to really dive into LiveView. The challenge though is they have not updated it to support the changes introduced in 0.18.0 which makes it really hard to start using the book when a new Phoenix application “mix phx.new dev_app” looks different than what’s in their book and some of their code breaks with the default installed versions of included plugs.
While I wish the book would receive an update sooner that brings it back to a compatible state (meaning there are no issues following along with the book), the good news is they have committed to having the book be updated when LiveView hits 1.0.
This makes sense to me. Views always seemed like unnecessary boilerplate to me. Better to keep all the code together in one module, and just import common functions. On the other hand, I don't typically like defining the template in a sigil, unless it's a really small one, so I probably won't do that much.
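For reference, the view-less style ends up looking roughly like this (module and template names made up): templates on disk become function components via embed_templates, and small ones can live inline in a ~H sigil.

```elixir
defmodule MyAppWeb.PageHTML do
  use MyAppWeb, :html

  # Turns lib/my_app_web/controllers/page_html/*.html.heex into function
  # components, e.g. index/1 for index.html.heex.
  embed_templates "page_html/*"

  # Small templates can just be inline function components.
  def greeting(assigns) do
    ~H"""
    <p>Hello, <%= @name %>!</p>
    """
  end
end
```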
I agree in the case of HTML rendering, but I write a lot of JSON APIs and putting the JSON definition in the View has been very pleasant. It didn't say in OP, but how does this change an API? Maybe a `GreetJSON` instead of `GreetHTML`?
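As far as I understand it (a sketch using the made-up names above), the JSON "view" just becomes a plain module of functions that the controller's render call dispatches to based on the request format:

```elixir
defmodule MyAppWeb.GreetJSON do
  # render(conn, :show, name: "world") in the controller picks this
  # function when the request format is JSON and returns the map as the body.
  def show(%{name: name}) do
    %{data: %{greeting: "Hello, #{name}!"}}
  end
end
```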
I feel like the next big unlock in user adoption is continued improvements to BeamAsm. Hopefully something on the order of 100-200% raw performance speed up. Note: I’m talking about raw perf, not concurrency.
Actually, I think the next big unlock in user adoption is static typing. Lack of it is frequently cited as the number one reason that makes people hesitate in switching to Elixir. I know it is an active research project right now, and I hope it bears fruit.
For me personally, this is not the case. I work in multiple languages, some static, some not. And I have as many bugs in the one as the other. I add strings to numbers less in static ones, but I tend to make more design mistakes, because I have to steer through the constraints of the type system.
I view "must have static types/compile time checking" types as a bit reductionist. It's like saying "all food must be seasoned with salt." Salt's a good seasoner. For some meals, it's a must have. For others, it's a nice to have. And for others, it actually detracts. Compile time checking fits in the larger context of what programming in a given language is all about. Syntax, libraries, ecosystem, tools, execution semantics, runtime components. All of these compose for an end result. And typing plays a different role for each composition.
I've been doing Elixir off-and-on for about 1.5 years now. The community is great. If I personally had to lobby for the "next big thing", it would be having a real IDE. I did Smalltalk for 20 years. I want an IDE where I work in modules/functions, not files and do/end syntax. I want a formatter that is not a PhD paper on generalized layout. I want refactoring tools. Inline macro expansion visibility. If I were independently wealthy, I would work on just such a thing in my spare time.
ElixirLS and VSCode are OK, but so so limited really.
What makes Elixir a standout candidate for a killer IDE experience is its simplicity. Simple execution models make the language easier to model and navigate, so IDEs don't need parse engines that constantly need updating (I'm looking at you Kotlin/Compose). Elixir as a language eats its own dogfood. It's sad to me that the Xerox PARC folks exploited this so beautifully with Smalltalk, but Elixir is stuck in the file editor mode still. :(
> And I have as many bugs in the one as the other.
The case for types is very strong in Javascript though.
I also use Ruby for backend and TypeScript for frontend and see no problem with that. Some languages are well suited for dynamic. Sometimes not just due to the language but the community + culture around it. Ruby codebases are also very heavily test driven which helps compensate and the inherent flexibility/developer experience of Ruby can't be matched. Elixir being the only close match.
That would make sense. More information, better IDE.
But it has not been my experience. Nor some of my peers. Despite a meh text editor, the Smalltalk IDEs (Smalltalk/V, VisualWorks, and VisualAge) were amazing. The original "refactoring" work done by John Brant and Don Roberts was pioneered at UIUC in these environments. Their adaptation to the different IDEs just improved them.
I remember sitting next to John and Don at OOPSLA and speculating that refactoring inside of Java (Eclipse was The New Editor at the time) should be superior in Eclipse vs Smalltalk because Java had more type information to work with. John and Don, both Smalltalk enthusiasts in those days, admitted they had anticipated that as well. And they did a bunch of work trying to port refactoring stuff to Eclipse. They were surprised to discover it wasn't so. But it's not typing vs not. It's simplicity vs not. When I asked if that meant that rising star Ruby would benefit from their refactoring work, they said that too was hard. Don explained that, in a nutshell, it was the AST. Much of what tools/IDEs do is work with a model of the language. The Smalltalk AST had 15 polymorphic objects to represent its language model. Ruby was 90+ and climbing at the time. Java was insane. Their take was that while typing might add some information, the combinatorial explosion of modeling the language just made tooling for the language more difficult. It was an interesting insight.
Elixir/Erlang has a language model that is even simpler than Smalltalk in some ways. The trickiest part is the macros.
I left elixir and went back to Go because of static typing.
The elixir project I was in had several dozen developers on it and specs were not enough. I always had to backtrack through several calls and/or run the code and dump the data and then capture the example data as a comment so future devs could know what was coming into the function. Not a great solution.
Based on the number of attempts to add static typing to Erlang, I don't see this ever happening.
I know the author of Gleam is going full-time working on it, but I don't really know much about it or how it gets around the problems with statically typing message-passing. I'm perfectly happy working in dynamic languages and find Elixir's type hinting to be more than adequate.
You've been able to write PureScript compiled to Erlang for years. PureScript is a well tested language at this point, and the Erlang compilation works very well. Statically typed message passing is a non-issue in practice and it has been a bad reason for not having it the whole time.
I've been using Elixir since 2015 and can confidently say there is no use case I would choose Elixir for over PureScript for the BEAM. In practice my projects are Elixir + PureScript because we can have them interoperate, but there are zero technical reasons to choose Elixir over PureScript for BEAM work.
Edit:
`dialyzer` and similar tools are unusable for almost all sizes of projects in part because they're badly made, they're too lax and have zero useful abstraction properties. Making a spec generic, for example, is an exercise in futility, whereas an actual type system has no issues with expressing that very basic property.
Edit 2:
As an upside you also have PureScript for your frontend, so you can just write everything in the same language regardless of how much frontend work you expect to be doing. PureScript has great bindings and a great story around React (it actually fits better since it's a purely functional language, so things like "You can only do effects in `useEffect`" actually are enforced and make sense) and also has its own frontend framework in Halogen which is very nice.
I'd like to see someone tackle a PhD thesis (or ten) on applying formalism to the concept of migrations. I suspect there's a missing aspect of type/group/category theory that's at least as big and important as variance is.
I think it might already mostly exist, it just isn't organized into a discipline.
While it's not as robust as static typing, compile-time type checking for Erlang has come a long way. Eqwalizer works pretty well, though I may be biased since my employer sponsors the project.
I really don’t get this, personally. I’ve worked in Java, Ruby, Typescript, Elixir, JS, and a bit of Elm and I literally never feel like I’m missing anything by not having static types in Elixir. This is doubly true with web based projects.
What are people looking for that they might get from static types?
I used to think this, but nowadays I avoid any language that doesn't have static typing.
The difference in tooling support/IDE completions I get on practically any language that has types vs those that don't is just too drastic for me.
It's also so much less mental load when I don't have to always keep the types in my head.
Another issue I've seen in dynamic languages is when people do try to document the types via comments or otherwise, but then this drifts from reality due to not being updated.
I actually tried elixir and this is one of the main things that turned me off it.
Completions in Elixir in VSCode are as good as anything I've seen in IntelliJ for Java, though that may be a low bar.
As for the burden of the mental model, I can’t really speak to that. In Elixir I usually try to think in data shapes or structures which can be matched on by function heads. I only think of types at the edges of the system.
ElixirLS developer here. Nice to hear that people like the autocompletion, but coming from .net/java I think it still can do much better. It's not very context aware and poorly handles code with parse errors.
In a large JavaScript codebase that's partway to being a TypeScript codebase, I find myself constantly having to insert `console.log` statements to find out what attributes some object actually has. Plus, our number one source of crashes and errors is unhandled null/undefined because someone didn't realize a parameter might be nullish. Static typing would basically eliminate both of those issues.
I hear this about big JS codebases but the 7 year old Elixir app I work on is many 10s of thousands of lines of code and we just don’t have that problem.
It’s not that we never have type errors, it’s just that they’re the least frequent kind of error I see monitoring sentry. We try to do a zero exception policy, so we’re really on top of what kind of errors are happening.
I’ve worked on some pretty large codebases and while there’s usually no IDE button for it, there are good and well known patterns for refactoring dynamic code.
As for bugs, the research on this has always been mixed at best, I guess YMMV.
I’d personally argue that the structural typing in Elixir is more useful and practical than the types in most popular static languages like Java or TS.
Mostly I think it’s a wash where you get a little but give up some useful things too.
Size of projects and lifetime of project. Team members swapping in and out, moving project/repo ownership to another team, ability to make simple changes to a code base without having full understanding, etc.
I'm working on a 7 year old application now with 60+ past contributors, 3-4 teams working in the code base, 3,500 elixir files, and some semi-complex dependencies and I can honestly say that most of it is pretty easy to work on. We have one section that is full of some really thorny code, but that's because it relies heavily on code generation and that's rough in most languages. Onboarding usually only takes about a week, less sometimes.
Every time people talk about static types and the lack of them in Elixir I chime in to repeat that it is a misconception to think of typing as a black and white issue.
Typing systems exist on a spectrum: I constantly get type issues on Javascript and Python, refactoring is a nightmare without a ton of tests, while it was never a big issue in Elixir. Yes, its typing system is inadequate, and dialyzer isn't great, but in practice it is not that much of a problem. Pattern matching saves the day, and type errors found in rarely used code paths, what's the worst they can do? Crash the process? That's the least of our problems on the BEAM.
I've maintained a big data ingestion system that kept getting fed with bad, unforeseen data and it's never gone offline in the 3 years I've overseen its operation. If a bug causes your program to segfault or throw a NullPointerException, you will definitely want to have a strong typing system.
A type system saves a lot of time when reading code that has been written by other people and has evolved over time. In Elixir, I often end up adding debugging statements to code and running it, just to check the data structures.
A type system also gives additional assurances when changing code that is used from many places. It's so nice to make a change and have additional confidence in it because the type system is happy with it.
I can't think of many bugs that I've seen that would have been prevented by a type system. I'd still like to have a type system though, it's sort of extra documentation within the code.
I agree 100% and I'd add, static typing helps when you inherit a poorly written codebase. This unfortunately is probably 99% of codebases in the wild. It has happened so many times that I've seen a lack of test coverage, a lack of understanding of the codebase, but the business requirement to make changes.
Having a type system in place makes minor refactoring possible in this nightmare scenario.
I have been in the situation of poorly written ruby codebases and I can tell you 100% that I would prefer to have a poorly written java codebase with its static types. I prefer ruby as a language but man when it's bad, it's terrible. Just trying to work out the intent of a function when multiple types are passed in as the same argument over the codebase is pure hell.
I’m currently doing two startups: one with rapid consulting-esque development and one more long-term B2B one. I would 100% be using Phoenix for the majority of the projects in the former if it had static typing. Have been a fan since release but, for me, it’s a hard requirement for anything that isn’t a personal project.
Has anyone figured out how to do static typing in Erlang yet?
FB a few years ago announced they were going to work on it for WhatsApp but then it was indefinitely delayed. There have also been a few other attempts, but I don’t believe anyone has succeeded.
In the meantime I don’t know what people have against using @spec. It’s a far more powerful type specification than the majority of static-typed languages out there
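For example, a spec can express unions and tagged tuples directly, alongside guards and pattern matching in the function heads (made-up function):

```elixir
defmodule MyApp.Accounts do
  @type fetch_error :: :not_found | :deactivated

  @spec fetch_user(pos_integer() | String.t()) :: {:ok, map()} | {:error, fetch_error()}
  def fetch_user(id) when is_integer(id) and id > 0, do: {:ok, %{id: id}}
  def fetch_user(email) when is_binary(email), do: {:error, :not_found}
end
```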
Dialyzer is almost always right but I've been using Elixir for around 4 years and I occasionally still find it difficult to decipher its verbose and deceiving messages.
I previously worked with TypeScript which felt like a major productivity boost, but I've run into a few instances where Dialyzer came up with the most cryptic error message that took me at least an hour to figure out. It really sucks when that happens and I wish there were more high quality learning materials that would dive into Dialyzer.
Yep, I've had some weird errors with Dialyzer, but in the mobile world I've had some similarly baffling errors from Kotlin and Swift, particularly around lambdas and generics. Let's not forget that the typing flexibility allowed in Elixir is really quite powerful compared to other languages. (e.g., function signatures can have guards, union types, and pattern matching)
I wish some LLM magic sauce could be applied to compiler and linter warnings in general that provide a context-appropriate plain language explanation, along with suggestions and automated fixes.
Typespec is a nightmare to work with. The error messages are arbitrary and at times misleading. I use it because it is better than nothing, but it has much room for improvement.
I think I've encountered a couple of times where I've gotten obscure errors, and I agree that error messaging needs to be improved. On the other hand, every time it's pointed out an error, it was right. And IMHO union type-checking alone puts it above most popular static-typed languages out there.
The other problem with specs is that they can become out of sync with function signatures.
> In the meantime I don’t know what people have against using @spec.
It's not ergonomic enough (you end up writing function definitions twice).
And there's very little tooling to help automate the process: if you return the result of a function call, good luck figuring out the spec for that if that function is in a library somewhere
> Actually, I think the next big unlock in user adoption is static typing.
From reading that post, it sounds like Jose does not believe static typing is needed, but rather sees it as a nice-to-have that addresses a very narrow set of use cases, primarily helping with documentation rather than bug mitigation.
I think that Jose makes a solid case that static typing doesn’t bring as much to the table to Elixir as most people believe, but enough people point to the lack of it as a reason not to adopt the language that it’s worth addressing. Again, I think @spec is great and people should use it more – just treat the dialyzer warnings as errors.
Edit: Not sure why the downvotes, but the talking points by Jose are on his ElixirConf keynote
I'm not saying that types aren't a useful addition to Elixir, just that many people have wrong claims as to how they could be a useful addition to Elixir.
You've been able to write PureScript that compiles to Erlang and has perfect interop for years, via `purerl`[0]. Using it with Elixir is as simple as adding `purerlex` as a compiler and having your PureScript code automatically compile when `mix` compiles things, and off you go.
In terms of the typing itself, it's exactly what you get in all of PureScript: strict static typing with no `any` or the like. Using `Pinto`, the de facto OTP layer in PureScript, your processes are typed, i.e. their `info` messages & state are typed, which means that they are all much more like strongly typed state machines than anything else.
You can see an example of a basic `gen_server` here:
The differences aren't very big in terms of what you'd expect to be doing. One small thing to note is that the `GenServer.call` expects a closure to be passed instead of having the split between `gen_server:call` & `handle_call`, removing the need for synchronizing two places for your messages being sent and handled.
As an upside you also have PureScript for your frontend, so you can just write everything in the same language regardless of how much frontend work you expect to be doing. PureScript has great bindings and a great story around React (it actually fits better since it's a purely functional language, so things like "You can only do effects in `useEffect`" actually are enforced and make sense) and also has its own frontend framework in Halogen which is very nice.
I come from working with mostly golang for the past decade and have been very happy with it and its concurrency capabilities, but after playing with elixir for the past couple of weeks, I realize there's something to this, especially in terms of what it has to offer for distributed computing. Very cool to hear that they're working on making it more performant!
Honestly, without proper editor support from a company like JetBrains (IntelliJ), I just can't see myself picking it up. Hard to give up on the niceties it provides, especially when working in a non-static language.
I did try the third-party IntelliJ plugin but never got it to work.
If it had that... look out!
Elixir-ls provides Language Server Protocol support as well as VS Code Debug Protocol support which gives extra powers to VS Code, NeoVim, Emacs, and the like.
Unless I did something wrong with my setup the elixir LS is kind of lackluster compared to a full IDE. It doesn't have any automatic refactoring (even variable renaming) nor does it provide automatic detection of syntax errors. It's pretty much just symbol lookup and some autocomplete functionality.
I really wish intelli-j provides a first-class editor. I could never get VSCode + ElixirLS to work smoothly. It's just broken. I have switched to NeoVim + plugins. It feels better mostly because of Vim's eco-system. But I am missing a lot of productivity due to lack of a good IDE.
This has been awesome. I'm currently using the new Phoenix 1.7 generator to build https://locationsquared.com/ and I used the view-less layout to have a small helper function to build the meta tags for each page.
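Nothing fancy — roughly something like a small function component the root layout calls (module and attribute names made up):

```elixir
defmodule MyAppWeb.Layouts do
  use Phoenix.Component

  attr :title, :string, required: true
  attr :description, :string, default: ""

  # Called from the root layout as
  # <.meta_tags title={@page_title} description={@description} />
  def meta_tags(assigns) do
    ~H"""
    <title><%= @title %></title>
    <meta name="description" content={@description} />
    <meta property="og:title" content={@title} />
    """
  end
end
```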
Makes little difference to me; it has been a few years now that I've been creating API backends consumed by separate frontends, whether CLI or JS etc. If anything, for me this approach has done wonders for e2e tests using api mock servers.
But for simpler projects and new starters it probably makes sense.
But I would really suggest anyone out there just to avoid backend-frameworks-based view/templating (guess json responses are view layers) layers
> But I would really suggest anyone out there just to avoid backend-frameworks-based view/templating
I personally struggle to let go of backend templating. It's so easy to translate your data into the right pieces of markup in the template, and just serve the fully-baked HTML to the user instantly. How should I be looking at this differently?
I think a few reasons that made me go with this new (for me, at the time) approach were:
- Backend/API that can be consumed by anything, are frontend-agnostic
- Frontend that depends on JSON/source-agnostic data so that functionality can be tested faster (no "proper"/"dynamic" backend involved, just give it some ApiBlueprint/swagger mock server)
- The frontend rendering isn't a concern of the backend removing some load
As company gets larger, it's so nice to have "important business logic" via API only, that way mobile apps, web apps, partners, etc can use same logic/data. Server side templating is a real drag on enabling this.
> But I would really suggest anyone out there just to avoid backend-frameworks-based view/templating (guess json responses are view layers) layers
Are there client-side rendered frameworks that can match the developer productivity of Phoenix or Rails for early stage, small team products? It seems they are still the best for small teams moving quickly in the early phases of a project.
So, I am not sure what kind of developer productivity we are talking about; I think it's about what people are comfortable with and know of. I am a solo dev working on a side project with rust+phoenix backends (one doing IO-intensive/multithreading stuff and the other presenting the info/business logic through an API) and a svelte frontend, and svelte for me is very simple. On the other hand, if there's an emphasis on developer productivity and your team has experience with Phoenix/RoR, then you can't fight experience.
We're building a startup ( https://www.batteriesincl.com/ ) with Elixir and Phoenix 1.7rc (git master really). It's been amazing; I could not be happier.
- We went hard on components and it's made building UI's easy. In fact I wrote a test library to make component testing easier.
- Live view is so easy with a good component library. I'm not a designer, but with snappy interactions and easy to use components, it's not hard to get something that's exciting.
- Elixir is a very nice language to write distributed systems in. Functional in all the right places plus it has OTP.
- The community is full of very senior people who freely answer your noob questions.
- On-boarding people has gone well. The syntax is friendly enough that experienced engineers grasp the basics, leaving functional programming and OTP to discover.
> The community is full of very senior people who freely answer your noob questions.
The role of José Valim, the creator of Elixir, can't be overstated. He's everywhere, he's got his hands in many of the top used libraries, and he's incredibly welcoming and responsive to all the inane GitHub issues I've opened over the years.
People wrongly compare Elixir to Ruby, but I wonder if he decided to recreate the welcoming and newbie friendly community and leadership of Ruby.
(Chris McCord, the creator of Phoenix, seems like a pretty swell guy too)
Also, a note on the upstream code: Elixir and its core libraries are some of the few projects you get a quick answer to your issues and they aren't closed because they've gone stale. 18 open issues, 5k closed on the core repo is an impressive ratio these days.
I ask because I’m under the impression that using it comes with some sizable scaling problems, which somewhat defeats the point of using Erlang in the first place.
I could be wrong though. Would like to learn more.
Each LiveView requires a persistent WebSocket connection to the server. This means it does have a different scaling profile than the usual request/response lifecycle, but the Erlang VM is greatly capable of holding millions of connections at the same time and therefore is a perfect fit for the LiveView model.
In fact, LiveView is built on top of the same Phoenix Channels we used to achieve 1 million connections on a single server. So I would say that LiveView fully leverages the Erlang VM strengths. :)
What situations would face possible issues with the persistent-socket-per-view approach? A sprawling app with hundreds of distinct views that over-use LiveView? Would there also be multiple persistent sockets for each particular page?
I can't imagine the type of site that would need that sort of structure. Typically you'd have your highly-interactive primary subset of your app, about <10-25% of the routes/views which gets 75-90% of your traffic. While the other 75%+ of routes are just simple static-y/REST CRUD/server-rendered pages.
We use LiveView extensively and one meaningful issue we have is users with intermittent connections on mobile.
On a "normal" req/resp page, their intermittent connection would manifest as a blank then slowly loading page: things that the user understands as being the result (in a way) of "bad internet".
With LiveView, the site simply becomes unresponsive. Nothing changes, and the user interprets this as the site being "broken" (which in a way it is)
Some of the 1.7 stuff has an alert banner that pops up when the connection is broken. I think that could really help.
However I haven't put that in our app, as I have seen other flaky-connection reconnect issues, and I would hate to make any of those more visible with a flashing notice.
> I would hate to make any of those more visible with a flashing notice.
idk I haven't used LiveView in production but an interface that offers a well designed fallback "refresh" button helps for unpredictable edge cases, more so than the negatives of added complexity, which yes implies failure but is also certainly better than failing to optimize for reality.
I ran into a similar hypothetical issue, which in practice was really an edge case, because we cached views for typical users. But I still provided a way to force a reset of both server caches AND Localstorage caches, because otherwise such an option was limited to advanced users/devs who know either the backend or the frontend.
This is win-win ultimately because the user feels taken care of even in niche failure edge cases and QA/devs doing testing aren't forced to use exceptional means to reset state.
there are not a ton -- the pain points mostly boil down to:
Don't use it for extremely latency sensitive UIs (there is a round trip, and most of the time, for most UIs, this is not an issue even using a server across the world).
Don't use it for animation heavy UIs (you can do animations with it but think things like game UI).
It does not work offline (in most cases it's the same story with the standard web and offline).
All of that said, the answer to your second "question" is: there is a process per active page (this is an erlang process and VERY light -- much lighter than a thread; it's not unusual to have millions of them running at a time in the BEAM). So one per user per tab or session.
Also yeah, there is no lock in -- you can have 99% of your pages use static rendering (dead pages) and just use liveview for the ones that would benefit. The choice does not come with sharp edges.
IIRC (don’t claim to be an expert) the main issue is gracefully handling the case where the server end goes away unexpectedly - network issues, reboots, that kind of thing.
yes, active clients reconnect automatically; if you are doing rolling traditional deploys it's great. It is running on the BEAM so you can do live in-place deploys too (although it is much more work).
> I’m under the impression that using it comes with some sizable scaling problems
Interesting. I have the exact opposite impression (I'm not experienced in live view; only written 2 small utilities). Erlang (therefore elixir) is fundamentally distributed, so as you mentioned, it would defeat the purpose.
So what gives you that impression? LiveView is just a smart/reactive socket built on Phoenix that behaves just like any other elixir process, right? Why would it specifically have scaling issues?
LiveView has been pretty great. Mostly it's just a couple of `handle_event` methods away and we have a fully reactive UI.
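For anyone who hasn't tried it, "a couple of handle_event callbacks" really is about the size of it — a toy sketch (names made up):

```elixir
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # phx-click="inc" on the button sends this event over the socket;
  # changing the assign re-renders only the parts of the DOM that use it.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">+1</button>
    <p>Count: <%= @count %></p>
    """
  end
end
```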
I haven't seen any scaling issues for LiveView. Under the hood there's a websocket that pushes and pulls events from a running GenServer for each session. Since each process is independent it's horizontally scalable, as long as you're able to route websockets to the same process in a cluster.
While not the same, I do know that a single machine has been able to scale to a couple of million connections ( https://www.phoenixframework.org/blog/the-road-to-2-million-... ) with phoenix channels, which are harder to scale than independent live views. Also our use case is for smaller scale than that (not too many companies have a million people looking at their ml deploy pipelines) so I haven't been too worried.
If you’re reading this, would be super interesting to update this 7-year-old benchmark to use Live View (and the latest stack). Thanks for all you do btw.
Could even switch over to Bandit which was on a recent Thinking Elixir podcast
> In recent performance tests, Bandit's HTTP/1.x engine is up to 5x faster than Cowboy depending on the number of concurrent requests. When comparing HTTP/2 performance, Bandit is up to 2.3x faster than Cowboy
The PR you linked to adds support for generating new phx apps with the relevant change already incorporated; it’s just a generator change. We’re waiting a bit to incorporate this change to ‘soft launch’ Bandit support.
Bandit is written in pure Elixir which is a bit undesirable to integrate with pure Erlang projects. From what I gleaned off the podcast, it's a subset of the HTTP features available in Cowboy, specifically those that are available on the Phoenix side, and cleaving off the unused functionality realized much of the performance benefit.
Bandit author here. Correct! The byline of Bandit is ‘a web server for Plug applications’ and being able to focus on that narrowed set of requirements is a large part of where the perf boost comes from (less code, easier to reason about, fewer processes, etc).
I'm also doing a solo-startup right now that relies heavily on LiveView and Channels/Websockets. I wouldn't have been able to build it by myself without Elixir/Phoenix. It's absolutely fantastic, can't recommend it highly enough.
100% this. I started with it solo and I was able to achieve what would have been very hard for a three person team with the standard js app + api app + backend app.
What are you using for the component library? I was thinking of trying out Elixir and if there’s something that works well with it I’d rather start there than by trial and error.
> Elixir is a very nice language to write distributed systems in. Functional in all the right places plus it has OTP.
This is the biggest blocker for introducing Elixir to any company I work at. I don't want to become / hire experts in the OTP and the Erlang VM.
I'm ignorant about it in general, but my feeling is it's not only a new language, it's built on abstractions that I'm not sure I'm comfortable owning or operating. Is that wrong and how did you handle the tradeoffs for your company?
My answer to this is the following: does your software have, by any chance, SQL writes and HTTP requests to any third-party APIs within the same endpoint? Or maybe it uses some background job processing that's not SQL based and you have at least one endpoint where a job is enqueued AND something is written with SQL?
If the answer is yes, you already have a distributed system, with all the downsides and none of the mitigations.
If that's the case, you can also use Elixir without knowing anything more and have the same level of penalties.
It's really hard nowadays to avoid distributed systems; it just happens that people don't realize they have one.
The classic Rails + Postgres + Sidekiq is a distributed system
It's not the distributed system bits. It's the heavy abstractions over the distributed system bits that make me question if operating it requires specialized knowledge. I have a performance issue or an edge case with actors from the OTP. How do I debug and resolve it? Say network partitions aren't behaving like I expect and there seems to be data inconsistency or lost. I think you're doing a disservice when you pretend this is the same class of issue as a three tier app you can Google and StackOverflow for.
This dependency should raise questions for anyone looking to adopt this, and I want to know how people have attempted to mitigate this vs. handwaving.
The central abstraction isn't IMHO that heavy. It's just this:
A process has an isolated execution state, a message queue, and an id.
Messages can be added to the queue using the process id (from local or remote processes), and messages can be pulled from the queue by the process (with pattern matching, first matching message is removed from the queue)
If you have a performance issue, introspect with erlang:process_info to see what your processes are doing, and language-independent tools to see what the VM is doing. process_info gives tons of information, IMHO much more operable than other systems I'm familiar with. Ex: if your process has a large message queue, either it's not processing its messages fast enough, it's selectively processing messages and leaving junk in the queue, or some combination; you can inspect the queue with process_info and maybe figure that out. Lots of possibilities for why too slow, maybe it's waiting, maybe it's slow numerically, etc.; process_info can tell you what function is running, which is usually helpful, etc.
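To make that concrete, a toy sketch of both halves — the abstraction itself and the introspection (Elixir's Process module wraps the same erlang calls):

```elixir
# The whole abstraction: an isolated process with a mailbox and an id.
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

# Anyone holding the pid can put a message in its queue...
send(pid, {:ping, self()})

# ...and a process pulls matching messages out of its own mailbox.
receive do
  :pong -> :ok
after
  1_000 -> :timeout
end

# Introspection: Process.info/2 wraps erlang:process_info/2.
Process.info(self(), [:message_queue_len, :current_function, :status])
```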
Network partitions and their effects are highly application dependent, but if you have multiple nodes in your existing system, you have to think about it already, or choose to ignore it, which you can also do here. Although if you use mnesia, you shouldn't ignore it: the default is for partitioned and then rejoined nodes to stay apart, waiting for an intervention, which may be the only reasonable default (because there's no right answer), but it's not what people usually want.
If there's a limit, it's new and I can't find it in a quick search; I did find a 2014 PR that was rejected because dropping messages violates core semantics (IMHO killing the recipient would be OK though). Traditionally it's unbounded, but when you exhaust the memory of the host (or the ulimit, if set), you'll likely lose the whole VM.
EEP 42 is not implemented in mainline OTP. If it were, the proposal would likely link to the implementation, and the EEP index in EEP 0 would show what version it was first included in: https://www.erlang.org/eeps/eep-0000
If your application is using something like this, you're running a modified beam (which is fine; my professional experience is on a modified beam where we added a process_info option to drop all messages in the queue of a given process, and to allow for messages to be added to the front of the queue, neither of which would be acceptable upstream, but both of which were super handy for us)
I see your point, I still disagree, I encountered some bugs that were incredibly surprising with postgres + sidekiq combo (made perfect sense though)
Elixir doesn't distribute anything unless you tell it to, keep that in mind. You can run it like a normal rails app.
The OTP by default runs on one machine and as such, it's not subject to distributed systems law.
And you can still scale horizontally in the traditional way (deploy the app to multiple machines), no need to learn or use multi-node setups.
The debugging experience is the same.
Of course you have to learn the OTP library at some point, but it's equivalent to study the ruby standard library: it's natural, must be done, like for any programming language.
That being said, the tools provided are really nice, given erlang is very old, the tools do exist.
And you can connect to a production machine with an "irb session", except that the one in elixir is in the same memory space as the main process, so you can actually inspect in memory stuff, very powerful and dangerous.
You can kinda ignore OTP outside of the initial "run these (erlang) processes when the vm boots", which is a few lines - often scaffolded out automatically.
But then when you suddenly need some async task x service x parallelism, it's there.
I guess if you're building a company it does behoove you to at least read a bit about it, but you don't need every team member to be an OTP expert. If you can understand how an OS runs processes at the most basic level and how JavaScript's async/await works, you're already most of the way there in a practical day-to-day sense.
Ignore OTP for the most part. Just write good code that works at one level at a time. Then when you hit something that needs OTP I send them towards Designing Elixir Systems with OTP
This is all about the onboarding experience of engineers coming onto a team. So you have to be careful with the projects that engineers newer to elixir get. But that's part of the cost of a more niche language.
The abstractions you need to critically know are map and reduce. It's not functional like Haskell; the limit at which you need to "think about it being functional" is that a value inside a variable can't change from underneath you when you pass it to a function. It pretty quickly changes from "I have to think about values not changing" to "I don't have to worry about values changing".
The abstractions I’m talking about are the Erlang VM's and the primitives, like actors, that are provided by OTP. I have cold sweats thinking about needing to debug a non-obvious performance issue and diving into that layer.
Ok. Well there are a lot of other options for performance, and no matter what system you're in (python, ruby, rust, jvm, c++) that kind of performance debugging is going to be a slog. GenServers are relatively easy to work with and the VM gives you a lot of tools to figure it out. Most people at scale seem to be doing okay with elixir.
I will say the one thing that I do see coming up over and over again is OOM errors, but I personally feel that's because there are a few gotchas that juniors don't always know about
Speaking as an amateur programmer, is this like Laravel's livewire or inertiajs with inline blade components (but in Elixir)? Or is this something else?
imitators use long polling to periodically check for updates. liveview spins up a dedicated vm process for each active user and can push updates over websocket to update things. This process can do things like listen for serverside events as well.
It may be redundant but I think it's always important to note that the processes mentioned above are not like OS processes here -- they're an extremely lightweight internal VM thread-like thing in the BEAM VM. It is not uncommon to have many millions running in the BEAM VM. Very much lighter than threads or OS processes.
I love how web development has gone full circle. In 2006 I wrote a chatroom site that was server-side with Ruby on Rails-based javascript sprinkled in for realtime page updates. It was long polling of a single HTTP request (websockets didn't exist then) but it worked extremely well. Never liked SPAs and the npm bloat. Happy that stuff like Svelte and Phoenix is taking it back to the server-side with JS sprinkles place which seems optimal to me.
Was excited to hear views are being dropped. Been creating Web applications for over 20 years and have seen very little benefit to using views. Always saw them as an extra layer to think about, especially when trying to find where a function/method is defined when maintaining code. I think the MVC/MVVM ideas as applied to Web development were a step forward in getting people to organize code in general, but a step backward in keeping coherent, colocated, and maintainable code as projects grow. I've found that reusable encapsulated components for structure and well documented utility classes for CSS styles have always been the most helpful tools for creating maintainable Web applications, which seems to be the direction Phoenix and LiveView are going.
Personally I'd highly suggest investing in time in both, but start with basic Elixir and understanding the concurrency model and functional programming if you're coming from an imperative or OOP world.
If you like learning by watching videos, I cannot recommend this course highly enough, it's what accelerated me from zero to a lot of the key concepts:
The instructor speaks in a way that's easy for me to consume. It assumes you already know how to program, and the way he talks about the various topics gives the concepts a depth that's missing in a lot of material I've seen -- he explains the "why" of things, not just the "what" and "how".
The course has you implement a hangman game, starting from a basic client/server model, eventually moving to a non-LiveView Phoenix page and then to LiveView, so you get a great understanding of how to architect an application the right way and how to make good design decisions.
The whole thing costs just $35, which I think is perfectly reasonable and doesn't break the bank for anyone. The Pragmatic courses are good, but Phoenix is too out of date in them now and it'll cause a lot of pain if you're trying to follow along exactly. Phoenix has evolved quite a bit from 1.5 to 1.6 and now 1.7, and those changes can be confusing to someone coming to it for the first time.
Another place to start would be the basic docs on the Elixir site and then the Phoenix site; they're really good. Elixir School is also good (and free).
Thanks. I’m on the pattern matching section of that course now, and am loving the style so far. I like that the explanation isn’t solely dependent on the videos, too.
Also, if I may, I'd recommend a few videos to watch; they're what got me into Elixir in the first place.
1. "The Soul of Elixir and Erlang" - https://youtu.be/JvBT4XBdoUE ; this fast-paced talk by Sasa Juric will highlight exactly what is special about Elixir/Erlang in terms of fault-tolerance and scalability
2. "Using the Beam to Fight COVID-19" - https://youtu.be/cVQUPvmmaxQ ; this highlights a real-world use of Elixir and the speciality of how the BEAM works using an actor model which I found fascinating. Should get your excited about the possibilities of this language.
3. "The Do's and Don'ts of Error Handling" - https://youtu.be/TTM_b7EJg5E ; this talk by Joe Armstrong, inventor of Erlang, again highlights the shift in thinking about error handling and failure recovery. Not Elixir specific, but since Elixir has it's roots in Erlang it's well worth listening to this talk, which is conceptual and not code-centric.
None of the videos above are about Phoenix or LiveView, but they should help build some concepts in your brain about why you'd want to go down this learning path of Elixir and Phoenix, and why it's different. I listened to the first two and couldn't stop thinking about them, personally.
One last thought as you go through any training material or topics, and I cannot recommend this strongly enough. Consider this pattern: download LiveBook (https://livebook.dev) and create a notebook for each lesson or major topic you're learning. It gives you a great way to learn, take notes, experiment, and keep the history, rather than just using iex at the command line all the time. I've found this greatly accelerated my active learning, since it's easy to fall into the passive-learning trap of just regurgitating the code in tutorials without the concepts sticking in your brain.
Awesome, glad you liked it! Yes, I get bored when it's just one format and I like the mix of video, which isn't overwhelming, and then supplemental reading and exercises. I also love that the author makes mistakes along the way (as he points out in the beginning) because that caused me to pay even more attention to what he was doing.
I would recommend reading these two books in parallel: Elixir in Action by Sasa Juric (one of my favorite programming books in general), and Programming Phoenix from PragProg. And before that, to whet your appetite and get a feel for the Erlang VM, watch a short video by Sasa Juric on YouTube: https://www.youtube.com/watch?v=JvBT4XBdoUE
Thanks for these recommendations. I'm now part way through the book, and have been amazed at the clarity with which Sasa writes.
Then today I watched half of the video, and was blown away by how clearly he communicates, even when (or especially when?) he's speaking twice as fast as most people delivering presentations.
I feel like the title is a bit misleading. All three are still decoupled; it's more that the V and C are now colocated in the directory structure, which seems a bit more ergonomic.
As it stands you have to write a separate API, though there is work being done on native [0].
Otherwise, part of the initial value-prop of LiveView is that if you don't need multiple frontends and you don't need offline support--which a whole lot of apps don't--then LiveView drastically simplifies your stack while still delivering "SPA-like" speed.
Like others have said, you'd probably write a JSON API separately. One thing I'll mention, though, is that Phoenix's conventional design of using context modules often makes that very easy, in that your LiveViews and API controllers can just call the same business-logic functions to CRUD resources.
This. As long as you keep the API relatively simple -- something like basic RESTful CRUD -- it is really low cost to transform your domain-model modules (commonly called "contexts" in the community) into whatever JSON you need.
It's just two different presentations of the same data: you do the work on the data model and the way to access it in Elixir, then thinly wrap it in an API.
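A rough sketch of what that separation can look like, with hypothetical module and function names (only the Phoenix/LiveView callbacks are the real API; everything else is illustrative):

```elixir
# The context holds the business logic; both the LiveView and the JSON
# controller call into it.
defmodule MyApp.Catalog do
  def list_products do
    # e.g. Repo.all(Product) in a real app; hard-coded here for illustration
    [%{id: 1, name: "Widget"}]
  end
end

# LiveView front end: renders the same data as HTML over a websocket.
defmodule MyAppWeb.ProductLive.Index do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, products: MyApp.Catalog.list_products())}
  end

  def render(assigns) do
    ~H"""
    <ul>
      <%= for p <- @products do %>
        <li><%= p.name %></li>
      <% end %>
    </ul>
    """
  end
end

# JSON API: a thin wrapper over the exact same context function.
defmodule MyAppWeb.API.ProductController do
  use Phoenix.Controller, formats: [:json]

  def index(conn, _params) do
    json(conn, %{data: MyApp.Catalog.list_products()})
  end
end
```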
Can someone _please_ get around to building a framework for Gleam[0] so that I can finally get over my hesitance about diving into a BEAM language, given Erlang's/Elixir's dynamic typing?
> he who types statically types twice the amount and thus half as fast.
If you mean by this, "static typing is not worth the effort", the zeitgeist is definitely not with you on this one.
> There's honestly no need for static typing in Elixir.
I'd love for that to be the case! Is the only way to find out for me to give it a try, or are there some things you can say to help me believe?
Yes, that's definitely what I mean. I've never much cared for the zeitgeist. I care about a reasonable rate of return.
If you care about static type checking, Erlang/Elixir actually has an opt-in tool called Dialyzer that lets you annotate functions and then traces the types through your code. I've never even tried it though, because I never felt the need for static typing.
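For reference, the annotations are ordinary `@spec` typespecs, and in Elixir projects Dialyzer is usually run via the dialyxir package's `mix dialyzer` task. A minimal sketch, with hypothetical module and function names:

```elixir
defmodule Pricing do
  @type cents :: non_neg_integer()

  # Dialyzer checks calls and return values against these specs.
  @spec total([cents], float) :: cents
  def total(line_items, tax_rate) do
    subtotal = Enum.sum(line_items)
    round(subtotal * (1 + tax_rate))
  end
end

# A call like Pricing.total(:oops, 0.2) elsewhere in the codebase
# would be flagged when you run `mix dialyzer`.
```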