But LV is still in its infancy and there are very real implications of using websockets that are unrelated to open connections.
Just the other day someone posted on the Elixir forums about live_redirect causing two network round trips[0]. Basically there was double the latency when transitioning between pages due to this "bug". I put "bug" in air quotes because it's working as intended, it's just not optimal.
The creator of Elixir mentioned it was doable to fix the issue, but when an issue was opened on GitHub[1] it was shot down with a "this is a known tradeoff, you'll need the 2nd round trip for the websocket". I'm not sure what the state of the issue is, but it currently stands as closed.
I've pinged 80ms to servers where LV was being used and the delay was extremely noticeable (before I even knew about this double network round trip issue). It feels much worse than a Turbo Drive / Turbolinks driven site (which uses HTTP instead of websockets like LV does). In some cases LV feels slower than a regular site, even with the browser having to re-parse all of your assets. The only time LV feels like a net win to me is when you're on a local connection with 1ms of latency.
I wanted to use LV a lot, but backed out because I kept running into bugs and missing features. Plus, after experiencing a LV site on a non-local connection, I can't say that I would want to impose that experience on users. Especially folks who happen to connect from, let's say, Europe to the US, or even further away. The web is a global place.
> The creator of Elixir mentioned it was doable to fix the issue, but when an issue was opened on GitHub[1] it was shot down with a "this is a known tradeoff, you'll need the 2nd round trip for the websocket"
Please Nick, you can literally check your links to see this is inaccurate. I mentioned a solution after the issue was closed - and not before as you describe. And I figured out the solution *with Chris*, as I clearly mention in my comment.
And then later on:
> dive into the source code that I can't read very well because there's a lot of macros
There are like 8 macros...
> since LV is a pre 1.0 release the docs aren't really written up yet since stuff is changing all the time
Seriously? No one has ever said this is the case. Just go to the [official docs](https://hexdocs.pm/phoenix_live_view). All public functions are properly documented, there are introductory guides, etc. Sure, we don't have official screencasts, but that's somewhere we rely on the community to step in: Pragmatic Studio has a fantastic course on LiveView (in which I was involved as a sounding board), there is Grox.io, etc. And it hasn't stopped either, a new book was literally announced today.
There are other inaccuracies in the comments below but honestly I don't have the energy to go down this rabbit hole again.
I should have used quote / unquote instead of the word macro.
When I looked into the code I started with the engine and renderer. Between the two modules there were dozens of quote / unquote usages, which I found hard to follow. Not because it's written poorly or anything like that, but it's not exactly easy to trace that code to learn how something works in more detail.
> I mentioned a solution after the issue was closed - and not before as you describe.
Yes, after it was closed. But look at that chain of events from the perspective of your library's end users:
1. User asks question on forums and presents a case where something very bad happens (2x network round trips)
2. User posts issue on GitHub
3. Creator of LV says it's a known trade off and quickly closes the issue
4. You and the creator of LV talk offline and figure out a potential work around
Re-opening the issue after #4 would have done a lot of good because it shows at a glance that it's a current issue, it's being addressed and open for discussion.
With the issue staying closed, it gives off the message that you're not actively working on fixing the bug and aren't open to any form of discussion or assistance around fixing it. Maybe that wasn't your intention, but that's the message you're sending to some people.
> Seriously?
That's the answer I've always received in the past when asking questions about the state of the docs on Slack and IRC over the years. The current docs usually give you a partial understanding of how something works. It's usually enough to get a basic idea of how something might work but not enough to get the ball rolling to implement a solution in your own application. I don't think I'm the only one who feels this way either because I've seen a lot of repeated questions on IRC and the forums, especially around LV components.
When did you last use it? JavaScript hook support wasn't there early on. The state of the art is the "PETAL stack" [1], which uses client-side JS for interactions that don't require a round trip.
Once when it first came out, then again a year later and then again 6 months ago.
Lack of hooks wasn't a concern I had at the time. It was more around core behavior of the library and critical features that were missing. Some of those features have been added after I posted about them but all that did was destroy any confidence I had in using LV because these are things that would have been encountered on day 1 of deploying a single LV app to production. We're talking huge things, like how to invalidate and update assets in the <head> of your page in a user friendly way.
It left an impression on me that LV isn't really being used much in real world apps by the team developing it. I could be wrong of course, but that's the impression it left. Plus it feels like it's taking a really long time for certain features to make their way into the library. For example, file uploads took something like 18 months to go from being talked about publicly to getting an alpha release, and now it feels like the burden is on the community to test this in production without really knowing much about the feature.
That and the docs still leave a lot to be desired (especially around the time I read them) and the story for the last ~18 months is that since LV is a pre 1.0 release the docs aren't really written up yet since stuff is changing all the time. I know docs take a long time to write (I've written literally over a million words of blog posts / course notes / documentation) but docs and practical examples are also the most important thing IMO to nudge folks into using something.
Personally I don't want to have to read minimal docs and API specs, and dive into source code that I can't read very well (because there's a lot of macros) just to see how something works and use it effectively. Especially if I'm on the front lines of having to pioneer the tech, which means I'll probably be under pressure to report and fix bugs that I don't know how to fix.
I don't know. All of this experience with LV and Elixir / Phoenix really made me understand that this tech stack is not for me. Especially not when Hotwire Turbo exists and works with any back-end language, and it also has proof of being used in a massive, mission-critical SaaS application (https://hey.com). That leaves me super confident that it'll work for me (and it has been), even outside of Rails.
Maybe in 5+ years I'll try Elixir again (hopefully Stripe and other payment providers have Elixir clients by then!), because the core Elixir eco-system in general has a bunch of nice things. It just doesn't feel optimized yet for building applications (IMO). At least not compared to most other web frameworks.
Also, while I don't use Laravel I also have major respect for Caleb Porzio. He created Laravel's version of LiveView (LiveWire)[0] by himself. It mainly uses HTTP, and he also has a ton of docs / videos on implementing practical application features with it. He shipped 2 major versions, and everything about its API and docs just oozes creating something made for developers to develop applications. It's funny how a slightly different position on something can make something explode in popularity.
I haven't even written a single line of Laravel and have no intention on switching to it, but his presentation and execution of an open source library is something I admire.
I think ultimately Phoenix and by extension LV just don't have the manpower.
LiveWire builds on Laravel, which is a massively popular framework in a massively popular language, with Laravel itself using components from Symfony, which is basically the backend framework with the most contributors in the world.
> I think ultimately Phoenix and by extension LV just don't have the manpower.
For comparison's sake:
- [LiveWire] Caleb (creator of LiveWire) made 1,000+ commits and added 200k lines of code from Jan 2019 to Feb 2021
- [LiveView] Chris (creator of LV) made 700+ commits and added 80k lines of code from Oct 2018 to Feb 2021
- [LiveView] Jose (creator of Elixir) made 450+ commits and added 25k lines of code from Oct 2018 to Feb 2021
There are even more contributors to LV (207) overall than to LiveWire (145), which is interesting because LiveWire is 3x more popular based on GitHub stars.
From that you could say that LiveView has more manpower than LiveWire, since Caleb wrote all of that code himself in a shorter amount of time while two people are the main contributors to LV.
Plus at the same time Caleb wrote a huge amount of practical documentation (focused on building features), created lots of screencasts where you build features you would expect to see in most apps (data tables, etc.) and started a podcast around LiveWire. And on top of all of that he created AlpineJS at the same time.
> LiveWire builds on Laravel, which is a massively popular framework in a massively popular language, with Laravel itself using components from Symfony, which is basically the backend framework with the most contributors in the world.
At a fundamental level it feels like the creator of LiveWire is investing in creating a tool that helps developers build applications better and faster. I think that stems from Laravel giving off a sense of developer productivity for building apps, but don't forget that library creators are making these decisions.
I never really got that same impression from working with Phoenix or LiveView. I think it caters towards a completely different type of developer than myself which is why I struggle so much using it. It always felt like instead of showing you how to do something, it makes you figure it out yourself.
> But LiveView may hit 1.0 this year :)
That would be nice to see but I'm not sure how much of a difference that will make in the short term. In the short term going 1.0 is really just deciding to tag a commit. If Chris and Jose plan to write another book to use LV that could still be over a year or 2 out unless they've been writing it in private, or if they re-write the documentation that's also another long journey.
Neither of them really strike me as the screencast type either. When they do create videos they do an excellent job at explaining things though. Really wish they did more of them to be honest. But yeah, these things take time. I don't know what their schedules are like either, so maybe it's not even fair to compare LV vs LiveWire in terms of how fast the library is being built. Maybe Chris and Jose only work on Phoenix and LV for 2-3 hours a week whereas Caleb is working on his stuff full time.
This is an incredibly offensive and insulting comment about people you don’t know, regarding software you openly state you don’t use on a daily basis.
I would care a lot less if we were talking about billionaires building commercial startups, but you’re literally attacking open source contributors. Moreover, my personal experience in using LiveView and interacting with the community has been the exact opposite of yours, so I find your comments to be totally misleading for people who might think that your walls of text have any nuggets of wisdom in them.
I’ve tried a lot of open source software that left me thinking, “wow, what a waste of my time.” For some reason I never felt a need to openly attack the authors of these low-commercial-value-for-the-creator, volunteer-driven projects. You might want to consider directing your critical eye toward the actual problems in this world, rather than volunteer programmers’ projects that didn’t meet your personal, highly-opinionated requirements.
The parent comment is Exhibit A for why so many people refuse to get actively involved in the open source software scene, and why so many drop out.
> Regarding software you openly state you don’t use on a daily basis
Perhaps I was too critical in some of the replies and should have phrased things in a more positive way, but I did use Elixir / Phoenix / LV on a daily basis for a pretty long time.
Over the course of those 18 months I'd estimate putting in 250-300 hours of programming time in spurts while writing 3 applications totaling around 9,500 lines of assorted Phoenix / LV code. Not a lot of code by any means, but enough to put a decent dent towards developing the main app I was building which is where I encountered those issues at the time. I have no agenda or reason to make anything up. I want to see Elixir and Phoenix succeed in the end. I just decided to temporarily put it on hold until it gets over the early adopter phase.
> Comparing two projects by the number of commits and lines of code written. You must be a manager.
We are dealing with limited information here, and GitHub makes it easy to see at a glance what folks are doing on a project. It just so happens it focuses on presenting commits and lines of code written.
Everyone knows it's not the best metric but to get a high level overview of activity those stats work. Especially when the person I was replying to said LV is maybe moving slow due to a lack of manpower. Those numbers show the opposite (LV has more folks working on the project than LiveWire, despite it being much less popular due to Elixir being a smaller niche than PHP).
> Maybe the Elixir/LiveView code is just better written and doesn't need rewrites?
I don't think it's fair to jump to any conclusions about the quality of either code bases.
Maybe my impression from following the Phoenix and LiveView commits was wrong, but it still feels like it's mainly two people.
José works at the same time on Elixir, Phoenix, LiveView, Ecto, Nx, Dashboard and more, it's a lot.
To be fair there also has been integration of LV in the Phoenix mix tasks, in addition to the Dashboard (using LV, ah!), so it's not like things are stalling.
> The Campaign is open for submissions from academia, research institutes and economic operators registered in any of the GSTP participating states: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, Netherlands, Norway, Poland, Portugal, Romania, Slovenia, Spain, Sweden, Switzerland, United Kingdom, Canada.
Not open for submissions from the USA in case anyone else from there was getting excited :(
I can't understand their reasoning. Why would they need an unlimited price cap on gTLDs? I can't imagine their expenses are that high, and they're a non-profit, so what's the incentive to increase gTLD pricing?
It seems their argument is that the New TLDs (the program they created that allowed the creation of .donut and .pepsi and such) doesn't have these limits, and it's unfair to the companies running the "old" TLDs that they do.
Which is a terrible argument, not just because an answer could be "then impose limits on new TLDs", but also because those new TLDs are new products which those companies have to pay for, market, etc, whereas the companies running the old TLDs are just maintaining something that was created by the public.
Probably the usual, the people running the non-profit have financial ties (direct or indirect) to the people who'd make money from increasing the price cap.
7/20 ICANN board members have ties to Internet Society (which manages PIR - the .ORG registry), to PIR directly or NeuStar (managing .biz registry). That was just from reading their bios on ICANN.
4/8 PIR members were connected to ICANN previously or are still active in some capacity (including former board member and former liaison to the board).
How can they be orthogonal when eventually you need open software of some kind to make a verification?
If the software I use to verify the closed source software is closed, then I will need another piece of software to verify that the verify-ing software is what it says it is and so on until we reach a piece of software that I can verify with my own eyes (or a friend has verified with their eyes).
We all eventually have to place our trust somewhere but we shouldn't have to agree with where you decided to trust.
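The regress described above (a verifier that itself needs verifying, until something is checked by eye) can be made concrete with checksums. Here is a toy sketch in Python using only the standard-library hashlib; the blob contents and digest names are hypothetical stand-ins, not any real distribution mechanism:

```python
import hashlib

# Toy artifacts standing in for real binaries (hypothetical contents).
verifier_blob = b"contents of the verification tool"
software_blob = b"contents of the closed-source application"

# Digests obtained out of band -- e.g. read off a vendor's signed page,
# or checked against a copy a friend verified "with their own eyes".
TRUSTED_VERIFIER_SHA256 = hashlib.sha256(verifier_blob).hexdigest()
TRUSTED_SOFTWARE_SHA256 = hashlib.sha256(software_blob).hexdigest()

def verify(blob: bytes, expected_hex: str) -> bool:
    """Return True if blob's SHA-256 digest matches the expected value."""
    return hashlib.sha256(blob).hexdigest() == expected_hex

# Step 1: establish trust in the verifier itself via the out-of-band digest.
assert verify(verifier_blob, TRUSTED_VERIFIER_SHA256)

# Step 2: only then let it vouch for the software. The regress stops at
# whichever digest was checked by eye, which is the point being argued.
assert verify(software_blob, TRUSTED_SOFTWARE_SHA256)
print("trust chain verified")
```

The design point is that each link in the chain is only as trustworthy as the root digest you anchored it to.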
If you didn't trust the closed source disassembler you use, for whatever reason, you would verify the assembly output, not the actual software. In practice this is often done unintentionally anyway: it's common to run a debugger (ex: gdb, windbg) alongside an annotating disassembler (like IDA). objdump (from GNU binutils) also supports many non-Unix executable formats, including PE (Windows) and Mach-O (OSX/iOS).
For fun I just compared objdump's entry point function assembly of a downloaded Pokemon Go IPA to the Hopper representation, and, no surprise, they are identical.
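The cross-checking idea above, where two independent views of the same binary corroborate each other, can be illustrated with a toy analogy at the Python bytecode level using only the standard-library dis module (this is not the native objdump/Hopper workflow, just the same principle in miniature; the `target` function is a made-up stand-in):

```python
import dis
import io
import re

def target(x):
    # Stand-in for the "entry point function" being inspected.
    return x * 2 + 1

# View 1: the human-readable listing (what you'd eyeball in one tool),
# parsed back into a list of opcode names.
buf = io.StringIO()
dis.dis(target, file=buf)
listing_opnames = re.findall(r"\b([A-Z][A-Z_]+)\b", buf.getvalue())

# View 2: the structured instruction stream (a second, independent view).
stream_opnames = [ins.opname for ins in dis.get_instructions(target)]

# If the two listings agree, each view corroborates the other,
# much like diffing objdump output against another disassembler.
assert listing_opnames == stream_opnames
print(stream_opnames)
```

If the two listings ever disagreed, that discrepancy (not blind trust in either tool) is what you would investigate.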
It sure helps though. And being able to compare source code to binary is good for checking blobs.
No one knows what nasties are hidden in these blobs that run on EVERY phone out there. I’d actually go as far as saying cell phone processing modules are completely insecure. Unlikely to receive updates. Poorly documented. Security standards are likely to be weakly implemented with gaping holes.
As for proof, I’d point at the lack of source code as a starting point. Open source doesn’t guarantee security but it at least lets interested parties try to the degree they want or need to.
Sure. Go and have a look at GSM and friends. I’d say the cellular network in general is routinely being manipulated. The protocols and standards for over-the-air are demonstrably insecure and assuming the actual hardware is also insecure is reasonable as well.
Part of the point of this thread is that 'modern flagship phones' are designed with problems like that in mind. You're the one claiming to have 'proof' they are trivially insecure.
I pointed at the lack of source code as a starting point for proof of complete insecurity. I pointed at the ease of exploiting the existing protocols in active use as an example of that insecurity. That insecurity is the basis for Stingray fake towers. If you can fake the tower, then the cellular modules can’t be much better.
I’m sure various agencies are quite frustrated by their inability to use the cellular modem as an entry point into Apple’s phones. That by itself is another pointer.