If they could add support for remote development (meaning the claude code instance runs on the remote server in the same folder that you have already opened as a remote/ssh project in Zed) and add a way to paste images in Zed and have them interpreted by CC on the server, this would really be a killer feature.
As someone who’s running a development agency, I need tens of dev environments for different client projects running at the same time, and to be able to switch between them multiple times every day (often from multiple client computers), so a remote server is the only way to go–I don’t want all of that stuff running on my Macs.
Nowadays I also have tens of CCs running on the dev server, switching between them using tmux, which works great, but the lack of support for pasting images through the terminal/ssh/tmux has been a real bummer. It would be great if Zed found a way to bridge that gap.
Heh, I do know that to some extent, but I feel like it’s a different issue. We are all submitting sensitive data all the time, and trusting that whatever service we are using (and the services they in turn are using) will handle our secrets responsibly.
But that stuff is regulated by laws (GDPR etc.) and, at least to some degree, self-regulated by economic principles (leaking passwords or credit cards should be bad for business). More importantly, though, it isn’t in itself a violation of security best practices. You often have to submit sensitive information to live in the modern world.
What is totally unnecessary, though, is for highly trusted services to teach people to share sensitive files unreservedly, just because they are really nice to have during debugging.
Appreciate the perspective. I do realise that HAR files are very useful, if for nothing else than being able to rule out any client-side issues. However, I don’t agree with their decision to make it impossible to get something looked at without HAR files - especially when there is a legitimate concern that they may contain highly sensitive data (even after their tool’s automated cleaning), and for something that is almost certainly a backend issue.
Also, it’s not so much that I don’t trust Google to handle the files responsibly; I just think it’s wrong in principle to ask customers to send highly technical files (that most people won’t understand the implications of) in this day and age, when everywhere else we are all trying our best to educate people on how NOT to get tricked into sharing security credentials and credit card info.
How easy would it be to call someone you know is having a payment card issue, claim you are from Google Support, ask them to follow the procedure to record a HAR file while they try to add a new card, and then have them send it to some Google-looking email address? Even though many people have now learned that they shouldn’t give out their password to anyone or click random links in emails, I suspect that a huge percentage would have no idea what they had just emailed to a stranger in this scenario.
Do we really want the major players to teach their customers that it’s perfectly fine to share whatever with someone claiming to be a support rep? Shouldn’t we be moving in the other direction instead?
There's definitely a line to walk there re: consumer education, but I'll offer the analogy that if you walk into a bank to obtain a loan, you'll hand over _far_ more sensitive information than is in a HAR file. Typically this is fine, though, because we're confident we're talking to a party that actually needs it and is who they say they are, both to lend legitimacy and to give us someone to follow up with if something goes wrong. (The fact that we initiated the interaction would also seem to lend some legitimacy to otherwise "escalatory" requests.) I personally see a similar relationship when I reach out to some service provider/utility with an issue: I'll tell them my SSN, but if someone on the street walks up and says "I'm from the water company, tell me your birthdate" I'd... make a very confused face.
That said, and to be clear, I am sensitive to the "making it impossible" part, and stand by my earlier statement that ideally you should be able to push back enough to get a cogent answer from the PG as to why they need it, or get an exception if not. (We should absolutely teach people to have informed reservations. Ideally we'd also have better mechanisms for easily verifying identity and securely sharing and ring-fencing information, but if wishes were nickels etc.)
(To wrap this ramble up, I will grant you a scary addendum: a slight variation on the phishing attack you described even breaches the "we initiated the communication" trust exercise, since a more sophisticated phisher may be able to identify through a side channel that you're having a certain issue and may have reached out for assistance, and can try to intercede by offering help while pretending to be the intended respondent. The mitigation to this one is typically "never trust someone who reaches out to you; call the trusted, verifiable root-of-identity yourself each time", but it illustrates the balance one has to strike in keeping ahead of the escalating cat-and-mouse game while still being able to securely exchange information when necessary.)
Almost every time I see a discussion about LiveView there’s someone complaining about the issue of latency/lag, and how it makes LiveView unsuitable for real-world applications.
From what I understand, the issue is that every event that happens on the client (say, a click) has to make a roundtrip to the server before the UI can be updated. If latency is high, this can make for a poor user experience, the argument goes.
As the creator of LiveView, what’s your take on this? Is it a real and difficult-to-solve issue, or do people just not see "the LiveView way" of solving it?
I think LiveView looks amazing, but this possible issue (in addition to chronic lack of time) has made me a little unsure of whether it’s ready to use for a real project.
> There are also use cases which are a bad fit for LiveView:
> Animations - animations, menus, and general UI events that do not need the server in the first place are a bad fit for LiveView. Those can be achieved without LiveView in multiple ways, such as with CSS and CSS transitions, using LiveView hooks, or even integrating with UI toolkits designed for this purpose, such as Bootstrap, Alpine.JS, and similar
Second, it's important to call out how LiveView will beat client-side apps that necessarily need to talk to the server to perform writes or reads, because we already have the connection established, there's less overhead on the other side since we don't need to fetch the world, and we send less data as the result of the interaction. If you click "post tweet", whether it's LiveView or React, you're talking to the server, so there's no more or less suitability there compared to an SPA.
I had a big writeup about these points on the DockYard blog for those interested in this kind of thing, along with LiveView's optimistic UI features:
Thanks for the pointers and insights. I’ve been reading up on this tonight (local time), and this whole issue seems to be mostly a misconception.
Between things like phx-disable-with and phx-*-loading, and the ability to add any client-side logic using JS, there don’t really seem to be any limitations compared to a more traditional SPA using (for example) React and a JSON API.
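For example, as far as I can tell from the docs, a form like this gets instant client-side feedback on submit without any custom JS (a rough sketch, the event name is made up):

    <form phx-submit="save">
      <input type="text" name="title" />
      <%# the label swaps and the button is disabled the moment you click,
          before any round-trip; the phx-submit-loading class can be styled too %>
      <button type="submit" phx-disable-with="Saving...">Save</button>
    </form>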
I hope I haven’t added to the confusion about this by bringing it up, I was just very curious to hear your thoughts on it.
I think the big difference is that with React a lot of interactions can be handled entirely client side, with the server-side component happening only after the fact (asynchronously).
I’ll grant you that that isn’t often the case, and recovering from inconsistencies is pretty painful, but I can see how people would go for that.
I kind of like the idea that I can just build all my code in one place instead of a completely separate front end and back end, though.
> LiveView will beat client-side apps that necessarily needs to talk to the server to perform writes or reads because we already have the connection established and there's less overhead
Don't modern browsers already share a TCP connection for multiple queries implicitly?
Yeah. The overhead I see that's being reduced from a performance point of view is the server not needing to query the session/user information on every message, compared to ajax. That's true for websockets in general. And then the responses might be slightly smaller because it is an efficient diff with just the changed information.
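Roughly like this, as I understand it (module and function names made up): the user is looked up once when the socket mounts, and later events just read the assigns instead of hitting the session again.

    def mount(_params, %{"user_id" => user_id}, socket) do
      # one lookup when the websocket connects...
      {:ok, assign(socket, :current_user, Accounts.get_user!(user_id))}
    end

    def handle_event("save", params, socket) do
      # ...every subsequent message reuses socket.assigns, no per-request session work
      Posts.create(socket.assigns.current_user, params)
      {:noreply, socket}
    end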
There's a funny story here. We created Fly.io, Chris created Phoenix. We met earlier this year and realized we'd accidentally built complementary tools. The pithy answer is now "just deploy LiveView apps close to users". If a message round trip (over a pre-established websocket) takes <50ms it seems instantaneous.
This means moving logic to client side JS becomes an optimization, rather than a requirement. You can naively build LiveView and send stuff you shouldn't to the server, then optimize that away by writing javascript as your app matures.
What I've found is that I don't get to the optimize with JS step very often. But I know it's there if I need it.
How exactly would you 'optimize with JS'? Do you think this optimization can be done to the extent of enabling offline experiences? Might not be full functionality, but bookmarks/saved articles, for example.
Lots of answers here including one from Chris McCord himself, but I'll offer my take based on my professional experience developing web apps (though I've never used Phoenix professionally):
A large majority of businesses out there start off targeting one region/area/country. The latency from LiveView in this scenario is imperceptible (it's microseconds). If these businesses are so lucky as to expand internationally, they are going to want to deploy instances of their apps closer to their users regardless of whether or not they are using LiveView.
LiveView could be a huge help to these startups. The development speed to get a concurrent, SPA-style app up and running is unparalleled, and it scales really well. My guess would be that the people who are worried about the latency here (which is going to exist with any SPA anyway) are the ones developing personal pages, blogs, educational material, etc. that they are hoping the world is going to see out of the gate. In this case, LiveView is not the answer!!! And as I've stated elsewhere 'round here, LiveView does not claim to be "one-size-fits-all". If latency really IS that big of a concern, LiveView is not the right choice for your app. But there really is a huge set of businesses that could benefit from using it, either because they are a start-up focused on a single area/region/country, or because they are already making tons of money, can easily afford to "just deploy closer to their users", and could benefit from LiveView's (and Phoenix's) extreme simplicity.
Pretty much this. Also, I’m not sure most people realise how incremental LiveView can be. You can use it for a little widget on any page and later swap in a react component if you truly need one (which most apps probably don’t).
It’s not designed to run the NY Times. But it is a super useful tool that will benefit a ton of apps out there.
Is microseconds correct? Even with a good connection in online games I’ve only seen ping latencies of 3ms or so, and a more common range on an average connection is 20ms-50ms.
Should be, though mileage may vary, of course. I'm having trouble finding a better example, but https://elixir-console-wye.herokuapp.com/ is made in LiveView. You can try it out and see what you get (I have no idea where it's deployed; it's a phoenixphrenzy.com winner, and there's plenty more there to browse through). Its payloads are a bit larger than some typical responses I have in my personal apps, and I'm seeing 1-2ms responses in Toronto, Canada (Chrome devtools doesn't show greater precision than 0.001 for websocket requests).
Yep, which is what I meant in my comment you're replying to (as per my statement that devtools only report to the 0.001). But as pointed out by jclem, I'm probably wrong about microsecond response times anyway. I'm very likely thinking about the MICROseconds I see in dev, which of course doesn't count :) But with the heroku link above, I am seeing as low as 1-3 MILLIseconds in Toronto, Canada.
I pointed out below that I actually DID mean microseconds, but that was likely skewed by the times I was seeing in dev. Hopefully it does not take away from my point that response times are still imperceptible when in roughly the same region (I'm seeing 1-3 milliseconds in the heroku-hosted LiveView app I linked below).
For a lot of the LiveView applications that I write (which is actually quite a few these days), I will usually lean on something like AlpineJS for frontend specific interactions, and my LiveView state is for things that require backend state.
For example, if I have a flag to show/hide a modal to confirm a resource delete, the show/hide flag would live in AlpineJS, while the resource I was deleting would live in the state of my LiveView.
This way, there are no round trips to the server over websocket to toggle the modal. Hopefully that example makes sense :).
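Something along these lines (a simplified sketch, names made up): Alpine owns the purely visual flag, and LiveView only hears about the actual delete.

    <div x-data="{ showConfirm: false }">
      <button @click="showConfirm = true">Delete post</button>

      <div x-show="showConfirm">
        <p>Really delete this post?</p>
        <button @click="showConfirm = false">Cancel</button>
        <%# only this click goes over the websocket %>
        <button phx-click="delete_post" phx-value-id={@post.id}>Confirm</button>
      </div>
    </div>

    def handle_event("delete_post", %{"id" => id}, socket) do
      # Posts.delete!/1 stands in for your own context function
      Posts.delete!(id)
      {:noreply, socket}
    end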
The PHP equivalent would be the TALL stack (Tailwind, AlpineJS, Laravel and Livewire). Although Livewire just communicates over AJAX. The original Websockets version didn't make it.
I just found out that Livewire was inspired by LiveView.
It’s telling that every answer is “just deploy servers near your users.”
One of YouTube’s most pivotal moments was when they saw their latency skyrocket. They couldn’t figure out why.
Until someone realized it was because their users, for the first time, were worldwide. The Brazilians were causing their latency charts to go from a nice <300ms average to >1.5s average. Yet obviously that was a great thing, because if Brazilians want your product so badly that they’re willing to wait 1.5s on every click, you’re probably on to something.
Mark my words: if elixir takes off, someday someone is going to write the equivalent of how gamedevs solve this problem: client side logic to extrapolate instantaneous changes + server side rollback if the client gets out of sync.
Or they won’t, and everyone will just assume 50ms is all you need. :)
> It’s telling that every answer is “just deploy servers near your users.”
This isn't the takeaway at all. The takeaway is that we can match or beat SPAs that necessarily have to talk to the server anyway, which covers a massive class of applications. You'd deploy your SPA-driven app close to users for the same reason you'd deploy your LiveView application, or your assets – reducing the speed-of-light distance provides better UX. It's just that most platforms outside of Elixir have no distribution story, so being close to users involves way more operational and code-level concerns and becomes a non-starter. Deploying LiveView close to users is like deploying your game server close to users – we have real, actual running code for that user, so we can do all kinds of interesting things by being near to them.
The way we write applications lends itself to being close to users.
Imagine how painful HN would be if you upvoted someone and didn’t see the arrow vanish until the server responded. Instead of knowing instantly whether you missed the button, you’d end up habitually tapping it twice. (Better to do that than to wait and go “hmm, did I hit the button? Oh wait, my train is going through a tunnel…”)
Imagine how painful typing would be if you had to wait after each keypress till the server acknowledged it. Everyone’s had the experience of being SSH’ed into a mostly-frozen server; good luck typing on a phone keyboard instead of a real keyboard without typo’ing your buffered keys.
The point is, there are many application-specific areas where client-side prediction is necessary. Taking a hardline stance of “just deploy closer servers” will only handicap Elixir in the long run.
You'd use the optimistic UI features that LiveView ships with out of the box to handle the arrow click, and you wouldn't await a server round-trip for each keypress, so again that's not how LiveView form input works. For posterity, I linked another blog post where I talk exactly about these kinds of things, including optimistic UI and "controlled inputs" for the keyboard scenario:
https://dockyard.com/blog/2020/12/21/optimizing-user-experie...
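As a rough sketch of the arrow case (using the Phoenix.LiveView.JS commands available in newer LiveView releases; the event name and DOM ids here are made up), the class flips on the client immediately while the push goes out in parallel:

    <%# assumes `alias Phoenix.LiveView.JS` in the view %>
    <button
      id={"vote-#{@post.id}"}
      phx-click={JS.add_class("voted", to: "#vote-#{@post.id}") |> JS.push("upvote")}
    >
      ▲
    </button>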
While we can draw parallels to game servers being near users, I don't think it makes sense for us to argue that LiveView should take the same architecture as an FPS :)
> Deploying LiveView close to users is like deploying your game server close to users – we have real, actual running code for that user, so we can do all kinds of interesting things by being near to them
Then why do you start running forward instantly when you press “W” in counterstrike or quake? Why not just deploy servers closer to users?
Gamedev and webdev are more closely related than they seem. Now that webdev is getting closer, it might be good to take advantage of gamedev’s prior art in this domain.
There’s a reason us gamedevs go through the trouble. That pesky speed of light isn’t instant. Pressing “w” (or tapping a button) isn’t instant either, but it may as well be.
> Then why do you start running forward instantly when you press “W” in counterstrike or quake? Why not just deploy servers closer to users?
You do both? Game client handles movements and writes game state changes to a server, which should be close to the user to reduce the possibility for invalid state behaviors? You really haven't seen online games that deploy servers all over the world to reduce latency for their users? What?
Both web apps and games do optimistic server writes. Both web apps and games have to accommodate a failed write. Both web apps and games handle local state and remote state differently.
I read his post as a criticism of how little optimistic updating is done in web apps, and how bad the user story is. Why can't it be easy to build every app as a collaborative editing tool without writing your own OT or CRDT?
Because an occasional glitch when the client & server sync back up is acceptable in a game. Finding out that my order didn't actually go through is much worse, especially since "click button, see success, close browser" is a relatively common use case.
1. SPA with asynchronous server communication. A button switches to a spinner the moment you click it, and spins until the update is safe at the server. Error messages can show up near the button, or in a toast.
2. LiveView where updates go via the server. The button shows no change (after recovering from submit "bounce" animation) until a response from the server has come back to you. To do anything better, you need to write it yourself, and now you're back in SPA world again.
There's a reason textarea input isn't sent to a server with the server updating the visible contents! Same thing applies to all aspects of the UX.
That's a deliberate UI choice, though, and it doesn't always make sense in non-transactional workflows. It's easy to wait for Google Docs to say "Saved to Drive", and going to a new page to save a document would be really disruptive to your workflow, for example.
I remember this story but can't find it anywhere. If I recall correctly they deployed a fix that decreased the payload size. However, in doing so they actually opened the door to users with slow connections that were unable to use it at all before. So measured latency actually went up instead of down.
YES! Thank you! I’ve seriously been searching for like five decades. What was the magical search phrase? “YouTube Brazil increase latency” came back with “How YouTube radicalized Brazil” and other such stories. (Turns out the article mentions “South America” rather than “Brazil”; guess my Dota instincts kicked in.)
Thank you! It was impossible to find anything on Google since any variant of "youtube", "latency" etc showed results for problems with YouTube or actual YouTube videos talking about latency.
> Mark my words: if elixir takes off, someday someone is going to write the equivalent of how gamedevs solve this problem: client side logic to extrapolate instantaneous changes + server side rollback if the client gets out of sync.
Most games have the benefit that they're modeling the mechanics of physical objects moving around in a world, with users expressing their intentions through spatial movement. The first gives a pretty healthy prior for modeling movement when data drops out, and the latter can be fairly repetitive and thereby learnable and predictable.
Whether or not user interaction behaviors can be learned in the context of driving web applications seems a little less clear, to me at least. It does seem like there are a lot more degrees of freedom.
Nothing so complicated.
All that's needed is a local cache so that when you type a new message in the chat window, you immediately see it appear when you hit submit (optionally with an indication of when the message was received by the peer).
But there's quite a bit of tooling required to reliably update the local cache and run the code both on the client and on the server.
Firebase does this brilliantly with Firestore queries. Any data mutation by the client shows up in persistent searches immediately, flagged as tentative until server acknowledges.
On the modern internet, with some assumptions, you can get something like 2x faster (in my case) when sending data over an *already established* connection.
Example:
A full, fresh HTTP connection from client to first byte takes ~400ms (I'm in the US, the server is in Europe). This includes resolving DNS, opening the TCP connection, the SSL handshake, etc.
But if the connection is already established, it only takes ~200ms to first byte.
If I deployed the server in the same region, say US customer <-> US server, this comes down to ~20ms...
That means it's good enough.
Not super ideal but It's a trade-off we're willing to make.
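If you want to see the difference yourself, a quick-and-dirty check (assuming the req package, which pools connections by default) looks something like this; the first request pays for DNS + TCP + TLS, the second reuses the pooled connection:

    {cold_us, _} = :timer.tc(fn -> Req.get!("https://example.com/") end)
    {warm_us, _} = :timer.tc(fn -> Req.get!("https://example.com/") end)
    IO.puts("cold: #{div(cold_us, 1000)}ms, warm: #{div(warm_us, 1000)}ms")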
LiveView (I think) already achieves this optimisation as both the initial content and any future updates come over the same persistent websocket connection.
A better balance would be to build the web app in hybrid mode, where some logic runs in client-side JavaScript. Only the event handlers that rely on data from the server need to be sent to the server.
I had never used LiveView, and this is tangential to the latency consideration here, but two applications that I think could be enabled by siphoning all events to the server are server-side analytics and time-travel debugging (or reconstruction of a session). I am so glad to learn of this tool and will definitely give it a try in my next project.
Awesome, thanks for linking that! I was wondering how these improvements would flow through to Elixir.
Looks like we get the JIT performance improvements for free.
And unrelated to the Erlang/OTP changes, Elixir 1.12 looks awesome. It's a small thing, but Kernel.then/2 for pipelines is such an unexpected little quality-of-life improvement. I love the core team's focus on the UX of the language.
I have just a few remaining complaints about the language at this point, and Kernel.tap/2 and Kernel.then/2 will solve two of them.
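For anyone who hasn't tried them yet, a quick illustration of the two:

    "hello hn"
    |> String.upcase()
    |> tap(&IO.puts/1)     # Kernel.tap/2: run a side effect, pass the value through untouched
    |> then(&{:ok, &1})    # Kernel.then/2: transform the piped value without a named helper
    #=> prints "HELLO HN" and returns {:ok, "HELLO HN"}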
When Jose mentioned a few years back that Elixir the language was more or less "done" or at least stable, and that they would focus on ergonomics and UX going forward, I remember getting a little worried. But I’ve found myself agreeing more and more - there’s not much I miss in the language itself, and projects like Nx, LiveView and LiveBook have shown that it’s an excellent foundation to build very powerful and modern stuff on top of.
As someone who has been programming with Elixir for my day job for the past few years, I find this aspect of the language to be super pragmatic and productive. It's a nice feeling to not have to chase new language features and syntax and focus more on the problem at hand. In addition, I've never felt limited by the language given that the underlying constructs are so powerful (message passing, immutability, pattern matching, etc). Glad that Jose made the decision that he did.
Curious, do you use typespecs and dialyzer? If so, how do you find it?
Elixir checks pretty much every box I'd want in a language, but after dealing with nil in Ruby for years and having fun with TypeScript... I'm feeling more drawn to working with type systems.
I find working with nil in Elixir to be quite reasonable. One way in which Elixir is different is that you have separate operators for when a value must be a boolean, and nil is not tolerated by those operators (and vs &&). This, coupled with the convention of using ? at the end of functions that return a boolean, makes things easier.
The one thing I wish is that people would stop writing boolean functions with is_ in front (that is supposed to be reserved for guards, but not everyone follows that convention).
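To make that concrete:

    # `and`/`or` insist on booleans, so a stray nil blows up early instead of leaking through
    nil and true    #=> ** (BadBooleanError) expected a boolean on left-side of "and", got: nil
    nil && true     #=> nil (the truthy operators tolerate it)

    # convention: a `?` suffix for plain boolean functions, `is_` reserved for guards
    def admin?(user), do: user.role == :admin
    defguard is_adult(age) when is_integer(age) and age >= 18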
If you want a type system, you can probably work with Gleam. In the future, I imagine there could be great interop where you can just drop a .gleam file into your Elixir code base with zero hassle, have parts of your code base that are completely type safe, and let Elixir handle all the risks of IO and other effects.
I'm not a huge fan of Dialyzer myself. I would put it strongly in the "much better than nothing" category rather than the "usable and useful type system" category. I always write specs for my functions and types, and while they sometimes catch bugs, they're not quite as expressive as I would like them to be.
I suppose you could write Elixir in a more type-safe way by using a less polymorphic or recursive style, but the language does not lend itself well to it. Structs and maps are mostly fine, discriminated unions less so.
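To give a flavour of what I mean, a contrived example: specs like this document intent nicely, but Dialyzer's success typing is only loosely tied to them, so a fair amount of misuse still slips through.

    @type lookup_result :: {:ok, String.t()} | {:error, :not_found}

    @spec fetch_name(map(), term()) :: lookup_result
    def fetch_name(names, key) do
      case Map.fetch(names, key) do
        {:ok, name} when is_binary(name) -> {:ok, name}
        _ -> {:error, :not_found}
      end
    end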
Answering from similar experience. I personally use them both, dialyzer as part of the Elixir Language Server and typespecs when I feel something needs more clarity and definition.
Depending on what itches your types scratch for you it might be enough, might not. I've never wanted more type system in my Elixir personally.
I generally verify types only at the boundaries of my application (or very critical modules) using norm[1].
Either you have a strict type system that does not have an "any" type (yes, I'm looking at you Typescript), or you have a flexible type system like Python/Erlang/Elixir and you do runtime type checking whenever it's needed.
I'm writing more Typescript code than I would in Javascript for almost no type safety benefits (but for documentation, it's awesome).
Dialyzer and typespecs are better than nothing, but not by much, because they can introduce a lot of friction along the way for not much benefit.
As another poster said, it can catch the very occasional potential bug but to me at least it's rarely worth the hassle.
I miss static typing after I got back to Elixir from Rust but the BEAM will likely never be statically typed.
To that end, I found using metaprogramming to generate normal and property-based tests to be a much more productive use of my time, with a measurable impact to boot.
"When Jose mentioned a few years back that Elixir the language was more or less "done" or at least stable, and that they would focus on ergonomics and UX going forward"
That's awesome to hear. I think a lot of language communities in the 10+ year range (and look, that's right where Elixir is) could stand to have that recognition. I'm getting kind of saddened by the number of good languages that proceed to festoon themselves with so many new features that they become quite difficult to use.
Since the benefits seem to be linked to ketosis, I believe we’re talking about fasts longer than 1-3 days, since that’s typically how long it takes to reach ketosis. So anywhere from 1-14 days, I would think.
You can reach ketosis without fasting; it's worrying that both this article and its source are not really clear on the methodology.
This is the actual source:
"Here we report that β-HB promotes vascular cell quiescence, which significantly inhibits both stress-induced premature senescence and replicative senescence through p53-independent mechanisms.
Further, we identify heterogeneous nuclear ribonucleoprotein A1 (hnRNP A1) as a direct binding target of β-HB. β-HB binding to hnRNP A1 markedly enhances hnRNP A1 binding with Octamer-binding transcriptional factor (Oct) 4 mRNA, which stabilizes Oct4 mRNA and Oct4 expression. Oct4 increases Lamin B1, a key factor against DNA damage-induced senescence.
Finally, fasting and intraperitoneal injection of β-HB upregulate Oct4 and Lamin B1 in both vascular smooth muscle and endothelial cells in mice in vivo. We conclude that β-HB exerts anti-aging effects in vascular cells by upregulating an hnRNP A1-induced Oct4-mediated Lamin B1 pathway."
I read it too, and there is absolutely nothing new. Pick any protein network database, and all these actors are already linked. Basically, you could have written the same conclusion without sacrificing any mice.
You definitely don’t need 1-3 days to reach ketosis, your previous diet macros have a huge impact on this. My blood ketones are at 1.2 mmol/L right now after 7 hours of sleep.
Hi HN! Creator here. Happy to answer any questions you might have. All kinds of feedback is much appreciated!
Outkit is my attempt to solve outgoing message delivery once and for all. That’s a lofty goal, to be sure, but I’ve spent way too much time setting up the same type of infrastructure many times over, or unsuccessfully trying to trace lost messages, or having to learn a whole new set of quirks when switching from one provider to another. Not to mention the frustration of recreating the same email templates in project after project.
There are existing products that solve some of these problems, sure, but I wanted them all in one package (in addition to some other useful features) and I hope that’s the case for others too.
Note that this article is from 2014, which confused me for a moment. I’ve had this exact setup working for quite a while, using a regular mosh release.