Their custom UI framework might be all fun and games for now, but that will probably change once they realize they need to implement accessibility. Doing this in a custom framework without sacrificing performance won't be easy and is going to require lots of messy, per-platform work. It's not like it's optional for them either. It would be for a simple editor that you can just decide not to use, but they're positioning Zed as a collaboration tool, so making sure that everybody on a given dev team can use it is going to be crucial.
I wish developers finally learned this lesson. As a screen reader user, I'm sick of all these "modern" tools written in Rust (and yes, it's nearly always Rust) where VoiceOver just sees an empty window. It's far easier to dig yourself out of the no-accessibility trap when all you need to do is slap a few ARIA labels on your buttons and sort out some focus issues than when you need to expose every single control to every single OS.
At least there's AccessKit[1] now, which might make the work a bit easier, though I'm not sure how suitable it is for something as big as an editor.
> Currently, many of Zed's themes are largely inaccessible. We are working on a new accessible theme system, which will launch with Zed 1.0
> A11y (accessibility) in Zed will be a long project. Likely lasting far beyond 1.0. Due to GPUI being written from the ground up we don't have access to the same a11y features that Swift, Web-based apps or [insert other language] does.
> Making Zed accessible will be a joint effort between things on the Zed side, and building out features in GPUI.
If you think about it, doing accessibility right -- in particular screen reader work -- requires thinking very carefully about the data model behind the presentation: what you need to declare, and when. And doing that thinking could actually force engineers into building UI frameworks that are not only accessible to people with visual or auditory impairments, but better structured as systems overall.
It's hard work that has to get done, but it's not particularly sexy.
I think it could be sexy! I honestly think a graphical user interface designed from first principles to be accessible would be kind of incredible for everyone else to use too, much in the way many accessibility improvements (in the US at least), like curb ramps and subtitles, make things better for everyone as a side effect.

Honestly, I think an ideally accessible interface might end up working like a modern, easier-on-the-eyes version of Symbolics Genera. Think about it: every UI element on the screen retrievably connected to its underlying data representation, and imbued with a ton of semantic metadata designed to actively facilitate further interactions with the UI (maybe, for instance, a generalized way to attach metadata about related UI elements to each UI element, so the UI itself essentially forms a sort of semantic hypertext web you can navigate behind the scenes), plus composability, rearrangeability, and scriptability! That would probably be a massive boon for disabled people and also amazing for us nerds.

Perhaps it would even focus on representing the interface in terms of that metadata, with the visual elements being abbreviations/overlays on top of it, sort of like how Emacs does graphics and GUI elements?
I'm not surprised that most Rust GUIs aren't a11y friendly; there's no established GUI library yet, and none of them are what I'd call mature.
Not long ago, there weren't any GUI libraries that weren't just bindings to an existing C framework, or that had made it past the proof-of-concept stage of their lifecycle.
I'm sure this situation will improve in the future, and I understand the frustration of someone who relies on a11y features, but you need to understand that everyone will first try to build a solid GUI library before they start adding accessibility functionality.
From a product perspective, re-inventing the wheel for something that — at best, many years from now — will be at parity with native presentation layers in terms of performance, a11y support, user experience, etc. is normally considered a risky move. Many startups have failed in part from pouring resources into shiny non-differentiators.
The only examples of successful products that use non-native UIs either (1) leverage web technologies or mature frameworks like Qt, or (2) are Blender (age 30). Apple did this with iTunes, but iTunes felt unpleasant on Windows, and people used iTunes for Windows in spite of this. I understand the appeal of creating frameworks like GPUI, but the article doesn't explain the relationship to the problem Zed is trying to solve.
There's also Zoom, which apparently uses an internal, heavily modified fork of some Chinese UI framework on Windows (and Qt on everything else). They did go the extra mile and add accessibility support, though; their American clients, particularly the ones in government, healthcare, and education, didn't really give them any other option.
There's also Google Docs, which uses weird tricks instead of rendering straight to DOM. They didn't even bother implementing accessibility in that layer, something which would probably have been impossible back then. Instead, they offer an accessibility mode and mark their entire UI as hidden to assistive technologies. When the accessibility mode is on, all speech is generated by a micro screen reader implemented in Google Docs directly, and the generated messages are sent as text to be spoken by your real screen reader. This is an ugly hack that doesn't really support braille displays very well, so they later implemented yet another layer of ugly hacks that retrofits the document on top of an actual DOM.
Your point still stands though, these are exceptions that prove the rule.
Interesting, thanks! It looks like Figma (a clear success) also took the "just give me a canvas" route:
"Pulling this off was really hard; we’ve basically ended up building a browser inside a browser. […] Instead of attempting to get one of these to work, we implemented everything from scratch using WebGL. Our renderer is a highly-optimized tile-based engine with support for masking, blurring, dithered gradients, blend modes, nested layer opacity, and more. All rendering is done on the GPU and is fully anti-aliased. Internally our code looks a lot like a browser inside a browser; we have our own DOM, our own compositor, our own text layout engine, and we’re thinking about adding a render tree just like the one browsers use to render HTML."https://www.figma.com/blog/building-a-professional-design-to...
The value proposition of a "boil the ocean" approach to UX frameworks is clearer for browser-based apps than native apps. That said, 7 years in, Figma apparently has a long way to go:
"To repeat a familiar refrain: We still have a lot of work to do! As we continue to improve access to our own products, we’re also advancing our understanding of what our users need to design accessibly."https://www.figma.com/blog/announcing-figjam-screen-reader-s...
> The only examples of successful products that use non-native UIs either (1) leverage web technologies or mature frameworks like Qt, or (2) are Blender (age 30). Apple did this with iTunes, but iTunes felt unpleasant on Windows, and people used iTunes for Windows in spite of this
Successful non-native UIs? Microsoft Office. Every single Adobe product, including the ones they got from Macromedia. I feel most professional software falls into this category. Spotify originally launched with a completely custom UI. On Windows, it is more difficult to name successful products that used native UI than those that did not.
I always assumed it was native, since the accessibility is there, but these are all Microsoft APIs after all, so they might just have implemented them on their own.
Are there AI-based solutions that could add assistance at a more generic level, without requiring software that has "deep" knowledge of the window architecture (and the actual text in it, etc.)? My understanding is that this is how tech like VoiceOver works: it knows the actual window definition and all its elements at a programmatic level and can take advantage of that.
I'm not asking if they're already available (although that would be a nice option), but for projects like this that wanted the speed of rendering on the GPU (at the possible cost of, as you said, VoiceOver seeing an "empty window"), it would be at least a fallback position.
(So does this mean that ALL content that renders through the GPU, such as games, is inaccessible to you? If so, I'm sorry...)
Not sure if this might help you but I have an Apple shortcut defined on my iPhone called "GPT Explains" that is activated by a double-tap on the back of the phone (which you can assign, as you probably know, in Accessibility settings)- it takes a screenshot, ships it off to OpenAI and returns with a description of what it's seeing, any to-English translation of non-English text, and any counterarguments to any claims made in a meme, etc. (yeah, the prompt for this is kinda wicked, lol). If this is helpful to you, I can give you a link after I remove my OpenAI key (you'd have to provide your own).
iOS already has such a feature, but it's obviously not 100% accurate, not real time, and not great for battery life.
It's good enough if you have to click a broken "I accept your terms and conditions" checkbox, but nowhere near good enough to daily drive your phone with. In other words, a band-aid solution for when everything else fails, mostly for situations where the app you're trying to use is mostly usable, but has an accessibility barrier preventing you from carrying out a crucial step somewhere.
There's also vocr on Mac (and equivalent solutions on Windows), which recently got some AI features, but it doesn't even recognize control types (a very basic feature of any screen reader), just text. Again, good enough to get you through most installers where all you're doing is clicking "next" ten times, probably good enough to get you through the first-run experience in a VM that doesn't yet have a screen reader installed, at one tenth the speed of a sighted person, but that's about it.
Adding control type recognition seems like THE most trivial add-on feature to implement with an AI-powered solution. I mean, at that point it's just about training data... "Here are 100 different button looks. Here are 100 different radio button looks. Here are 100 different checkboxes. Here are 100 different dropdown menus." etc.
Judging from the current state of things, which to VoiceOver is an empty window with no elements whatsoever: if they have thought about accessibility, they definitely don't consider it a priority. With such a monumental task, I'd be willing to excuse some slip-ups, but there was literally zero work put into this.
Saying “zero work put into this” and noting they didn’t prioritize it is different from implying they’re naive and unaware of how complex and intricate accessibility can be. I’m not actually interested in whether they support it or not; I just think your comment is a poor and/or lazy attempt at dunking on them. It contributes to making this site a less interesting discussion and turning it into /. 3.0.
(For what it’s worth, if I were building a next-gen (attempt at) developer tooling, I would punt on accessibility at the start as well. It sucks, but that’s a much smaller segment of the market that you don’t _need_ to serve immediately. It only matters that you eventually get there.)
Also, even if they did use native widgets that come with integrated accessibility features, would those actually work as intended for a multi user collaborative editor like Zed?
Imagine a group of four people live collaborating on a file in Zed. How do you present the actions that are being taken by everyone so that a screen reader can understand what is going on?
Mute everyone except the user? Speak every keystroke pressed by everyone all of the time? Announce line changes made by others when they pause for a while / when they move to another line?
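The "announce line changes when they pause" option could be prototyped with a simple debounce. Here's a toy Python sketch of that idea (the class, names, and timings are all hypothetical, not anything Zed actually does): remote keystrokes supersede each other, and an announcement is emitted only once the author has gone quiet.

```python
import time
from typing import Callable, Dict, List


class RemoteEditAnnouncer:
    """Debounce remote collaborators' edits: announce a changed line only
    after its author has been idle for `quiet_secs`. Hypothetical sketch."""

    def __init__(self, quiet_secs: float = 2.0,
                 clock: Callable[[], float] = time.monotonic):
        self.quiet_secs = quiet_secs
        self.clock = clock  # injectable for testing
        self._pending: Dict[str, dict] = {}  # author -> {"line", "at"}

    def on_remote_edit(self, author: str, line: int) -> None:
        # The latest edit supersedes earlier pending ones by the same
        # author, so nothing is spoken keystroke-by-keystroke.
        self._pending[author] = {"line": line, "at": self.clock()}

    def poll(self) -> List[str]:
        # Emit announcements for authors who have paused long enough.
        now = self.clock()
        ready = [a for a, p in self._pending.items()
                 if now - p["at"] >= self.quiet_secs]
        messages = []
        for author in ready:
            p = self._pending.pop(author)
            messages.append(f"{author} edited line {p['line']}")
        return messages
```

Even in this trivial form you can see the design tension the parent comment raises: any threshold you pick trades verbosity against latency, and that's before you deal with braille, cursor collisions, or simultaneous edits to the same line.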
In short, I think if accessibility for screen readers were to happen for Zed it would take a monumental amount of effort to make it usable, regardless of whether they are using native widgets or not.
You would not believe the hubris of our profession in general around "how hard can writing a GUI toolkit be?" and also how little most engineers seem to think about accessibility when scheduling and estimating and architecting.
Backdrop: helped ship accessibility features onto the Google "Nest" Home Hub, somewhat last minute. And then watched it all get re-written for Fuchsia w/ Flutter (which of course had no accessibility story yet) and, yeah, last minute again.
No, I’m aware of how people downplay the scope involved in building a GUI framework. I’ve written your comment no less than (I estimate) 10 times on this very site.
I give these devs the benefit of the doubt given their prior body of work.
They've thought about it, and have a "follow this link to discuss" that links to the wrong thing so you can't actually follow the link to discuss it. Unless you're interested in a closed issue regarding back and forward buttons.
To be honest, the previous editor they worked on was Atom, and they didn't care then either, even though their job would have been much easier there. It's definitely in the realm of possibility that they have next to no accessibility experience.
I'll take this seriously since lots of people probably wonder this even if they don't bother to ask it.
Disability isn't a permanent state that you start with. It's something that can happen to you 5 years into your career, or 15. It can also be temporary - you break your leg and now you need crutches, a cane or a wheelchair until you heal, for example.
Accessibility also helps people you wouldn't traditionally classify as disabled: designing UI to be usable one-handed is obviously good for people who have one hand, but some people may be temporarily or situationally one-handed. Not just because they broke an arm and it's in a cast, but perhaps because they're holding a baby in one arm, or their other hand is holding a grocery bag, or they're lying in bed on their side.
Closed captions in multimedia software or content are obviously helpful for the deaf, but people who are in a loud nightclub or on a loud construction site could also benefit from captions, even if their ears work fine.
So, ultimately: Why should someone who's used to using a given editor have to switch any time their circumstances change? The developers of the editor could just put the effort in to begin with.
Not to defend GP, but if I suddenly went blind, I really don't know if it would take longer to learn how to use my existing tools with a screen reader or to learn new tools better designed for it. It would be a completely new and foreign workflow either way.
This is not about what tools you want to use, but what tools you're forced to use by your team.
If this were a simple, offline editor, a decision not to focus on accessibility would be far easier to swallow. They seem to be heavily promoting their collaboration feats. If those on your team collaborate using Zed and expect you to do the same, other tools aren't an option.
Have you considered that disability is not always permanent? What if you were temporarily blind? Or could only see magnified or high-contrast UIs? Or you broke both arms, but your feet are fine for using your USB driving-sim pedals as an input device for 10 weeks while your arms heal? Would you still want to learn new workflows that you'd only use for a few months?
A11y isn't about helping one set of users (those who have completely lost their sight), it's about helping a whole spectrum of accessibility challenges - not by prescribing boxed solutions, but giving the user options customizable to their specific needs.
The amount of time one is willing to set aside to learn new tools & workflows isn't worth it if they are to be used for a limited period. It's much better to use the old tools one is familiar with in those cases.
ramps (by parents with babies in their strollers), subtitles (by people learning languages or in loud environments), audio description (by truck drivers who want to watch Netflix but can't look at the screen), audiobooks (initially designed for the blind, later picked up by the mainstream market), OCR (same story), text-to-speech, speech-to-text and voice assistants (same story again), talking elevators (because it turns out they're actually convenient), accessibility labels on buttons (in end-to-end testing, because they change far less often than CSS classes), I could go on for hours.
For user interfaces specifically, programmatic access is also used by automation tools like AutoIt or AutoHotkey, testing frameworks (there's no way to do end-to-end testing without it), and sometimes even scrapers and ad blockers.
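As a toy illustration of why accessibility labels make sturdier hooks than CSS classes for end-to-end tests (the widget tree and names here are made up, not any real framework's API): a test that locates elements by accessible label survives a visual redesign, while a class-based selector breaks.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Widget:
    role: str                       # e.g. "button", "textbox"
    label: str = ""                 # accessibility label exposed to AT
    css_class: str = ""             # styling hook; churns with every redesign
    children: List["Widget"] = field(default_factory=list)


def find_by_label(root: Widget, label: str) -> Optional[Widget]:
    """Locate a widget the way a screen reader or test framework would:
    by its stable accessibility label, not its styling class."""
    if root.label == label:
        return root
    for child in root.children:
        hit = find_by_label(child, label)
        if hit is not None:
            return hit
    return None
```

The same property that makes the UI legible to a screen reader (a stable, semantic name per control) is exactly what a test suite or automation script wants, which is why the two concerns reinforce each other.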
Getting ratioed because people think “my app must follow accessibility for… oh.. uh… because big company does so we must do same ooga booga smoothbrain incapable of critical thinking”
Waste of time unless you are aiming your application AT people who use screen readers e.g. medical or public sector. EVEN THEN has anyone actually tried? Even accessible websites are garbage.
I'm guessing this was downvoted for being rude, but I think there is a valid question here. It looks like Zed is putting a lot of work into minimizing the latency between a key being typed and feedback being displayed on a visual interface which is easily parsed by a sighted user.
If a programmer is using audio for feedback, then there is probably some impedance mismatch in translating a visual interface into an audio description. Shouldn't there be a much better audio encoding of the document? There would also be many wasted cycles pushing around pixels the programmer will never see. An editor made specifically for visually impaired programmers, unencumbered by the constraints of a visual representation, would be able to explore the solution space much better than Zed.
This has been tried in Emacspeak[1] and doesn't work that well in practice. I'm in the blind community and know plenty of blind programmers, none of whom seriously use Emacspeak. VS Code is all the rage now, and for good reason, their accessibility story is excellent, they even have audio cues for important actions (like focusing on a line with an error) now.
Before anyone jumps on a new text editor bandwagon, just a note on the license they have you agree to in using it:
"Customer Data consisting of User content created while using the Solution is classified as "User Content". User Content is transmitted from Your environment only if You collaborate with other Zed users by electing to share a project in the Editor.
[...]Zed's access to such User Content is limited to debugging and making improvements to the Solution."
No commentary from me. Come to your own conclusions.
I would like some commentary from you, sounds very reasonable to me, I don't understand what the problem is.
Of course if you choose to share your project with others for collaboration, the content of that project is transmitted from your machine, what else would you expect? How would it work otherwise?
It seems I misread the above comment; however, my point was that for many it is not instant, since so many buy space from GitHub et al. Probably most of the small companies.
They already have trust in place there. Do they trust Zed too?
The thing is every time you load company proprietary code and/or sensitive data you better make sure you don’t hit the share button as well.
Not the end of the world, but also something we didn't have to think about until recently: that pushing a button (other than delete) could potentially get you fired.
This question of who gets to see your company data is I think a lot more thorny these days than ever before.
You're joking about email, but that's of course the reason why companies will pay a lot to host email on premise instead of relying on cheaper offsite solutions. I think Exchange Server is Microsoft's biggest foot in the door for access to companies that otherwise wouldn't care much about the other Microsoft services.
Having a third party look at every email you're sending around is just a non starter for many businesses.
Getting to the same level of trust for an editor, where your code is shared with the editor company every time you want to show it to a colleague, is not trivial at all.
I tried out the editor because of this post: it looks very promising. Unfortunately I can't use it because it doesn't have support for remote hosts/devcontainers. That feature of VSCode is critical to my workflow, as I don't actually want to program on a Mac host, but rather use my Mac as a portal to the VMs and containers I actually code on. It massively helps with segmentation of my projects and improves my security posture (by not having a development environment or dependencies on my actual host machine).
I use development virtual machines to segment projects and clients also although I just run my editor in each VM. What's the benefit of the vscode remote hosts/dev containers over a normal remote session?
In my experience it's mostly the input lag when dealing with the remote environment. I'm quite sensitive to delays in editor input, so I prefer something native. If you aren't sensitive, or your remote connection is fast enough for your preferences, I don't think there are many other advantages.
In addition to lag and a poor visual experience as others have mentioned, there's the issue of two (or more) operating systems with two separate shells/UIs. When using VMs or a VDI/remote desktop the cognitive overhead of remembering which OS shell I'm in for the purposes of keyboard shortcuts, clipboard, switching between programs and etc impacts my productivity significantly.
VSCode (or any other editor with similar features) shell is great because it completely separates the editor environment from the dev environment. I can run as many instances of VSCode as I want each with their isolated dev environment of the target host, but all managed by one shell, one window manager and one clipboard.
I find that the experience doing this is essentially unbearable due to graphics problems on retina displays, input lag and the like. I also figure that the kind of person who wants an editor which is designed to paint as fast as possible probably wouldn’t want to have a whole VM and spice/similar client sitting between them and the editor.
Fantastic interview where you really get into the mind and mindset of the developers for how they approach development from many different angles. Highly recommended.
I only have one disagreement with them. . .
> the perfect name for a text editor in Zig is already taken: Zed
> > the perfect name for a text editor in Zig is already taken: Zed
> No, it’s “Zag”. ;)
Except that `zed` contains `ed`, the precursor to `ex`, `vi`, and `edlin`, yet still around:
`ed` is a line editor for Unix and Unix-like operating systems. It was one of the first parts of the Unix operating system that was developed, in August 1969. It remains part of the POSIX and Open Group standards for Unix-based operating systems, alongside the more sophisticated full-screen editor `vi`.
Assuming you mean GNU ed (which is the one that has had a recent release): https://fossies.org/linux/ed/ChangeLog (seems to be a web version of the file `ed-1.20.1/ChangeLog` within the ed release itself).
I don't use Zed, but I noticed José Valim using it when he was live streaming a coding session. I mostly use VSCode, but one feature he used in Zed was really compelling: he did a "Find All", which was similar to VSCode in that it opened a results pane with snippets from all the files that matched, but then he was able to edit the snippets directly from there, and was able to use multi-cursor editing and all the other usual niceties. That was pretty neat and impressive to me, since in VSCode you have to actually click the search result to open the file, and then edit it there. It wasn't quite enough to make me switch, but I've been thinking about it from time to time whenever VSCode annoys me.
In VSCode if you do super-shift-f for find-in-project, at the top of the results pane, just right of where it's marked "x results in y files" there's a link button titled "Open in editor" which I believe does what you're describing. I'd actually forgotten about it until I read your comment so I'll start using it again now.
Oh, is that what they mean? I set "Search Mode" to "newEditor" immediately whenever I configure VSCode on a new computer, since the default behavior of opening in the side panel is such hot garbage. I entirely forgot that some people don't have that and took for granted in my post that everyone knew about opening the results in an "editor".
But the point is that that "editor" is non-functional. It's nice for browsing the results, and has syntax highlighting and surrounding context, but you can't actually edit from there. You can only use it to open the source file and then edit the source file.
In Zed, the search results "editor" is actually functional. You can make changes to the text that you see from the surrounding context, right in the search results, and then hit save, and have those changes propagated to all the touched files.
So, say you update a function to take another argument, and you want to update your codebase appropriately. Well then you do a global search for that function name, and then scan down the results list. The irrelevant search results (maybe you mention the function in a comment, but aren't actually invoking it) you can skip. The complicated updates you can open the source file like you do in VSCode. But the trivial ones where you can see what you need to pass as the new argument, you can just update right then and there.
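A toy sketch of how that kind of editable search-results buffer could propagate edits back to files (purely illustrative; Zed's actual multi-buffer implementation is surely more sophisticated, and this sketch assumes edits don't add or remove lines): each excerpt remembers which file and line range it came from, and "saving" writes the edited lines back over those ranges.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Excerpt:
    path: str         # file the snippet came from
    start_line: int   # first line of the snippet in that file (0-based)
    lines: List[str]  # editable copy shown in the results buffer


def save_excerpts(excerpts: List[Excerpt],
                  files: Dict[str, List[str]]) -> None:
    """Propagate edits made in a search-results buffer back to the files.
    `files` stands in for on-disk contents, as lists of lines."""
    for ex in excerpts:
        target = files[ex.path]
        # Overwrite the original range with the (same-length) edited copy.
        target[ex.start_line:ex.start_line + len(ex.lines)] = ex.lines
```

The interesting part of the real feature is everything this sketch omits: keeping the ranges valid while the underlying files change, handling inserts and deletions, and supporting multi-cursor edits that span several excerpts at once.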
I only half-conveyed what I was aiming to; I'm able to do what you're describing by editing the search-results scratch-file then saving it. The changes propagate to the target files with the save.
I've had a look though, and you were right: it's due to an extension that I can save from the scratch file:
Mmm, -ish. I searched for an extension in VSCode when I saw it in Zed, and that extension came up. But it looks like you still have to open the editor tab to save it and stuff. It was much more streamlined in Zed. You just made the change right there, and I think if you hit "save" on the search results it would save to all the files that you touched.
Logically it doesn’t, but in actual good-faith communication people usually follow Grice’s relevance maxim[1]: the points they mention are relevant to the conversation and the point they’re making. Thus, if neither Linux nor Windows support are planned, and the question is about Windows support, saying that Windows will come after Linux would be (vacuously) true, but the mention of Linux would be irrelevant.
(Notably, communication coming out of or through legal counsel cannot be assumed to be good-faith; the premise of the court system is that the best we can achieve is two bad-faith adversaries and a neutral arbiter. But that's not what we are dealing with here.)
Pedantry aside, I think I remember one of the developers saying they do plan on Linux support at some point in one of the previous Zed threads here. There were also some “small team” and “laser-focused” and “best possible experience” in that comment, but they did say outright they were planning on it. Though plans change, I think that’s the best we could hope for at this point, as I doubt even they themselves know more about their future.
Hey! I'm the mentioned Thorsten. Linux is actively being developed. Here's a PR from 2 days ago that shows file-opening in Linux starting to work: https://github.com/zed-industries/zed/pull/7852
And so far, Linux support has been a big community effort. I think more community members have contributed to Linux support than Zed teammates. Very cool to see.
So: Linux is in the works. Windows will probably happen after that, or if someone in the community wants to emulate what the Linux users are doing and start before that.
I’m trying to get to a point of convergence with 3 different contributors hacking on it right now, and make it a PR. There is also a new “windows-port” channel on Discord. Finally, we are waiting for Zed team to confirm if they are interested to have Windows port in tree at this point.
Love how much thought is being put into what you “gold-plate”. I’ve always felt that my best work comes around on round two (or three or four…).
Curious what you are planning for the ability to script the configuration? I haven’t played with zed much yet; is it possible today? Would something like Neon [1] help bridge the gap from VSCode and old Atom users?
> This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable. The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one.
- Brooks, Mythical Man Month
It is always interesting to see v2. I have witnessed cases where they are catastrophic due to feature overload but also cases where they are phenomenal because they are streamlined and lean.
I also wonder, with all the tooling available now, at least in the space of web apps, whether this quoted notion of danger applies as much to v1s, as I have seen remarkably bloated v1s these days. I often have to purposefully seek out tools that do less.
I tried Zed, and it felt similar to VSCode. I know there are multiplayer features that are better than live share, but on the surface, I needed convincing to switch.
I would be more inclined to use Zed if it could displace Xcode. It pains me to use it, from deleting derived data or cleaning the build folder to random crashes.
Contrasting the DX with Android Studio, it's night and day. I've always wanted an Android Studio-like experience for iOS development.
Part of me thinks it could be related to our project using CocoaPods. I've always appreciated how nicely Gradle worked to install dependencies, and the DX always lacked in Xcode. SPM works similarly, but I have yet to try it on a medium-sized codebase. So, my frustration could be related to CocoaPods.
Apart from package managers, I like the auto-import features for frameworks in Android Studio, as well as the "fix it" UX, which is similar to VSCode's. Having an integrated terminal is something Xcode still lacks, and maybe a better UX than a plist for configuring projects; I know XcodeGen/Tuist and other tools exist, but something built-in would be nice for fast project config.
From personal experience, SPM is much smoother in moderately complex projects. Gradle (and to a lesser extent, Android Studio) drives me a special kind of crazy, especially when a project has sat for a while and Gradle updates have accumulated and Gradle version compatibility of dependencies has diverged.
This is somewhat exacerbated by the need to import so many libraries in Android projects. My Mac/iOS projects have between a fourth and sixth as many dependencies as their Android counterparts.
CocoaPods though… ugh. Horrible. Was thrilled to part ways with it several years ago.
I really love native apps, but I'm stuck using VS Code for now. It just kills me to see how much power goes into blinking the cursor in VS Code. I tried Zed for a bit but couldn't make it work for me. I loved that it's so lightweight and fast: looking at all my VS Code processes, it's 3 GB vs Zed's 300 MB, and 1/10th the RAM is a meaningful difference. However, I really need the Jupyter Notebook support that VS Code provides. I'm also too used to doing remote dev on an Ubuntu box from my Mac, and VS Code works great for that. I hope they stick with it long enough to get to supporting my workflow.
I guess that I'm lucky. I presently have multiple VS Code projects open (local & remote), and a Notebook running, and rarely break ~650MB -- less than 1% of my Macbook's memory. Maybe everyone has more extensions on than I do or something.
Had a look at the About page, and the live coding feature does sound useful. I'm sure the guys are excited; it's a fun project. You get to write algorithms, optimise performance, and do GPU programming. But who needs another text editor that will probably never reach feature parity with Vim and a terminal multiplexer?
I would guess most developers do not use vim. Pretending that vim is the universally loved editor that every developer has agreed upon using seems pretty disconnected from the real world.
VS Code came up out of nowhere pretty recently, and is used by a lot of people, so that shows that there is (or was, but still post-vim) opportunity for a new editor. Whether Zed is able to gain momentum to cater for the long-tail that other developers do remains to be seen, but I'm pretty keen to see more products trying to compete for users.
I found a 2021 Stack Overflow survey listing the percentage of respondents using each IDE [0]. There's overlap, since developers can use more than one, but Visual Studio Code is at 71%, Visual Studio at 33%, Notepad++ at 30%, and Vim in 5th place at 24% (IntelliJ is at 29%).
VSCode came out of nowhere and took the market share previously held by Atom and Sublime Text 2.
The folks I know who use Emacs or vim (including me) are by and large still using those tools since before Atom and Sublime got popular. We just have LSP, now, like VSCode does.
And VSCode came about mostly from Microsoft's push. They wanted an "in" for non-Microsoft languages, and VSCode gave them that, as well as a Microsoft-provided environment for .NET on Linux/Mac. This meant that JavaScript/Node developers could still be folded into Microsoft tooling (telemetry) without forcing them into the crushing heaviness of Visual Studio on Windows.
It was a really slick play. Still, it was an overall positive for the profession, and you can still use VSCodium without the telemetry.
I moved from vim to vscode. The main driver was an easier time writing extensions.
I've tried to switch to neovim twice since and gave up. Lua is nice and I did get into that the first time. Crashing and errors were rife though. The second time, which was very recently, it seemed like everything had changed again, all completely new plugins etc. And I struggled to do some basic stuff, I guess I've just forgotten the more in-depth file/window management. So it goes.
I'm primarily developing Elixir and Javascript and a few years ago I switched from Emacs/Spacemacs to VS Code. What pushed me over the edge were projects like VSpaceCode (basically replicating Spacemacs keybinding and menu system) and edamagit (replicating Magit). I've tried Zed and I'm quite optimistic about it, but so far not willing to put the work in to replicate enough of my setup.
I feel like these classic editors are good to know just for general education like writing cursive. If you end up on some barebones system you will know how to edit a config file and exit. But for day-to-day it's all IDEs nowadays.
The line between "IDE" and "text editor" has blurred to the point where I'm not sure they're super useful terms anymore. When I used vim up till about 2019, I had it configured with all the toys to the point where it wasn't all that far off of where my VSCode setup is today.
Also, you know, insert Emacs joke here assuming if you still have enough RAM to post, etc.
It's even blurrier when you realize how many people are using LSP. Emacs with Eglot is literally just a different UI in front of the same IDE tooling as VSCode.
The editor wars are over and everyone won because of separation of concerns. Yay!
I don’t believe there’s a useful distinction, at least for more advanced editors. For instance, I’m not aware of anything you can do in, say, PyCharm that you can’t do in Emacs or Vim. I don’t mean that in a curmudgeonly way like “nothing I care about, because we don’t need those fancy features to write a pageful of Fortran on my 1997 laptop”. I mean, I don’t know of a single feature of any kind that doesn’t exist on essentially all modern editors.
In Emacs your repeat commands are C-u <number> <other command>. Your selection would be C-SPC C-s func M-b.
M-b causes it to go back to the beginning of the word and the search ends; you can still adjust the selection with other movements. It's not as tight as V?<regex>, but it's still composable.
Or you use evil mode and get those vi-style bindings in the editor.
EDIT: Actually, playing around with your `V?` command doesn't it select that entire line rather than to that pattern? So the emacs equivalent would actually be: C-space C-M-s ^func C-e
Well, ok. I'm specifically talking about what might count as an IDE feature. Vim and Emacs can run in a terminal, but that's not a defining characteristic of an IDE. More like, everything can do interactive debugging, and syntax highlighting, and code completion, edit-time error flagging, etc. etc.
Parent seemed to believe that it's a waste to write another editor/IDE because vim already exists. Prevalence of other more popular editors disagrees with that.
In my opinion, I wish more editors would become a front-end to Neovim, which can run in headless mode, allowing you to not emulate Vim at all but instead take full advantage of it and all its plugins. It still kills me that JetBrains chooses to maintain what Vim users call an awful plugin that simulates Vim, when they could just implement a Neovim front-end natively in their IDE, giving them the edge of "we fully support Neovim and all it brings", which is a much bigger selling point than "we have a Vim-like plugin".
I did not know this. My Google/Brave/DDG foo is failing me. Do you have a link to the full reference on how one would use headless Neovim to provide a full headless Vim with a GUI wrapped around it?
I'm thinking it would be nice if my Lazarus IDE supported Vim commands.
> But who needs another text editor that will probably never reach feature parity with Vim and a terminal multiplexer.
Feature parity with Vim is not meaningful in my opinion. LSP evened the playing field enough for all editors to the point where you can daily drive anything and be no less productive than most.
Use whatever you like and helps you get the job done. That includes Vim too, but I'm getting sick and tired of people acting like using Vim is some kind of irreplaceable boon. Becoming a better thinker will make you an exponentially better programmer than any tool.
Vim is just an example. My point was that code at the end of the day is just text, and there's only so many features you need to be able to write/compile/edit efficiently in 99.9% of the cases. Any new power tools for text editing will end up taking more time to learn and remember than be of use.
I'm not so sure. The fact that we're editing the code as text, and not as mutations and annotations on its syntax tree, has always struck me as a sign that we're still in the stone ages when it comes to expressing ourselves precisely to a computer.
The basic model of computation is the Turing machine, and it's a symbol-manipulating one. So editing text is at the core of what computing is. You could go a step higher and edit tokens, and that's what Vim does, albeit imperfectly, since tokens are not a finite set.
I'm not sure what you mean by "tokens not being a finite set". I suppose there's the theoretical issue of token length being potentially unbounded, but whatever problems your editor has with that, your lexer will likely also have. For any finite length file, there is a finite number of tokens, and once you parse it, you've got a much smaller list of symbols plus a convenient address for each one. I don't think it's a practical issue.
Vim's understanding of tokens makes some reasonable assumptions, but unless you've configured the textobjects plugin to talk to a properly configured language server, you're working on vim's presumed tokenization and not a tokenization that's native to whatever the underlying language is. Helix tries to bundle this in by default, but it still doesn't feel like a first class citizen.
As for Turing machines, not since the '80s have the tokens that appear in our editors been the tokens that are manipulated by our processors. There are typically myriad bytecode translations, compiler optimizations, or parser hijinks between what you're editing and what you're running. It's the AST that matters to the code author, and the AST is a tree, not a string.
We need to get to the point where you can directly annotate on a function parameter:
> this function is slow when this parameter is > 100
...such that the annotation sticks to that parameter, however the viewer has chosen to render the text.
The best we can do at present is to sprinkle some text nearby and leave the problem of deciding which parameter and which function are referenced as an exercise for the reader. This then necessitates preserving the way the text appears, which prevents us from presenting it differently based on the viewing context (e.g. maybe the reader prefers different units, time zones, or a language that flows text differently than the author's).
> you're working on vim's presumed tokenization and not a tokenization that's native to whatever the underlying language is.
LSP can be the foundation to a paradigm of code editing instead of text editing. I want the kind of integration we have with Smalltalk IDE like Pharo and the SLIME plugin for Common Lisp and Emacs.
> It's the AST that matters to the code author, and the AST is a tree, not a string.
I'd take variable inspection (not sure that's the real term) before AST manipulation. More often than not, I'm more worried about the result of data processing than the processing itself. Such capability exists in live programming environments, and I believe this kind of rapid feedback is a much better experience.
If you're using languages that are themselves closer to the syntax tree, then you're part of the way there. Lisps do that. Editor features for most lisps typically include manipulation of syntax-tree-level features like moving forms around. (Emacs comes to mind here.)
I can't think of a nice way to express what I think of vim, but I think your general point is apt.
Some pretty full-featured editors already exist that people are happy with or, perhaps, have at least gotten used to. Where does a new editor fit in?
It's neat that it's "multiplayer" but that's an edge case.
I'm also not convinced by the business model. Do people really want channels, calls and chat integrated with their code editor? Personally, I have an almost visceral negative reaction to the idea but maybe that's just me.
This might be one of those things like monitor refresh rate where you can only really tell the difference if you've experienced the better version for a while, but I haven't ever felt slowed down by the speed of VS Code.
I do think it's something like that. Things can quickly get to the speed that feels "fast enough" that they don't feel subjectively slow, but can still be sped up by a couple orders of magnitude. If you Ctrl-F something and it takes a few hundred ms, you probably don't feel like it was slow, but in reality the "speed of light" for this operation was probably orders of magnitude faster than it happened on your device. Once you experience something close to the theoretical speed, it's really hard to go back to something you thought was perfectly fine before. And you start noticing that everything feels slower than it "should".
I think of something like grep, where if I tried to grep a large hierarchy it'd be really slow and I'd sorta reason to myself "well yeah it's a lot of files in a large tree, of course it'll be slow!". Then I installed ripgrep and suddenly what I thought was a reasonable speed was shown to be unreasonably slow!
Every now and then I switch back to Apple's Terminal app, and I'm blown away at how much faster it is at just typing than iTerm, and how much nicer that is.
It is actually faster and snappier to use on a beefy M2 Max; on the same hardware, Zed starts up in half the time. The difference is very noticeable. Of course, it is much less configurable and doesn't work with a lot of things VS Code can do easily.
> Startup would have to be terrible for me to bother.
That's just the first noticeable difference.
But there have been instances lately where opening, editing, and saving a file took me less time in Zed than just opening it in VS Code and waiting for it to be ready for input.
> VS Code starts in under a couple of seconds
I am talking about relative speed differences. Imagine you open the same codebase and the editor is ready in half a second; going back to the "slower" one would be unbearable.
Now, admittedly, Zed is nowhere near the extensibility of VS Code, so it is probably doing less, and that's likely where much of the speed difference comes from, but it can't really be overlooked once you've experienced it.
I have. I have some huge Markdown documents I've needed to load and VSCode cannot render them without becoming super slow. In contrast, vim gets it done.
Am I wrong in thinking that "topic hidden" means this specific post, and that future posts related to Zed (such as a cross platform announcement) would still show up for them...?
Or even worse, you own a Mac (say, through work), but aren't entirely in the Apple ecosystem and don't want to relearn everything and fight muscle memory every time you switch devices.
I have (but of course Apple makes it hard to do, so you need third-party software). But that doesn't help with special Mac-only software that has its own style, like Arc or Zed.
Most people who say “all editors have vim keybindings just use that” miss the fact that bindings or not, a lot of vim’s functionality is just not available on other editors.
And that Vim is more than its keybindings, despite the often repeated jokes.
Tabs, window splits, search and countless other details may work very differently, and usually not completely with the keyboard.
I think they mean compared to cross-platform apps that feel equally weird on every system.
There’s some talk in the Mac world about “Mac-assed Mac apps”. I use BBEdit as my main editor because it feels right. The default shortcuts are like every other Mac app. You can use standard Mac tools like AppleScript to automate it. It uses the same fonts, widgets, and menu systems as everything else. It’s made for that environment and it shows in a million ways.
VSCode is a marvel of engineering and I love that it exists. It also feels uncanny-valley “off” on my Mac in ways that make my brain itch, so I don’t use it. Same with Obsidian: it’s a brilliant app, but it bugs me. It’s not bad in any way, it’s just not the right choice for me.
One of the people I admire for their programming skill works on a raspberry pi. I use my work MacBook through ssh because I prefer Sway to MacOS's window manager. We are many and varied.
I used Zed for a while. The biggest performance improvements I noticed, compared to VS Code, were in start-up times and the opening of files. Sure, Zed feels snappy in those areas, but I feel VS Code is simply not that bad speed-wise when it comes to everyday coding, especially in the era of M1 Macs. Zed may win the battle in the longer term, but I feel slow performance has to truly annoy the fuck out of the user to convince them to make the switch. Right now, I am sticking with VS Code.
For our team, VS Code is reaching that point. The MacBooks can't keep up with VS Code's decay.
Unfortunately, Zed lacks good defaults (like a way to change tabs without the mouse) and certain VS Code features like snippets. That makes it difficult to transition a team that has become dependent on VS Code and has absolutely no interest in spending its days configuring tools.
The part about just using the underlying primitives instead of relying on a 3rd party’s abstraction layer really resonated with me. As often as not I find myself fighting with the abstraction and having to go down a layer, where there is much greater freedom.
>I don't know about zero cost — every abstraction has a cost, I guess
Part of the cost Rust incurs is compile time. But thanks to LLVM, it seems you can in general have zero-cost abstractions, if by that we mean high-level syntax with low-level performance. It feels like a golden age of language design at the moment.
Anyway, the “let’s do it right, and do it ourselves” philosophy is attractive and I’ll be downloading Zed to check it out.
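To make the "zero-cost abstraction" point above concrete, here's a minimal Rust sketch (the function names are mine, purely illustrative). The high-level iterator pipeline typically compiles, via LLVM, down to the same machine code as the hand-written indexed loop:

```rust
// High-level style: iterator adapters, no explicit indexing.
fn sum_of_even_squares_iter(xs: &[i64]) -> i64 {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

// Low-level style: a manual loop doing the same work.
fn sum_of_even_squares_loop(xs: &[i64]) -> i64 {
    let mut total = 0;
    for i in 0..xs.len() {
        let x = xs[i];
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4, 5, 6];
    // 2*2 + 4*4 + 6*6 = 56
    assert_eq!(sum_of_even_squares_iter(&xs), 56);
    assert_eq!(sum_of_even_squares_iter(&xs), sum_of_even_squares_loop(&xs));
    println!("ok");
}
```

With optimizations on, the two versions generally produce near-identical assembly; that equivalence, rather than literally "no cost anywhere" (compile time is very much a cost), is what the zero-cost claim usually means.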
There was an editor whose name I forget, might have been Omnivim, which was coded in ReasonML but compiled natively to a UI, and supported VS Code plugins, which is still wild to me.
Anyway, development kind of died off on it, it had insane potential in my eyes.
Hopefully Zed can achieve a similar feat (more likely targeting VS Code plugins?) or build some other rich plugin ecosystem.
It was Onivim2. IIRC it was a one-man show, and it stopped when funding dried up. I also hoped to see a lot from it. Maybe the dev took too much work onto his plate, with an unproven language and limited libraries?
memcpy, strchr, etc. use SIMD. The Rust HashMap uses SIMD. LLVM can and will auto-vectorize. So yes, of course Zed "uses SIMD", but it's not a tool you can throw at everything to make it faster.
And auto-vectorization isn't all that common. In many cases you need to structure your code in a certain way, in which case you're better off doing it yourself if you want the vectorization benefit.
Despite what the article claims, Clang and GCC are actually very good at vectorizing code, and Intel, Arm, etc. have entire teams dedicated to improving vectorization in modern compilers.
I'm not saying that manually writing SIMD assembly or intrinsics is useless; it's very often necessary. I'm just disagreeing with the statement:
> Unfortunately, compilers are notoriously bad at autovectorization and with the exception of relatively trivial loops, compilers rarely autovectorize effectively.
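For anyone wanting to see the distinction both sides are gesturing at, here's a small Rust sketch (function names are mine). The first loop is the "relatively trivial" kind that LLVM vectorizes reliably; the second has a loop-carried dependency, which is the classic case where straightforward auto-vectorization fails:

```rust
// Elementwise add: each iteration is independent, so the optimizer
// can typically replace the scalar body with SIMD adds.
fn add_arrays(a: &[f32], b: &[f32], out: &mut [f32]) {
    let n = out.len().min(a.len()).min(b.len());
    for i in 0..n {
        out[i] = a[i] + b[i];
    }
}

// Running (prefix) sum: iteration i needs the result of iteration
// i-1, a loop-carried dependency that blocks naive vectorization,
// so compilers usually leave this scalar.
fn prefix_sum(a: &[f32], out: &mut [f32]) {
    let mut acc = 0.0;
    let n = a.len().min(out.len());
    for i in 0..n {
        acc += a[i];
        out[i] = acc;
    }
}

fn main() {
    let a = [1.0_f32, 2.0, 3.0, 4.0];
    let b = [10.0_f32, 20.0, 30.0, 40.0];
    let mut sums = [0.0_f32; 4];
    let mut running = [0.0_f32; 4];
    add_arrays(&a, &b, &mut sums);
    prefix_sum(&a, &mut running);
    assert_eq!(sums, [11.0, 22.0, 33.0, 44.0]);
    assert_eq!(running, [1.0, 3.0, 6.0, 10.0]);
    println!("ok");
}
```

Whether "compilers rarely autovectorize effectively" is true arguably depends on how much of your hot code looks like the first function versus the second.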
As a lowly web developer I must not know what I'm missing. Can anyone explain what the issue is with an array being a list of references? Why is there a chase going on? Is someone trying to get away?
> JavaScript is... You think you have an array of objects, but you really have an array of pointers to objects. So every single time you're walking over that, you're chasing it down.
I think it refers to the fact that you can't just compute an offset from element 0 to get to element N, like you could if you had an array of structs or classes. Assume a struct uses up 128 bytes; then you can get to element N with a pointer offset of N * 128 and be positioned directly at the memory location of that struct. Edit: cache locality, like the sibling comment mentions, sounds more convincing.
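Here's a small Rust sketch of the contrast (type and function names are mine). A `Vec<Point>` lays elements out back to back, so element N sits at `base + N * size_of::<Point>()`; a `Vec<Box<Point>>` stores pointers, and each access hops to a separate heap allocation, which is roughly what a JavaScript array of objects looks like under the hood:

```rust
#[derive(Clone, Copy)]
struct Point {
    x: f64,
    y: f64,
}

// One linear scan over a single contiguous allocation: the next
// element is always adjacent in memory, so caches and prefetchers
// are happy.
fn sum_contiguous(points: &[Point]) -> f64 {
    points.iter().map(|p| p.x + p.y).sum()
}

// Same arithmetic, but every element access dereferences a pointer
// into a potentially far-away allocation: the "pointer chasing"
// being described above.
fn sum_boxed(points: &[Box<Point>]) -> f64 {
    points.iter().map(|p| p.x + p.y).sum()
}

fn main() {
    let contiguous: Vec<Point> =
        (0..4).map(|i| Point { x: i as f64, y: 1.0 }).collect();
    let boxed: Vec<Box<Point>> =
        contiguous.iter().map(|p| Box::new(*p)).collect();
    // Both produce the same answer; only the memory traffic differs.
    assert_eq!(sum_contiguous(&contiguous), sum_boxed(&boxed));
    println!("sum = {}", sum_contiguous(&contiguous));
}
```

The "chase" is that second loop: each iteration must load a pointer, then follow it, and the CPU can't predict where the next object lives, so the cache misses add up.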
> But the goal I've always had is a lightweight editor that is minimal that I love using that feels like a text editor, but has the power of an IDE when needed, without all of the slowness in the experience and kind of heaviness in the UI, but still powerful. That was very early on what I wanted. And for it to be extensible.
Sorry for being that guy, but vim. Nvim specifically.
I was lucky enough to buy a Ryzen 7 2700, 32 GiB of RAM, and a Samsung NVMe drive (don't remember which) some years ago. Running Ubuntu LTS.
I hate VS Code so much for many reasons, but performance and memory usage are not among them. It starts in a few seconds and has never been a problem. It runs fast enough. At work I got a MacBook Pro, where performance is even less of an issue, of course.
So, I'm not sure all the work done on performance is the most efficient way to get into the market. It's features, like their collab stuff, I'd say.
Or usability features. What I hate most about VS Code are the clunky editor navigation shortcuts. Code navigation is fine, but moving around the editor, opening files, the command palette... if you have ever used Neovim with Telescope or the JetBrains IDEs, then VS Code feels so cumbersome.
Recently, my faithful Sublime Text journey kind of ended, and I did not want to tinker/fix it. I have been using it since its first year of release.
So, I had to decide on an IDE to live for the next few decades or for as long as I needed one. The final battle was between Emacs and Vim. I had played around with both in my prior developer life. I took time to read up, play around, and realize I'm not living in an IDE (Emacs), so I ended up with MacVim. I set it up enough to my liking.
Just as I was getting around, a recent release of Zed surfaced on Hacker News. This is my go-to IDE for now. I still fire up MacVim for quick edits and to keep learning in case I need to settle down on it.
I've been using ST for 10+ years. I pulled a copy and installed some extensions recently. Most extensions I want have multi-year gaps since they were last updated. ST crushes VS Code performance-wise, but the DX I'm used to with Code is much better than in ST because of the community.
One thing I don't get, and I hope I didn't miss an obvious comment, is that for all the complaints about Electron and JS being slow, VS Code was and still is faster than Atom.
I think that's just like one C++ application being faster than another equivalent C++ application due to using smarter data structures and whatnot.
VS Code and Atom have different code bases with different decisions and that makes a difference despite them both using Electron and JS. (Hopefully I have understood your question properly.)
Zed feels good to use, even though it lacks a lot of features I regularly use in VS Code. But the biggest hurdle for me is that Windows is not supported. I switch between my laptop (macOS) and desktop (Win11) depending on where I'm working from for the day, and it's too much stress for me to remember a completely different set of workflows and shortcuts. So, for now, I wish Zed the best, but VS Code remains my editor of choice.
I wish developers would break out of the Silicon Valley bubble and realize that the majority of their potential userbase – including technical users – is on Windows and Linux. Heck, that's the entire reason Atom (and then VS Code) got popular. No one cares about the nanoseconds of performance you are able to optimize. Working across all my devices and development environments (including the web) is table stakes for all software today.
You're right, and it's weird to hear them dunk on Atom when it had so many de-facto features Zed still lacks. They're very proud of their technical stack and their performance, but it feels more like they're defending the cathedral to discredit the bazaar.
It's their call, but I feel like I've seen this story play out the same way you describe hundreds of times. I'll never forget when the warp.dev people came to HN looking for feedback and got torn to tatters by the community. A single-platform POC editor is cool, but not really a functional replacement (or even comparison) to what Atom did and the community it garnered. I'm glad they're making what they want, but they're absolutely trapped in bubble-vision afaict.
They do care about latency, and I credit these developers for having a goal to minimize it. VS Code isn't bad but when regular Visual Studio changed its design like 10 years ago, latency and lag went through the roof.
However, the tech stack itself isn't the solution to latency. Doing an unbounded operation before responding to input will cause it, so will overusing memory and/or cache.
Yeah, who needs more editors after ed already solved this problem? Good thing problems usually get solved once and then you can never improve on top of that, then I'd have to switch editor like once every decade or something.
Why did you comment this? If an actual commenter replied with some of those points, there could be a back-and-forth discussion about it, but you just contributed nothing.
[1] https://accesskit.dev/