I'm bullish on Rust, but there's a long way still to go. The overhead of passing values across the boundary between JavaScript and Rust is quite high. There are a lot of cases where you want to be able to provide a dynamic configuration to something on the Rust side, ideally from JavaScript, and that's still pretty costly from a performance perspective.
One of my projects (https://markdoc.dev/) is a Markdown dialect that supports custom tags and a React renderer. I recently experimented with implementing a parser for it in Rust in order to increase performance. My Rust-based parser is significantly faster than my existing JavaScript parser, but then I have to serialize the AST in order to move it from Rust to JavaScript. I'd like to implement the entire processor in Rust, but I need to let users define custom tags in JavaScript, and the overhead of going back and forth is far from ideal.
I'm hopeful that the recently-ratified Wasm GC proposal—which introduces managed structs and arrays that don't cost anything to pass between the Wasm environment and JavaScript—will help a lot. But it's going to take a while for Wasm GC features to land in LLVM and be properly supported in Rust.
I believe automatically inferring lifetimes in the general case is equivalent to solving the halting problem (for every sub-program, determine if a given reference is stored to the heap before the sub-program terminates), so transpiling JS to Rust would presumably need manual lifetime annotation or else require significant limitations to simplify lifetime analysis.
Seems challenging. JavaScript code doesn't carry the lifetime information the Rust compiler relies on to get rid of values that are no longer used; JS leaves that job to a garbage collector instead.
There's also no borrowing information encoded in JavaScript code. Rust expects developers to provide this information in their code so the compiler knows what can modify what, and from where.
This article is talking more about tooling like SWC than about using WASM as the bridge between JS and Rust. Personally I feel that the former is much more important than the latter, which doesn't seem to have as many benefits for the average person making web apps. Sure, if you're running something very intensive (a video editor like Veed, a design tool like Figma), WASM is nice, but most web devs are making CRUD apps.
The underlying problem is still highly relevant in relation to JavaScript build tooling. Let's say that you have a transpiler written in Rust and you want to write a plugin that performs a custom AST transform.
If you want to be able to write the plugin in JavaScript, you have to take the AST from Rust and convert it to a JavaScript data structure in order to pass it into the plugin and then you have to convert the output back into a Rust data structure on the other end. Or you have to provide a JavaScript API that can safely mutate a Rust data structure from JavaScript while converting primitive values each way on demand. This is exactly like the problem I described with my Markdown processor. There's a ton of overhead involved, and it can cancel out a depressing amount of the performance gain that you would otherwise get from moving things to Rust.
Ultimately, these build tools need to have some degree of programmatic extensibility, and people want to get that without having to write their domain-specific logic in Rust and recompile the whole binary. There needs to be a better extensibility story and a cheaper (ideally, zero-copy) way to share data across the language boundary.
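The conversion cost is easy to underestimate because it's paid per node, in both directions, on every plugin call. A minimal sketch of what "take the AST from Rust and convert it to a JavaScript data structure" amounts to (the `Node` type and hand-rolled JSON encoder are hypothetical, and string escaping is omitted for brevity):

```rust
// Hypothetical AST node type for illustration; real tools have dozens of variants.
enum Node {
    Text(String),
    Element { tag: String, children: Vec<Node> },
}

// Every node must be walked and re-encoded before JavaScript can see it,
// and the plugin's output then has to be parsed back into Rust structures.
// (No string escaping here, for brevity.)
fn to_json(node: &Node) -> String {
    match node {
        Node::Text(s) => format!("{{\"type\":\"text\",\"value\":\"{}\"}}", s),
        Node::Element { tag, children } => {
            let kids: Vec<String> = children.iter().map(to_json).collect();
            format!(
                "{{\"type\":\"element\",\"tag\":\"{}\",\"children\":[{}]}}",
                tag,
                kids.join(",")
            )
        }
    }
}

fn main() {
    let ast = Node::Element {
        tag: "p".into(),
        children: vec![Node::Text("hi".into())],
    };
    // O(size of tree) work per call, in each direction.
    println!("{}", to_json(&ast));
}
```

That linear walk happens once per direction per plugin invocation, which is how the overhead ends up canceling out so much of the raw parsing speedup.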
It's true that it needs more extensibility and zero-copy sharing of data, but in your example, are you serializing to something like JSON? Why does the end user need to specify the tags in JavaScript particularly and not something like JSON? Forgive me if I'm not understanding and you are indeed using JSON.
Markdoc custom tag definitions can include arbitrary AST transforms on the child nodes, as in this example: https://markdoc.dev/docs/examples#tabs In order to do this, you need some API, so you can't just define the tags in JSON.
Not sure why esbuild is included, since it's written in Go and there's no working link to learn more about a hypothetical speedup over esbuild.
I've used both swc-node and Rome after having used ts-node and prettier/eslint. SWC is a lot more finicky than ts-node (which is itself quite finicky), with a lot less documentation when something goes wrong (and often just looking at your setup sideways can cause it to stop working in surprising ways). I certainly enjoy using esbuild, and I doubt SWC could fill that niche right now (too immature, I think). Rome is "special" in that it has none of the depth of eslint's TypeScript linting support and has very strange opinionated defaults that make no sense with limited configuration because the whole philosophy of that project is "just follow our rules without any configuration". Rome is, true to its word, quite fast, but given that it does a fraction of the linting eslint can do, it's hard to say whether that speedup will win out.
Special shout out to dprint. While it too has Rome's disease of limiting configurability, it does have a sensible amount of it and it is drastically faster than prettier. I will note though that one thing that still annoys me is that two identical ASTs will result in different output formats because the input formatting can affect the output (i.e. it tries to preserve input, particularly around line breaks). I understand the philosophy of it but I'm not sold if I'm trying to enforce a consistent style across the codebase.
Do you have any resources online about this? I'm using Rome as well -- quite enjoying it -- but we are running into its limitations, and this news does not instill confidence.
It's on their Discord. While there is going to be a new release, the development definitely has slowed down. People on the Discord at least are thinking about whether a fork makes sense, since the Rome trademark is with the corporation and not the OSS project, if I understand correctly.
While that's true, the project attracted new core contributors. The development slowed down, but the project is still in active development. A new release is planned for this month.
Oh that's good to read! I'm pretty excited about rometools and desperately want to get out of this super-pluggable linting world with a dozen-plus eslint-related packages. Everyone wants to use Airbnb's style guide anyway; just make that the default with some minor overrides and I'm happy.
> Rome is "special" in that it has none of the depth of eslint's TypeScript linting support and has very strange opinionated defaults that make no sense with limited configuration because the whole philosophy of that project is "just follow our rules without any configuration".
This was true at the start of the project. They have since added many configuration options developers asked for, such as semicolon and indentation style. Any lint rule can also be disabled.
I would argue that SWC is more mature than esbuild: SWC is on a 1.x release (esbuild's author says on its site that it is in "late stage beta", and Vite still doesn't use it for production builds), has been around for two years longer than esbuild (going by GitHub histories), and is being made by more than one (very capable) person.
¯\_(ツ)_/¯. Just relating my experience. I've found the ESBuild plugin ecosystem to be quite mature and blazingly fast. Any additional speedup of SWC wouldn't be felt as the startup overhead of npm to do `npm run build` dominates whatever time ESBuild spends.
esbuild seems more popular than SWC. Moreover, esbuild has really good documentation and a great CHANGELOG; it is always a pleasure to read a new entry in it.
On the other hand, SWC's changelog sits too close to the code, which makes it hard to understand, from the user's perspective, what a new version adds.
I also chose esbuild over SWC because esbuild has no dependencies, while SWC has more than 130 (transitive) dependencies [0].
> And will JavaScript as we know it survive if it does?
IMO yes, for two reasons:
1. a lot of people like coding in JS! I know, anathema. But having coded in both JS and Rust I know which one is going to be more acceptable to the largest audience of developers. There's a reason Node got as popular as it has.
2. Libraries. I'm sure there will be plenty of people doing DOM manipulation via WASM, but there are going to be only a limited number of libraries you can pull off the shelf to do it. By comparison, JS is always going to have a ton (arguably too many, but still). And I'm sure Microsoft won't be able to resist raising the ghost of WebForms to make something that actually does run .NET in the browser.
I am fairly certain the answer is "Yes". The DOM and other APIs have well-defined interfaces, so it would likely be reasonable to export those wholesale to WASM. If the timeframe is "ever", the answer is "yes".
However I don't think it will be soon. I think the current biggest question is how garbage collection works with WASM. DOM nodes and many other resources are garbage collected. You need to define how WASM interfaces with that. (Does every object just get reference counted when passed to WASM and require an explicit free? Is there some sort of GC that traces over WASM memory?).
That would probably expose it to so many security concerns that it would face a similar fate as Java applets & Flash: causing more hacking/snooping than benefits for a typical browser user.
And I can't get a consistent answer on what problem WASM is trying to solve.
The problem with Flash and Java was that they were not integrated with the browser's sandbox. WASM is integrated with the sandbox.
How would allowing DOM access be insecure? The DOM is designed to be completely controlled by the webpage. I don't see how adding an additional way for the webpage to do so adds any insecurity.
WASM neatly solves a pretty common problem of running code that can't be easily ported to JS inside the browser.
It also sort of solves a common problem of running untrusted code, both in browser or on server, because WASM VMs are much more isolated and restricted.
JavaScript is one language among many since 1961, and not the best of them; its unique advantage is having been universally implemented and deployed already.
Same thing with WASM. JVM never achieved this level of penetration, and Flash allowed way too much access to ever be secured.
It sure is. I've talked about this before, but there is no reason for tooling to be written in the same language its end users write in. Python, for example, has libraries in C and C++ (numpy, pandas, other scientific libraries), and the end user just uses Python to interface with them. Imagine if they were all written in Python instead, and how slow that'd be.
The only reason JS and TS tooling are written in these languages is because V8 is decently fast enough, but at scale, we can definitely do a lot better.
Eh there's definitely some value in having the tooling be explorable and comprehensible to the line programmers. The value depends on the situation of course, like in the scientific python example probably low-close-to-zero.
But I think that's an outlier and the value is usually higher than that. Not always necessarily enough to outweigh other concerns, and I don't think always using the end language for tooling is the correct choice either. But it's a sensible default that you should make sure you get real value out of deviating from.
> there's definitely some value in having the tooling be explorable and comprehensible to the line programmers.
I think part of the reason Rust has been so popular in this niche is that it is pretty comprehensible to JavaScript/TypeScript/Python programmers. It's in any case much more approachable than C/C++ which would be the other alternative. The Rust build system also helps a lot too.
I'm not a Python dev, so correct me if I'm wrong, but it is my understanding that Python's language bindings and its dependencies on other languages for major libraries is one of the main impediments to alternative or innovative implementations like PyPy or Jython reaching compatibility.
ESBuild is probably more popular than all the other non-JS JS dev tools combined, and (as the blog post even mentions, despite the title and main thesis) it's not written in Rust.
I don't see a contradiction. Were ESBuild already written in Rust, Rust would be the present of JS infrastructure. The author hopes it merely to be the future.
Yeah but that future is probably a long ways off, if it ever comes at all. Currently ESBuild has 5x as many npm installs as Rome and SWC combined, and the gap doesn't seem to be shrinking https://npmtrends.com/@swc/core-vs-esbuild-vs-rome
FWIW, that doesn't take into account that Next actually has its own wrapper around SWC, and also vendors it. I think it's probably the largest consumer of SWC.
Python hasn't failed at this, because CPython doesn't have a JIT, and PyPy has been largely ignored. So if anything it has been a failure in performance since the early days.
In regards to JS, there is still so much to implement, like JIT caches with PGO metadata across runs.
So it isn't as if everything has been tried out.
What happens, as proven by Java and .NET, is that FOSS alone doesn't produce top-level JIT compilers and GCs.
And since Google doesn't care about improving V8 for nodejs workloads, everyone else rewrites JavaScript into something else.
No, PyPy isn't fast enough for these workloads (again see my comment, I tried)
v8 isn't fast enough either, and it's basically a Rube Goldberg machine maxed out in code complexity right now. It's probably 2M lines of code, with multiple interpreters and compilers. They're lucky if they can improve metrics 5%; things aren't getting 100% faster.
With JITs, both JS and Python runtimes are still ~10x too slow, as opposed to ~100x too slow without them.
They also offer you approximately zero control over memory, and as a result use ~10x more memory. This can certainly matter even for a tool running alongside your editor on a desktop.
I did not assert that PyPy is fast enough for anything.
Naturally V8 isn't fast enough, I also haven't said that.
What I said was that Python never had a proper JIT history, and Google doesn't care to improve V8 for nodejs workloads, so naturally it isn't fast enough.
I also said that the FOSS community is not in a position to do otherwise, hence why it is easier to switch to compiled languages instead; no one has the capacity to build top JIT infrastructure with late nights and long weeks, paid in GitHub stars.
What I'm saying is that there's no amount of money that Google can spend to make v8 5x or 10x faster for these stateful, object- /pointer-rich workloads.
It's not a matter of investment.
The language is the bottleneck, not the runtime. It's better to change the language and get 100%-500% improvement, not eke out 5% from the runtime. That's trying to squeeze blood out of a stone.
"proper JIT history" -- this claim doesn't make sense.
We are discussing here performance for CLI applications, not AAA game engines.
As for the claim, it makes perfect sense, when PyPy never went beyond a research project largely ignored by the Python community during the last 20 years, while other attempts never achieved half of what JS, Lisp, Scheme, and Smalltalk have done.
I was a big believer in Java, until I started work on the Google indexing pipeline in C++. There are cases where you really need to push both code generation and lifetime decisions from runtime to compile-time. I think there's still a good case to be made in many domains for ahead-of-time compilation combined with on-stack-replacement/dynamic re-optimization using runtime profiling information. However, mandatory garbage collection is a difficult sell.
Yes, Azul systems, etc. show that you can have nearly zero-overhead GC if you have a few spare cores on which to push the majority of the GC work, but in some systems you can't spare the cores.
Yes, developer productivity is lower when you push these decisions onto the developer. However, there are a few select cases where that productivity hit is worth it.
OS kernels, web browsers, low-latency trading, and a few other domains are such that you really don't want the garbage collector's several-percent overhead, its latency variance, or (as a rule of thumb) its 2x peak space overhead. (Yes, Bacon et al.'s concurrent reference counting collector brings down that peak size, but to get good performance, you end up batching the cycle detections.) Firms that use Java/C# for low-latency trading end up manually managing object pools to remove most of the GC overhead, but then you're essentially back to manually managing memory using compile-time lifetime decisions.
In the specific case of web browsers, Netscape/Mozilla attempted Javagator. With the number of vulnerabilities in JavaScript/DOM in browsers, it's a near certainty that very capable senior engineers at both Microsoft and Google have had long serious looks into managed languages for their JavaScript engines / DOM implementations. Maybe we've advanced enough in both JIT and GC technology that it's worth another stab at a browser written mostly in a managed language, but it's certainly still a very difficult task.
Edit: I also love PyCharm and CLion, but GC pauses in both nearly drive me nuts. Sometimes I drop back to vim to avoid the GC pauses.
Edit: I'd also love to see Microsoft able to recompile Office as Managed C++ and start writing new features and bugfixes in C#/F#/etc. I want to believe. I'm rooting for people to find competitive solutions for the last few cases where GC isn't competitive. However, evidence suggests that most of a premier browser's code is still off limits for GC. I want to be a true believer in GC/JIT again, but I need to be shown.
Yet, year after year, C++ loses market share in distributed computing and CNCF projects to Java, .NET and Go.
Google's OSes expose only a JavaScript/browser userland on one, and a Java/Kotlin one on the other, with a very castrated NDK, where folks like Termux keep refusing to accept that the world has moved on from POSIX.
Office is already managed: it runs on Azure and in the browser, and the native version is slowly eroding from most Microsoft shops as WinDev loses political power to the Azure business unit, while everyone else would rather ship WebWidgets and Electron than whatever WinDev, with their love for COM, puts out. Xbox is probably the survivor here.
Market share isn't everything. Last I checked, ScyllaDB wiped the floor with Cassandra, the Hadoop Distributed FS underperformed GFS, etc.
I'm glad C++'s niche is shrinking, but there are still plenty of good use cases for non-GC'd languages. Top-tier browsers and top-tier games are the main niches on the desktop.
Yet it is Cassandra and the Hadoop Distributed FS that most people reach for, not ScyllaDB.
As proven by Minecraft and plenty of games on the Switch, there is plenty of money to be made, and top-tier status to be earned, when attention is paid to game design instead of which engine one should use.
WinDev doubling down on COM and C++ for WinRT and now WinUI/WinAppSDK, with tooling reminiscent of the glory ATL days, is one of the reasons why even the most die-hard Windows developers started to look elsewhere. As for other Microsoft business units, they get all the C++ they need from Microsoft Edge WebView2.
One of the reasons why .NET team finally started to care about performance, and is finally on the record that C++/CLI is done, is that they want to reduce as much as possible the need to use anything else, and with each release nowadays there is a bunch of runtime code that gets ported from C++ into C#.
I only see three big use cases for no-GC languages: kernels and drivers, GPGPU, and LLVM/GCC integration. And even there I am a believer in systems programming languages with GC, but I guess one needs to compromise somewhere, and ChromeOS/Android/cloud-like workloads are the best we can get.
> manually managing object pools to remove most of the GC overhead, but then you're essentially back to manually managing memory using compile-time lifetime decisions.
But those very firms will use ordinary, normal Java for the initialization part and only switch to the more restricted "manually memory-managed" Java in the hot loop -- you are still much further ahead in productivity/manageability. Especially since it is not rare at all to just manage a huge ByteBuffer from Java; for indexing, that's what I would use, with some sane abstraction over it.
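The ByteBuffer technique translates to other languages directly: preallocate one flat buffer up front and manage fixed-size records by offset, so the hot loop allocates nothing. A Rust rendition of the same idea (the record layout of two `u32`s is purely illustrative):

```rust
// The "one big buffer" trick: fixed-size records addressed by slot offset,
// so the hot path does zero allocation. Illustrative layout: price + qty.
const RECORD: usize = 8; // two u32 fields, 4 bytes each

fn write_record(buf: &mut [u8], slot: usize, price: u32, qty: u32) {
    let at = slot * RECORD;
    buf[at..at + 4].copy_from_slice(&price.to_le_bytes());
    buf[at + 4..at + 8].copy_from_slice(&qty.to_le_bytes());
}

fn read_qty(buf: &[u8], slot: usize) -> u32 {
    let at = slot * RECORD + 4;
    u32::from_le_bytes(buf[at..at + 4].try_into().unwrap())
}

fn main() {
    // Allocate once, during initialization -- like the Java ByteBuffer approach.
    let mut buf = vec![0u8; 1024 * RECORD];
    write_record(&mut buf, 3, 101, 250);
    assert_eq!(read_qty(&buf, 3), 250);
}
```

The "sane abstraction" mentioned above would wrap these raw offsets in typed accessors, which is exactly what the trading shops end up building.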
Oh and mind you, in case of low-lat trading where Java is not sufficient, nor is C++ -> the next step is ASICs.
Odd that there's no mention of Bun here. I agree with the general thrust that JavaScript infrastructure goes faster when written in a fast language, but as Bun and esbuild show, Rust isn't the only future.
edit: there's a date tag on the title now... whoops.
Rust or some other good language should be the future of your software instead of just using it to prop up your JS/TS layer cake of spaghetti.
I get that many feel like it's sunk cost etc but every day I use the TS stack I lose a little bit of love for programming. Not to mention the huge chunks of motivation that get eaten up by yet another bullshit "build" system issue, or sourcemaps not working after upgrading X or Y or tsserver getting confused or Babel macro plugin garbage, etc.
Especially having worked with Rust, Kotlin, etc that you know.. work the way you would expect them to - hell, Java feels refreshing after Typescript.
I miss just writing code with tools that work well. Especially on the server where there is really no reason to ever use Typescript and yet everyone seems to think it's a wonderful idea and I'm the crazy one for suggesting there are better options.
Rust is a very nice fit for language tooling. It has several features that are particularly useful for syntactic manipulation, like matching, enums, and Result<x> error handling. And a focus on efficiency is especially nice for language tooling, because no matter how fast your tools are, a codebase can get big enough to make them annoyingly slow.
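Those three features compose nicely for AST work. A toy sketch (the `Expr` type is hypothetical) of the kind of transform a minifier or linter performs, where enums model node kinds, pattern matching destructures them, and `Result` surfaces failures without exceptions:

```rust
// A toy expression AST: the enum enumerates node kinds exactly.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(f64),
    Add(Box<Expr>, Box<Expr>),
}

// Constant folding: recursively simplify, propagating errors with `?`.
fn fold(e: Expr) -> Result<Expr, String> {
    match e {
        Expr::Num(n) => Ok(Expr::Num(n)),
        Expr::Add(a, b) => match (fold(*a)?, fold(*b)?) {
            // Both sides reduced to literals: fold them.
            (Expr::Num(x), Expr::Num(y)) => Ok(Expr::Num(x + y)),
            // Otherwise rebuild the node with the simplified children.
            (a, b) => Ok(Expr::Add(Box::new(a), Box::new(b))),
        },
    }
}

fn main() {
    let e = Expr::Add(
        Box::new(Expr::Num(1.0)),
        Box::new(Expr::Add(Box::new(Expr::Num(2.0)), Box::new(Expr::Num(3.0)))),
    );
    assert_eq!(fold(e), Ok(Expr::Num(6.0)));
}
```

The compiler's exhaustiveness checking is the quiet win here: add a new `Expr` variant and every transform that forgets to handle it stops compiling.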
If only they could fix the toolchain.
If version 1.58.0 is so different from 1.58.1 that it cannot compile 1.66, something is terribly wrong. And getting crates from the internet is a terrible idea from a security point of view.
There's nothing inherent in Rust that makes it the perfect JS tooling language; it's more correct to say JS tooling will adopt more performant and safe languages, and Rust is one of them.
Yup, more generally, I'd say that statically typed languages that compile to native code (Rust, Go, C++, etc.) are necessary to achieve good performance when writing linters / formatters, type checkers, compilers, and interpreters (AST- and graph- based workloads).
You could frame it as a "failure of JITs in the 2010's". JavaScript isn't a good language to write the TypeScript compiler or a linter, because v8 isn't fast enough.
The semantics of JavaScript do not allow v8 to be fast enough.
IIRC this was precisely why the Dart project was started more than 10 years ago by the original v8 authors. They were spending a lot of time looking into why real world web page performance was falling off various cliffs in v8. They realized they needed to change the LANGUAGE in order to be able to write fast programs. A major use case was developing programs like Google Docs and GMail in the browser, which had to compete with native programs written in C++.
JITs are fast in common cases, but they not only have big costs in terms of memory (code storage) and startup time, but they're hard to ENGINEER with!
Similar story with Python tooling -- the linters, formatters, and type checkers are quite slow due to being written in Python. mypyc gives a bit of speedup, but it still uses the Python runtime.
Related story from yesterday:
"Even the pylint codebase uses Ruff" (linter in Rust)
That is, he says that in the 90's and 2000's, we thought that clock speeds would continually increase, and JITs would get better, and so we could design language semantics without regard to performance -- language that almost REQUIRE slow implementations.
(I think his take is about 50% true. The other 50% is that dynamic languages simply allowed people to produce popular and useful software at a greater rate, especially for the web, so we ended up with a lot of software written in dynamic languages! Doing web apps in Java vs. Ruby/Python/JS is a huge difference in productivity, and I'd say you often end up with a BETTER result, due to increased iteration / fast feedback.)
---
This also tracks with my experience with https://www.oilshell.org, where we reverse-engineered the shell in an experimental fashion with Python, and then evolved that implementation into a statically typed language that generates C++ (using MyPy, ASDL, and algebraic data types).
This core of the program is the elaborate and strongly typed "lossless syntax tree", which is basically what's used in linters and formatters.
Even though I've been using both C++ and Python for >20 years, I was a little shocked how much worse Python is for AST- and graph-based workloads.
I'm looking for references/measurements specifically on these types of workloads. I think a lot of papers about JITs are misleading with respect to them, or at least you have to read between the lines.
I'd say that Python and JS are 10x as slow as native code for "business" and "web app" workloads, and I've never had a problem with them in those settings. Quite the contrary, I've actually sped up poorly working code in static languages with Python. If you're within 10x of the hardware's performance, you're doing VERY WELL compared to "typical software", which has layers of gunk and can be 100x to 1000x too slow.
But bare Python and JS (no libraries) are closer to 100x too slow for ASTs and graphs. This is because of all the allocation and GC overhead -- in both time and space -- in addition to dynamic dispatch, etc.
(The funny thing is that Oil is now the most statically-typed shell implementation, even though it's nominally written in Python :) It uses fine-grained static types, whereas shells written in C use a homogeneous "WORD*" representation, and strings with control codes embedded in them for "structure" and "types". I should probably write a blog post about that ...)
---
To shine some light on the other side, I'm still a bit skeptical of Rust specifically, because:
- Memory management is littered all over the codebase.
- Borrow checking seems to work better for stateless/batch programs (it thinks about parameters and return values), but linters and type checkers for language servers are STATEFUL: https://news.ycombinator.com/item?id=34410187
- Many ASTs are actually graphs.
- Pattern matching can't see through boxing, apparently?
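The graph-shaped AST point deserves a concrete illustration. The common Rust workaround is to sidestep references entirely: store nodes in a flat arena and link them with indices, so back-edges and cycles don't fight the borrow checker. A minimal sketch (real arenas add typed IDs, generations, etc.):

```rust
// Nodes live in one Vec; "pointers" are just indices into it.
struct Node {
    name: &'static str,
    parent: Option<usize>, // a back-edge that plain &-references can't express easily
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    // Appending returns the new node's index, which acts as its handle.
    fn add(&mut self, name: &'static str, parent: Option<usize>) -> usize {
        self.nodes.push(Node { name, parent });
        self.nodes.len() - 1
    }
}

fn main() {
    let mut arena = Arena { nodes: Vec::new() };
    let root = arena.add("root", None);
    let child = arena.add("child", Some(root));
    // Following a back-edge is just an index lookup -- no borrow conflicts.
    assert_eq!(arena.nodes[arena.nodes[child].parent.unwrap()].name, "root");
}
```

This pattern works, and rust-analyzer-style tools use variations of it, but it does mean you've traded the borrow checker's guarantees for hand-rolled index discipline, which is part of why GC still appeals for these workloads.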
Also, the author of esbuild tried BOTH Rust and Go, and ended up with Go. IIRC it performed better because Go doesn't have deterministic destruction on the stack -- the GC was more efficient?
It seems like a language with both garbage collection and algebraic data types would be nicer, but neither Go or Rust fit that description!
Rust makes a lot of sense for kernels and so forth, but for language processors -- especially stateful ones (which includes the Unix shell!) -- I think GC is still a big help. And we already know how to make GC fast for that use case; i.e., you don't need a GC that scales to 1 TB of memory on 128 cores.
If only folks didn't ignore the JVM which is still wicked fast and getting faster all the time. JITs didn't fail, you are just using the ones that suck at what you are trying to do (v8) or don't exist at all (Python).
.NET and Java have convincingly proved that JIT'ed bytecode VMs are plenty fast enough for almost all general purpose computing.
I wouldn't write a browser/game/kernel/hard-real-time application in one but they are awesome for data manipulation, databases, web servers, general desktop apps, etc.
My point is that you don't need "shitty scripting language" + "fast AOT-compiled language" when you can literally just choose "fast managed bytecode language" for 99.99% of use cases (that aren't running in a browser, because of the JS monopoly there, of course).
AoT with periodic profile-based re-optimization (like the latest Dalvik / Android Runtime provide), perhaps also with dynamic re-optimization/on-stack replacement, seems ideal. As long as you're ditching Dalvik's earlier use case for hybrid interpretation/JIT, a compact SSA-based format (like Michael Franz et al.'s SafeTSA) seems better suited as an intermediate format.
In any case, pre-generating/caching the native code with profile-guided optimization seems ideal, giving you more time to perform expensive optimizations and also avoid repetitive re-compilation when nothing has changed about your usage patterns.
Platform-independent binary distribution formats with profile-guided optimization seem like clear wins for most applications that aren't currently using hand-written assembly. Re-compiling every time the binary is launched seems wasteful. In some domains, there's also a compelling case to be made for making the garbage collector optional.
The JVM is better for these workloads than JITted runtimes for dynamic languages, but I'd still say Go, Rust, and C++ are better -- both in the practical sense of what tool to use, and in theory (AoT compilation, static types, and language control over memory layout).
It's not like people have been ignoring them on purpose. Plenty of code has been written in those languages, but the point of my post is that these pointer-rich workloads are even harder for them.
Java seems to lack value types on the stack, which results in a lot of extra garbage for language processors (something I learned the hard way)
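For contrast, here's what value types buy on the Rust side: small nodes like source spans live inline in their container, so a million of them cost one allocation rather than a million heap objects with headers. An illustrative sketch (the `Span` type is hypothetical but typical of what parsers store per token):

```rust
use std::mem::size_of;

// A Copy struct is a value type: stored inline, no per-instance heap object,
// no object header, no pointer chase.
#[derive(Clone, Copy)]
struct Span {
    start: u32,
    end: u32,
}

fn main() {
    // One contiguous allocation holds every span -- the layout current JVMs
    // (pre-Valhalla) can't express, since each Span would be its own heap object.
    let spans: Vec<Span> = (0..1_000_000u32)
        .map(|i| Span { start: i, end: i + 1 })
        .collect();
    assert_eq!(size_of::<Span>(), 8); // exactly 8 bytes per element
    assert_eq!(spans.len(), 1_000_000);
}
```

For AST workloads with millions of tiny nodes, this is where the "extra garbage" the parent comment mentions comes from: every boxed node is allocation work for the mutator and tracing work for the GC.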
(IIRC Guy Steele's famous "growing a language" talk over 20 years ago specifically advocated for value types in Java.)
Speed isn't the only important dimension -- they often trade speed for memory usage and startup time, and those become issues. As does the weight/size/complexity/deployment of the runtime (i.e. Go is managed, but there's no separate runtime to deploy or configure). Also, IIRC Java bytecode is untyped, so the JIT has to do a lot of work to recover the types again, which is weird.
I think the JVM / CLR make a lot of sense for many server-side workloads. They haven't caught on much for tools deployed to people's desktops, I think for good reasons.
I agree for short lived executions like tools etc.
That said, value types are coming to Java as part of Project Valhalla (tangentially related to Project Panama), and AoT will be shipping as part of OpenJDK proper, which would likely make Java suitable for some of those cases.
I would say Rust has a distinct advantage for language processing tasks, however, not just because of its small runtime, compact memory layout, etc., but also because the existing body of work on language projects is very rich and well developed.
JVM and .NET have had AOT compilation for 20 years now, even if not always available as free beer.
> I think the JVM / CLR make a lot of sense for many server-side workloads. They haven't caught on much for tools deployed to people's desktops, I think for good reasons.
How does Go fit in with the other two? I hate that it is somehow considered low-level, when it is closer to JS in execution semantics/performance/everything.
With respect to performance and semantics, it's not closer to JS -- it's a typed language that uses types for AoT compilation, like C++ and Rust.
The GC is a big difference, but it's also a pretty good one from what I know.
The author of esbuild understands software performance deeply (e.g. he architected Figma in the browser in C++). He tried esbuild in both Rust and Go, and preferred Go for performance.
Go barely does any optimizations, and its GC is quite subpar if anything (it optimizes for latency, while most others optimize for throughput, for what it's worth). It does have value types and pointers (as have C# and D for quite some time), but with a naive compiler it will generally sit in the ~2+x-of-C performance niche, which performant JITted languages also occupy, including JS.
OK, but what's your point? That Go isn't good for writing language processors?
For that problem, Go can be put in the family of Rust and C++, and IMO the GC is actually an advantage over them. As mentioned, ASTs are often graphs.
Putting Go closer to JS is just wrong, in an absolute sense, and relative to this problem. esbuild proves that Go is good for these workloads (again see those links). I think you're over-generalizing language performance without paying respect to the workload -- performance is very multi-dimensional.
On microbenchmarks, Go is probably 2x slower than C, but (1) that's VERY good, and (2) those benchmarks aren't representative of the pointer-rich AST workloads we're talking about here.
I have some beefs with Go myself, but it sounds like you just have some beefs with it that aren't all that relevant to the problem being discussed.
Go exists in a weird space. It's not the top performer but is respectable (as long as you don't pressure the GC too much) however it's also not very expressive.
This leads to it not being favoured as a high-level language, because it lacks the primitives to write very concise code. On the other hand, it has a big runtime and a GC, so it's too high-level for many true systems programming tasks. This lack of high-level language features coupled with its higher-level runtime means that it finds itself occupying a space below systems languages like Rust and C++ but "lower" than Python/JS/Ruby/Kotlin/Swift/etc.
Essentially it ends up competing with Java on the server and supplanting C/C++ for systemsy tools in fields it got to before Rust arrived on the scene.
That’s a good summary. I just see it way too often bundled together with low-level languages and it benefits no one to prefer/not prefer a language based on false knowledge.
When it comes to the JS ecosystem, ordinary developer intuition doesn't apply: being fast doesn't mean a tool is bound to succeed over its counterparts. What helps more is the ecosystem the tool is written in.