Hi! I saw your PR review of a community effort to add Bun to the Techempower benchmark. You had really great, exact feedback about "unnecessary allocation here", "unnecessary allocation there".
It was eye-opening, in terms of how often we JS programmers play fast & loose with "a little filter" here, "a little map" there, and end up with death by a thousand allocations.
Given that context, I'm curious:
1) How much you think Bun's (super-appreciated!) fanatical OCD about optimizing everything "up to & outside of JSC" will translate to the post-boot performance of everyday backend apps, and
2) If you're tempted/could be compelled :-D to create a "Mojo of TypeScript", where you do a collab with Anders Hejlsberg to create some sort of "TypeScript-ish" language that, if a TS programmer plays by a stricter set of rules + relies on the latest borrowing inference magic, we could bring some of the Bun amazing-performance ethos to the current idiomatic FP/JS/TS style that is "lol allocations everywhere". :-)
Or, maybe with Bun bringing the right native/HTTP/etc libraries wrapped around the core runtime, doing "just the business logic" in JS really won't be that bad? Which iirc was the theory/assertion of just-js when its author was benchmark hacking Techempower, and got pretty far with that approach.
Anyway, thanks for Bun! We're not running on it yet, but it's on the todo list. :-)
> 1) How much you think Bun's (super-appreciated!) fanatical OCD about optimizing everything "up to & outside of JSC" will translate to the post-boot performance of everyday backend apps
Bun is extremely fast at data processing tasks. Shuffling data from one place to another (disk, network, APIs, etc). We use SIMD, minimize allocations/copies and pay lots of attention to what system calls are used and how often and where. A naively implemented script in Bun that moves data using Bun’s APIs will often outperform a naively implemented program written in Rust or Go. Bun’s APIs try really hard to make the obvious & default way also the fast way.
That, and Bun’s builtin build tooling are where Bun’s performance shines.
> “Mojo of TypeScript”
I’ve thought a little about this. I think it’s a project that’d take 5+ years and crazy hard to hire the right people to do it. I think it’s mostly unnecessary though. JITs are really good. The API design is usually the reason why things aren’t as fast as they could be.
> Bun’s APIs try really hard to make the obvious & default way
> also the fast way.
Nice! That makes a lot of sense, and I look forward to trying them.
Fwiw I sometimes worry about the slippery slope to infra that exists on the JS side, i.e. I work a lot in a GraphQL backend, and even if Bun gave us (or really our framework, so fastify/mercurius) a super-optimized way of getting the raw bytes off the wire, Mercurius is still doing GraphQL parsing, validation, routing, response building in JS land.
Granted, I want to keep my application's business logic in TS, but naively it seems tempting to push as much of the "web framework" / "graphql framework" as possible into the native side of things, whereas I think historically Node etc. APIs have stopped at "here's the raw HTTP request off the wire".
> I’ve thought a little about this.
Sweet! That's awesome just to hear that it's crossed your mind. Agreed it would be a moonshot. And, yeah, I'm perfectly happy leaning into JITs and good APIs.
What are the major difficulties you see? Is this estimate for supporting all existing TS code... or, as the OC said, a new language with only newly written code?
The way I naively think about it is to imagine transpiling TypeScript code to Zig code. How far could that take you?
And if you restricted how much dynamic-y stuff you could do... maybe with a linter. I always get the feeling that 90% of the business logic (sequence, selection, iteration) is the same between languages whether they are interpreted or compiled, with just some memory management stuff added on top - which can be abstracted anyway.
This just shows again that readability is entirely subjective; IMHO, combinations of .map, .filter, .reduce, etc. are often less readable than doing the same thing in a nested loop.
> combinations of .map, .filter, .reduce, etc are often less readable than doing the same thing in a nested loop.
I often find map/reduce/filter easier to read when using named functions (or lazy data structures) for intermediary results - depending on the language/runtime that might imply more allocations - or not.
Eg pseudocode:

    integers.filter(/* non-obvious prime sieve */).sum()

vs.

    primes = integers.filter(/* non-obvious prime sieve */)
    primes.sum()

Or lifting the anonymous filter to a named primes() filter:
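To make the comparison concrete, here is a JavaScript sketch of the chained vs. named-intermediate styles; the `isPrime` predicate below is just a stand-in for the "non-obvious prime sieve" in the pseudocode:

```javascript
// Stand-in for the "non-obvious" predicate from the pseudocode above.
const isPrime = n => {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return true;
};

const integers = Array.from({ length: 20 }, (_, i) => i + 1);

// Chained: the reader has to infer what the filter produces.
const a = integers.filter(isPrime).reduce((s, x) => s + x, 0);

// Named intermediate: the filter's result is documented by its name.
// Note the allocations are identical; only readability changes.
const primes = integers.filter(isPrime);
const b = primes.reduce((s, x) => s + x, 0);

console.log(a, b); // 77 77
```

Whether the extra name helps or hurts is exactly the subjective part; the runtime cost is the same either way.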
That's an interesting point. Agreed that readability is somewhat subjective, but many programmers find .map, .filter, .reduce, etc. more convenient, and they provide clarity as to their intentions. Many languages (like Vlang, Java, Python, etc.) arguably also have them to align themselves more closely with the functional programming paradigm.
For simple combinations I agree (maybe 2-long chains with very simple conditions and transforms - but for such things loops are also trivial to read).
But I have seen "functional contraptions of horror" where those functions are both chained and nested which were completely undecipherable by mere humans.
And at least from my personal impression, people who are a fan of this type of functional style are also more likely to create such horrors (which they themselves of course find totally readable and superior to "unreadable" loops) - I suspect that there's often a bit of cargo-culting going on.
A nested for or while loop isn't always less readable FWIW. If you need more than 3-4 filters and/or maps the balance starts shifting back the other way.
yeah, cause fixing the missing index on your DB that adds 3 seconds to an API call is better than optimising loops to save 2ms.
In the front-end we were making about 20 API calls to fetch data we probably don't need yet and the developer is like: the problem has to exist in the way we call them, time to optimise the loops!
The fact of un-readability isn’t necessarily implied by the statement being replied to. `map` and `filter` are names for common operations on lists, not less performant alternatives to other things. If there’s a more performant alternative to either operation, give it a name and express it in a function. That’s what functions are for, no?
Lodash is MUCH faster than native because it uses iterators so you aren't actually looping over and over.
We need builtin iterator versions of all these looped functions so it adds an implicit/explicit `.valueOf()` that calls the chain and allocates only one new
array.
There are now builtin iterator versions of most of these looped functions [1], should be shipping in stable Chrome next month. The "give me an array at the end" function is spelled toArray.
But it's not going to help all that much with this problem. The iterator protocol in JS involves an allocation for every individual value, and while in some cases that can be optimized out, it's pretty tricky to completely optimize.
It's always going to be difficult to beat a c-style for loop for performance.
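For illustration, the per-value allocation mentioned above can be seen in a hand-rolled generator pipeline, which is roughly the shape of the new iterator helpers (a sketch, not the actual spec machinery): each step is lazy and builds no intermediate arrays, but every element still flows through a fresh `{value, done}` result object.

```javascript
// Lazy pipeline via generators: no intermediate arrays, but the
// iterator protocol yields a {value, done} object per element,
// which is the per-value allocation the comment refers to.
function* filter(iter, pred) {
  for (const x of iter) if (pred(x)) yield x;
}
function* map(iter, fn) {
  for (const x of iter) yield fn(x);
}

const data = [1, 2, 3, 4, 5, 6];

// Eager chain: allocates one intermediate array for the filter result.
const eager = data.filter(x => x % 2 === 0).map(x => x * x);

// Lazy chain: only the final array is allocated.
const lazy = [...map(filter(data, x => x % 2 === 0), x => x * x)];

console.log(eager, lazy); // both are [4, 16, 36]
```

A JIT can sometimes escape-analyze those result objects away, but as noted, it is hard to rely on that happening everywhere, which is why a plain indexed for loop remains hard to beat.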
Concerning stephen's item (2).
The stricter set of rules was laid out by Richard C. Waters
in Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package, page 46 (document page 48). See reference Waters(1989a).
The paper's language is a bit different than contemporary (2023) language.
`map()` is called `map-fn`.
`reduce()` a.k.a. `fold` seems to be `collect-fn`, although `collecting-fn`
also seems interesting.
sorting, uniqueness and permutation seem to be covered by `producing`.
Just think of McIlroy's famous pipeline in response to Donald Knuth's trie implementation[mcilroy-source]:
As far as pipeline or stream processing diagrams are concerned, the diagram on page 13 (document page 15) of Waters(1989a) may also be worth a closer look.
What the SERIES compiler does is pipeline the loops. Think of a UNIX shell
pipeline. Think of streaming results. Waters calls this pre-order processing.
This also seems to be where Rich Hickey got the term "transducer" from.
In short it means dropping unnecessary intermediate list or array allocations.
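In JavaScript terms, the pipelining that SERIES performs amounts to fusing a method chain into a single loop; a hand-written sketch of that transformation:

```javascript
const xs = [1, 2, 3, 4, 5];

// Chained form: one intermediate array per stage.
const chained = xs
  .map(x => x * 2)
  .filter(x => x > 4)
  .reduce((a, b) => a + b, 0);

// Fused form (what a pipelining compiler emits): a single pass,
// zero intermediate collections.
let fused = 0;
for (const x of xs) {
  const y = x * 2;      // map
  if (y > 4) fused += y; // filter + reduce
}

console.log(chained === fused); // true
```

The programmer keeps writing the compositional form; the compiler produces the fused loop, which is the "dropping unnecessary intermediate allocations" part.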
Shameless self-plug: Eliminated unnecessary allocations in my JavaScript code
by adding support for SERIES to the PARENSCRIPT Common Lisp to JavaScript
compiler. The trick was (1) to define (series-expand ...) on series expressions so that they can be passed into (parenscript:ps ...) and (2) the parenscript
compiler was missing (tagbody ... (go ...) ...) support. The latter is
surprisingly tricky to implement. See dapperdrake(2023). Apologies for
the less than perfect blog post. Got busy actually using this tool.
Suddenly stream processing is easy, and maintainable.
When adding a Hylang-style threading macro (-> ...) you get UNIX style
pipelines without unnecessary allocations. It looks similar to this:
Sadly, the SERIES compiler available on Quicklisp right now is a bit arcane to use. It might have been more user friendly if it had been integrated into the ANSI Common Lisp 1995 standard, so that it had access to compiler internals. The trick seems to be to use macros instead of (series::defun ...) and to use (series::let ...) instead of (cl:let ...). Note that the two crucial symbols 'defun and 'let are not exported by SERIES. So using the package is insufficient, and pipelining fails without a decent warning.
Am chewing on the SERIES source code. It is available on sourceforge. [series-source].
If anybody is interested in porting it, then please reach out.
It seems to be of similar importance as Google's V8 relooper algorithm [relooper-reference].
Waters(1989b), page 27 (document page 29) even demonstrates an implementation for Pascal. So it is possible.
The paper about Series explicitly bemoans the lack of compiler integration, explaining why the hacks are that way: why Series has its own implementations of let and so on.
For both getting function composition and avoiding unnecessary intermediate
allocations, the naive approach to using the SERIES package is insufficient.
And the error messages it returns along the way are unhelpful.
Evaluating (defpackage foo (:use :cl :series)) fails to import
(series::defun ...) and (series::let ...) and (series::let* ...).
So, when you think you are following the rules of the paper, you are
invisibly not following the rules of the paper and get the appropriate
warnings about pipelining being impossible. That seems somewhat confusing.
After reading the source code, it turns out the answer is calling (series::install :shadow T :macro T :implicit-map nil). How is (series::install ...) supposed to be discoverable with (describe (find-package :series)) in SBCL when it is an internal symbol of package SERIES? Usability here is less than discoverable.
Listing all exported package symbols of SERIES obviously also fails here.
Furthermore, the source code and naming in "s-code.lisp" suggest that
(series::process-top ...) may be useful for expanding series expressions
to their pipelined (read optimized/streaming/lazy) implementations.
This is desirable for passing the optimized version on to PARENSCRIPT
or other compilers. Here is the catch: It fails when the series expression
is supposed to return multiple values. One of the points of using SERIES,
is that Waters and his fellow researchers already took care of handling
multiple return values. (If the lisp implementation is smart enough, this
seems to mean that these values are kept in registers during processing.)
After some tinkering, there is a solution that also handles multiple
return values:
    (defun series-expand (series-expression)
      "(series::process-top ...) has problems with multiple return values."
      (let (series::*renames*
            series::*env*)
        (series::codify
         (series::mergify
          (series::graphify
           series-expression)))))
Will submit pull requests once I am comfortable enough with the source code.
Yes, the SERIES papers Waters(1989a,b) bemoan the lack of deep integration
with Common Lisp compilers. And yes, they could have been resolved by
making SERIES part of ANSI Common Lisp like LOOP was. They could
theoretically also have been resolved by having explicit compiler and
environment interfaces in ANSI Common Lisp. That is not the world we
seem to live in today. Nevertheless, package SERIES solved all of the hard technical problems. Once people know about the documentation failings, SERIES is a powerful hammer for combining streaming/lazy evaluation with function composition, as well as with other compilers like PARENSCRIPT.
I think one issue is that Series has been hacked on since that paper, which has not been updated.
Anyway, I guess series::let is an unexported symbol? I.e. not series:let? There is a good reason for that.
If series:let were exported, then you would get a clash condition by doing (:use :cl :series). The CL package system detects and flags ambiguous situations in which different symbols would become visible under the same name. You would need a shadowing import for all the clashing symbols.
It's probably a bad idea for any package to export symbols that have the same names as CL symbols. Other people say that using :use (other than for the CL package) is a bad idea. Either way, if you have clashing symbols, whether exported or not, you're going to be importing them individually if you also use CL, which is often the case.
> PR review of a community effort to add Bun to the Techempower benchmark
Has this been added? That PR got closed right? Whilst valid, it was sad that Bun didn't make it. Would be good if someone from the community or Bun team can give it another go.
That was certainly the promise/hype of Rust ~2-3 years ago, that it was going to become so ergonomic that even "boring line-of-business applications" (i.e. JS backends) could be written in Rust, by everyday programmers, without any slow down in delivery/velocity due to the language complexity.
But, from what I've seen, that's not played out, and instead the community is still on the look out for the killer "systems-language performance, but scripting-language ergonomics" language.
I write a fair bit of C#. It feels like the prequel to TypeScript. Its lack of discriminated unions and type narrowing feels like a step back from TypeScript.
Given that TypeScript is driven in large part by one of the main long-term C# language designers (until he moved to TS), I feel a lot of the core of the TS type system is what they wanted to do in C# but for legacy reasons just can’t.
Until recently I've been using Deno (mostly to avoid using Node and the tooling hell that entails) and it looks like for my use-cases Bun is getting there. I've had a pleasant experience using Bun as the basis of a test harness.
Here's my question (with a tiny bit of lead-in):
What I like about Deno is the integrated LSP (reducing tooling hell); are there any plans for Bun to feature this too? Bun already internally transpiles TypeScript, which is great, but having the LSP bundled too would make the single-binary integrated experience even better, I feel.
Looking forward to Bun 1.0!
P.S. I'm starting to stretch my Zig muscles, you looking for Zig developers? ;)
I don't understand the hand-wringing about this. Bun explicitly says, they are a small team working very hard on some hard problems. If that's your idea of a good time, you're free to try and join. If you want a chiller job, there's 1,000 of them out there. It's not like Jarred is some corporate overlord demanding people slave for him while he sips margarita on a beach... he just wants people working on the same frequency as himself.
There's a school of thought on work life balance which amounts to wanting just enough life overhead to support the work. That 'balance' is not for everyone - but crucially it is what some people want.
>That 'balance' is not for everyone - but crucially it is what some people want
I've never seen anyone that could sustain an 80+ hour per week grind and make it out without severe personal issues (whether they are willing to acknowledge it or not). I've seen many, many incredibly talented people burn out and suffer permanent health or career damage to hit their short-term goals. I personally know an otherwise healthy 30 year old swe who had a stress related heart attack. It may be what some people want but you can't grind your way out of being a human.
But are they compensated or are we dealing with disguised wage theft[1]? A lot of times, when it's time to pay all that overtime or when someone finally speaks up about it, suddenly the "fun" stops.
Then there is the not speaking out, resulting in: 1) Burn out and quit. 2) Company dumps or fires them after burning them out. Then does the same to the new ones. Until something obvious or tragic stops them. 3) Quiet destruction of personal lives. Sometimes leading to significant health and/or mental problems, related to stress, and even suicide in some cases.
Balance is necessary, because otherwise it can be like playing with fire. It's all "fun and games", until people get or realized they got burned.
New product teams are a grind, but with the right people, also a lot of fun. It isn't for everyone. When I was hiring for a NPT working on cutting edge tech, I told everyone I interviewed that the work-life balance was super skewed.
The people who accepted job offers self selected for having a passion for pushing technology forward.
I tried to keep things as sane as I could, but I'd have to go in on weekends and usher people out of the office.
For some people, building cutting edge things is /fun/.
You can pretty much use tsserver with bun-types and get most if not all the features you get from deno-lsp. I know because we provide both deno-lsp and tsserver for windmill.dev to provide intellisense over websocket/jsonrpc for our monaco webide at windmill.dev and it works great :)
> The plan is to run our own servers on the edge in datacenters around the world. Oven will leverage end-to-end integration of the entire JavaScript stack (down to the hardware) to make new things possible.
So how does oven-sh the company make money? It sounds like you release Bun open source, and then sell access to your edge infrastructure to enterprises?
Are these edge servers basically running a smarter version of npm with a CDN? Can you say more about what this may eventually do?
Will individuals be able to use the edge servers via some free tier?
Does Bun in its current form use this edge infrastructure already?
Have you given thought to what Bun 2.0 would look like? What major features would it have? Or is 2.0, if it ever happens, mostly about making Bun work 'at the edge'?
The honest answer is: don't know yet, but if it doesn't happen in 1.0, it will be the priority for 1.1.
I'm going to do some experiments in the next few days and see how it goes.
Roughly, the way we're thinking of adding Windows support to Bun is:
1) Get all the Zig code using platform-specific system APIs to use the closest Windows equivalent API. Fortunately, we have lots of code for handling UTF-16 strings (since that's what JS uses in some cases)
2) Get uSockets/uWebSockets (C/C++ library we use for tcp & http serve) to compile for Windows, or fall back to using libuv if it takes too long to make it work
3) Get the rest of the dependencies to compile on Windows
4) Fix bugs and perf issues
There are a lot of open questions though. None of us are super familiar with I/O on Windows. JavaScriptCore doesn't have WebAssembly enabled on Windows yet.
The biggest thing I'm worried about (other than time) re: Windows is async i/o. In Bun, we _mostly_ use synchronous I/O. Synchronous I/O is simpler and when using SSDs, is often meaningfully lower overhead than the work necessary to make it async. I've heard that anti-virus software will often block I/O for potentially seconds, which means that using sync I/O at all is a super bad idea for performance in Windows (if this is true). If that is true, then making it fast will be difficult in cases where we need to do lots of filesystem lookups (like module resolution)
On Windows you may consider using higher level IO routines. For example, for HTTP requests you can use WinHTTP which is super fast and scalable. For other IOs you can use Windows Thread Pool API(https://learn.microsoft.com/en-us/windows/win32/procthread/t...) so that you do not need to manually manage threads or register/unregister IO handlers/callbacks. gRPC uses that.
Though Windows IOs internally are all async, it actually makes using sync I/O easier, and you do not need to say it is a super bad idea. Windows has IOCP. If the machine has n logical CPUs, you may create a thread pool with 2*n threads. And, by default, the operating system will not make more than n threads active at the same time. When one of the threads is doing blocking IO and has entered the IO wait state, the OS will wake up another thread and let it go. This is why the number of threads in the thread pool needs to be larger than the number of CPUs. This design doesn't lead to an optimal solution; however, practically it works very well. In this setting you still have the flexibility to use async IOs, but it is not a sin to use sync IO in a blocking manner in a thread pool.
Disclaimer: I work at Microsoft and ship code to Windows, but the above are just my personal opinions.
> JavaScriptCore doesn't have WebAssembly enabled on Windows yet.
I got JavaScriptCore compiling with WebAssembly enabled yesterday, but I don't know how long it'll take to get it to actually work.
The bigger problem for Bun is that JavaScriptCore doesn't have the FTL JIT enabled on Windows [1]. It's going to be much slower than other platforms without that final tier of JIT; it shows up pretty dramatically on benchmarks.
Sync IO is probably fine on Windows with the exception of CloseHandle, in which case Windows Defender or other AV will invoke a file filter in the kernel's file I/O filter stack to scan changes for data recently written to the file. A common approach used in Rust, version control software, and other runtimes is to defer file closing to a different thread to keep other I/O and user-facing threads responsive. All that said, I think IOCP on Win32 is a far superior asynchronous programming model to the equivalent APIs on Linux which feel far less usable (with more footguns).
This definitely also used to be true on macOS. Bun previously would just request the max ulimit for file descriptors and then not close them. Most tools don't realize there are hard and soft limits to file descriptors, and the hard limit is usually much higher.
On Linux, not closing file descriptors makes opening new ones on multiple threads occasionally lock for 30ms or more. Early versions of `bun build` were something like 5x slower on Linux compared to macOS until we narrowed down that the bug was caused by not closing file descriptors.
Hey Jarred, any idea where the bundler + React server components fall in the priority list? Colin's post[1] made me excited about the idea of having a lightweight RSC compiler/bundler built into bun. I'm curious when it'll be considered usable and ready for experimentation.
Bun is an executable as far as I understand. Would it be possible to call Bun directly from another language with bindings?
For example Erlang (and Elixir) has Native Implemented Functions[0] (NIF) where you can call native code directly from Erlang. Elixir has the zigler[1] project where you can call Zig code directly from Elixir.
Maybe you can see where I'm going with this, but it would be super cool to have the ability to call Javascript code from within Elixir. Especially when it comes to code that should be called on the server and client. I'm the developer of LiveSvelte[2] where we use Node to do SSR but it's quite slow atm, and would be very cool to use Bun for something like this.
When will Bun use an open standard like IRC, XMPP, or Matrix as a community chat option instead of being limited to usage of proprietary Discord & subject to their ToS?
The crowd that is blocked by sanctions, values privacy/freedom, has certain accessibility needs third-party clients could provide, or doesn’t have powerful enough hardware or a big/fast enough internet plan likely hasn’t been able to participate even if they wanted to. It’s a bad choice, and we see users get banned for weird, non-project-related reasons & they lose access to the community due to the whims of the Discord corporation.
My point is that the vast majority of users want Discord now, and second is Slack. I wish there were good enough Matrix clients and features that users demanded Matrix, but they don't.
Eh, I mean it’s better than Discord & Slack, yes. The Matrix model of mirroring the entire history of all user conversations, including attachments, makes it costly to host the storage side. This in turn makes self-hosting unappealing, which has led to de facto centralization around Matrix.org. I would like to see a greater uptake in XMPP MUCs as it’s way lighter to self-host …or IRCv3, which is lightweight but has some modern comforts. Chat needs to be treated as non-permanent, whereas Slack & Discord have made folks over-reliant on it & now we can no longer do a simple web search.
Without in any way suggesting this is a solution rather than a work around, the last time I needed Discord for something I used bitlbee to present it as an ircd and it worked out nicely.
That workaround is obviously not ideal & still requires an account (& possibly a phone number too). And your private message will still be logged. And using the service also still causes a big issue for search as you can’t find solutions using an engine like you could with a forum.
Given the resources, a community would self-host a decentralized, federated server where users are in control of their account/data and the community is in control of moderation, bans, CoC, ToS… & then bridge to other services from that base if they are seen as useful. If a community has fewer resources, certain servers use fewer resources, especially if the server isn’t supposed to hold the entire history (these chat rooms shouldn’t be seen as a place for permanent decision making anyhow).
There's Deno already if someone really wants their JS runtime in Rust. Personally, since everything just runs on a JIT JS runtime anyway, I don't really see too much of a difference between Node, Deno, and Bun. It's not like the JS is being AOT compiled via Zig or Rust, which would be very interesting: you could basically treat JS as a compiled language rather than an interpreted one, even if the JIT is already fast.
Why would the implementation language matter for the compiler? It is a traditional input-output algorithm, it is either good and does many fancy optimizations for good output, or it isn’t. Sure, the speed of compilation may vary, but that’s a different question.
Also, JS being such a dynamic language, it will likely perform better with a speculative JIT compiler, e.g. a shape can be assumed for a frequently used object. This is only possible with PGO to a degree.
In theory it could be: compiling for the specific latency profile and instruction-set of each individual machine. In practice that has never been done.
Are there any plans to enhance the bun compile functionality, e.g. bytecode generation? Similar to pkg in the node world. We use the latter to ship closed-source binaries to customers.
"Rather than runtime permission checks (Deno's model) which could potentially have bugs that lead to permissions being ignored, Bun plans to have binary dead code elimination based on statically analyzing what features/native modules are used by the code."
If you have a lot of duplicated functionality in a web frontend and backend, it may be a lot more maintainable to do the development once and not have to keep two implementations in sync.
As far as performance goes, if you’re using one of the JS application frameworks with server-side pre-rendering, time to interactivity may very well be faster than anything you can build in Go or Rust.
If you're building in plain Go with pure SSR, sure, but OP is talking about building a React frontend that is pre-rendered server-side. TTI very well could be faster with that than with a React front end that talks to a Go backend.
I'm kind of lost here. It's been ages since I wrote anything frontend related so here the question: how does it matter?
I mean unless you have pure front end app - you are still going to talk to some backend. Regardless of how you got your frontend part - generated by the server or a static file served by nginx.
Modern JS frameworks pre-render the components on the server then they attach to them to "rehydrate" and add JS handlers on the client. They also allow mixing of purely-server-side components (essentially templates, no JS or hydration) and mostly-client-side components (that still pre-render on the server)
For Rust, Leptos (https://leptos.dev/) would be one choice that can do SSR+hydration (but not intermixed with server-side components, at least AFAIK not yet)
While many people have the impression the Rust type system slows them down while prototyping, I think it is a huge time saver when working on existing code, so I'm not sure why you think Rust isn't as maintainable.
Because there's a large productivity gain from being able to use one language for both the frontend and the backend, and server-side JS tooling is far more mature and usable than client-side Go or Rust.
Because they simply can't convert all of their js code into either go or rust for a forseeable future? Even providing they actually find those languages desirable, comparing to typescript.
Yeah it’s a silly question. It’s like going into a python thread and being like “why didn’t you implement all of this in Go or Rust?” Well, there’s a thousand reasons, and it’s really not even worth the time to unwrap this very reductive question that is really only going to turn into a language flame war for no reason.
We provide both bun and deno as typescript runtimes for windmill.dev (oss retool alternative) and we get overwhelming feedback that people are confused by the little changes required to adapt to deno and would rather just stick to the node.js mode aka bun. Now that bun is getting close to 1:1 node.js support, I predict we will see a lot more bun everywhere.
@Jarred was super responsive to help me adapt bun to our distributed cache storage, so all props to him.
While it is sad, I can't help but agree. I've spent many, many years working with Node and tried picking up Deno recently. The small cuts quickly add up where getting even some of the official examples to work takes effort because they're ever so slightly out of date and things have changed in the interim.
I remember when the project started. I believe that the ability to learn the language as quickly as he did, get the feedback required and iterating quickly helped a lot! All of this was possible with Zig. People are amazing and great inspiration, it's not sad at all.
Is that still the case though? Recent Deno releases have obscured how good its Node compatibility mode is; I was just wondering if you have more details.
Thanks for building Windmill. I spun up a test instance and was blown away by the well-designed-ness.
I'm looking to investigate using Windmill as a website builder for some small internal system. A few questions:
- Is it possible to set up a custom path for flows (to hack it into a REST API)?
- How can we make an authenticated flow?
I have never used Bun before. I tried to create a nextjs app with it and it hangs for me when it gets to the Tailwind CSS step. I am using WSL and following the official nextjs installation docs for installing nextjs using Bun.
$ bun x create-next-app
What is your project named? … mytest2023
Would you like to use TypeScript? … No / Yes
Would you like to use ESLint? … No / Yes
? Would you like to use Tailwind CSS? › No / Yes
Wow, this looks like a great release. Excited about debugger support!
> This domain debug.bun.sh hosts a stripped-down version of Safari Developer Tools designed for debugging Bun. Open the link in your preferred browser and a new debugging session will start automatically.
Is there a way to debug without an Internet connection?
It's also sad that Bun is written in Zig, whose stdlib hasn't had a proper review since the language isn't at v1 yet; Andrew has already stated publicly that Zig should not be used in production until v1 due to security vulnerabilities in the standard lib.
Bun uses Zig’s standard library carefully. We read the code for the parts we use (and sometimes change it locally). We also rely on mature C/C++ libraries like picohttpparser, BoringSSL, and c-ares (along with the C standard library). We use lots of code from WebKit and are starting to adopt more of its security features, like segregated heaps.
Bun is currently below v1.0, so not declared production-ready either.
But I'm not sure Bun should go to 1.0 without Zig going to 1.0 too. And Zig 1.0 is a distant target, which is a normal timeline for a programming language.
Bun is apparently targeting September 7th for a 1.0 release. I also have concerns about delivering a stable api using a language that doesn't have one yet.
Tbh I've had the same issue delivering my own 1.x software with 0.x dependencies, and my argument has always been that it's my problem, not the client's, so "do not worry". Being realistic, no software offers any guarantees, and Zig's 1.0 stability is more of a "contract" than a "guarantee".
I've never heard Andrew say you shouldn't use Zig in production because of "security vulnerabilities", but simply because Zig is quite immature (expect bugs including segfaults in perfectly "good" code) and changing constantly, not something anyone should want in a production setting.
Honestly. In a world in which JavaScript is the number one language, I’m walking back on the idea that “instability means no good for production”.
10 years ago, I thought for sure that JavaScript devs were eventually going to get sick of breaking changes and deprecations, but they’re still going strong: picking frameworks that just don’t give a shit, switching to new breaking tool chains, etc.
It seems that there are A LOT of developers willing to put up with a whole lot more than I am.
The one I found a while back was a DoS in UTF-8 decoding. I believe it's since been fixed, but things like that concern me given that the standard library hasn't been audited. Andrew will definitely have that happen at some point, but I'd not put anything Zig into production right now, personally.
> Andrew has already stated publicly that Zig should not be used in production until v1 due to security vulnerabilities in the standard lib.
Andrew has made the claim a couple years ago that Zig should not be used in production yet. The part about security is not at all part of anything he ever claimed, and is in fact only something that you went crazy about on your own.
While I can understand holding real hard onto your opinion, please don't put words in people's mouths, especially when the person in question does not share your position on the matter at all.
As an outside observer without any connection to any of these project I'd recommend that you step back from posting strongly held negative opinions and reflect on your biases and assumptions here. It sounds like you are unreasonably disgruntled against zig and derailing tangentially related threads, then claiming unwarranted victimhood.
We are talking about two projects (zig and bun), likely years before 1.0, and you complain they're not perfect, or improving on your timeline. Projects improve security and quality by increasing adoption and thus human resources available for auditing and fixes. You seem to be advocating against adoption, or presuming current users are uninformed about the project's status.
My hobbyist-level interactions with the Zig community indicated nothing but calm professionalism and enthusiasm for quality software.
I find it rather odd that junon keeps bringing up this security issue, for the past 6+ months, in every thread that's even in the same neighborhood as Zig. He acknowledged that maybe these conversations should be private, to the point of asking dang to anonymize his past public comments so they wouldn't be associated with him.... just to repeatedly do it again.
Are we going to come back to this thread in a month and see all these comments of his anonymized too?
I have no dog in this fight, as I don't use Bun or Zig nor do I plan to, but from another outsider's perspective, he definitely seems to have a grudge against Zig and Andrew and is trying to play victim over it.
Please observe that in my post I only recommend reflection, and describe how their communication sounds like to me. Seems like I'm not the only one. I am specifically trying to avoid the overtly adversarial language of telling someone off, or telling them what to think or do, so please don't ascribe that unnecessarily.
Dude. It’s really odd behaviour for a person to have their pull request rejected and respond by going into every single HN post related to Zig and posting major exaggerations.
Your pull request was rejected because it wasn’t the direction the language wanted to go.
You keep saying that zig people don’t wish to reconcile privately, but looking at your posts, it’s clear why they’ve stopped engaging with you.
> Yes you and Andrew seem to have a vendetta against me on HN
Two years ago you found a utf8 decoding function in the stdlib that asserted in its documentation that it expects valid utf8. You then went to a Zig community on Discord and started saying that it's a vulnerability because if you feed it invalid utf8, the function will not work correctly. People told you that, well, that's part of the function's contract, but you didn't want to hear it and went on to post everywhere that Zig doesn't take security seriously (actual quote from you). People also tried to explain to you that a function that does validate the encoding would be welcome, but that since Zig was a new programming language, we didn't have one yet and that for now that's what the stdlib offered (ie the function that expects valid utf8). In the meantime somebody else did implement the better API but, two years later, you're still here fixated on that same thing.
> Just strange the Zig team refuses to reconcile this privately and instead resorts to berating me on HN of all places.
From my perspective the best outcome would be that you somehow realize how silly this entire thing is and finally let go. For more complex situations I could understand having an "agree to disagree" conclusion, but given the incredibly ridiculous nature of this specific issue I don't think there's much more for anybody else to learn.
If the above can't happen then I would ask you simply to stop posting misinformation about Zig.
I wonder if Bun would already be past 1.0 if all the time spent doing the "bundler" stuff was instead spent on the runtime itself.
Did Jarred think that having a weaker variant of every feature in the JavaScript world would somehow make it more attractive? Why was "bun build ..." considered better or necessary when there is `bunx esbuild ...`?
I like that it's trying to be a faster Node, and the better native database drivers etc. Having a decent package manager alongside is also nice but anything else makes me question the long term vision/feasibility of Oven and Bun's maintenance.
`Bun.build` is a game-changer. The key is that it's a fast and simple bundling _primitive_. You can use it to write your own `webpack-dev-server`; you can spin up 10 of them in parallel or in series with one API call. And it's only going to get better. It 100% made sense for Bun to own this. Bun's plugins work for both bundling and the server runtime. Another example: for SSR, you can parse the AST of a file once and re-use it for both the SSR runtime and bundling during dev. The key is that bundling is no longer a _heavy_ task that you spawn in another process; it can be as simple as a function call.
Once you get it, you will realize how much of a game-changer it is. This is the reason Bun is going to win.
I like that Bun is going in the direction of a monolithic, self-contained binary. I'm sick of fiddling with tsc, ts-node, and stringing together 5 different build tools.
I'd rather install Bun, then get on with building our app.
Isn’t that like saying “glad TypeScript exists but still prefer Flow”? One does a tiny subset of the other, after all. When you compare what Flow does to what TypeScript does, you can argue that the subset of TypeScript overlapping Flow is comparable to what Flow offers, but that you get basically everything Flow offers and 100x more by adopting TypeScript. Flow might even be better at several things, but if it’s only really good at solving 5% of the total problem, I’m going to pick the solution that also solves the other 95%, even if its take on that 5% is just mediocre.
Note that I don’t really think that bun is the solution for everything or really that it is all that great. I am just pointing out that you are trying to compare two things that are not really the same thing at all. One is probably a strict superset of the other
I think their point is that Bun doesn't have to be a runtime, because you can use it to spit out a blob of JS that the browser runs as normal. That's how I use it to build JS libs: Bun is then just the build and test tooling, and the various runtime bugs don't affect my downstream users (though they can impact testing).
Oh, people only use Bun as a build tool? Sorry, I thought it was being discussed as a runtime in the same land as Deno and Node. If it’s just a build tool, then parent comment is legitimate and I apologize. It’s weird though, when I google bun js the first thing it describes itself as is a “fast all-in-one JavaScript runtime”
And it's a fricken great build tool, because it speaks TypeScript, Jest, and bundling out of the box, so you remove several layers of partially compatible tools (e.g. ESM, Jest, and TypeScript don't really work that well together), yet you still end up with a blob of JS at the end, so it's transparent to the browser end user.