> It's in Java, but the lessons can be applied in every language.
I can only discourage anyone from applying Java patterns all over the place. One example in JavaScript: there was a feature that required some parameters with default values. The plain solution would have been:
function doStuff({ x = 9, y = 10 } = {}) { ... }
Instead, they created a class with private properties and used the builder pattern to set them. Totally unnecessary.
- Everything locally stored in the repo: PRs, comments, issues, discussions, boards, ...
- CLI first
- Offline first (+ syncing)
- A website for hosting/presentation
Noted :) In another comment I linked to beads, which is a cool project to keep your issue tracker in your repo, but that's just a personal thing, no comment on what the company plans to do (or not) in this area.
I use command-line tooling much more than IDEs (e.g. VS Code), so the `gh` command-line tool (https://cli.github.com) for doing most of the usual hub-oriented workflow (PR authoring, viewing issues, status updates, etc) really helps a lot - I don't have to constantly <cmd>+<tab> to my browser, and point-click-point-click through web pages so much. It would be fantastic if ersc or any other jj-centered code-sharing hub had similar tooling early on.
When I tried Fossil it had things weirdly separated.
I was expecting that when I made a commit, I would have the facility to specify which issues it addressed, and it would close them for me automatically. There seemed to be so much opportunity to "close the loop" when the issue tracker etc. is integrated into your VCS, but it wasn't taken.
That's my favourite thing about fossil though. History is what it is, not simplified to look "clean" (i.e. hiding what actually happened and when), and you get a lot fewer footguns to ruin everything by accidentally rebasing things to the wrong place without noticing.
I have huge respect for Mitchell, it's impressive what he achieved.
I agree with all the points of this article and would like to add one: Have a quick feedback loop. For me, it's really motivating to be able to make a change and quickly see the results. Many problems just vanish or become tractable when you playfully modify your source code and observe the effect.
This perfectly aligns with my experience.
Every large project I have worked on showed a clear correlation between how easy it was to set up and run and the number of problems on the project, like bugs and missed deadlines.
Totally agree. I work in LLM training software and I believe progress in the field is actually much slower than it should be because of the excruciatingly long feedback loops involved in development. The software stacks are deep and abstract and much of the testing involves full integration tests that take a long time to spin up.
Interesting. What aspects of the development workflow/cycle have the most room for improvement (i.e. is there a ranking of the "height" of the "hanging fruit" throughout the process)? What sort of software tooling would help?
YES that is one of the all-time most inspiring talks I've ever seen. DX is so important. I got a taste for this kind of thing when I first encountered LiveReload (circa 2012?) and radically upgraded my and my team's webdev workflows.
E2E tests in a high ratio to other tests will cause problems. They’re slow and brittle and become a job all on their own. It’s possible that they might help at the start of debugging, but try to isolate the bugs to smaller units of code (or interactions between small pieces of code).
Hermetic e2e tests (i.e. ones that can run offline against fake APIs/databases) don't have that problem so much.
They also have the advantage that you can A) refactor pretty much everything underneath them without breaking the test, B) test realistically (an underrated quality) and C) write tests which more closely match requirements rather than implementation.
> i.e. ones that can run offline and fake apis/databases
I can see a place for this, but these are no longer e2e tests. I guess that’s what “hermetic” means? If so it’s almost sinister to still call these e2e tests. They’re just frontend tests.
> A) refactor pretty much everything underneath them without breaking the test
This should always be true of any type of test, unless it's behavior you want to keep from breaking.
> B) test realistically (an underrated quality)
Removing major integration points from a test is anything but realistic. You can do this, but don't pretend you're getting the same quality as a colloquial e2e test.
> C) write tests which more closely match requirements rather than implementation
If you’re ever testing implementation you’re doing it wrong. Tests should let you know when a requirement of your app breaks. This is why unit tests are often kinda harmful. They test contracts that might not exist.
> try to isolate the bugs to smaller units of code (or interactions between small pieces of code).
This is why unit tests before e2e tests.
It's higher risk to build on components without unit test coverage, even if the paltry smoke/e2e tests say it's fine per the customer's input examples.
Is it better to fuzz low-level components or high-level user-facing interfaces first?
IIUC in relation to Formal Methods, tests and test coverage are not sufficient but are advisable.
Competency Story: The customer and product owner can write BDD tests in order to validate the app against the requirements
Prompt: Write playwright tests for #token_reference, that run a named, factored-out login sequence, and then test as a human user would that: when you click on Home, it navigates to / (given browser MCP and, recently, the Gemini 2.5 Computer Operator model)
And I would add that e2e tests should be more about the business rules: making sure everything is there for a specific flow, and not caring that much about the intricacy of things. As such, it should really be part of Ops, not Dev.
Quick feedback with unit tests can help. It can be a pain to decouple stuff so you can test them better, but it’s worth it IMO.
This might be said in jest. But does everything have to be for world domination? Is the guy not allowed to have actual hobby projects? That go just where he fancies, including potentially nowhere at all really...
A little bit unrelated, but how do people deal with the absence of payloads in zig errors? For example, when parsing a JSON string, the error `UnexpectedToken` is not very helpful. Are libraries typically designed to accept an optional input to store potential errors?
The idiomatic way in Zig is to return the simple unadorned error, but return detailed error data through a pointer argument passed into the function, allowing the function to fill in extra information before returning an error.
The advantage of this is that everything is explicit, and it is up to the caller to arrange memory usage for error data; i.e. the compiler does not trigger any implicit memory allocation to accommodate error returns. This is a fundamental element of Zig's design: there are no hidden or implicit memory allocations.
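A minimal sketch of that pattern (all names here are invented for illustration):

```zig
const std = @import("std");

/// Hypothetical diagnostics struct; the caller owns the storage.
const Diagnostics = struct {
    offset: usize = 0,
    found: u8 = 0,
};

/// Returns a plain error, but fills in detail through the optional
/// out-pointer. No hidden allocation happens on the error path.
fn parseDigits(input: []const u8, diag: ?*Diagnostics) error{UnexpectedCharacter}!u32 {
    var result: u32 = 0;
    for (input, 0..) |c, i| {
        if (c < '0' or c > '9') {
            if (diag) |d| d.* = .{ .offset = i, .found = c };
            return error.UnexpectedCharacter;
        }
        result = result * 10 + (c - '0');
    }
    return result;
}

test "details are available on failure" {
    var diag: Diagnostics = .{};
    try std.testing.expectError(error.UnexpectedCharacter, parseDigits("12x4", &diag));
    try std.testing.expectEqual(@as(usize, 2), diag.offset);
}
```

A caller who doesn't care about the details simply passes `null`.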
Well, it is an optional parameter (my typo made that unclear), so the caller may omit it if desired. The reason for this particular tradeoff is correctness over convenience, e.g. even if the failure is an OOM condition, it can still be reported.
OK, this sounds nice. You would just need some syntax sugar to make this ergonomic (both on the side of the function call and on the side of the erroring function), but the fundamentals seem solid.
I think there's space for a post-Rust, post-Zig language to combine the approaches of both and make it possible to do away with automatic heap allocation (when needed - not every piece of code wants to bother with this), but also don't make code overly verbose when doing so.
Well, to look at it optimistically, it's meant as a foundational language, to take over the role C still has today, of being the only true glue language that can underlie all the others. If Zig can actually take over that function, then it will be a major upgrade to the ecosystem, even though it will never be the language of choice for most projects.
Despite all my bashing regarding C, I would rather keep C around with a standard -fhardening, -fsafe, or whatever, with similar constraints: enums without implicit numeric conversions, standard library types for arrays and strings, no decay of arrays into pointers.
Probably more than enough if we leave C in a kind of portable-assembler role, to be used as much as assembly is, and leave everything else to safer managed languages.
A model just like the one the UNIX authors themselves applied when creating Inferno with Limbo, with the learnings of UNIX and Plan 9.
Not a fan of @, !, .{ } all over the place, or the struct-based imports that look like JavaScript require() instead of a proper module system.
The main difference is that C doesn't have errors (result types) baked into the language. So the expectation would be that, in the Zig example above, the calling function would never even bother to inspect the error details unless the error path was triggered by the called function.
> Are libraries typically designed to accept an optional input to store potential errors?
Yes. Stdlib's JSON module has a separate diagnostics object [1]. IMO, this is the weakest part of Zig's error handling story, although the reasons for this are understandable.
I'd like to note that std.json, as it currently stands, is not a good example of proper error handling. Unless you use that awkward lower-level Scanner API, if you get a schema mismatch it reports some failure code and does not populate a diagnostics struct, which is painful and useless.
On the other hand the std.zon author did not make this mistake, i.e. `std.zon.parse.fromSlice` takes an optional Diagnostics struct which gives you all the information you need (including a handy format method for printing human readable messages).
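For reference, usage looks roughly like this. This is a sketch against the API as described above; exact signatures, deinit requirements, and format specifiers vary between Zig versions, so treat the details as assumptions and check the std docs:

```zig
const std = @import("std");

const Config = struct {
    name: []const u8,
    port: u16,
};

fn loadConfig(gpa: std.mem.Allocator, source: [:0]const u8) !Config {
    var diag: std.zon.parse.Diagnostics = .{};
    defer diag.deinit(gpa);
    return std.zon.parse.fromSlice(Config, gpa, source, &diag, .{}) catch |err| {
        // Diagnostics has a format method, so it prints a readable message.
        std.debug.print("failed to parse config: {}\n", .{diag});
        return err;
    };
}
```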
I presume sometime in the not-immediate-but-not-too-distant-future there is going to be a push to "unify" std with a bunch of the "best practices" and call them out in the documentation.
I wrote an article about one possible pattern which is a concrete realization of your question -- though with more ceremony and complexity since the pathway is fully compiled out if you don't use it (vs a nullable pointer strategy):
> Are libraries typically designed to accept an optional input to store potential errors?
Thank you all for these great and detailed explanations, I've learned a lot! I like the approach with an optional pointer; it fits Zig's philosophy quite well. Although there's a bit of a disconnect between the unadorned error and the corresponding data struct. I could imagine it requires care when the data struct is a union, as one needs to know which error corresponds to which variant.
I think the idea is errors are for control flow. If you have other information to return from a function, you can just return it — whether directly as the return value or through an “out” parameter or setting it in some context.
At a practical level, most of the language doesn't care about the distinction between errors and other types. You mostly just have to consider `try/catch/errdefer`. Your question then, mildly restated, is "how do people deal with cases where they want to use `try/catch/errdefer` but also want to return a payload?"
It's worth asking, at least a little, how often you want that in the first place.
Contrasting with Rust as an example, suppose you want Zig's "try" functionality with arbitrary payloads. Both functions need a compatible error type (a notable source of minor refactors bubbling into whole-project changes), or else you can accept a little more boilerplate and box everything with a library like `anyhow`. That's _fine_, but does it help you solve real problems? Opinions vary, but I think it mostly makes your life harder. You have stack unwinding available if you really need to see the source of a thing, and since the whole point of `try` is to bubble things up to callers who don't have the appropriate context to handle them, they likely don't really care about the metadata you're tacking on.
Suppose you want Zig's "catch" functionality with arbitrary payloads. That's just a `union` type. If you actually expect callers to inspect and care about the details of each possible return branch, you should provide a return type allowing them to do stuff with that information.
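A toy version of that union-return shape (names invented for the example):

```zig
const std = @import("std");

/// Instead of an error union, return a tagged union whose variants
/// carry whatever payload each outcome needs.
const FetchResult = union(enum) {
    ok: []const u8,
    not_found,
    rate_limited: struct { retry_after_secs: u32 },
};

fn report(result: FetchResult) void {
    switch (result) {
        .ok => |body| std.debug.print("got {d} bytes\n", .{body.len}),
        .not_found => std.debug.print("missing\n", .{}),
        .rate_limited => |info| std.debug.print("retry in {d}s\n", .{info.retry_after_secs}),
    }
}
```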
The odd duck out is `errdefer`. IMO it's reasonably common for libraries to want to do some sort of cleanup on "error" conditions, where that cleanup often doesn't depend on which error you hit, and you lose that functionality if you just return a union type. My usual workaround (in the few cases where I actually want that information returned and also have to do some sort of cleanup) is to have a private inner function and a public outer function. The inner function has some sort of `out` parameter where it sticks that unioned metadata. The outer function executes the code which might have to be cleaned up on errors, calls the inner function, and figures out what to do from there. Result location semantics make it as efficient as hand-rolled code for release builds. Not everything fits into that paradigm, but the exceptions are rare enough that the extra boilerplate really isn't bad on average (especially when comparing to an already very verbose language).
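Roughly the shape I mean, with hypothetical names:

```zig
const std = @import("std");

/// Hypothetical metadata the caller may want to inspect on failure.
const FailureDetail = union(enum) {
    none,
    bad_header: struct { offset: usize },
};

/// Private inner function: does the real work and reports detail
/// through the out-parameter.
fn parseInner(bytes: []const u8, detail: *FailureDetail) error{BadHeader}!u32 {
    if (bytes.len < 4) {
        detail.* = .{ .bad_header = .{ .offset = bytes.len } };
        return error.BadHeader;
    }
    return std.mem.readInt(u32, bytes[0..4], .little);
}

/// Public outer function: owns the resource needing cleanup, so
/// `errdefer` still works even though details travel via the out-param.
pub fn load(gpa: std.mem.Allocator, bytes: []const u8, detail: *FailureDetail) !*u32 {
    const slot = try gpa.create(u32);
    errdefer gpa.destroy(slot);
    slot.* = try parseInner(bytes, detail);
    return slot;
}
```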
Depending on the API, your proposal of having a dedicated `out` parameter exposed further up the chain to callers might be appropriate. I'm sure somebody has done so.
Something I also do in a fair amount of my code is let the caller specify my return type, and I'll avoid work if they don't request a certain payload (e.g., not adding parse failure line numbers if not requested). It lets you write a reasonably generic API without a ton of code complexity, still allowing callers to get the information they want.
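Something in this spirit (purely illustrative):

```zig
/// Caller decides at comptime whether line tracking is wanted;
/// with `false` the field is `void` and the scan never runs.
fn Parsed(comptime want_lines: bool) type {
    return struct {
        value: u32,
        line: if (want_lines) usize else void,
    };
}

fn countLines(input: []const u8) usize {
    var line: usize = 1;
    for (input) |c| {
        if (c == '\n') line += 1;
    }
    return line;
}

fn parse(comptime want_lines: bool, input: []const u8) Parsed(want_lines) {
    return .{
        .value = @as(u32, @intCast(input.len)),
        // Comptime-known condition: the call is compiled out entirely
        // when the caller did not request line numbers.
        .line = if (want_lines) countLines(input) else {},
    };
}
```

Callers that pass `false` get a zero-size field, so the generic API costs nothing when the payload isn't requested.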
> suppose you want Zig's "try" functionality with arbitrary payloads. Both functions need a compatible error type (a notable source of minor refactors bubbling into whole-project changes), or else you can accept a little more boilerplate and box everything with a library like `anyhow`. That's _fine_, but does it help you solve real problems? Opinions vary, but I think it mostly makes your life harder.
This is not true; you simply need to add a single new variant to the caller's error type, and either a From impl or a manual conversion at the call site.