Plenty. I assumed that the code examples had been cleaned up manually, so instead I looked at a few random "Caveats, alternatives, edge cases" sections. These contain errors typically made by LLMs, such as suggesting features that don't exist (std.mem.terminated), are non-public (argvToScriptCommandLineWindows), or have been removed (std.BoundedArray). These sections also surface irrelevant stdlib and compiler implementation details.
This looks like more data towards the "LLMs were involved" side of the argument, but as my other comment pointed out, that might not be an issue.
We're used to errata and fixing up stuff produced by humans, so if we can fix this resource, it might actually be valuable and more useful than anything that existed before it. Maybe.
One of my things with AI is that if we assume it is there to replace humans, we are always going to find it disappointing. If we use it as a tool to augment, we might find it very useful.
A colleague used to describe it (long before GenAI, when we were talking about technology automation more generally) as follows: "we're not trying to build a super intelligent killer robot to replace Deidre in accounts. Deidre knows things. We just want to give her better tools".
So, it seems like this needs some editing, but it still has value if we want it to have value. I'd rather this was fixed than thrown away (I'm biased, I want to learn systems programming in zig and want a good resource to do so), and yes the author should have been more upfront about it, and asked for reviewers, but we have it now. What to do?
There's a difference between the author being more upfront about it and straight-up lying in multiple locations that zero AI is involved. It's stated on the landing page, documentation and GitHub - and there might be more locations I haven't seen.
Personally, I would want no involvement in a project where the maintainer is this manipulative, and I would find it a tragedy if anyone contributed to their project.
> and yes the author should have been more upfront about it
They should not have lied about it. That's not someone I would want to trust and support. There's probably a good reason why they decided to stay anonymous.
We really are in the trenches. How is this garbage #1 on the front page of *HN* right now?
Even if it was totally legitimate, the "landing page" (its design) and the headline ("Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software."?????) should discredit it immediately.
When was the front page of HN that impressive anyways? It has always been the latest fad, and the first to comment "the right thing to say" gets rewarded with fake internet points.
I made a comment about it obviously being AI generated, and my comment[1] was quickly downvoted, and there were comments explaining how I was obviously incorrect ("Clearly your perception of what is AI generated is wrong."). My comment was pushed under many comments saying the book was a fantastic resource. Extremely strange.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
That issue has been deleted. In addition, the author has tagged other issues with labels that are not appropriate for the mission of being a core zig learning resource:
I seem to remember seeing this a week or two ago, and it was very obviously AI generated. (For those unfamiliar with Zig, AI is awful at generating Zig code: small sample dataset and the language updates faster than the models.) Reading it today I had a hard time spotting issues. So I think the author put a fair amount of work into cleaning up hallucinations and fixing inaccuracies.
The exchange on https://github.com/zigbook/zigbook/issues/4? If so, your botdar is better than mine. While the project owner doesn't understand the issue at first, they seem to get it in the end. The exchange looks sort of normal to me, but then I guess I am doomed to be fooled regularly in our new regime.
Some text in the book itself is odd, but I'll be a guinea pig and try to learn zig from this book and see how far I get.
The exchanges look totally like a robot to me. It looks to be deleted now. But the first response, which answered the bug report with how many systems it's been tested on, seems weird and robot-like. Then there's the screenshot where the "person" has to be told to scroll down - that is very AI-like.
I literally just came across this resource a couple of days ago and was going to go through it this week as a way to get up to speed on Zig. Glad this popped up on HN so I can avoid the AI hallucinations steering me off track.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
The author could of course be lying. But why would you use AI and then very explicitly call out that you’re not using AI?
There are too many things off about the origin and author to not be suspicious of it. I’m not sure what the motivation was, but it seems likely. I do think they used the Zig source code heavily, and put together a pipeline of some sort feeding relevant context into the LLM, or maybe just codex or w/e instructed to read in the source.
It seems like it had to take quite a bit of effort to make, and is interesting on its own. And I would trust it more if I knew how it was made (LLMs or not).
Because AI content is at minimum controversial nowadays. And if you are ok with lying about authorship, then it is not much further down the pole to embellish the lie a bit more.
I looked into that project issue you're referencing. There is absolutely zero mention of zig labeled blocks in that exchange. There is no misunderstanding or confusion whatsoever.
It's a formatting bug with zig labeled blocks and the response was a screenshot of code without one, saying (paraphrasing) lgtm it must be on your end.
I'd love it if we can stop the "Oh, this might be AI, so it's probably crap" thing that has taken over HN recently.
1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
2. Even if it were AI generated, that does not automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas we can likely be confident that AI can add value, if the tools are used carefully - it's a lot better at that than producing working software, because synthesising knowledge seen elsewhere and moving it into a new relatable paradigm (which is what LLMs do, and excel at), is the job of teaching.
3. Whether or not it's maintained is neither here nor there - can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in 6 months.
4. If there are hallucinations, we'll figure them out and prove the claim it is AI generated one way or another, and decide the overall value. If there is one hallucination per paragraph, it's a problem. If it's one every 5 chapters, it might be, but probably isn't. If it's one in 62 chapters, it's beating the error rate of human writers quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to get a clean v1.0 out there. Maybe it was all AI generated and they're hiding. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there is a rush of upvotes for "calling it out". This isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it's actually not helpful in this case to somebody who is keen to learn zig to decide if this resource is useful or not: there are many programming tutorials written by human experts that are utterly useless, this might be a lot better.
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.
> 1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
There is, actually.
You may copy the introduction to Pangram and it will say 100% AI generated.
What's the evidence that it is human-generated? Oh I see. If it is AI generated then you still have to judge it by its merit, manually. (Or can I get an AI to do it for me?) And if they lied about it being human-authored? Well, what if the author refutes that accusation? (Maybe using AI? But why judge them if they use AI to refute the claim? After all, we must judge it on its own merits (repeats forever))
> 2. Even if it were AI generated, that does not automatically make it worthless.
It does make it automatically worthless if the author claims it's hand made.
How am I supposed to trust this author if they just lie about things upfront? What worth does learning material have if it's written by a liar? How can I be sure the author isn't just lying with lots of information throughout the book?
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
I'm not sure what they expect, but to me Zig looks very much like C with a modern standard lib and slightly different syntax. This isn't groundbreaking, and it isn't a thought paradigm that should be that novel to most systems engineers, the way for example OCaml could be. Stuff like this alienates people who want a technical justification for the use of a language.
There is nothing new under the Sun. However, some languages manifest as good rewrites of older languages. Rust is that for C++. Zig is that for C.
Rust is the small, beautiful language hiding inside of Modern C++. Ownership isn't new. It's the core tenet of RAII. Rust just pulls it out of the backwards-compatible kitchen sink and builds it into the type system. Rust is worth learning just so that you can fully experience that lens of software development.
Zig is Modern C development encapsulated in a new language. Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions. All realtime development has to rewrite its entire standard library, as with the EASTL.
On top of the great standard library design, you get comptime, native build scripts, (err)defer, error sets, builtin simd, and tons of other small but important ideas. It's just a really good language that knows exactly what it is and who its audience is.
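For anyone who hasn't seen the pattern, here's a rough sketch of my own (not from the book, and exact stdlib details may shift between Zig versions) showing explicit allocator passing plus errdefer:

```zig
const std = @import("std");

// The function that needs heap memory takes its allocator explicitly,
// so the caller picks the strategy (general purpose, arena, fixed buffer, ...).
fn makePair(allocator: std.mem.Allocator, n: usize) ![2][]u8 {
    const a = try allocator.alloc(u8, n);
    errdefer allocator.free(a); // only runs if a later step fails
    const b = try allocator.alloc(u8, n);
    return .{ a, b };
}

test "caller decides how memory is managed" {
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena.deinit(); // one call frees everything makePair allocated
    const pair = try makePair(arena.allocator(), 16);
    try std.testing.expect(pair[0].len == 16 and pair[1].len == 16);
}
```

The point is that the caller can swap in an arena, a fixed buffer, or the testing allocator without the library code caring.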
I think that describing Zig as a "rewrite of C" (good or otherwise) is as helpful as describing Python as a rewrite of Fortran. Zig does share some things with C - the language is simple and values explicitness - but at its core is one of the most sophisticated (and novel) programming primitives we've ever seen: A general and flexible partial evaluation engine with access to reflection. That makes the similarities to C rather superficial. After all, Zig is as expressive as C++.
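A small illustration of what I mean, my own toy code rather than anything from the book: "generics" in Zig are just ordinary functions evaluated at compile time that return types, and the same machinery extends to reflection via @typeInfo and @field.

```zig
const std = @import("std");

// An ordinary function, run at compile time, that returns a brand-new type.
fn BoundedStack(comptime T: type, comptime capacity: usize) type {
    return struct {
        items: [capacity]T = undefined,
        len: usize = 0,

        pub fn push(self: *@This(), value: T) !void {
            if (self.len == capacity) return error.Overflow;
            self.items[self.len] = value;
            self.len += 1;
        }
    };
}

test "each instantiation is specialized at compile time" {
    var stack = BoundedStack(u8, 2){};
    try stack.push(1);
    try stack.push(2);
    try std.testing.expectError(error.Overflow, stack.push(3));
}
```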
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
I think that is just a symptom of a broader mistake made by C++ and shared by Rust, which is a belief (that was, perhaps, reasonable in the eighties) that we could and should have a language that's good for both low-level and high-level programming, and that resulted in compromises that disappoint both goals.
To me, the fact that Zig has spent so long in development disqualifies it as being a "rewrite of C."
To be clear, I really like Zig. But C is also a relatively simple language to both understand and implement because it doesn't have many features, and the features it does have aren't overly clever. Zig is a pretty easy language to learn, but the presence of comptime ratchets up the implementation difficulty significantly.
A true C successor might be something like Odin. I am admittedly not as tuned into the Odin language as I am Zig, but I get the impression that despite being started six months after Zig, the language is mostly fully implemented as envisioned, and most of the work is now spent polishing the compiler and building out the standard library, tooling and package ecosystem.
I don't think it's the implementation that's delaying Zig's stabilisation, but the design. I'm also not sure comptime makes the implementation all that complicated. Lisp macros are more powerful than comptime (comptime is weaker by design) and they don't make Lisp implementation complicated.
Fair. I'm not a compiler developer, so I'll defer to your expertise on that front.
That being said, I suppose my ultimate wonder is how small a Zig implementation could possibly be, if code size and implementation simplicity were the priority. In other words, could a hypothetical version of the Zig language have existed in the 80's or 90's, or was such a language simply out of reach of the computers of the time?
It's not quite as minimal as C, but it definitely could have been made in the 80s or 90s (actually, 70s, too) :) There were far larger, more complex languages back then, including low-level languages such as C++ and Ada, not to mention even bigger high-level languages. High-level languages were already more elaborate even in the 70s (comptime is no more tricky than macro or other meta-programming facilities used in Lisp in the sixties or Smalltalk in the 70s; it certainly doesn't come even remotely close to the sophistication of 1970s Prolog).
I don't think there's any programming language today that couldn't have been implemented in the 90s, unless the language relies on LLMs.
> Zig does share some things with C - the language is simple and values explicitness - but at its core is one of the most sophisticated (and novel) programming primitives we've ever seen: A general and flexible partial evaluation engine with access to reflection.
To my understanding (and I still haven’t used Zig) the “comptime” inherently (for sufficiently complex cases) leads to library code that needs to be actively tested for potential client use since the instantiation might fail. Which is not the case for the strict subset of “compile time” functionality that Java generics and whatnot bring.
I don't want that in any "the new X" language. Maybe for experimental languages. But not for Rust or Zig or any other that tries to improve on the mainstream (of whatever niche) status quo.
> leads to library code that needs to be actively tested for potential client use since the instantiation might fail
True, like templates in C++ or macros in C or Rust. Although the code is "tested" at compile time, so at worst your compilation will fail.
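A concrete toy example of that failure mode (mine, not from the book): the body of a comptime-generic function is only checked per instantiation, so a library can compile cleanly on its own yet fail for a particular client type.

```zig
const std = @import("std");

// The body is only type-checked when a client instantiates it with a concrete T.
fn sum(comptime T: type, items: []const T) T {
    var total: T = 0;
    for (items) |x| total += x;
    return total;
}

test "errors surface at the call site, not when the library is built" {
    try std.testing.expectEqual(@as(u32, 6), sum(u32, &[_]u32{ 1, 2, 3 }));
    // sum(bool, &[_]bool{ true, false }); // compile error: `0` and `+` make no
    // sense for bool, even though the library itself compiled fine until this
    // instantiation was attempted.
}
```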
> I don’t want that in any “the new X” language
Okay, and I don't want any problem of any kind in my language, but unfortunately, there are tradeoffs in programming language design. So the question is what you're getting in exchange for this problem. The answer is that you're getting a language that's both small and easy to inspect and understand. So you can pick having other problems in exchange for not having this one, but you can't pick no problems at all. In fact, you'll often get some variant of this very problem.
In Java, you can get by with high-level abstractions because we have a JIT, but performance in languages that are compiled AOT is more complicated. So, in addition to generics, low-level languages have other features that are not needed in Java. C++ has templates, which are a little more general than generics, but they can fail to instantiate, too. It also has preprocessor macros that can fail to compile in a client program. Rust has ordinary generics, which are checked once, but since that's not enough for a low-level language, it also has macros, and those can also fail to expand correctly.
So in practice, you either have one feature that can fail to compile in the client, or you can have the functionality split among multiple features, resulting in a more complicated language, and still have some of those features exhibit the same problem.
I wasn’t clear then. I would rather have N language features of increasing complexity/UX issues for dealing with increasingly complex situations rather than one mechanism to rule them all that can fail to instantiate in all cases (of whatever complexity). That’s the tradeoff that I want.
Why? Because that leads to better ergonomics for me, in my experience. When library authors can polish the interface with the least powerful mechanism with the best guarantees, I can use it, misuse it, and get decent error messages.
What I want out of partial evaluation is just the boring 90’s technology of generalized “constant folding”.[1] I in principle don’t care if it is used to implement other things... as long as I don’t have surprising instantiation problems when using library code that perhaps the library author did not anticipate.
[1]: And Rust’s “const” approach is probably too limited at this stage. For my tastes. But the fallout of generalizing is not my problem so who am I to judge.
> Okay, and I don't want any problem of any kind in my language, but unfortunately, there are tradeoffs in programming language design.
I see.
> So in practice, you either have one feature that can fail to compile in the client, or you can have the functionality split among multiple features, resulting in a more complicated language,
In my experience Rust being complicated is more of a problem for rustc contributors than it is for me.
> and still have some of those features exhibit the same problem.
Which you only use when you need them.
(I of course indirectly use macros since the standard library is full of them. At least those are nice enough to use. But I might have gotten some weird expansions before, though?)
That will have to do until there comes along a language where you can write anything interesting as library code and still expose a nice to use interface.
> I would rather have N language features of increasing complexity/UX issues for dealing with increasingly complex situations rather than one mechanism to rule them all that can fail to instantiate in all cases (of whatever complexity). That’s the tradeoff that I want.
It's not that that single mechanism can fail in all situations. It's very unlikely to fail to compile in situations where the complicated language always compiles, and more likely to fail to compile when used for more complicated things, where macros may fail to compile, too.
Its probability of compilation failure is about the same as that of C++ templates [1]. Yeah, I've seen compilation bugs in templates, but I don't think that's on any C++ programmer's top ten problem list (and those bugs usually appear when you start doing stranger things). Given that there can be runtime failures, which are far more dangerous than compilation failures and cannot be prevented, the fact that the much less problematic compilation failures cannot always be prevented is a pretty small deal.
But okay, we all prefer different tradeoffs. That's why different languages choose design philosophies that appeal to different people.
[1]: It's basically a generalisation of the same idea, only with better error messages and much simpler code.
I don't know man, Rust's borrowing semantics are pretty new under the sun, and actually do change the way you think about software. It's a pretty momentous paradigm shift.
To call Rust syntax beautiful is a stretch. It seems that way in the beginning but then quickly devolves into a monstrosity when you start doing more complex things.
Zig, on the other hand, specifically addresses syntax shortcomings in parts of C. And it does it well. The claim that Rust is safer than C because it's more readable applies to Zig more than it does to Rust.
I feel like the reason the rust zealots lobby like crazy to embed rust everywhere is twofold. One is that they genuinely believe in it, and the other is that they know that if other languages that address one of the main rust claims without all the cruft gain popularity, they lose the chance of being permanently embedded in places like the kernel. Because once they're in, it's a decade-long job market.
> they know that if other languages that address one of the main rust claims without all the cruft gain popularity, they lose the chance of being permanently embedded in places like the kernel
First of all, I'm really opposed to saying "the kernel". I am sure you're talking about the Linux kernel, but there are other kernels (BSD, Windows etc.) that are certainly big enough to not call it "the" kernel, and that may also have their own completely separate "rust-stories".
Secondly, I think the logic behind this makes no sense, primarily because Rust at this point is 10 years old from stable and almost 20 years old from initial release; the adoption into the Linux kernel wasn't exactly rushed. Even if it was, why would Rust adoption in the Linux kernel exclude adoption of another language as well, or a switch to another, if it's better? The fact that Rust was accepted at all to begin with aside from C disproves the assumption, because clearly that kernel is open for "better" languages.
The _simplest_ explanation for why Rust has succeeded is that it solves actual problems, not that "zealots" are lobbying for it to ensure they "have a job".
Rust is not stable even today! There is no spec, no alternative implementations, no test suite... "Stable" is what "current compiler compiles"! Existing code may stop compiling any day....
Maybe in 10 years it may become stable, like other "boring" languages (Golang and Java).
Rust stability is why Linus opposes its integration into kernel.
In the "other good news department", GCC is adding a Rust frontend to provide the alternative implementation, and I believe Rust guys accepted to write a specification for the language.
I'm waiting for gccrs to start using the language, actually.
I'm no Rust fan, but beauty of a syntax is always in the eye of the beholder.
I personally find Go, C++ and Python's syntax beautiful. All can be written in very explicit or expressive forms. On the other hand, you can hide complexity to a point.
If you are going to do complex things in a compact space, you'll asymptotically approach Perl or PCRE. It's maths.
> if other languages that address one of the main rust claims without all the cruft
But regardless of how much one likes Zig, it addresses none of the problems that Rust seeks to solve. It's not a replacement for Rust at all, and isn't suitable for any of the domains where Rust excels.
> and isn't suitable for any of the domains where Rust excels.
That's a pretty bold claim since Zig is specifically designed for systems programming, low level stuff, network services, databases (think Tigerbeetle). It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
> It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
Right, this is the specific important thing that Rust does that Zig doesn't (with the caveat that Rust includes the `unsafe` mechanism - as a marked, non-default option - specifically to allow for necessary low-level memory manipulation that can't be checked for correctness by the compiler). Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
It's not that simple though, Zig has equivalent spatial memory safety which prevents issues that are pretty consistently among (or at) the top of the list for most dangerous vulnerability classes.
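To spell out what "spatial memory safety" means here, a minimal sketch of my own (not from Bun or the book; the build-mode behaviour described is my understanding of current Zig): indexing and slicing are bounds-checked at runtime in the safe build modes.

```zig
const std = @import("std");

pub fn main() void {
    const buf = [4]u8{ 1, 2, 3, 4 };
    // Derive the index from runtime data so the compiler can't prove it's in range.
    const idx: usize = @intCast(std.time.timestamp() & 0xff);
    // In Debug and ReleaseSafe builds an out-of-bounds access here panics with
    // "index out of bounds" instead of silently reading past the array.
    std.debug.print("{d}\n", .{buf[idx]});
}
```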
And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve. So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
> Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
Of course, all other things being equal, but they're not.
> And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve.
How do you make such boldly dismissive assertions if you don't have enough experience with Rust? You are talking as if these invariants are some sort of requirements/constraints that the language imposes on the programmer. They're not. It's a well-known guideline/paradigm meant to contain any memory safety bugs within the unsafe blocks. Most of the invariants are specific to the problem at hand, and not to the programming language. They are conditions that must be met in any language - C and Zig are no exceptions. Failure to adhere to them will land you in trouble, no matter what sort of safety your language guarantees. They are often talked about in the context of Rust because the ones related to memory-unsafe operations can be tackled and managed within the small unsafe blocks, instead of sprawling throughout the code base.
> So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
Rust is not the ultimate solution to every problem in the world. But this sort of exaggeration and hyperbole is misleading and doesn't help anyone choose any better.
> How do you make such boldly dismissive assertions
As I said that's my understanding from talking and listening to people who have a lot of experience with Rust, Zig, and C.
So generally speaking, are you saying that writing correct unsafe Rust is only as difficult as writing correct Zig code and not, as I understand it to be, significantly more difficult?
If no references are involved, writing unsafe Rust is significantly easier than writing correct C, because the semantics are much clearer and easier to find in the documentation, and there's no insane things like type-based aliasing rules.
If references are involved, Rust becomes harder, because the precise semantics are not decided or documented. The semantics aren't complicated; they're along the lines of "while a reference is live, you can't perform a conflicting access from a pointer or reference not derived from that reference". But there aren't good resources for learning this or clarifying the precise details. This area is an active work-in-progress; there is a subteam of the Rust project led by Ralf Jung (https://www.ralfj.de/blog/) working on fully and clearly defining the language's operational semantics, and they are doing an excellent job of it.
When it comes to Zig, the precise rules and semantics of the memory model are much less clear than C. There's essentially no documentation, and if you search GitHub issues a lot of it is undecided and not actively being worked on. This is completely understandable given Zig's stage in development, but for me "how easy it is to write UB-free code" boils down to "how easy is it to understand the rules and apply them correctly", and so to me Zig is very hard to write correctly if you can't even figure out what "correct" is.
Once Zig and Rust both have their memory models fleshed out, I hope Zig lands somewhere comparable to where Rust-without-references is today, and I hope that Rust-with-references ends up being only a little bit harder (and still easier than C).
> So generally speaking, are you saying that writing correct unsafe Rust is only as difficult as writing correct Zig code and not, as I understand it to be, significantly more difficult?
Yes. That's correct. The point is, unsafe Rust is pretty unremarkable. Safe Rust doesn't just do borrow checking of references. It also forbids certain risky actions like raw pointer indirection or calling unsafe functions (across FFI, for example) [1]. Unsafe Rust just enables those features. That's it! Unsafe Rust doesn't disable anything or impose any additional restrictions. Contrary to a popular misconception, it doesn't even disable the borrow checker. Unsafe Rust actually gives you extra freedoms on top of what you already have (including the restrictions).
And now you have to be careful because Rust just gave you a footgun that you asked for. In a manually memory-managed language, you'd get fatigued by the constant worry about this footgun. In Rust, that worry is limited to those unsafe blocks, giving you the luxury to work out strategies to avoid shooting yourself in the foot. The 'invariants' are that strategy. You describe the conditions under which the code is valid. Then you enforce them there, so that you can breathe freely in Safe Rust.
> And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve. So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
I think this is hard to generalize about. There are many instances where one might want to do unsafe memory operations in rust, with different correctness implications. I am suspicious that in Zig you do actually have to preserve all the same properties and invariants, and there's just nothing telling you if you did so or not or even what all of them are, so you don't know if your code is correct or not.
I would compare the recent Rust Android post [1], where they have a 5000x lower memory vulnerability rate compared to traditional C/C++ with the number of segfaults found in Bun. [2]
In my opinion Zig does not move the needle on real safety when the codebase becomes sufficiently complex.
The number is memory vulnerabilities, not whether they are exploitable. The numbers come from this part of the article:
> This near-miss inevitably raises the question: "If Rust can have memory safety vulnerabilities, then what's the point?"
> The point is that the density is drastically lower. So much lower that it represents a major shift in security posture. Based on our near-miss, we can make a conservative estimate. With roughly 5 million lines of Rust in the Android platform and one potential memory safety vulnerability found (and fixed pre-release), our estimated vulnerability density for Rust is 0.2 vuln per 1 million lines (MLOC).
> Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.
Then someone's playing with definitions here because a bug that can't be exploited and doesn't have a demonstrable impact on safety or security isn't a vulnerability under any definition that I subscribe to - it's just a bug.
What we ultimately care about is how many preventable, serious defects sneak into production code - particularly those concerning data security, integrity, and physical safety. The only statistics we should all care about is how many serious CVEs end up in the final product, everything else is just personal preference.
Eliminating a segfault when `--help` is provided twice is nice, but it didn't fix a security vulnerability so using it to bolster the security argument is dishonest.
Sure but there's this belief in the Rust community that it's not responsible anymore to write software that isn't memory safe on the same level as Rust.
So Zig would fail that, but then you could also consider C++ unsuitable for production software - and we know it clearly is still suitable.
I predict Zig will just become more and more popular (and with better, although not as complete, memory safety), and be applied to mission critical infra.
If we ignore recent movements in governmental cybersecurity agencies and big tech to move away from unsafe programming languages, as much as technically possible.
Introducing a language with the same safety as Modula-2 or Object Pascal would have made sense in the 1990's; nowadays, with improved type systems making the transition from academia into the mainstream, we (the industry) know better.
It is not only Rust, it is Linear Haskell, OCaml effects, Swift 6 ownership model, Ada/SPARK, Chapel,....
Of those listed, I'd bet Swift (having had experience with it) is the most pleasant to work with. I just hope it takes off on the systems and backend side at some point.
> So Zig would fail that, but then you could also consider C++ unsuitable for production software - and we know it clearly is still suitable.
Speak for yourself, I never want to write C++ ever again in my life.
I'm not a huge fan of the language of responsibility. I don't think there should be a law banning the use of C or C++ or any other programming language on account of it being unsafe, I don't think that anyone who writes in C/C++ is inherently acting immorally, etc.
What I do think is that Rust is a better-designed language than C or C++ and offers a bunch of affordances, including but not limited to the borrow checker, unsafe mode, the type system, cargo, etc. that make it easier and more fun for programmers to use to write correct and performant software, most of the time in most cases. I think projects that are currently using C/C++ should seriously consider switching off of them to something else, and Rust is an excellent candidate but not the only candidate.
I think Zig is also almost certainly a better language than C/C++ in every respect (I hedge more here because I'm less familiar with Zig, and because it's still being developed). Not having as strong memory safety guarantees as Rust is disappointing, and I expect that it will result in Zig-written software being somewhat buggier than Rust-written software over the long term. But I am not so confident that I am correct about this, or that Zig won't bring additional benefits Rust doesn't have, that I would argue that people shouldn't use Zig or work on languages like Zig.
Given the density of memory issues in the Bun issue tracker I have a hard time squaring the statement that Zig makes it "easy" to build memory safe programs.
It should be noted that most of those issues are created by opening a link that bun creates when it crashes; they are not yet reviewed/confirmed, and most likely a lot of them are duplicates of the same issue.
Rust is not designed for low level system programming / embedded systems like Zig is. It is designed to make a browser and software that share requirements with making a browser.
There is some overlap, but that's still different. The Zig approach to memory safety is to make everything explicit, which is good in a constrained environment typical of embedded programming. The Rust approach is the opposite: you don't really see what is happening, but there are mechanisms to keep you safe. It is good for complex software with lots of moving parts in an unconstrained environment, like a browser.
For a footgun analogy, one will hand you a gun that will never go off unless you aim and pull the trigger, so you can shoot your foot, but no sane person will. It is a good sniper rifle. The Rust gun can go off at any time, even when you don't expect it, but it is designed in such a way that it will never happen when it is pointed at your foot, even if you aim it there. It is a good machine gun.
Great C interop, first class support for cross-compilation, well suited for arena allocators.
You can use Rust in kernel/embedded code, you can also use C++ (I did) and even Java! but most prefer to use C, and I think that Zig is a better alternative to C for those in the field.
There is still one huge drawback with Zig and that's maturity. Zig is still in beta, and the closer you get to the metal, the more it tends to matter. Hardware projects typically have way longer life cycles and the general philosophy is "if it ain't broke, don't fix it". Rust is not as mature as C by far - there is a reason C is still king - but at least it is out of beta and is seeing significant production use.
I remember when I talked about Zig to the CTO of the embedded branch of my company. His reaction was telling: "I am happy to hear someone mention Zig, it is a very interesting language and it is definitely on my watch list, but not mature enough to invest in it". He was happy that I mentioned Zig because in the company, the higher ups are all about Rust because of the hype, even though we do very little of it, BTW - it is still mostly C and C++. And yeah, hype is important: customers heard about Rust as some magical tech that will make the code bug-free, they didn't hear about Zig, so Rust sells better. In the end, they go for C anyways.
> Great C interop, first class support for cross-compilation, well suited for arena allocators.
C interop and arena allocators aren't hard requirements for a kernel language. In fact, why would a kernel in <INSERT LANG> need to talk to C? You need it to talk to Assembly/Machine code, not C.
It helps if it can talk to/from C but it's not a requirement.
> customers heard about Rust as some magical tech that will make the code bug-free
That's on customers not having a clear picture. What we can look at experimentally is that yes, Rust will remove a whole suite of bugs, and no, Zig won't help there. Is Zig better than C? Sure, but so is C++ and it still sucks at it.
Like, the few big things wrong with Rust is probably compilation speed and async needing more tweaks (pinned places ergonomics, linear types to deal with async drop...) to make it way better.
For my part, I don't know why, but Zig's syntax feels wrong to me. I don't even know why. I really want to like its syntax, as Zig seems really promising to me, but I just don't, which makes it not very enjoyable for me to write.
I don't know if it's my lack of practice, but I never felt the same about, say, Rust's syntax, or the syntax of any other language for that matter.
That kind of is a bit load bearing. The differences are pretty huge. Plus, borrow checker is nowhere to be found. Cyclone is more C with a few tweaks (tagged unions, generics, regions, etc.).
Borrow checking is basically a synonym for affine type system.
The same outcome can be achieved via affine types, linear types, effects, dependent types, regions, proofs, among many other CS research in type systems.
Which is why following Rust's success, plenty of managed languages are now going through the evolution step to combine automatic resource management with improved type systems.
Taking the one that best approaches their current design.
> Borrow checking is basically a synonym for affine type system.
No? It's more akin to flow analysis with special generic types called lifetimes.
> The same outcome can be achieved via affine types, linear types, effects, dependent types, regions, proofs, among many other CS research in type systems.
Sure, and sounds, colors, and instruments are the same, but they are mixed to create an audio-video song. I'm not saying that what Rust did is something that came about ex nihilo, without precedence.
But having it all unified uniquely the way Rust did it is frankly revolutionary. Until now, people assumed if you want memory safety, you have to add a GC (tracing or RC). Or alternatively write extensive proofs about types like Ada/Spark.
There were languages with lifetimes and borrowing mechanics before Rust. Rust packages these mechanics in a nice way. Just like Zig encodes many niceties in a useful C language (comptime, simple cross-compilation, stdlib).
Which ones?? Before Rust, to my knowledge, no language had an actually practical way to use lifetimes and borrow-checking so that both memory safety and concurrency safety (data races, which is huge) were solved, even though the concepts were known in research. Doing the actual work to make it practical is what makes the difference between some obscure research topic and a widely used language that actually solves serious problems in the real world.
Yeah but is that a practical language people can use instead of C and Rust? I’ve always heard of it only as a research language that inspired rust but nothing else.
Outside AT&T, until they ramped down the project, I guess not. Rust also took its time to actually take off beyond Mozilla, and it is around because it was rescued by big tech (Amazon, Google, Microsoft,...) hiring most of the core team after Mozilla's layoffs.
> actually do change the way you think about software. It's a pretty momentous paradigm shift.
That's true for people who don't read and think about the code they write. For people who think from the perspective of a computer, Rust is "same checks, but forced by the compiler".
Make no mistake, to err is human, but Rust doesn't excite me that much.
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
Funny. This was a great sell to me. I wonder why it isn’t the blurb. Maybe it isn’t a great sell to others.
The problem for me with so many of these languages is that they’re always eager to teach you how to write a loop when I couldn’t care less and would rather see the juice.
However, nowadays with comprehensive books like this, LLM tools can better produce good results for me as I try it out.
Very, very few people outside of foundational system software, HFT shops, and game studios understand why it's a great selling point. Everyone else likes the other points and doesn't realize the actual selling point of the language.
Graydon Hoare, a former C++ programmer on Mozilla Firefox and the original creator of Rust, acknowledges that for many people, Rust has become a viable alternative to C++:
It's possible that Graydon's earliest private versions of Rust the 4 years prior to that pdf were an OCaml-inspired language but it's clear that once the team of C++ programmers at Mozilla started adding their influences, they wanted it to be a cleaner version of C++. That's also how the rest of the industry views it.
Alternative yes, derivative no. Rust doesn't approach C++'s metaprogramming features, and it probably shouldn't given how it seems to be used. It's slightly self-serving for browser devs to claim Rust solves all relevant problems in their domain and therefore eclipses C++, but to me in the scientific and financial space it's a better C, making tradeoffs I don't see as particularly relevant.
I say this as a past contributor to the Rust std lib.
Zig, D, and C are also alternatives to C++. It’s a class of languages that have zero cost abstractions.
Rust is NOT a beautiful language hiding inside of C++. It is not an evolution of C++. I’m pointing out that what you said is objectively wrong.
Can rust replace C++ as a programming language that has a fast performance profile due to zero cost abstractions? Yes. In the same way that Haskell can replace Python, yes it can.
> Rust and C++'s biggest mistake, not passing allocators into containers and functions
Rather, basing its entire personality around this philosophy is Zig's biggest mistake. If you want to pass around allocators in C++ or Rust, you can just go ahead and do that. But the reason people don't isn't because it's impossible in those languages, it's because the overwhelming majority of the time it's a lot of ceremony for no benefit.
Like, surely people see that in C itself there's nothing stopping anyone from passing around allocators, and yet almost nobody ever does. Ever wonder why that is?
Much of the book's copy appears to have been written by AI (despite the foreword statement that none of it was), which explains the hokey overenthusiasm and exaggerations.
As we know AI is at least as smart as the average human. It knows the Zeitgeist and thus adds “No AI used” in order to boost “credibility”. :) (“credibility” since AI is at least as smart the average human, for us in the know.)
For those who actually want to learn languages which are "fundamentally changing how you think about software", I'd recommend the Lisp family and APL family.
What is the most optimal Erlang/Elixir you can think of regarding standardized effect systems for recording non-determinism, replaying and reversible computing? How comparable are performance numbers of Erlang/Elixir with Java and wasm?
I'd recommend asking the Elixir community about this as I didn't even understand your question.
I am by no means a professional with Erlang/Elixir. I threw it out there because these languages force you to think differently compared to common OOP languages.
No need to include Elixir here; none of the important bits that will change how you view software come from Elixir, it's just a skin on top of Erlang (+ some standard library wrappers) and that's it.
I'd argue more people use Elixir over Erlang at this point. Sure, it's just an abstraction on top of Erlang, but people learn through Elixir nowadays, not through Erlang.
If you want to learn the actual mind changing aspects of the BEAM, clearly learning the simpler, smaller language with a more direct route to the juice is the way to go. Hence Erlang, not Elixir. I learned Elixir first back in 2015, and then learned Erlang, and have had the pleasure of using both in production. When all was said and done I really think Erlang was better, especially over a long enough time frame.
As a general point I'd like to state that I don't think it really matters what "people" do when you're learning for yourself. In the grand scheme of things approximately no one uses the BEAM, but this doesn't mean that learning how to use it is somehow pointless.
- Leaning on a pre-emptive scheduler to maintain order even in the presence of ridiculous amounts of threads ("processes" on the BEAM) running
- Using supervision trees to specify how and when processes and their dependents should be restarted
- Using `gen_server` processes as a standard template for how a thread should be running
There's more to mine from using the BEAM, but I think the above are some of the most important aspects. The first two I've never found to be fully replicated anywhere other than in OTP/BEAM. You don't need them, but once you're bought into the BEAM they're incredibly nice to have.
Not even close. While Numpy has many similar operations, it lacks the terseness, concepts like trains and forks etc. Modern APL style doesn't use... control flow (neither recursion nor loops nor if...) and often avoids variables (tacit/point-free style).
Zig is so novel that it's hard to find any language like it. Its similarity to C is superficial. AFAIK, it is the first language ever to rely on partial evaluation so extensively. Of course, partial evaluation itself is not new at all, but neither were touchscreens when the iPhone came out. The point wasn't that it had a touchscreen, but that it had almost nothing but. The manner and extent of Zig's use of partial evaluation are unprecedented. I have nothing against OCaml, but it is a variant of ML, a 1970s language, that many undergrads were taught at university in the nineties.
I'm not saying everyone should like Zig, but its design is revolutionary:
I guess comptime is a little different but yeah I wouldn't say it fundamentally changes how you think about software.
I wouldn't say that about OCaml either really though. It's not wildly different in the way that e.g. Lean's type system, or Rust's borrow checker or Haskell's purity is.
I'm not a D programmer, but I remember talks by Alexandrescu where he argued for this capability in C++; it was ultimately one of his stated reasons for switching from C++ to D.
Look up static if - AST manipulation in native D code.
There aren't a multitude of other languages that compete with Rust and Zig in the "zero cost abstraction" domain. There's like, Ada... and D sort of.
Rust and Zig aren't merely very good, they are better than the alternatives when you need a "zero cost abstraction" option.
But sure, go ahead and dismiss it as a cult if it makes you feel better. I bet you were one of the people who dismissed the iPhone as "just apple fanbois" back in the day. Won't amount to anything.
But the concern in this thread wasn't that people consider Zig or Rust good, so don't try to frame it this way, because it is dishonest.
Original quote:
> [Learning Zig] is about fundamentally changing how you think about software.
This is not the same. Something like it could be said about Lisp, Forth, Prolog, Smalltalk, Fractran or APL, even Brainfuck, not Rust or Zig. No, thinking about object lifetimes or allocators is not "fundamental change" in how to think about software. It is bread and butter of thinking about software. Therefore I believe this is cultish behavior - you assign extraordinary properties to something rather dull and not that much different from other mainstream languages.
> I bet you were one of the people who dismissed the iPhone as "just apple fanbois" back in the day
Wrong. I still dismiss people praising Apple, swallowing some bullshit about "vision" etc. as fanboys.
This looks fantastic. Pedagogically it makes sense to me, and I love this approach of not just teaching a language, but a paradigm (in this case, low-level systems programming), in a single text.
Zig got me excited when I stumbled into it about a year ago, but life got busy and then the io changes came along and I thought about holding off until things settled down - it's still a very young language.
But reading the first couple of chapters has piqued my interest in a language and the people who are working with it in a way I've not run into since I encountered Ruby in ~2006 (before Rails hit v1.0), I just hope the quality stays this high all the way through.
So many comments about the AI generation part. Why does it matter? If it’s good and accurate and helpful why do you care? That’s like saying you used a calculator to calculate your equations so I can’t trust you.
I am just impressed by the quality and details and approach of it all.
Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
If the site would have said something like "We use AI to clean up our prose, but it was all audited thoroughly by a human after", I wouldn't have an issue. Even better if they shared their prompts.
Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.
But all those "it's AI" posts are about the prose and "style", not the actual content. So even if (and that is a big if) the text was written with the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content was written by AI and thus contains AI mistakes.
If it was so obviously written by AI then finding those mistakes should be easy?
The style is the easiest thing for people to catch; GP has said that the technical issues can be more difficult to find, especially in longer texts; there are times when they indeed are caught.
Passing even correct information through an LLM may or may not taint it; it may create sentences which on first glance are similar, but may have different, imprecise meaning - specific wording may be crucial in some cases. So if the style is under question, the content is as well. And if you can write the technically correct text at first, why would you put it through another step?
AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make.
Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.
But why would a serious person claim that they wrote this without AI when it's obvious they used it?!
Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went to their work.
That’s fine. Write it out yourself and then ask an AI how it could be improved with a diff. Now you’ve given it double human review (once in creation then again reviewing the diff) and single AI review.
That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.
>That’s like saying you used a calculator to calculate your equations so I can’t trust you.
No it isn't. My TI-83 is deterministic and will give me exactly what I ask for, and will always do so, and when someone uses it they need to understand the math first or otherwise the calculator is useless.
These AI models on the other hand don't care about correctness, by design don't give you deterministic answers, and the person asking the question might as well be a monkey as far as their own understanding of the subject matter goes. These models are if anything an anti-calculator.
As Dijkstra points out in his fantastic essay on the idiocy of natural language "computation", what you are doing is exactly not computation but a kind of medieval incantation. Computers were designed to render impossible precisely the nonsense that LLMs produce. The biggest idiot on earth will still get a correct result from the calculator because unlike the LLM it is based on boolean logic, not verbal or pictorial garbage.
Because the first thing you see when you click the link is "Zero AI" pasted under the most obviously AI-generated copy I've ever seen. It's just an insult to our intelligence, obviously we're gonna call OP out on this. Why lie like that?
It's funny how everyone has gaslit themselves into doubting their own intuitions on the most blatant specimen where it's not just a mere whiff of the reek but an overpowering pungency assaulting the senses at every turn, forcing themselves to exclaim "the Emperor's fart smells wonderful!"
“The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
It matters because it irritates me to no end that I have to review AI-generated content that a human did not verify first. I don't like being made to work under the guise of someone giving me free content.
> That’s like saying you used a calculator to calculate your equations so I can’t trust you.
A calculator exists solely for the realm of mathematics, where you can afford to more or less throw away the value of human input and overall craftsmanship.
That is not the case with something like this, which - while it leans in to engineering - is in effect viewed as a work of art by people who give a shit about the actual craft of writing software.
> Why does it matter? If it’s good and accurate and helpful why do you care? That’s like saying you used a calculator to calculate your equations so I can’t trust you.
Agree. What matters is quality, regardless of what/who made it.
O.t.o.h., it is funny to see tech people here, that work on implementing technology, taking an approach so... Luddite and "anti-tech".
I agree man. I love the HN community but it seems a lot more cynical than usual :).
I think 90% of the comments were about the AI part rather than the actual product - which seems very cool and definitely took a lot of effort to put together.
I value human work and I do NOT value work that has been done with heavy AI usage.
Most AI things I've seen are slop - I instantly recognize AI songs, for example. I just don't want anything to do with it. The uniqueness of creative work is lost with using AI.
An awful lot of commenters are convinced that it's AI-generated, despite explicit statements to the contrary. Maybe they're wrong, maybe they're right, but none of them currently have any proof stronger than vibes. It's like everyone has gaslit themselves into thinking that humans can't write well-structured neutral-tone docs any more.
This is not written in a neutral-tone at all! There is a lot of bland marketing speech that feels completely out of place. This is not how you write good technical literature.
Many people have already pointed out the hallucinated APIs, which is much stronger evidence than your "vibes".
I suppose the author may have deliberately added the "No AI assistance" notice - making sure all the hallucinated bugs are found via outraged developers raising tickets. Without that people may not even have bothered.
It's pretty incredible how much ground this covers! However, the ordering feels a little confusing to me.
One example is in chapter 1. It talks about symbol exporting based on platform type, without explaining ELF. This is before talking about while loops.
It's had some interesting nuggets so far, and I've followed along since I'm familiar with some of the broad strokes, but I can see it being confusing to someone new to systems programming.
It's really hard to believe this isn't AI generated, but today I was trying to use the HTTP server from std after the 0.15 changes and couldn't figure out how it's supposed to work until I searched repos on GitHub. LLMs couldn't figure it out either; they were stuck in a loop of changing/breaking things even further until they settled on using the deprecated way. So I guess this is actually handwritten, which is amazing, because it looks like the best resource I've seen for Zig so far.
It's not only the size - it was pushed all at once, anonymously, using text that strongly resembles that of an AI. I still think that some of the text is AI generated. Perhaps not the code, but the wording just reeks of AI.
For some of my projects I develop against my own private git server, then when I'm ready to go public, create a new git repo with a fully squashed history. My early commits are basically all `git commit -m "added stuff"`
It's almost as though the LLMs were trained on all the writing conventions which are used by humans and are parroting those, instead of generating novel outputs themselves.
They haven’t picked up any one human writing style, they’ve converged on a weird amalgamation of expressions and styles that taken together don’t resemble any real humans writing and begin to feel quite unnatural.
As someone who uses em-dashes a lot, I’m getting pretty tired of hearing something “screams AI” about extremely simple (and common) human constructs. Yeah, the author does use that convention a number of times. But that makes sense, if that’s a tool in your writing toolbox, you’ll pull it out pretty frequently. It’s not signal by itself, it’s noise. (does that make me an AI!?) We really need to be considering a lot more than that.
Reading through the first article, it appears to be compelling writing and a pretty high quality presentation. That’s all that matters, tbh. People get upset about AI slop because it’s utterly worthless and exceptionally low quality.
The repetitiveness of the shell commands (and using zig build-exe instead of zig run when the samples consist of short snippets), the filler bullet points and section organization that fail to convey any actual conceptual structure.
And ultimately throughout the book the general style of thought processes lacks any of the zig community’s cultural anachronisms.
If you take a look at the repository you’ll also notice baffling tech choices, not justified by the author, that run counter to the Zig ethos.
(Edit: the build system chapter is an even worse offender in meaningless, cognitively cluttering headings and flowcharts; it’s almost certainly entirely hallucinated, and there is just an absurd degree of unziglikeness everywhere: https://www.zigbook.net/chapters/26__build-system-advanced-t... -- What’s with the completely irrelevant flowchart of building the Zig compiler? What even is the point of module-graph.txt? And the icing on the cake is the “Vendoring vs Registry Dependencies” section.)
Yeah and then why would they explicitly deny it? Maybe the AI was instructed not to reveal its origin. It's painful to enjoy this book if I know it's likely made by an LLM.
If you find it useful no harm in enjoying it! The main problem with AI content is it's just not good enough...yet. It'll get there. The LLMs just need more real-world feedback incorporated, rather than being the ultimate has-read-everything,-actually-knows-nothing dweeb (a lot of humans are like this too). (You can see the first signs of overcoming this w/ latest models coding skills, which are stronger via RL, I believe.) (Not first hand knowledge tho -- pot kettle black situation there.)
I've had the same experience as you with Zig. I quite love the idea of Zig, but the undocumented churn is a bit much. I wish they had auto-generated docs that reflect the current state of the stdlib, at least. Even if it just listed the signatures with no commentary.
I was trying to solve a simple problem but Google, the official docs, and LLMs were all out of date. I eventually found what I needed in Zig's commit history, where they casually renamed something without updating the docs. It's been renamed once more apparently, still not reflected in the docs :shrugs:.
But you can tell your LLM to just go look at the source code (after checking it out so it doesn’t try 20s github requests). Always works like a charm for me.
It looks cool! No experience with Zig so can't comment on the accuracy, but I will take a look at it this week. Also a bit annoying that there is no PDF version that I could download as the website is pretty slow. After taking a look at the repository (https://github.com/zigbook/zigbook/tree/main), each page seems to be written in AsciiDoc, so I'll take a look about compiling a PDF version later today.
HOWTO: The text can be found per chapter in `./pages/{chapter}.adoc`, but each chapter includes code snippets from a corresponding `./chapters-data/code/{chapter}/` subdirectory. A perhaps hacky way to do it (I was too lazy to fully figure out the asciidoctor flags): use a script to create a combined book.adoc that includes all the others via `include::{chapter}.adoc` directives, then run `asciidoctor-pdf -a sourcedir=../chapters-data/code -r asciidoctor-diagram -o book.pdf ./pages/book.adoc`.
I agree, I love zig but the things that make me program differently are features like excellent enum/union support, defer and comptime, which aren't readily available in the other languages I tend to use (C++, Fortran and Python).
Hmm, the explanation of allocators is much more detailed in the book, but I feel that, although more compact, the treatment in the language reference is much more reasonable. [0]
I'll keep exploring this book though, it does look very impressive.
C++ is far better than C in very many ways. It's also far worse than C in very many other ways. Given a choice between the two, I'd still choose C++ every day just for RAII. There's only so much that we can blame programmers for memory leaks, use-after-free, buffer overflows, and other things that are still common in new C code. At some point, it is the language itself that is unsuitable and insufficient.
Early talks by Andrew explicitly leaned into the notion that "software can be perfect", which is a deviation from how most programmers view software development.
Zig also encourages you to "think like a computer" (also an explicit goal stated by Andrew) even more than C does on modern machines, given things like real vectors instead of relying on auto vectorization, the lack of a standard global allocator, and the lack of implicit buffering on standard io functions.
I would definitely put Zig on the list of languages that made me think about programming differently.
The big thing I would say I actually learned and would intentionally apply to other languages is SIMD programming. Otherwise, I'd say it gave me a much clearer mental model of memory management that helps me understand other languages much more fundamentally. Along with getting my hands directly on custom allocators for the first time, a question that took me time to figure out but gave me a lot of clarity in answering was "why can't you do closures in Zig?" Programming in Zig feels very Go-like, and not having closures was actually one of the biggest hiccups for me. I don't think this really changed how I write in other languages, but definitely how I think about other languages.
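For anyone wondering what the no-closures workaround looks like in practice, here is a minimal, hand-rolled sketch (names are illustrative and a recent Zig toolchain is assumed): you capture state in an explicit struct and call a method on it instead of a closure.

```zig
const std = @import("std");

// Zig has no closures, so "captured" state lives in an explicit struct and
// the call is a method on it. Names here are illustrative, not from std.
const Counter = struct {
    step: u32,
    count: u32 = 0,

    fn next(self: *Counter) u32 {
        self.count += self.step;
        return self.count;
    }
};

test "struct-with-method instead of a closure" {
    var by_two = Counter{ .step = 2 };
    try std.testing.expectEqual(@as(u32, 2), by_two.next());
    try std.testing.expectEqual(@as(u32, 4), by_two.next());
}
```

It works, but as the comments below note, it's noticeably less ergonomic than a real closure: every captured value has to be spelled out as a field.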
Snap! I also played around with closures a tonne in Zig. Definitely possible but not... ergonomic. Haven't ended up using them much.
And agree with allocators; in C I always considered using custom allocators but never really needed to. Having them just available in the zig std means I actually use them. The testing allocator is particularly useful IMO.
Never used Go but if it's Zig-like I might give it a shot! Thanks!
I'll make a list of the things that both languages have in common that make them feel similar to me:
- structs and functions are the main means of composition
- the pattern of allocating a resource and immediately deferring its deallocation (a quick sketch follows after this list)
- errors are values, handled very similarly (multiple return values vs error unions)
- built in json <-> struct support
- especially with the 0.16.0 Io changes in Zig, the concurrency story (std.Io.async[0] is equivalent to the go keyword[1], std.Io.Queue[2] is equivalent to channels[3], std.Io.select[4] is equivalent to the select keyword[5])
- batteries included but not sprawling stdlib
- git based dependencies
- built in testing
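To make a couple of those bullets concrete, here is a minimal sketch of the allocate-then-immediately-defer pattern together with the built-in testing story, assuming a recent Zig toolchain (stdlib names do shift between releases):

```zig
const std = @import("std");

test "allocate a resource, immediately defer the cleanup" {
    // std.testing.allocator fails the test if anything leaks, which is what
    // makes the defer-right-after-acquire habit stick.
    const allocator = std.testing.allocator;

    const buf = try allocator.alloc(u8, 64);
    defer allocator.free(buf); // the Zig analogue of Go's `defer f.Close()`

    @memset(buf, 0xAA);
    try std.testing.expectEqual(@as(usize, 64), buf.len);
}
```

Run it with `zig test`. The `try` is also where the errors-as-values parallel shows up: it propagates the error union much like Go's `if err != nil { return err }`.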
I think it mostly comes down to the standard library guiding you down this path explicitly. The C stdlib is quite outdated and is full of bad design that affects both performance and ergonomics. It certainly doesn't guide you down the path of smart design.
Zig _the language_ barely does any of the heavy lifting on this front. The allocator and io stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. From my experience the stdlib seems to make all the right choices, and the only time it doesn't is when the API was quickly created to get things working, but hasn't been revisited since.
A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to implement it as an intrusively embedded element. This might confuse a beginner who would expect SinglyLinkedList(T) instead, but it has implications surrounding allocation and it turns out that embedding it gives you a more powerful API. And of course all operations are defined with performance in mind. prepend is given to you since it's cheap, but if you want postpend you have to implement it yourself (it's a one liner, but clearly more expensive to the reader). (A hand-rolled sketch of the intrusive idea follows below.)
Little decisions add up to make the language feel great to use and genuinely impressive for learning new things.
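Since the intrusive design can be hard to picture, here is a hand-rolled sketch of the idea. This is deliberately not the exact std.SinglyLinkedList API (whose names I won't vouch for across versions); the point is that the link node is embedded in the caller's own struct, so the list itself never allocates.

```zig
const std = @import("std");

// The node carries only the link; the payload is whatever struct embeds it.
const Node = struct { next: ?*Node = null };

const Task = struct {
    id: u32,
    node: Node = .{},
};

// O(1) prepend, the cheap operation mentioned above.
fn prepend(head: *?*Node, n: *Node) void {
    n.next = head.*;
    head.* = n;
}

test "intrusive list links caller-owned memory, no allocation" {
    var a = Task{ .id = 1 };
    var b = Task{ .id = 2 };

    var head: ?*Node = null;
    prepend(&head, &a.node);
    prepend(&head, &b.node);

    // The head now points at b's embedded node; real code would recover the
    // owning Task from the node (typically via @fieldParentPtr).
    try std.testing.expect(head.? == &b.node);
    try std.testing.expect(head.?.next.? == &a.node);
}
```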
C does not provide vector primitives to expose the vector hardware in modern machines. C compilers rely on analyzing loops to see when auto-vectorization is applicable, and auto-vectorization is a higher level of abstraction than directly exposing vector types.
Regarding the lack of a standard global allocator and the lack of implicit buffering on standard IO functions: these are choices in the Zig standard library that mirror how computers actually behave (computers do not have a standard global allocator, nor do they implicitly buffer IO). The C standard library makes the opposite choices, so C programmers are not pushed toward custom allocators or explicit buffering.
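For readers who haven't seen it, this is roughly what "real vectors" means in Zig; a minimal sketch, assuming a recent toolchain:

```zig
const std = @import("std");

test "first-class vectors instead of hoping for auto-vectorization" {
    // @Vector values get lowered to SIMD instructions where the target
    // supports them; the element-wise ops are explicit, not inferred from a loop.
    const a: @Vector(4, f32) = .{ 1.0, 2.0, 3.0, 4.0 };
    const b: @Vector(4, f32) = .{ 10.0, 20.0, 30.0, 40.0 };

    const sum = a + b; // element-wise add
    try std.testing.expectEqual(@as(f32, 44.0), sum[3]);
    try std.testing.expectEqual(@as(f32, 110.0), @reduce(.Add, sum));
}
```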
The biggest red flag for me is the author hiding their name. If you wrote a quality book about a programming language, you wouldn't hide your identity from the world.
The repository also has a misconfigured .gitignore file, which allowed some built executables to be checked into the repository.
This is something that I wouldn't judge beginners for, but someone claiming to be an expert writing a book on the topic should know how to configure a .gitignore for their particular language of expertise.
Very well done! wow! Thanks for this. Going through this now.
One comment: about the syntax highlighting, the dark blue for keywords against a black background is very difficult to read. And if you opt for the white background, the text becomes off-white/grey, which again is very difficult to read.
A nitpick about the website: the top progress bar is kind of distracting (high-contrast color with animation). It's also unnecessary because there is already a scrollbar on the right side.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I just don't buy it. I'm 99% sure this is written by an LLM.
Can the author... Convince me otherwise?
> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.
> Welcome to the Zigbook. Your transformation starts now.
...
> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.
...
> This is not about memorizing syntax. This is about earning mastery.
Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725
After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then moving to while loops, then moving to slices. It just feels like a strange order. The "how it works" and "key insights" also feel like a GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar with bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice), or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though.
It's just an odd claim to make when it feels very much like AI generated content + publish the text anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI.
It doesn't take away from the fact that someone used a bunch of time and effort on this project.
To be clear, I did not dismiss the project or question its value - simply questioned this claim as my experience tells me otherwise and they make a big deal out of it being human written and "No AI" in multiple places.
I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.
Keep in mind that pangram flags many hand-written things as AI.
> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.
> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.
> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.
> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.
I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running pangram on some of their polished hand written stuff.
How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.
Doesn't mean that the author might not use AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. Doesn't mean that the content is "AI-generated". The essence is still written by a human.
>If an LLM was used in any fashion, then this statement is simply a lie.
While I don't believe the article was created this way, it's possible to use an LLM purely as a classifier. E.g. prompt along the lines of "Does this paragraph contain any errors? Answer only yes or no." and generate only a single set of token probabilities, without any autoregression. Flag any paragraphs with sufficient probability of "yes" for human review.
Clarity in writing comes mostly from the logical structure of ideas presented. Writing can have grammar/style errors but still be clear. If the structure is bad after translation, then it was bad before translation too.
I'm not sure, but I try my best to assume good faith / be optimistic.
This one hit a sore spot b/c many people are putting time and effort into writing things themselves and to claim "no ai use" if it is untrue is not fair.
If the author had a good explanation... I don't know, say they're not a native English writer and used an LLM to translate, and the "no LLMs used" call-out got mistranslated in the process, etc.
To me it's another specimen in the "demonstrating personhood" problem that predates LLMs. e.g. Someone replies to you on HN or twitter or wherever, are they a real person worth engaging with? Sometimes it'll literally be a person but their behavior is indistinguishable from a bot, that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity.
IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site similarly to many of the existing rules.
Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.
What degrades conversation is to lie about something being not AI when it actually is. People pointing out the fraud are right to do so.
One thing I've learned is that comment sections are a vital defense on AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check comments to see what others are saying.
If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams.
It's 2025, people are going to use technology and its use will spread.
There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.
It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.
I'm 100% okay with AI spreading. I use it every day. This isn't a matter of an ideological battle against AI, it's a matter of fraudulent misrepresentation. This wouldn't be a discussion if the author themselves hadn't claimed what they had, so I don't see why the community should be barred from calling that out. Why bother having curious discussions about this book when they are blatantly lying about what is presented here? Here's some curiosity: what else are they lying about, and why are they lying about this?
To clarify there is no evidence of any lying or fraud. So far all we have evidence of is HN commenters assuming bad faith and engaging in linguistic phrenology.
There is evidence, it's circumstantial, but there's never going to be 100% proof. And that's the point, that's why community detection is the best weapon we have against such efforts.
(Nitpick: it's actually direct evidence, not circumstantial evidence. I think you mean it isn't conclusive evidence. Circumstantial evidence is evidence that requires an additional inference, like the accused being placed at the scene of the crime implying they may have been the perpetrator. But stylometry doesn't require any additional inference, it's just not foolproof.)
You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There's plenty of other texts I've seen with this sort of writing style, written long before AI was around.
Can I also ask: so what if it is or it isn't?
While AI slop is infuriating and the bubble hype is maddening, I'm not sure that calling out "it must be AI" every time somebody sees content whose style they don't like, and then debating whether it is or isn't, is not at least as maddening. It feels like all content published now gets debated like this, and I'm definitely not enjoying it.
You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text.
As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?
I don't think there was very much abuse of "not just A, but B" before ChatGPT. I think that's more of a product of RLHF than the initial training. Very few people wrote with the incredibly overwrought and flowery style of AI, and the English speaking Internet where most of the (English language) training data was sourced from is largely casual, everyday language. I imagine other language communities on the Internet are similar but I wouldn't know.
Don't we all remember 5 years ago? Did you regularly encounter people who write like every followup question was absolutely brilliant and every document was life changing?
I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And the language is still pedestrian compared to the language in this book. Because most people find writing like that saccharine, and so don't write that way. Even when they're writing poetically.
Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born?
Even if that were comparable in size to the conversational Internet, how many novels and academic papers have you read that used multiple "not just A, but B" constructions in a single chapter/paper (that were not written by/about AI)?
I wouldn't mind a technical person transparently using AI for the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skill and deep technical expertise is much narrower than just the latter.
But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless - because why read a book if you don't trust the author not to waste your time?
So petty as to lie about using AI or so petty as to call it out? Calling it out doesn't seem petty to me.
I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty?
I understand being okay with a book being generated (some of the text I published in this manual [1] is generated), I can imagine not caring that the author lied about their use of AI, but I really don't understand the suggestion I write a book about a subject I just told you I'm clueless about. I feel like there's some kind of epistemic nihilism here that I can't fathom. Or maybe you meant it as a barb and it's not that deep? You tell me I guess.
I'm also concerned whether it is useful! That's why I'm not gonna read it after receiving a strong contrary indicator (which was less the use of AI than the dishonesty around it). That's also why I try to avoid sounding off on topics I'm not educated in (which is to say, why I'm not writing a book about Zig).
Remember - I am using AI and publishing the results. I just linked you to them!
So you could do everyone a favour by giving a sufficiently detailed review, possibly with recommendations to the author how to improve the book. Definitely more useful than speculating about the author's integrity.
I'm satisfied with what's been presented here already, and as someone who doesn't know Zig it would take me several weeks (since I would have to learn it first), so that seems like an unreasonable imposition on my time. But feel free to provide one yourself.
Well, there must have been a good reason why you don't like the book. I didn't see good reasons in this whole discussion so far, just a lot of pedantry. No commenter points to technical errors, inaccuracies, poor code examples, or pedagogical problems. The entire objection rests on subjective style preferences and aesthetic nitpicking rather than legitimate quality concerns.
I don't see what else I can say to help you understand. I think we just have very different values and world views and find one another's perspective baffling. Perhaps your preferred AI assistant, if directed to this conversation, could put it in clearer terms than I am able to.
My statement refers to this claim: "I'm 99% sure this is written by an LLM."
The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions.
I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched it but of course I can't guarantee it won't get flagged again. To be frank this comment is probably also going to get flagged for the strong language you're using. I don't think either are abusive uses of flagging.
Additionally please note that I neither complained nor expressed an entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that really serves other than to turn up the rhetorical temperature.
The book content itself is deliberately free of AI-generated prose. Drafts may start anywhere, but final text should be reviewed, edited, and owned by a human contributor.
There is more specificity around AI use in the project README. There may have been LLMs used during drafting, which has led to the "hallmarks" sticking around that some commenters are pointing out.
That statement is honestly self-contradictory. If a draft was AI-generated and then reviewed, edited, and owned by a human contributor, then the parts which survived reviewing and editing verbatim were still AI-generated...
Why do you care? If a human reviewed and edited it, someone filtered it to make sure it's correct. It's validated to be correct; that is the main point.
Clearly someone didn't make sure everything is correct, since they allowed a self-contradictory statement (whether generated by AI or by human) into the text...
People have the illusion of reviewing and "owning" the final product, but that is not how it looks from the outside. The quality, the prose style, the errors that pass through due to inevitable AI-induced complacency ALWAYS EVENTUALLY show. If people got out of the AI bubbles they would see it too, alas.
We keep reading the same stories for at least a couple of years now. There is no novelty anymore. The core issues and problems have stayed the same since GPT-3.5. And because they are so omnipresent on the internet, we have grown able to recognise them almost automatically. It is no longer just a matter of quality; it is an insult to the readers when an author pretends that content is not AI generated just because they "reviewed it". Reviewing something that somebody else wrote is not ownership, especially when that somebody is an LLM.
In any case, I do not care if people want to read or write AI generated books, just don't lie about whether it's AI generated.
Welp. I wish I had read the comments first to discover that this is AI generated. On the other hand, I got to experience the content without bias.
I opted to give it a try instead of reading the comments, and the book was arranged in a super strange way, discussing concepts that a majority of programmers would never be concerned with when starting out learning a language. It's very different to learn about some of these concepts if you are reading a language doc in order to work on the language itself. But if you want to learn how to use the language, something like:
> Choose between std.debug.print, unbuffered writers, and buffered stdout depending on the output channel and performance needs.
is absolutely never going to be something you dump into chapter 1. I skimmed through a few chapters from there and it's blocks of stuff thrown in randomly. The introduction to the if conditional throws in Zig Intermediate Representation with absolutely no explanation of what it is and why it's even being discussed.
Came here to comment that this has been written pretty poorly or just targets a very niche audience and now I discover it's slop. What a waste of time. The one thing AI was supposed to save.
This source is really hard to trust. AI or not, the author has done no work to really establish epistemological reliability and transparency. The entire book was published at once with no history, no evidence of the improvement and iteration it takes to create quality work, and no reference as to the creative process or collaborators or anything. And on top of that, the author does not seem to really have any other presence or history in the community. I love Zig, and have wanted more quality learning materials to exist. This, unfortunately, does not seem to be it.
For books that are published in more traditional manners, digital or paper, there is normally a credible publisher, editors, sometimes a foreword from a known figure, reviews from critics or experts in the field, and often a bio about the author explaining who they are and why they wrote the book etc. These different elements are all signals of reliability, they help to convey that the content is more than just fluff around an attention-grabbing title, that it has depth and quality and holds up. The whole publishing business has put massive effort into establishing and building these markers of trust.
The book claims it’s not written with the help of AI, but the content seems so blatantly AI-generated that I’m not sure what to conclude, unless the author is the guy OpenAI trained GPT-5 on:
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
“Not just X - Y” constructions.
> By Chapter 61, you will not just know Zig; you will understand it deeply enough to teach others, contribute to the ecosystem, and build systems that reflect your complete mastery.
More not just X - Y constructions with parallelism.
Even the “not made with AI” banner seems AI generated! Note the 3 item parallelism.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I don’t have anything against AI generated content. I’m just confused what’s going on here!
EDIT: after scanning the contents of the book itself I don’t believe it’s AI generated - perhaps it’s just the intro?
EDIT again: no, I’ve swung back to the camp of mostly AI generated. I would believe it if you told me the author wrote it by hand and then used AI to trim the style, but “no AI” seems hard to believe. The flow charts in particular stand out like a sore thumb - they just don’t have the kind of content a human would put in flow charts.
Every time I read things like this, it makes me think that AI was trained off of me. Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are all examples of how they teach to write essays in high school and college. They're also all examples of how I think and have learned to communicate.
To be explicit, it’s not general hallmarks of good writing. It’s exactly two common constructions: not X but Y, and 3 items in parallel. These two pop up in extreme disproportion to normal “good writing”. Good writers know to save these tricks for when they really want to make a point.
Most people aren’t great writers, though (including myself). I’d guess that if people find the “not X but Y” compelling, they’ll overuse it. Overusing some stylistic element is such a normal writing “mistake”. Unless they’re an extremely good writer with lots of tools in their toolbox. But that’s not most people.
I find the probability that a particular writer latches onto the exact same patterns that AI latches onto, and does not latch onto any of the patterns AI does not latch onto, to be quite low. Is it a 100% smoking gun? No. But it’s suspicious.
But you didn't write that "Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are not just examples of how they teach to write essays in high school and college; they're also all examples of how I think and have learned to communicate."
I mean maybe the content is not AI generated (I wouldn’t say it is) but the website does have an AI generated smell to it. From the colors to the shapes, it looks like Sonnet or Opus definitely made some tweaks.
Clearly your perception of what is AI generated is wrong. You can't tell something is AI generated only because it uses "not just X - Y" constructions. I mean, the reason AI text often uses it is because it's common in the training material. So of course you're going to see it everywhere.
Find me some text from pre-AI that uses so many of these constructions in such close proximity if it’s really so easy - I don’t think you’ll have much luck. Good authors have many tactics in their rhetorical bag of tricks. They don’t just keep using the same one over and over.
The style of marketing material was becoming SO heavily cargo-culted with telltale signs exactly like these in the leadup to LLMs.
Humans were learning the same patterns off each other. Such style advice has been floating around on e.g. LinkedIn for a while now. Just a couple years later, humans are (predictably) still doing it, even if the LLMs are now too.
We should be giving each other a bit of break. I'd personally be offended if someone thought I was a clanker.
You’re completely right, but blogs on the internet are almost entirely not written by great authors. So that’s of no use when checking if something is AI generated.
I'm a C/C++ developer. I write production code in MQL5 (C-like) and Go, and I use Python for research and Automation. I can work with other languages as well, but I keep asking myself: why should I learn Zig?
If I want to do system or network programming, my current stack already covers those needs — and adding Rust would probably make it even more future-proof. But Zig? This is a genuine question, because the "Zig book" doesn’t give me much insight into what the real use cases for Zig are.
If you're doing it for real-world value, keep doing that. But if you want traction, writing in a "fancy" language is almost a requirement. "A database engine written in Zig" or "A search engine written in Zig" sounds much flashier and guarantees attention. Look at this book: it is definitely AI slop, but it stays at the top spot, and there's barely any discussion about the language itself.
Enough rant, now back on some reasons for why choosing Zig:
- Cross platform tools with tiny binaries (Zig's built in cross compilation avoids the complex setup needed with C)
- System utilities or daemons (explicit error handling instead of silent patterns common in C)
- Embedded or bare metal work (predictable rules and fewer footguns than raw C)
- Interfacing with existing C libraries (direct header import without manual binding code; a quick sketch follows after this list)
- Build and deployment tooling (single build system that replaces Make and extra scripts)
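On the "direct header import" point, this is roughly what it looks like; a minimal sketch, assuming libc is linked (e.g. `zig build-exe hello_c.zig -lc`) and that @cImport is still the mechanism in your Zig version:

```zig
// No hand-written bindings: @cImport translates the C header at compile time.
const c = @cImport({
    @cInclude("stdio.h");
});

pub fn main() void {
    _ = c.printf("hello from C's printf\n");
}
```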
For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
> For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.
Hm, I can see a good use case when we want reproducible builds of Go packages, including their C extensions. Is that your use case, or are you aiming for multi-environment support of your compiled "CGO extensions"?
I need to bundle a lot of C libraries, some dynamically linked and some statically linked, and I need to deploy them on different operating systems, including some that are difficult to work with, like RHEL. Right now the builds are slow because I use a separate Dockerfile for each platform and then copy the binary back to the host. With `zig cc` I could build binaries for different platforms and architectures without using Docker.
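For anyone wanting to try the same thing, here is a hedged sketch of the usual invocation: the target triple, output name, and package path are illustrative, and the environment variables are the standard Go/CGO ones pointed at `zig cc` as the cross-compiler.

```sh
# Illustrative only: cross-compile a CGO-heavy Go project without Docker by
# pointing Go's C toolchain at `zig cc`. Adjust the target triple as needed.
CGO_ENABLED=1 GOOS=linux GOARCH=arm64 \
  CC="zig cc -target aarch64-linux-gnu" \
  CXX="zig c++ -target aarch64-linux-gnu" \
  go build -o myapp ./cmd/myapp
```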
My take on this, as someone who has professionally coded in C, C++, Go, Rust, and Python (and former darlings of the past), is that Zig gives you the sort of control that C does, with enough niceties that it doesn't break into other idioms the way C++ and Rust do in terms of complexity.
Rust "breaks" on some low level stuff when you need to deal with unsafe (another idiom) or when you need to rely on proc-macros to have a component system like Bevy does. Nothing wrong with this, is just that is hard to cover all the ground.
The same happens with C++: having grown to cover a lot of ground, it ended up with lots of features and a significant complexity burden.
In my experience with Zig, you feel like you're thinking more about the systems engineering, with the language helping you implement it without resorting to all sorts of language idioms and complexity. It feels more intuitive in a way, given that it tries to stay simple and get out of your way. It's a more "unsurprising" programming language in terms of what you end up getting after you write the code - you understand exactly how the code will run.
In terms of ecosystem, let's say you have Java lunch, C lunch, and C++ lunch (established languages) in their domains. Go is eating some Java (C#, etc.) lunch and, in smaller domains, some C++ lunch. Rust is in the same heavyweight category as Go, but it can eat more C++ lunch than Go ever could.
Now Zig will be able to compete in ways that make it a real alternative to C's core values, which other programming languages failed to achieve. So it will be aimed at things C and C++ are doing now, where Go and Rust won't be good candidates.
If you've used Rust long enough, you can see that while it can cover almost all the ground, it's not a good fit for lower-level stuff, or at least not without some compromises either in performance or in complexity (affecting productivity). So it's more in the same family as C++ in terms of what you pay for (again, nothing wrong with that; it's just that some complex codebases will need a good amount of man-hours, along the same lines as C++).
Don't get me wrong, Rust can be good at low-level stuff too; it's just that some of its choices make you as a developer pay a price for those niceties when you need to get your hands dirty in specific domains.
With Zig you feel more focused on the machine, with fewer abstractions, as in C, but with enough goodies to make even the most die-hard C developer think about using it (something C++ and Rust never managed to do).
So I think Zig will have its place in the sun as Rust does. But I see Rust taking more of the place where Java used to be (together with Go), plus some things that were done in C++, while Zig will be more focused on systems and low-level stuff.
Modern C++ will still be around, but Rust and Zig will be used more and more where languages like C and C++ used to be the only real contenders, which is quite good in my view.
What will happen is that Rust and Zig programmers might overlap and offer tools in the same area (see Bun and Deno, for instance), but the tools will excel in their own ways, and with time it will become clearer which domains Rust and Zig are each better at.
"The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices."
Such a bald-faced and utterly disgusting lie. The introduction itself ticks every single flag of AI generated slop. AI is trained well on corporate marketing brochures.
I don't know any of you. But Zig has opened a big door into systems programming for people like me who have never done it before. And Zig code looks (for a guy coming from curly-brace languages) easier to understand, with a really small learning curve.
Yeah, you should. Zig is a trending language right now, and in the coming years many projects are likely to be rewritten in Zig instead of Rust (often referred to as "riiz").
I was half joking. Folks keep saying everything will get rewritten in Zig, so I played along with that. Nothing serious behind it.
With only half serious intent, I think only the real wizard types, like Jarred Sumner (Bun) and Mitchell Hashimoto (Ghostty), who understand both low level systems and higher level languages, should be writing big tools in Zig. The tough part in the next few years will not be building things, it will be keeping them alive if the authors step away or the ecosystems move in a different direction.
I don't think you need to learn anything! Especially if you like Rust and it works for your projects.
Not an expert but Zig seems like a modern C - you manage memory yourself. I guess if you want more modern features than C offers, and actively don't want the type-system sort of features that Zig has (or are grumpy about compile times, etc) then it's there for you to try!
Partially agree on this: stdlib/crates and ease of use do make a difference (and this is not even the main reason to use Rust), though Rust certainly has its own headaches. (Imagine searching for someone's implementation of a HashMap on GitHub, or pulling in dedicated packages like glib, when you get it easily from crates.io.) Again, this is subjective and based on use cases.
It was very hard to find a link to the table of contents… then I tried opening it and the link didn’t work. I’m on iOS. I’d have loved to take a quick look at what’s in the book…
For me, personally, any new language needs to have a "why." If a new language can't convince me in 1-2 sentences why I need to learn it and how it's going to improve software development, as a whole, it's 99% bs and not worth my time.
DHH does a great job of clarifying this during his podcast with Lex Fridman. The "why" is immediately clear and one can decide for themselves if it's what they're looking for. I have not yet seen a "why" for Zig.
For many languages I agree, especially languages with steep learning curves (e.g. Rust, Haskell). But zig is dead fast to learn so I'd recommend just nipping through Ziglings and seeing if its a language you want to add to the toolbox. It took me only about 10 hours to pick up and get used to and it has immediately replaced C and C++ in my personal projects. It's really just a safer, more ergonomic C. If you already love C, I maybe wouldn't bother.
Haha the fucking garbage. Before AI, before the internet, this overexaggerated, hokey prose was written by scummy humans and it came exclusively in porn magazines along with the x-ray specs and sea-monkey fishtanks.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I think it's time to have a badge for non LLM content, and avoid the rest.
I imagine it's kind of like "What's stopping someone from forging your signature on almost any document?" The point is less that it's hard to fake, and more that it's a line you're crossing where everyone agrees you can't say "oops I didn't know I wasn't supposed to do that."
The name seems odd to me, because I think it's fine to describe things as a digital brain, especially when the word brain doesn't only apply to humans but to organisms as simple as a 959 cell roundworm with 302 neurons.
> Most programming languages hide complexity from you—they abstract away memory management, mask control flow with implicit operations, and shield you from the machine beneath. This feels simple at first, but eventually you hit a wall. You need to understand why something is slow, where a crash happened, or how to squeeze every ounce of performance from your hardware. Suddenly, the abstractions that helped you get started are now in your way.
> Zig takes a different path. It reveals complexity—and then gives you the tools to master it.
> This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague.
But sadly people like the prompter of this book will lie and pretend to have written things themselves that they did not. First three paragraphs by the way, and a bingo for every sign of AI.
I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style, by the way, was completely different to this, quite a few expletives...). When I put the text into 5 random AI detectors, all except one (which said mixed, 10% AI or so) said 100% human. I was downvoted, and the argument became "AI detection tools can't detect AI" - yet somehow people claim there are 100% clear telltale signs that say it's AI (why those detection tools can't pick up on those signs is baffling to me).
I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style".
It really does not add to the discussion. If people would post immediately "there's spelling mistakes this is rubbish", they would rightfully get down voted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody used AI for writing it? So what is your point?
Check out the other examples presented in this thread or read some of the chapters. I'm pretty sure the author used LLMs to generate at least parts of this text. In this case this would be particularly outrageous since the author explicitly advertises the content as 100% handwritten.
> Would the book be any more or less useful if somebody used AI for writing it?
Personally, I don't want to read AI generated texts. I would appreciate if people were upfront about their LLM usage. At the very least they shouldn't lie about it.
I ran the introduction chapter through Pangram [1], which is one of the most reliable AI-generated text classifiers out there [2] (with a benchmarked accuracy of 99.85% over long-form text), and it gives high confidence for it having been AI-generated. It's also very intuitively obvious if you play a lot with LLMs.
I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty.
There's also the classic “it's not just X, it's Y”, adjective overuse, rule of 3, total nonsense (manage memory with surgical precision? what does that mean?), etc. One of these is excusable, but text entirely comprised of AI indicators is either deliberately written to mimic AI style, or the product of AI.
"not just x but y" is definitely a tell tale AI marker. But, people can write that as well. Also our writing styles can be influenced as we've seen so much AI content.
Anyway, if someone says they didn't use AI, I would personally give them the benefit of the doubt for a while at least.
Like many scholarly linguistic constructions, this is one many of us saw in Latin class with non solum ... sed etiam or non modo ... sed etiam: https://issuu.com/uteplib/docs/latin_grammar/234. I didn't take ancient Greek, but I wouldn't be surprised if there's also a version there.
Even for content that isn’t directly composed by llm, I bet there’d be value in an alerting system that could ingest your docs and code+commits and flag places where behaviour referenced by docs has changed and may need to be updated.
This kind of “workflow” llm use has the potential to deliver a lot of value even to a scenario where the final product is human-composed.
Meh. I mean, who's it for? People should be adopting the stance that everything is AI on the internet and make decisions from there. If you start trusting people telling you that they're not using AI, you're setting yourself up to be conned.
Edit: So I wrote this before I read the rest of the thread, where everyone is pointing out this is indeed probably AI, so right off the bat the "AI-free" label is conning people.
I guess now the trend is Zig. The era of JavaScript frameworks has come to an end. After that came the AI trend. And now we have Zig and its allocators, especially the arena allocator.
The page you've linked is very confusing, but as far as I can tell that's a Zigbee device that the manufacturer (Tensor plc) consistently describes as a "Zig" device. I have no idea why, it's bizarre.
- This thesis [1] identifies a product in this family as a Zigbee device. It's on the 80th page (numbered 62). Elsewhere it's referred to as a Zig device.
- I can't find anyone else claiming to make Zig devices or any references to a Zig protocol outside of this one manufacturer and their distributors.
- The manufacturer makes a lot of weird typos. They variously say these devices operate at 2.4GHz, 2.4MHz, and 2.4Mhz.
- There's nothing about a Zig protocol on the Zigbee Wikipedia page.
For sure, but I don't think Andrew Kelley should've let a thing like that stop him from naming the language what he wanted, especially when the collision isn't even real. There are a lot of projects out there and only so many good, short, pronounceable names, so there's only so much accommodating you can do for name collisions with entities that don't exist.
Even if what you say is true, people make bets on new tech all the time. You show up early so you can capture mindshare. If Zig becomes mainstream then this could be the standard book that everyone recommends. Not just that, it’s more likely the language succeeds if it has good learning materials - that’s an outcome the author would love.
> people make bets on new tech all the time. You show up early so you can capture mindshare.
I got in on the ground floor with Elixir, got my startup built on it, and now we have 3 full-time engineers working on Elixir. None of that would have happened if I had looked at a young language and said "it's not used in the real world".
"nobody uses in the real world yet" is uncharitable, as Zig is used in many real-world projects (Bun and Tigerbeetle are written in Zig, for example). But there's value being at the forefront of technologies that you think are going to explode soon, so that's how people find time and energy, I guess.
Why does this feel like an ad? I've seen pangram mentioned a few times now, always with that tagline. It feels like a marketing department skulking around comments.
The other pangram mention elsewhere in this comment section is also me -- I'm totally unaffiliated with them, just a fan of their tool
I specify the accuracy and false positive rate because otherwise skeptics in comment sections might think it's one of the plethora of other AI detection tools that don't really work.
FWIW I work on AI and I also trust Pangram quite a lot (though exclusively on long-form text spanning at least 4 or more paragraphs). I'm pretty sure the book is heavily AI written.
SAME. I was looking for a donation button myself! I've paid for worse-quality instructional material. This is just the sort of thing I'm happy to support.
I’m not sure how much value is to be had here, and it’s unfortunate the author wasn’t honest about how it was created.
I wish I wouldn’t have submitted this so quickly but I was excited about the new resource and the chapters I dug into looked good and accurate.
I worry about whether this will be maintained, if there are hallucinations, and if it’s worth investing time into.