> There is no big company behind Zig, and the non-profit foundation stands on its own legs thanks to a balanced mix of corporate and individual donations. We don't have any big tech company on our board of directors, and frankly, we like it this way.
I hope it stays this way. And maybe it can given the way the foundation has been bootstrapped, and that the project is driven largely by a single creator. Having the funding be driven largely by small donations of actual developers is a healthy incentive structure.
Zig, next to Rust, is one of the more interesting language projects going on these days. Since Rust became orphaned from Mozilla, I am a bit ambivalent about the amount of investment in the project which is being driven by tech giants currently.
It's only recently, after it became very popular, that all the tech giants started investing in it.
The same goes for many other languages, like Ruby, Haskell, etc.
Rust was developed within the corporate world of Mozilla, like the Go language at Google, so it's natural that it will embrace the corporate world and companies like Microsoft and Amazon, which will in time decide its future direction.
I mean at some point, if the tech is good, tech giants will use it. And then you'll have a flurry of people rambling in shock "Wow, Google depends so much on [tech] and they don't even give a single penny to the devs!".
A programming language getting substantial investment from its users is open source working, possibly at its best.
I am rarely impressed by new languages; I tend to think we already have too many, and a lot of them are slow and/or redundant.
But I like to see the emergence of a few sane proposals to replace C/C++, and I've been extremely impressed so far by the work done by the team behind Zig.
The best feature, from my perspective, is that Zig does not intend to be a better C++ but a better C.
my only beef with zig/jai is that they both have strong opinions on things like operator overloading, and i am afraid they will end up just like java.
i have nothing against having strong opinions, i am the biggest offender myself, but completely disallowing some features (some of the best features static typing can offer) is just nonsense.
every language designer should study common-lisp, starting with the operators : and ::. these two operators alone tell you that the language was designed for practical use.
for me, overloading the operator + and overloading a function 'add' are exactly the same; by that logic you should forbid function overloading as well. you can argue about the weaknesses of operator overloading, but disabling it completely, with not even a workaround?
why not just enable operator overloading and have safe alternatives instead?
    safe(a + b)
    a s+ b
"It's really easy to make fun of C++, but everyone has that one feature of C++ that they like. And they want to put it in Zig. If everyone had their favorite feature from C++ in Zig, Zig would just be C++. But I'm not going to let that happen."
a language with first-class cross-compilation alone is enough for me to respect you as a language designer, and zig has so many features like this. i am a huge fan.
i have only one favourite feature in c++ and it is templates/ctfe. without function/operator overloading, templates are incomplete; i would then use c instead. i am still using c++ with all its failures, all its complexity, all its ugliness/inelegance, only because c++ got one thing right: you can at least somehow modify the language.
edit: i have to point out that i have never worked in a big team (more than 10) and you can (probably should) safely ignore my ideas/opinions.
Maybe I am just a huge dummy, but I have yet to find examples of metaprogramming in the wild that aren't just mind-meltingly hard to grok. (Most of what I have seen is Python and Rust).
I have no doubt about how powerful metaprogramming is, but it makes me feel that understanding and contributing to libraries that use it is out of my reach.
I think what's novel about Zig's approach is that the metaprogramming is just normal code which happens to be executed at compile-time.
I have found that when any project gets to a certain size, it's almost inevitable that metaprogramming will be required, unless you want to make everything super dynamic and sacrifice performance. The idea of being able to do metaprogramming in the language I used to write the program itself is an interesting one.
I don't know if Kelley would agree with my characterization, but I don't see comptime as metaprogramming. Instead it opens the very interesting possibility of having types as values, as long as those values are resolvable at compile time. This lets you do things that feel like metaprogramming (e.g. making a generic container structure) but it seems a better conceptual fit to me that you're programming with types as values rather than generating code from a template or macro.
in fact i believe templates are the greatest idea in static typing. they look awful because the design and the implementation are awful, not the idea. c++ failed because they didn't know better and it was too late to turn back, and rust just looks the same. you should see d-templates.
proposed and rejected. I went to a lang meeting and tried to steelman the issue, without having a strong investment either way (I had one use case, where I would slightly prefer the infix operation).
Yeah gamedev/simulation is the use-case where I think operator overloading is really a value-add. It's really nice to be able to write linear algebra code which reads like math.
Operator overloading is something that has often been misused, and even if I tend to like syntactic sugar, I understand that being explicit is more important.
When trying to understand a codebase, it is much easier to not worry about hidden indirections.
It is not the role of language designers to prevent bad programmers from writing bad code.
But it is better to avoid giving them too many ways to shoot themselves in the foot.
Forcing the code to be explicit is good design, in my opinion: since it is much easier to write code than to read it, we have to put more weight on the clarity side.
Is your point with the ':' and '::' operators that by allowing anybody to break package encapsulation at will the language pragmatically allows for situations that the original package designer did not anticipate? I could see that.
> "Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
The difficulty is that in the days when C was being designed, computers were much more irregular than they are today. There were one's complement machines, machines that didn't have power-of-two word sizes, no standardization of character sets, IEEE floating point hadn't been invented yet. The irregular machines weren't fringe stuff, they were the dominant architectures (IBM 360/370, DEC PDP-10, Pr1me, just a big collection of weird stuff). And compiler technology was much less advanced. So C was a messy compromise.
High level system programming languages are about 10 years older than C, which in its early days only cared to target the PDP-11 model used for the first UNIX rewrite.
Authors just chose to ignore what was already out there and do their own thing instead.
Well, I guess it is a lesson for both: those who think a technology is better because it is so successful, and those who think a technology will be successful because it is so much better than other things.
Iirc, C was also designed by committee, and a lot of industry players got their grubby hands on the spec... (I could be wrong) but I believe the utter mess that is short/long/char sizes arises from hardware manufacturers wanting their code to be "trivially portable" across platforms with different machine words.
The question is if this should be done at the language-level or library-level. With sufficient metaprogramming capabilities, an enterprising programmer could write safe resource management abstractions for Zig. However, this wouldn't make the language "safe by default," which seems to be what Rust programmers are bringing with this criticism.
Even if Zig doesn't bring safety by default to the language, however, I think you could get it in practice by making the standard library enforce safety. Then, unless someone goes out of their way to write their own libraries from scratch and eschew all of the safety mechanisms, they would likely live in this safety bubble.
Can you write a borrow checker that enforces alias-xor-mut using metaprogramming? I'm skeptical, because of the flow sensitivity you really want to make it practical.
You could with some changes to the type capabilities, but I don't think that's the right direction. The main question is: do we want soundness or not? This is a general question for various correctness properties, and, at least in the formal methods space, the answer seems to be "not always." Soundness has a cost, and stopping 100% of UAF bugs at the cost of making the language more complex and even adding a few bugs of other kinds might not be worth it if you can stop 99% of them for a fraction of the cost. I think the goal should be to not have soundness -- IMO it has more downsides than upsides in this case -- and instead rely on good runtime detection joined with a fast compile/test cycle and even automatic test generation.
Not at all. I'm saying that sound guarantees are not the only way to achieve memory safety. The goal isn't to use a language that makes sound guarantees, but to write correct programs (even Rust programs don't give you sound guarantees for memory safety as many of them depend on unsafe code that isn't soundly proven). That the cheapest way to write such programs is to have sound guarantees for everything is an interesting hypothesis, which would get you a language like, say, Idris, but it is not the consensus. There are many paths to safety and correctness, and not all of them go through soundness. I'd venture to say that most properties your life depends on in safety-critical systems are not soundly guaranteed.
When I say "memory safety" I mean "language-enforced memory safety". You're saying that having the language enforce memory safety isn't worthwhile.
> even Rust programs don't give you sound guarantees for memory safety as many of them depend on unsafe code that isn't soundly proven
There's no such thing as 100% memory safety; at the extreme end there are bit flips caused by cosmic rays. But the evidence suggests that practical language-enforced memory safety is actually worthwhile, despite the fact that no theoretical absolute memory safety is possible. All memory-safe languages have runtimes and FFIs, which in Rust is the unsafe blocks. This doesn't change the fact that, empirically, memory safety, even the imperfect memory safety we have to live with in the real world, is a meaningful improvement in stability and security.
> That the cheapest way to write such programs is to have sound guarantees for everything is an interesting hypothesis, which would get you a language like, say, Idris, but it is not the consensus.
Straw man. Nobody is talking about proving all code correct. What is reasonably describable as consensus nowadays is that having the language enforce memory safety is worthwhile. The idea that enforced memory safety has, in your words, "more downsides than upsides", is increasingly at odds with the consensus.
To get concrete, I see no reason to believe that quarantine (Zig's current solution to UAF) is a meaningful solution to use-after-free problems, given that quarantine has been deployed for a long time in production allocators in other languages and has failed to eliminate this bug class in the wild.
> You're saying that having the language enforce memory safety isn't worthwhile.
I am saying that it may come at a cost, and overall it may not be worthwhile; but, of course, there are different kinds of safety bugs, different kinds of soundness, different costs, and different alternatives.
> But the evidence suggests that practical language-enforced memory safety is actually worthwhile, despite the fact that no theoretical absolute memory safety is possible
What is it that the evidence suggests exactly? There are so many variables. For example, Zig's runtime checking could be much more effective than similar systems for C or C++, because it is much easier to track all pointers in Zig, just as its overflow protection is far more effective than sanitisers in C, because there's no pointer arithmetic (unless using unsafe operations) and all buffer sizes are always known. So you can't compare what Zig can do to what C can do. Then there's the question of what the safety mechanism is. Then, for each point in this coordinate space, what is the fitness you're looking at? Reducing a particular bug or improved overall correctness?
> What is reasonably describable as consensus nowadays is that having the language enforce memory safety is worthwhile.
No, this is not true, or at least, that depends on what is the fitness function and what exactly it is compared against. To put it bluntly, there is absolutely no consensus that Rust would more cheaply produce programs that are more correct overall than Zig. It's possible this is the case, as is the opposite or that the two are about the same, but we just don't know yet.
> To get concrete, I see no reason to believe that quarantine (Zig's current solution to UAF) is a meaningful solution to use-after-free problems, given that quarantine has been deployed for a long time in production allocators in other languages and has failed to eliminate this bug class in the wild.
But that's not how we look at correctness. The goal isn't to eliminate all UAF bugs. Of course completely (or close to it) eliminating a specific bug is more effective at reducing it than not completely eliminating it. If my only goal was to eliminate UAF, then I'd choose Rust's approach over Zig's. But usually our goal is to write a program that meets our correctness level for all bugs combined in the cheapest way possible. Investing a huge language complexity budget in completely eliminating UAF, as opposed to, say, reducing it by 99%, has not been shown to be the best way to achieve what we actually want.
I am not saying that it's not reasonable to believe that soundly eliminating this particular bug with something like Rust's ownership and lifetime system ends up being better overall, but it is equally reasonable, based on what we know, to believe that Zig's way is more effective overall, or that the two end up the same. There's just so much we don't know about correctness.
Not sure, but I became optimistic of there being some solution after I saw someone implement affine typing using stateful template metaprogramming: https://godbolt.org/z/PnPPrnPjY.
That's neat, but also flow-insensitive (and wow, generating a definition for every single use of a variable can't be good for compile-time memory usage).
There isn't really anything in the language that prevents use-after-free, partially because allocation isn't even part of the language and exists entirely in userland code. However, IIRC the standard library does include an allocator that tries to detect use-after-free.
Isn't that an intentional design decision in Zig? As far as I understood, Zig isn't trying to "solve" memory management, it's trying to provide manual memory management with better tools than C to help you do it right, but it's explicitly not trying to save you from yourself in this respect.
I don't follow. You recently pointed out to me that most of the serious security bugs in the Chromium codebase (C++) are rooted in memory-management. [0] It's no simple thing to fix these bugs. Not even Google can do so in their most treasured code.
It's not supposed to be. In most cases, memory allocation is an operating system concept. As a systems programming language, zig must not abstract this away from the programmer.
You don't need allocations to be part of the language, you just need destructors (or linear types). But the Zig authors intentionally omitted destructors because they want all control flow to be explicit. There's more at [0].
Side note: the proposal at [0] seems very close to [1].
Currently I'm using Rust in a project without RAII. RAII is neither necessary nor sufficient to prevent use-after-free. In Rust use-after-free is prevented by the borrow-checker. In C++ with RAII use-after-free is still possible, because normal references aren't tracked with regard to the lifetime of pointees.
> I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application:
> it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away.
> The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing.
> They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.
That would depend on the language and the libraries.
You could roll your own 'safe' memory pool in C such that use-after-free doesn't produce undefined behaviour. You'd still have a bug, but it wouldn't have to invoke UB at the level of the C programming language.
as far as I can tell, no really good ones exist yet...
the documentation is still scattered. The official website [1] is a good starting point, and I wish I'd seen this [2] earlier (I hope it gets incorporated into the official resources).
There are many erroneous claims made regarding Zig allocators. Contrary to what is claimed, the fact that you pass an explicit allocator does not affect:
- That an out-of-memory condition is elegantly handled or not.
- That an out-of-memory condition is properly reported to the caller.
- That it simplifies WebAssembly support; whatever you put in the allocator you could put into malloc.
- That it easily allows arena allocators; you can only use an arena allocator if you intimately know every single use of the passed-in allocator, all the implementation details of the libraries you call, and those of *all* their dependencies. If a dependency, for example, implements a cache, the arena allocator will corrupt it when deallocated.
There are many ways to approach macros. Zig’s approach with compile-time evaluation handles many of them, especially since types can be created on the fly.
Another approach, which is strongly inspired by the ASTEC project for C, is what I use for my C-like “C3”: http://www.c3-lang.org/macros/
Zig is clearly about a more homogeneous approach, whereas I am happy to have multiple pieces of syntax to cover different use cases.
It's always possible to use the C preprocessor on non-C files; I saw people do it in Java, and I heard it's somewhat common in C#. It's also used by various Unix utilities (X11's xrdb, for instance). So technically every language can have C macros :-)
I don't know much about Kelley, but if the goal is to create a "C replacement", aggressively keeping the language surface area small seems like a reasonable way to go.
And even if you don't think that's right, we already have the large-surface-area systems programming language experiment in Rust. Aren't we better off with Rust and Zig exploring clearly different paths for a C replacement than with both of them trying the same things?
Exactly, this is why he sometimes comes off as a jerk: he tells people stuff like "No, you can't add your virtual function table feature to Zig, OOP is a bad idea in a systems language."
All these programming language features require a lot of machinery, complicating the binary interfaces of the resulting software and preventing code reuse.
Simple, small languages get complicated when used in large applications, while complex languages become simpler and easier when used in large applications.
That mostly seems correct, but Zig will be an interesting one to watch due to its approach of powerful compile-time execution. I.e., Zig doesn't have generics in the traditional sense, but you can get them by writing normal code which creates types for you at compile time. The promise is that you can get more advanced features without adding complexity to the language, unlike in Rust, where you basically have to understand the AST in depth to be able to write macros. We'll see if it pans out.
Also, Go, for instance, has been able to stay relevant while staying very small. Complex languages can help with complex projects if there is a good program structure, but one advantage of simple languages is that there is only so much of a mess that can be made if developers go a bit off the reservation.
Yeah, this is not my experience. Enterprise distributed systems are much easier in Erlang and Elixir, which are simple languages, than in Java, which is not.
A key feature of a "flame war" is long chains of responses. By not responding to amedvednikov, AndyKelley did not participate in a flame war on that post. It was a call-out. His criticism was quite relevant at the time, because the article was a false release announcement, and the project was garnering a lot of attention (and taking people's money) by making outlandish claims, with (at the time) nothing to show for it.
There is no such thing as "constructive criticism" of suspected fraud -- there were a bunch of red flags, and AK was not remotely alone in sounding the alarm. Do you have constructive criticism to offer, is AK still badgering AN, or are you just airing grievances about 2 year-old drama?
> There is no such thing as "constructive criticism" of suspected fraud
You can criticize anyone constructively. What they are today is not what they will be in the future. People can change, even Andrew, but if something is repeated over and over, it is a sign of their personality.
I see his act as harassment. I know that Vlang had red flags, but I sense Russophobia or jealousy in his actions. I looked through everything, but he didn't let the man talk and explain himself, so the other side of the story is hushed up. It's like asking someone to prove their worth, or else. Naive Bayes can uncover many cases of fraud, but bias can turn things upside down.
Yeah, he might have come on kind of strong against the V guy, but as I understand it there are some vaporware-ish aspects to VLang (or were at the time he pointed them out).