
I don't think the reason for zero values has anything to do with "avoiding null panics". If you want to inline the types, that is, avoid spending most of your runtime on pointer chasing, you can't universally encode a null value. If I'm unclear, ask yourself: what would a null int look like?

If what you wanted were to avoid null panics, you could define the elementary operations on null. Null has generally been defined as erroring aggressively, but there's nothing stopping a language definition from defining propagation rules, like float's NaN.
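
For illustration, here's how NaN already behaves in Rust; a null with the same propagation semantics is the hypothetical being described:

    fn main() {
        // IEEE 754 NaN propagates through arithmetic instead of erroring:
        let x = f64::NAN;
        let y = x * 2.0 + 1.0;
        assert!(y.is_nan()); // still NaN, no panic
        // a language could give null the same semantics: null + 1 == null
    }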




Sorry, I don't follow you. If you don't have zero values, you either have nulls and panics, or you have some kind of sum type à la Option<T> and cannot possibly construct null or zero-ish values.

Is there a way to have your cake and eat it too, and are there real world examples of it?


You're thinking in abstract terms; I'm talking about the concrete implementation details. If we, just as an example, take C: an int can never be NULL. It can be 0, and compilers will sometimes tell you it's "uninitialized", but it can never be NULL. All possible bit patterns are meaningful ints.

Pointers are different in that we've decided that the pattern where all bits are 0 is a value that indicates that it's not valid. Note that there's nothing in the definition of the underlying hardware that required this. 0 is an address just like any other, and we could have decided to just have all pointers mean the same thing, but we didn't.

NULL is just a language construct, and as a language construct it could be defined any way you want. You could define your language such that dereferencing NULL always returns 0. You could decide that pointer arithmetic with NULL yields another NULL. Once you realize that it's just language semantics and not fundamental computer science, you realize that the definition is arbitrary, and any other definition would do.

As for sum types: you can't fundamentally encode any more information into an int. It's already completely saturated. What a sum type does, at a fundamental level, is bundle your int (which has a default value) with a boolean (which also has a default value) indicating whether your int is valid. There are some optimizations you can do with a "sufficiently smart compiler", but like auto-vectorization, that's never going to happen.
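
A rough sketch of that bundling in Rust (the exact sizes below hold on typical 64-bit targets, but they are implementation details, not guarantees):

    use std::mem::size_of;

    // What a sum type does at the machine level: bundle the value with a tag.
    struct ManualOption {
        valid: bool, // "is the int meaningful?"
        value: i32,
    }

    fn main() {
        // On a typical 64-bit target both come out to 8 bytes: 4 for the
        // i32, 1 for the tag, and 3 of alignment padding:
        assert_eq!(size_of::<ManualOption>(), 8);
        assert_eq!(size_of::<Option<i32>>(), 8);
    }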

I guess my point can be boiled down to the dual of the old C++ adage. Resource Allocation is NOT initialization. RAINI.


Then your point is tangential to the question of zero values, and even more so to the abstract concept of zero values spilling over into protobuf.


No, there isn't. It is just other versions of the same problem with people pretending it is somehow different.

People generally like to complain about NULL/nil whatever, but they rarely think about what the alternatives mean and what arrangements are completely equivalent. No matter what you do, you have to put some thought into design. Languages can't do the design work for programmers.


There is a way to have your cake and eat it too: rust.

In rust, you have:

    let s = S { foo: 42, ..Default::default() };
You just got all the remaining fields of 'S' set to "zero-ish" values, and there are no NPEs.

The way you do this is by having types opt in to it, since zero values only make sense in some contexts.
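
For example, here's a compilable version of the snippet above (S and its fields are made up for illustration):

    #[derive(Default)] // opting S in to having a zero-ish value
    struct S {
        foo: i32,
        bar: String,
        baz: Vec<u8>,
    }

    fn main() {
        let s = S { foo: 42, ..Default::default() };
        assert_eq!(s.foo, 42);
        assert_eq!(s.bar, "");      // String's default
        assert!(s.baz.is_empty());  // Vec's default
    }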

In go, the way to figure out if a type has a meaningful zero value is to read the docs. Every type has a zero value, but a lot of them just panic on a nil pointer or do something completely nonsensical if you try to use them.

In rust, you can know at compile time whether something implements Default, so you can know if there's a sensible zero value, and you can construct it.
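
Something like this sketch, where zeroish is a made-up helper (TcpStream is just an example of a type with no sensible default):

    // Whether a sensible zero value exists is a trait bound the compiler checks:
    fn zeroish<T: Default>() -> T {
        T::default()
    }

    fn main() {
        let n: i32 = zeroish(); // fine: i32 implements Default
        // let s: std::net::TcpStream = zeroish(); // compile error: no Default
        assert_eq!(n, 0);
    }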

Go doesn't give you your cake, it gives you doc comments saying "the zero value is safe to use" and "the zero value will cause unspecified behavior, please don't do it", which is clearly not _better_.


> There is a way to have your cake and eat it too: rust.

Suppose my cake is that I have a struct A which holds a value of type B from your library, and B doesn't have a default value. Suppose that at the time I want to allocate A I don't yet have the information I need to initialize B, but I also know that I won't need B before I have that information and can initialize it. In simple terms: I want to allocate A, which requires allocating B, but I don't want to initialize B yet.

What do I do?

If your answer involves Option<B>, then you're asking me to grow my struct for no gain. That is clearly not _better_.


Doesn't Rust have explicit support for uninitialized memory, using the borrow checker to make sure you don't access it before initializing it? Or does that just work for local variables, not members of structs?


You can’t do the “declare before initializing” thing with structs, that’s correct.
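
The closest escape hatch is std::mem::MaybeUninit, which gives up the compiler's initialization tracking in exchange for unsafe manual bookkeeping. A minimal sketch, with A and B as stand-in types:

    use std::mem::MaybeUninit;

    struct B(String); // stand-in for a library type with no Default

    struct A {
        ready: bool,       // you track initialization yourself now
        b: MaybeUninit<B>, // same size/alignment as B, no Option tag
    }

    fn main() {
        let mut a = A { ready: false, b: MaybeUninit::uninit() };
        // ... later, once the needed information exists:
        a.b.write(B("hello".into()));
        a.ready = true;
        if a.ready {
            // UB if b were not actually initialized, and note that
            // MaybeUninit never runs B's destructor on its own:
            let b: &B = unsafe { a.b.assume_init_ref() };
            println!("{}", b.0);
        }
    }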


Then you can't eat it too (or else you'll get very sick with NPEs/panics), sorry.


More specifically, it could result in undefined behavior if a panic happens between the allocation and the initialization (i.e., it was allocated but not initialized, a panic occurred, and something observed the incomplete struct after the panic). Alternatively, the allocation would always have to leak on panic, or the struct would have to be deallocated without its destructor running.


I agree that rust, with Option and Default, is the only right choice - at least from what I've tried. Elm, for example, has Maybe but nothing like Default, so it's sometimes tedious: you have to repeat a lot of handmade defaults, or you're forced to use constructor functions everywhere. But at least the program is correct!

Go is like PHP in regard to pushing errors forward. You simply cannot validate everything at every step. Decoding into types that enforce invariants is the right alternative.


What is with Rust evangelicals shitting up Go posts? Shut up and go away! Go talk to other Rust users about it if you love it so much!

it's for different things!

the things I build in Go simply do not need to be robust in the way Rust requires everything to be, and it would be much more effort to use Rust in those problem domains

Is Go a more crude language? maybe! but it lets me GET SHIT DONE and in this case worse really is better.

All I know is that I've spent less time over the last ten years writing Go dealing with NPEs than I have listening to Rust users complaining about them!

if you love Rust so much, YOU use it then! We like Go, in threads about Go. I might like Rust too, in the same way I like my bicycle and my car, if only the cyclists would shut up about how superior their choices are


> or you have some kind of sum type à la Option<T> and cannot possibly construct null or zero-ish values.

Option types specifically allow defaulting (to None) even if the wrapped value is not default-able.

You can very much construct null or zero-ish values in such a language, but it's not universal; types have to opt into this capability.
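
Concretely, in Rust (NotDefaultable is a made-up type):

    struct NotDefaultable(u32); // no Default impl, can't be conjured from nothing

    fn main() {
        // Option<T> implements Default (as None) for any T whatsoever:
        let x: Option<NotDefaultable> = Default::default();
        assert!(x.is_none());
    }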


Exactly my point: you have to opt in, and in practice you only do so precisely where it's actually necessary. Which is completely different from "every single type can be a [null | zero value]". You cannot possibly construct some type A (that is not Option<A> or A@nullable or whatever) without populating it correctly.

Of course you need some way to represent "absence of a value"; the matter is how: simple but incorrect, or complex but correct. And simple/complex here can mean both the language (so a performance tradeoff) and (initial) programmer ergonomics.

That's why I ask if you can have your cake and eat it too; the answer is no. Or you'll get sick sooner rather than later, in this case.


> You cannot possibly construct some type A (that is not Option<A> or A@nullable or whatever) without populating it correctly.

Except you can. The language runtime is clearly doing it when it stores [None|Some(x)] inline in a fixed-size struct.


There is no way to store None | Some(x) in sizeof(x) bytes, for simple information-theoretic reasons. What you can do is store between 1 and 8 optional fields with only 1 byte of overhead, by using a single bit field to indicate which of the optional fields are set (since no commonly used processor supports bit-level addressing, storing 1 extra bit still needs an entire extra byte, so the other 7 bits in that byte are "free").
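
A minimal Rust sketch of that bit-field layout (names are made up):

    // Three optional i32 fields sharing one byte of presence bits, instead
    // of paying a tag (plus padding) for each Option<i32> separately.
    struct Packed {
        present: u8, // bit i set => field i holds a valid value
        a: i32,
        b: i32,
        c: i32,
    }

    impl Packed {
        fn get_a(&self) -> Option<i32> {
            if self.present & 0b0001 != 0 { Some(self.a) } else { None }
        }
    }

    fn main() {
        let p = Packed { present: 0b0001, a: 7, b: 0, c: 0 };
        assert_eq!(p.get_a(), Some(7));
    }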


> There is no way to store None | Some(x) in sizeof(x) bytes

That's subtly incorrect. Almost all languages with NULLs in fact already do this, including C. On my machine sizeof(void*) == 8, and pointers can in fact express Some(x) | None. The cost of that None is neither a bit nor a byte; it's a single value. A singular bit pattern.

See, the None that you talk about is grafted on. It wraps the original without interfacing with it. It extends the state by saying "whatever value this thing has, it's invalid". That's super wasteful: instead of adding a single state, you've exploded the state space exponentially (in the literal sense).
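
Rust does exactly this for reference types, whose all-zeroes pattern is never valid; the first equality below is actually documented behavior for Option<&T>:

    use std::mem::size_of;

    fn main() {
        // &u8 can never be all-zeroes, so None borrows that spare bit pattern:
        assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
        // i32 uses every one of its 2^32 patterns, so the tag needs more space:
        assert!(size_of::<Option<i32>>() > size_of::<i32>());
    }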


I should have made that caveat: if X doesn't need all of the bits that it has, then yes, you can do this. But there is no way to know that this is the case for a generic type parameter, you can only do this if you know the semantics of the specific type you are dealing with, like the language does for pointer types.

I should also point out that in languages which support both, Option(int*) is a valid construct, and Some(nullptr) is thus not the same thing as None. There could even be valid reasons for needing to do this, such as distinguishing between the JSON objects {} and {"abc": null}. So you can't even use knowledge of built-in types to special-case your generic Option implementation. Even Option(Option(bool)) should be able to represent at least 4 distinct states, not 3: None, Some(None), Some(Some(true)), Some(Some(false)).
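
For what it's worth, Rust keeps all four of those states distinct and still packs them into a single byte, by reusing bool's spare bit patterns rather than conflating anything (the size here is current-rustc behavior, not a language guarantee):

    use std::mem::size_of;

    fn main() {
        let four: [Option<Option<bool>>; 4] =
            [None, Some(None), Some(Some(true)), Some(Some(false))];
        assert_ne!(four[0], four[1]); // None != Some(None)
        // bool uses 2 of its 256 bit patterns, leaving 254 niches to encode
        // the two extra states without growing the type:
        assert_eq!(size_of::<Option<Option<bool>>>(), 1);
    }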



