zeeboo's comments

It is indeed manufactured specifically to show the existence of "normal" numbers, which are, loosely, numbers where every finite sequence of digits is equally likely to appear. This property is both ubiquitous (almost every number is normal in a specific sense) and difficult to prove for numbers not specifically cooked up to be so.
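
To make "loosely" precise: a real number x is normal in base b if every string s of k digits appears in its expansion with the asymptotic frequency you'd expect from uniformly random digits. Writing N(s, n) for the number of occurrences of s among the first n digits:

    lim_{n -> inf} N(s, n) / n = 1 / b^k

"Almost every" is meant in the measure-theoretic sense: Borel showed in 1909 that the non-normal numbers form a set of Lebesgue measure zero. And yet no naturally occurring constant (pi, e, sqrt(2)) has ever been proven normal.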


Okay, fair. It just seemed to me to have pretty limited utility.


Hm who cares about utility in this case?


Well, if we don’t care about utility I could define infinitely many transcendental numbers with no utility other than that I just made them up. The number that is the concatenation of the digits of all prime numbers in sequence, for instance: 0.23571113171923… I christen this Dave’s Number. (It probably already has a name, but I’m stealing it.) Let’s add it to the list. Now we can define Dave’s Second Number as the first prime added to Dave’s Number: 2.23571113171923… Dave’s Third Number is the second prime added to Dave’s Number: 3.23571113171923… Since we’re cataloguing numbers with no utility, let’s add them all to the list.
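
For the curious, Dave's Number is easy to generate; here's a minimal Go sketch. (It is indeed already named: it's the Copeland–Erdős constant, and it is actually proven normal in base 10.)

    package main

    import (
        "fmt"
        "strings"
    )

    // isPrime is naive trial division; plenty for generating a few digits.
    func isPrime(n int) bool {
        for d := 2; d*d <= n; d++ {
            if n%d == 0 {
                return false
            }
        }
        return n >= 2
    }

    func main() {
        var sb strings.Builder
        sb.WriteString("0.")
        for n := 2; sb.Len() < 40; n++ {
            if isPrime(n) {
                fmt.Fprintf(&sb, "%d", n)
            }
        }
        fmt.Println(sb.String()) // 0.23571113171923293137...
    }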


The list was for famous numbers. Yours might get there, but not so fast.


But it is definitely in the list of Aleph-1 Most Famous Transcendental Numbers.

So there's that.



This is a good write-up of the issue. To see where this craziness could have led, see the C++ overload resolution logic, which isn't the exact same problem but does smell the same: https://en.cppreference.com/w/cpp/language/overload_resoluti...


I feel there's a clear alternative to the problem presented. I do not expect Empty to match any of those interfaces. A generic method should only match with a generic method of the same arity.


This has nothing to do with arity?


arity as in "number of generic arguments," I think they meant.


Still confused then, because there are two sets of generic arguments at play, and a proper call would match the arity of both.


Yeah, it would be that plus the name and the signature as per usual. I don't think this is a huge limitation considering the alternative they chose: disallow generic methods altogether.
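
To make the restriction being discussed concrete, here is a small sketch (mine, not from the article): Go lets methods use their receiver's type parameters, but rejects methods that declare type parameters of their own.

    package main

    // Methods may use the receiver's type parameters freely.
    type Set[T comparable] struct{ m map[T]struct{} }

    func (s *Set[T]) Add(v T) { s.m[v] = struct{}{} }

    // A method cannot introduce new type parameters, though; the compiler
    // rejects the following with "method must have no type parameters":
    //
    //     func (s *Set[T]) Map[U any](f func(T) U) []U { ... }

    func main() {
        s := Set[int]{m: map[int]struct{}{}}
        s.Add(1)
    }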


So how would you implement generic method calls via an interface reference?


The same way the other languages do it.


Isn't that way either JIT or boxed integers?


Go's GC has been precise since 1.4, released back in 2014.


Time flies


You forgot to list the most useful feature of adding generics: people on the internet can no longer say "lol no generics", drastically reducing the amount of garbage comments about Go.


Those comments have now been replaced with "adding generics to Go was a big mistake".


Maybe they will, but they haven't, since this is the first time I've seen that comment. The "lol no generics" was endemic.


It’s hard to be objective because of filter bubbles, but I’ve seen it a lot on Reddit lately.


It pops up on /r/golang sometimes. I don’t think it gets taken super seriously but there’s usually at least someone bringing it up.


They'll never go away; they just morphed into "Go was wrong and finally learnt the lesson that Java 5 did 19 years ago by adding in generics".

Go showed that useful software could be written without user-level generics. I don't think any other language today would dare to do that. In fact most languages seem to be converging into the same thing.


We already knew how to write useful software without user-level generics; we had been doing it for decades, ever since FORTRAN came to be in 1957. There was no need for Go to prove anything beyond the stubbornness of its designers.


Useful software can also be written in asm, and we have the entire early software industry to demonstrate that.

That's not the same as it being a good idea.


Go is about productivity. It allows writing, extending, and maintaining big codebases with less effort compared to assembly or some other programming language out there. This is because of its simple "what you read is what you get" syntax without implicit code execution. Generics break this feature :(

Of course, there are other brilliant features in the Go ecosystem which simplify writing and maintaining non-trivial codebases - tooling, standard library, fast compile times, statically linked binaries, etc.


Except that these people now pollute the Internet with freaky packages, which use generics in esoteric ways :)


But if adoption is “very low”, then it’s not much pollution, is it?


It's for people who think racism is wrong and want a reason to avoid being introspective about what they can do about living in and benefiting from a society built on racism.


Every change that fixes a security issue implies the existence of a change that introduced the security issue in the first place. Why is bumping a version more likely to remove security issues than to introduce them?

The reason why older is better than newer has more to do with the fact that the author has actually tested their software with that specific version, and so there's more of a chance that it actually works as they intended.


Security issues aren't introduced intentionally; oftentimes they are found much later in code that was assumed to be secure, like the OpenSSL Heartbleed vulnerability. Once a vulnerability like that is discovered, you _want_ every developer to update their deps to the most secure version.


My statement had nothing to do with intent. Conversely, once a vulnerability is introduced (intentionally or not), you don't want every developer to update their deps to the newly insecure version.


Exactly, so it's a trade-off: do you want to encourage updates at the risk of malicious updates (like with node-ipc), or do you want to add friction to updates and thus risk security vulnerabilities persisting for longer? Node chooses one approach, Go chooses the other.


Again, it's not just malicious updates. Normal updates can also introduce security vulnerabilities. For example, say I have a dependency at v1.0, and v1.0.1 unintentionally introduces a security bug that is eventually fixed in v1.1. If I wait to update until v1.1, then I am never vulnerable to that bug, whereas an automatic update to v1.0.1 would have made me vulnerable. My point is that in expectation, updating your dependency could be just as likely to remove a security vulnerability as it is to add one.
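
To make the Go side of that trade-off concrete, here is a sketch of how Go modules pin versions (module paths made up). Minimal version selection means the build uses exactly what go.mod records, so v1.0.1 never arrives implicitly:

    module example.com/app

    go 1.18

    // This build stays on v1.0.0 even after v1.0.1 and v1.1.0 are
    // published. Moving to a newer version is always an explicit step:
    //
    //     go get example.com/dep@v1.1.0
    require example.com/dep v1.0.0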


I'd back that down to "most security issues aren't introduced intentionally".


So, I thought this at one point, too. But it turns out that methods is a type alias to an unnamed type, so there are no package-level privacy issues: https://github.com/protocolbuffers/protobuf-go/blob/v1.26.0/...


Oh huh, interesting, I've never seen that done before.

I'm struggling to understand what the rationale _for_ doing it is though. Maybe it's to avoid an import cycle?


Yes, to avoid an import cycle or polluting the protoreflect API documentation with a rather large non-user-facing API surface.
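
For anyone who hasn't seen the trick, here is a minimal sketch of aliasing an unnamed struct type (names invented, not the actual protobuf-go ones):

    // Package iface exports Methods as an alias for an unnamed struct
    // type rather than as a defined type.
    package iface

    type Methods = struct {
        Marshal func(in []byte) ([]byte, error)
    }

Because the alias denotes an unnamed type, another package can spell out the identical struct type literally and get the exact same type without importing this package, which is what breaks the import cycle.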


For example, it doesn't use HTTP/2 (which has a ~100 page RFC and compiles to a ~2MB object file), does not have any load balancing or name resolution engine, and does not handle connection pooling or multiplexing.

Granted, one person's unnecessary is another person's necessary. But since DRPC has a modular design based on small interfaces, many features that are rarely used in gRPC can be excluded and left up to third parties to implement if they want.


What about `&m[x]` where m is some map? Does that heap allocate and create a copy, or is it a pointer to the actual storage slot? If the former, that's a hidden copy/allocation that didn't exist before, and if it's the latter, resizing the map invalidates the pointer, so it must be updated somehow.


`&` will "move" something to the heap if it isn't already on the heap.

The simpler way to think about it is that in Golang everything is on the heap. However, the optimizer will move things to the stack if they don't have their address taken. I think the point about explicitness is that if you don't use `&` then the value can be put on the stack. So `&` doesn't cause a heap allocation, but lack of `&` (or new()) confirms that there isn't one. (I don't actually know if that is true, but I can't think of any counterexamples.)


> So `&` doesn't cause a heap allocation but lack of `&` (or new()) confirms that there isn't one. (I don't actually know if that is true but I can't think of any counterexamples)

I think assigning to a pointer would cause an escape.

Just taking a reference wouldn't though, the reference still has to escape (of course you'd usually take a reference so that it can escape but that's not always the case, especially with inlining).
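
The compiler will tell you which of these happens if you ask; a minimal sketch to run with `go build -gcflags=-m`:

    package main

    type T struct{ v int }

    // The address is taken but never leaves the frame, so t can stay on
    // the stack; -m reports nothing alarming here.
    func local() int {
        t := T{v: 1}
        p := &t
        return p.v
    }

    // The pointer outlives the frame, so -m reports "moved to heap: t".
    func escapes() *T {
        t := T{v: 2}
        return &t
    }

    func main() {
        println(local(), escapes().v)
    }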


What do you mean by assigning to a pointer? You can only assign a pointer value to a pointer variable and you need to get that pointer from & IIUC.


> What do you mean by assigning to a pointer?

    *x = y


I don't think that does, because IIUC you are copying the bits of y into the memory x points at. So I guess semantically y has escaped, but you aren't doing a new heap allocation; you are reusing the memory of x.


I think I didn't communicate my point clearly. Consider this hypothetical program:

    x := make(map[int]int)
    x[0] = 5
    
    y := &x[0]
    *y = 10
    
    print(x[0]) // 5 or 10?
    
    x[0] = 6
    
    print(*y) // 6 or 10?
    
    // force the map to grow and reallocate the buckets
    for j := 1; j < 100; j++ {
        x[j] = j
    }
    *y = 11
    
    print(x[0]) // 5, 6, 10, or 11?
The crux of the problem is answering what y actually points at: the value in the map bucket, or some freshly allocated value? There are problems with whichever one you pick.

edit: changed the second print to *y instead of x[0]. thanks masklinn for catching this error.


> print(x[0]) // 5, 6 or 10?

Do you mean `print(*y)`? You just assigned to `x[0]` so its value should not be in question.

Also

> if it's the latter, resizing the map invalidates the pointer, so it must be updated somehow.

It doesn't (have to) invalidate the pointer though. When resized the map's content get copied to a new backing buffer, the pointer can keep pointing to the old buffer. That's basically the same behaviour as slices: when a slice resizes, a new backing array is allocated, the contents get copied to the new array, and the slice is retargeted to the new array. There can be other slices pointing to the old array (it's of course a very bad idea to update slices to shared arrays, but Go will let you do it).
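
That slice behaviour is easy to demonstrate; a small runnable sketch:

    package main

    import "fmt"

    func main() {
        x := make([]int, 1, 1) // len 1, cap 1: the next append must grow
        y := &x[0]             // pointer into the current backing array

        x = append(x, 2) // reallocates; x now uses a fresh array
        *y = 99          // writes to the old array, not the new one

        fmt.Println(x[0]) // prints 0: the write through y never shows up
    }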


> It doesn't (have to) invalidate the pointer though. When resized the map's content get copied to a new backing buffer, the pointer can keep pointing to the old buffer.

That's true, but I don't think it's very comparable to slices. With slices, you have to explicitly reallocate either by creating a whole new slice or using append. Reslicing, indexing, or other operations do not reallocate. On the other hand, maps may end up resizing on any operation that involves them, or even theoretically in the background without any operations (during GC, for example). It would be unfortunate to lose that implementation flexibility, and keeping it means that you're essentially picking the "make a copy" option.


> That's true, but I don't think it's very comparable to slices

It's exactly the same.

> With slices, you have to explicitly reallocate either by creating a whole new slice or using append.

That's a distinction without a difference. `append` does not "explicitly reallocate", it may or may not reallocate, you've no idea. Even if the backing array is full, it might be realloc'd in-place.

> On the other hand, maps may end up resizing on any operation that involves them, or even theoretically in the background without any operations (during GC, for example).

So?

Also technically nothing prevents a GC from reallocating the slice.

> It would be unfortunate to lose that implementation flexibility, and keeping it means that you're essentially picking the "make a copy" option.

I've never heard of a hashmap implementation which would do otherwise.

Trying to extend in-place and attempting to properly redistribute if that works sounds like absolute hell. Likewise trying to shrink in-place, though at least you've got some scratch space which you don't have in the other case: you'd have to segregate everything into one half of the map then insert them in the other half, before shrinking your allocation, which might give you a new allocation anyway, at which point you've moved all your values thrice whereas just creating a new allocation and reinserting your stuff there is a single move.


> That's a distinction without a difference. `append` does not "explicitly reallocate", it may or may not reallocate, you've no idea. Even if the backing array is full, it might be realloc'd in-place.

Maybe to you, but to me, a pointer going from modifying the value inside of the map to no longer modifying the value inside of the map during any operation is quite a bit different than requiring a reassignment of the slice header. In other words:

    x := make([]int, 5)
    y := &x[0]
    x[3] = 8
    *y = 5
    print(x[0]) // always prints 5
as compared to

    x := make(map[int]int)
    y := &x[0] // btw, is this even valid? let's assume it implicitly does x[0] = 0
    x[3] = 8
    *y = 5
    print(x[0]) // maybe sometimes prints 5?
is meaningfully different. For slices, we know that x[0] will always print 5 until the value of x is reassigned in some way.

> Also technically nothing prevents a GC from reallocating the slice.

It would have the same problem the map does: you'd have to update any pointers into the slice to point to the new slice, otherwise the semantics of the program changes. That is not something the GC currently does, and would require an awful lot of metadata and scanning.

> I've never heard of a hashmap implementation which would do otherwise.

I'm not sure what this is referring to. I agree every map implementation has to reallocate the backing store of values periodically. I was trying to say that keeping the flexibility to reallocate the backing store of the map during GC means that you cannot choose the "writes through pointer are observed in the map" option (at least without a lot of complication around updating pointers) because as a programmer, you would not be able to know if it would do that or not, which is a fairly useless primitive.


> Maybe to you

Yes, I avoid making assumptions about invariants across mutation calls, that's just a bad idea.

> For slices, we know that x[0] will always print 5 until the value of x is reassigned in some way.

Unless an other goroutine is stomping on your backing array anyway.

> It would have the same problem the map does: you'd have to update any pointers into the slice to point to the new slice, otherwise the semantics of the program changes. That is not something the GC currently does, and would require an awful lot of metadata and scanning.

Yes. So maybe we could ignore that useless strawman?

> I'm not sure what this is referring to.

To what I'm quoting.

> I was trying to say that keeping the flexibility to reallocate the backing store of the map during GC

That sounds less like flexibility and more like "let's make the GC slower and more complex for no reason".


I apologize if the tone of my previous comment sounded harsh to you or if some of my arguments sounded like strawmen. I am in good faith trying to interpret your comments as best as I am able. I don't feel like you're giving me the same courtesy, so I'll exit the discussion now. Thanks.


Since it doesn't seem like this was answered in the other discussion, the answer is that Go does not allow taking the address of a map value. You get a compile-time error: "cannot take the address of m[x]".

https://play.golang.org/p/rX8A6ez9fVx
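
In code form, the whole experiment is just this (a sketch along the lines of the playground link):

    package main

    func main() {
        m := map[int]int{0: 1}
        _ = &m[0] // compile error: cannot take the address of m[0]
    }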


Indeed. This is in a thread where the original comment was "I think the best approach would be making & work in basically any scenario." I'm trying to demonstrate the complications of making it work on map accesses.


A simple map was discussed in the post. The problem with it is unbounded growth. The sync.Pool approach is only a best effort to avoid allocations. If its API were changed to return sentinel pointers rather than the string, it wouldn't have the guarantee that Intern(x) == Intern(x) when x == x.
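
For reference, a sketch of the simple-map interner under discussion (hypothetical names; not goroutine-safe, a real version would need a mutex). It guarantees Intern(x) == Intern(x), but the table only ever grows:

    package main

    import "fmt"

    // table holds one canonical copy of every string ever interned.
    // Entries are never evicted, which is the unbounded-growth problem
    // described above.
    var table = map[string]string{}

    func Intern(s string) string {
        if c, ok := table[s]; ok {
            return c
        }
        table[s] = s
        return s
    }

    func main() {
        a, b := Intern("zone1"), Intern("zone1")
        fmt.Println(a == b) // true: both calls yield the canonical copy
    }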


Thanks. I re-read this paragraph from the original article and it makes more sense now:

> use a zone mapping table, mapping between zone index integers and zone name strings. That’s what the Go standard library does internally. But then we’re left susceptible to an attack where an adversary forces us to parse a bunch of IP addresses with scopes and we forever a bloat a mapping table that we don’t have a good opportunity to ever shrink. The Go standard library doesn’t need to deal with this, as it only ever maps interfaces that exist on the machine and doesn’t expose the integers to users in representations; its net.IPAddr.Zone field is a string.


Whoops, I just fixed that typo too. (not yet deployed)

