
What’s even worse is crypto meaning cryptocurrency instead of cryptography now. Now I can’t talk about “crypto” without sounding like I’m going to fill up someone’s spam folder.


I saw metrics like this a while ago but it’s still hard to wrap my head around. We rarely see any of this enormous population of animals, even though they outnumber us so heavily.

I grew up on a farm in the rural western US, and our few cattle lived in nice little fields like you’d imagine, but I’d occasionally pass intensive farms, and you can see how they pack so many animals into such a small space, where those animals spend their whole lives. It’s so near and yet so far from all of us. And the reactions from people I know vary widely to this information or these sights.


If I were going to do this, I’d probably go all the way to using ReScript, but it’s a nice idea.

I’m quite surprised it’s not called ToffeeScript though.


Maybe because ToffeeScript is prior art? https://github.com/jiangmiao/toffeescript

Wasn't super popular, but wasn't unknown either.


+1 for toffeescript.

'Civet' certainly implies your code is being processed, but I'm not sure the connotation is desirable.

And all IMHO of course, but significant-whitespace is the worst idea ever.


Wait, what's wrong with making sure indentation and code structure are always the same? Lying indentation structure is always wrong.


It's easy to auto-format according to character syntax, not so easy when the format is the syntax.

Simple copy/paste is enough to break significant-whitespace, let alone space-vs-tabs, etc, etc.

But again, all IMHO, I realise it has its fans.


> I'm not sure the connotation is desirable.

You'd think not, and yet ...


Civets, they're nature's transpilers.


> significant-whitespace is the worst idea ever

Hear hear! I’ve never heard a single argument for significant invisible characters that makes sense, ever.

Who would want to have a program that fails because you used invisible character X instead of invisible character Y?


A distinction between invisible character X and invisible character Y is a terrible idea. But indentation is very much visible; it's generally more significant to the reader than braces are, so it should be that significant to the computer too.


+1 for FP family tree languages that compile to JS.

This might be a good stepping stone though to sway skeptics.


The compilation speed of ReScript is also great


C on its own does feel like a simpler language, though I think it’s true that the semantics, taken literally, undermine that (the generated assembly can be really surprising sometimes). I trust an optimized Rust program to have a specific runtime behavior much more. It might be interesting to come up with a variant of C that has genuinely simpler semantics as well (though I’m not sure how useful I’d find it).


By that logic we should remove bytes.Clone, strings.Split, bytes.Equal, strings.TrimLeftFunc, etc. from the standard library.


The bytes.Clone() uses a non-trivial trick under the hood - `append([]byte{}, b...)`. Compare it to the trivial loop behind maps.Clone().

The strings.Split() implementation is non-trivial because of performance optimizations.

The bytes.Equal() is actually written in highly tuned and optimized assembly in order to achieve high performance for inputs of various lengths.

Now compare this to the trivial implementations behind the generics-based functions for maps. And do not forget that these implementations may hurt performance because of excess memory allocations in the Keys() and Values() functions, or because the compiler may fail to inline the callback passed to DeleteFunc().
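To make the contrast concrete, here is a sketch of both shapes side by side (not the actual stdlib source; the function names are mine):

```go
package main

// cloneBytes shows the bytes.Clone-style trick: appending to an empty
// slice literal forces a fresh allocation sized for b, in one line.
func cloneBytes(b []byte) []byte {
	if b == nil {
		return nil
	}
	return append([]byte{}, b...)
}

// cloneMap shows the maps.Clone-style version: essentially the loop
// anyone would write by hand, just made generic.
func cloneMap[K comparable, V any](m map[K]V) map[K]V {
	if m == nil {
		return nil
	}
	r := make(map[K]V, len(m))
	for k, v := range m {
		r[k] = v
	}
	return r
}
```

The byte version hides a real idiom; the map version is exactly the loop it replaces.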


Those are good points about some of them being nontrivial.

Though part of why they’re optimized and in the stdlib in the first place is because they’re such common patterns. So without them people would end up writing trivial, unperformant, custom versions. So now that more routines can be moved into the stdlib, they can benefit from optimization later.

(I’m not sure how much the maps routines specifically can be optimized, but stdlib routines can generally be more aggressive with unsafe or asm or being coupled to the runtime and its quirks, like bytes.Clone, strings.Builder, etc.)

And there are still plenty of ubiquitous patterns that have been worth including in the stdlib even if they’re usually just simple loops that aren’t very optimizable. Like strings.Index is an easy loop to write, but it comes up so often. Or strings.Cut is basically just an if-statement. But it makes code clearer about its intentions; and optimizations to these down the road benefit everyone.

It’s also true that maps.Keys and maps.Values allocate slices, and that you could avoid this with a loop, but strings.Split, bytes.Split, regexp.FindAll, os.ReadDir return slices and are still worthwhile as opposed to specialized iterators for each one. As with any code, you’re conscious of memory allocations where it counts, and optimize as needed.

In fact, now that generics make it possible, the Go team has discussed using iterators (https://github.com/golang/go/discussions/54245), which would benefit strings.Split even further in addition to all the other slice-returning functions.

So generally you have four options for those slice-returning functions:

- Custom inline loop for some of them. More verbose, will probably be naive and not benefit from stdlib optimizations.

- Return a slice and iterate over it with a for loop. Creates allocations that could probably be avoided.

- Create a customized iterator for that type. Unfortunately, you can’t really use an ordinary for loop, and you need an extra custom iterator for each type.

- Use generic iterators to benefit from the optimized functions and also avoid allocation overhead.

So part of the motivation is that now with generics there’s a variety of further optimizations available even to old functions like strings.Split and regexp.FindAll, in addition to opening up common patterns and optimizations for maps/slices/etc. to be included in the stdlib.
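A rough sketch of the callback-iterator direction from that discussion, applied to splitting (splitSeq is hypothetical, not an accepted API; sep is assumed non-empty):

```go
package main

import "strings"

// splitSeq is a hypothetical allocation-free variant of strings.Split
// in the push-iterator ("yield callback") style discussed in
// golang/go#54245. No []string is ever built; pieces are handed to
// the caller one at a time, and the caller can stop early by
// returning false. Assumes sep is non-empty.
func splitSeq(s, sep string) func(yield func(string) bool) {
	return func(yield func(string) bool) {
		for {
			i := strings.Index(s, sep)
			if i < 0 {
				yield(s) // final piece
				return
			}
			if !yield(s[:i]) {
				return // consumer asked to stop early
			}
			s = s[i+len(sep):]
		}
	}
}
```

Usage looks like `splitSeq("a,b,c", ",")(func(part string) bool { use(part); return true })` - clunkier than a for loop today, which is exactly why the discussion is about language support for ranging over such functions.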


Agreed with most arguments.

A few remarks:

> Like strings.Index is an easy loop to write, but it comes up so often

Actually, strings.Index() is a very non-trivial function, partially written in assembly in order to achieve high performance [1]. This function is used in Go projects *much more frequently* than the functions from the golang.org/x/exp/maps package.

> strings.Cut is basically just an if-statement

No, strings.Cut() has non-trivial code compared to a trivial loop for map copy or map delete [2].

> It’s also true that maps.Keys and maps.Values allocate slices, and that you could avoid this with a loop, but strings.Split, bytes.Split, regexp.FindAll, os.ReadDir return slices and are still worthwhile as opposed to specialized iterators for each one.

The *key* difference between maps.{Keys,Values} and the mentioned functions from the standard library is that it is trivial to write `for k, v := range m` instead of maps.{Keys,Values} and avoid the memory allocations, while it isn't trivial to write allocation-free code that substitutes for strings.Split() or the other mentioned standard library functions.
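Concretely (valuesSlice mimics what golang.org/x/exp/maps.Values does; both function names here are mine):

```go
package main

// valuesSlice mimics golang.org/x/exp/maps.Values: it must allocate
// a fresh []V on every call before the caller can even start looping.
func valuesSlice[K comparable, V any](m map[K]V) []V {
	r := make([]V, 0, len(m))
	for _, v := range m {
		r = append(r, v)
	}
	return r
}

// sumDirect is the trivial substitute: ranging the map directly
// produces the same result with zero extra allocations.
func sumDirect(m map[string]int) (total int) {
	for _, v := range m {
		total += v
	}
	return total
}
```

The direct range is both shorter and cheaper, which is the whole argument against reaching for maps.Values by default.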

[1] https://github.com/golang/go/blob/86c4b0a6ec70b07ab49d3813a5...

[2] https://github.com/golang/go/blob/86c4b0a6ec70b07ab49d3813a5...


Yeah, and I don’t tend to keep this around quantitatively, but I’ve certainly run into bugs in Go programs that would’ve been categorically prevented with generics.

Of course I want sync.Map to use generics instead of interface{}. How could I not? And it’s less complex-looking than type-asserting everywhere.
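A sketch of the kind of wrapper people write today to get that (SyncMap is my name, not a stdlib type; the underlying sync.Map still stores any):

```go
package main

import "sync"

// SyncMap is a hypothetical type-safe wrapper over sync.Map.
// The type assertion happens exactly once, inside Load, instead
// of at every call site - and Store can no longer accept a value
// of the wrong type at all.
type SyncMap[K comparable, V any] struct {
	m sync.Map
}

func (s *SyncMap[K, V]) Store(k K, v V) { s.m.Store(k, v) }

func (s *SyncMap[K, V]) Load(k K) (V, bool) {
	v, ok := s.m.Load(k)
	if !ok {
		var zero V
		return zero, false
	}
	return v.(V), true
}
```

Callers get `v, ok := m.Load("key")` with v already typed, and mixing up value types becomes a compile error rather than a runtime panic.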


The sync.Map is a good example of something that could benefit from switching to generics. The problem is that this is the only useful case for generics. This case could have been implemented in a way similar to the built-in map[k]v generic type, which has been available in Go since the first public release. And this could have prevented opening the Pandora's box of generics.


It’s not the only one. Some other packages that used interface{} or other workarounds for the lack of generics are container/{heap,list,ring}, sort, golang.org/x/sync/singleflight, /x/exp/{maps,slices}, etc. And people will want to write their own generic patterns at times too. It wouldn’t be reasonable for all of these to become builtin types like map. The standard library packages that already exist will also become more efficient and potentially reduce allocations (when used with primitives).


The container/list and container/ring are among the least useful packages in the standard Go library, since they aren't used widely in Go programs. It is better from a performance and readability PoV to use ordinary slices instead of these packages in most cases.

The container/heap is more useful, but it could benefit more from an optimization that inlines interface method calls when the Go compiler knows the underlying implementation behind the interface.

The golang.org/x/exp/maps is useless and may be harmful [1].

The golang.org/x/exp/slices is mostly useless, except for the Sort*() functions. But it would be better to apply the optimization mentioned above to the standard sort.* functions instead of forcing users to switch to different Sort*() implementations in other packages.

[1] https://news.ycombinator.com/item?id=34622393


> The container/heap is more useful, but it could benefit more from adding an optimization for inlining interface method calls when Go compiler knows the underlying implementation behind the interface.

This is exactly what generics do. With e.g. a heap.Heap[uint32] the compiler knows the implementation and there’s no interface method call overhead.

In order for the compiler to do this optimization, it has to know that you don’t e.g. pass a *heap.Heap[uint32] to a function expecting *heap.Heap[uint64], so the type system is what allows it to optimize.

And on top of that, now the user also gets assurance at compile time that heap.Heap[uint32].Pop returns a uint32, preventing bugs from type confusion and also so you don’t have to add type assertions everywhere you use the heap.

So now heap, sort, etc. can benefit from this improved performance; users don’t have to write wrapper types and interface implementations just so their type can be sorted; and bugs are prevented at compile time.
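For anyone who hasn’t written one, a minimal sketch of what such a hypothetical heap.Heap[T] could look like (nothing here is stdlib API; the ordered constraint is a local stand-in for constraints.Ordered):

```go
package main

// ordered is a local stand-in for constraints.Ordered so the sketch
// is self-contained.
type ordered interface {
	~int | ~int32 | ~int64 | ~uint32 | ~float64 | ~string
}

// Heap is a hypothetical generic min-heap. Pop returns T directly:
// no interface calls for the compiler to see through, and no type
// assertions for the caller to write.
type Heap[T ordered] struct{ a []T }

func (h *Heap[T]) Len() int { return len(h.a) }

func (h *Heap[T]) Push(v T) {
	h.a = append(h.a, v)
	for i := len(h.a) - 1; i > 0; { // sift up
		p := (i - 1) / 2
		if h.a[p] <= h.a[i] {
			break
		}
		h.a[p], h.a[i] = h.a[i], h.a[p]
		i = p
	}
}

func (h *Heap[T]) Pop() T {
	top := h.a[0]
	n := len(h.a) - 1
	h.a[0] = h.a[n]
	h.a = h.a[:n]
	for i := 0; ; { // sift down
		l, r, small := 2*i+1, 2*i+2, i
		if l < n && h.a[l] < h.a[small] {
			small = l
		}
		if r < n && h.a[r] < h.a[small] {
			small = r
		}
		if small == i {
			break
		}
		h.a[i], h.a[small] = h.a[small], h.a[i]
		i = small
	}
	return top
}
```

Compare this to container/heap, where the element type is interface-valued, Pop returns any, and every comparison goes through a dynamic Less call.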

For [1] I posted a reply. It’s true that there are overheads with some slice-returning routines, but I explained in the reply how I viewed the tradeoffs.


In theory, the compiler can inline interface method calls without the need to introduce generics. For example, it could detect that a customStruct is passed to sort.Sort() in the code below, and then instantiate the sort.Sort() code with the given Less(), Swap() and Len() interface calls inlined:

    type customStruct struct { ... }
    func (cs *customStruct) Less(i, j int) bool { ... }
    func (cs *customStruct) Swap(i, j int) { ... }
    func (cs *customStruct) Len() int { ... }

    func sortMyCustomStruct(cs *customStruct) {
      sort.Sort(cs)
    }

The tricky part here is that the compiler should be careful when instantiating such calls for different interface implementations, in order to avoid generated code bloat. For example, if sort.Sort() is used with a thousand different sort.Interface implementations, then it may not be a great decision to create a thousand distinct sort.Sort() instances, one per implementation. But this should work OK for a dozen distinct implementations.


> since they aren't used widely in Go programs

They aren't widely used because the ergonomics suck, because they aren't generic yet.


It’s hard to be objective because of filter bubbles, but I’ve seen it a lot on Reddit last.


*lately

It pops up on /r/golang sometimes. I don’t think it gets taken super seriously but there’s usually at least someone bringing it up.


I’m gonna be that guy, but do you have sources for any of this? That link shows that compiler performance is the same as before generics, for instance.

Are there more bugs in the compiler? Is readability reduced, and having an effect on pace? Especially if adoption is so low to begin with? Is adoption actually so low, or just rising?


That link admits that compiler performance was worse in Go 1.18 and Go 1.19 because of generics, even when compiling Go code that doesn't use generics. I can confirm this based on my own open source projects written in Go [1].

[1] https://github.com/valyala/


I understand, but it shows that compiler performance is the same as before generics now, so any performance hit is gone.


But the time spent on designing, implementing and then optimizing generics is lost. This time could have been spent on more valuable things for the Go ecosystem.


Generics consistently showed up as one of the most desired features (if not the most desired) by working Go developers in the previous developer surveys, so I think it makes sense that the Go team felt the ecosystem saw much value in it relative to other features and worth the time.


Unfortunately, the Go team was misguided by a vocal minority who used the "missing generics" argument as an excuse for why they didn't switch to Go from their favorite programming languages. The majority of working Go developers were happy with Go, so they didn't take part in the debates and surveys about generics.

The irony is that the vocal minority still doesn't use Go, since they have other excuses now - "bad error handling", "missing functional features", etc. But if these harmful features are implemented in Go, the vocal minority will find another reason not to use Go.


But if adoption is “very low”, then it’s not much pollution, is it?

