
Where I feel Go is lacking is data wrangling.

Group by, filter, map, join: all of these are error-prone, inconvenient, and slow to implement with for loops.



Go does support functional programming constructs (it has first-class functions), and there are some FP libraries out there, but they are discouraged because the execution is so much slower. Go is not optimized for FP; it chooses "clumsy" for loops over clever functional programming because the loops have mechanical sympathy and are simply faster in execution speed.
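
For example, here is a minimal, self-contained sketch (the names are mine, not from any particular library) of what "first-class functions" means in practice: functions are ordinary values that can be stored and passed around.

    // A minimal sketch of first-class functions in Go: functions are
    // ordinary values that can be stored in variables and passed around.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // A function stored in a variable...
        shout := func(s string) string { return strings.ToUpper(s) + "!" }

        // ...and passed to another function like any other value.
        apply := func(f func(string) string, s string) string { return f(s) }

        fmt.Println(apply(shout, "hello")) // HELLO!
    }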

That said, if you have a use case with a lot of data wrangling like that, Go may not be the best choice and a functional programming language may be a better fit.


Perhaps I’m thick but what kind of programming doesn’t involve data wrangling?

What are Go programmers doing that they don’t feel the need for map/filter/etc?

BTW, there’s no reason why map and filter would be slower than loops; efficiently lowering such functions to loops was solved a very long time ago.


> What are Go programmers doing that they don’t feel the need for map/filter/etc?

As a refugee from a Scala project that went badly (we eventually ported the entire thing to Go), it's not so bad when you're just using map and filter and friends.

But eventually there are so many of those little methods, each with their own nuances, that I don't want to have to remember them all (`sliding` comes to mind), and it's just exhausting. I don't want to deal with it anymore. The for loop is freeing. I've written the map/filter/reduce/groupBy functions a couple of times, but I never end up using them. I don't miss them anymore.

I guess those methods were originally sold to me as less powerful than a for loop: you had guarantees about what they were doing. But eventually there were enough of them that something flipped. The for loop feels easier. I can see everything that it's doing. It's all right there.
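
For illustration, a sketch (mine, not the code from that project) of that style: one explicit loop that filters, maps, and groups in a single visible pass.

    // One loop that filters, maps, and groups in a single pass;
    // everything it does is visible at the call site.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        words := []string{"Go", "scala", "Rust", "C", "haskell"}

        byFirst := make(map[string][]string)
        for _, w := range words {
            if len(w) < 2 {
                continue // filter: skip one-letter entries
            }
            lw := strings.ToLower(w) // map: normalize case
            key := lw[:1]
            byFirst[key] = append(byFirst[key], lw) // group by first letter
        }
        fmt.Println(byFirst) // map[g:[go] h:[haskell] r:[rust] s:[scala]]
    }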

Same thing with monads after a certain point. Result/Option are sorta fine, but I'd rather just deal with remembering to close a file than use a Resource. I don't want to have to think about Semigroups and Applicative Functors. I just want to call the function and do the thing. Eventually FP felt like my experiences with bad OO projects, where I spent 80% of my time trying to figure out the platonic ideal of something and where it fit to make everything elegant. And then tracing my way through things was significantly worse when things went wrong (and they did still go wrong). I decided it wasn't worth it.

And, yeah. Sometimes I find a gronky loop somewhere that's doing too much. I just re-write it while I'm passing through so it doesn't get out of control, and I move on.


Thank you for sharing your experience and I am sure it is right for you. And I agree that Scala can be too much.

I just wanted to point out that there is a fundamental reason that for loops are inferior to map/reduce when working with data.

For loops are "how", whereas map/reduce is "what", and that puts the burden on you when you need to parallelize your job.

Joel Spolsky described it here (long ago):

https://www.joelonsoftware.com/2006/08/01/can-your-programmi...
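
To make the "what vs. how" point concrete, here is a rough sketch (mine, not from the article): because map only says what to compute per element, the caller doesn't change when the implementation fans the work out across goroutines. ParallelMap is a hypothetical helper, not a standard-library function.

    // A sketch of why "what" parallelizes more easily than "how": callers of
    // ParallelMap don't change when it switches from a sequential loop to
    // goroutines. (Unbounded fan-out is for illustration only.)
    package main

    import (
        "fmt"
        "sync"
    )

    func ParallelMap[T, U any](in []T, f func(T) U) []U {
        out := make([]U, len(in))
        var wg sync.WaitGroup
        for i, v := range in {
            wg.Add(1)
            go func(i int, v T) {
                defer wg.Done()
                out[i] = f(v) // each element is independent, so order doesn't matter
            }(i, v)
        }
        wg.Wait()
        return out
    }

    func main() {
        squares := ParallelMap([]int{1, 2, 3, 4}, func(n int) int { return n * n })
        fmt.Println(squares) // [1 4 9 16]
    }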


> I just wanted to point out that there is a fundamental reason that for loops are inferior to map/reduce when working with data.

> For loops are "how", whereas map/reduce is "what", and that puts the burden on you when you need to parallelize your job.

That doesn't make them inferior. That's one advantage that map/reduce has in the set of trade-offs between them. I'm aware of that argument; it's part of what I was trying to get at with the "less powerful" comment (the other side of the coin of being less powerful/more constrained is that it's easier to parallelize).

But ~95% of code never makes it to that step. And I've found that defaulting to map/reduce leads to worse code bases than defaulting to the simple, intuitive thing and accepting the burden for the other 5% of the code.

Because the burden of that 5% of the code is never the biggest part of scaling. It's usually something like getting the business to understand the implications of CAP theorem and figuring out what trade-offs are best for them. The code is pretty easy, by comparison. Even if I have to chuck out 5% of the files and re-write them over a period of time.

I guess my recommendation is basically to do things that don't scale in your code, because 95% of code won't need to, and the other 5% has a good chance of being re-written anyway during the process of scaling.


Those methods are not magic pixie dust. Chain a few of those and you've looped through the entire array three times, instead of once with a single for loop.
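
Roughly what that looks like (the Filter/Map helpers here are illustrative, not a specific library): the chained version makes a pass and an allocation per stage, while the loop does everything at once.

    // Eager chained helpers make one pass (and one allocation) per stage;
    // the single for loop below does the same work in one pass.
    package main

    import "fmt"

    func Filter[T any](in []T, keep func(T) bool) []T {
        out := make([]T, 0, len(in))
        for _, v := range in {
            if keep(v) {
                out = append(out, v)
            }
        }
        return out
    }

    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    func main() {
        nums := []int{1, 2, 3, 4, 5, 6}

        // Three stages: three passes, two intermediate slices.
        evens := Filter(nums, func(n int) bool { return n%2 == 0 })
        squared := Map(evens, func(n int) int { return n * n })
        chained := Map(squared, func(n int) int { return n + 1 })

        // One pass, no intermediates.
        looped := make([]int, 0, len(nums))
        for _, n := range nums {
            if n%2 == 0 {
                looped = append(looped, n*n+1)
            }
        }

        fmt.Println(chained, looped) // [5 17 37] [5 17 37]
    }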


> Go is not optimized for FP; it chooses "clumsy" for loops over clever functional programming because the loops have mechanical sympathy and are simply faster in execution speed.

So are iterators in Rust, which allow you to write idiomatic iterator expressions. Hell, even LINQ in C# has improved dramatically and now makes sense in general-purpose code where you would have erred on the side of caution previously. You can pry ‘var arr = nums.Select(int.Parse).ToArray();’ from my cold dead hands.
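
For comparison, a sketch of roughly the same thing as plain Go today (no helper, explicit error handling):

    // Roughly the Go equivalent of nums.Select(int.Parse).ToArray():
    // an explicit loop with strconv.Atoi and an explicit error check.
    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        nums := []string{"1", "2", "3"}

        arr := make([]int, 0, len(nums))
        for _, s := range nums {
            n, err := strconv.Atoi(s)
            if err != nil {
                panic(err) // int.Parse in the C# version throws on bad input too
            }
            arr = append(arr, n)
        }
        fmt.Println(arr) // [1 2 3]
    }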

At the end of the day, it is about having a capable compiler that can deal with the kind of complexity that compiling iterator expressions optimally brings.


Mechanical sympathy now favors SIMD instructions and hyperthreads, because sequential loops are slower even when unrolled.


With generics you can now write all of those in Go.
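
For instance, a rough sketch of the kind of helpers that Go 1.18 type parameters make possible; the names and signatures here are illustrative, not a specific library's API.

    // Illustrative generic helpers made possible by Go 1.18 type parameters.
    package main

    import "fmt"

    // GroupBy buckets elements by the key that keyOf returns.
    func GroupBy[T any, K comparable](in []T, keyOf func(T) K) map[K][]T {
        out := make(map[K][]T)
        for _, v := range in {
            k := keyOf(v)
            out[k] = append(out[k], v)
        }
        return out
    }

    // Reduce folds the slice into a single value, left to right.
    func Reduce[T, A any](in []T, acc A, f func(A, T) A) A {
        for _, v := range in {
            acc = f(acc, v)
        }
        return acc
    }

    func main() {
        nums := []int{1, 2, 3, 4, 5, 6}
        byMod3 := GroupBy(nums, func(n int) int { return n % 3 })
        sum := Reduce(nums, 0, func(a, n int) int { return a + n })
        fmt.Println(byMod3, sum) // map[0:[3 6] 1:[1 4] 2:[2 5]] 21
    }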


And it would be an order of magnitude slower than a plain loop, because the compiler is not smart enough to optimize it.
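
That claim is easy to check for a given workload; a rough benchmark sketch (file layout and names are mine) using Go's built-in tooling:

    // A rough way to measure the helper-vs-loop gap for a given workload;
    // put this in a _test.go file and run `go test -bench .`.
    // Results depend heavily on the Go version, the closure, and the hardware.
    package wrangle

    import "testing"

    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    var data = make([]int, 10_000)

    func BenchmarkMapHelper(b *testing.B) {
        for i := 0; i < b.N; i++ {
            _ = Map(data, func(n int) int { return n * 2 })
        }
    }

    func BenchmarkPlainLoop(b *testing.B) {
        for i := 0; i < b.N; i++ {
            out := make([]int, 0, len(data))
            for _, n := range data {
                out = append(out, n*2)
            }
            _ = out
        }
    }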



