I can't comment on the social phenomenon here, but there is indeed a decent technical argument for avoiding for loops when possible.
In a nutshell, it's kind of like the "principle of least privilege" applied to loops. Maps are weaker than folds, which are weaker than for loops, meaning that the stronger ones can implement the weaker ones but not vice versa. So it makes sense to choose the weakest version.
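To make that ordering concrete, here's a rough sketch in JavaScript (the names are just mine for illustration):

    // A map can be written as a fold (reduce)...
    const mapViaReduce = (xs, f) =>
      xs.reduce((acc, x) => { acc.push(f(x)); return acc; }, []);

    // ...and a fold can be written as a for loop:
    const reduceViaLoop = (xs, f, init) => {
      let acc = init;
      for (const x of xs) acc = f(acc, x);
      return acc;
    };

    // The reverse directions don't work: map alone can't thread an
    // accumulator between elements, and reduce alone can't break early
    // or mutate arbitrary outside state the way a for loop can.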
More specifically, maps can be trivially parallelized; same for folds, but to a lesser degree, if the reducing operation is associative; and for-loops are hard.
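For example, here's a minimal sketch of why associativity matters; the chunked version below runs sequentially, but each partial reduction only touches its own chunk, so in principle the chunks could be handed to separate workers:

    const chunkedSum = (xs, chunkSize) => {
      const chunks = [];
      for (let i = 0; i < xs.length; i += chunkSize) {
        chunks.push(xs.slice(i, i + chunkSize));
      }
      // Each partial sum depends only on its own chunk...
      const partials = chunks.map(chunk => chunk.reduce((a, b) => a + b, 0));
      // ...and because + is associative, combining the partials gives the
      // same answer as one left-to-right fold over the whole array.
      return partials.reduce((a, b) => a + b, 0);
    };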
In a way, the APL/J/K family takes this idea and explores it in fine detail. IMHO, for loops are "boring and readable" but only in isolation; when you look at the system as a whole lots of for loops make reasoning about the global behaviour of your code a lot harder for the simple reason that for-loops are too "strong", giving them unwieldy algebraic properties.
While these are all valid and well-thought-out arguments, in this particular example a whole class of problems and bugs was introduced specifically by avoiding simple loops.
Not to mention the performance implications. Parallelisation, composability and system thinking are sometimes overkill and lead to overengineering.
> So it makes sense to choose the weakest version.
Only if it's actually more readable. The principle of least privilege does not give you any benefit when talking about loop implementations.
> More specifically, maps can be trivially parallelized;
This argument is repeated time and time again, but I've never actually seen it work. Maps that can be trivially parallelized aren't worth parallelizing most of the time. In the rare case where it's both trivial and worthwhile, it's because the map function (and therefore the loop body) is side-effect free, and in those rare cases you don't mind the small extra effort of extracting the loop body into a function.
> when you look at the system as a whole lots of for loops make reasoning about the global behaviour of your code a lot harder for the simple reason that for-loops are too "strong"
Code is too strong in general. Reasoning about the global behavior of code is difficult if the code itself is complex. Nested maps and reduces will be equally difficult to comprehend. The fact that a map() function tells you that you're converting elements of lists does not save you from understanding what that conversion is doing and why.
Sometimes loops will be better for readability, sometimes map/reduce will be. Saying that for loops always make it harder to reason about the code doesn't make much sense in my opinion.
I agree with the "no silver bullet" thing - and as I wrote in another reply, I don't even know if I agree with the example in the article.
> The fact that a map() function tells you that you're converting elements of lists does not save you from understanding what that conversion is doing and why.
It can, actually. Say a query comes in; it calls a function that fetches records from the database. It's not a basic query: it has joins, perhaps a subquery, etc.
Then you have another function that transforms the results into whatever presentational format, decorates them, whatever, and it's also more than a couple of basic lines of logic.
And now a bug report comes in that not all expected results are being shown.
If you have
func does_query -> loop transforms
You have three possibilities: the problem is in the storage layer, in the query, or in the loop.
You read the query; because the bug is subtle, it seems OK, so now you move to the loop. It's a bit complex, but it seems correct too. Now you start debugging what's happening.
If you have
func does_query -> func maps_results
You know it's either the underlying storage or the query. Since the storage being broken is less plausible, you know it must be the query. In the end it's a sync problem with something else and everything is right, but now you only spent time reproducing the query and making sure it works as expected.
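Roughly what I mean, sketched in JavaScript (the function names and the db client are made up for illustration):

    // Version 1: query plus an inline transformation loop. A "missing
    // results" bug could live in either part, so both have to be read.
    async function getReport(db, userId) {
      const rows = await db.query('SELECT ... (joins, maybe a subquery)', [userId]);
      const result = [];
      for (const row of rows) {
        // Any condition or `continue` hidden in here can also drop rows.
        if (row.total > 0) {
          result.push({ id: row.id, label: row.name });
        }
      }
      return result;
    }

    // Version 2: the transformation is a plain map, so it can't drop
    // rows; missing results must come from the query or the storage.
    async function getReport2(db, userId) {
      const rows = await db.query('SELECT ... (joins, maybe a subquery)', [userId]);
      return rows.map(row => ({ id: row.id, label: row.name }));
    }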
Loops are easier to read. With functions like reduce you have to solve a puzzle every time to understand what the code is doing (this is also true for the functional style of programming in general).
> More specifically, maps can be trivially parallelized; same for folds, but to a lesser degree, if the reducing operation is associative; and for-loops are hard.
In typical JavaScript code, a reduce operation will not be parallelized. It can actually be slower than a loop because of the overhead of creating and calling a function on every iteration.
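For example, both versions below compute the same sum; as far as I know no engine parallelizes either one, and whether the per-element callback overhead is measurable depends on the engine and the JIT:

    const xs = Array.from({ length: 1_000_000 }, (_, i) => i);

    // reduce: one callback invocation per element
    const sum1 = xs.reduce((acc, x) => acc + x, 0);

    // plain for loop over mutable state
    let sum2 = 0;
    for (const x of xs) sum2 += x;

    console.log(sum1 === sum2); // true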
> when you look at the system as a whole lots of for loops
Code with lots of loops is still more readable than code with lots of nested reduces.
>Loops are easier to read. With functions like reduce you have to solve a puzzle every time to understand what the code is doing
I think that is a function of familiarity: if you use reduce a lot, it will be as easy to read as a loop - perhaps easier, because it's more compact. There is a downside to reading more lines for some people; at some point verbosity becomes its own source of illegibility (although any loop that can easily be turned into a reduce probably won't be excessively verbose anyway).
Of course, all of that is just at the personal level: by using and reading more code with reduce in it, you will stop finding reduce harder to understand than loops - but the next programmer, without lots of reduce experience, will be in the same boat you were.
I disagree quite strongly, in that this is simply a function of familiarity. Reduce is no more or less readable than for (especially the C style for — imagine trying to work out what the three not-really-arguments represent!)
Loops are harder to read. What does this one do: map, reduce, send emails to grandma?
In JavaScript, the reduce callback is created once and called repeatedly. For loops are pretty much always the fastest possible way because they use mutable state. They are also a really good way of creating unreadable spaghetti that does things you don't want it to.
I'm not sure what you mean by nested reduces. Chained reduce functions are easy to follow.
The point was that with map and reduce it's clear what's being done and what's being returned, especially in a typed language. Ideally you're also in an environment that doesn't allow side effects, in which case grandma gets no emails from map or reduce.
In the case of chaining, it's not visible what is returned or what the input is, because the return and input parameters are not directly visible and you have to read all the previous calls to figure that out.
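For example, in a chain like this (made-up data), nothing at the call site spells out what flows between the steps; you have to read each callback to see that the element shape changes and that the final result is just a number:

    const orders = [
      { id: 1, total: 10 },
      { id: 2, total: 0 },
      { id: 3, total: 5 },
    ];

    const grandTotal = orders
      .filter(o => o.total > 0)                        // still order objects
      .map(o => ({ id: o.id, taxed: o.total * 1.2 }))  // now a narrower shape
      .reduce((sum, o) => sum + o.taxed, 0);           // now just a number

    console.log(grandTotal); // 18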