In most imperative languages, writing .Map().Map().Filter().Map() is another full copy for each call anyhow. As a sibling notes, it is possible to "fix" that, but the fix is not even remotely free. It is quite complicated to do it generically. (Although there are other benefits; the most natural approach to that largely fixes my grumblings about refactoring I mentioned in another post.)
Plus, in a for loop approach, it is not true that the caller may need another loop with a copy. They may just loop over the result and skip over the things they don't need. They only need a copy if they are going to pass that on to something else.
A drum I can not stop banging on is that you can not just take a neat technique out of one language and slam it into another without examining the end result to make sure that you haven't murdered the cost/benefit tradeoff. You can slam together all the maps and filters and reduces you want in Haskell, and applicatives and monads and all the fun, due to a combination of laziness and various safe optimizations like loop fusion. In an eager context that lacks loop fusion, going .Map().Map().Map().Map() has radically different performance implications. For instance, "take 10 $ map f list" in Haskell will only call "f" 10 times. .Map().Take(10) in most eager implementations will create the full mapped array, however large it is, and only then take the first ten elements.
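To make the difference concrete, here's a sketch in Python (the counter and the function `f` are illustrative, not anything's real API): Python 3's map() happens to be lazy, so pulling ten elements through islice runs the payload exactly ten times, while the eager list-comprehension version runs it once per element before slicing.

```python
from itertools import islice

calls = 0

def f(x):
    # Count how many times the payload function actually runs.
    global calls
    calls += 1
    return x * 2

data = range(1_000_000)

# Eager: the list comprehension materializes the whole mapped list
# before we slice the first ten elements off it -- f runs a million times.
calls = 0
eager = [f(x) for x in data][:10]
eager_calls = calls

# Lazy: map() returns an iterator, and islice pulls only as many
# elements as it needs -- f runs just 10 times.
calls = 0
lazy = list(islice(map(f, data), 10))
lazy_calls = calls

print(eager_calls, lazy_calls)  # 1000000 10
```

Same result either way; radically different amounts of work, exactly as with the Haskell "take 10" example.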
In imperative languages, contrary to frequent claims from the functional programming crowd, for loops are actually often better in practice. The solution to their pathologies is to be aware of them and not do them. But it is far, far easier to get good performance out of a for loop in an imperative language than to contort one's code into a pale parody of functional programming.
That's why I specified the code. If you're writing .Map().Map().Map().Map(), you are usually getting a lot of intermediate arrays.
If you have a .Collect() call, you're in the deforestation branch. This has its own issues with stressing the inliner (it's easy to turn simple, direct for loops over large collections into traversals that add several indirect method calls per item on top of the payload), but that's still generally better than stressing the RAM.
Rust's map doesn't operate on arrays at all, from what I can see, but operates on iterators directly. This is good and generally more correct. However, there are a lot of languages that don't support that. Rust is also generally going to be more reliable about compiling it all away than a lot of other languages, where it is really easy to spill over what the inliner can handle. Those long compile times in Rust do have their benefits.
There are also languages that sort of split the difference: in Python, for example, it is not that difficult to use itertools and generators to correctly write something that will not generate a lot of intermediate lists, but it is also easy to write a series of calls and list comprehensions producing otherwise equivalent code that creates a lot of intermediate lists.
I expect that as we continue to build new languages, they're all going to look like Rust here. It's pretty obvious that conceiving of loops as iteration over some sequence is the way to go. However, that is a result we got to precisely through our experiences with a lot of languages that don't support it as well, or support it inconsistently, or, as is actually quite common, nominally support it while the ecosystem assumes concrete values far more often than it should, and all of those languages are still around.
Writing in this style correctly in imperative code is more difficult than a lot of the people jumping up and down about how we should rewrite all our loops as maps and filters tend to account for. It can be done, but it's often harder than it looks, in the writing, in the performance, or in both; and the harder it is, the more the costs stack up, and the worse the cost/benefit analysis becomes. And I still don't like how it refactors in most cases.
Nor in Python. Nor in C++20's std::views. The very point of iterators is to proceed one element at a time, avoiding intermediate copies of collections.
One case where the for-loop would be much more efficient is a simple transformation like incrementing each element of an array, or adding two arrays element-wise. A serious C compiler could unroll such a loop into several vector instructions which work on several elements at once. Maybe even LLVM can recognize something like that in Go code.
> In most imperative languages, writing .Map().Map().Filter().Map() is another full copy for each call anyhow
This is incorrect. In fact, the only such mainstream language that I can think of is JavaScript. Python, Java, C#, even C++ all have proper abstractions for lazy sequences, which would get used in such circumstances - and their whole point is that they are composable.
> Plus, in a for loop approach, it is not true that the caller may need another loop with a copy. They may just loop over the result and skip over the things they don't need. They only need a copy if they are going to pass that on to something else.
The point is that your function doesn't know what the caller needs. Yet by making it an eager loop with a copy, you are making that decision for the caller.
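A sketch of that point in Python (the function names are hypothetical): return a generator instead of a finished list, and the caller decides whether to stop early, keep filtering, or materialize a copy with list() -- the function no longer decides for them.

```python
def transformed_eager(items):
    # Eager: builds and returns a full copy, whether or not the
    # caller needs all of it.
    return [item * item for item in items]

def transformed_lazy(items):
    # Lazy: yields one element at a time; the caller chooses how
    # much of it to consume.
    for item in items:
        yield item * item

seen = []
for value in transformed_lazy(range(1_000_000)):
    if value > 100:   # caller stops as soon as it has what it needs
        break
    seen.append(value)

# Only 0..10 were ever squared; the eager version would have
# allocated a million-element list before the caller saw anything.
print(seen)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

# A caller that really does want a copy can still ask for one:
copy = list(transformed_lazy(range(5)))
```

The eager version bakes the copy into the API; the lazy one leaves the copy as an option.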