Disclaimer: as a Clojure developer and functional programming evangelist, I do not want to be too critical of this article. It's a really great post about some incredibly valuable and advanced Clojure features.
That being said, it really bothers me when functional programming evangelists write out such horrible Python code examples. It makes Python look like it's not a functional language. It perpetuates the idea that Python won't let you write elegant and stateless programs that behave in a functional manner.
If the author had not mentioned Python and had instead used it as pseudocode to represent the entirely non-functional way to write something, I would have been okay with it. But calling out Python specifically is just incorrect!
I have rewritten the code in a very pythonic manner that illustrates the functional capabilities of the language:
even = lambda x: x % 2 == 0
# alternatively, depending on your religious beliefs:
def even(x): return x % 2 == 0
def process(seq):
    return sum(x + 1 for x in seq if even(x))
Note the (parens) for the comprehension instead of [brackets], which creates a generator. The above code is lazy (in Python 3)!
And if we fire up a Python REPL and play around:
In [2]: process(range(10))
Out[2]: 25
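The laziness is easy to verify by feeding the same comprehension an infinite source; only the demanded elements are ever computed (a small sketch using itertools, with `even` as defined above):

```python
from itertools import count, islice

def even(x):
    return x % 2 == 0

# count() is infinite; the generator expression pulls from it lazily,
# so islice can take just the first five matches without looping forever.
lazy = (x + 1 for x in count() if even(x))
print(sum(islice(lazy, 5)))  # 1 + 3 + 5 + 7 + 9 = 25
```

If the comprehension were eager, this would never terminate.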
I really love Clojure and use it daily, but I find the Python version to be far more legible. It reads more like English.
Anyway, the moral of the story here is that you can do immutable/functional programming with regular ol' Python.
> perpetuates the idea that Python won't let you write elegant and stateless programs that behave in a functional manner
Well, in my opinion Python really does not let you write elegant and stateless programs that behave in an FP manner.
The sample you've chosen in particular is all fine, except that `lambda` expressions are almost never used, due to Python not being expression oriented, so there isn't much you can express with a `lambda`.
You might find your `for x in seq if even(x)` expression elegant, but it is preceded by a `return`. Python is a statement / side effects oriented language and even in your one liner, it shows.
Oh, and given that a Python "generator" is like Java's Iterable / Iterator, that's not FP either, although you can argue that the mutation can be localized (e.g. in your `sum` example), but you have to be careful about it.
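That one-shot, stateful character of generators is easy to demonstrate: iterating consumes them, which is exactly the localized mutation mentioned above.

```python
# A generator is a stateful, single-pass iterator: each pull advances it.
g = (x + 1 for x in range(10) if x % 2 == 0)
print(sum(g))  # 25 on the first pass (1 + 3 + 5 + 7 + 9)
print(sum(g))  # 0: the generator is now exhausted, so sum sees nothing
```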
And if you take a look at the whole Python ecosystem, there's not much FP in it, to be honest. Seriously, out of all the mainstream languages, I can't think of a worse language to do FP in than Python, except maybe C.
Thanks for the comment. I totally agree that Python can express this with its functional primitives as well. I was simply using it to show an "imperative style", and Python is my favorite executable pseudo-code.
[P.S: I'm a Python core developer and long-time user; calling Python out is really, really not my thing!]
I get what you are saying, but I think the author was not trying to do that.
The example was meant to show the imperative approach, which is not necessarily the best approach in Python either; the author said as much in the paragraph before the example. Yes, Python was used, and it is used often for imperative examples, because it is dynamically typed and understood by a lot of programmers. The reason the example is not given in Clojure is that it is pretty hard to write imperative examples in Clojure.
Also, coming from Ruby, the way the "process" function was posted is unfortunately the way many Ruby programmers would do it despite access to filter, map and blocks.
> "you can do immutable/functional programming with regular ol' Python"
One cannot easily and safely do functional programming in Python due to its imperative semantics. For example, closed-over variables are captured as mutable references, and in Python 2 even a list comprehension leaks its loop variable into the enclosing scope.
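The capture-by-reference behavior shows up in the classic closures-in-a-loop pitfall (a sketch; names are illustrative):

```python
# Each lambda closes over the variable i itself, not its value at creation,
# so all three see the final value of i.
callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])  # [2, 2, 2], not [0, 1, 2]

# The usual workaround: force early binding with a default argument.
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])  # [0, 1, 2]
```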
If you want to do serious functional programming, use Clojure, Haskell and/or OCaml.
Edit: to the downvoters, it's about using the right tool for the job.
"It makes Python look like it's not a functional language. It perpetuates the idea that Python won't let you write elegant and stateless programs that behave in a functional manner."
I suggest Go for this use case. It's easy enough to read for this sort of thing even if you don't know the language, easy to link to running examples on the playground, and nobody will accuse you of showing Go off in a worse light than is called for because the Go equivalent of the imperative example shown really is the Go way to do it. Go is just about the most aggressively non-FP language out there today, making even Python look friendly to FP programming, despite the fact it has closures.
Please use def over lambda when feasible (which is most of the time); using lambda needlessly hurts debuggability and saves a total of three characters.
I thank you for this example. If the operator definitions are not trivial and you want to keep them well maintained, then your example is very good.
However, what I meant was that for fast, quick coding, if your 'operator' function is really simple, then a lambda fits perfectly.
I know the Zen of Python says "There should be one—and preferably only one—obvious way to do it", but I don't align with that principle. I think there should be more than one way to do something, and one should choose the one that fits best.
It is not about character length. The first is an expression while the second is a statement, and I find it easier to debug code built from expressions rather than statements. Could you clarify why the first way would be worse for debuggability?
The only meaningful difference between the two is that the lambda is named <lambda> and the function is named even.
I'm happy for you to use lambdas in an expression context, if that's how you like to roll, but assignment is a statement anyway, so it doesn't matter there and you might as well choose the one that produces helpful tracebacks.
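The difference shows up in `__name__`, which is what tracebacks and profilers report:

```python
# PEP 8 discourages binding a lambda to a name for exactly this reason.
even_lambda = lambda x: x % 2 == 0

def even(x):
    return x % 2 == 0

print(even_lambda.__name__)  # '<lambda>' -- what a traceback will show
print(even.__name__)         # 'even'
```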
I would not. That's incidentally depending on non-zero numbers to become True, and zero to become False. I would prefer to think of the operation as "is x modded by 2 equal to zero?" rather than "does x modded by 2 become true when negated?"
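For reference, the two spellings under discussion (the `not` form presumably being the one proposed upthread):

```python
def even_explicit(x):
    return x % 2 == 0   # asks the arithmetic question directly

def even_truthy(x):
    return not x % 2    # relies on 0 being falsy in Python

# They agree on every integer, but the first doesn't lean on
# the language's truthiness conversion rules.
print(all(even_explicit(x) == even_truthy(x) for x in range(-5, 6)))  # True
```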
I do find it interesting, but it doesn't change what's easier for me to reason about.
I suspect that it will be easier for core Python developers than for general Python programmers, because they are more intimately familiar with the conversion rules. I would be more comfortable with such a construct in C or C++, because I am more confident in those conversion rules. Even though Python's are quite similar, I had to check myself before making my comment: I knew they were similar, but I was not sure exactly what they were. The situation in the Stack Overflow comment is also not quite the same: asking a question about the relationship of numbers (is x modded by 2 equal to zero) is a special case of using integers in a boolean context. Saying it's Pythonic to use an integer in a boolean context does not necessarily mean it's best to always do so.
It's incidental to the properties of numbers themselves; truthiness is a part of the language, not a part of numbers. The function is asking a question about a property of numbers, and I find it clearer when the computation that determines that property depends on number properties, not language properties.
That is a good point, and it makes sense to think of it that way. However I think that it can also be a bit "dangerous" to think too much in terms of the properties of the numbers when working with software due to the fact that for example arbitrary floats cannot be precisely represented in a limited amount of bits like we have in computers. I already hold the view that the numbers we are dealing with when working on computers are an incomplete representation of the ideals of ℝ and ℂ and their likes, so to me then it is ok to reason about code in terms of properties that stem from the language in use.
I use parentheses liberally so I don't have to think about precedence so much both when writing and when later reading.
That said I don't add unneeded parentheses for simple expressions or sub-expressions consisting only of exponents, multiplications, divisions, additions and/or subtractions.
Named lambdas are discouraged as un-Pythonic for what it's worth.
The other point in the article was that your process has now hard-coded even and sum, and composing them in python is a bit unwieldy.
I've run into a number of cases where it's just easier in Python to write out the loops than to string together comprehensions or map/filter/reduce calls, because there's no threading macro or Haskell-style function application operator.
You can compose it all together in a relatively straightforward manner using functions in the stdlib. I agree that it is nicer with a threading macro, but is this so bad?
def even(x):
    return x % 2 == 0

def inc(x):
    return x + 1

def process(seq):
    return sum(map(inc, filter(even, seq)))
You can also easily write pipeline() (which is just a reduction over callables inside a lambda[0]) such that:
process = pipeline(partial(filter, even),
                   partial(map, inc),
                   sum)
process(some_iterable)
I'll grant that it's not syntactically ideal (this is the tradeoff of the thrush combinator (pipeline) running as a function instead of working as a macro). The benefit of being able to follow the program top-down as it works remains.
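A minimal `pipeline()` along those lines can be sketched as a fold over callables (my own sketch, not necessarily the linked version; `even` and `inc` as defined earlier):

```python
from functools import partial, reduce

def pipeline(*fns):
    """Compose left to right: pipeline(f, g, h)(x) == h(g(f(x)))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

def even(x):
    return x % 2 == 0

def inc(x):
    return x + 1

# Reads top-down, like a threading macro: filter, then map, then sum.
process = pipeline(partial(filter, even),
                   partial(map, inc),
                   sum)
print(process(range(10)))  # 25
```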
That's true, but a bit of an unfortunate example since the point of the article is to demonstrate transducers, and instead this demonstrates a bunch of chained collection functions.
Python is multi-paradigm, I think everyone would agree, but I think they removed the ability to do this with function composition like map, reduce and filter in v3 no? Specifically in favor of list comprehensions and generators?
Anyways, if you do it with list comprehensions in Clojure it would look like this:
For full-on functional style, as far as I am concerned, process() should be recursive (without all the overhead that implies in an imperative language). I'm being sarcastic because I don't like the list comprehension syntax.
I have probably read most transducer explanatory blogs that ever existed and watched many, many talks on transducers, etc. I have read books about them (as in book chapters about them). I use the built-in transducers almost every day and I consider them an essential tool. I have written the occasional transducer myself for certain purposes.
This is the best transducer explanation and breakdown I’ve seen!
Very well structured and IMO easy to follow for anyone who understands Clojure code already. Great explanation and progression that builds towards the full transducers picture! (edit: typos)
You need to check out the ClojureTutorials channel on YouTube, where Timothy Baldridge goes very deep into transducers. The videos are not free, though; I think there is a small subscription.
I believe this is the type of writing that the Clojure community (both existing members and newcomers) can benefit a ton from! You should do more of it! :-)
Something I found interesting when playing with the example code is that, while folding in parallel certainly speeds up the execution for realized sequences, it may not make sense to realize a sequence that you're only going to use for one operation:
Far from a ~3x speedup (in these contrived examples), realizing the sequence in-line yields approximately the same performance as if it was operated on lazily.
Something to keep in mind if you're trying to optimize your Clojure. This is still the best resource I've read on reducers/transducers :)
The performance disparity is even more dramatic on my machine. What's the reason for that - is it b/c of concurrent contention during vec/realization? Or ???
For posterity, I just answered my own question. The performance overhead was just the linear cost of vec realization, as you pointed out. At first it seemed like I was seeing additional overhead.
At the top of the article, we see `(reduce + (map inc (filter even? s)))` -- here, `reduce` takes two arguments: a function and a collection.
Later, `reduce` takes a function, a 'base case' [], and a collection -- three arguments, more in line with a signature for `fold` than what I'm used to seeing for `reduce`.
Is this a clojure specific overload thing? I'm not at all an expert in FP or anything so it might be fairly standard.
It's fairly common in functional programming languages for functions to be differentiated both by name and arity (reduce/2 being different from reduce/3.)
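Python's `functools.reduce` happens to mirror the same two shapes, with the initial value as an optional third argument:

```python
from functools import reduce

# Two-argument form: the first element of the iterable seeds the accumulator.
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4]))       # 10

# Three-argument form: an explicit initial value, like a fold's base case.
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 100))  # 110
```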