Basically: I started with my conclusion, and then went through all the other options and rationalized them away until it seemed like my choice was based on objective reasoning.
I think you drawing that conclusion may be a result of how I structured my essay (I introduced my conclusion, then my reasoning).
I freely admit my personal biases and experiences eliminated a lot of potentially good options without ever giving them due consideration (e.g., Haskell, Scheme, Lua). But I really did try to give the 3 options listed in the article a fair shake at the time.
For what it's worth - I was about 50-50 on using Python till the very end ...
I too thought the logic was weak. Here was my basic objection: you chose not to use two popular web programming languages because of their potential volatility, but then went with one that has been so infrequently used in web programming that it almost certainly has opaque deficiencies. It's not that your reasons were bogus, they just didn't carry the decision.
For a programmer without Lisp experience, that would be true. But I've written enough Lisp-based webapps that there aren't really that many 'unforeseen' problems for me anymore (rule of thumb: use software by Edi Weitz - everything he writes is golden :-D).
That being said, I still ran into trouble with Elephant, so I probably should have taken these sorts of 'unknown unknowns' (which are more prevalent in unusual languages like Lisp) into consideration upfront. To not do so was certainly an oversight.
If you are going to be the only programmer on the project then your reasons make sense. However, if you are going to have other people on the project - then your reasons do not make sense from a longer-term point of view. How many lisp programmers do you have out there as compared to python or ruby programmers? If you and the founders decide to go your separate ways how likely is the project going to be continued in lisp?
It seems like a somewhat precarious decision to let you use Lisp for this project. There's nothing supremely special about the problem you are attacking that warrants Lisp (outside of your comfort level which you clarified very clearly in your post).
Perhaps I'm just old and cynical but I've been on too many projects where the first (and therefore "lead") developer chose a particular language and architecture that in the long term was not sustainable.
There are, obviously, far more ruby/python programmers. But, at the very most, I might only ever need a dozen Lisp programmers and I already know 20 that I would love to work with. Lisp is even pretty easy to pick up if you're smart and know other dynamically typed, functional, and OO languages.
As a point of reference, ITA Software has no trouble hiring 100s of Lisp programmers.
Minor correction: I am a founder - and if I were to leave (while my co-founders continued), I'd make a point to find my replacement first.
"How many lisp programmers do you have out there as compared to python or ruby programmers?"
This one is a tired argument. Any good programmer will grok Lisp (or Python, or Ruby). If your Python/Ruby programmers can't get Lisp from a couple days of training, then they are not good programmers.
And, BTW, I would risk betting they are writing FORTRAN code in Python and Ruby.
Django/Python and Ruby/Rails present their own (sometimes very significant) opaque complexities. Being complex, they actually hide their deficiencies for a long time. Enough so that sometimes rolling your own thing is not a significant time sink in comparison.
This also alludes to the popularity of do-it-yourself microframeworks. If you're planning on using a microframework anyway, I'm not sure your argument really holds up. You can pretty much use whatever not-completely-marginal language you want and be just fine.
Honestly, the only valid argument that I can come up with is that you can't find enough programmers who know, or are excited to learn, that language to join your team.
I think it just seems that way, because it takes effort to explicitly pull all those arguments into our consciousness, to explain a decision that was taken subconsciously. The subconscious is often equated with the emotional or the irrational, but there is no reason to suppose the subconscious is incapable of rational decision making.
Looking at it from that perspective, you are not rationalizing an irrational decision when you explain your reasoning: you are simply remembering/recreating the process that led to the initial, perhaps even rational, decision.
To my knowledge, the famous experiments of Kahneman and Tversky have proven otherwise. When studying the effect of "anchoring", for example, they would actually explain to their subjects how their rational decisions are influenced by this irrational heuristic. Yet the subjects could not rid their decisions of that influence.
Which showed that there are probably irrational heuristics involved in every decision. However, it doesn't show that irrational heuristics are important, or even dominant, in every decision.
On the Wikipedia page for 'Anchoring', there is an example about thinking of a number and subsequently bidding (experiment by Ariely). What happens when you tell people with a high number that they are likely to make a relatively high bid, so they should try to make a relatively low bid? Do they make 'normal' bids or extremely low bids?
This shows that saying people can't rid their decisions of that influence is an overstatement. When you are consciously aware of the heuristics and the decision is taken over a period of time, you can downplay them.
Most studies that I'm aware of in behavioral psychology (granted, it's a young branch of science) confirm my point, though. When they tell people about the anchoring heuristic, people still continue to make similar mistakes.
One way to explain this would be to think about the number of variables we have to account for in virtually any everyday decision. Finding even a fairly optimal solution would consume too many resources, so evolution has taught us to deal with this using emotional shortcuts. On the evolutionary time scale, the 50 years during which the question of choosing a programming language has been relevant would be a tiny dot.
On the other hand, rational knowledge can be absorbed by the subconscious eventually. But when it is, we become unaware of it, by definition.
I don't read the article this way.
He candidly admits his bias towards functional languages, and he says that Ruby and Clojure could have been his choice if not for reasons related to the specific time-frame in which he took the decision.
After that, I was planning to dig into some specifics about the best way to manage/deploy a production Common Lisp webapp, which I hope would help a new Lisper get off the ground.
A single, non-nested list comprehension or generator expression is basically map(filter). You need nesting to get filter(map).
e.g.
map(expensive_call, filter(cond, seq))
equals
[expensive_call(each) for each in seq if cond(each)]
but
filter(cond, map(expensive_call, seq))
equals
[each for each in [expensive_call(x) for x in seq] if cond(each)]
note because of "expensive_call", it's inefficient (and silly) to do
[expensive_call(each) for each in seq if cond(expensive_call(each))]
So the map/filter combination gives more flexibility than list comprehension, and for functional-thinking minds, it's just so natural to think in abstract terms of passing functions around. List comprehension is pretty syntactic sugar to do similar things, but it forces you to think about the "how to do" instead of the "what to do".
That said, it's not really "compelling" though -- there is no real "compelling" reason to switch from one Turing-complete language to another given that you can do the same thing eventually. But hey, it's the itches that drive us nuts, isn't it? :)
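To make the cost point concrete, here's a toy call counter (my own example, not from the original thread):

    calls = [0]

    def expensive_call(x):
        calls[0] += 1          # count invocations
        return x * x

    def cond(y):
        return y > 4

    seq = range(5)

    # nested comprehension: one expensive_call per element
    good = [y for y in [expensive_call(x) for x in seq] if cond(y)]
    print calls[0]             # 5

    # inlining the call in both places does extra work
    calls[0] = 0
    bad = [expensive_call(x) for x in seq if cond(expensive_call(x))]
    print calls[0]             # 7 -- 5 for the condition, plus 2 more for the elements that pass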
I don't get how map/filter is more flexible than list comprehensions. Map/filter require nesting:
(filter cond (map expensive_call X))
So do list comprehensions:
[y for y in [expensive_call(x) for x in X] if cond(y)]
Near as I can tell, the only difference is that list comprehensions also provide a syntactic sugar for the convenience function filter_then_map. Sometimes this saves you a level of nesting, sometimes not.
Incidentally, this isn't even an issue in a pure functional language with sufficiently smart compiler.
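Spelled out, that sugar amounts to something like this (filter_then_map is a hypothetical name of mine):

    def filter_then_map(f, pred, xs):
        # one pass, no nesting -- exactly what [f(x) for x in xs if pred(x)] denotes
        return [f(x) for x in xs if pred(x)]

    xs = range(10)
    assert filter_then_map(lambda x: x * x, lambda x: x % 2 == 0, xs) == \
           [x * x for x in xs if x % 2 == 0] == \
           map(lambda x: x * x, filter(lambda x: x % 2 == 0, xs))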
Well, FP is mostly about nesting, but it's uniform and people get used to thinking that way. List comprehensions, on the other hand, are more like procedure code (conceptually), and it sucks to nest them. Nesting aside, it's the different level of abstraction that matters for FP-ers.
Plus, you get 5 mentions (3 y's and 2 x's) of some intermediate variables instead of 0 in your code, so both token-wise and char-wise, the map/filter alternative is shorter and less of a mental burden (think about the "succinct" idea by PG).
I have a longer comment here that argues the opposite: list-comprehensions are only a syntactic pun or two away from set-builder notation, which is a higher level-of-abstraction (it declaratively states what it is) than map+filter (which specify a procedure to generate it, albeit at higher level of abstraction than a for-loop).
If you can put together a nontrivial usage of map+filter with at least three source collections that's more concise than the equivalent list comprehension I'll (figuratively) eat my hat.
I'm not so sure if set notation is higher level, but for the example in your longer comment, map/filter/product is not that bad if you use it wisely. Here is my version:
map(lambda (w,s,l): {'widget': w, 'sprocket': s, 'location': l},
    filter(lambda (w,s,l): l.hasInStock(w) and l.hasInStock(s) and w.isUsableWith(s),
           product(widgets, sprockets, locations)))
Well, I agree it is not any more concise than its list comprehension equivalent (about the same, I guess?). Nothing is perfect; like you said, know your tools :)
You'll be thrilled (lol) to know that the lambda-tuple syntax isn't in python 3 (!); it does make my examples more concise.
The argument in favor of set-notation being higher level is it's less specific (it doesn't explicitly provide a sequence of operations, just an outcome).
List comprehensions look like set notation but have an implicit procedural translation you have to keep in mind to use them well, so it's a toss-up.
I prefer map/filter/reduce when sequencing has large performance implications but for simple filtering or raw-data-shaping comprehensions read more smoothly.
> You'll be thrilled (lol) to know that the lambda-tuple syntax isn't in python 3
Now you know why many people like me are gradually pissed off by Python and start the exile ... currently trying Scala and it seems a nice language (with list comprehension too! :)
> List comprehensions look like set notation but have an implicit procedural translation you have to keep in mind to use them well, so it's a toss-up.
Actually I think that's the problem I have with list comprehensions: I use them a lot in my code, usually 1~3 levels nested and then have a hard time tracking down the order of implicit loops (which is inner vs outer) and make sure the intermediate variables (x for x in y for y in z ...) do not clash ... OK maybe I'm using it too much and in the wrong way :(
> I prefer map/filter/reduce when sequencing has large performance implications but for simple filtering or raw-data-shaping comprehensions read more smoothly.
I didn't know map/filter is faster than list comprehensions? I thought both are optimized by Python interpreter. But I like the idea of knowing that at least map can be parallelized easily. But since Python does not utilize multicore in a decent way, all bets are off :(
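For what it's worth, both forms are eager in Python 2, so raw speed differences are minor; the sequencing point is easier to see with the lazy itertools variants (my own illustration):

    from itertools import imap, ifilter

    def expensive(x):
        print 'computing', x
        return x * x

    pipeline = ifilter(lambda y: y > 4, imap(expensive, xrange(100)))
    print next(pipeline)   # computes 0, 1, 2, 3, then yields 9; elements 4..99 are never touched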
Using do-notation is probably cheating, but what do you think of this? (in Haskell):
do { w <- widgets;
s <- sprockets;
l <- locations;
guard (l `hasInStock` w);
guard (l `hasInStock` s);
guard (w `isUsableWith` s);
return (w, s, l); }
We're trading horizontal space for vertical space. I think it's much clearer than either list comprehensions or plain map/filters. It's the best of both worlds.
I like the look of it. Am I correct that the order the statements are in translates into the order things are evaluated in when the code is called?
If eg I edited it to be:
do { l <- locations;
w <- widgets;
guard (l `hasInStock` w);
s <- sprockets;
guard (l `hasInStock` s);
guard (w `isUsableWith` s);
return (w,s,l); }
Does that force it to go through "location-first" and only check the w and s of (w,s,l) for compatibility if it has already ascertained that w and s are in stock @ l?
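(From playing with it, the answer appears to be yes: each `<-` becomes a nested loop, outermost first, and each guard prunes as soon as the names it mentions are bound. Roughly, in Python terms - a sketch of mine reusing the thread's hypothetical hasInStock/isUsableWith objects, so not runnable as-is:)

    def combos(widgets, sprockets, locations):
        for l in locations:                 # l <- locations
            for w in widgets:               # w <- widgets
                if not l.hasInStock(w):     # guard (l `hasInStock` w)
                    continue                # sprockets never consulted for this (l, w)
                for s in sprockets:         # s <- sprockets
                    if l.hasInStock(s) and w.isUsableWith(s):
                        yield (w, s, l)     # return (w, s, l)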
Most times you are interested in doing the simple, e.g.:
filtered = [x for x in seq if x>10]
Python's list comprehension is much more readable than using map/filter/reduce - at least for Python programmers :) Anyhow, I really like Guido's decision on dropping these - it creates a cleaner language and forces people to think Pythonic when programming in Python.
> Python's list comprehension is much more readable than using map/filter/reduce - at least for Python programmers
For simple cases, yes. But I wouldn't say
[each for each in [expensive_call(x) for x in seq] if cond(each)]
is more readable than
filter(cond, map(expensive_call, seq))
at least for functional-thinking minds. The level of thinking in abstract is different here.
Now the problem is, some people see Python as a very functional language (with first-class functions etc) and want to use it that way (like Lisp), but BDFL and some core Python devs believe it is better to keep it Pythonic, thus those functional people are kinda pissed off by this and switch away from Python.
Personally I don't think it will make Python a lot cleaner to remove two auxiliary functions and force people to use list comprehension when it is completely trivial to add these missing pair back (two lines of code).
(disclosure: I prefer FP, but I also think keeping things Pythonic is fine most of the time. It's just that in this case, I think map/filter is pretty "Pythonic" according to me. :)
At first, I thought "wow, Python generator expressions are really ugly nested". This is especially true after working with C#3/Linq because query expressions have natural places for line breaks and read in a more consistent order.
Later, I ran into some cases where I wanted a multi-line lambda. And my thought was "aaragh134!#?!"
Then, a weird thing happened. I started making a conscious effort to follow PEP 8. 79 column limit? Seriously? That sucks. But after weeks of struggling with it, something finally hit me. I realized I was writing better code by forcing myself to reduce code density. Sure, it was a little bit longer, but I spend way more time reading it than writing it.
Stop trying to fight it; assign a name to that lambda. Readability counts.
Stop trying to be clever; assign a name to that inner expression. Flat is better than nested.
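A before/after of "assign a name to that lambda", with invented identifiers just to make the point concrete:

    from collections import namedtuple

    Order = namedtuple('Order', 'price qty shipping')
    orders = [Order(10, 2, 5), Order(3, 1, 0)]

    # before: an inline lambda
    totals = map(lambda o: o.price * o.qty + o.shipping, orders)

    # after: the expression gets a name -- readability counts
    def order_total(o):
        return o.price * o.qty + o.shipping

    totals = map(order_total, orders)
    print totals               # [25, 3]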
I know you all know this, but I feel like it bears mentioning that nobody is forcing you to use list comprehensions whether map/filter stay in Python's built-ins or not. They can be defined in around 3-4 lines of code each. Lisp aficionados, already accustomed to the bottom-up style of programming, ought to have no problem writing functions like these as necessary.
If I remember correctly, "removing" these functions mostly just meant dumping them into the functools module rather than including them as built-in functions.
Yeah I know. Maybe "force" is not the right word ... probably "discourage"?
Actually only one line of code is enough for each of map/filter:
def map(fn, seq): return [fn(each) for each in seq]
def filter(cond, seq): return [each for each in seq if cond(each)]
But seriously, what do you really gain by removing these two? Isn't that too ideological? I don't really see how un-Pythonic it would be to use map/filter instead of list comprehensions. The problem is BDFL's attitude seems to drive many FP-ers away, like the guy in the original post.
sure is readable for me, and I'm fluent in Python and various Lisps. (That example is Clojure.)
I would like to point out, however, that CL allows you to write:
(loop for x in seq
when (> x 10)
collect x)
which you might think is verbose ("why do I need that 'collect'?")... except that loop allows you to write things like
(loop for i in *random*
counting (evenp i) into evens
counting (oddp i) into odds
summing i into total
maximizing i into max
minimizing i into min
finally (return (list min max total evens odds)))
Loop knocks Python's trivial list comprehensions into a cocked hat.
I switch between map/filter and loop depending on whether I'm working with predefined functions (e.g., (filter 'less-than-ten seq)), handling multiple sequences, doing side-effects, etc.
[It] creates a cleaner language and forces people to think Pythonic when programming in Python.
The question is whether "Pythonic," as the community defines it today, is optimal in all cases. As an analogue, there has been enough talk about things the Java community considers to be stylistically optimal that look really horrible compared to implementations in other languages -- like Python. :-) Pythonic style should be a guide, not an edict, and should be deviated from or redefined when it makes sense. The examples riobard provided already show the syntax weighing things down, a situation where syntax should give way to a more functional-style approach.
As for it creating a cleaner language, I try to approach this, as with all things, from the perspective of being a language-agnostic programmer. From that perspective, I do like the python syntax for simple things like what you defined, but under heavier-weight mapping and filtering operations, the map/filter function call syntax seems a lot cleaner. There's nothing wrong with syntactic sugar, but I would assert that it will tend to suck when it is all you have.
It depends slightly on how you came to functional programming.
You can arguably trace Python's list comprehension syntax all the way back to setl, a "set-theoretic programming language", and the resemblance to mathematical set-builder notation is intentional; compare
- let B = { g(a) | a \in A and f(a) holds }
- B = [g(a) for a in A if f(a)]
If you're used to thinking in sets, having to decompose into "maps" and "filters" is a speedbump; easy to do but nice to avoid.
Where list comprehensions really start to shine is making it comparatively trivial to pull from multiple source collections without a lot of ugly machinery:
[{'widget':w,'sprocket':s,'location':l} for w in widgets for s in sprockets for l in locations if l.hasInStock(w) and l.hasInStock(s) and w.isUsableWith(s)]
...which is about where explicit map + filter start to become annoying. You can use:
map(lambda i: {'widget':i[0], 'sprocket':i[1], 'location':i[2]}, filter(lambda i: i[2].hasInStock(i[0]) and i[2].hasInStock(i[1]) and i[0].isUsableWith(i[1]), itertools.product(widgets,sprockets,locations)))
...but to my eyes that is not only very ugly but just going by character count the # of characters given over to keywords (map, lambda, filter) instead of "what i'm doing here" is huge. Additionally use of itertools forces use of tuples for your intermediate values and thus the lambdas are gobbledegook until you get to the tail end of the statement and see that i[0] == widget, i[1] == sprocket, and i[2] == location. I could define some constants (WIDGET = 0, SPROCKET = 1, LOCATION = 2, etc) but now it's even longer.
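(An aside: collections.namedtuple can mitigate the tuple-index gobbledegook without those constants - a sketch of mine, again reusing the thread's hypothetical collections, so not runnable on its own:)

    import itertools
    from collections import namedtuple

    Combo = namedtuple('Combo', 'widget sprocket location')

    viable = itertools.ifilter(
        lambda c: c.location.hasInStock(c.widget)
                  and c.location.hasInStock(c.sprocket)
                  and c.widget.isUsableWith(c.sprocket),
        itertools.imap(Combo._make,
                       itertools.product(widgets, sprockets, locations)))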
It gets even worse if you try to be clever with the sequence of operations.
You might look at that definition and say lo! I can pre-filter out stuff not in stock at each location and make things more efficient. Naively you'd wind up with:
[{'widget':w,'sprocket':s,'location':l} for l in locations for w in [widget for widget in widgets if l.hasInStock(widget)] for s in [sprocket for sprocket in sprockets if l.hasInStock(sprocket)] if w.isUsableWith(s)]
Under some circumstances that might be substantially faster than the previous approach.
So you lose a little flexibility in simple cases in exchange for increasing the scope of what you can get away with as "readable" one-liners.
Python's lambda sucks, but that does not necessarily mean map/filter sucks with it too. Of course there are cases where list comprehensions are more convenient and more powerful (esp. if they are more like their "real" counterparts in Haskell and friends). But in cases when map/filter are more convenient, I like the option of using them.
I'm partially defending list comprehensions here and partially challenging you to consider the possibility that there are higher levels of abstraction out there than those employed by functional programming primitives like map and filter.
Level 0: for-loops with an explicit accumulator
Level 1: map + filter (!)
Level 2: ??? arguably an atemporal set-theoretic approach
In practice in python list comprehensions are a superior syntax for computing with multiple source collections.
(!) Really all you need is reduce
map = lambda f,l: reduce(lambda h,t: h + [f(t)], l, [])
filter = lambda f,l: reduce(lambda h,t: h + [t] if f(t) else h, l, [])
You'd be silly to implement them that way of course but know your tools.
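If you want to convince yourself they behave like the builtins, a quick check (renamed to avoid shadowing):

    my_map = lambda f, l: reduce(lambda h, t: h + [f(t)], l, [])
    my_filter = lambda f, l: reduce(lambda h, t: h + [t] if f(t) else h, l, [])

    assert my_map(lambda x: x + 1, [1, 2, 3]) == [2, 3, 4]
    assert my_filter(lambda x: x % 2, [1, 2, 3, 4]) == [1, 3]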
Nope - just a style thing. You don't need lambdas either - named functions are just as powerful.
But I often think 'functionally.' Since I have options, I'd rather go with a language that allows me to program how I think - instead of being forced to translate my thought into its semantics.
"Nope - just a style thing. You don't need lambdas either - named functions are just as powerful."
It depends on what you define as power, and how wide of a continuum you are willing to presume that it runs.
Named functions add two levels of clutter.
The first is to the actual code, because you have to add a name to something that never wanted one, and it has to be defined apart from where it is used. You could give it a fluff name, but that is worse than no name at all. The reader is left wondering what the function is for, which requires carrying unnecessary mental baggage. It would seem more powerful to me that you just define a function where it is used, and not worry about having to carry around that extra mental information for later on; the less state you need to keep the better. But I may only say that because I, like you, tend to think functionally.
The second is to the namespace, because now there is a symbol in the environment that doesn't need to be there. It's not a huge deal, but it is just another bit of extraneous fluff.
Anyway, not meaning to turn this into a language war. Just a couple of thoughts that popped into my head when I read your first paragraph.
Obviously, map and filter are better because ... ah, hmm, I don't know.
I think it is a matter of language design. Higher order functions are not as prominent in Python as say, Haskell, where map & friends are generally preferred for composability.
Lispers prefer map & friends just because list comprehensions add all that messy syntax.
If I were a Python programmer, I would probably use list comprehensions, since they seem to be the preferred idiom.
I like lambda because it allows anonymous functions-- functions that can be defined on the fly (dynamically) and then thrown away. IMO, it's generally bad form for a program to be dynamically creating named functions, filling the namespace.
Also, I like map and filter because they aren't just functions in the procedural sense but combinators: you can pass them around, using them as arguments and returning them. It's much harder to pass around a syntactic entity like a list comprehension (although I'm glad list comprehensions exist; as shorthand they are great in source code.)
All this said, I haven't used Python in 3 years, in favor of purely functional languages such as Clojure and ML, so I might be way out of date.
You bring up a good point about passing map and filter around. I hadn't really thought of that.
I still prefer to use list comprehensions, and don't think it would be so bad to have map, filter, reduce live in the itertools module. This would stylistically match what is done with the comparison operator functions living in their own module, which you can import if you need to pass the functions around.
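For the curious, that operator-module pattern looks like this (a small example of mine):

    import operator

    pairs = [(2, 'b'), (1, 'a')]
    print sorted(pairs, key=operator.itemgetter(0))   # [(1, 'a'), (2, 'b')]
    print reduce(operator.add, [1, 2, 3, 4])          # 10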
List comprehensions are great, but they're sort of a DSL-- something the syntax recognizes as special and converts to something else. It's unusual that you can pass around higher-order syntactic elements, while every modern language worth its salt allows you to pass around functions.
Common Lisp has something similar, called LOOP. Implemented as a macro, it's a within-Lisp DSL for expressing looping constructs, e.g.
(loop for i from 1 to 10 sum i)
=> 55
(loop for c across "bar" collect c)
=> (#\b #\a #\r)
It's controversial within the CL community, because the loop language looks much more like traditional languages than Lisp.
The automatic distaste in our industry for things that are old is a disease, a form of vanity. Old ideas are not inadequate because they are old. Most great ideas are old. Sometimes, of course, an old thing is a vestige of some ancient limitation that no longer applies. Those ones are good to clear away. But the prevailing thought process in the software industry, no less dominant for its astonishing primitivity, is to reject the old per se. The will to novelty is so extreme that it doesn't matter if the new thing is worse, only that it is newer. We want to program in new languages like we want to drive new cars (a bad analogy in Common Lisp's case, unless you assume that the older car is both faster and more fuel efficient). Lisp in general, and Common Lisp in particular, is up against this dynamic. You can't understand the reactions to it without accounting for that.
That CL contains TAGBODY is a brilliant thing. It amazes me that a language can be at once so high-level and so low-level. I haven't had occasion to use it yet, but the fact that it's there, and that the higher-level abstractions are built in terms of it, is a thing of beauty to me. (I don't know about PROGV.)
You make a great point, and I think Lisp has enough fundamental value to deserve its place as the "100-year language". In 2060, people will still be using some descendant of Lisp. On the other hand, I think some aspects of CL are outmoded. I don't like the lack of support for maps as a top-level structure, and equality in CL is seriously broken, IMO.
I'd prefer to code in CL over Java/Blub, but I prefer Clojure over CL, and probably Haskell or ML over Clojure. (Lisp has better syntax, and macros are very cool, but static typing wins for large projects, in my opinion.)
Would intern the symbol 'n-to-nth in the current namespace. So every time I see "def", I think of something that has that (mild, almost always innocuous) side effect.
The inner function just falls out of scope, and is presumably collected.
>>> def foo(mylist):
... def n_to_the_n(n):
... if n > 1:
... return n**n
... return 1
... return [n_to_the_n(i) for i in mylist]
...
>>> print foo([1,2,3,4,5])
[1, 4, 27, 256, 3125]
>>> foo
<function foo at 0x6e5f0>
>>> n_to_the_n
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'n_to_the_n' is not defined
>>>
Note that I only made the inner function longer than necessary for illustrative purposes :-)
It's funny that the decision to choose Common Lisp for some project often has to defend itself (because of so much FUD around the language). CL has one killer feature that will appeal to any mature developer: it's the only production-ready language around in which you are virtually not constrained by any third-party design decision. That's it. Discussing other details just boils down to the question of tastes (which differ).
Clojure is nice, although I didn't use it in production.
But it is Java deep underneath (so not completely homoiconic), and compared to CL there are restrictions, albeit mild ones.
For some specific tasks I might have preferred it (until CL catches up with concurrency support). But in general CL gives you more freedom.
Before learning CL I was a fairly decent, C, C++ and Perl programmer. Did assembly, Pascal, TCL and Awk. Up to that point, I always had to pause a for a minute when starting a new project/script, think about its scope, and choose a language based on the necessary performance, development speed, expressiveness, available libraries, etc. (and whether whoever was going to read the code afterward knew the language; C was often a natural choice for code shared with others on Unix, C++ for MFC/COM, Perl for sysadmin stuff, and TCL and Awk for my own tools.)
I learned Lisp in a little over a month, to spite someone (I dared a notorious troll that I would write an AI bot of his choice if he stopped spamming us - youthful bravado for sure, and I lost the bet). While researching "AI" I came across Winston and Horn's "Common Lisp", then the hyperspec, then a few more books over the course of a month. I sat down with SICP and did the exercises on my break, while I was in school and waiting tables.
After I learned it however, specially with CLOS, there was no contest. Three months after buying Sonya Keene's CLOS book it was fair to say I forgot all other programming languages. There were no more "projects"; I no longer had to sketch out designs on paper or do "requirement analysis" (something I was told in school was necessary for all software.) For once, the great ideas in my head were a new emacs buffer away. I could write code faster than I would in Perl, Awk or TCL, it ran as fast as C++, and it was more expressive than the English in my head. I could type "commands" into a shell get a dialog embedded in my window, a few more commands and it would move to the upper right corner, I could change its name property and add text to it, then I could fold that dialog box into a menu-item named "Help" in the menubar and call that dialogbox "About". Amazing.
I went on hacking like this for about a year when I realized I was doing the "wrong thing". You see, I had been using CMUCL with its builtin editor and writing GUI applications in Motif (it was 2001 and Motif wasn't open source yet, so I got the hang of Lesstif and learned its quirks.) Right around this time, Linux GUIs were maturing and people were being snobs about their Enlightenment themes and dissing each other over their choices of window manager. So I was peer-pressured into learning DHTML and Web Design. I read comp.lang.lisp and those too were snobbish condescending idiots who flamed everyone, especially competent programmers whose work I admired (including Scott McKay and Scott Fahlman, the very people who gave me my CMUCL).
It was really hard to be a Lisper for a while, especially a young impressionable one who read cll uncritically; news of corporate giants coming with new tools and programming languages to enslave humanity abounded. First C++, then Java, then XML, and finally .NET. You literally had to pick your battles and choose a corporate sponsor or you would have no future in computing! (you think I am kidding?) cll was all doom and gloom, and of course, there were the obligatory stabs at Lisp vendors by Open Source proponents, and stabs at Open Source by people alleging it was killing our beloved vendors. Every once in a while there was news of a Lisp dialect that was going to kill Common Lisp (Smalltalk, Dylan, and the ancient religions of Mesopotamia.)
Fuck, that was painful.
All the while I was following this 4-year long intellectual funeral, becoming ever more "hardcore" and learning mathematics, there was a small group of "Yobos" silently kicking ass and churning out great software. CMUCL got forked to SBCL, added unicode support and threads, not to mention easy building, SLIME was a new Emacs mode better than anything before and since, Cliki was launched, C-L.net, and the #lisp IRC channel was born and hit puberty overnight. Perfect ecosystem.
Today, Lisp is nothing like what it was 8, 7, 6, even 2 years ago. It's not just "good" in the well-explored textbook fashion; no, it's _good shit_. Get work done good. Think, hack, ship, bill for it good. 2-3 products per month good. You still have to know where things are, who is working on what, what's maintained and what's obsoleted by what. Sure. But there is absolutely no lack of libraries.
Come on, man. If you're doing something worthwhile, it likely takes 2-3 months (if you're lucky) to just understand the problem. Common Lisp is an amazing language and development environment, but it won't help you actually solve the problem any faster, only to implement the solution. No silver bullets and all. I love Common Lisp, but the whole "code this in a weekend" thing has got to stop. Good software takes a damn long time to develop in any language, because the language isn't the bottleneck.
Right, however don't forget there is a continuum of software types. The great majority of the code I write is customization of previous software for a new client. If you write enough business software, you will find out there are about 5-10 main components that they all have, and each new project differs only in how they're assembled, and the per-client business rules used to orchestrate them.
For example, it took me months to implement a decent RBAC system in CLOS; prototyping it literally took hours, just reading the wikipedia page. However, I had to refine that after every client, not just customizing it for the client, but iterating over it to make future customization unnecessary.
Billing, accounting, collaboration, knowledge management, human resources, lead management, CRM, inventory, etc. Do web-based implementations of those for a year and you will end up with the core necessary to ship any business software product in a very short time, assembling from your own repo. Augment that with web APIs (which I happily pay for) and you're there.
Another thing that I have become expert at is using other people's software. I am a hopeless cliki addict, mainly because I rarely code to scratch an itch; I have been writing Lisp code for years now, and for the last year, that's what I did full time :-) I have come to know huge libraries so intimately, including their quirks, that I can patch together something fairly quickly and get it out of the door.
That first 80% always takes only 20% of the time, and that's the first month for you. Some languages seem to let you do that month of work in three weeks, or two weeks, or with a framework it might take a week. But that last 20% is the part that always takes 80% of the time, and it often seems like whatever language you use, there are always the same, small, insidious errors and tweaks that need to be done.
Sure, there's some hyperbole in his post… but don't underestimate the ability of good technology and a REPL to aid you in actually understanding the problem, or the available technology, or how you might solve it.
As a recent example, I find Clojure's REPL to be invaluable when wrapping my head around Java libraries. The features of a Lisp make sketching and exploring easier, and that's part of understanding the problem.
I forget the exact quote, but it's something like "I never learned to see properly until I tried to draw". Pertinent.
I worked 8 months on a small-business management package in MFC/C++ using the "RAD" Visual Studio 6. That was the last piece of major code that I have written in something other than Lisp. Immediately after that, I wrote a similar package in CL with a web-based interface, using the free AllegroServe; this one took me about a month, though it was functional after the first week.
I know what you're thinking; the previous project gave you the hindsight necessary to implement the second so quickly. Well, not really. The stuff I did in Lisp were things that had been on my requirements list for the first project but that I was never able to implement without huge investments of time. MFC is the most brittle, most fickle and fidgety POS of all time; it generates the boilerplate for you and it expects you to use its skeletons just the way they are. It took me months to experiment with the various data access APIs, ADO, DAO, and ODBC, moving from one to the next as problems arose, and each change of backend requiring a complete code overhaul.
Now I write database code in native idiomatic code; I can change the backend with a feature. I would sometimes forget to #+postgres on deployed code and find everything working but not see any change in the postgres logs, just to realize I have :sqlite3 pushed into my features ;-)
Heh, funny you learned from the Winston & Horn book - that was my first too.
The biggest philosophical problem I have with Lisp (and a lot of dynamically typed languages) is the lack of compile-time safety nets (it really bites me when I'm trying to work with a large, distributed team). I miss interfaces/abstract classes, type checking, etc. I know they aren't necessary - but I'm human and make more than my share of stupid mistakes (and I prefer my mistakes to be caught at compile time). Unit testing etc. alleviates the pain to a degree - but it requires a lot of discipline to maintain good coverage.
The real trick is that I'm probably faster in Lisp than anything else, but I've met Python and Haskell hackers who are just as fast as I am - so it must be possible to use those languages well (you might argue that they'd be even better in Lisp ;-)).
What was it about CLOS that made you feel it was a killer language feature? Many seasoned Lisp programmers do not even consider it, or OO in general, particularly useful.
In the last two years? Clozure Common Lisp at least: an open source multithreaded Lisp implementation with unicode on Win32. Before then it was all Unix. Lisp platform independence is so good, I hack on win32 all day and when I am ready to deploy on Linux, the only warning I get is from git telling me it's converting line endings to Unix style.
The staggering number of new and maturing infrastructure libraries: bordeaux-threads, usocket, CFFI, ASDF, the Closure family of XML tools, etc. Lisp-world interop has become trivial. It's not their existence that's new, but their universal adoption by the community. A few years ago you couldn't just download a random Lisp library and expect it to work; you had to read the sources and figure things out. Now dependency libraries get downloaded behind the scenes.
I have to second this. I switched from CL to Python years ago because of problems with the infrastructure, but now I'm in the process of switching back, largely due to Clozure becoming really ready for prime time, and tremendous improvements in the stability and usability of available libraries. The situation is still not perfect, but it has vastly improved and is getting better all the time.
As a beginner to lisp, I have always been bothered by the lack of standardization - especially in terms of libraries.
For example, a couple of weeks back, there was an article about a Python-based tool on HN (I forget which). There was a lot of opinion, but it was generally about BeautifulSoup vs lxml.
Coming to CL, I don't even know where to begin for XML parsing (http://www.cliki.net/XML). Which is why it seems like black magic when people make amazing software (quantz, postabon).
I just get too intimidated about where to begin... it is similar to the JavaScript library fights that keep erupting on comp.lang.js: you ask about one and are made to feel like you should have chosen another.
As a beginner just grab the XML lib which is easiest to install and looks comfortable to use. If it doesn't work out try another one.
I really don't get all the complaints in this thread about the lack of a central library repository. How hard is it to use Google? Libraries aren't suddenly bad because they're not from Lib Grand Central.
In other languages, Perl specifically, I've tried libraries from CPAN that were total shit and then I googled around and grabbed a better one from someone's homepage.
Perhaps it's possible to get lucky and find good libraries using nothing but Google and patience, but I'm baffled that anyone would deny the value of something like CPAN.
CPAN's structure enforces good practices (e.g., tests, docs, bug tracking).
CPAN provides not one but two excellent search pages. It also contains many reviews, and makes it trivial to see the source of any library or app before you download it.
It goes beyond CPAN - it would have been nice to have a "batteries-included" version that packs in the most useful libraries (like Python's Image and lxml libraries).
The very argument that choice > standardization is something I don't get - even in Rails, you have the choice of not using the default templating engine, JavaScript framework, etc. But that doesn't mean they don't package them in anyway. This accelerates adoption, since it gets out of your way - and later, when you mature as a developer, you can of course customize it wildly.
It may or may not also have the happy side effect that the quality of the packaged libraries increases, because of a much larger user base. I'd much rather that people fork mature libraries than hack the one-millionth version of an XML library that just does 2 things.
I'm not denying the value but I do think too much value is put in it. I certainly wouldn't use the lack of something like CPAN as an argument not to learn an otherwise fine language.
CCL is supposed to have nice bridge to Objective-C and Cocoa on OS X. It also compiles very fast compared to SBCL and supports threading on Windows (which SBCL does not).
> I could write code faster than I would in Perl, Awk or TCL, it ran as fast as C++
Was that GUI code (as the rest of your post seems to imply)? CL is not competitive with C++ in terms of raw speed, in my opinion. Without type declarations, it's close to Perl.
Right, the specific code that he wrote he claims ran as fast as C++. What type of code is this? Benchmarking programming language speed is a very, very tricky thing because it is so multidimensional.
- Were they both multi-threaded or Single-threaded?
- What libraries were each one using?
- Was it mostly IO, network, or user interaction?
- Is he measuring load times, processing times, run/wait ratios?
- What level of compiler optimization were they both using?
Yes, there are several reasons it's called the benchmarks game and although you can read it the way you have, it's more to do with having a lot of spectators and some regular players, and more to do with a weariness with fatal shootouts.
"it's _good shit_. Get work done good. Think, hack, ship, bill for it good. 2-3 products per month good. You still have to know where things are, who is working on what, what's maintained and what's obsoleted by what. Sure. But there is absolutely no lack of libraries."
I'm not disagreeing, but to be fair, "common lisp" is often referred to as "lisp" or "CL". So the phrase is good for (down)trends, but not for comparisons.
I wouldn't put too much faith in the popularity of languages on Google trends when they have names which are commonly used in english. For me, the headlines on the right hand side say it all:
Jeff Ruby asks OJ to leave business
Ruby on Rails 2.0 Released
JFK Documents Include Reported Ruby, Oswald Conversation
23-foot python found basking in sun
Former Olympic champion Ruby dies in climbing fall
This could also mean that there is neither an opera named "common lisp", nor a comedy group, nor a gemstone of the same name ... and, well, Perl's fame is waning, or fewer people misspell "pearl".
I predict you will get a great deal of pythonic fire. Nevertheless, I find it very nice that you plan to deal with the specific libraries you're using in the next posts. The greatest problem in adopting CL seems to be the extremely decentralized (not to say disorganized) library space.
"Python 2 is fine now – but in the coming months and years new libraries,
features, and performance improvements are only going to be introduced
in Python 3, and I didn’t want to get left behind or forced to take on
an expensive and time consuming port in the future."
Does this argument hold some water, or is Python 2-to-3 conversion often pretty trivial in practice?
Word on the street is that the 2to3 tool is magical. Just run the Py2.6 interpreter with the "-3" flag for warnings, fix the warnings, and run the 2to3 tool to get your port.
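A trivial sketch of the kind of thing the tool rewrites (the converted line is shown as a comment, since the two versions can't run under the same interpreter):

    # Python 2 source before conversion:
    print "hello", 1 / 2        # py2 prints: hello 0 (floor division)

    # After running `2to3 -w thisfile.py` the print statement becomes a call;
    # division semantics still deserve a manual look:
    #   print("hello", 1 / 2)   # py3 prints: hello 0.5 (true division)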
from multiprocessing import Pool

def f(x):     # defined before the Pool so the forked workers can find it
    return x*x

p = Pool(5)
print p.map(f, [1,2,3])   # [1, 4, 9]
I do not have direct experience with the library, but from what I've seen, the code you would write is not significantly different from the code you'd write using a decent multithreading interface.
Multiple processes cannot share regular in memory data structures. That makes the multiprocessing library unsuitable for a growing class of applications. However, unladen swallow aims to remove the GIL, so there's still hope for Python.
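To make the limitation concrete, a toy demo of mine (assuming Unix fork semantics; each worker gets a copy of the parent's state):

    from multiprocessing import Pool

    shared = []

    def append_square(x):
        shared.append(x * x)    # mutates the child process's copy only
        return x * x

    if __name__ == '__main__':
        p = Pool(2)
        print p.map(append_square, [1, 2, 3])   # [1, 4, 9]
        print shared                            # [] -- the parent's list is untouched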
Some of the conclusions are inconsistent. For example, Ruby got canned mainly for having a heavyweight framework, since he wants to write everything from scratch. What keeps one from doing that in Ruby? Ruby does not depend on Rails.
His complaint about different versions and implementations is a red herring. Lisp is not immune from this.
It's the other way around. For things you've decided to write from scratch, especially if they're hard, CL is a dream. There's nothing comparable in my experience [~]: you can build unbelievably powerful and concise programs easily, then just as easily transform them in major ways. It's the ultimate combination of malleability and expressive power. (Oh, and performance, when you need it.) I can think of only three reasons to prefer Ruby over it: (1) you want Rails etc.; (2) you don't like Lisp; (3) you really like Ruby. The latter two are matters of taste, which leaves #1.
[~] I haven't tried Clojure yet. Then again, I don't need to: the JVM is a drawback for me, and I don't need all those Java libraries right now.
I appreciated the article, but I didn't find the reasoning compelling. What I took away from the article was the following line of reasoning:
1. You strongly prefer functional programming to OO programming (no reason given other than personal preference)
2. You don't have enough experience with any languages that are well-suited to FP other than CL that you would be comfortable undertaking a large project
3. You confirmed that many languages you are familiar with (Java, Python, Ruby) are better suited to OO than FP style (not a surprise at all except perhaps Python as I touch on below), so as a direct consequence of (1) and (2), you were left with CL.
CL is my first-choice language for certain kinds of projects as well, but I didn't come away with any new insights from your article. Your choice seems to hinge around two key conclusions for which you give very little explanation: FP is better than OO for your project, and Python is not a good language for FP. I single out Python because I don't think anyone would argue that Java or Ruby is well-suited to FP. Guido insists that Python is not an FP language, but many users (myself included) would argue that it is reasonably well-suited for FP.
That FP was better for your project does not require any further explanation. As others have noted, for a project where scalability of the development team is not an issue (i.e. you are the primary or only developer), it makes sense to choose the style and language you are most comfortable with. If you were starting a project that was going to require several years and a large number of developers, I would have found it more surprising if you committed to a functional programming model. But, given the nature of the project, it didn't seem like there was any need to defend that decision further.
All in all, I'm very excited to see CL being used for more commercial web apps, but I didn't find any insight in the thought process you laid out.
I am, however, much more interested in Pt. II - your description of rolling out a web app in CL. I learned CL ten years before I learned Python and would prefer to write in CL when possible, but for web apps, I have turned to Python for a number of reasons, most notably the obvious availability of libraries. I would love to hear what you used and hear a cogent argument that rapid commercial web app deployment can be done in CL as easily as in Python.
I forget where this came from but this applies 100%:
You ask a person to rate 5 paintings 1-5. You then tell them that you have spare paintings that they happened to have rated as 3 and 4 and ask them if they would like one. They will take a "4". Come back in a week and ask them to rate the paintings again. The "4" becomes a "5" and the "3" becomes a "2".
Our brains will rationalize decisions to make us happy (whatever that means).
Ok, so Rails is too heavyweight for you? What about Merb? Or Sinatra, which was designed for just that reason? Dismissing Ruby as a language because only one of a dozen available web frameworks is too "heavy-weight" sounds a bit short-sighted to me.
Not everybody has time to keep up with the continuous and rapidly increasing micro-speciation of all ruby projects from the VMs down to template formats.
" ...continuous and rapidly increasing micro-speciation of all ruby projects from the VMs down to template formats."
This is simply bullshit. There are merely a fair number of solid choices, as one would hope any decent language would offer. Being able to select appropriate tools for basic Web framework, ORM, and template language is a Good Thing.
There are only a handful of viable VMs, and for Web development the practical choices are MRI, REE, and JRuby.
"micro-speciation"? That's hysterical. Literally.
If Ruby doesn't interest you, fine. Bogus characterizations don't help anyone though.
I am not the OP, but I have a problem with Rails and alternatives: there does not seem to be a viable alternative to ActiveRecord around? I looked into DataMapper (I think that was its name) which initially looked great. But then I could not find any information on how to use transactions, and also received no answers on the newsgroup. I guess I should have read the source, but I gave up at that point.
Since I hate ActiveRecord, I have now subconsciously decided that Rails probably isn't for me.
It's great that Sinatra is spawning lots of imitators for other languages, though.
You could probably get rid of the "(fixed)" in your title; I bet most people didn't see your original submission, and the ones that did will know what happened.
Thanks. Sorry for the trouble before. I had an old personal wordpress blog that I'd originally posted this on - but there were some technical issues with upgrading WordPress to a modern version.
I like the choice. I love to see people using lisps but some of the path to that choice makes me cringe. Like this:
I probably could have written a ‘bare-bones’ implementation of the site’s back-end in Rails in a week instead of two weeks, but I would rather ‘waste’ that one week up front to have more flexibility later.
It was the previous sentence that resonated with me:
I’m a bit wary of using ‘heavy weight’ frameworks like Rails (or Django) on large custom projects. In my experience they make the first 90% of what I’m trying to do be really easy – but then make the last 10% a living hell since I need to modify something the framework never intended me to control.
That is a very true statement, in my experience. It's also an elusive insight, because once you commit to a framework you begin to interpret your problem, the universe, and everything in terms of the framework's conceptual space.
When the fundamental design of the language is stacked against your style of working, the cognitive dissonance of using so many workarounds is a serious problem.
I just switched to Lua during the Python 2 to 3 transition, and I haven't looked back. Real closures and tail calls, no GIL, a vastly more tasteful design, a dead-simple C API, and the whole shebang is a tenth the size of Python.
Meant to add: Lua's not perfect, but its whole design favors embedding it in another language, so it's a hell of a lot better about gracefully getting out of the way when it's better to write the parts that matter in other languages.