This doesn't seem like a useful comparison, analysis or conclusion. Clojure is philosophically a Lisp, but as others have pointed out, it isn't usefully the same language as any other Lisp that preceded it or exists today. If this particular example is useful to people who might otherwise think they could reuse existing Lisp code, then I suppose it's saved some effort, but that just seems like a straw man.
Either way, if you get the terminal condition of a recursive function wrong, you will blow the stack. It's true Clojure isn't built to do TCO, but that's a non-sequitur. The idea that you don't typically use recursion in Clojure is ridiculous - for is a macro, it's syntactic sugar over Clojure's recursion primitives, loop and recur. Even if you didn't want to use that, change the nil? check to empty? and it works. I'm not going to claim that Clojure's list/sequence semantics are necessarily the cleanest (the empty _sequence_ in Clojure is nil, an empty list/vector/set etc is not), but they're not hard to learn if you make any effort at all. Again, your existing Lisp experience may or may not help you understand Clojure's data structures, but I am surprised that this is a surprise to anybody.
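For what it's worth, here's a minimal sketch of what the fixed recursive version might look like (assuming the article's function is the pair-builder discussed further down; the names here are mine):

(defn build [list1 list2]
  (if (empty? list1)
    '()
    (concat (map #(list (first list1) %) list2)
            (build (rest list1) list2))))

;; (build '(1 2) '(a b)) => ((1 a) (1 b) (2 a) (2 b))
;; Note the recursive call isn't in tail position, so a long enough list1
;; would still grow the stack, TCO or not.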
In practice recursion is not something that you use that often I think. I've always seen it as a low level construct. The filter/map/reduce idiom or, alternatively, list comprehensions get you a long way.
Not to say recursion is not important. Without it lambda calculus is not Turing complete. But Clojure does provide explicit TCO. Saying this is a big difference is excessive IMHO.
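For instance, a trivial sketch of that idiom: summing the squares of the even numbers below ten needs no explicit recursion at all.

(reduce + (map #(* % %) (filter even? (range 10))))     ; => 120
;; or as a list comprehension:
(reduce + (for [n (range 10) :when (even? n)] (* n n))) ; => 120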
> Either way, if you get the terminal condition of a recursive function wrong, you will blow the stack.
Not in Common Lisp. The last time I tried to show someone how well recursion worked in a typical Common Lisp implementation, I crashed Firefox trying to paste the result of a naive factorial function into a text buffer. Stack depth limits in other languages are often arbitrary.
I suppose what I ought to have said is that very few programming languages can completely protect you from your own logic errors. It was not hard for Giles to discover the error in his code, but he still blamed the language for it.
Yeah, I got to the following line and stopped reading:
> First, this code assumes that (if (null list1)) in Common Lisp will be equivalent to (if (nil? list1)) in Clojure, but Clojure doesn't consider an empty list to have a nil value.
That sentence makes it very clear the writer either hasn't been arsed to read through even one Clojure tutorial or is being intentionally dense. If they have an educated opinion, great, but I have no patience for this level of ignorance, especially if it's feigned ignorance.
To make a statement about Common Lisp from simple tutorial code is a bit much.
If one looks at music software in Common Lisp, the code is on a higher level than building lists like that.
> Secondly, this code tries to handle lists in the classic Lisp way, with recursion, and that's not what you typically do in Clojure.
Neither is it done in Common Lisp.
> Those 7 lines of Common Lisp compress to 2 lines of Clojure
The actual difference is that Common Lisp uses LOOP and not FOR, and that LOOP needs two nested LOOP forms:
CL-USER 11 > (defun build (list1 list2)
(loop for e1 in list1 append
(loop for e2 in list2 collect (list e1 e2))))
BUILD
CL-USER 12 > (build '(1 2) '(a b))
((1 A) (1 B) (2 A) (2 B))
Two iteration forms in a LOOP don't nest, but provide iteration bindings similar to LET* and LET:
(loop for i in '(1 2 3) for j = (expt i 2) collect (list i j))
and
(loop for i in '(1 2 3) and j in '(1 4 9) collect (list i j))
Common Lisp's ITERATE macro also needs nested forms, but slightly improved over LOOP:
ITER-USER 26 > (iterate outer (for i in '(1 2))
(iterate (for j in '(a b))
(in outer (collect (list i j)))))
((1 A) (1 B) (2 A) (2 B))
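For comparison, the two-line Clojure version the article presumably has in mind would be something like this (my sketch, not the article's code):

(defn build [list1 list2]
  (for [e1 list1, e2 list2] (list e1 e2)))

;; (build '(1 2) '(a b)) => ((1 a) (1 b) (2 a) (2 b))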
The author of Clojure knows these differences very well, since he was a heavy Common Lisp user for a few years.
> But, at the same time, if you're looking to translate stuff from other Lisps into Clojure, it's not going to be just copying and pasting. Beyond inconsequential, dialect-level differences like defn vs. defun, there are deeper differences which steepen the learning curve a little.
That's true. Clojure is not a Lisp, but partially derived from it, with many other influences from Haskell and other languages. It's mostly incompatible with Lisp: software can't be shared or copied; it needs to be completely rewritten. Thus basic Lisp literature is only of use where it covers features that were carried over. For example the book 'On Lisp' might help to understand macros in Lisp and Clojure, whereas books like 'Practical Common Lisp' or Norvig's PAIP aren't very useful for Clojure programmers.
I think the problem is that people seem to equate "Lisp" to "Common Lisp". Clojure is a Lisp, but Clojure is not Common Lisp.
LISP appeared in the late 50's/early 60's. Common Lisp only appeared in the early-to-mid 80's. There were many other lisps in between and many since. Common Lisp is only one of many dialects of Lisp. Clojure is another.
(Not saying that this is what lispm did in this case, (s)he seems to know a lot about lisp; rather this is a common confusion I've seen)
Common Lisp, though, is the lineal descendant of the original LISP, so a case needs to be made for anything else like Scheme and Clojure that claim to be Lisps. I guess I'm saying it's a debatable point.
I'm not sure calling Clojure "not a lisp, but partially derived from it" really makes sense.
That code from other lisp implementations needs to be rewritten to be ported is really not that much different from how many other lisp implementations related to each other.
CL and Scheme did a lot to unify things, such that any scheme should run another scheme's code. Same for any CL implementation. But, for example, running emacs lisp in any other lisp just isn't going to work.
Fair. There have been, and always will be lisps that can be copy pasted between each other. Just as you can get some implementations of programs in some assemblies that work between assemblies.
That does not say anything about whether or not a language is an assembly. Just as this says nothing about whether a language is a lisp.
You're bunching Clojure in with several languages that have strayed far further from Lisp. Clojure was designed specifically with the intent of being a dialect of Lisp, which means it's explicitly not in its own language family or in any other language family than Lisp.
> Clojure was designed specifically with the intent of being a dialect of Lisp
The result of that design is that Clojure shares literally zero lines of code with other Lisps: Autolisp, ISLisp, Emacs Lisp, Standard Lisp, Common Lisp. Many basic concepts are absent, renamed or redesigned ('Atom', 'Linked List', ...). Clojure is fully incompatible with any other language with Lisp in its name. Programs have to be re-architectured, because the concepts are different: no TCO, but 'functional'; a different approach to side effects (-> avoid); lazy, persistent data structures at its core; a different idea of OOP (-> avoid); Lisp infrastructure (-> avoid), ...
There is no software I know of, which is shared. The number of tools shared is 'one': GNU Emacs. It's a different community with different goals, different software, different applications, different libraries.
Just think of it: zero lines of code is shared.
My problem with that broader idea of 'Lisp': it is fully vague and it has no practical implications. Sometimes it's only a marketing slogan. It has the same vague meaning as 'object-oriented' or 'functional'. There is no consensus on what OOP in a broad sense means and how it should be defined. 'Lisp' is just as useless.
It does not mean that Clojure is useless or worse than Lisp. Totally not. Clojure has put new ideas about data structures to work and enabled some very productive people who enjoy their tools. They are doing wonderful stuff.
Neither does Emacs Lisp nor ABCL. Yet they are Lisps by your definition.
> different approach about side effects (-> avoid)
Though not precluded. You can write Clojure with side-effects, but to do so you need to be explicit about it.
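For example (a minimal sketch): mutable state lives behind an explicit reference type and is updated with an explicit call.

(def counter (atom 0))
(swap! counter inc)   ; explicit update with a pure function => 1
@counter              ; dereference to read => 1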
> different idea of OOP (-> avoid)
Which says nothing about the language or its Lisp-nature. Clojure has generic dispatch with multimethods, supporting runtime polymorphism. As you say later, "OOP" is meaningless since its definition is vague, so using this as a reason for Clojure to not be a Lisp is odd.
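A minimal, hypothetical multimethod sketch, just to illustrate the dispatch:

(defmulti area :shape)   ; dispatch on the value of the :shape key
(defmethod area :circle [{:keys [r]}] (* Math/PI r r))
(defmethod area :rect   [{:keys [w h]}] (* w h))

(area {:shape :rect :w 3 :h 4})   ; => 12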
> Just think of it: zero lines of code is shared.
Are you saying that a lisp is only a 'Lisp' if you can freely share code between them? Without modification? If the names used for functions aren't the same, then that disqualifies it from being a Lisp?
Your position seems to be that unless the lisp is a direct descendant of Lisp 1.5 then it cannot be called a Lisp. In addition to Clojure, this disqualifies Scheme (and its dialects.)
Different Lisps (by your definition) take different approaches to things like namespace separation (Lisp-1 vs. Lisp-2) and scope (dynamic vs. lexical). These differences can be subtle and lead to hard to find bugs when sharing code.
> My problem with that broader idea of 'Lisp': it is fully vague and it has no practical implications.
Yet you have put a stake in the ground and defined the broad idea of 'Lisp' as an entity that shares its roots with the ideas in MacLisp.
> The result of that design is that Clojure shares literally zero lines of code with other Lisps: Autolisp, ISLisp, Emacs Lisp, Standard Lisp, Common Lisp. Many basic concepts are absent, renamed or redesigned ('Atom', 'Linked List', ...). Clojure is fully incompatible to any other language with Lisp in its name.
> Programs have to be re-architectured, because the concepts are different:
"Re-architectured" can be read in a multitude of ways. Changing a few datatypes because Clojure prefers vectors instead of lists, etc., feels like it falls well below the bar for "re-architectured".
> no TCO, but 'functional'
This would hold weight if CL mandated TCO, but it doesn't.
With this in mind, I'd like to examine a few of your points from a different angle:
Racket:
- Basic concepts are the same, (atom, linked lists, '...')
- Restructuring not really needed (up for debate, depends entirely on your preferred initial design choices in either language. Racket isn't that opinionated.)
- Tail call optimization required as per Scheme standard, so is present. Note that this is not actually something Lisp mandates.
- Doesn't 'care' about side effects, community is pragmatic and will generally advise you to do whatever is practical
- Has an object oriented sub-language with message passing and so on
There are a lot of points here that, according to you, make Racket essentially a Lisp, even your non-point about TCO. I'm curious to know what you feel about people saying Scheme is a Lisp, considering the above.
The whole debate of "Is X (a) Lisp?" reminds me of nationality debates that essentially boil down to some people saying blood is more important than culture. You seem to be taking both sides, however; arguing culture and blood (source code). You cherry pick the cultural differences like most people standing on one side of the nationality debate would and argue that just those specific differences are the most important.
That's the thing, though; Swedish people could argue however much they want that they're very different from Norwegian people. To the rest of the world, though, they're essentially the same. Especially when you start comparing them to people from Peru, Venezuela, South Africa, and so on. When you're in the bubble the very small differences are much bigger to you, but if you zoom out to get some perspective these differences are much smaller than the commonalities.
Yes. I was asking you, because you have ideas of what makes a Lisp a Lisp. I've given you several reasons that by your own admission were important for a Lisp being a Lisp and I'm wondering, considering what you've said was important, if you think Racket is a Lisp.
In practical terms Racket is not a Lisp. There are large areas of overlap. The people behind Racket steer the language further away - which is nothing bad, just an observation... there are lots of new features in Racket, which are not in Lisp.
I find that interesting, considering that most of the points you raised about Clojure not being a Lisp would indicate that you think Racket is a Lisp.
You've also been nothing but vague in terms of what makes Racket not a Lisp, which is ironic considering your previous argument that we should have more clearly defined requirements for what makes a Lisp, but I'm beginning to think that this was mostly hot air.
I think you've mostly proven that the 'vague' definition serves a much more practical purpose than your seemingly arbitrary distinction between these languages, and it will continue to do so, as it implies far more than you've displayed in this thread.
> You've also been nothing but vague in terms of what makes Racket not a Lisp
True, and I have to apologize for that. But I don't have the time, nor the priority to go into full detail. The main points are: different community, different goals, almost zero code sharing, different technical solutions, different literature, ... The differences are technical and social. Each of the points would need more explanation, for which I don't have the time.
For me 'Lisp' is something practical. I have a bunch of non-trivial code -> can I compile/load it? Can I port it easily? Are there people who would be interested to share? What are they using? Can I work with them?
Example:
Macsyma is an old Lisp program. New Lisp dialects appeared. Macsyma was ported to them: Maclisp, Franz Lisp, NIL (New Implementation of Lisp), Lisp Machine Lisp, VaxLisp, Common Lisp...
This software can't be ported to Clojure or Racket, without fully re-architecting the software, and I'm only thinking about the basic Macsyma, without GUI or other system dependent parts.
> that the 'vague' definition serves a much more practical purpose
which one? to confuse people? To raise expectations of collaboration in a community, which are then not fulfilled?
It's true that it is more difficult to port from Common Lisp or other Lisp-2 dialects to Scheme, Racket, or Clojure (which are Lisp-1 dialects) than it would be to port to other Lisp-2 dialects. But that certainly doesn't make Lisp-1 dialects "not Lisp" while Lisp-2 dialects are "Lisp".
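As a small illustration of the kind of thing that bites in porting: Clojure is a Lisp-1, with one namespace for functions and values, so a local binding shadows a function of the same name.

(let [list [1 2 3]]
  (list 4 5))
;; ArityException: the local vector shadows clojure.core/list and gets called instead.
;; In a Lisp-2 such as Common Lisp, (let ((list '(1 2 3))) (list 4 5)) returns (4 5),
;; because variables and functions live in separate namespaces.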
Would you consider Scheme a lisp? It is after all also not sharing any code with the above mentioned (by necessity, since it uses define instead of defun for declaring functions).
Other Lisp dialects, not implementations. Common Lisp has "implementations", Scheme has "implementations". The word "implementation" implies a high degree of compatibility.
In any discussion involving Common Lisp, there will be the opinion that "Clojure is not a Lisp" without (in my view) providing sufficient context for this clearly confusing statement.
(FWIW, I call Clojure a Lisp. It is clearly in the Lisp tradition, not to mention more technical things like: code-is-data, serious REPL, accurate numbers like 10.00000000000000001M. I don't think "Lisp" should be reserved for mainly incremental improvements on an old movement.)
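(For the curious: the M suffix is Clojure's BigDecimal literal, which is why those digits survive.)

(+ 10.00000000000000001M 1M)   ; => 11.00000000000000001M
(+ 10.00000000000000001 1)     ; => 11.0, the trailing digit is lost in a double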
That’s not a quote. You capitalized "Code-is-data" as if it were starting its own sentence. Had you quoted the whole sentence, your response would've seemed more like a non-sequitur.
In any case, there is no definition of even "chair". (Which partitions the world into chair and not-chair. Simply imagine a continuum between a particular chair morphing into a not-chair; where exactly does it stop being a chair?) So I'm unconcerned about edge-cases, where a language is actually in a fuzzy part of that continuum. Not to mention those amusing blogposts claiming "Javascript is a Lisp!" which maybe aren't objectively true but inspiring.
I keep seeing people make this statement, without any explanation of why. To me, hardly a heavy user of other Lisps, but with enough time spent in Common Lisp and Scheme to have an idea of what's going on, it feels pretty lispy.
I assume there's no official hard definition of what it means to be a Lisp. It's like "Pythonic". A warm fuzzy feeling that not everyone will share about the same things. It depends on what they consider important / good about a language.
Is C++ a C? Is Oberon a Pascal? Is F# a ML? Is C# a Java? Is Go a Squeak? Is Objective-C a Smalltalk? Is Javascript a Self?
No, not really: it's as controversial as Common Lisp's Lisp-n nature (which is to say, both are controversial amongst Scheme programmers writing Common Lisp). It's a perfectly useful macro. ITERATE is a bit more traditionally Lispy, and is an easy library away if one wants it.
> I don't think it's fair to say that its use is idiomatic.
Not using it is typically less clear. It's part of the standard; it's more elegant than using other parts of the standard; it's idiomatic.
Eh, no, it was controversial when it was being developed in the early '80s, a lot of respectable Lispers didn't like it, thought it was too complicated and easy to get wrong. But Common Lisp also has simpler constructs like dolist that aren't, to my knowledge, "controversial", except of course as you note this paradigm isn't a Scheme sort of thing.
It's idiomatic to use a sublanguage. Think FORMAT. CLOS. Control flows using the condition system. LOOP is just another style to extend Lisp. There are older libraries which use similar style.
Simple example from Winston&Horn's book LISP for a database query language:
Note that Common Lisp does not require tail-call elimination, so deep recursion is not something you can rely on as a standard way to iterate. No one thinks LOOP is more of a crawling horror than I do, but it or something like it is your only option.
The conclusion of this article is: "Clojure is not the same as Common Lisp." Well, yeah, it's not the exact same language. How is this surprising?
I wish the author would have spent a little bit more time researching. The comparison is just plain sloppy. Why is Clojure's `recur` not mentioned, for instance? This is documentation that's not that difficult to find by googling, and is mentioned in (I believe) every Clojure book available.
To make a point, I thought about 'recur' and I haven't written a single line of Clojure in my life. In comparison, CL's looping constructs were not mentioned, which crop up often enough.
From my perspective, the three distinguishing features of Clojure from other languages/Lisps in general are:
1. It's a lisp, which means homoiconic, i.e. build software just like Lego (different from non-lisps)
2. Immutability and purity in the core library + sane, managed concurrency via atoms, refs, core.async, i.e. write serious multithreaded code without setting your hair on fire; helps on the front-end via ClojureScript as well (different from most other Lisps)
3. The JVM, which means very good performance in the general case + build once, run anywhere + a lot of libraries
Lisp != Common Lisp. The very thought that you can cut-n-paste code from one Lisp dialect to another is daft. The author obviously didn't take the time to acquaint themselves with the basics of Clojure.
The lack of TCO in JVM hosted languages is beside the point.
It is also possible for the Clojure compiler to do TCO in certain cases: Rich Hickey made the conscious decision to not do it.
Lisp absolutely means Common Lisp, that was the whole point of Common Lisp. (Naggum rant on this: http://www.xach.com/naggum/articles/3224964049435643@naggum....) Certainly we can talk about the "Lisp Family", or "dialects within the Lisp Family" when things are close enough, and I disagree with Naggum in that I think it's not a totally useless thing to consider. But to me it's not all that useful either, since as far as I can tell the minimal thing to qualify for membership in the family is to be s-exp based like [Common] Lisp, Clojure, or Scheme. I'd hesitate to call the FFP language as defined by Backus "Lisp", or even "a Lisp", however, even if a program looks like (+ :<4, 6, 8>). It's missing a lot of other things.

So while Clojure, Scheme, and Lisp definitely seem close enough to each other, compared to languages like Python or C, that it might be sort of useful to group them into a family, it's a stretch to call them true dialects of each other. I can say the same things more or less with the same effort in the different dialects, but how can I trivially talk Lisp conditions and restarts or Reader Macros or CLOS in Clojure or Scheme, let alone any of the other "dialects" out there? Those things aren't just "jargon" that can be interchanged or minimally expanded out, they are huge implementation details.

You'd sell me more on the dialects thing if we were talking about Scheme as implemented by the Racket guys vs. Guile et al., or Common Lisp as implemented by SBCL vs. Clozure et al., or Clojure as implemented on the JVM, CLR, or JavaScript runtime.
> It is also possible for the Clojure compiler to do TCO in certain cases: Rich Hickey made the conscious decision to not do it.
If I read your statement as Rich Hickey having the opportunity but passed on it, I'd reply that he did that with recur. Unless you mean silent TCO like e.g. Scheme does.
Perhaps, but that doesn't mean it's accurate. TCO literally just says tail calls are optimized, and I personally see an advantage to making your intent to make a TCO-ed tail call explicit, so like in Clojure you get a compilation error rather than blowing your stack at runtime.
It certainly looks more elegant to do it silently, but Clojure doesn't strive for that sort of elegance.
TCO is broader than what recur does. Recur only optimises recursive tail-calls to the function you are in, TCO generally implies that any tail-calls (recursive or not) can be optimised (including mutually recursive calls, for which clojure made the trampoline function or just calling one function at the tail of another).
Personally, I like clojure's approach as IMHO recur makes intent clear, but recur is a subset of what TCO optimises in other languages.
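Roughly, as a sketch of both:

;; recur: an explicit self tail call, constant stack
(defn sum-to [n]
  (loop [i n, acc 0]
    (if (zero? i) acc (recur (dec i) (+ acc i)))))

;; trampoline covers the mutually recursive case: return a thunk instead of calling
(declare my-odd?)
(defn my-even? [n] (if (zero? n) true  #(my-odd? (dec n))))
(defn my-odd?  [n] (if (zero? n) false #(my-even? (dec n))))
(trampoline my-even? 1000000)   ; => true, without blowing the stack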
For all the gripes surrounding the lack of TCO on the JVM, Clojure really does provide a great set of tools to deal with iteration in a functional way. It's a rare thing when I need to fall back to using loop & recur.
I was gonna post the question "Why should I learn clojure?" but that's easy to google for and get good articles.
I was then gonna post "What's a good book for learning Clojure?" but I can get that on Quora.
So my real question is: what can I read about Clojure that will get me up to speed on its unique awesomeness? I don't need a tutorial that shows me how to add two ints or invoke a function. Show me the good stuff!
I'm far from a Clojure expert, but a couple of things jump out at me:
The destructuring mechanisms in Clojure are very, very powerful. For some reason this author does not seem to like them, even though they reduce his 7 line CL program to a 2 line Clojure program. That's not a fluke, from what I've seen. Stuff that would require a lot of caddars and cddaars and whatnot in a normal Lisp is much more straightforward in (idiomatic) Clojure.
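A small, made-up example of the sort of thing meant here: nested data pulled apart directly in the argument list instead of via chains of accessors.

(defn describe [[[x y] & more] {:keys [name] :or {name "anon"}}]
  (str name " starts at " x "," y " with " (count more) " more points"))

(describe [[1 2] [3 4] [5 6]] {:name "path"})
;; => "path starts at 1,2 with 2 more points"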
The interop between Clojure and Java (or ClojureScript and JavaScript) seems to be a lot cleaner and more straightforward than the sometimes-ugly FFIs you see with other languages.
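For example, plain interop syntax (nothing library-specific assumed):

(.toUpperCase "clojure")                          ; instance method call => "CLOJURE"
(System/currentTimeMillis)                        ; static method call
(doto (java.util.ArrayList.) (.add 1) (.add 2))   ; constructor plus chained calls => [1 2]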
It has just about all the standard Lisp goodies, too (macros, etc.)
> The destructuring mechanisms in Clojure are very, very powerful. For some reason this author does not seem to like them, even though they reduce his 7 line CL program to a 2 line Clojure program.
What destructuring? I didn't see any in the example.
In my opinion, Halloway's Programming Clojure is a good beginner's book because it hits the right mix of Clojure newbie with experience programming. Among free online tutorials, Aphyr's Clojure from the Ground Up is my recommendation. A little deeper into the language, Fogus's Joy of Clojure hits more technical topics.
I would recommend something that seems weird but isn't: start with Joy of Clojure just to get a sample of what the language really is about, and if you like what you see go back to one of the introductory books and learn it.
I'd recommend against Programming Clojure and instead recommend Clojure Programming from O'Reilly.
I read a fair amount of both and the former explains the subject matter too superficially IMHO. +1 for Joy of Clojure which is a nice read in parallel.
That was my impression of Programming Clojure at first, it seemed rather lightweight. What I came to believe is that Halloway's presentation focuses on simplicity but achieves reasonable depth. [1] The high level of accessibility reflects Halloway's background in the training industry and his expertise in Clojure. His book is efficient in the same manner as Clojure.
Joy of Clojure is a good book. It goes deeper while assuming more of the reader. The technical detail is useful and interesting, but for me, the narrative seems a bit less cohesive [disclaimer: I have the first edition, not the second]. I haven't read Clojure Programming.
[1]: Edit. For example the running exercise is porting a Java build system to Clojure. That's full on software engineering, not a let's-pretend.
Thanks for the hint, I'll check the build system example.
I focused on Programming Clojure as it's almost half the size of Clojure Programming and I want to get to other, more focused clojure books (Reactive Programming, Macros).
But for me, personally, the examples given so far (I've made it at least through the State-chapter) weren't well explained. I'll see how Clojure Programming holds up in that regard but so far many explanations have been better IMHO.
You should learn Clojure because you'll be a better developer. Maybe you'll never use it outside personal projects. It changes the way you think about programming.
And didn't tail call optimization only become standard in Lisps with the advent of Scheme?
In any case, at Standard Chartered we didn't have tail call optimization with our Haskell dialect either. It didn't matter too much in practice, because you should be using combinators anyway. And when you are calling foldr or map, you do not care that somewhere hidden away they are implemented with a loop in C++, as long as they behave right.
Well SC's proprietary compiler is probably different, but GHC is self-hosted (the runtime system is C but the compiler is implemented in Haskell) and the map and fold functions in Prelude are recursive. Here's the source code for `map` in Prelude:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
But it is definitely still true that explicit recursion is discouraged as being too 'low-level' for most Haskell code and it's preferable to use higher-order functions instead.
Indeed, and as far as I know Scheme JVM implementations, which I haven't looked at in a long while, either do it per the spec and are slow (SISC, reputedly, which died about the time I might have started using it), or go through contortions like Kawa to do the best you can. I don't know about JScheme; it was dead before then. And I just noticed Bigloo will compile to the JVM, adding one to the list of 4.
I seem to recall it depending on compiler settings whether SBCL does TCO. Which means you probably don't want to rely on it in general unless you are okay being locked to a specific implementation and specific optimization settings that may or may not seem magical to the uninformed user.
And really I think the "classic" lisp way to loop is loop, not recursion.
Both, I think (from long-ago memories). The "modern" loop macro (which is probably Turing complete, like I seem to remember people saying format is :-) is, I think, a relatively new thing; I overheard a lot of discussion about its design in the early '80s.
Although it probably had precursors, mainline Lisp, now Common Lisp, is decidedly multi-paradigm; there were even sops thrown to FORTRAN programmers, as I recall, probably dating back to when there were only a very few computer languages in existence (heck, LISP's first implementation, on a vacuum tube computer, was as FORTRAN subroutines). So overt things like loop are in theory just as idiomatic as recursion.
The --full-tail-calls flag is not on by default, partly because it is noticably slower (though I have not measured how much), and partly I think it is more useful for Kawa to be compilatible with standard Java calling conventions and tools.
Well, for some definition of just fine. Well-implemented TCO is a performance boost, not a hit, on most platforms. The lack of TCO built into the JVM means that JVM languages like Scala and Kawa generally have to roll their own on top of the JVM, resulting in a performance hit.
Performance is irrelevant I think to the discussion. The claim is being made that Clojure and ABCL don't do full tail call elimination because of stack restrictions in the JVM. That sounded unlikely to me. Kawa does full tail call elimination and it's a JVM language. Hence, this claim can't be true, right?
Kawa fakes it. The JVM doesn't support it so if you use a normal function call you don't get tail recursion.
You could avoid calling functions and instead do your own stuff but that doesn't change the fact that the JVM doesn't support it.
That is what they mean by performance: you lose performance because you can't do naked function calls in those cases.
Since, apart from mutual recursion (for which tail calls are difficult to detect correctly), the benefit of tail call elimination is a performance boost, it gets skipped when the cost of implementing it kills your performance.
Originally, though, it didn't. See the current language in the link you provided, emphasis added:
Kawa now does general tail-call elimination, but only if you use the flag --full-tail-calls. (Currently, the eval function itself is not fully tail-recursive, in violation of R5RS.) The --full-tail-calls flag is not on by default, partly because it is noticably slower (though I have not measured how much), and partly I think it is more useful for Kawa to be compilatible with standard Java calling conventions and tools. Code compiled with --full-tail-calls can call code compiled without it and vice versa.
The fact that it didn't originally is irrelevant. And the fact that you have to provide a flag because of performance is irrelevant. It still remains that Kawa appears to do full tail call optimization on the JVM. So it can't be that Clojure doesn't do it because the JVM doesn't permit it. So what's really going on here?
A much bigger difference imho between Clojure and Common Lisp is that the former is built on abstractions (e.g. conj and the sequence abstraction), while the latter is built on concrete cons cells. There are other significant differences but I don't want to flame/argue.
Is it even recursing in the tail position though? Reading the code it looks to me like the recursion result is used as an argument to a subsequent function call (append).
Spot on! The code fails as the author has used nil? rather than empty? The recursive call is not from the tail position so tail call optimisation is not possible in any language.
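A one-line contrast, as a sketch:

;; tail position, safe with recur:
(defn count-down [n] (if (zero? n) :done (recur (dec n))))
;; NOT tail position: the result of the recursive call still has to pass through +,
;; so the frame can't be discarded and the stack grows regardless of TCO:
(defn depth [n] (if (zero? n) 0 (+ 1 (depth (dec n)))))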
The reason why some people say that Clojure isn't a lisp is that some of its design decisions such as those mentioned in the blog posting detract from the essence of lispiness. It goes too far to say that it's not a lisp but it's certainly less lispy than Common Lisp or Scheme.
What is the name for illustrating a point with a deliberately unrelated example? His Clojure code fails because he did not swap nil? for empty? (and he even states this in the article), but he then uses it to illustrate the absence of TCO in Clojure.
Sure, it does have cons, but your second argument wasn't a sequence. Try:
> (cons 'a ['b])
(a b)
> (doc cons)
-------------------------
clojure.core/cons
([x seq])
Returns a new seq where x is the first element and seq is the rest.
That's not a cons, that's a linked list. As tokenrove mentions, a cons in a Lisp is just a tuple (`cons` is the name of the type and the function constructing it). It can contain any two values.
I might be wrong, but the biggest difference is that lists are immutable in Clojure which makes some data structures which are based on shared conses difficult to construct, or less efficient.
This is why I call Clojure a "Lisp", whereas I somewhat pedantically refer to previous languages in this family as "LISPs", as originally coined from "LISt Processor".
But it's still a Lisp, I went straight from old half-remembered mainline LISP and Scheme to Clojure in a recent small web server project without difficulty. The dynamic style of development is the same, as is the typing; if you're not using lists, then the OP's problems don't come up (idiomatic Clojure web programming uses maps, key value pairs), and the syntax is still s-expressions, albeit polluted by arrays denoted with square brackets where it makes sense.
And I believe the article is wrong in one sense, the JVM treats non-tail recursion like other languages, growing the stack. It's tail call optimization (TCO) that's the issue: mandatory in Scheme, don't know about its prevalence in Common Lisp implementations, and it's awkward in Clojure but wasn't much of a jump from SICP for the typical cases. And I think it might be a good idea to require signaling when you intend to tail recurse, it's easier and much quicker to find in compilation than when you blow the stack running it.
There's something very off about working with a lisp and not being able to inject stateful inline expressions for prototyping. In other lisps you find the fluid abstraction/computational nature of sexps holding up very nicely, while in Clojure you find yourself having to rewrite larger portions of functions just to test or fix something. A lot of what turns me off about Clojure is that it takes away the whole "geometrical logic glue" aspect of sexps and leaves behind what feels like a neutered stack-based language for the JVM in lisp's clothing.
Hmmmm; I haven't worked with Clojure enough, and that after a decade break from programming and several from serious LISP programming, but ... while I like it, I don't find it very tasteful in many ways, including that very sort of way. It's a language I like to program in, but not one I think I'll ever fall in love with like mainline LISP and then Scheme.
While I very much like the first class syntax for arrays, maps, and sets, there's a tremendous advantage to having one main or even exclusive built in composite data type, e.g. I hear that a strength of Lua, which I gather has been very successful. LISP's DNA, as well as Scheme's I'm pretty sure, was established in the days when that was lists.
Clojure tried to push homoiconicity as a first-class rule, so it felt logical to have a literal for built-in types. But at the same time I, how to say it, don't want syntax. The first lisps had the minimum of it: sexp, symbol. jmc lisp stopped there; they then added numbers and strings for ergonomic sanity[1], and that feels like the right balance, the proximity of metalevel and language, the adt/dsl feel since everything will be an sexp based abstraction (SICP ate my mind?).
So Clojure tried principled consistency; I like the logic of it, but not the result. Erik Meijer and Bret Victor claim it's a good thing to do: find a concept and go all the way down with it.
[1] it would be fun to be denotational even for these. (arithmetic-sum (number two one three) (number six six six)), yes it would be.
But I think people like us have to accept that many other people hate a sea of parens to the point they violently reject LISPs (e.g. http://ancell-ent.com/images/parens.png) ; my question WRT to Clojure is, how many of them have come on board in part due to adding to and in some cases changing the (nearly) pure s-expression syntax we love?
If a fair number have, I'm willing to accept it for the benefit of "Lisp", for Clojure has created the greatest movement in "Lisp" since the Lisp Machine and microprocessor heyday of the '80s.
Ah, I might add that for me, the REPL style of programming is my #1 feature. One of my greatest programming feats was with CodeCenter/ObjectCenter, an interpretive environment for C/C++, with which I ported a scanner driver and rewrote the engine for monster Kodak document imaging scanners in 3 weeks. The very first time I hooked up one of these 600 pound, bigger than a washing machine scanners and hit the "Go" button it worked all the way through to writing files.
Well, until I ran out of file descriptors, forgot a close() ^_^. But I found being able to do that rather impressive, granted, I'd been working on the lower level SCSI code for a couple of years with optical drives and scanners.
That image fails at making its point however. Even without the brackets, who considers "cond = 1 ag ad recurse carry augend addend 1" a beautiful line of code?
IMHO, without parens you will end up aiming for ml and matching. Otherwise removing everything will turn anything into ...
Without punctuation:
int thing int a int b while c = a ++ > b b = 0 a = b + c return c
Well almost gibberish, I have to admit, infix and some mandatory keyword do help. But Lisps have this tendency to be structural ... so their syntax and idioms go along with that very often. It's somehow twisted to remove that from them.
Without types:
thing a b while c = ++ a > b b = 0 a = b + c return c
Without most operator symbols:
thing a b while c eq inc! a > b b = 0 a = sum b c return c
Keeping the process
thing a b while c eq inc! a > b b isnow 0 a set-to sum b c return c
thing a b while c eq inc! a gt b b isnow 0 a isnow sum b c c
Recovering a bit of alternative syntax:
(int thing int a int b) while (c = a ++ > b. b = 0) a = b + c return c
Nope. Tried GNU Emacs with such a mode for a bit recently because that came as a default in a Clojure editing recipe I found, but having used versions of EMACS to edit LISP since 1980 I found it more annoying than helpful; there are times when I want to mangle the text and fix the parens afterwards.
For me the two keys are smart indentation and the habit of counting off each opening paren at the beginning of an indented line as I close it. I did this sort of work on GoldHill's EMACS clone in the brief period I worked for them, getting it to blink the opening paren when the point is positioned to the right of the closing paren.
Clojure explicitly makes '()' and '[]' different, as well as a few other delimiters. '(1 2 3) is a list like in Lisp tradition, but [1 2 3] is a vector. That is the syntax that the parent is talking about.
Yes, specifically [:whatever] is syntactic sugar for (vector :whatever) and {:what :ever} is syntactic sugar for (array-map :what :ever). CL has the #() syntactic sugar for the first and the #'acons syntactic sugar for the second. So, in fact, Clojure is no more unLISPy than CL in the most quoted respect.
(Quote from CLtL: Many people have suggested that brackets be used to notate vectors, as [a b c] instead of #(a b c). This notation would be shorter, perhaps more readable, and certainly in accord with cultural conventions in other parts of computer science and mathematics. However, to preserve the usefulness of the user-definable macro-character feature of the function read, it is necessary to leave some characters to the user for this purpose. Experience in MacLisp has shown that users, especially implementors of languages for use in artificial intelligence research, often want to define special kinds of brackets. Therefore Common Lisp avoids using brackets and braces for any syntactic purpose.)