The Lisp Curse (2011) (winestockwebdesign.com)
194 points by jlturner on Feb 25, 2016 | 303 comments


I used to read these kinds of articles with interest, back in my Common Lisp programming days. They would worry me. Back in the day, I would even sometimes participate in language advocacy discussions.

These days, I don't care at all — I just write great apps that work (fast). Some code runs on the JVM, some code runs in the browser, some runs in both places. I get to use an impressive set of libraries from at least three different ecosystems (Clojure/ClojureScript, Java, Javascript). I get to use fantastic languages with impressive concurrency support. I deliver applications which customers pay money for.

For me at least, Clojure made this kind of writing totally obsolete, in the span of several years.


I've had the same experience, but only in isolated places where I could choose the technology. Most of the time, though, I bring up any Lisp and people act like I walked in with dog excrement on my shoes. But these are the same kind of people who tell me that Golang is going to cure cancer, so the feeling is mutual.


Unfortunately, due to the JVM, Clojure is not the answer for everyone.

It is not for embedded systems, not for Android (last I checked, it was very slow), and not for browsers. I believe that ClojureScript is probably what's going to take the crown, eventually.

Still, Clojure and ClojureScript are missing a lot of things from Lisp. Restarts, for instance.
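
For anyone unfamiliar with the term, here is a minimal sketch of what restarts look like in Common Lisp's condition system (an illustrative example added here, not from the parent comment):

    ;; The code that signals an error also offers a named recovery point,
    ;; and a handler further out picks that recovery without unwinding everything.
    (handler-bind ((error (lambda (c)
                            (declare (ignore c))
                            (invoke-restart 'use-fallback))))
      (restart-case (error "something went wrong")
        (use-fallback () 42)))   ; => 42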


I'm curious why you say that Clojure(script) is not for browsers? The only time I've really had a problem with Clojurescript is when I'm doing something that heavily interacts with an imperative browser api (like webGL). And even then I just create an interface for that part in JS.


No language is for everyone. Trying to make such a language guarantees it will fail.


When I wrote this essay, Clojure was beginning to show its chops. You're right, the success of Clojure has made Lisp armchair theorizing obsolete. I'm a little tired of the essay, but I keep it up because I still get e-mails praising me for it (cf. the volunteers who made translations linked at the bottom of the page).


I'd have to say that another reason Lisp hasn't taken off is because hygienic macros are a pain to write.....


I used to think that, but alexandria:with-gensyms is pretty nifty. I think the lesson I learned is that every syntax pain point is a macro waiting to be born.
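
For readers who haven't seen it, roughly what that looks like (a hedged sketch; SQUARE and COUNTER are made-up names):

    ;; WITH-GENSYMS gives the expansion a fresh, uncapturable variable name,
    ;; so the argument is evaluated exactly once and can't collide with user code.
    (defmacro square (x)
      (alexandria:with-gensyms (val)
        `(let ((,val ,x))
           (* ,val ,val))))

    (square (incf counter))   ; COUNTER is incremented only once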


Can't agree more. I feel like the reason why a lot of Lispers back in the day ended up re-inventing the wheel is not because of Lisp's "dark, seducing power" but simply because the tools and solutions just weren't there yet.

For example, if it wasn't for the existing JVM ecosystem of tools and libraries, I would probably end up creating my own undocumented, unportable, bug-ridden implementation of 80% of Netty. Instead, I just use https://github.com/ztellman/aleph


The difference is that Hickey is good at marketing. His lectures are awesome and he has made himself the leader of the immutability movement. That hasn't happened to Lisp before.


It's so, so often more about the people and circumstances than it is about the technology itself.


I just started learning Clojure and I would agree with you already. It has all the nice parts from other ecosystems, and it lets me do what I need to do quickly.


You need to add something like python or ruby to your arsenal for quickly writing scripts that start up fast ;) or do you use nodejs for this?


Startup time is something of a red herring for me, kind of like "give me a single EXE or your language is useless" used to be in the Common Lisp world. I don't encounter this problem in practical use. But then, I don't use Clojure to write scripts that I then run hundreds of times from the shell (but why would you want to do that, rather than write a program that does what is needed to hundreds of files at once?).


Racket is a good one to reach for :)


There are a few fast startup JVMs out there that will run Clojure code.


Lisp is a programmable programming language, especially Common Lisp. It works as a programmer amplifier: the better you are, the more you benefit from it.

I challenge anyone to watch this lecture from Christian Schafmeister and say they are not impressed: https://www.youtube.com/watch?v=8X69_42Mj-g and https://www.youtube.com/watch?v=0rSMt1pAlbE


Christian Schafmeister here - thank you for those kind words. I'm using Common Lisp as the basis for a programming environment for developing programmable matter (big, smart molecules and materials). I'll be announcing it soon - stay tuned. Designing molecules is one of the hardest problems in the world and it needs the most expressive, powerful language - Common Lisp. Oh - and C++, there's plenty of C++ in there as well :-).


Your "molecular metaprogramming" presentation is, without exaggeration, one of the most beautiful lectures I've ever encountered.

You've instantly inspired me to change career paths. This weekend I'm going to set up a home chemistry lab so that I may one day contribute to the amazing body of work you've started.

Does anybody on HN have some recommendations for learning intermediate chemistry? or setting up home labs? Just for context, I have math/CS degrees and have already taken introduction courses to physics, chemistry, and biology.


Synthetic chemistry is not really something you want to learn on your own in your basement. You can do it, but you're much much much better off just following real laboratory courses in college. They'll teach you proper lab safety in a safe environment, and the range of reactions you'll be able to perform will be much wider than what you could do in your basement. Most undergrads will perform a Grignard reaction, but I would not advise you do that in your basement.

If you're interested in theory, then it's a whole other thing.

(I'm a chemist.)


I obviously have no idea what can/can't be done at home yet :)

But UCLA is right down the street; I'll see what they have to offer.

Thanks for the tip!


Anything could be done at home, but at what cost (safety + economic)? If you get some lab experience by following courses, you might even land an internship in a synthesis lab if you're lucky :)


Chris Schafmeister here - wow. Well, just make sure you always wear your safety goggles.


Will do, chief! Thanks again for the inspiration.


Beautiful ideas! Do you have a way to control the final stereo-isomer that comes out of the synthesis? I'm thinking of the "256 stereo-isomer" molecule that you mentioned. Or is it a case of brute force and select afterwards?


We have absolute control over the final stereo-isomer that comes out of the synthesis. The large molecules are assembled from building blocks that each contain two stereo centers. We synthesize all four stereo-isomers of the building blocks in separate bottles and then put them together in different sequences to create the large molecules. Where the programming (Clasp Common Lisp, in the molecular design language that I'll announce soon) comes in is that we can build computer models of all stereo-isomers and, in software, figure out which ones can do the job that we want. Then we just synthesize that and test it in the lab.


Awesome! Thanks for that.


This is why I love HN! Amazing video, and that project sounds fascinating. Where can I go to keep an eye on it?


Very, very interesting stuff, thanks.

Every couple of years I spend a couple of months with Lisp and then decide that I actually want to use Lisp to generate code in other languages. Recently I've been on a modern C++ kick, and have been really amazed at how Clang and LLVM are being used to do code indexing.

So, C++ + CL + LLVM + scientific computing ... wow!


I too enjoyed the spiroligomer talk, not all of it, but the deep down approach felt good. If I had more biochem knowledge I'd assemble a team.


Amazing work.


Only part way through the first video but this is very inspiring.

"C++ templates are to common lisp macros as IRS tax forms are to poetry" so true.


What popular programs are written in Lisp?


For one, the service that performs the vast majority of airfare searches -- formerly ITA Software, now Google Flights.

Having spent time at the MIT AI Lab and having co-founded a company whose principal product was a Lisp/C hybrid, I think the challenge with mainstream adoption of both Lisp-like and functional languages is the syntax. There's an element of "don't use a programming language that's hard to hire for" but I think that's secondary as it never bothered us or posed a real problem.

Naughty Dog's Crash Bandicoot games (which I also worked on) used Lisp for all the character control logic.


With regard to Naughty Dog, the Jak & Daxter games were written entirely in GOAL, an in-house Scheme implementation that compiled down to assembly.


You might like this: http://practical-scheme.net/docs/gdc2002.html. It's about using Scheme at a production house for the Final Fantasy movie.


GOAL and GOOL were both used by that studio: "Game Oriented Action Lisp" and "Game Oriented Object Lisp".

They switched to C++ because it was too hard to find good lisp devs.


That's not quite right. These days Naughty Dog uses a DSL called DC built in Racket to write all the "data" in the game (everything from cut scenes to character attributes). Running DC produces data files shipped on the DVD of the game, and used by the big C++ engine that's running on the PlayStation.

Dan Leibgold gave a talk about their system at RacketCon a few years ago: https://www.youtube.com/watch?v=oSmqbnhHp1c


They switched to C++ because they were bought by Sony and integrated into their landscape. They thought that sharing C++ code would be useful. As it turned out, they put Scheme back into their production pipeline.


Oh, good to know!


> I think the challenge with mainstream adoption of both Lisp-like and functional languages is the syntax

Ironic(?) considering there is almost no syntax to Lisp.


There aren't a lot of symbols in Lisp, but there's plenty of syntax.

Using Racket:

  (if bool then else) instead of (if bool then) or (if (bool then) (bool else))
  (if (> x y)
    (x)
    (y)) ;fails for numbers because (x) is considered a function call (even though 3 is an invalid identifier and thus can be assumed to always be a number).
  (define (fun x y) (...)) instead of (define fun (x y) (...)) or (define ((fun (x y)) (...)))
That's syntax.

Just because I'm not using {}'s here and infix there doesn't make it any less syntax. Those are just the two most basic forms, too; bring in loop? Forget about it. This also ignores things like quasiquote/unquote-splicing (` and ,@) or dotted pairs (x . y), but I'm not a Lisper so I don't know how often those actually come up.


Technically, this might be syntax, but as someone who is learning Racket (and programming) in the beginning stages, it's so much less syntax to remember than even Python (underscore, double underscore, with, decorators, list comprehensions, etc. all have their own syntax nuances). The only syntax I see in Racket is the s-expression, the quote, and the dots. Everything else is just the flow of programming logic. It really does free my mind to work on the problem domain itself! Been enjoying SICP so much.


Racket doesn't get quite as bad (well it does but it tries to keep things looking like S-exps) but consider CL's Loop macro http://www.unixuser.org/~euske/doc/cl/loop.html loop is (from what I understand) idiomatic too. Yes it's a macro (so is (defun ...) though) but it's syntax a CLer needs to know in order to deal with CL in the wild. Format is famously even worse.
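
For the curious, a tiny example of the keyword-driven clauses in question (standard CL, so the result shown should hold):

    (loop for x in '(1 2 3 4 5)
          when (oddp x)
            collect (* x x))   ; => (1 9 25)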


A lot of people abandon loop for precisely that reason (it's un-lispy) and use Iterate.

https://common-lisp.net/project/iterate/

Even the ITA/Google style guide says to avoid loop if possible:

https://google.github.io/styleguide/lispguide.xml#Iteration
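
For comparison, the same iteration written with Iterate keeps every clause inside ordinary parens (a sketch assuming the iterate library is loaded; its macro is usually invoked as ITER):

    (iter (for x in '(1 2 3 4 5))
          (when (oddp x)
            (collect (* x x))))   ; => (1 9 25)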


But the ITERATE macro is syntax, too.

Almost every macro in Lisp provides syntax.


Yes, but I thought his point was more that whilst Lisp has a very easy, simple and regular syntax, ie. (func arg1 arg2 (func arg3)) and so on, it's less simple and regular when you get to the loop macro (loop arg keyword arg keyword...). Hence why I mentioned the Iterate library as something a lot of people use to get back to the regular syntactical appearance.

It's one of the strengths of Lisp imo; that you don't need to think much about how the parser is going to interpret your code (ie. missing semi-colons, whitespace, use curly brace here, square bracket there, etc.), just stick to (func arg1 arg2) and all you're left with is your own logic errors.


    (func arg1 arg2 (func arg3))
That's the syntax of function calls.

But Lisp has a few special forms and zillions of macros. Most of them are syntax.

Lisp has IF. What is the syntax of IF?

    if test-form then-form [else-form]
Lisp has COND. What is the syntax of COND?

    cond {clause}*
    clause::= (test-form form*) 
 
Lisp has DEFUN. What is the syntax of DEFUN?

    defun function-name lambda-list [[declaration* | documentation]] form*
Now what is the syntax for LAMBDA-LIST?

    lambda-list::= (var* 
                    [&optional {var | (var [init-form [supplied-p-parameter]])}*] 
                    [&rest var] 
                    [&key {var | ({var | (keyword-name var)} [init-form [supplied-p-parameter]])}* [&allow-other-keys]] 
                    [&aux {var | (var [init-form])}*]) 
and so on...

> It's one of the strengths of Lisp imo; that you don't need to think much about how the parser is going to interpret your code (ie. missing semi-colons, whitespace, use curly brace here, square bracket there, etc.), just stick to (func arg1 arg2) and all you're left with is your own logic errors.

What you describe is just the data syntax for s-expressions. Not the syntax of the programming language Lisp.


> What you describe is just the data syntax for s-expressions. Not the syntax of the programming language Lisp.

Exactly. The data syntax is what most people worry about. The names of the verbs (funcs/methods/etc.) may change from language to language, but the data syntax is what trips people up. I think Lisp has one of the simplest and clearest. There are very few cases of "oh you can't write that there, only nouns are allowed in that position".

I agree with your point, but I think we're arguing slightly different points here ;)


It's debatable whether "simple and regular syntax" is a strength or a weakness. Lisp/Scheme might be too regular for their own good. Consider the following statements in Scheme, for instance:

    (lambda x (+ x x))
    (cond (> x 2) (+ x x))
    (if (> x 2)
        (do-this when-true)
        (also-do-this when-true))
They are syntactically correct (technically), but they are probably not what you meant. So you still have to pause and ask yourself how cond works... except the parser will not help you.

That is to say, a problem with s-expressions is that they are so regular that everything looks the same, and when everything looks the same, it can become an obstacle to learning. Mainstream languages are not very regular, but they are more mnemonic. I think Lisp works best for a very particular kind of mind, but that for most programmers its strengths are basically weaknesses.


    (if (> x 2)
        (do-this when-true)
      (also-do-this when-true))
In some other language:

    x > 2 ? doThis(whenTrue) : alsoDoThisWhenTrue();
Same problem. Maybe even slightly worse. For example it could be:

    x > 2 ? doThis(whenTrue) ; alsoDoThisWhenTrue();
To spot the difference between a colon and the semicolon: tough.


SBCL will warn or error at compile time on the first two, and there are similar issues to the third one in many languages; it's a semantics issue more than a syntactic issue.


An equivalent to iterate/loop where each compound form is replaced by an anonymous function and each binding by a dictionary entry could be implemented completely as a function. Is this also new syntax?

If not, how is the macro different other than implicitly changing the evaluation?

for a more simple example, why is the idiom CALL-WITH-FOO (implemented as a function) not syntax while WITH-FOO (implemented as a macro) is? What precisely is syntax is somewhat nebulous (if I use a regex library in C, have I added syntax to the language? Regexes certainly are syntax, despite being wrapped in a C string).
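
For concreteness, the two idioms being contrasted usually look something like this (a sketch; OPEN-FOO and CLOSE-FOO are made-up placeholders):

    ;; The functional version: no new syntax, the caller passes a closure.
    (defun call-with-foo (thunk)
      (let ((foo (open-foo)))
        (unwind-protect (funcall thunk foo)
          (close-foo foo))))

    ;; The macro version: same behavior, but it binds a variable for you --
    ;; which is exactly the kind of thing people call "syntax".
    (defmacro with-foo ((var) &body body)
      `(call-with-foo (lambda (,var) ,@body)))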


Even Racket says loops are un-Racket-like, and so does R. I avoid loops at all costs and use lists whenever possible.


Loop is idiomatic, but whether or not it is syntax is arguable, as it is implemented entirely as a macro.


I think you're conflating syntax with semantics.


I am not. Syntax is the structure, semantics the meaning (more or less).

(def (fun x y) (...)) is syntactically different than (def fun (x y) (...)) even if they are semantically equivalent.


(3) vs 3 is both, really. In, say, Smalltalk you can call 3 and it would return 3, because semantically it's an object, whereas in Lisp it's not callable (even if in CL it may be represented as a CLOS object, I don't know).

Syntactically, ( ) isn't actually a procedure call; we can see this in (define (id x y) (..)) or (let ((x 3)) ...). In the theoretically pure Lisp it's semantically just a leaf in the tree, but as part of an if-block in a real language it gets treated as a procedure call even if that makes no sense.


The syntax is the same. Both are s-expressions. The difference is how a particular implementation interprets them. In this example, it would depend on the semantics of def.


That's only the syntax of s-expressions - a data format.

The Lisp syntax is defined on top of s-expressions.

For example Common Lisp has a CASE operator. That's the EBNF syntax:

    case keyform {normal-clause}* [otherwise-clause] => result*
    normal-clause::= (keys form*) 
    otherwise-clause::= ({otherwise | t} form*) 
An example:

    (case id
      (10 (foo))
      (20 (foo) (bar))
      (otherwise (baz)))
The expressions are written using s-expressions as data. But still there is structure in those s-expressions, described by the EBNF syntax of CASE.

Every special operator and every macro provides syntax. Since users can write macros themselves, everybody can extend the syntax. On top of s-expressions.


That is interesting. I've always considered lisp in terms of denotational semantics. In fact, I wrote a toy lisp in which the complete grammar was basically

    list -> ({symbol | number | string | list}*)
and then it was up to the interpreter to decide the meaning of special forms. (I say "basically" because there was also desugaring of '(...) to (quote ...)).


Lisp as the idea has no syntax. Racket is a dialect of that idea, and the authors have decided to add syntax.


Lisp "as the idea" is not a programming language. Racket is a language, Common Lisp* is a language. No one writes code in the IDEA of Lisp; indeed no one can, because no computer yet can pull instructions out of whatever aether contains platonic ideals.

* using SBCL: (defun foo (x y) (...)) instead of Racket (define (foo x y) (...)) is again an example of syntax.


It's really such a simple syntax change from f(x) to (f x) yet it makes an enormous difference and opens up a whole new world of possibilities. Sure, there are homoiconic languages in which you can write macros that aren't lisps but the expressive power and ease of use suffers. Take for example macros in Julia (itself heavily lisp-inspired), they're possible but ugly and not nearly as seamless as macros in lisp.


That's exactly the problem: until you're used to it, it looks nothing like pseudocode. Contrast with Python. I'm not saying it's right, I just think this is the issue.


Really, the major difference syntactically is:

   (f x)
vs.

   f(x)
I've never really understood why people seem to have such a hard time with that.


Can you understand why most have a hard time with

  (* (+ a b) (+ c d))
instead of

  (a + b) * (c + d)
Hint: if the first notation is so superior, why don't math papers use it.


> Hint: if the first notation is so superior, why don't math papers use it.

Math papers usually use neither the first nor the second.

they use:

  (a + b)(c + d)
in the example you propose, and, reversing the operators so that the first style would have:

  (+ (* a b) (* b c))
and the second:

  (a * b) + (c * d)
math papers would usually have:

  ab + cd
So, I'm not sure "math papers do it differently" is the argument you want to use to advance your second syntax over the first.

Of course, since in lisp + and * are variadic rather than binary operators, they are a lot more like the pi and sigma operators applied to sets in mathematics than binary operators. Which are prefix, not infix. So, there's that.


Additionally there's more than one macro system out there to allow for infix math in Lisp... And for non-mathy things, in Clojure at least you're often using the threading macro -> or some variation of do/doto.


I do understand, but I'll also point out that that first expression will often (with longer variables or expressions in particular) be broken out like:

  (* (+ a b)
     (+ c d))
Which is readable, though not necessarily compact.

Also, * and + in the former aren't strictly the same as in the latter. * and + take an arbitrary number of parameters in CL. From [0], `(*)` => `1`. I can't test, but I believe `(* 2)` => `2` (the spec doesn't describe the case of a single parameter, unless I'm missing it). `+` is the same, but `(+)` => `0` instead, its identity value.
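
Concretely, at a REPL (standard CL behavior):

    (*)        ; => 1   (the identity for *)
    (* 2)      ; => 2
    (+)        ; => 0   (the identity for +)
    (+ 1 2 3)  ; => 6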

Order of operations is made more explicit, and, I've found, it's more useful to think of `+` and `*` as `sum` and `product` rather than `plus` and `times`.

[0] http://www.lispworks.com/documentation/HyperSpec/Body/f_st.h...

[1] http://www.lispworks.com/documentation/HyperSpec/Body/f_pl.h...


Math does also have `sum` and `product` in the form of Sigma and Pi. Of course, not exactly the same thing (since they operate over a set, not discrete elements).

I would venture to say that the reason infix notation is naturally preferred is related to our psychology, the same way most human languages are SVO (Subject Verb Object) or SOV. VSO languages (Lisp like) are less prevalent.

In general my opinion is that when a majority vastly prefers one alternative, there is usually a strong reason for it (even if it may be irrational) and it's foolish to go against the grain.


I seem to recall that SOV (reverse Polish notation) is marginally more prevalent than SVO among the world's languages... though it is true that most creoles are SVO, which does at least seem to indicate that it's a default of sorts.


Infix is mostly used, in programming, within mathematical and logical expressions. But the majority of my code spends its time in some kind of chain of function calls, which has the verb first. Maybe if I did more with OO-languages I'd see it differently?


Interesting. OO syntax often is object.function(arguments), which is subject-verb-object order. I never thought of it that way before. You can throw some adverbs in among the arguments, too.


That corresponds with how Java, and especially Obj-C, and super especially AppleScript programmers try to write code that reads like COBOL, er, English.


Lisp is VOO, not SVO.

Java is SVO.

C is VOO too.


If you have Quicklisp[1] installed you can install the "infix" package and get infix notation in Common Lisp[2]:

    $ sbcl
    This is SBCL 1.2.4.debian, an implementation of ANSI Common Lisp.
    More information about SBCL is available at <http://www.sbcl.org/>.
    
    > (ql:quickload 'infix)
    ; Loading package
    (INFIX)
    > #i(1 + 1)            ; addition
    
    2
    > #i(2^^128)           ; exponentiation
    
    340282366920938463463374607431768211456
    > (defun factorial (x)
         #i(if x == 0 then
              1
            else
              x * factorial(x-1)))       ; infix function call
    
    FACTORIAL
    > (factorial 5)
    
    120
    > #i(factorial(5) / factorial(6))
    
    1/6
    > '#i((a + b) * (c + d))      ; Put a ' before the #i() to see what code is generated
    
    (* (+ A B)
       (+ C D))
--

[1] - https://www.quicklisp.org/beta/ [2] - Don't know if there is a similar package for Scheme.


> Don't know if there is a similar package for Scheme.

I have to agree with others in the thread that infix in Lisp/Scheme is not the convention, and IMO an awkward fit. I don't recall encountering infix in any published/shared code I've seen; it may exist, but to learn Scheme, becoming comfortable with s-expr notation is definitely necessary.

However, there is SRFI 105[0], which describes "curly infix expressions". It's implemented in Guile 2.x and possibly available in a few others, but it evidently hasn't had a lot of uptake among Schemes.
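
A small taste of what SRFI 105 looks like (a sketch based on the SRFI, in a reader that supports it such as Guile 2.x):

    {a + b}          ; reads as (+ a b)
    {a + b + c}      ; reads as (+ a b c)
    {a * {b + c}}    ; reads as (* a (+ b c))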

[0] http://srfi.schemers.org/srfi-105/srfi-105.html

Edit: added URL


I wouldn't recommend using infix libraries if you really want to get into Common Lisp though. They're a bit of a crutch for people coming from other languages, but that's it.

Pretty much the whole language is based on Polish notation. The sooner you realise that + - * / are just function names like any other, the better you'll do.

For example:

  (+ 1 2 3)

  in plain symbols is just:

  (function parameter parameter parameter)
  
  But if I were to write my own addition function:

  (addition 1 2 3)

  it would also be:

  (function parameter parameter parameter)

  and so is:

  (http-request "http://www.google.com")

  (function parameter)
If you use infix notation, you're writing half your code with completely different semantics from the other half. I can't imagine it helping people really get a proper grasp of how Common Lisp works.


Nobody claimed Lisp syntax was optimal for math papers. Neither is C syntax. The Lisp syntax does have advantages for program source code. Sure, it has disadvantages too. Everything is a compromise.

The Lisp syntax is so incredibly controversial, and that fact itself is incredibly strange to me. I see it as a pragmatic engineering decision: let's represent source code as syntax trees, and then sophisticated editing modes become straightforward, macros become straightforward, and the syntax becomes very uniform.

This big thread indicates another reason Lisp isn't popular: because people keep arguing back and forth about the textual syntax, rather than discussing actual experiences with using it.


There are actual features which make Lisp a bit more difficult to understand and a few are related to syntax: especially the code as data feature. Some elements of the language have different purposes: both as code and as data. Symbols can be data and they can be identifiers. Lists can be data and they can group elements of the programming language. Others have only one purpose. For example an array is always data.

Symbols and lists behave differently depending on the context:

Examples for Lisp snippets:

   (foo bar baz)  ; it could be a macro, function or special operator form

   (quote (foo bar baz))   ; here it is data

   (defun do-something (foo bar baz) (a b c))  ; here it is an arglist

   (defun do-something (a b c) (foo bar baz))  ; one element later it is a form
These contexts need to be learned, and the actual visual cues are a) the symbol in front and b) the structure of the expression.

This is puzzling for a lot of people. A few never really take that hurdle.


Yes, they learn the second syntax from birth. There have been arguments to teach the lisp syntax in mathematics due to it being easier to understand with multiple arguments:

    (+ x y z a) instead of (x + y + z + a)
Also, there are no order of operations problems with the lisp syntax like there are with traditional mathematical notation (unless you use parens, which makes it look even more lispy).


I found math major students to grasp Lisp much faster than C.

Therefore I would say the answer is simply familiarity and concern for the audience.


So why don't we write all binary operations that way? eg. x f y instead of f(x,y). I've always felt more comfortable with prefix notation, especially because it easily generalizes to more than two arguments. I think infix notation is an accident of mathematical history.


Welcome to Haskell, where anything can be infix :-)


That can be nice in a lot of situations. e.g.

    (+) 4 2
 vs. 
    (+) <$> Just 4 <*> Just 2
I usually prefer the applicative style above to

    liftA2 (+) (Just 4) (Just 2)
because it preserves the "form" of the original expression and generalizes to more arguments. ie. it doesn't require liftAn for whatever n number of arguments my function takes.


With all due respect, math papers don't, I think, pretend that their syntax makes any claims to optimality of any kind.

Indeed, most math syntax (in my experience) has a very explicit reason why it's used: historical circumstance, convention and common tradition.


Arithmetic is one place where infix notation is generally easier to read. If I were writing a program that basically just did a load of mathematics I may even consider using a different language.. However looking over the software I generally develop, I probably need to use arithmetic in about .01% of the code.


> Hint: if the first notation is so superior, why don't math papers use it.

As a mathematician I'd love to use it in my papers, but no reviewer would accept such a paper.


I have no idea why the people who write math papers make the choices they make.

But I'd like to point out that in your second example the parentheses are needed because infix notation is inherently more complicated.

In the first example however, the notation is far simpler: a simple list of function plus zero or more arguments.


Math papers use a very complex 2d notation last I've looked.


    c = sqrt(a*a + b*b)
    
    (set! c (sqrt (+ (* a a) (* b b))))


Something nobody has mentioned yet is that, in the C-style version, the precedence rules are eliminating some parentheses. You can't do that in Lisp (except maybe with a macro). But then, in Lisp, you don't have to remember precedence rules.

In this example, the advantage is on the C side, because pretty much everybody who knows any math knows that multiplication has precedence, and they can just read that syntax. If you have to go look at the precedence chart in K&R or Stroustrup before you know how to parse the expression correctly, well, then the Lisp approach is probably more efficient...


    (define c
      (sqrt
       (sum (sqr a)
            (sqr b))))
Which you read out loud like this: "c is a square root of a sum of two values, squared".

Easy to read as you see and easy to understand. This:

    c = sqrt(a*a+b*b)
is way harder to read.


Imagine you removed a parenthesis at the end. Would you even notice? It's not compact at all, and there are fewer visual cues.

Infix notation works better when it is applicable. Limiting the number of parentheses is also best when possible.

You can add parentheses and make it less compact if you want. You could theoretically write c=sqrt(a*a+b*b) as:

    c =
        sqrt(
            ((a *
                a) + 
                (b *
                    b)))
But that's just ridiculous.


> It's not compact at all,

You are now arguing against parens. You can have mostly prefix syntax without parens, with blocks delimited with indentation only. Scheme's sweet-expressions[1] are one such example. Anyway, please take my example, remove the parens and check if your argument still applies.

If it does, then it's down to the function names and your (common) misconception that "+" or "^" is somehow more readable, easier to understand or something than "sum" or "sqr". Where I simply disagree. BTW: why do you insist on using infix syntax for a couple of operators, while you use every other possible operator in a prefix notation and are happy with it? What is the difference between "sqrt" and "-" which makes it ok to use sqrt in prefix form?

> Limiting the number of parentheses is also best when possible.

No. It's only best if it aids readability. This is something that Lisp does rather well actually - there are many examples of equivalent Java and Clojure expressions where Clojure version has half as many parens. Getting rid of parens for the sake of getting rid of parens is counterproductive.

[1] http://srfi.schemers.org/srfi-110/srfi-110.html


Because your version takes up 6 lines! A simple one line expression!

And yes you can remove the parentheses, but not only does no one do that, it still takes up 6 lines. And then you have significant whitespace too.

>why do you insist on using infix syntax for a couple of operators, while you use every other possible operator in a prefix notation and are happy with it? What is the difference between "sqrt" and "-" which makes it ok to use sqrt in prefix form?

Because that's universal and standard for math notation. But also sqrt only takes one argument. If it took two arguments, then it would be perfectly reasonable to add an infix operator for it too. Many languages do add infix operators for everything from combining strings to ANDing booleans, etc, because they are so much more readable.


I think the thing about Lisp isn't the fact that its functions start with a paren. It's the fact that it uses function composition to write everything that makes it harder to keep track of. Most languages don't define their functions like this:

  c = sqrt(a^2+b^2)
  vs.
  define(c, sqrt(sum(sqr(a),sqr(b))))

  def getMaxValue(numbers):
      answer = numbers[0]
      for i in range(len(numbers)):
          if numbers[i] > answer:
              answer = numbers[i]
      return answer

  vs.
  (defun get-max-value (list)
	  (let ((answer (first list)))
	    (do ((i 1 (1+ i)))
	        ((>= i (length list)) answer)
	      (when (> (nth i list) answer)
	        (setf answer (nth i list))))))

  if you could only use python functions:
  defun(get-max-value, [list], 
		let(answer,first(list)),
		 do( (i,1,(1+ i)), 
		 	(>=(i, length(list)), ans), 
		 when( >(nth(i,list),answer), 
		 	setf(answer,nth(i,list)))))

  defun(get-max-value, [list], let(answer,first(list)), do( (i,1,(1+ i)), (>=(i, length(list)), ans), when( >(nth(i,list),answer), setf(answer,nth(i,list)))))


The actual Lisp version is:

    (defun get-max-value (list)
      (reduce #'max list))
or even

    (defun get-max-value (list)
       (loop for element in list maximize element))


Not really. We are demonstrating how multiline statements become hard to read in lisp because in practice you can only use function calls to write everything.

Any language where you write an entire function as a huge one-liner expression with functionality in nested function calls is hard to read. It's the behavior, not the syntax per se.

What the actual behavior is doesn't matter as much, even if you can reduce both of them to one liners in many languages.

ex:

  def get-max-value(list)
    reduce(:max, list)
  end


Not really, since Lisp does not only have function calls, but special forms and macros.

In actual Lisp practice, one uses macros, special forms and function calls.

You seem to have failed to understand the difference.

There are two REAL reasons why Lisp is harder to read than some other languages:

* the syntax looks and works slightly differently, and most programmers have been trained on other programming language syntax. With training, this is less of a problem.

* Lisp uses a data syntax as the base layer of the programming languages and encodes programs as data. So the syntax of Lisp comes on top of s-expressions. Very few other programming languages are doing that and as a consequence it complicates a few things. The user of Lisp has to understand the effects of code as data. This is something you don't have to understand in Java or Python. It can be learned, but it has to be learned.

At the same time, this code as data principle of Lisp gives a lot of power, flexibility and new capabilities. It makes Lisp different and in some way more powerful than Java or Python. The added power comes from easy syntactic meta programming in Lisp, which neither Java nor Python provide. This has also consequences for interactive programming, since programs can be written and manipulated by programs under user control.
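
A tiny illustration of the code-as-data point (an added sketch in standard Common Lisp):

    (defvar *form* '(+ 1 2 3))  ; an ordinary list -- plain data
    (eval *form*)               ; => 6, the same list treated as code
    (reverse *form*)            ; => (3 2 1 +), manipulated as data again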


I already mentioned sweet-expressions in another comment. And also, there's one important thing we're forgetting when discussing syntaxes, which is the fact that we very rarely read the code without syntax highlighting, so the examples you posted actually look like this: http://pygments.org/demo/3781004/

This does change the situation somewhat.


> if you could only use python functions

I'm not sure if this is what you're talking about but there actually is a Lisp where you can call Python functions. It's called Hy[1] and I encourage you to take a look, it borrows some good solutions from Clojure, but generally is quite an acceptable Lisp :)

[1] http://hylang.org


No.

Why do mathematicians not use s-exps but syntax that is much more similar to C? Reading "sum", "mul", etc. takes longer than if you have visual anchors like * and +. And infix is an advantage for simple expressions, because the operators split the arguments, whereas with sexps you have to parse from left to right and count parens.


> Why do mathematicians not use

Please tell me why should I care. No, really - I'm a programmer, not a mathematician.

> Reading "sum" "mul" etc. takes longer than if you have visual anchors like * +.

Citation for this?

IMO it's exactly the opposite, but I may be wrong. Some kind of reference would be nice.

> And infix is an advantage for simple expressions, because they split arguments, where as with sexps you have to parse from left to right and count parens.

Ok, so 2 ("sum" vs. "+", 3 vs. 1 char) additional characters are bad, because they take longer to read, but for example 3 additional characters here:

    (+ a b c d)
    vs.
    a + b + c + d
are good, because they take longer to read. That's interesting.


>> Reading "sum" "mul" etc. takes longer than if you have visual anchors like * +.

> Citation for this? IMO it's exactly the opposite, but I may be wrong. Some kind of reference would be nice.

I know it from myself and don't think I have to provide evidence that, by and large, most people work like this. Reading and interpreting text is just WAY more complex a process, and thus much slower, than associating a shape with a meaning.

For example, application designers have known for a long time that it's important to build a symbolic language (icons etc) because that's just way faster (once you have learned what the symbol means, for example with the help of a tooltip).

There's another guy who explained this at length

http://c2.com/cgi/wiki?LispLacksVisualCues

Search for "top" throughout the page.

> (+ a b c d) vs. a + b + c + d

Yes. But as explained in my other comment, that's not optimizing for the common case.


> I know it from myself and don't think I have to to provide evidence that by large most people work like this. Reading and interpreting text is just WAY more complex a process and thus much slower than associating a shape with a meaning.

I don't think there is a difference in speed between reading "sum" and "+". You don't read the word "sum" letter by letter: you see it as a whole token and your brain recognizes it instantly.

> For example, application designers have known for a long time that it's important to build a symbolic language (icons etc) because that's just way faster (once you have learned what the symbol means, for example with the help of a tooltip).

You're talking GUI, which is different from writing and reading code. There are, for instance, far fewer GUI elements visible on the screen than there are identifiers even in a short snippet of code, and there is much more context available for deduction in the code than in the GUI. I don't think the two situations - recognizing GUI features and recognizing and understanding identifiers in the code - are comparable.


Come on. It's totally obvious that the shapes of * and + are much simpler and much more distinct than "sum" and "mul" (which by the way are relatively short and simple examples for words).

Humans have excellent shape recognition -- recognizing (and differentiating) a tree and a person happens subconsciously, effortlessly. Interpreting the words "person" and "tree" takes way more effort.

Similarly, humans have usually very good spatial sense. If there are persons to the left and to the right of a tree, it is effortless to recognize that they are "separated".

> You're talking GUI, which is different than writing and reading code.

No. I'm talking perception.

> There are, for instance, much less GUI elements visible on the screen than there are identifiers

That depends. There are very complex GUIs out there. But let's assume it for a moment. (By the way, that typically means the code is not good (weak cohesion).)

> there is much more context available for deduction in the code than in the GUI.

That is not supportive of your previous argument: The more identifiers, the less context per identifier.

> I don't think the two situations - recognizing GUI features and recognizing and understanding identifiers in the code - are comparable.

It's both about perception. It's very, very important that programmers can focus on their work instead of wasting energy building parse trees in their minds, incurring severe "cache misses". Again, take this simple commonplace example:

  (sum (mul a (minus (b c)) d)
  a*(b-c) + d
If you don't think there's a huge difference I can't help you. I'm sure I need about three seconds to parse the sexp as a tree and figure out what goes with what. Then I have to go back and interpret the operators.

Conversely, the infix/symbol operators example I can map out with minimal, and linear, movement of the eyes. In most cases I don't even need to parse it as a tree -- it's almost a sequence. On a good day, it costs me maybe a second to parse the thing and extract the information I need.

Another advantage of symbols for arithmetic is that they give a sense of security, because one can intuitively infer that they have "static" meaning, while words are usually reserved for things that change, i.e. mutable variables. Being able to infer non-mutability based on shape alone is a huge advantage.


> Come on. It's totally obvious that the shapes of * and + are much simpler and much more distinct than "sum" and "mul"

I disagree that it's obvious. Moreover, I don't believe there is a measurable difference between the speed of recognizing "sum" and "+", once you're equally familiar with both.

> The more identifiers, the less context per identifier.

I don't believe it's that simple, but we're starting to go into semantics (which are part of the comprehensibility of code, but not part of its readability, I think).

> If you don't think there's a huge difference I can't help you.

I think you can help yourself: just go and train yourself in reading prefix notation, like I did. Then get back to this example and then tell me again that there is a huge difference.

> I'm sure I need about three seconds to parse the sexp

I don't even know how to measure the time I needed to read the sexp, it was that short. And I even instantly realized that you've put parens around "b c" in the "minus" call, which would cause an error in most lisps.

> Conversely, the infix/symbol operators example I can map out with minimal, and linear, movement of the eyes.

That's why I used newlines and indentation in my example above. To take your example:

    (sum (mul a (minus b c))
         d)
This also reads linearly, just in a different order than you expect. This doesn't make it objectively harder or slower to read, it's just unfamiliar to you.

Also see my other comment on familiarity: https://news.ycombinator.com/item?id=11180682


I'm not interested in differentiating between readability and comprehensibility. If I want to work on some code I need to comprehend it. That starts "in the small" with the mechanical aspects of "readability", if you will. Where to draw lines is not relevant. Every aspect of the process of comprehension is important. The "small" aspects are more important than you might think, because they affect every line of code, whereas there are fewer instances of the more global aspects of comprehension.

Like in a binary tree, where half of the elements are in the lowest level.

> you've put parens around "b c" in the "minus" call

You have a point. One pair of Irritating Superfluous Parentheses less.

    > (sum (mul a (minus b c))
    >      d)
Even the consideration to sprinkle such a trivial expression over multiple lines hints at the superiority of a * (b-c) + d. It's just the most straightforward thing to do. No far-fetched argument can change that.

I'd love to see eye-tracking data which show the tradeoffs between various syntaxes.

The regularity and the simplicity of sexps is of course good for computers, because they can barely associate and can't learn new tricks (they have fixed wiring). But humans have streamlined their languages (which also includes syntax; again, I'm not differentiating here) to their environments since forever.

Sexps are also good for abstraction and meta programming. But as we all know abstraction has a cost and there is no point in abstracting an arithmetic expression. And most code, for that matter.


> I'm not interested in differentiating between readability and comprehensability.

Fair enough, but then please stop using single letter variable names, add type annotations where applicable, provide docstrings and contracts for functions. Comprehensibility is so much more than syntax that I think mixing the two will make for even more interesting, but even less fact-based discussion.

> I'd love to see eye-tracking data which show the tradeoffs between various syntaxes.

Yeah, that would be very interesting. The thing is, there is no such data available, but you still are convinced that one kind of syntax is better than the other. I'm not - from where I stand the differences and tradeoffs in readability of syntaxes, once you know them equally well, seem too minor to measure.

> Even the consideration to sprinkle such a trivial expression over multiple lines

No. It's just different way of getting to the same effect. I don't see why would one be worse than the other (splitting things using infix operators vs. splitting things using horizontal and vertical whitespace).

Other than that, you completely avoided the familiarity issue. Do you think that we're genetically programmed for reading infix syntax? If not, then it means we need to learn infix syntax just like any other. My question was, would someone not yet exposed to infix propaganda have a harder time learning infix (with precedence rules and resolving ambiguities) or prefix?

You also ignored my question about the difference in readability when you are equally well trained in both syntaxes. You can't compare readability of two syntaxes fairly unless you have about equal amount of skill in both. And the fact that readability is influenced by skill is undeniable. So, in other words, are you sure you're as skilled with sexps - that you wrote comparable amount of code - as with infix? Honestly asking.


> Comprehensibility is so much more than syntax

Absolutely. It's a tender flower.

> No. It's just different way of getting to the same effect. I don't see why would one be worse than the other (splitting things using infix operators vs. splitting things using horizontal and vertical whitespace).

It's very important since size matters. Efficiency of encoding and cost of decoding (~ perception) matters. But if you don't think it makes a difference -- fine, you are free to read braille instead of plain text even if you have perfect eyesight. You can also add three layers of parens around each expression if you think that's more regular.

> Do you think that we're genetically programmed for reading infix syntax?

No. There's this fact that all combinations of basic grammar are represented in natural languages: SVO, SOV, VSO, VOS, OSV, OVS. And then there are some programming languages which don't differentiate between subjects and objects, but go for (OVO), VO, VOO, VOOO... (or concatenative style OV, OOV, OOOV...). Which is great since the goal of formalism is to be "objective". (Note that Object-oriented programming is actually subject-oriented programming from this standpoint. It's not "objective")

Instead I say that it is more efficient if syntax is optimized for the common cases. Shorter is better, if the decoding won't produce more cache misses. Infix and symbols don't produce cache misses for the vast majority of humans, in the case of arithmetic (read: mostly sequential, barely tree-shaped) expressions.

Sexps are inherently unoptimized for the common cases. They are "optimized for abstraction": for regularity. It is an explicit design goal to not differentiate things which are different "only" on a very concrete level. Instead of content, form is accentuated. This is not suitable for the > 95% of real life software that is just super-concrete and where abstraction has no benefits.

I'm sure I have now given 5 to 10 quite plausible examples which support the standpoint that symbols-and-infix arithmetic is good for humans, based on how their mind works. You haven't provided any counter-arguments but just shrugged everything off. But thanks anyway for that. I think I'm satisfied now with the examples that came out.

> are you sure you're as skilled with sexps [..] as with infix?

No. Never will be.

Are you? Show me a Lisp program with more than casual usage of arithmetic and tell me why you consider it readable. By the way, the first google hit I just got for "lisp arithmetic readability" is http://www.dwheeler.com/readable/


> Infix and symbols don't produce cache misses for the vast majority of humans

You tried to prove your theory by finding positive evidence. But the evidence is very weak and far fetched.

> which support the standpoint that symbols-and-infix arithmetics is good for humans, based on how their mind works.

Given that we largely don't know how the mind 'works', that's a weak argument.

> Show me a Lisp program with more than casual usage of arithmetics and tell my why you consider it readable.

Given that a lot of math code is expressed in low-level Fortran, I'll take Lisp every day.

From a statistics system in Lisp:

    (defgeneric gaussian-probability-density (x params)
      (:documentation "general gaussian density method.")
      (:method ((x number)
                (params gaussian-probability-univariate-parameters))
        (/ (exp (* -1.0 (/ (- x (mean params))
                           (standard-deviation params))))
           (sqrt (* 2.0 pi (variance params))))))
I find that perfectly readable.


As a mathematician, sum(whatever) or product(whatever) reads just fine. There are, in fact, a lot of uses of sums and products, so after a while they feel pretty natural.


Yes, sum(whatever) is fine. What is "whatever"? If it's the common case of two operands, then I don't think you're making a point against a + b.

And you don't think (sum (mul a (minus b c)) d), or (+ (* a (- b c)) d) for that matter, is more readable than a * (b-c) + d, do you?

> There are in fact, a lot of uses of sums and products, so after a while, they are pretty naturally.

I think you are talking about summing up a collection (like, an array, a matrix, etc.) as opposed to building an expression tree. Of course, sum(myIntList) is just fine. That's a whole different story.

There are also the rare cases where you have to sum, like 6 integers. (sum a b c d e f) might not be worse than a + b + c + d + e + f. But that's by far not the common case in most problem domains. The common case is like a*(b-c) + d.


If my cup hadn't been empty, you would owe me a new keyboard.

What? You were serious? Um, no. Just no. Your way is not easier to read - at least, not for (I would guess) 95% of programmers, and 99% of humans.


How do you know? Any proof/any scientific source, or is it just lore and how you personally feel about this?


I already admitted that it was a guess. But I still think I can defend it.

Starting in elementary school, everyone learns to read math notation. By high school, everyone knows what

  c = sqrt(a*a + b*b)
means. The Lisp version may be easier to read for those who have spent enough time using Lisp. That's not the majority of programmers, though, and it's only a tiny minority of the general population.

Do you think that, to a non-Lisp programmer, the Lisp version is easier to read? Do you think it is easier to read to a non-programmer who has had high school math? Or is it just easier to read for you?


> Starting in elementary school, everyone learns to read math notation. By high school, everyone knows what

We're either talking about objective readability or personal familiarity. What you say is that, after extensive training for many years, it is easier for people to read notation they were trained to read. This is both true and utterly uninteresting.

What is interesting, though, is how much training you need to read prefix and how much training you need to read infix. It's obvious that infix takes more time to learn: operator precedence and things like using "-" in both infix and prefix forms make it objectively more complex than prefix notation. You just forgot how much time you spent learning it.

> Do you think that, to a non-Lisp programmer, the Lisp version is easier to read? Do you think it is easier to read to a non-programmer who has had high school math?

Again, this is not interesting at all. You're talking familiarity, not readability. Of course, it's easier to read something you've been taught to read. To make this more objective, take an elementary school kid - who wasn't exposed to years long infix propaganda - and check both notations' readability with them.

Personally, I learned to read just about any kind of notation used in programming. From my observations, there are only minor differences between the speed of comprehension when using different notations - once you've trained enough. The difference is how much training you need. I can tell you that reading J - an infix language, it's an APL descendant - took me much, much longer to master than reading Lisp.


Learning Lisp syntax requires just a very short introduction. In SICP, it is said that they never formally taught Lisp in class. The students just pick Lisp up in a matter of weeks.


Yes, whichever ever one you have seen a million times before looks more readable. So?


It's probably because in Algol-derived languages, when you encounter parentheses, something weird is happening -- something you've got to put extra thought into.

Maybe the ordering of something is being forced. Maybe something else is going on, but whatever it is requires more thought than things that are parentheses free.

So you look at Lisp and your brain locks up the brakes, with "WTF is going on here??? I'm out".


I guess that's ironic, but with people coming up used to all the ornate syntax, one of the common balks is "all those parentheses, it all looks the same" and "there is no syntax, how do you read this."

It's like going from Arabic numerals to counting by groups of five; initially, it feels like you're losing expressive power. And, of course, at a glance, you can't read "|||||||||||||||||||||||||||||" as quickly as you can "28".


Here's a good wiki on the topic:

http://c2.com/cgi/wiki?LispLacksVisualCues

After a brief period of usage 'the parens disappear' and you just read the code by indentation.


So, uhh... why not use a notation where nesting is expressed via indentation, then? I don't understand how the syntax can be considered superior if the way people actually cope with it is to hide another syntax inside it.

I was dumping ASTs as part of a little language project recently and my first impulse was to render them as S-expressions. Alas, it just wasn't readable; I couldn't make any sense of it. Indented YAML style lists, though? The structure pops right out and the information I wanted was immediately obvious. There were no constraints here, I was free to render text in any way that suited me; the Lisp style syntax just wasn't helpful.


A Lisp with significant indenting (like Python) would bear a strong resemblance to Haskell.


I prefer having a parser produce an abstract syntax tree. From an expressiveness viewpoint, you get a good deal of raw power from being able to manipulate this directly instead of having to instruct a lexer/parser to do it for you. I don't necessarily think this is a good thing though.


I know almost nothing about Lisp, but what comes to mind is car and cdr, which aren't exactly the most descriptive keywords of any programming language I have seen.

(I think they are replaced in modern versions but my point still stands as I remember the old ones, not the new)


Yes, sometimes 'car' and 'cdr' are replaced with 'first' and 'rest' and are some of the first things you cover when learning the language, so I don't think the names would really be hindering adoption for those that try to learn the language. The benefits of car and cdr are that they can be composed: (caadr l) instead of (car (car (cdr l))) for example.
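
For example, at any Common Lisp REPL (the nested list is just an illustration):

  (caadr '((1 2) (3 4) (5 6)))             ; => 3
  (car (car (cdr '((1 2) (3 4) (5 6)))))   ; => 3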


Well, if you use "first" and "rest", then (caadr l) could become (firrest l) or something like that...


I like Racket's second, third, fourth,... etc. [1]

I don't think there's an equivalent to cddadr for example, but at that level of deconstruction you're better off abstracting the data structure (maybe a struct [2]) or using some other mechanism like match [3]

[1] http://docs.racket-lang.org/reference/pairs.html?q=second#%2...

[2] http://docs.racket-lang.org/reference/define-struct.html?q=s...

[3] http://docs.racket-lang.org/reference/match.html?q=match#%28...


Yes but can you do caaaaaar or caaddaar or cdaar, etc. with head and tail?

In 5 characters with cadar you can express walking along a tree structure to get exactly the node you want.


car, cdr, cadr, and friends are discouraged in some Lisp communities where pattern matchers are available.


Sure. caaaaaar -> heaaaaaad. cadar -> heaiad. It's one more character, but if you really want to express yourself that way, there's no reason you couldn't...


If you mean Lisp as a straightforward expression of "Lambda calculus" then I would agree.

However, Lisp as in Common Lisp most certainly has a good amount of syntax. And let's not forget macros which amount to user defined syntactic extensions.

Here are some examples of syntax built into the standard:

Lambda lists have varying syntax depending on context. http://www.lispworks.com/documentation/lw70/CLHS/Body/03_d.h...

Declarations have both standardized and implementation defined syntax for introducing information into the compile-time environment: http://www.lispworks.com/documentation/lw70/CLHS/Body/03_c.h...

Type specifiers introduce new syntax for both standardized and user-defined types: http://www.lispworks.com/documentation/lw70/CLHS/Body/04_bc....

Logical pathnames: http://www.lispworks.com/documentation/lw70/CLHS/Body/19_ca....

Feature expressions: http://www.lispworks.com/documentation/lw70/CLHS/Body/24_aba...

And of course, programmable reader macros, which are how the syntax for every single language primitive is introduced: http://www.lispworks.com/documentation/lw70/CLHS/Body/02_d.h...

Here's an example of a complex standardized macro with its own domain specific syntax (loop): http://www.lispworks.com/documentation/lw70/CLHS/Body/06_a.h...
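
To give a flavor of that mini-language (a small illustration, not an excerpt from the standard):

  (loop for i from 1 to 10
        when (evenp i)
          collect (* i i) into squares
        finally (return squares))
  ;; => (4 16 36 64 100)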

Let's not forget that most of what one might think of as built-in features in Common Lisp are actually standardized extensions to the language built with macros.

All of the power, expressivity, and extensibility of Common Lisp is what makes it my favorite programming language. It's what makes everything like the above possible and gives power back to the user. But ignoring the syntactic complexity will not win us any followers!

TLDR: Common Lisp isn't simple, but it exposes one of the most powerful and empowering programming environments we have.


> If you mean Lisp as a straightforward expression of "Lambda calculus" then I would agree.

That's what I was talking about. The language, being programmable, especially with read macros, can be used to create a very syntactically full language, but at its basic core level, there is really just '(', ')', '.' and symbols.


The software I produce at work is Ruby. I don't know that it's popular -- but it is central to a large company's operations.

But I generate and validate that Ruby code with Common Lisp -- in other words, I write Lisp that writes correct, idiomatic Ruby.

I would be very surprised if I were the only person doing this.


Have you written about this somewhere? I'd love to read more about it.


Is it possible that you can share some information on this, if allowed by your company / employer? This is the first I've heard of this; it sounds quite interesting.


I understand why you might want to do that, but what are your reasons for doing it?


Popularity is not the right measuring stick.

Lisp is more like a research language than an implementation language for bean-counting apps. A better question is what bleeding-edge things have originated in, are being done in, or have been done in Lisp.

John Carmack is doing VR research with Racket.

Christian Schafmeister is doing molecular metaprogramming.

Raytheon implemented a signal processing analysis pipeline for missile defense in Lisp.

Commercial Lisp vendors keep lists of some of their customers. Specialized CAD programs like Bentley PlantWise are not popular, but they are very complex.

http://franz.com/success/

http://www.lispworks.com/success-stories/



pgloader (Dimitri Fontaine) is a great piece of software.

There's also cl-abnf from the same author and project which is another great example of the expressive power of common lisp.

Also, everything Fernando Boretti does is awesome ;)

https://github.com/eudoxia0 http://eudoxia.me/article/common-lisp-sotu-2015/ https://github.com/dimitri/cl-abnf


I'm regularly hearing about web applications written in ClojureScript, a.k.a. Clojure (a dialect of Lisp, [1]) compiled to JavaScript. Recently I've heard of:

- https://github.com/asciinema/asciinema-player

- https://precursorapp.com/ (not open-source)

[1] http://www.clojure.org/


There are also others, such as Appshare, which I built.


Clojurescript is not "also known as" Clojure.


That's not what he said:

aka. Clojure (...) compiled to JavaScript

You stopped reading a few words too soon :)


Ha. I misread. Thanks for the correction.


> Clojurescript is not "also known as" Clojure.

The GP says "ClojureScript, a.k.a. Clojure ... compiled to Javascript, which (I believe) is accurate.


Emacs?

Edit: Urbit has a Lisp (http://urbit.org/)


The underlying 'functional assembly' VM language of Urbit (Nock) is a lisp that works on arbitrarily large integers. Here's a compiler for it I wrote a while ago, written in Common Lisp:

https://gist.github.com/burtonsamograd/29103c2dfaa67f4fd344


Interesting.

The music on your home page reminds me a bit of the Utopia soundtrack (like the sound of the frog).

https://www.youtube.com/watch?v=_ZYVb8Q5i9w


Thanks. I wrote it myself. I just put out a new album yesterday:

    http://kruhft.bandcamp.com/album/listener


Urbit is deeply reactionary.


Curtis Yarvin, yes.

Brendan Eich is a bit too.


If you count dynamic websites as programs then you're using one right now. ;-)


A better question: "What impressive programs are written in Lisp?". Or "Why isn't there a Squeak-like Lisp machine environment, where 'compatibility' doesn't have to matter?". If a language is incredibly productive and programmable, why don't we already have a VPRI-like STEPS environment in 20,000 lines?

http://www.vpri.org/pdf/tr2011004_steps11.pdf


State of the Common Lisp Ecosystem, 2015

http://eudoxia.me/article/common-lisp-sotu-2015


> What impressive programs are written in Lisp?

I'd only call this an anecdote, but one or more benchmarks assert that cl-ppcre, Common Lisp's Perl-compatible regex implementation, is faster than any other, including Perl's.

The larger question I'm intuiting from your post, "Why doesn't language power make a difference in practice?" I don't have an answer to.


> Why doesn't language power make a difference in practice?

Depends on your definition of "make a difference in practice". If you mean "make the language become one of the dominant ones", yeah, that doesn't seem to have happened. Either Lisp is less effective in the large than one would expect from its power, or it's less powerful in practice than people think, or power has almost no relation to language dominance.

But if you mean "make a difference to the user", well, it lets the user more easily write the program that the user wants to write. In practice, that makes a difference - to that user.


This is covered in the article. You might say it's the central point of the article - lisp attracts "lone wolf" programmers who want to build perfect abstractions closely mapped to the real-world problem; as opposed to projects that require many man-years of effort run by MBA's who want fungible "resources" to do their tiny-bite-sized pieces according to spec. The philosophy is different.


There is OpenGenera (but more than 20k sloc):

https://github.com/ynniv/opengenera


AutoCAD uses AutoLISP for scripting and extension. It's been a significant part of AutoCAD since 1986.

http://ronleigh.com/autolisp/ahistory.htm


emacs (mostly)


Not many, but even so, this is never the right question to ask.

That question would be: what programs are written in <LANGUAGE> that couldn't benefit from being rewritten in another language?

And most often, the answer to that question is none.

Because languages matter a lot less than language fanatics want you to think.

As for Lisp, I used to be a total fan until I realized the importance of a sound static type system, and now I will never go back. Lisp will never go anywhere because this is the 21st century and we know now that static type systems are an absolute requirement for modern programming.


Right. That's why no one seriously considers Javascript for any new project, and no one would propose the idea of using it on a server.

I am not saying that you are wrong in liking static typing, but arguing that dynamically typed languages are non-starters in this decade is a statement that is easily disproven by the existence of Javascript.


Replicating "pre-AI-winter" success is like trying to replicate "pre-dot-com" success, et cetera. The past is the past. Such things all depend not on technical reality but on what was the predominant belief among some non-technical decision makers. Old Lisp was funded by institutions. There were so many dialects because multiple institutions were actually developing Lisp using local talent. That required funds, and dispensation of funds requires that someone is rationalizing that dispensation based on some beliefs, and those beliefs are very distant from specific technical ideas like "multiple dispatch is a beneficial in OOP".

What remains is that the Lisp design contains numerous elements which are great ideas, which, if they are nicely implemented in their proper form, give you a great programming language.

That doesn't translate to being able to return to the time when you're showered with institutional money to just go wild hacking on whatever you like.


I don't think there's much explanatory power here at all, actually.

Some of the most popular languages today, from a web development perspective at least, are JavaScript, Ruby, and Clojure.

One of them is an actual Lisp with full-blown macros and a lot of Emacs users.

The others are also extremely expressive, dynamic, and amenable to metaprogramming.

I don't think there's a single, simple, beautiful explanation for why Common Lisp isn't more widely used. Mostly it just has an undeserved bad reputation... partly because articles like this show up all the time, reinforcing the stigma of Lisp as a weird language for bipolar geniuses, or whatever.


But one of the most important features of Clojure is that it's hosted on mature, decidedly non-Lisp platforms with plenty of well-maintained libraries, and it has clean, concise interop. If that's what a Lisp needs to become popular, then I'd say that's evidence for this theory, not against it.


I am a big fan of Clojure, but you're definitely overstating its popularity relative to Javascript and Ruby. I would put at the very least PHP and Python ahead of it as well, and Java and C# probably see more web work done with them. Clojure is just not in the same league in terms of popularity as any of these.


It's still very possible to hire Clojure developers, people know about it and are willing to learn, it's got a good reputation, and so on.


True, looks like someone is living in a Clojure bubble.

- Clojure user :)


Agree. I think there is one reason only why lisp/scheme are not mainstream: community.

1. Ruby + Clojurescript have prominent, opinionated community leaders who spend an immense amount of resources on proselytizing and preaching agreed-upon best practices (hell, 90% of Ruby development is done within a framework that all but forces you to write code in a particular, proven way).

2. As for JS: web developers are all forced to use it, which naturally leads to less fragmentation (if Chicken Scheme was the only scripting language available on a UNIX OS, by necessity some libraries would become dominant). Even then, most large enough JS codebases are ghastly.

---

Contrast that with Lisp/Scheme. I am now learning scheme, and love the language to point that it's depressing not to get to work in it on a daily basis. That said, the most impenetrable aspect of scheme/lisp is not the language, it's the community:

I don't think I'm exaggerating if I say that the entirety of the scheme community (with the exception of racket) is scattered into completely fragmented groups of under 20 individuals, all of whose landing pages look like they were designed in 1993, filled with broken and outdated links; libraries are maintained and abandoned willy-nilly, communication happens mostly in the form of mailing lists... where do I go to watch a talk? where do I read about the latest libraries and projects? what are some of the agreed-upon best practices?

"Read the SRFI" is not documentation (if you can figure out the relevant SRFI in the first place).

Example 1: this is the package manager Gambit Scheme recommends http://snow.iro.umontreal.ca/ -- a noble attempt at unifying scheme packaging across implementations. DEAD (as far as I can tell). And why? Because each scheme implements its own unique module system and its own unique subset of RnRS.

Example 2: need an LMDB library? Well, the chicken scheme version is a port of a version that exists only within the LambdaNative framework (which is implemented in a different scheme). Why does it have to be ported? because the two schemes have an incompatible way of handling c-bindings. So one can't just write a library that's R5RS compliant and assume it works across implementations. So we end up with two developers maintaining two versions of the same library, instead of both focusing on the same one. And when one of them decides to start using leveldb on his projects instead? The project is DEAD.

(I'm not arguing for a singular implementation of scheme. Part of the beauty of the ecosystem is the variety in implementations suited for different purposes. But so many of the incompatibilities are trivial, and are a community problem, not a technical one. If everyone agreed to implement, say, R7RS-small and better efforts were made to standardise the trivial, like how modules are defined, or how c bindings are to be declared, we could have a central repository of cross-compatible packages.)

The problem isn't the language or its power, it's a lack of community building efforts, which makes it extremely frustrating for a newcomer, even one who is absolutely enchanted by the language; and downright prohibitive for a company to bet on scheme.

Imagine you are a project manager, would you gamble the success of your company on a language whose libraries, as far as I can tell, almost ALL have a bus factor of 1?

tl;dr scheme needs a modern, newcomer-friendly, robust community portal. When there is one link I can give to people where they can find: practical tutorials, lectures, a quick overview of the ecosystem, portable libraries, books and articles full of tips and best practices, charismatic community leaders (who explain things in simple terms that don't scare off the uninitiated, as opposed to trying to convince you how much smarter Lisp programmers are)... then there will be no reason for the language not to be widely used.


I absolutely agree that this makes Lisp/Scheme incredibly difficult for a newcomer -- on that note, could you recommend any resources for a beginner interested in learning Scheme? I've looked at Racket, and it seems to be the only newcomer-friendly "hub" out there, so to speak.


Chicken scheme is a friendly implementation, they have a good package manager and modules repository, along with an api search tool, an active IRC, and pretty extensive documentation. It also compiles to C which is really nice.

The book The Scheme Programming Language http://amzn.com/026251298X is a good overview of the main libraries and language features, and has lots of examples.

Other than that, I've been learning mostly through google and trial and error.


Scheme and the Art of Programming by G. Springer is a great book. It's more formal and structured compared to other well-known popular alternatives. After this book, SICP will be a walk in the park.

You can find solutions to the first 7 chapters here: http://playpen.sixbit.org/studies/aop/


In 1988 I was an intern at HP Labs and my project was 50% Lisp and 50% C, the latter half dealing with time-critical IO and low-level routines crunching CAD tablet input.

At the end of a day working on the Lisp portion, which assembled sketched lines into a CAD model, I went home happy and relaxed, with a feeling of accomplishment. At the end of the C days I felt tense and slightly worried, thinking about ways of more cleverly detecting what the user was trying to sketch on the tablet.

In my 20+ years of programming I never returned to Lisp and I don't quite know why I don't miss it. I have a vague nostalgia for Lisp but I think modern programming languages give us enough tools to get our job done without feeling too much shame about abandoning the purity of Lisp.

Also, the proselytizing has not helped. I think the "Blub paradox" article embodies the mindset that was ultimately the nail in Lisp's coffin.


I had to Google the Blub Paradox (https://en.wikipedia.org/wiki/Paul_Graham_%28computer_progra...) but I see what you mean. It's based on the idea that someone who disagrees with you about something can't be acting rationally, rather that they are somehow incapable of perceiving your wisdom. Sometimes that is the case, of course, but I don't think that explains why Lisp isn't the universal language that everyone writes everything in.

I have a lot of warm fuzzy feelings about Lisp as well, but a lot of Lisp advocacy has been of the "One True Language" variety as opposed to "right tools for the right job, and here's why Lisp might be the right tool for you". It doesn't affect my perception of Lisp, but it doesn't help market it either.


The name doesn't help at all. "Blub" has connotations of bumbling stupidity even without knowing what the "Blub Paradox" is.


I read the article and thought a lot about it, but I'm not buying it. The acceptance problems for Lisp haven't been because it is "too powerful" or that lone wolf hackers won't work together.

The author makes an example of the many Object Oriented (OO) systems, but he performs some bait-and-switch there. Those many OO systems were for _Scheme_, not Common Lisp. And Scheme is intentionally a tiny Lisp. For a long time, Scheme was focused on being the smallest possible Lisp. Common Lisp on the other hand, while it briefly went through an OO experimentation period, really only has one OO system: CLOS.

Also, the whole Emacs line is off target too. What has that to do with the expressive power of the language? And why ignore the two extremely powerful commercial Common Lisp IDEs out there? So is the point that Common Lisp isn't successful because there isn't a better free IDE?

And the "lone wolf/80%" isn't doing it for me either. The Common Lisp specification was the work of many bright minds and is brilliant. And it stands in complete opposition to the situation the author attempts to describe.

I'm not saying that Lisp in general (Scheme, Common Lisp, and Clojure) has been successful, or that Common Lisp in particular has been. If the standard is mindshare and acceptance they have not been successful. There are histories and causes aplenty, but being too powerful is not one of them.


I'm the author of this essay. This must be at least the third time that it has appeared on Hacker News. I'm starting to get sick of it, myself, but I keep it up because so many people like it (at the bottom of the page, you can see the translations; all done gratis by volunteers).

I wrote it during a period when I was especially fascinated with Lisp and got caught up in the theorizing to which the Lisp community can be prone. Nowadays, I freelance in front-end web development and doing Lisp hacking is something that I put on the back-burner.


so you just use javascript? I'm in that period of fascination, any practical advice? ;)


For now, yes. My life has taken some unexpected detours, these past few years, so I have to focus on work rather than inspiration.


Have you ever put (any, as in CL, Scheme etc.) Lisp into practice? I got the impression from the essay that you think that everything in a Lisp consists of macros, which is not the case.


A personal anecdote:

I stumbled on LISP and Scheme around the time I graduated from high school in 1991. I became interested in LISP from learning about classic AI research. I found a professor my first year in college who mentored me in an independent study of LISP. I then learned scheme doing a self-study with SICP.

Despite all of this, I've never done any paid work in LISP, and I've spent the majority of my career writing C. I can't write a LISP program from scratch from memory any more.

However, I've found that the ideas of functional programming and the concept of the read-eval-print loop have colored my understanding of software design in a really positive way. I remember when I learned TCL in the mid 90s it was immediately obvious that TCL is just LISP on strings (read-substitute-print). Lua is also very LISP-like, with tables instead of lists.


This is not the curse of LISP; it's the curse of compile-time macros. Any language extensible at compile time has this problem. C++ has it. (See Boost). Rust seems headed there. You can even do this with C macros.

There was, around 1990, a fad for "extensible languages".[1] It died, because the resulting code was so hard to read, with program-specific syntax for each program. Don't go back there.

[1] http://www.cas.mcmaster.ca/sqrl/papers/SQRLreport47.pdf


This is a really interesting hypothesis.


I really hope that extensible languages are the future. It is unfortunate that a majority of their users have had no clue how to properly extend the languages. It's about time to fix it.

> resulting code was so hard to read,

DSL code is the easiest to read and maintain. No "general purpose" language would ever match this.

> with program-specific syntax for each program.

As if it's something bad.

> Don't go back there.

You failed to understand the entire concept.


> You failed to understand the entire concept.

I don't think so. As the article states, this is a social/community issue: if every application has its own language and syntax, that is a huge barrier for others coming to work on it. You might not care about that, but most large pieces of software are not written by one person. Beyond that you come to issues of code and skill reuse. A lot of people dislike the expressibility of Go, but it's hard to write Go code that someone else can't easily come and read and understand.


> As the article states

The article is plain wrong.

> if every application has it's own language and syntax that is a huge barrier for others coming to work on it

And this is quite obviously not true. A well designed (note the emphasis!) DSL makes any app/problem domain/library/whatever much more accessible than any kind of a "general purpose" language can. Simply because DSL does not obscure the essence of the problem.

> Beyond that you come to issues of code

Code reuse with DSLs is way beyond anything the inferior languages can achieve. I witnessed cases where 30+ year-old DSLs got revived by reimplementing them from scratch, immediately making an entire (huge) code base available on a new platform, with new tools and bells and whistles.

> and skill reuse.

One should never care about those pitiful "language" skills. They're worthless.


Your argument here seems to be that even if the majority of programmers find Lisp and all its derivatives to be difficult to read, we're wrong.


No, it's more of a No True Scotsman. People have written DSLs that are unreadable, but those DSLs weren't Well Designed(TM). So their failures don't count against DSLs, because only well designed DSLs count.

I think the empirical evidence is that DSLs are easy to make unreadable, especially as they evolve. Then again, that's true of almost everything, including assembly language programming, structured programming, object-oriented programming, and functional programming (did I miss anything?).


I was talking about the well designed DSLs. This should include a decent syntax.


> As the article states, this is a social/community issue, if every application has it's own language and syntax that is a huge barrier for others coming to work on it.

Every application already has its own vocabulary, which is a huge barrier for others coming to work on it (seriously: try to check out the Linux kernel, or Firefox, or the Python interpreter, and try to start hacking on them).

Macros and custom syntax can help make that vocabulary more understandable, in the same way that functions can help make control flow more understandable.


Power is a weakness in a programming language, not a strength - some of the most interesting research languages today are not even Turing-complete. It's easy to add expressiveness to a clunky language; it's much harder to add limits to an expressive language that prevent expressing nonsense.


"Expressive power" and "computational power" aren't the same thing.

Turing's original Universal Machine (you know, the moving read/write head over an indefinitely long tape of symbols) is not expressive at all; a simple task like adding two integers on that machine requires a completely arcane, verbose piece of gobbledygook to be prepared on the tape, where you cannot tell at a glance which part of it is the program, and which is the integers to be added. It is less expressive than 4 + 4 in a calculator language that doesn't have loops.

Expressive power is, in its barest essence, the freedom to assign an arbitrary meaning, from some domain, to a new combination of symbols, such that every symbol in that combination refers to some entity in that domain only.

It behooves us to have a general-purpose language with as much expressive power as we can get our hands on.

It is not hard at all to add restrictions in domain languages created inside an expressive language. For instance if you make an x86-64 assembler in Lisp, it will be just as restricted as any other assembler; it will diagnose unrecognized opcodes, bad addressing modes, etc.
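
A toy sketch of that point (not a real assembler; the two-entry opcode table is made up):

  (defparameter *opcodes* '((:nop . #x90) (:ret . #xc3)))

  (defun emit (mnemonic)
    ;; Reject unknown mnemonics, just as any other assembler would.
    (or (cdr (assoc mnemonic *opcodes*))
        (error "Unrecognized opcode: ~S" mnemonic)))

  ;; (emit :ret) => 195
  ;; (emit :hcf) => error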


> For instance if you make an x86-64 assembler in Lisp, it will be just as restricted as any other assembler; it will diagnose unrecognized opcodes, bad addressing modes, etc.

I've written one such assembler in Common Lisp. It was rather straightforward and generated nice code. The debugger came practically for free.

Lisp is a language for symbolic computing. Values are not as valuable as computations and trees. If the level you're working at is too restrictive, you're free to invent a new algebra for symbols at a higher level, in terms of the lower-level primitive symbols. It's the same power one gets in going from manually calculating sums to inventing a language for expressing all sums. How you choose to represent things greatly affects your ability to reason about them.


> For instance if you make an x86-64 assembler in Lisp, it will be just as restricted as any other assembler;

Check out Henry Baker's Comfy 65 Compiler[1] to see how much a little Lisp changes an assembler language.

[1]: http://home.pipeline.com/~hbaker1/sigplannotices/sigcol04.pd...


> For instance if you make an x86-64 assembler in Lisp, it will be just as restricted as any other assembler; it will diagnose unrecognized opcodes, bad addressing modes, etc.

But it won't enforce those things at compilation time in the type system, because it doesn't have one. You might have a macro that checks them, but it would be ad-hoc.


  But [Lisp] won't enforce those things at compilation time
  in the type system, because it doesn't have one.
What Lisp implementation do you have in mind? I ask because Common Lisp has a type system that includes compile-time type checking. For example, SBCL:

http://sbcl.org/manual/index.html#Handling-of-Types
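
For instance, compiling this function is enough to get a diagnostic from SBCL, before the code ever runs (the exact wording of the warning varies by version):

  (defun bad-add ()
    ;; SBCL warns at compile time that "two" conflicts with the
    ;; asserted type NUMBER.
    (+ 1 "two"))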


My guess: some instructor's "one weekend lisp" that he or she had to study for a fraction of a semester.


Another commenter already said that Common Lisp indeed does have a type system. It's possible to circumvent, but that's Lisp for you.

You can very easily write an assembler in Haskell that doesn't verify any of that stuff statically. Learning to do advanced kinds of static verification with Haskell's type system is actually rather difficult, and usually requires language extensions.

Using a macro that checks those things seems like it could be a very pragmatic way to provide safety. It might also let you verify properties that would be hard to figure out how to verify using only Haskell-style type checking.

Indeed, Haskell programmers often use QuickCheck to do ad-hoc verification of type class laws, for example, since they can't be proven in the type system.


You can write Lisp that is de facto statically typed, and good compilers take advantage of it. If we write (cons a b), that piece of program text has a type, which is a cons. Given (let ((c (cons a b))) ... (car c) ...) the Lisp compiler can generate efficient code to access c, without a run-time type check that c is a cons, based on c inheriting the type from the cons expression, and not being subject to any assignment in its scope. If we regard it as a parametrized type, not knowing what a and b is, it is the expression "for all types x, for all types y, the type of (cons x y) is: cons cell of x and y".


> Learning to do advanced kinds of static verification with Haskell's type system is actually rather difficult

Perhaps. But it benefits greatly from being a consistent way of doing things: everyone does their static verification the same way, because the language supports exactly one way of doing it.

> and usually requires language extensions.

Sometimes. But again at least those extensions are relatively standardized, rather than completely ad-hoc as a macro can be.

> Indeed, Haskell programmers often use QuickCheck to do ad-hoc verification of type class laws, for example, since they can't be proven in the type system.

Yeah, that's one of the reasons I'm excited about Idris. But even in your example, QuickCheck is again a standardized way of doing this.


Well, all you need to do in a macro to provide error checking is to assert or raise a condition when something is wrong. That's pretty standard.
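
A minimal sketch, with a made-up define-opcode macro: the check runs at macro-expansion time, so a bad use is rejected when the file is compiled rather than when it runs.

  (defmacro define-opcode (name code)
    (unless (and (integerp code) (<= 0 code 255))
      (error "DEFINE-OPCODE: ~S is not a one-byte opcode" code))
    `(defconstant ,name ,code))

  ;; (define-opcode +nop+ #x90)  ; fine
  ;; (define-opcode +bad+ 999)   ; error signaled during macro expansion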

In Haskell land, you can do wonderful things like Servant, the statically verified API server—but that kind of code is very advanced, and written by a shadowy cabal of type level wizards. I think there is less of a standardized way of doing this stuff than one might expect.

I'm not really sure what you mean by "standardized." Common Lisp is an actual ANSI standard... But you seem to be referring to the existence of common practices.

I'm not convinced that the problem you describe is actually a problem, and I'm not convinced that Haskell or Idris will end up being more successful in the mainstream than Lisp.

(I'm a huge fan of Haskell, Agda, and static/dependent typing, for the record.)


> Well, all you need to do in a macro to provide error checking is to assert or raise a condition when something is wrong. That's pretty standard.

Sure, but the conditions that lead you to do that can be arbitrary Turing-complete code. IIRC in unextended Haskell what you can do at type-level is more restricted; certainly you're guided towards a particular way of structuring your constraints. That in turn makes it more practical for other tools to support your constrained sublanguage.

I think the future of programming lies in constrained (BWIM non-Turing-complete) languages and provably correct code. I guess we'll find out.


I didn't respond to that because I don't debate with thinkers who equate "type" with "static type". Not that I don't want to, but I haven't historically found it possible (at least in a productive way).


Common Lisp has an optional type system and many compilers have excellent type inference engines for optimization.


Common Lisp's type system isn't optional. The language is strongly typed. Type checking goes away (in a sense) when you add declarations. That is to say, if you tell Lisp that, say, some variable contains a fixnum, then (in code compiled with safety 0) it just believes your declaration, and it's up to you to ensure that it's not a lie (else the behavior is undefined). The type system isn't optional in any other sense.

It's optional for an implementation to provide static checks and optimizations based on type, which is different. "Static type" and "type" are not synonyms.

Common Lisp code is safe by default; you can't use an object of type X as if it were one of type Y.
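
A small sketch of the trade-off described above (standard idioms, not a recommendation; with safety 0 the behavior on a type lie is undefined):

  (defun fast-add (a b)
    ;; The declarations are trusted under (safety 0): no runtime check,
    ;; and passing a non-fixnum here is undefined behavior.
    (declare (type fixnum a b)
             (optimize (speed 3) (safety 0)))
    (the fixnum (+ a b)))

  (defun safe-add (a b)
    ;; Default safety: (safe-add 1 "x") signals a TYPE-ERROR at run time.
    (+ a b))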


A spec that says adding a fixnum to a string produces a possibly-runtime error isn't a type system, it's just a particular instance behaviour. Types are by definition associated to terms in a language.

(That isn't to say that that kind of runtime behaviour isn't valuable or doesn't provide (some of) the same functionality as a type system. But it's not what the word "type" means).


> Types are by definition associated to terms in a language.

This is true, within a particular paradigm, which isn't the be-all and end-all of what "type" means in computer science.

All kinds of things are type. For instance, "JPEG file" or "alphanumeric character" are both type terms.

You don't get to dictate to everyone one narrow definition of a term that has an obviously broad applicability in numerous contexts.

If we are discussing Common Lisp, and you want to get properly pedantic about what "type" means, the way you can do that is to look up its definition in the ANSI CL glossary, which says:

"type n. 1. a set of objects, usually with common structure, behavior, or purpose."

Or you can use it in a broader way which is compatible with this concept.

If you insist that it means something else which is not compatible with the glossary definition, and you still want to talk about Lisp, then I'm afraid it's not productive; you are more interested in engaging in a conflict about word semantics: which "camp" gets to "own" the ideologically precious word, such as "type".

Usually the way we can resolve that conflict is to split our vocabulary into different words. You can have "type"; I don't need it. I will give the thing the CL glossary describes another name: "genus". Now type can be the property of a syntactic term in a program, and genus refers to sets of objects which have something in common.

I'm mostly interested in genus systems, and type systems in their context (agreement between the type of a program term, and the genus of the run-time thing it operates on).


I don't think Lisp could even be said to have a genus system - to have a "system" for dealing with sets of objects implies having a way to talk about those sets, do set operations (intersection/union/product) etc.

But in any case, any genus-but-not-type system is irrelevant to my original point: that when writing an x86-64 assembler in lisp (as an embedded DSL), any enforcement of restrictions at compile time would have to be expressed as ad-hoc macros, because there is no (language-standard) type system in which to declare them.


Common Lisp in fact has type expressions with set operations.

  $ clisp -q
  [1]> (typep nil '(and symbol list))
  T
  [2]> (typep :foo '(and symbol list))
  NIL
  [3]> (typep '(1 2) '(and symbol list))
  NIL
  [4]> (typep '(1 2) '(or symbol list))
  T
  [5]> (typep 3 '(or symbol list))
  NIL
I.e. I can in fact talk about sets of objects, in a formal way, with the Lisp system.


Thanks for the explanation. Never really thought of it that way.


If I make an assembler in Lisp, compile it and ship it to you so that you can apply it to your assembly language programs, it wouldn't work so well for me to be diagnosing your assembly language problems in my compile time.


I'd prefer a language where I don't have to write a domain language from scratch and write a program that can enforce its rules at runtime. Rather I'd like to be able to express my domain code in the language itself, so that my users don't have to learn (and I don't have to create) a new language. That requires a consistent way of restricting things at compile time, which is very hard to retrofit - if I ever want to be able to enforce that programs don't do certain things, the language needs to not offer unrestricted access to those things.


If you express your domain code in the language yourself, users still have to learn and understand what you have done.

What you write in a language has its own de facto language, whether you capture its conventions in a notation or not.

Simply knowing the elements of the language in which that work is done isn't enough. Otherwise you could just memorize a language reference manual and call yourself a software engineer.

Or one could just argue that since a program in any modern language is written in UTF-8, and the users know that already, they have nothing to learn.


There are still things to learn, and you may still need to adapt tools, but it's a lot easier to work with an "embedded" DSL than an "external" one. You still need to define the nouns and verbs of your domain, but you don't have to define a whole new grammar.


Lisp isn't any one thing, it's a language family. One could create a Lisp that is statically typed.


> Lisp isn't any one thing, it's a language family.

Which is rather the problem. Types, in particular, are only useful if everyone has them - optional add-on typing has been tried for many languages, but I've never known it to be successful.


Typed Racket <https://docs.racket-lang.org/ts-guide/> is pretty successful in the Racket community. It's also prompted some pessimistic papers about gradual sound typing, since it's hard to avoid a performance hit when imposing type safety on untyped code using contracts. On the other hand, that performance hit is in part typechecker/compiler implementation dependent and in part dependent on the granularity of the added typing (e.g. typing modules or functions vs typing subexpressions).

Typed Racket code often gains a speedup over untyped Racket, since sound typing allows the elision of contracts and other runtime overheads for safety. There is no penalty at all for invoking Typed Racket code from untyped code, either in terms of performance or in terms of what the untyped-language programmer needs to know.


Note that we've made big improvements in the performance of Typed Racket's generated contracts just since that paper was written, so many things are now better.


My point was that making generalizations about Lisp is like making generalizations about languages with C-like (perhaps more correct ALGOL-like) syntax. Most people wouldn't make a statement about Java and assert that it applied to C++, too. This is why saying that Lisp has no type system doesn't make much sense. There are many completely different languages that use s-expression notation, and they all have different feature sets, much like all the C-like languages.


Ever seen ACL2 used for assembler verification?


>Power is a weakness in a programming language, not a strength

    The truth is that Lisp is not the right language for any particular
    problem. Rather, Lisp encourages one to attack a new problem by
    implementing new languages tailored to that problem. Such a language
    might embody an alternative computational paradigm […] A linguistic
    approach to design is an essential aspect not only of programming but
    of engineering design in general. Perhaps that is why Lisp […] still
    seems new and adaptable, and continues to accommodate current ideas
    about programming methodology.
- Lisp: A language for stratified design

http://dspace.mit.edu/bitstream/1721.1/6064/2/AIM-986.pdf


Some really interesting things may happen when you add arbitrarily powerful macros to a bondage and discipline non-Turing-complete language.


One of them is that, each time you read a new program, you have to learn a new language. Probably no language benefitted / suffered from this more than Lisp.


Any significant project in any language has a "dictionary" of custom identifiers, whose vocabulary you have to learn if you want to become a maintainer.

The thousand functions in a big C program are just as much a language, as some Lisp macros.

The functions all do something. That something isn't "transform the code", but that doesn't matter; if you're looking at the function call and don't know what it does, you're just as lost.

Basically, if you're reading Lisp, you go from "outside in" and just assume that everything you see whose definition you are not familiar with is a macro.

Well-behaved code follows certain unwritten conventions. For instance, it avoids creating confusion by exhibiting multiple completely independent uses of the same symbols in the same scope. So for instance if we have (frobozz a b) wrapped in a lexical scope where we have (let (a b) ...) in effect, this well-behavedness principle tells us that the a and b symbols in (frobozz a b) refer to these variables. So, we can probably lay aside our suspicion that, for instance, frobozz is redefining some unrelated a in terms of some unrelated b. However, frobozz might be a macro; so we can't cast aside our suspicion that either a or b has its value clobbered. (Same in Pascal or C++, with ordinary functions: frobozz(a, b) could take VAR parameters or references, respectively).


This isn't really the case. Some libraries are all macros, some libraries have no macros, but most have a few macros that quickly fall into general patterns: some macros for setting up and tearing down context safely (with-thing macros), others for iterating some data structure without revealing its internals (do-thing macros), generally simple stuff.
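
For instance (make-widget, destroy-widget and widget-children are hypothetical; the shapes of the macros are the point):

  ;; with-thing: wrap setup and teardown so callers can't forget cleanup.
  (defmacro with-widget ((var &rest init-args) &body body)
    `(let ((,var (make-widget ,@init-args)))
       (unwind-protect (progn ,@body)
         (destroy-widget ,var))))

  ;; do-thing: iterate a structure without exposing its internals.
  (defmacro do-widget-children ((child widget) &body body)
    `(map nil (lambda (,child) ,@body) (widget-children ,widget)))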


> One of them is that, each time you read a new program, you have to learn a new language

Which is not bad at all. You still have to do it with programs written in the same language but built with different libraries and targeting different problem domains. In that case the language is obscuring the essence of the code.

And if you're using a well designed DSL, and you're familiar with the problem domain, it will be readable naturally, just like a pseudocode.


A lot of domain languages need infix operators though. And even if you understand the domain, it's hard to read and refactor code in the presence of unrestricted macros, whereas in a language where such DSLs are implemented via the type system the tooling will understand that.


Why do they need infix operators? Most programmers already use prefix notation with function calls. Infix is primarily restricted to mathematical and logical operations, but the prefix (lisp style, taking an arbitrary number of arguments) is pretty clear when written neatly:

  (and cond1
       cond2
       cond3)
(NB: multiple lines like this would really only be used for a lot of conditions or longer expressions.)

We already do something similar with our algol-like languages:

  if((cond1 || cond2) && (cond3 || cond4)
                      && (cond5 || cond6)
                      && (cond7 || cond8))
In Lisp (CL at least), it would look like:

  (if (and (or cond1 cond2)
           (or cond3 cond4)
           (or cond5 cond6)
           (or cond7 cond8))


But at that point it's no longer the language of the domain. If you're going to support writing mathematics that looks like the mathematics that mathematicians write then you need infix operators, because mathematics is written with infix operators. Similarly for many other domains.


That is incorrect; the notation (op arg1 arg2) isn't the surface syntax that is preferred by some practitioners working in that domain. However, it corresponds 1:1 to its abstract syntax.

There are ways to provide infix independently. That is to say, one person can develop this domain specific syntax, and another can develop or customize an infix engine for it.

In Common Lisp, there is a well known "infix.cl" module that provides mappings like a[b,c] -> (aref a b c), f(x, y) -> (f x y), a + b -> (+ a b) and so on.

This isn't understood as creating a new language as such; it's just a sugar. I think it allows new operators to be added with custom precedence and associativity. Or you can hack it however you want.

I've never seen it used in any production code.

The main purpose it serves is to satisfy people who want to know that it can be done; after that it turns out that they don't actually want it done. They just don't want to work with a language that can't do it.

The only program I'm aware of which actually supports writing mathematics that looks like the mathematics mathematicians write (the actual 2D notation) is Tilton's Algebra: written in Common Lisp.

The various mathematics languages out there fail.

Oh, and not to mention that mathematicians have been trained to work with this:

    $\sin{(\frac{\pi}{2}-\theta)} = \cos{\theta}$
That's not such a bad example; I can still sort of see the trig identity in that if I squint my eyes.


Even still, mathematics isn't entirely infix. And outside computer algebra systems, you'll be hard pressed to find examples of the postfix operations expressed as postfix in programming environments. !, for example. What language allows you to express:

  n P k = n! / (n - k)!
as the above? `n P k` will become something like `nperms n k`, and factorial will be moved to prefix as `fact n / fact (n - k)`. We're already diverging from the domain language, so there's no reason to treat a more logically consistent framework as worse than one that seems to have arbitrarily moved some things from infix to prefix, moved postfix almost universally to prefix, and left some infix as infix.


>What language allows you...

SML allows you to have an infix function named "P", and Haskell allows something similar, although you have to quote it with backticks and can't use a capital letter to start a function name...

https://en.wikibooks.org/wiki/Standard_ML_Programming/Expres...

https://wiki.haskell.org/Infix_operator#Using_prefix_functio...


I did mean to mention that, but how about creating postfix operators?


In Prolog you have the ability to create new postfix operators. Of course Prolog being a logic language and not a functional language, along with '!' being used for 'cut' opens up another can of worms.

http://www.swi-prolog.org/pldoc/man?predicate=op/3


You're right as far as you go, but it's not an all-or-nothing thing. Most languages won't let you write mathematics exactly like mathematicians do, but the closer you can get the better. For many cases avoiding infix entirely would be a big cost.

(FWIW Scala allows postfix operators, though you'd have to put the n! in brackets)


> Most languages won't let you write mathematics exactly like mathematicians do, but the closer you can get the better

See Wolfram Mathematica for example.

Also, one of my usual DSL tricks is to allow arbitrary TeX in identifiers, which, combined with the literate programming tricks, allows to write very idiomatic mathematical expressions as a code.


You need infix, but that's not enough.

Mathematics isn't written with infix operators alone. Mathematics is written in a fancy 2d notation with all kinds of operator types: infix, prefix, postfix, around-fix, sub-fix, super-fix.

If you look at actual software for maths - several famous ones were written in Lisp like Macsyma, Reduce and Axiom - they provide more than infix.


And? DSLs can have any syntax you like. And any type system you want.

And DSLs done the right way, via macros, are much better at integrating with tools than any ad hoc interpreted DSL would ever be able to. You can easily have syntax and semantic highlighting inferred, with auto-indentation, intellisense and all the bells and whistles. For no extra cost.


> And DSLs done the right way, via macros, are much better in integrating with tools than any ad hoc interpreted DSLs would ever be able to. You can easily have syntax and semantic highlighting infered, with auto indentation, intellisense and all the bells and whistles. For no extra cost.

No you can't. If the macro is arbitrary code then no tool can offer those things - there's no way to offer intellisense if you don't know what strings are meaningful in the language, and an unconstrained macro could use anything to mean anything.


The tools could have hooks for this.

It doesn't take much imagination.

You know how GNU Bash is customizeable with custom completion for any command, so that when you're, say, in the middle of a git command, it will complete on a branch name or whatever?

Similarly, we can teach a syntax highlighter, completer or whatever in some IDE how to work with our custom macro.


Sure - but at that point we've lost a lot of the value of having a standardized language at all. The whole point of a language standard is that multiple independent tools can be written to work with it - that your profiler and your linter and your compiler can be written independently, because they'll be written to the spec. If everyone has to customize all their tools to work with their own code, that's a lot of duplicated effort. Better to have a common standard for how you embed DSLs in the language, so that all the tools already understand how to work with them.


It is a broken approach. A much better way is to have a standard protocol (see slime for example, or IPython, or whatever else), and use the same tools as your compiler does, instead of reimplementing all the crap over and over again from the language standard.

I expect that not that many C++ tools that do not use libclang will remain.


At that point you're essentially advocating treating libclang as the standard. All the usual problems of "the implementation is the spec" apply.


libclang is just an example, maybe not an ideal one. But, yes, I'm advocating for an executable language spec, one that you'd use as a (probably suboptimal, but canonical) implementation.

A good example of such a thing would be something like https://github.com/kframework/c-semantics


Yes you can - as soon as you start wrapping your arbitrarily complex macros into a custom syntax. I am easily doing this stuff with any language I am adding macros and extensible syntax to.


But if the syntax is arbitrarily customizable, you can't possibly have the tools understand how to highlight it / intellisense / autoindent / etc.


Of course I can. If my tools are communicating with my compiler, they know everything it knows. I have a single tiny generic Emacs mode (and a similar Visual Studio extension) that handles all the languages designed on top of my extensibility framework.

It's trivial. Any PEG parser I add on top automatically communicates all the highlighting, indentation data and all that to the tools (and it's inferred from the declarative spec, no additional user input is required). Underlying typing engines do the same, for the nice tooltips and code completion. The very compiler core does the same with all the symbol definitions, dependencies, etc. Easy.


This sounds very interesting :) And I think Colin Fleming is doing something similar in Cursive? In any case, I'd like to see more of what you are talking about - do you have any more documentation of it, a writeup, blog post or video?


If I understand it correctly, Cursive is something different: they don't want to run an inferior Clojure image (unlike Slime, for example), but instead reproduce a lot of Clojure functionality with their massively complex static analysis tools. But I might have it wrong; all the information I have about Cursive came from one of its advocates, who is very aggressively against the very idea of an inferior REPL for an IDE.

I've got some code published, but not that much in writing, planning to fix it some time later. See the stuff at my github account (username: combinatorylogic). Relevant things there are Packrat implementation, literate programming tools and an Emacs mode frontend.


Exactly! You end up defining your macros in a particular restricted subset of lisp, and your tooling for Emacs and Visual Studio has to know about that particular subset. Other people writing similar macros will no doubt have their own, subtly different subset, and their own integrations for their subset. But since your way of writing declarative specs for language customization isn't standardized, you can't use each other's tool integrations.

The way you express DSLs is something that needs to be understood by language tooling, so it belongs in the language spec.


No. Tools do not know anything about the restrictions. In fact they work with a wide range of languages, not just lisp. The only "restriction" is a protocol, built into the macro expander, syntax frontend and compiler core.

So, in your rot13 example, the compiler would rat out all the new identifiers, with their origins, to the tools.


> So, in your rot13 example compiler would rat all the new identifiers with their origins to the tools.

How can the compiler know which identifier connects to which origin, unless because the macro complied with some standard/restriction/protocol? From a certain perspective all I'm suggesting is making these protocols part of the language standard - that is, define the DSL that's used to define DSLs, rather than allowing macros to consist of arbitrary code.


> Yes you can - as soon as you start wrapping your arbitrarily complex macros into a custom syntax.

Well, by that definition you get exactly the same if the host language of your DSL is statically typed and doesn't use macros. Custom syntax is custom syntax and whether tools/IDEs understand it has nothing to do with the host language.


Sorry, I did not quite get what you mean.

Of course macro+syntax extension has absolutely nothing to do with what you can achieve in a language without macros.

And, no, you did not understand. Any custom syntax you're adding (if the right tools are used, like mine, for example) would automatically become available for your IDE and all the other tools, because they're reusing the same compiler front-end.


> And, no, you did not understand. Any custom syntax you're adding (if the right tools are used, like mine, for example) would automatically become available for your IDE and all the other tools, because they're reusing the same compiler front-end.

Just being able to execute the macro isn't enough for the IDE though. E.g. if a macro is "rot13 all identifiers in this block" then sure the IDE can run it, but it can't offer sensible autocompletion inside the block without understanding more about the structure of the macro.


The IDE does not execute the macro - it knows the result of its expansion from the compiler. And the compiler keeps track of all the identifiers and their origins.


The IDE can autocomplete the rot13ed identifiers from outside, perhaps. But it can't possibly suggest rot13ed identifiers inside the macro block for autocomplete, because it can't possibly know that that's what the macro does.


Why? You know which macro made the identifiers. You know what this macro consumed. In most practically important cases this is sufficient.

But, yes, you cannot do it with the Common Lisp approach, where macros operate on bare lists, not Scheme-like syntax objects. The problem here is that the lists have been stripped of the important location metadata. For this reason I had to depart from simple list-based macros and use custom syntax extensions with rich ASTs underneath. Still, on top of a Lisp.


Even with location information, if the IDE's going to offer autocomplete inside the macro it would need to be able to invert the way the macro transforms identifiers, which is not possible to do to arbitrary code.

I agree that this is very rarely practically important - but if you think about it, that's precisely why a more restricted alternative to macros should be adequate.


No, it would have to be able to invert ROT-13 to offer that, which I think is the parent poster's point.

EDIT: Which is obviously impossible if you assume a Turing Complete macro expansion language.


> And? DSLs can have any syntax you like. And any type system you want.

While you're technically correct, as a practical matter you won't implement a type system/type checker for all the DSLs you create. (Nor do I believe you'll even do it for anything approaching a majority of them.) Obviously, I'm using the impersonal "you" here.

Implementing real type systems is hard and mostly rather tedious work.

syntax-parse only gets you so far[1].

[1] https://docs.racket-lang.org/syntax/Parsing_Syntax.html


I have a nice DSL which makes building complex type systems (including dependent) trivial and fun. So I do not mind adding typing to pretty much any kind of DSLs, including the smallest of them.


I still think you should have some links bookmarked to drop in conversations like this. Bookmarked so you can just type a name rather than waste time looking. Many in these discussions might think you're speculating or have some toy project rather than the interesting one I found digging through your old posts that backs up your claims.

Maybe even have a series of examples that illustrate solutions you keep mentioning so others can learn and apply them. Just a thought. :)


I am already slow-banned here (or whatever it is called). If I start spamming the links, they will suspend this account.


Just trying to understand here: Are you saying that you have a meta-framework[1] for developing DSLs?

If so, then I suppose I misunderstood your original claim, but I won't apologize, because your claim was very opaque to anyone who doesn't know you.

[1] An example of what I mean would be xtext.


Actually, I do have such a framework [1], but that is not my point here. My point was that it's relatively trivial to implement such a collection of DSLs on top of pretty much any sufficiently powerful meta-language (i.e., one with CL-like macros).

If you don't have a ready-to-use language construction framework targeting your meta-language, just build one. Easy.

As for typing specifically, the approach is rather fun and simple. Firstly, you'd need something like Prolog. It's really not that much work; you can quickly build a passable implementation in just a couple of dozen lines of code in pretty much any language - see miniKanren [2], for example. Then any kind of typing is done easily: write a simple pass over your AST (it may require some preparation passes, like resolving the lexical scope - but those are useful for everything else too) that spits out Prolog equations for each expression node that needs typing. Then execute your Prolog code and get your types back. The implementation won't look any more complicated than a formal specification of the type system in a paper, written with type equations (as in [3]).

[1] https://github.com/combinatorylogic/mbase

[2] http://minikanren.org/

[3] http://stackoverflow.com/questions/12532552/what-part-of-mil...
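
To make the "emit type equations and let a logic engine solve them" idea concrete, here's a minimal toy of my own in Clojure, using core.logic as the miniKanren. The AST shape and type names are made up for illustration; a real compiler pass would generate these goals from the AST instead of writing the relation by hand:

    (require '[clojure.core.logic :as l])

    ;; Toy AST nodes: [:int 1], [:bool true], [:var 'x], [:if test then else]
    (defn typeo
      "Relational typing judgement: (typeo env expr type)."
      [env expr type]
      (l/conde
        [(l/fresh [v] (l/== expr [:int v])  (l/== type :int))]
        [(l/fresh [v] (l/== expr [:bool v]) (l/== type :bool))]
        [(l/fresh [v] (l/== expr [:var v])  (l/membero [v type] env))]
        [(l/fresh [c t e]
           (l/== expr [:if c t e])
           (typeo env c :bool)
           (typeo env t type)
           (typeo env e type))]))

    ;; "Solve the equations": the type of (if flag 1 2), given flag : bool
    (l/run* [t]
      (typeo [['flag :bool]] [:if [:var 'flag] [:int 1] [:int 2]] t))
    ;; => (:int)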


At this point "what is a language?" (or perhaps "what should a language be?") becomes a more than academic question. In a language with arbitrary macros one could potentially implement any language on top of that language. If we're using the term by analogy to human language, to my mind the key factor is the ability of implementations with no previous interaction to communicate (that is, how much can one express in a way that another will understand (and how deeply)). Arbitrary macros allow anything to be expressed, but it is very difficult for tools to understand the meaning of arbitrary code.


Of course. But, as I said, you don't have to restrict your macros, you only have to add a bit of a protocol on top of them in order to make them play nicely with all the tools.

And then you'll get the most powerful programming environment possible - building arbitrarily complex hierarchies of languages, with the cost of adding a new language being close to zero, and with support from all your tools for free.

Actually, while building such a hierarchy I naturally came to a number of "restrictions", although they're not enforced. I prefer to build compilers as chains of very trivial transforms, each implemented with at most a total language (or simple term rewriting in most cases). It also helps to maintain a nice interaction with the tools.
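
To illustrate the "chain of trivial transforms" style (my own toy example in Clojure, nothing to do with the actual toolchain discussed above): each pass is a small, total tree rewrite, and the compiler is just a composition of many of them.

    (require '[clojure.walk :as walk])

    ;; One trivial pass: fold fully-constant (+ ...) nodes.
    (defn fold-constants [ast]
      (walk/postwalk
        (fn [node]
          (if (and (seq? node)
                   (= '+ (first node))
                   (every? number? (rest node)))
            (apply + (rest node))
            node))
        ast))

    (fold-constants '(def x (* y (+ 1 2 3))))
    ;; => (def x (* y 6))

    ;; A "compiler" is then just a pipeline of such passes:
    ;; (def compile-ast (comp emit-code fold-constants resolve-scope ...))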


I would describe a total language as "restricted" relative to a turing-complete language - wouldn't you?


As I said, these kinds of restrictions are useful, but not enforced.


If there's a useful restriction on my code then I like to enforce it. Otherwise I usually end up accidentally breaking it.


All compilers translate a text stream into a tree structure during parsing, so arguing that some language "needs" infix is saying that there is some tree structure out there that can't be traversed depth-first rather than breadth-first.

That can only be true if your supposed "tree" actually contains loops. What language do you know of that generates cyclical parse structures?



For those who are interested, here are some livecoding streams of me developing and working in a Lisp I'm building, called Sigil:

https://www.livecoding.tv/burtonsamograd/videos/

See the 'building a language' parts for actual Lisp programming; the rest is JS webdev.

Sigil is a minimal lisp with only the basic primitives (cons, car, cdr, cond, null, atom, lambda and a few others) and macros. The project is an experiment in building a lisp from the axioms and seeing how far one can go without having to add to the underlying lisp implementation (which is in javascript).


There are many good points in this article, and even the variety of Common Lisp implementations in existence attests to the author's point. It seems that everyone is rolling out their own whole new CL implementation.

That said, even if it were true that the power of Lisp turned out to be its curse, one should be careful about generalizing this way about Lisp hackers, 80% projects and lack of documentation. The language might tempt you to become the hacker portrayed in this article, but it certainly doesn't fight you should you want to follow best practices.


Having multiple implementations wasn't a problem for Java (Sun JDK, IcedTea, JRockit); no one thinks much of the different compilers available for the C family (GCC, Clang, Microsoft's C compiler, Intel's C compiler); and consider Ruby with MRI and JRuby, or Python's various implementations. I could go on.

Members of the Lisp family seem to be judged by a different standard. I don't know why that is.

My perspective: the implementations attest to good documentation and collaboration. SBCL by itself is an admirable artifact, with extensive documentation, that has been maintained since 1999 (when it forked from CMUCL).

So what gives?


Reading this, I wonder whether the "everybody rewrites everything" problem is that much of an issue. The benefit of crowdsourcing, capitalized through libraries, isn't obvious to me. Some languages make it necessary to distribute the cost of libraries across groups. Maybe Lisp lets you skip that and go straight at the problem. I remember an article about a guy looking for a graph library for Java. He found two - generic, typed, objects and all that - and they were more a problem than a solution. He made his own thing in Lisp and called it a day.


I like it. It's a keeper. Also the argument is a bit overstated.

"...The moral of this story is that secondary and tertiary effects matter....Employers much prefer that workers be fungible, rather than maximally productive...[in regards to creating a larger framework] The reason why this doesn't happen is because of the Lisp Curse. Large numbers of Lisp hackers would have to cooperate with each other..."

So here's what I see. I see a lot of frameworks and libraries composed by folks using other languages. Many of these, yes, are much more complete than one, totally-custom-made solution by some Lisp hacker. And yes, large numbers of people participate in creating and maintaining these larger frameworks and systems.

I also see large corporations bleeding cash because they wanted silo'ed, fungible programmers. Got seven architectural tiers, each with its own framework? Hell, you're going to need seven extremely specialized programmers, kid. Time to get on the phone with the recruiter.

So they purchase these specialists. All the buzzwords match. The specialists sit down at the magic frameworks, the ones all the cool kids over the last two years have decided to support. Life is good. For about ten minutes. Then something is required that's not in the frameworks. Or the framework is broken under this one particular edge case.

Suddenly we need somebody who's general purpose again. Except -- and this is especially nice -- that's not the guy we hired. So seven layers of experts sit struggling with seven oddball issues. Googling Stack Overflow. Hacking the crap out of things. Making a total mess.

And at the end of the day, if we're lucky, something ships. Something with little problems here and there that requires seven different areas of expertise to figure out. Plan on maintenance being fun.

The next day, version 2.0 of the framework on layer 5 ships. Now if you want maintenance programmers for layer 5, they have to know both version 1.0 and 2.0.

More complexity continues. It's fun for everybody.

Compare this to the guy who uses a standard language, decades-old libraries, hacks out a partial solution (but good enough to ship) in a few days and then moves on to something else. Now tell me that the framework guys have a better technology development model. I ain't believing it.

I'm not saying become a lone wolf and never work with folks. I'm saying that "working with folks" can completely kill any kind of value you're trying to create. Do it wisely.


I wonder why there is all the anguish around why Lisp hasn't flourished, and seemingly less anguish around APL, or even Smalltalk. Is there a reason it engenders so much loyalty? Is it the idea that macros allow you to extend the language, and that seems like the pinnacle of the programming hierarchy? Could there be insight gleaned from looking at programming languages as a fad or style?


I think it is because, for a lot of people, Lisp is fun.

If Lisp flourishes, more people would get to use it at work.

Lisp not flourishing means less fun.

Also, when I've worked in Lisp it just seemed like the right way to do things. Everyone likes to do the right thing, so anguish results when you seem denied the opportunity to follow the path you feel is right.

That said, I think the author was right and the community issues and ease of building your own solutions will limit Lisp as a language family.


I can definitely subscribe to the idea that Lisp is fun. The application I have been writing (featureful chat application like Slack) has a server side completely written in Common Lisp by myself. Why do I work on it even though it's unlikely that it will ever draw anyone away from Slack?

Because it's fun.

Is it possible to do the same in another language? Sure, Slack is proof that it's possible. But would I have persisted in actually doing it without any hope of profit? No, because programming in other languages is a chore, and the only light at the end of that tunnel is the finished product.

With Lisp, the journey is a reward of its own.

Yes, I know the same thing has been said by Haskellers and others, but I doubt it was ever said by a C++ programmer.


If you substitute APL or Smalltalk into what you've written, I don't see a majority of APL and Smalltalk users disagreeing with it.


For me, lisps (CL, scheme, or any other variant) hit a nice sweet spot. You're programming at the AST layer, which makes adding new semantic features (say you wanted erlang style concurrency and message passing) trivial to integrate syntactically (less trivial to implement the underlying feature). Compare this to trying to patch that into C#. Same for any new semantics: logic languages; specific constructs for math, physics, music; OO - see CLOS.

APL is less extensible in this regard (to my understanding, someone might prove me wrong). Smalltalk is probably on par, if you can model it with objects and methods/messages cleanly.

But in the end, for me, it's the idea of programming the AST directly and being (mostly) freed from arbitrary syntactical constructs.
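
For what it's worth, here's a rough sketch of what "integrating the feature syntactically" can look like, assuming Clojure and core.async; the `actor`/`send!` names are made up for illustration, and the hard part - the actual concurrency semantics - is simply delegated to core.async here:

    (require '[clojure.core.async :refer [chan go-loop <! >!!]])

    (defmacro actor
      "Spawns a looping process; msg is bound to each received message.
       Returns the mailbox channel."
      [msg & body]
      `(let [mailbox# (chan 16)]
         (go-loop []
           (let [~msg (<! mailbox#)]
             ~@body)
           (recur))
         mailbox#))

    (defn send! [mailbox m] (>!! mailbox m))

    ;; Usage:
    (def logger (actor m (println "got:" m)))
    (send! logger :hello)

The new construct reads like part of the language, even though it's just a macro expanding into existing primitives.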


> You're programming at the AST layer, which makes adding new semantic features (say you wanted erlang style concurrency and message passing) trivial to integrate syntactically (less trivial to implement the underlying feature).

To me, this hits the nail on the head, for all the wrong reasons. Syntax isn't the problem, implementing the feature is!

If I'm writing Lisp code, why do I want it to look like Erlang? What's wrong with it looking like Lisp? ("Erlang style concurrency and message passing" is two things, syntax and functionality. Do you want the functionality? Fine. You want the syntax too? Why?)

In fact, this is kind of what the article is about. People want the syntax "their way"; it makes it hard for others to work with their code.

The only time you should invent new syntax is when it is impossible (or very hard) to say something in the syntax you've got. "I don't like it" isn't good enough. "It will save me a few keystrokes" isn't good enough.


I think that Clojure mitigates these issues quite a bit for a couple reasons:

- The core library has enough built in that many questions are answered with built-in solutions (e.g. what is done with objects elsewhere is very frequently done with maps and functions that operate on maps; see the small sketch at the end of this comment).

- The language designer (Rich Hickey) is very opinionated, and the language tends to attract people who agree.

There still are multiple choices when it comes to solving some problems (Om vs Reagent vs Rum vs etc.), but is that really that much different than other languages? Javascript has a million choices for the same domain.
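
To illustrate the "maps and functions instead of objects" point from the first bullet (a trivial made-up example):

    (def user {:name "Ada" :logins 0})

    ;; An "object method" is just a function from map to map.
    (defn record-login [u]
      (-> u
          (update :logins inc)
          (assoc :last-seen (System/currentTimeMillis))))

    (record-login user)
    ;; => {:name "Ada", :logins 1, :last-seen 1456...}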


"BBM" == Brilliant Bipolar Mind. The acronym is used in "The Lisp Curse" citation of The Bipolar Lisp Programmer, but defined elsewhere in material not cited.


People who care what other people think about the programming languages they use write essays like this.

The rest of us just shut up and hack some Lisp.


The homoiconic thing is not all it's cracked up to be. That is, you can take (say) Java, parse it to an abstract syntax tree, and then manipulate the tree.

Assuming you have a good toolbox of operators, working with that kind of AST is not much harder than working with s-expressions. Maybe it is even easier if you can build a general notation that works across multiple languages and can also represent more ordinary data such as JSON, relational data, RDF, etc.


I couldn't disagree more. Working with that kind of AST is many orders of magnitude harder than Lisp macros. None of it is first class. You brush over it in your comment as if writing a Java parser were trivial. My Clojure parser returning the AST in its entirety is: (read-string (slurp "my-source.clj")). What does your Java one look like?

The proof that it is so much harder is that no one ever does it. Yet people do this all the time in Lisps. If it's "not much harder" I'd expect to see more of it, but I don't, because it is just so much work.

Aside from parsing, what about templating? Yes, you can manipulate the AST once you have it, but what would templating the AST look like? Would it look just like the Java code? With syntax quote in Clojure, the template looks very much like the output. How would you even begin to template syntax in your hypothetical Java implementation?
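
For instance (a made-up toy macro, just to show the templating point): with syntax quote, the template is nearly indistinguishable from the code it generates.

    (defmacro defgetter [field]
      `(defn ~(symbol (str "get-" (name field))) [m#]
         (get m# ~field)))

    (defgetter :email)

    (get-email {:email "a@b.c"})
    ;; => "a@b.c"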


1. I am not writing a parser; one already exists and can be brought in about as easily as your code sample.

2. I am translating the AST to RDF, which lets me work on the AST with production rules.

3. If I wanted templates I would use templates. What I do do is start out with an abstract representation and then apply, say, 20 simple transformations that step by step fill in the details to get to code. Speed is OK because we have indexes, Rete networks and other tricks.

4. People are not doing it yet, and this is why people are saying code reuse is hard, etc.

5. The transformations are written in a SPARQL-derived rule format that itself uses the transformation system to add features by composition.


I feel like you are being disingenuous. Do you have any experience with Lisp macros? Please enlighten me. Show me how you parse Java, into RDF, and then into an AST, in one line. And a link to this "already existing" parser would be great. Also, can you link me to some examples of this SPARQL-derived rule format so I can compare?


See

http://www.eclipse.org/articles/Article-JavaCodeManipulation...

as for SPARQL derived rule format see

http://spinrdf.org/

although we've done a bit to humanize the syntax and tried to add back some of the features that were common in production rule engines going back to the 1970s.


Yes, but parsing Java into an AST is much harder than parsing S-expressions into an AST...


I think this is not properly accounting for the difference between something that's 90% done, and something that is done. (Nothing is ever finished, of course, but you know what I mean.) As we all know, the difference between the two is not simply a gap of 10 percentage points. Far from it.

Teams of people were required to write Haskell and Dr Whatshisname tossed off Qi by himself. OK, but people use Haskell, does anyone use Qi? If not, then it's not a problem for Qi to be 90% done. And that's a lot easier to accomplish by oneself.


A lot of thoughts about Lisp are written by zealots who like to create the impression that they are playing with some kind of dangerous, mind-amplifying meta-language, and that if you start using it you might accidentally unleash creatures from the id. But then you look at it and find it is simply a programming language.


Should add a (2011) to the title.


Added.


"It's so damn hard to do anything with tweezers and glue that anything significant you do will be a real achievement. You want to document it. Also you're liable to need help in any C project of significant size; so you're liable to be social and work with others. You need to, just to get somewhere."

Oh is it time already for another "C is for idiots" thread from Lisp supremacists again? If Lisp fanboiz were slightly less busy ridiculing the rest of the world over how stupid everyone else is, they might even have had some time to pay attention to silly things like performance and tooling. That might help the adoption a bit more than the supercilious rock throwing that is almost exclusively the tone of every single article/post from Lisp lovers.


C is the king of raw performance, there's no denying that. But for the vast majority of high-level, user-facing applications today it's a bad fit.

Clojure is more than fast enough for serious server-side programming, often coming close to Java in performance, with a minimal amount of tuning needed in the average case. When you add multithreading into the mix, it's not even a contest: the amount of time you'll have to spend to make idiomatic Java or C++ code run both correctly and fast is gargantuan compared to a language where concurrency was built in from the start.
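
A trivial illustration of the "built in from the start" part (a toy example, obviously not a benchmark): shared state updated from a thousand threads, with no hand-written locks.

    (def hits (atom 0))

    (let [futures (doall (for [_ (range 1000)]
                           (future (swap! hits inc))))]
      (run! deref futures))   ;; wait for all of them to finish

    @hits
    ;; => 1000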



