For one, the service that performs the vast majority of airfare searches -- formerly ITA Software, now Google Flights.
Having spent time at the MIT AI Lab and having co-founded a company whose principal product was a Lisp/C hybrid, I think the challenge with mainstream adoption of both Lisp-like and functional languages is the syntax. There's an element of "don't use a programming language that's hard to hire for" but I think that's secondary as it never bothered us or posed a real problem.
Naughty Dog's Crash Bandicoot games (which I also worked on) used Lisp for all the character control logic.
That's not quite right. These days Naughty Dog uses a DSL called DC built in Racket to write all the "data" in the game (everything from cut scenes to character attributes). Running DC produces data files shipped on the DVD of the game, and used by the big C++ engine that's running on the PlayStation.
They switched to C++ because they were bought by Sony and integrated into their landscape. They thought that sharing C++ code would be useful. As it turned out, they put Scheme back into their production pipeline.
There aren't a lot of symbols in Lisp, but there's plenty of syntax.
Using Racket:
(if bool then else) instead of (if bool then) or (if (bool then) (bool else))
(if (> x y)
    (x)
    (y)) ; fails for numbers because (x) is considered a function call (even though 3 is an invalid identifier and thus can be assumed to always be a number)
(define (fun x y) (...)) instead of (define fun (x y) (...)) or (define ((fun (x y)) (...)))
That's syntax.
Just because I'm not using {}'s here and infix there doesn't make it any less syntax. Those are just the two most basic forms, too; bring in loop? Forget about it. This also ignores things like '(@,) or (x . y), but I'm not a lisper so I don't know how often that actually comes up.
Technically, this might be syntax, but as someone in the beginning stages of learning Racket (and programming), it's so much less syntax to remember than even Python (underscore, double underscore, with, decorators, list comprehensions, etc. all have their own syntax nuances). The only syntax I see in Racket is the s-expression, the quote, and the dots. Everything else is just the flow of programming logic. It really does free my mind to work on the problem domain itself! Been enjoying SICP so much.
Racket doesn't get quite as bad (well, it does, but it tries to keep things looking like s-exps), but consider CL's LOOP macro (http://www.unixuser.org/~euske/doc/cl/loop.html). LOOP is (from what I understand) idiomatic, too. Yes, it's a macro (so is (defun ...), though), but it's syntax a CLer needs to know in order to deal with CL in the wild. FORMAT is famously even worse.
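A small taste of the LOOP mini-language (standard LOOP, nothing exotic):

(loop for i from 1 to 10
      when (evenp i)
      collect i)
;; => (2 4 6 8 10)

for/from/to/when/collect aren't function calls or even variables; they're LOOP keywords that only mean anything inside the macro.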
Yes, but I thought his point was more that whilst Lisp has a very easy, simple and regular syntax, ie. (func arg1 arg2 (func arg3)) and so on, it's less simple and regular when you get to the loop macro (loop arg keyword arg keyword...). Hence why I mentioned the Iterate library as something a lot of people use to get back to the regular syntactical appearance.
It's one of the strengths of Lisp imo; that you don't need to think much about how the parser is going to interpret your code (ie. missing semi-colons, whitespace, use curly brace here, square bracket there, etc.), just stick to (func arg1 arg2) and all you're left with is your own logic errors.
> It's one of the strengths of Lisp imo; that you don't need to think much about how the parser is going to interpret your code (ie. missing semi-colons, whitespace, use curly brace here, square bracket there, etc.), just stick to (func arg1 arg2) and all you're left with is your own logic errors.
What you describe is just the data syntax for s-expressions. Not the syntax of the programming language Lisp.
> What you describe is just the data syntax for s-expressions. Not the syntax of the programming language Lisp.
Exactly. The data syntax is what most people worry about. The names of the verbs (funcs/methods/etc.) may change from language to language, but the data syntax is what trips people up. I think Lisp has one of the simplest and clearest. There are very few cases of "oh you can't write that there, only nouns are allowed in that position".
I agree with your point, but I think we're arguing slightly different points here ;)
It's debatable whether "simple and regular syntax" is a strength or a weakness. Lisp/Scheme might be too regular for their own good. Consider the following statements in Scheme, for instance:
(lambda x (+ x x))
(cond (> x 2) (+ x x))
(if (> x 2)
    (do-this when-true)
    (also-do-this when-true))
They are syntactically correct (technically), but they are probably not what you meant. So you still have to pause and ask yourself how cond works... except the parser will not help you.
That is to say, a problem with s-expressions is that they are so regular that everything looks the same, and when everything looks the same, it can become an obstacle to learning. Mainstream languages are not very regular, but they are more mnemonic. I think Lisp works best for a very particular kind of mind, but that for most programmers its strengths are basically weaknesses.
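For reference, what the first two were probably meant to be (the parameter list and each cond clause need their own parens):

(lambda (x) (+ x x))
(cond ((> x 2) (+ x x)))

The broken originals are still perfectly valid s-expressions, which is exactly the trap: the reader accepts them happily, and whether anything downstream catches the mistake depends on the implementation.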
SBCL will warn or error at compile time on the first two, and there are similar issues to the third one in many languages; it's a semantics issue more than a syntactic issue.
An equivalent to iterate/loop, where each compound form is replaced by an anonymous function and each binding by a dictionary entry, could be implemented completely as a function. Is this also new syntax?
If not, how is the macro different other than implicitly changing the evaluation?
For a simpler example: why is the idiom CALL-WITH-FOO (implemented as a function) not syntax, while WITH-FOO (implemented as a macro) is? What precisely counts as syntax is somewhat nebulous (if I use a regex library in C, have I added syntax to the language? Regexes certainly are syntax, despite being wrapped in a C string).
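To make that concrete, here's the usual shape of the pair (a sketch; *foo* and make-foo are placeholder names):

(defun call-with-foo (thunk)
  (let ((*foo* (make-foo)))   ; rebind some special variable
    (funcall thunk)))         ; plain function: the caller passes a lambda

(defmacro with-foo (&body body)
  `(call-with-foo (lambda () ,@body)))   ; macro: the caller just writes body forms

They are operationally identical; the macro only hides the (lambda () ...) wrapper. If the second counts as new syntax, the first is the same syntax wearing a lambda.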
(3) vs 3 is both, really. In, say, Smalltalk you can call 3 and it would return 3, because semantically it's an object, whereas in Lisp it's not callable (even if in CL it may be represented as a CLOS object, I don't know).
A syntactic ( ) isn't actually a procedure call. We can see this in (define (id x y) (..)) or (let ((x 3)) ...). In the theoretically pure Lisp it's semantically just a leaf in the tree, but as part of an if-block in a real language it gets treated as a procedure call even when that makes no sense.
The syntax is the same. Both are s-expressions. The difference is how a particular implementation interprets them. In this example, it would depend on the semantics of def.
(case id
  (10 (foo))
  (20 (foo) (bar))
  (otherwise (baz)))
The expressions are written using s-expressions as data. But still there is structure in those s-expressions, described by the EBNF syntax of CASE.
Every special operator and every macro provides syntax. Since users can write macros themselves, everybody can extend the syntax. On top of s-expressions.
That is interesting. I've always considered lisp in terms of denotational semantics. In fact, I wrote a toy lisp in which the complete grammar was basically
list -> ({symbol | number | string | list}*)
and then it was up to the interpreter to decide the meaning of special forms. (I say "basically" because there was also desugaring of '(...) to (quote ...)).
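In Common Lisp terms that desugaring is just the standard quote reader macro, roughly:

(set-macro-character #\'
  (lambda (stream char)
    (declare (ignore char))
    (list 'quote (read stream t nil t))))

i.e. it happens entirely in the reader, before the interpreter ever sees the form.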
Lisp "as the idea" is not a programming language. Racket is a language, Common Lisp* is a language. No one writes code in the IDEA of lisp, indeed no one can because no computer yet can pull instructions out of what ever aether contains platonic ideals.
* using SBCL: (defun foo (x y) (...)) instead of Racket (define (foo x y) (...)) is again an example of syntax.
It's really such a simple syntax change from f(x) to (f x), yet it makes an enormous difference and opens up a whole new world of possibilities. Sure, there are homoiconic languages that aren't Lisps in which you can write macros, but the expressive power and ease of use suffer. Take for example macros in Julia (itself heavily Lisp-inspired): they're possible, but ugly and not nearly as seamless as macros in Lisp.
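For comparison, a whole new control construct in CL is this small (my-while is a made-up name, to avoid clashing with anything real):

(defmacro my-while (test &body body)
  `(loop while ,test do (progn ,@body)))

;; usage: prints 0, 1, 2
(let ((i 0))
  (my-while (< i 3)
    (print i)
    (incf i)))

Since the code is already the data structure the macro manipulates, there's no separate AST layer to fight with; that's the part that gets ugly in languages where macros were bolted on.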
That's exactly the problem: until you're used to it, it looks nothing like pseudocode. Contrast with Python. I'm not saying it's right, I just think this is the issue.
> Hint: if the first notation is so superior, why don't math papers use it.
Math papers usually use neither the first nor the second.
they use:
(a + b)(c + d)
in the example you propose. And, reversing the operators, the first style would have:
(+ (* a b) (* c d))
and the second:
(a * b) + (c * d)
math papers would usually have:
ab + cd
So, I'm not sure "math papers do it differently" is the argument you want to use to advance your second syntax over the first.
Of course, since in lisp + and * are variadic rather than binary operators, they are a lot more like the pi and sigma operators applied to sets in mathematics than binary operators. Which are prefix, not infix. So, there's that.
Additionally there's more than one macro system out there to allow for infix math in Lisp... And for non-mathy things, in Clojure at least you're often using the threading macro -> or some variation of do/doto.
I do understand, but I'll also point out that that first expression will often (with longer variables or expressions in particular) be broken out like:
(* (+ a b)
   (+ c d))
Which is readable, though not necessarily compact.
Also, * and + in the former aren't strictly the same as in the latter: * and + take an arbitrary number of parameters in CL. From [0], `(* )` => `1`. I can't test, but I believe `(* 2)` => `2` (the spec doesn't describe the case of a single parameter, unless I'm missing it). `+` is the same, but `(+)` => `0` instead, its identity value.
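For what it's worth, implementations agree on the one-argument case (the product or sum of a single number is just that number):

(*)     ; => 1
(* 2)   ; => 2
(+)     ; => 0
(+ 2)   ; => 2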
Order of operations is made more explicit, and, I've found, it's more useful to think of `+` and `*` as `sum` and `product` rather than `plus` and `times`.
Math does also have `sum` and `product` in the form of Sigma and Pi. Of course, not exactly the same thing (since they operate over a set, not discrete elements).
I would venture to say that the reason infix notation is naturally preferred is related to our psychology, the same way most human languages are SVO (Subject Verb Object) or SOV. VSO languages (Lisp-like) are less prevalent.
In general my opinion is that when a majority vastly prefers one alternative, there is usually a strong reason for it (even if it may be irrational) and it's foolish to go against the grain.
I seem to recall that SOV (reverse Polish notation) is marginally more prevalent than SVO among the world's languages... though it is true that most creoles are SVO, which does at least seem to indicate that it's a default of sorts.
Infix is mostly used, in programming, within mathematical and logical expressions. But the majority of my code spends its time in some kind of chain of function calls, which has the verb first. Maybe if I did more with OO-languages I'd see it differently?
Interesting. OO syntax often is object.function(arguments), which is subject-verb-object order. I never thought of it that way before. You can throw some adverbs in among the arguments, too.
That corresponds with how Java and especially Obj-C and super especially AppleScript programmers try to write code that reads like COBOL, er, English.
If you have Quicklisp[1] installed you can install the "infix" package and get infix notation in Common Lisp[2]:
$ sbcl
This is SBCL 1.2.4.debian, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.
> (ql:quickload 'infix)
; Loading package
(INFIX)
> #i(1 + 1) ; addition
2
> #i(2^^128) ; exponentiation
340282366920938463463374607431768211456
> (defun factorial (x)
    #i(if x == 0 then
         1
       else
         x * factorial(x-1))) ; infix function call
FACTORIAL
> (factorial 5)
120
> #i(factorial(5) / factorial(6))
1/6
> '#i((a + b) * (c + d)) ; Put a ' before the #i() to see what code is generated
(* (+ A B) (+ C D))
Don't know if there is a similar package for Scheme.
I have to agree with others in the thread that infix in Lisp/Scheme is not the convention, and IMO an awkward fit. I don't recall encountering infix in any published/shared code I've seen; it may exist, but to learn Scheme, becoming comfortable with s-expr notation is definitely necessary.
However, there is SRFI 105[0] which describes "curly infix expressions". It's implemented in Guile 2.x, possibly available in a few others but evidently not had a lot of uptake among Schemes.
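The gist of SRFI 105 (these readings are from the SRFI itself):

{a + b}          ; reads as (+ a b)
{a + b + c}      ; reads as (+ a b c)
{a * {b + c}}    ; reads as (* a (+ b c))

Mixed operators get no precedence; you have to nest the braces, so it stays unambiguous.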
I wouldn't recommend using infix libraries if you really want to get into Common Lisp though. They're a bit of a crutch for people coming from other languages, but that's it.
Pretty much the whole language is based on Polish notation. The sooner you realise that + - * / are just function names like any other, the better you'll do.
For example:
(+ 1 2 3)
in plain symbols is just:
(function parameter parameter parameter)
But if I were to write my own addition function:
(addition 1 2 3)
it would also be:
(function parameter parameter parameter)
and so is:
(http-request "http://www.google.com")
(function parameter)
If you use infix notation, you're writing half your code in a completely different semantics from the other half. I can't imagine it helping people really get a proper grasp of how Common Lisp works.
Nobody claimed Lisp syntax was optimal for math papers. Neither is C syntax. The Lisp syntax does have advantages for program source code. Sure, it has disadvantages too. Everything is a compromise.
The Lisp syntax is so incredibly controversial, and that fact itself is incredibly strange to me. I see it as a pragmatic engineering decision: let's represent source code as syntax trees, and then sophisticated editing modes become straightforward, macros become straightforward, and the syntax becomes very uniform.
This big thread indicates another reason Lisp isn't popular: because people keep arguing back and forth about the textual syntax, rather than discussing actual experiences with using it.
There are actual features which make Lisp a bit more difficult to understand, and a few are related to syntax, especially the code-as-data feature. Some elements of the language have different purposes: both as code and as data. Symbols can be data and they can be identifiers. Lists can be data and they can group elements of the programming language. Others have only one purpose. For example an array is always data.
Symbols and lists behave differently depending on the context:
Examples for Lisp snippets:
(foo bar baz) ; it could be a macro, function or special operator form
(quote (foo bar baz)) ; here it is data
(defun do-something (foo bar baz) (a b c)) ; here it is an arglist
(defun do-something (a b c) (foo bar baz)) ; one element later it is a form
These contexts need to be learned, and the actual visual clues are a) the symbol in front and b) the structure of the expression.
This is puzzling for a lot of people. A few never really take that hurdle.
Yes, they learn the second syntax from birth. There have been arguments to teach the lisp syntax in mathematics due to it being easier to understand with multiple arguments:
(+ x y z a) instead of (x + y + z + a)
Also, there are no order of operations problems with the lisp syntax like there are with traditional mathematical notation (unless you use parens, which makes it look even more lispy).
So why don't we write all binary operations that way? eg. x f y instead of f(x,y). I've always felt more comfortable with prefix notation, especially because it easily generalizes to more than two arguments. I think infix notation is an accident of mathematical history.
Because it preserves the "form" of the original expression and generalizes to more arguments, i.e. it doesn't require liftAn for whatever n number of arguments my function takes.
Arithmetic is one place where infix notation is generally easier to read. If I were writing a program that basically just did a load of mathematics I may even consider using a different language.. However looking over the software I generally develop, I probably need to use arithmetic in about .01% of the code.
Something nobody has mentioned yet is that, in the C-style version, the precedence rules are eliminating some parentheses. You can't do that in Lisp (except maybe with a macro). But then, in Lisp, you don't have to remember precedence rules.
In this example, the advantage is on the C side, because pretty much everybody who knows any math knows that multiplication has precedence, and they can just read that syntax. If you have to go look at the precedence chart in K&R or Stroustrup before you know how to parse the expression correctly, well, then the Lisp approach is probably more efficient...
You are now arguing against parens. You can have mostly prefix syntax without parens, with blocks delimited with indentation only. Scheme's sweet-expressions[1] are one such example. Anyway, please take my example, remove the parens and check if your argument still applies.
If it does, then it's down to the function names and your (common) misconception that "+" or "^" is somehow more readable, easier to understand or something than "sum" or "sqr". Where I simply disagree. BTW: why do you insist on using infix syntax for a couple of operators, while you use every other possible operator in a prefix notation and are happy with it? What is the difference between "sqrt" and "-" which makes it ok to use sqrt in prefix form?
> Limiting the number of parentheses is also best when possible.
No. It's only best if it aids readability. This is something that Lisp does rather well actually - there are many examples of equivalent Java and Clojure expressions where Clojure version has half as many parens. Getting rid of parens for the sake of getting rid of parens is counterproductive.
Because your version takes up 6 lines! A simple one line expression!
And yes you can remove the parentheses, but not only does no one do that, it still takes up 6 lines. And then you have significant whitespace too.
>why do you insist on using infix syntax for a couple of operators, while you use every other possible operator in a prefix notation and are happy with it? What is the difference between "sqrt" and "-" which makes it ok to use sqrt in prefix form?
Because that's universal and standard for math notation. But also sqrt only takes one argument. If it took two arguments, then it would be perfectly reasonable to add an infix operator for it too. Many languages do add infix operators for everything from combining strings to ANDing booleans, etc, because they are so much more readable.
I think the thing about Lisp isn't the fact that its functions start with a paren. It's the fact that it uses function composition to write everything that makes it harder to keep track of. Most languages don't define their functions like this:
c = sqrt(a^2+b^2)
vs.
define(c, sqrt(sum(sqr(a),sqr(b))))
def getMaxValue(numbers):
    answer = numbers[0]
    for i in range(len(numbers)):
        if numbers[i] > answer:
            answer = numbers[i]
    return answer
vs.
(defun get-max-value (list)
  (let ((answer (first list)))
    (do ((i 1 (1+ i)))
        ((>= i (length list)) answer)
      (when (> (nth i list) answer)
        (setf answer (nth i list))))))
If you could only use Python functions:

defun(get-max-value, [list],
  let(answer, first(list)),
  do((i, 1, (1+ i)),
     (>=(i, length(list)), answer),
     when(>(nth(i, list), answer),
          setf(answer, nth(i, list)))))
defun(get-max-value, [list], let(answer, first(list)), do((i, 1, (1+ i)), (>=(i, length(list)), answer), when(>(nth(i, list), answer), setf(answer, nth(i, list)))))
Not really. We are demonstrating how multiline statements become hard to read in lisp because in practice you can only use function calls to write everything.
Any language where you write an entire function as a huge one-liner expression with functionality in nested function calls is hard to read. It's the behavior, not the syntax per se.
What the actual behavior is doesn't matter as much, even if you can reduce both of them to one liners in many languages.
Not really, since Lisp does not only have function calls, but special forms and macros.
In actual Lisp practice, one uses macros, special forms and function calls.
You seem to have failed to understand the difference.
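For instance, nobody writes the DO version above in real code; the idiomatic CL (assuming a non-empty list) is simply

(defun get-max-value (list)
  (reduce #'max list))

which is one function call and no loop machinery at all.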
There are two REAL reasons why Lisp is harder to read than some other languages:
* the syntax looks and works slightly differently, and most programmers have been trained on other programming language syntax. With training, this is less of a problem.
* Lisp uses a data syntax as the base layer of the programming language and encodes programs as data. So the syntax of Lisp comes on top of s-expressions. Very few other programming languages do that, and as a consequence it complicates a few things. The user of Lisp has to understand the effects of code as data. This is something you don't have to understand in Java or Python. It can be learned, but it has to be learned.
At the same time, this code-as-data principle of Lisp gives a lot of power, flexibility and new capabilities. It makes Lisp different and in some ways more powerful than Java or Python. The added power comes from easy syntactic metaprogramming in Lisp, which neither Java nor Python provide. This also has consequences for interactive programming, since programs can be written and manipulated by programs under user control.
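The shortest possible illustration of the code-as-data point:

(defvar *form* '(+ 1 2 3))   ; a plain list: a symbol and three numbers
(first *form*)               ; => +   (the list treated as data)
(eval *form*)                ; => 6   (the same list treated as code)

That round trip between ordinary data and running program is the thing Java and Python don't give you.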
I already mentioned sweet-expressions in another comment. And also, there's one important thing we're forgetting when discussing syntaxes, which is the fact that we very rarely read the code without syntax highlighting, so the examples you posted actually look like this: http://pygments.org/demo/3781004/
I'm not sure if this is what you're talking about but there actually is a Lisp where you can call Python functions. It's called Hy[1] and I encourage you to take a look, it borrows some good solutions from Clojure, but generally is quite an acceptable Lisp :)
Why do mathematicians not use s-exps but syntax that is much more similar to C? Reading "sum", "mul", etc. takes longer than if you have visual anchors like * and +. And infix is an advantage for simple expressions, because the operators split the arguments, whereas with sexps you have to parse from left to right and count parens.
Please tell me why should I care. No, really - I'm a programmer, not a mathematician.
> Reading "sum" "mul" etc. takes longer than if you have visual anchors like * +.
Citation for this?
IMO it's exactly the opposite, but I may be wrong. Some kind of reference would be nice.
> And infix is an advantage for simple expressions, because they split arguments, where as with sexps you have to parse from left to right and count parens.
Ok, so 2 ("sum" vs. "+", 3 vs. 1 char) additional characters are bad, because they take longer to read, but for example 3 additional characters here:
(+ a b c d)
vs.
a + b + c + d
are good, because they take longer to read. That's interesting.
>> Reading "sum" "mul" etc. takes longer than if you have visual anchors like * +.
> Citation for this? IMO it's exactly the opposite, but I may be wrong. Some kind of reference would be nice.
I know it from myself and don't think I have to provide evidence that, by and large, most people work like this. Reading and interpreting text is just WAY more complex a process and thus much slower than associating a shape with a meaning.
For example, application designers have known for a long time that it's important to build a symbolic language (icons etc) because that's just way faster (once you have learned what the symbol means, for example with the help of a tooltip).
> I know it from myself and don't think I have to provide evidence that, by and large, most people work like this. Reading and interpreting text is just WAY more complex a process and thus much slower than associating a shape with a meaning.
I don't think there is a difference in speed between reading "sum" and "+". You don't read the word "sum" letter by letter: you see it as a whole token and your brain recognizes it instantly.
> For example, application designers have known for a long time that it's important to build a symbolic language (icons etc) because that's just way faster (once you have learned what the symbol means, for example with the help of a tooltip).
You're talking GUI, which is different from writing and reading code. There are, for instance, far fewer GUI elements visible on the screen than there are identifiers even in a short snippet of code, and there is much more context available for deduction in the code than in the GUI. I don't think the two situations - recognizing GUI features and recognizing and understanding identifiers in the code - are comparable.
Come on. It's totally obvious that the shapes of * and + are much simpler and much more distinct than "sum" and "mul" (which by the way are relatively short and simple examples for words).
Humans have excellent shape recognition -- recognizing (and differentiating) a tree and a person happens subconsciously, effortlessly. Interpreting the words "person" and "tree" takes way more effort.
Similarly, humans have usually very good spatial sense. If there are persons to the left and to the right of a tree, it is effortless to recognize that they are "separated".
> You're talking GUI, which is different than writing and reading code.
No. I'm talking perception.
> There are, for instance, far fewer GUI elements visible on the screen than there are identifiers
That depends. There are very complex GUIs out there. But let's assume it for a moment.
(By the way, that typically means the code is not good (weak cohesion).)
> there is much more context available for deduction in the code than in the GUI.
That is not supportive of your previous argument: The more identifiers, the less context per identifier.
> I don't think the two situations - recognizing GUI features and recognizing and understanding identifiers in the code - are comparable.
It's both about perception. It's very, very important that programmers can focus on their work instead of wasting energy building parse trees in their minds, incurring severe "cache misses". Again, take this simple commonplace example:
(sum (mul a (minus (b c)) d)
a*(b-c) + d
If you don't think there's a huge difference I can't help you. I'm sure I need about three seconds to parse the sexp as a tree and figure out what goes with what. Then I have to go back and interpret the operators.
Conversely, the infix/symbol operators example I can map out with minimal, and linear, movement of the eyes. In most cases I don't even need to parse it as a tree -- it's almost a sequence. On a good day, it costs me maybe a second to parse the thing and extract the information I need.
Another advantage of symbols for arithmetic is that they give a sense of security, because one can intuitively infer that they have "static" meaning. While usually words are reserved for things that change, i.e. mutable variables. Being able to infer non-mutability based on shape alone gives a huge advantage.
> Come on. It's totally obvious that the shapes of * and + are much simpler and much more distinct than "sum" and "mul"
I disagree that it's obvious. Moreover, I don't believe there is a measurable difference between the speed of recognizing "sum" and "+", once you're equally familiar with both.
> The more identifiers, the less context per identifier.
I don't believe it's that simple, but we're starting to go into semantics (which are part of the comprehensibility of code, but not part of its readability, I think).
> If you don't think there's a huge difference I can't help you.
I think you can help yourself: just go and train yourself in reading prefix notation, like I did. Then get back to this example and then tell me again that there is a huge difference.
> I'm sure I need about three seconds to parse the sexp
I don't even know how to measure the time I needed to read the sexp, it was that short. And I even instantly realized that you've put parens around "b c" in the "minus" call, which would cause an error in most lisps.
> Conversely, the infix/symbol operators example I can map out with minimal, and linear, movement of the eyes.
That's why I used newlines and indentation in my example above. To take your example:
(sum (mul a (minus b c))
     d)
This also reads linearly, just in a different order than you expect. This doesn't make it objectively harder or slower to read, it's just unfamiliar to you.
I'm not interested in differentiating between readability and comprehensibility. If I want to work on some code I need to comprehend it. That starts "in the small" with the mechanical aspects of "readability", if you will. Where to draw lines is not relevant. Every aspect of the process of comprehension is important. The "small" aspects are more important than you might think, because they affect every line of code, whereas there are fewer instances of the more global aspects of comprehension.
Like in a binary tree, where half of the elements are in the lowest level.
> you've put parens around "b c" in the "minus" call
You have a point. One pair of Irritating Superfluous Parentheses less.
> (sum (mul a (minus b c))
> d)
Even the consideration to sprinkle such a trivial expression over multiple lines hints at the superiority of a * (b-c) + d. It's just the most straightforward thing to do. No far-fetched argument can change that.
I'd love to see eye-tracking data which show the tradeoffs between various syntaxes.
The regularity and the simplicity of sexps is of course good for computers, because these can barely associate. Because they can't learn new tricks (they have fixed wiring). But humans have streamlined their languages (which also includes syntax; again, I'm not differentiating here) to their environments since forever.
Sexps are also good for abstraction and meta programming. But as we all know abstraction has a cost and there is no point in abstracting an arithmetic expression. And most code, for that matter.
> I'm not interested in differentiating between readability and comprehensibility.
Fair enough, but then please stop using single letter variable names, add type annotations where applicable, provide docstrings and contracts for functions. Comprehensibility is so much more than syntax that I think mixing the two will make for even more interesting, but even less fact-based discussion.
> I'd love to see eye-tracking data which show the tradeoffs between various syntaxes.
Yeah, that would be very interesting. The thing is, there is no such data available, but you still are convinced that one kind of syntax is better than the other. I'm not - from where I stand the differences and tradeoffs in readability of syntaxes, once you know them equally well, seem too minor to measure.
> Even the consideration to sprinkle such a trivial expression over multiple lines
No. It's just different way of getting to the same effect. I don't see why would one be worse than the other (splitting things using infix operators vs. splitting things using horizontal and vertical whitespace).
Other than that, you completely avoided the familiarity issue. Do you think that we're genetically programmed for reading infix syntax? If not, then it means we need to learn infix syntax just like any other. My question was, would someone not yet exposed to infix propaganda have a harder time learning infix (with precedence rules and resolving ambiguities) or prefix?
You also ignored my question about the difference in readability when you are equally well trained in both syntaxes. You can't compare readability of two syntaxes fairly unless you have about equal amount of skill in both. And the fact that readability is influenced by skill is undeniable. So, in other words, are you sure you're as skilled with sexps - that you wrote comparable amount of code - as with infix? Honestly asking.
> No. It's just different way of getting to the same effect. I don't see why would one be worse than the other (splitting things using infix operators vs. splitting things using horizontal and vertical whitespace).
It's very important since size matters. Efficiency of encoding and cost of decoding (~ perception) matters. But if you don't think it makes a difference -- fine, you are free to read braille instead of plain text even if you have perfect eyesight. You can also add three layers of parens around each expression if you think that's more regular.
> Do you think that we're genetically programmed for reading infix syntax?
No. There's this fact that all combinations of basic grammar are represented in natural languages: SVO, SOV, VSO, VOS, OSV, OVS. And then there are some programming languages which don't differentiate between subjects and objects, but go for (OVO), VO, VOO, VOOO... (or concatenative style OV, OOV, OOOV...). Which is great since the goal of formalism is to be "objective". (Note that Object-oriented programming is actually subject-oriented programming from this standpoint. It's not "objective")
Instead I say that it is more efficient if syntax is optimized for the common cases. Shorter is better, if the decoding won't produce more cache misses. Infix and symbols don't produce cache misses for the vast majority of humans, in the case of arithmetic (read: mostly sequential, barely tree-shaped) expressions.
Sexps are inherently unoptimized for the common cases. They are "optimized for abstraction": for regularity. It is an explicit design goal to not differentiate things which are different "only" on a very concrete level. Instead of content, form is accentuated. This is not suitable for the > 95% of real life software that is just super-concrete and where abstraction has no benefits.
I'm sure I have now given 5 to 10 quite plausible examples which support the standpoint that symbols-and-infix arithmetic is good for humans, based on how their minds work. You haven't provided any counter-arguments, but just shrugged everything off. But thanks anyway for that. I think I'm satisfied now with the examples that came out.
> are you sure you're as skilled with sexps [..] as with infix?
No. Never will be.
Are you? Show me a Lisp program with more than casual usage of arithmetics and tell my why you consider it readable. By the way, the first google hit I just got for "lisp arithmetic readability" is http://www.dwheeler.com/readable/
As a mathematician, sum(whatever) or product(whatever) reads just fine. There are, in fact, a lot of uses of sums and products, so after a while they read pretty naturally.
Yes, sum(whatever) is fine. What is "whatever"? If it's the common case of two operands, then I don't think you're making a point against a + b.
And you don't think (sum (mul a (minus b c)) d), or (+ (* a (- b c)) d) for that matter, is more readable than a * (b-c) + d, do you?
> There are, in fact, a lot of uses of sums and products, so after a while they read pretty naturally.
I think you are talking about summing up a collection (like, an array, a matrix, etc.) as opposed to building an expression tree. Of course, sum(myIntList) is just fine. That's a whole different story.
There are also the rare cases where you have to sum, like 6 integers. (sum a b c d e f) might not be worse than a + b + c + d + e + f. But that's by far not the common case in most problem domains. The common case is like a*(b-c) + d.
I already admitted that it was a guess. But I still think I can defend it.
Starting in elementary school, everyone learns to read math notation. By high school, everyone knows what
c = sqrt(a*a + b*b)
means. The Lisp version may be easier to read for those who have spent enough time using Lisp. That's not the majority of programmers, though, and it's only a tiny minority of the general population.
Do you think that, to a non-Lisp programmer, the Lisp version is easier to read? Do you think it is easier to read to a non-programmer who has had high school math? Or is it just easier to read for you?
> Starting in elementary school, everyone learns to read math notation. By high school, everyone knows what
We're either talking about objective readability or personal familiarity. What you say is that, after extensive training for many years, it is easier for people to read notation they were trained to read. This is both true and utterly uninteresting.
What is interesting, though, is how much training you need to read prefix and how much training you need to read infix. It's obvious that infix takes more time to learn: operator precedence and things like using "-" in both infix and prefix forms make it objectively more complex than prefix notation. You just forgot how much time you spent learning it.
> Do you think that, to a non-Lisp programmer, the Lisp version is easier to read? Do you think it is easier to read to a non-programmer who has had high school math?
Again, this is not interesting at all. You're talking familiarity, not readability. Of course, it's easier to read something you've been taught to read. To make this more objective, take an elementary school kid - who wasn't exposed to years long infix propaganda - and check both notations' readability with them.
Personally, I learned to read just about any kind of notation used in programming. From my observations, there are only minor differences between the speed of comprehension when using different notations - once you've trained enough. The difference is how much training you need. I can tell you that reading J - an infix language, it's an APL descendant - took me much, much longer to master than reading Lisp.
Learning lisp syntax requires just a very short introduction.
In SICP, it is said that they never formally taught Lisp in class. The students just pick it up in a matter of weeks.
It's probably because in Algol-derived languages, when you encounter parentheses, something weird is happening -- something you've got to put extra thought into.
Maybe the ordering of something is being forced. Maybe something else is going on, but whatever it is requires more thought than things that are parentheses free.
So you look at Lisp and your brain locks up the brakes, with "WTF is going on here??? I'm out".
I guess that's ironic, but with people coming up used to all the ornate syntax, one of the common balks is "all those parentheses, it all looks the same" and "there is no syntax, how do you read this."
It's like going from Arabic numerals to counting by groups of five; initially, it feels like you're losing expressive power. And, of course, at a glance, you can't read "|||||||||||||||||||||||||||||" as quickly as you can "28".
So, uhh... why not use a notation where nesting is expressed via indentation, then? I don't understand how the syntax can be considered superior if the way people actually cope with it is to hide another syntax inside it.
I was dumping ASTs as part of a little language project recently and my first impulse was to render them as S-expressions. Alas, it just wasn't readable; I couldn't make any sense of it. Indented YAML style lists, though? The structure pops right out and the information I wanted was immediately obvious. There were no constraints here, I was free to render text in any way that suited me; the Lisp style syntax just wasn't helpful.
I prefer having a parser produce an abstract syntax tree. From an expressiveness viewpoint, you get a good deal of raw power from being able to manipulate this directly instead of having to instruct a lexer/parser to do it for you. I don't necessarily think this is a good thing though.
I know almost nothing about lisp, but what comes to mind is car and cdr, which aren't exactly the most descriptive keywords among the programming languages I have seen.
(I think they are replaced in modern versions but my point still stands as I remember the old ones, not the new)
Yes, sometimes 'car' and 'cdr' are replaced with 'first' and 'rest' and are some of the first things you cover when learning the language, so I don't think the names would really be hindering adoption for those that try to learn the language. The benefits of car and cdr are that they can be composed: (caadr l) instead of (car (car (cdr l))) for example.
I like Racket's second, third, fourth,... etc. [1]
I don't think there's an equivalent to cddadr, for example, but at that level of deconstruction you're better off abstracting the data structure (maybe a struct [2]) or using some other mechanism like match [3].
Sure. caaaaaar -> heaaaaaad. cadar -> heaiad. It's one more character, but if you really want to express yourself that way, there's no reason you couldn't...
If you mean Lisp as a straightforward expression of "Lambda calculus" then I would agree.
However, Lisp as in Common Lisp most certainly has a good amount of syntax. And let's not forget macros which amount to user defined syntactic extensions.
Here are some examples of syntax built into the standard:
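A handful (each of these is in the standard, and each has its own grammar on top of s-expressions):

(loop for x across vec maximize x)           ; LOOP keywords
(format t "~{~a~^, ~}" names)                ; FORMAT's directive language
(defun f (a &optional (b 0) &key c) ...)     ; lambda-list keywords
(destructuring-bind (x (y . z)) form ...)    ; destructuring patterns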
Let's not forget that most of what one might think of as built-in features in Common Lisp are actually standardized extensions to the language built with macros.
All of the power, expressivity, and extensibility of Common Lisp is what makes it my favorite programming language. It's what makes everything like above possible and gives power back to the user. But ignoring the syntactic complexity will not win us any followers!
TLDR: Common Lisp isn't simple, but it exposes one of the most powerful and empowering programming environments we have.
> If you mean Lisp as a straightforward expression of "Lambda calculus" then I would agree.
That's what I was talking about. The language, being programmable, especially with read macros, can be used to create a very syntactically full language, but at its basic core level, there is really just '(', ')', '.' and symbols.
Is it possible for you to share some information on this, if allowed by your company / employer? This is the first I've heard of this; it sounds quite interesting.
Lisp is more like a research language than an implementation language for bean-counting apps. The better question is what bleeding-edge things have originated, are being done, or have been done in Lisp.
John Carmack is doing VR research with Racket.
Christian Schafmeister is doing molecular metaprogramming.
Raytheon implemented a signal processing analysis pipeline for missile defense in Lisp.
Commercial Lisp vendors keep lists of some of their customers. Specialized CAD programs like Bentley PlantWise are not popular, but they are very complex.
I'm regularly hearing about web applications written in ClojureScript, a.k.a. Clojure (a dialect of Lisp, [1]) compiled to JavaScript. Recently I've heard of:
The underlying 'functional assembly' VM language of Urbit (Nock) is a lisp that works on arbitrarily large integers. Here's a compiler for it I wrote a while ago, written in Common Lisp:
A better question: "What impressive programs are written in Lisp?". Or "Why isn't there a Squeak-like Lisp machine environment, where 'compatibility' doesn't have to matter?". If a language is incredibly productive and programmable, why don't we already have a VPRI-like STEPS environment in 20,000 lines?
I'd call this only anecdotal, but one or more benchmarks assert that cl-ppcre, Common Lisp's Perl-compatible regex implementation, is faster than any other, including Perl's.
The larger question I'm intuiting from your post, "Why doesn't language power make a difference in practice?" I don't have an answer to.
> Why doesn't language power make a difference in practice?
Depends on your definition of "make a difference in practice". If you mean "make the language become one of the dominant ones", yeah, that doesn't seem to have happened. Either Lisp is less effective in the large than one would expect from its power, or it's less powerful in practice than people think, or power has almost no relation to language dominance.
But if you mean "make a difference to the user", well, it lets the user more easily write the program that the user wants to write. In practice, that makes a difference - to that user.
This is covered in the article. You might say it's the central point of the article - lisp attracts "lone wolf" programmers who want to build perfect abstractions closely mapped to the real-world problem; as opposed to projects that require many man-years of effort run by MBA's who want fungible "resources" to do their tiny-bite-sized pieces according to spec. The philosophy is different.
Not many, but even so, this is never the right question to ask.
That question would be: what programs are written in <LANGUAGE> that couldn't benefit from being rewritten in another language?
And most often, the answer to that question is none.
Because languages matter a lot less than language fanatics want you to think.
As for Lisp, I used to be a total fan until I realized the importance of a sound static type system, and now I will never go back. Lisp will never go anywhere because this is the 21st century and we know now that static type systems are an absolute requirement for modern programming.
Right. That's why no one seriously considers JavaScript for any new project, and no one would propose the idea of using it on a server.
I am not saying that you are wrong in liking static typing, but arguing that dynamically typed languages are non-starters in this decade is a statement that is easily disproven by the existence of JavaScript.