Why functional programming? Why Haskell? (realworldhaskell.org)
63 points by tosh on Aug 10, 2013 | 83 comments



Haskell is a badly designed language in many respects, and it promotes ugly code. I don't think it will ever become the mainstream statically typed functional language.

Here is my list of bad language features:

1. It doesn't support dot notation for records.

2. Record names can't be overloaded.

3. Monads can lead to messy code due to verbose syntax. For example, instead of writing c <- openDbConnection (readConfigData), I have to write ugly code:

  do
    conf <- readConfigData
    c <- openDbConnection conf
The situation becomes worse the more side-effecting parameters a function call needs.

5. Many extensions essential for productive development are outside the language standard, for example multi-parameter type classes and existential types. The Haskell' standard has been in progress for a very long time and still isn't finished.


I'm not sure what you mean by "dot notation for records", but yes, there are some problems with record notation. Lens/zipper libraries can help with that, though. (Overloading record names would be sweet in any case.)

> Monads might lead to messy code due to verbose syntax.

Your example could be written `do c <- openDbConnection =<< readConfigData`. If you need multiple parameters passed like this, you're right: you probably need to execute them separately and give their results names. That, however, can also lead to more readable code if getting the parameters isn't just a single statement.

> Many extensions essential for productive development are outside the language standard.

There is always this discussion about what "valid Haskell" is: on one side the "Haskell Report" camp, on the other the "whatever GHC accepts for the next 10 years" camp. A few remarks:

- GHC is a giant project that uses neither multi-parameter typeclasses nor existentials.

- I don't think language extensions are ever removed from GHC, just pronounced deprecated. (At least, I don't know of such a case; correct me if I'm wrong.)

- Heavy use of language extensions does mean GHC lock-in. It's free software, but I can still see how that could be a concern.


1. Don't see the appeal.

2. Definitely agreed.

3. "readConfigData >>= openDBConnection" can be used in a simple case like your example. Cases with more arguments can use "ap" or "<*>", which are perhaps a bit ugly, but most of the time you don't need them.

5. The fact that Haskell is a living language that is improving all the time is a point in its favor, I think.


I do think 2 is a real problem, but it seems Haskell people couldn't agree on the solution. Simon Peyton Jones (Haskell 98 standard editor) proposed a change but many people didn't like it.

http://ghc.haskell.org/trac/haskell-prime/wiki/TypeDirectedN...


FWIW, there is currently a GSoC project supervised by SPJ that is working on making overloadable fields available in GHC.


1. Nonissue in real life.

3. Don't modify parameters. It makes code easier to test in any language.

5. GHC may as well be the standard. Are you really going to use a different compiler?


>3. Don't modify parameters. It makes code easier to test in any language.

It's not about modifying parameters. It's about verbosity of the code. The type of the parameter is IO String.

>5. GHC may as well be the standard. Are you really going to use a different compiler?

It's a real issue. Different language extensions can change how type inference works. If you add one more extension, your program might suddenly stop compiling. We need a fixed set of language extensions that are always available.


> If you add one more extension, your program might suddenly stop compiling.

Do you have some real examples of this happening with GHC extensions? I haven't run across one in the field.

> We need a fixed set of language extensions which are always available.

This sounds very much like a language standard. Haskell isn't much different from other languages here, though the language committee is slow to ratify generally accepted extensions into the standard; GHC at least makes them available to people who want to kick the tires. If you think that's bad, read about the history of closures in Java, or look at the C++11 revisions and notice that ratification sometimes meant cutting features, even ones previously implemented by some compilers (template exports) or prototyped (concepts).


>Do you have some real examples of this happening with GHC extensions? I haven't run across one in the field.

I am not that advanced in how GHC's type system works. However, I understand that language extensions involve tradeoffs between how expressive the language is and how much type inference we get.


Many (most?) of GHC's commonly used language extensions enable new functionality or syntax (DataKinds, GADTs, RankNTypes, KindSignatures, TypeFamilies, FunDeps, MPTCs, etc). Enabling such extensions and not using these newly enabled features shouldn't affect type inference.


3. In general, Haskell is even terser than I'd like.

5. What on earth are you talking about? Conjecturing about type inference is not useful. While I might prefer some extensions to be on by default, in hundreds of thousands of lines of Haskell code I have yet to see adding an extension break existing code. The worst I have seen is adding an extension slow down compilation.

In Haskell you have to forget most of what you have learned in order to be successful. Otherwise you will fight the language.


> 3. In general, Haskell is even terser than I'd like.

It's the same kind of terseness that languages like Perl have. It's write-only code.


I have not found that to be true.


> The type of the parameter is IO String.

No, the type of the parameter is String. IO String is the type of the expression that gets that parameter when executed. There's a popular quote by shachaf, "getLine :: IO String contains a String in the same way that /bin/ls contains a list of files" that illustrates this nicely.
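A tiny sketch of the distinction (the `greet` function here is made up for illustration):

    -- getLine :: IO String is a recipe for obtaining a String;
    -- binding its result with <- is what actually runs the action.
    greet :: IO ()
    greet = do
      name <- getLine          -- name :: String, not IO String
      putStrLn ("Hello, " ++ name)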


I understand what a monad and the IO monad are, and what the types of the expressions are. I am just showing that the current monad syntax is ugly. connectToDb $ readParams "conf" could be transformed by a compiler into what you wrote.


readParams "conf" >>= connectToDb


I previously agreed with complaints #1 and #2, though at some point over the last few years this has stopped being a problem. It's not even Stockholm syndrome, I just don't use records that much anymore. The various lens packages can help with record overloading and dot indexed fields, but without some compiler support it will remain a sore spot for people who want/need a better record system.

#5 is fair, to a point. I don't find MPTCs or existential types to be useful in almost any case: I actively try to avoid both. ScopedTypeVariables, FlexibleContexts, FlexibleInstances, and potentially RankNTypes and the poorly named UndecidableInstances, should be standard though (imo). Some people also find OverlappingInstances and the like to be generally useful, but you probably don't want to hear what I'd say about that subject.

There are plenty of other (also subjectively ugly) ways to write #3. SHE (a Haskell preprocessor) deals with it in one way; see the pigworker's idiom brackets: https://personal.cis.strath.ac.uk/conor.mcbride/pub/she/idio.... Without a preprocessor you can certainly use applicatives or some regular monad combinators to git'r'done, but a little sugar could go a long way. I'm not attached to any of these.

Lucky for us there are many languages to choose from, personal taste has proven a fickle mistress.

    import Control.Applicative
    import Control.Monad

    data ConfigData
    data DbConnection

    readConfigData :: IO ConfigData
    readConfigData = undefined

    openDbConnection :: ConfigData -> IO DbConnection
    openDbConnection = undefined

    openDbConnection2 :: ConfigData -> ConfigData -> IO DbConnection
    openDbConnection2 = undefined

    openDbConnection3 :: ConfigData -> ConfigData -> ConfigData -> IO DbConnection
    openDbConnection3 = undefined

    main :: IO ()
    main = do
      -- it does work...
      conf <- readConfigData
      _c0 <- openDbConnection conf

      -- how about a flipped bind?
      _c1 <- openDbConnection =<< readConfigData

      -- or join/fmap...
      _c2 <- join $ openDbConnection <$> readConfigData

      -- need two parameters? liftA2/liftM2 has been around for a while
      _c3 <- join $ liftA2 openDbConnection2 readConfigData readConfigData

      -- or use Functor/Applicative to deal with arbitrary numbers of side effecting parameters, longhand idiom brackets...
      _c4 <- join $ openDbConnection3 <$> readConfigData <*> readConfigData <*> readConfigData

      return ()


You can write the example in your third problem as "readConfigData >>= openDbConnection". This is, in fact, something like what your code desugars to. "do" notation is a useful crutch for difficult code; it's not fair to use that as an example of "verbose syntax".
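Roughly, the desugaring goes like this (a sketch; `useConnection` is a made-up placeholder):

    do conf <- readConfigData
       c    <- openDbConnection conf
       useConnection c

    -- desugars (roughly) into:

    readConfigData >>= \conf ->
      openDbConnection conf >>= \c ->
        useConnection c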


The code:

    do
      conf <- readConfigData
      c <- openDbConnection conf

Or just:

    do
      c <- openDbConnection =<< readConfigData
      -- do something with c

Where `=<<` can be thought of as an infix apply for monads.


You can use operators to deal with 3:

    do c <- openDbConnection =<< readConfigData

As for 5., GHC is the de facto standard.


> Since pure code has no dealings with the outside world, and the data it works with is never modified, the kinds of nasty surprise in which one piece of code invisibly corrupts data used by another are very rare.

If you're trying to sell me on Haskell, right there you just made me think "uh, ok, guys, but my code does not live in an ivory tower, and very much needs to deal with the real world, messy data, users, and so on".

I realize that Haskell can handle that, too, but in terms of copywriting, it leaves something to be desired.

They subsequently go on to give much more interesting reasons why you might consider Haskell, including examples of companies using it in the real world, but they should not lead with talk of 'purity'.


> in terms of copywriting, it leaves something to be desired.

Haskell advocacy is high in the running for the least effective in the business. Boosters can't seem to help but pitch the things they care about rather than the things the listener does. Maybe it feels satisfying, but it's no way to attract converts.

A more effective approach would be to choose common tasks people perform in other languages and demonstrate how they can be less (error-prone, time-consuming, verbose) in Haskell.

For instance, I recall reading about a Haskell web framework where the compiler guarantees user-generated content never makes it into any served HTML unsanitized. What a boon for security! However, this is not shouted from the rooftops; instead, it's taking me so long to track down that I've given up in the hope that someone will post it here instead.


You are searching for Yesod - http://www.yesodweb.com - which has more such fancy features, like making sure that you can never render an incorrect dynamic URL (because URLs are represented by data types).
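For the curious, the canonical Yesod hello-world looks roughly like this (a sketch from memory; exact API details vary between Yesod versions). `HomeR` is a constructor of a generated route datatype, which is why rendering a nonexistent URL is a compile error:

    {-# LANGUAGE OverloadedStrings, QuasiQuotes, TemplateHaskell, TypeFamilies #-}
    import Yesod

    data App = App

    -- Template Haskell turns this route table into a datatype;
    -- HomeR is its only constructor.
    mkYesod "App" [parseRoutes|
    / HomeR GET
    |]

    instance Yesod App

    getHomeR :: Handler Html
    getHomeR = defaultLayout [whamlet|Hello, world!|]

    main :: IO ()
    main = warp 3000 App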

I've used Yesod in actual real-world projects and it was always a pleasure, albeit sometimes slightly complicated - the new version (which I haven't used yet) promises to simplify this a lot though.


Thanks!


I think it's important to emphasize that while Haskell provides purity, it also enables the definition of impure code in a clear and distinct way, the idea being to encourage a clean "separation of concerns" between logic and interaction.

It's not really that Haskell removes impurity, more that it adds purity. The "main" function of any Haskell application is an IO action, but you can define pure functions which relinquish IO and global state. And the best practices include doing that as much as possible, which is a terrific way of making your program more comprehensible and correct.
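A minimal sketch of that split (the names here are invented for illustration): IO shows up in the type of anything that interacts with the world, and stays out of the type of anything pure.

    -- Pure: no IO in the type, so no side effects are possible here.
    summarize :: [Int] -> String
    summarize xs = "count = " ++ show (length xs) ++ ", sum = " ++ show (sum xs)

    -- Impure: IO in the type makes the interaction explicit.
    main :: IO ()
    main = do
      input <- getContents              -- effect: read stdin
      let ns = map read (lines input)   -- pure transformation
      putStrLn (summarize ns)           -- effect: write stdout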


I wonder if it would be possible to develop an idiom for programming in a language like C that separated those concerns. Put most/all application logic into const/pure functions, and then have a separate set of interaction functions, i.e. just pull out that one bit of Haskell philosophy from the rest of the language design.

Speculative guesses as to what might stand in the way:

1. The lack of a purity-tracking mechanism like Haskell's may make programming in this style in C more complex and error-prone.

2. The fact that C compilers don't expect you to use this programming style may mean they don't optimize common operations in the way Haskell does; for example, if you do manipulations on large arrays in a functional rather than imperative style, Haskell will optimize away a lot of the temporary arrays, but C compilers may not.
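To illustrate the Haskell side of point 2 (a sketch using the vector package's stream fusion; not part of the original comment): the pipeline below compiles down to a single loop over the input, and the intermediate vector produced by `filter` is never materialized.

    import qualified Data.Vector.Unboxed as V

    -- map and filter fuse into one pass over the input vector.
    process :: V.Vector Int -> V.Vector Int
    process = V.map (* 2) . V.filter even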


Since I started seeing the value of purity, I write imperative code in a different way. I don't care much about local manipulation of state for algorithms and such, but I write a lot more pure functions.

It's not only that purity is safe and beautiful. It's also in many cases much easier to understand, because it's about writing functions that express transformations. These transformations should make sense in a conceptual way.

When you distill your domain concepts (see "Domain-Driven Design") and write pure functions on domain data, the result is remarkably clear, testable, and debuggable, no matter which language.

Previously I would have been (prematurely) concerned with the efficiency of in-place updates, but since I'm not writing high-performance CPU-bound software, I don't care about that anymore until the profiler tells me to.


What is your imperative language? Do you write imperative code in OOP? Do you find yourself compelled to implement everything with static methods?


I currently work with Java. This codebase employs a kind of pseudo-OOP, according to a pattern that I think is at least somewhat popular. Data is mostly represented by objects with only getters and setters. Business logic is contained in "services," which are basically stateless singletons (handled by dependency injection).

The best code in the system, IMO, consists of small stateless singletons with clear dependencies and non-modifying operations. This code often comes out of test-driven development and has tests that read like specifications.


I am currently trying to get some of this working in a library I call 'libalgae', in which I'm encoding algebraic data types and their eliminators (similar to how Epigram works: an induction principle for each datatype, which lets us rule out non-terminating programs and ensure coverage of all constructors). I'm not much concerned with optimization, as I'd like to use this "DSL" as a bootstrapping language for a self-hosting programming language that provides the same in a more natural way.

The way I've done it so far seems to be working, and I've implemented standard types (their introduction and elimination rules) and a few functions on them. The two main problems I've run into are memory leaks and portability.

Since the constructors of an algebraic data type should return const data, it's a bit difficult to free them without violating the type system. I haven't decided yet whether I'll end up violating the type system or just dropping that const. I think the latter makes more sense, since we're trusting C as little as possible in the first place. Everything might as well be done with void pointers, but I think there's some value in using types where possible for documentation's sake.

As for portability: the main thing I ran into is that there's no standard, portable way of doing closures or anonymous functions in C yet. Right now I'm using nested functions, which work in GCC and Clang, but I have no idea whether they'd work in MSVC or any of the other compilers out there like pcc and lcc. OS X has blocks, but I figure if they're using GCC too, why not stick with nested functions. As far as I can tell, I won't have the problem of nested functions returning before their parent does, because of the eliminator model.

The libCello (http://libcello.org/) author has been able to successfully model typeclasses in C, so I will be picking that up as soon as I get around to it.

I pushed some of the code to GitHub just now for this post: https://github.com/guerrilla/libalgae

I'm glad to hear other people had the same idea. If anyone has any suggestions, please let me know, as I wouldn't mind some collaboration on this.


GCC's pure and const function attributes pretty much allow for these optimizations.


I believe GCC's pure and const function attributes, like the restrict keyword, are only advisory and have no extra compile-time checks that your functions are actually pure or const.


For scalar types I agree, but now that I think of it, I'm not sure it's even possible to write array code using them, unless I'm missing something. If you can't modify the array in-place or write to a provided "output" parameter, and you can't malloc a fresh array to return (because that's a side effect), how do you write a pure function of type int[] -> int[] at all?

The nice thing about Haskell is that it conceptually mallocs a fresh array to return, but then optimizes away the intermediate array in a lot of cases.


Absolutely it would, but it's much easier with a type system to guide you.


The power of Haskell is that it acknowledges the fact that it's a messy world and deals with it with the help of Monads.

You could turn it around and say that non-haskell programs live in an ivory tower and ignore the messy world below :)


I downvoted you because you're putting Monads on a pedestal and misrepresenting their purpose. When you mean IO (or perhaps ST and State) please don't say "Monads". Remember that [] is an instance of Monad too, and it has nothing at all to do with dealing with the messy world. A monad is just a typeclass with two operators that behave according to certain identity and associativity laws, and a _lot_ of types fulfil them.
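For reference, the laws in question, plus the list instance in action (a sketch):

    -- Monad laws, stated informally:
    --   return a >>= f   ==  f a                        (left identity)
    --   m >>= return     ==  m                          (right identity)
    --   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)    (associativity)

    -- The [] instance has nothing to do with the outside world:
    pairs :: [(Int, Char)]
    pairs = do
      x <- [1, 2]
      c <- "ab"
      return (x, c)
    -- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]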


> it's a messy world and deals with it with the help of Monads.

So someone has a tough problem dealing with the messy real world, and then you tell him to use Monads. Now he has two problems :-)

Ok, I'm kidding, but mbrock's answer is a lot better in terms of selling the language in that it mentions something that anyone can figure out is a good idea, without getting into the details of it.

> encourage a clean "separation of concerns" between logic and interaction.


I do understand your point, but purity is one of the defining characteristics of Haskell, and one of the features that differentiates it from most other languages, so I think it should be mentioned up front when introducing Haskell.


Mention it, maybe, but pointing out how great it is not to have to deal with that messy "real world" is bad marketing. It makes you think "oh, so all these benefits only exist in the magical fairy land and not in the real world where I work?"


I understand what you're saying. It would be better to just talk about the enforced separation between pure code and code with side-effects, and explain that it's possible to structure a normal program so that it's mostly pure.


You are just making a claim, though; no one knows if it is actually true. Semantic purity doesn't necessarily lead to human clarity, which is one of the mountains Haskell has had to climb: it doesn't seem "easier" or "simpler" than other approaches; only the claims of safety might hold up to some scrutiny.


You will note that I make no claims about clarity, only that it is possible to separate pure from impure code in Haskell (which is certainly true).

The problem with Haskell and clarity is, in my experience, twofold:

- large amount of high-level concepts to absorb (though not everything is necessary to start producing code)

- the power of the language is its own worst enemy at times - it is possible to write a pipeline of complex computation with very little work, which leads to less code but less readability (on the other hand, Haskell functions are typically short, which helps a lot with maintaining up-to-date comments)

On the other hand, the separation between pure and impure code is not a complex notion per se, and reasoning about pure code is made much easier.


Right. It is quite easy to build convoluted Haskell code, just as it is easy to build convoluted Java code.

One of my primary complaints about Haskell, however, is that Haskell code is impossible to debug. You can reason about it, the type checker can check deep properties about it, but it is difficult to actually observe the computer executing it! To me, this is a deal breaker.


> Right. It is quite easy to build convoluted Haskell code, just as it is easy to build convoluted Java code.

I don't think it's quite as easy, but I'll concede the point so I can make the more important point: it is much easier to write clear Haskell code than clear Java code (at least, "clear" as I understand clarity, YMMV).

> You can reason about it, the type checkers can check deep properties about it, but it is difficult to actually observe the computer executing it!

Certainly purity makes it hard to observe, and laziness makes it hard to understand how the program executes. However purity also means that static debugging (e.g. QuickCheck) is far more powerful than in other languages so dynamic debugging (actually watching your running program) is less needed. In fact I do far less debugging in Haskell than in Python, say. I first fight the compiler. When I win, the end result is correct more often in Haskell than in any other language I've ever used.

> To me, this is a deal breaker.

That's fair enough. Everyone has their own preferences. Personally I've never noticed it be a problem.

Disclaimer: I don't write performance sensitive code. I expect performance tuning in Haskell is harder than what I've described above.


> I first fight the compiler. When I win, the end result is correct more often in Haskell than in any other language I've ever used.

And now we get to a few questions related to bias:

* Given that debugging is difficult in Haskell, do Haskell programmers get in the habit of statically debugging their code? Is that really better than dynamic debugging? Some type error messages in languages like Scala and Haskell can make static debugging quite painful.

* Does Haskell's powerful type system bias it to well understood problems with meaningful types? I attended a WGFP meeting once, and it was amazing to hear people like SPJ talk about how they found elegant solutions to icky programming problems, but I thought that, given a dirtier language, you could just write a dirty solution and be done with it.

* Would Python's dynamic type system make it more suited to problems with dirtier less clear types when compared to Haskell? If you know both Haskell and Python, do you divide your potential programs into "Python programs" and "Haskell programs"?

* When one writes Haskell code, do they really not have to debug and test their code?


> * Given that debugging is difficult in Haskell, do Haskell programmers get in the habit of statically debugging their code? Is that really better than dynamic debugging? Some type error messages in languages like Scala and Haskell can make static debugging quite painful.

I'm not sure what you mean by "static debugging". You're going to get type error messages while you're trying to compile, but certainly not at runtime. GHC really does its best to make them as explicit as possible and often suggests the correct fix.

> * Does Haskell's powerful type system bias it to well understood problems with meaningful types? I attended a WGFP meeting once, and it was amazing to hear people like SPJ talk about how they found elegant solutions to icky programming problems, but I thought that, given a dirtier language, you could just write a dirty solution and be done with it.

Not really. It is really a general purpose language, and you find very good libraries to interact with less strongly typed systems (eg, JSON).

> * Would Python's dynamic type system make it more suited to problems with dirtier less clear types when compared to Haskell? If you know both Haskell and Python, do you divide your potential programs into "Python programs" and "Haskell programs"?

If anything it would be more a question of libraries. One thing you won't be able to get in Haskell though is something like an ORM, with lazy-loaded collections and stuff like that. I find myself missing SQLAlchemy. Persistent does not have the same flexibility, despite Esqueleto which looks nice.

> * When one writes Haskell code, do they really not have to debug and test their code?

No. The best type system does not help when you write the wrong values with the right type, or get the logic of your program wrong. But the errors tend to be more interesting. Also, you have a very powerful system for testing the pure parts of your code (QuickCheck).
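To illustrate that last point (a sketch; the property names are invented), QuickCheck lets you state laws about pure functions and generates hundreds of random test cases for you:

    import Test.QuickCheck

    -- Properties over a pure function: no mocking or setup required.
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    prop_reverseLength :: [Int] -> Bool
    prop_reverseLength xs = length (reverse xs) == length xs

    main :: IO ()
    main = do
      quickCheck prop_reverseInvolutive   -- prints "+++ OK, passed 100 tests."
      quickCheck prop_reverseLength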


I don't have good answers to your first two questions, since they're very open ended. I can give you my personal experience regarding the second two.

* Would Python's dynamic type system make it more suited to problems with dirtier less clear types when compared to Haskell? If you know both Haskell and Python, do you divide your potential programs into "Python programs" and "Haskell programs"?

No, I classify my potential programs into "Haskell programs". I haven't missed Python once.

* When one writes Haskell code, do they really not have to debug and test their code?

Haskell programmers still have to debug and test, but I find it somewhat easier than I've found it in other languages.


You're overstating it. Debugging Haskell code is definitely harder than in other languages.

But it's far from impossible (I use Debug.Trace successfully), and you do it far less often.

So debugging takes 5 times more effort but needs to be done 15 times less than in other languages.

At least that's the deal I'm getting and I don't usually find myself debugging much so it's not a grave concern.

I'd say lack of non-nullability or pattern matching are worse showstoppers.
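For what it's worth, this is the sort of thing those two features buy you (a sketch with invented names):

    -- Maybe makes absence explicit in the type, and pattern matching
    -- forces both cases to be handled: there is no implicit null to forget.
    describe :: Maybe Int -> String
    describe (Just n) = "got " ++ show n
    describe Nothing  = "nothing there"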


Have you looked into Debug.Trace? The type of `trace` is `String -> a -> a`. This is a 'pure' function that 'ignores' the first argument and returns the second unmodified. What actually happens is that the first argument is a String that gets printed to stdout. Furthermore, once you give it the first argument, it is simply the identity function, which allows you to easily place it anywhere in your code.

For example, if you have a function:

    add x y = x + y

and you wanted to see all of the inputs add gets, you could do:

    add x y = trace (show (x, y)) $ x + y

For anyone confused about how such a function is possible in Haskell, note that it lives in Debug.Trace, and the documentation clearly states that it should only be used for debugging.

EDIT: You could also do `traceShow (x, y) $ x + y`.


I've seen this; it basically allows us to generate execution trace trees, which is quite trivial in most languages but somehow hard in Haskell.

I've studied a lot of mechanisms for debugging Haskell code, and some of the debugging mechanisms taken for granted in other languages can be clawed back. But at the end of the day, it is a struggle: the purity (or more accurately, laziness) that is supposed to help so much ends up hurting in at least one aspect.

Perhaps the reason Haskell programs are so easy to reason about statically is that they are so hard to reason about dynamically :)


Your last point is the other way around:

Haskell has excellent static reasoning/debugging facilities therefore there is little pressure to make good debugging tools.

I believe purity should make debugging easier, not harder. Laziness should only make performance debugging harder, not correctness debugging.

There is much room for improvement in this space.


As a day-to-day Haskell programmer I'm happy to say that purity makes it easier to understand the programs that I write. Still, you're welcome to suggest that's an entirely subjective anecdote if you like.


You'd think that an empirical experiment with new programmers could prove the point, or at least provide conclusive evidence.


Personally I wouldn't think that. I think that such things are far too complicated to be amenable to scientific study.


I would normally agree, but specific usability claims are continuously being made about the benefits of purity, while many programmers think of Haskell as being anything but clear.


It doesn't say you don't have to deal with 'that messy "real world"'! It is called Real World Haskell after all.


I think it is great "truth in marketing", because going beyond purity requires talking about things like monads, and then they've lost most of their audience anyways.

It is better to lose those not interested in purity early rather than have those uninterested folks become disappointed later.


If you're not quite ready to go whole hog on the functional purity aspect, I'd suggest you take a look at OCaml. OCaml supports many modern programming amenities - garbage collection, higher-order functions, static type checking, generics, immutable data structures, algebraic data types with pattern matching, and automatic type inference - while still allowing for imperative programming when you need it.

I'd recommend taking a look at the Real World OCaml book: https://realworldocaml.org


Doesn't Scala support the exact same features? I know OCaml is based on ML and is mostly structural, while Scala is more of a Java-like curly-brace language that is nominal, but your feature list makes them sound the same.


Things Scala lacks which OCaml has:

1. Tail call elimination

2. A full-featured module system

3. Functors (functions at the module level)

4. First-class modules

5. GADTs

6. Polymorphic variants

7. A decidable type system

8. ... more?

Things OCaml lacks which Scala has:

1. Traits

2. Implicits

3. Trivial Java FFI

4. Seamless syntactic macros (c.f. camlp4)

5. ... more?


Scala has 1, 5, and 6.

It lacks 7. As for 2 and 3, which are basically the same thing, Scala at least supports mixin constructions (which you can also get from class-parameterised functors, though maybe not in OCaml).


Could you elaborate on Scala's TCO support? I was under the impression that only self-tail calls are eliminated and only if the stack trace would be provably unused. Doesn't the JVM require stack traces? Can you use CPS in Scala?

I see now that Scala can encode GADTs in case classes. What mechanism is available to encode polymorphic variants? Particularly, I am interested in writing matches over unions of polymorphic variant types.

I disagree that 2 and 3 are the same (structural module subtyping, nesting, inclusion differs from applicative functors). I believe you can achieve mixins with OCaml through the object system but it's not clear to me that this addresses functor signature checking.

Another Scala bonus: 5. Objects can implicitly be null.


Scala has TCO support, but maybe not as advanced as OCaml's, being limited by the JVM and all. CPS is heavily used in Scala these days, as far as I can tell from reading Ingo Meir's work.

I was confused about polymorphic variants. Scala doesn't seem to have that.

I did a lot of work with units (DrScheme-style functors) before, and when I moved to Scala for my postdoc, I found all my patterns expressible using mixins and nominal types... I didn't miss the modules (first-class or otherwise).

Scala has a non-nullability option, but I think most programmers would find null damn convenient.


Having programmed at least 10k LOC in all the major paradigms, I've come to the conclusion that relational programming (Prolog and SQL) is the best. Haskell talks a good game, and a lot of OO and procedural code has been written, but for writing succinct, elegant code - which also happens to be the most lucrative, with SQL databases underlying most businesses - the relational paradigm cannot be beat.

It's the most beautiful invention of computer science. Programming in Prolog, especially, is the closest a programmer can come to achieving a state of nirvana: a place where the mind is at one with the problem at hand.


Good luck controlling your ABS system using prolog.

Having programmed more than I care to remember (since we're establishing credibility by tossing unverifiable facts in there), I think it takes the right tool for the right job. Sometimes that's Prolog, sometimes that's assembly, and sometimes it's something else entirely.

Every language that is in use today has a niche, no single language manages to span more than a few such niches and the larger the impedance mismatch between the problem and the language the more work you have to do.


Prolog can be used in embedded systems: http://www.hercsmusicsystems.com/


Embedded and hard real time are not the same thing.


Relational programming is indeed excellent, but how can you put up with writing in SQL? Of all the programming languages I've ever used, it's the one that allows the least abstraction.


Abstraction in SQL is views, done with left joins (rules in Prolog). It doesn't seem like much, but that's the beauty of it: one concept is all you need. Contrast that with other languages, where you have hundreds of little functions.


Not all abstractions are relations though. In SQL can you abstract the functionality of "take a relation with one int column, one string column, one double column, sum the int column, group by the string column and average the double column"?

Do views support the notion of primary keys and foreign keys?

I wouldn't be able to program in SQL without the former, and the latter would also be useful.


> Having programmed at least 10k locs in all major paradigms

I don't want to sound insulting, but 10 KLOC is not nearly enough to get a reasonable understanding of even very concise languages. 10 KLOCs of APL, maybe, but certainly not with C, Java (certainly not with Java) or even Ruby or Python.

Beware of Maslow's hammer.


Why do you think those languages aren't more popular?


SQL is extremely popular; that's the LAMP stack. Rails and Django led people astray for a while, but that's ending.

Prolog should be more popular; it should at least be embedded in every program, even in the browser.

If there are more than a few for loops or if-else statements, then the majority of the program should be encoded in SQL/Prolog.

If you look at all these JavaScript frameworks, the core concept they're missing is relational programming. The program should be a loop that takes all input events, like mouse clicks, and then queries a Prolog engine about what actions to take.

It's getting more popular steadily. Map/reduce is being replaced or complemented with distributed SQL, and the frontend will also come around as people start embedding Prolog into their GUIs (the LAMP stack in the browser).


Do you know of any major Prolog programs that are in use, and where you think they really are successful because of Prolog's strengths? I did a little bit of Prolog in a logic course, but it never felt natural to me compared to functional programming. But of course, maybe that's just because I didn't dive into it, and there are people who think FP feels unnatural. However, I really would like to see some real-world software written in it (by "real world" I mean not an AI system, planning system, or inference system, but something concrete, e.g. a web server, a text editor, a graphics engine, a synthesizer, an operating system, etc.)


The semantics of SQL and Prolog are the same, so in one sense it's everywhere: relational databases. Prolog specifically is not commonplace because historically GUIs have been written with OO, and then the web came and the frontend was stateless (HTML pages sent to the browser). As GWT-style single-page apps become popular again, hopefully this time Prolog will replace OO.


I'm rather uninitiated into all this. What's your basis for claiming the semantics are the same? Isn't backtracking present in Prolog and absent in SQL?


There's a good Stack Overflow answer on that: http://stackoverflow.com/questions/2117651/difference-betwee...


That looks super. Thank you!


"Why Haskell matters" is a way better explanation than this, i think.



People about to hate on Haskell... go


Hate is a feeling I reserve for corrupt politicians, taxi drivers who ignore me, the guy who whistles in my open office, and...C++.

Everything else is either interesting or not. I think Haskell is interesting from an academic research perspective, but I'm not interested in using it to write programs.


Keeping a critical distance is the best reaction to frenetic hype that has acquired religious characteristics.





