Functional Programming in a Dysfunctional World (myob.com)
93 points by GarethX on July 16, 2015 | 67 comments



The idea that "memory doesn't matter" and "we're mostly waiting on the network anyway" seems like a red herring.

It's not a "micro-optimization" to think of your data first. A program is a sequence of transformations on a stream of data. Figure out what your data is and how you want to transform it and your program will become evident.

FP is a fine tool for programmers to have but you still have to think about the data regardless. If you don't realize how the allocator works in your VM you might end up fragmenting your pools and over-allocating virtual memory for a simple process. For some applications that might matter.


Memory didn't matter in the past. At some point caches weren't important, because CPUs weren't running that much faster than main memory. Then CPUs got much faster and caches started to become very important.

Then at some point networking was very slow and it didn't matter if you wrote your code in a scripting language, if all you did was wait for network packets. That was around Pentium 4 days or so. CPU speed was doubling quickly and 1Gbps cards and switches were still kind of fancy and expensive.

But then it all kind of changed. Caches are important now. Thrash your cache around and you can take a serious performance hit. Even kernel code can't keep up with network wire speed in the 10G range.

Long story short, a lot of performance heuristics and folk knowledge about it have to be re-evaluated periodically.


What you need is really much more about the particular workloads you have. It is still often true that code is waiting on network packets or disk I/O. That didn't just change with the decade. This kind of folk wisdom is pretty useless on the whole and we should push people much harder to measure and find their specific problems rather than operating by rule of thumb.


Totally agree.

Hence I find glib comments about memory management being unnecessary in FP languages to be disingenuous at best.

You should still be aware of these things even if you're freezing a thunk off to a queue while your thread processes some other stack until that network message comes back. The hardware is your friend! Feed it right and it will reward you.


At what point in the past did memory not matter?


Memory speed didn't matter back in the early 486/586 days. You just didn't think about cache misses as much because the speed disparity wasn't that great.


I question whether memory was literally not a concern, though. Were you as likely to outrun memory with the CPU? No. But did you still try to minimize the amount of data that went through memory for overall speed? I would think so.

And this is ignoring the fact that hard drives were still ridiculously slow. So, really, the concern has always been that there are large chunks of memory that are not fast. Over time, "not fast" has changed in definition. But the practical consideration has remained: keeping a small data set will be faster than a large one.


But "real memory" is neither what C presents or the copy semantics that is used in FP.

The CPU will keep memory in 64-byte cache lines. There is a complex bus protocol to shuffle cache lines and subparts of cache lines to main memory.

There are additional complex protocols for cache coherence.

The cost of reading 64 bytes from memory into a cache line and, on write-back, storing them at a different location in main memory is effectively zero.

Memory is always being copied into our L1 and L2 cache.

Copying data sidesteps most of the cache coherence protocols, which are complex and costly.

Yes we get a lot of this for free in the CPU implementation, but there is a lot of complexity that goes into imposing what is really beginning to be an unnatural model (mutable memory) on a hierarchical memory system.

FP uses a log-based model. You write to fresh memory: no aliasing, no coherence, no conflicts. Then, during GC, you remap memory.

Current hardware can't "remap memory" efficiently, but it seems like the FP approach, in one form or another, is the better approach for dealing with high scalability and deep memory hierarchies: write to fresh memory, then expose a remap operation at the hardware level.

SSDs are a bit like that internally.


On the other hand, pointer chasing is decidedly not memory friendly. And mutable memory has a great deal of mechanical sympathy! A function's entire stack frame can live in the L1 or L2 cache. Imagine something like determining the length of a linked list, and then GCing all of the intermediate values.

I am not sure what the cost is of the cache coherency hardware, but if it were high, then presumably single-core CPUs would have a big advantage on single-threaded workloads. That doesn't seem to be the case.


I agree that it's foolish to ignore memory issues. We've had horrific performance and scaling in much software that did that. Anyone thinking otherwise can feel free to disable their processor or app cache to see how unimportant memory concerns are. ;)

Any HLL should give performance-concerned designers a clear, mental model of the performance aspects of their code. C and C++ programmers, for instance, understand the costs of their abstractions, how to code in a cache-efficient way, and so on. I read on the old LISP web sites and mailing lists that they could similarly estimate the cost of certain constructions and had tricks to squeeze out extra performance that they hid behind macros, etc. So I'm sure these other functional languages can do something similar.


I found it correct: what the article says is that we should focus first on the actual bottlenecks before we worry about other optimizations, not that we should just look the other way.

If your bottleneck is at that point (copying of data structures), then there are functional data structures for that, as shown below. And if performance at that point is still critical, you can use a thread-safe mutable data structure there and separate data from code in the rest of your program, so you still have a big chunk of functional code that is easy to reason about.
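
A minimal sketch of the functional-data-structure option, assuming Haskell's containers package: updating a persistent Data.Map leaves the old version intact and shares all unchanged structure with it, so the "copy" is cheap.

    import qualified Data.Map.Strict as Map

    -- Both maps stay valid after the insert; m' shares every unchanged
    -- branch with m, so the update is O(log n), not a full copy.
    example :: (Map.Map String Int, Map.Map String Int)
    example = (m, m')
      where
        m  = Map.fromList [("a", 1), ("b", 2)]
        m' = Map.insert "c" 3 m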


I am referring to data-oriented design[0], not optimization: thinking about your data, access patterns, and transformations is not a premature optimization in the Knuth-ian sense. It's just plain, old engineering.

It's not incorrect per se, just misleading in my opinion.

[0] http://dataorienteddesign.com/site.php


> Did you ever attempt to get to the bottom of an intermittent error that was driving you insane, only to discover much later that a private variable wasn’t synchronized properly? Have you ever spent hours stepping through the lifecycle of an object, going through many different components spread out across the codebase and wished that you had spent all that time solving new problems instead?

I see 3 issues here.

1) Most people haven't dealt with those issues enough. There are complex programs, but there are also many simple programs as well. And unless one deals with hard-to-debug concurrency / pointer / mutable state errors, they will not appreciate immutability. They might pay lip service to it, because it is cool, but they will not appreciate it enough to start rewriting their codebase in it. It is a bit like fault tolerance: unless they have been woken up at 3am to debug because their main server process segfaulted, they will not think about isolated units of failure, supervision, durable persistent snapshots/checkpoints, monitoring, etc. They will of course say that "fault tolerance is good," but only in an abstract, general way.

2) Those who have dealt with these issues might accept the state (pun intended) of affairs and never realize there is another way of doing things. I mean, they just accept that you have to have locks and mutable state and mutexes and dangling pointers, and then spend months debugging hard-to-track concurrency errors at 3am. That is just how software development works, and it is as good as it gets.

3) People realize there is something better; they know about it, read about it, played with it. But they don't have enough energy, political power, or time to fix it.


I see another issue. Have I ever gotten to the bottom of an intermittent error only to determine it is a synchronization error? Yes.

The problem is I have also gotten down to the bottom of a problem and found it was a stale data problem. In a system that was performing gymnastics to keep things immutable. To the point that fixing it was not a simple matter of just updating the variable.

Worst were the multiversion concurrency control (MVCC) systems I have seen rolled to "help keep all facts of the system immutable." Suddenly, we had to have database implementation experts just to run basic software...


I have had similar experiences.

Trying to keep data immutable introduces scenarios involving stale data, and a new problem to solve. I have seen interesting problems with cache coherence that were quite subtle in their manifestations.

Understanding the system-wide implications of immutable-only data is a non-trivial design task.


The issue I have with this statement is that stale data exists when going full-imperative as well.

The solutions might be harder to implement in an immutable environment if your setup isn't right, but not an order of magnitude harder. And you get all the advantages of immutability.

People complain about stale data issues in immutable programming styles not because they're more prevalent, but because they can rule out so many other classes of bugs immediately.


> People complain about stale data issues in immutable programming styles not because they're more prevalent, but because they can rule out so many other classes of bugs immediately.

This generalisation is too broad (as with almost any generalisation).

In any case, I want to mention one interesting experience -- a very large financial portfolio management system that I worked on, over a decade-and-a-half ago.

The system had multiple incoming tickers, feeding huge amounts of data. The central data structures of the computational pipeline were immutable. Initially, there was a very haphazard system for determining when a particular piece of data was to be deemed stale. In time, formal definitions were established. However, the software was not amenable to those definitions, since shared data was not "seen" to update instantaneously.

The core was too big to be changed in any non-trivial manner, without risking the entire business.

We ended up introducing an explicit notion of time into every pipeline processor to ameliorate (not entirely solve) the situation.

Every major design decision has its own trade-offs that are specific to the problem at hand. But those built into the foundations of the system should be made pragmatically, not by adhering to a philosophy!


I agree that things need to be pragmatic. But there are classes of bugs ruled out absolutely by things like immutable structures and referential transparency.

Now, the thing is that there's always a way to model state, but at least I can, for the most part, rule out things like the term "x" meaning different things depending on how many times f was called beforehand.
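
As a toy illustration of that guarantee (a hypothetical sketch, not from the article): for a pure function, two calls with the same argument are interchangeable, which is exactly what a hidden call counter would break.

    -- Pure: the result depends only on the argument, so the
    -- expression f 2 + f 2 can always be rewritten as 2 * f 2.
    f :: Int -> Int
    f x = x * x + 1

    main :: IO ()
    main = print (f 2 + f 2 == 2 * f 2)  -- always True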


The problem I have with this line is that it ignores the dealing with mutable state you already do on a daily basis.

Are you confused that the listing on hackernews changes depending on time of day? Why would you be confused that a call to "incrementCounter" changes the value of the counter? Or that a call to "printf" adds to the output?

I agree that having a large function where the value of something changes at the top and at the bottom can be confusing. But so can a function where you have two variables of type String because you had to sanitize the input and assign it to a new variable.

Are there ways to avoid that particular error? Certainly. Yet lately I have encountered it about as many times as I have encountered "mutable"-based errors.


> Most people haven't dealt with those issues enough

This really surprises me. Surely, most programmers have experience with big and complex codebases with a lot of bugs coming from mutable state, no?


For 1) I meant that there are a lot of utilities, microservices, small web backends, scripts, and, hopefully, applications that have already been split into isolated components that can fail and restart separately (in other words, someone did the work already). So if one starts out dealing with those in their career, they just might never have a compelling reason to look at Haskell, F#, Erlang, OCaml and such.

Also note that quite often one way to deal with large mutable state is to simply shove it in a database. Usually good databases just make it explicit what happens to shared state.


The most compelling reason is being able to write way less code. When I write in C# I'm constantly annoyed at how pointlessly verbose it is. I feel hampered, like I'm trying to talk to a 5 year old that doesn't really understand English.


I'd like to see Haskell vs C# code that demonstrates this. In my opinion Haskell just fits more code onto a single line and is much less readable.


I've done comparisons where I've written the exact same program in C# and F#. The reduction in unneeded type annotations alone is massive. F# required only 1/20th the number of type annotations.


Hey hey hey now...I said C# and Haskell, not F# ;) F# has some (from what I've seen) more readable syntax than Haskell.


Can you provide some examples where the readability of F# is better than Haskell?

I know F# is a descendant of OCaml, and many times I find Haskell code much cleaner and more readable than OCaml.


Type inference can make the code easier to write. But, is the code easier to read, and overall less effort for the business? Code tends to be read significantly more than it's written. This is a simplification but seems to be true enough; modification by subsequent maintainers involves a lot of reading.

I worry about features like overuse of type inference when it means the types in question aren't explicit from the code that you're reading (agreed, not all type inference involves overuse). For example, if I write:

  var x = f();
What's the type of x? What's the return type of f? I can answer those questions, certainly, but I have to go look for the answer. I don't immediately have any mental concept or name to hang the idea off of. Whereas in a language like Java, I'd end up writing:

  Foo x = f();
Now I know the return value is a "Foo". This may not seem like much, but suddenly if there are many different Foos in the code, I can see the patterns I didn't see before, and I can draw correlations:

  Foo x = f();
  Foo y = g();
  h(x,y);
Everything can always be understood with a degree of thought, but type annotations seem to, in my experience, make programs a lot easier to read and modify. The original authors typically have the whole thing in their head, so it doesn't make a difference to them, but it can be a huge difference for those who come after.

There's a certain simplicity that comes from types that are so concrete, clearly named, and linkable as Java libraries are. You can figure out something about this code without reading the declaration or definition:

  ListenableFuture<String> emailAddress = user.queryEmailAddress();
You have a pointer for what to search for and an immediate name for the concept. I suppose it's like an index, in a way: an index that makes the code more readable and referenceable, included throughout. It's hard to explain, but I find this valuable even when I have support from a sophisticated IDE. (Plus I'm often working outside an IDE.)

Including these types is more time consuming up front than in languages that support type inference, or that are dynamic, but for long-lived applications it seems like the overall tradeoff is usually worth it. To be fair, I haven't maintained any F# applications. If the type inference is purely saving annotations that are completely obvious even to a beginner to the codebase, then I could see that being worth it. (It seems to me that we are all beginners to the codebase in meaningfully large platforms.) The actual act of typing the annotations is also often pretty easy and handled automatically by the IDE, but I suppose you have that experience with C#.


Nothing prevents you from adding annotations where they clarify things. That doesn't mean you should add them where they just clutter.


True, but I think part of this is about shaping behavior, and about self-discipline. And perhaps also the cognitive bias that comes from knowing one's own program well. A program I wrote always seems more obvious to me than it does to others. It seems to me that the mind, while aware of gaps in knowledge between ourselves and others, cannot fully compensate for this. It's like Hofstadter's law ("It always takes longer than you expect, even when you take into account Hofstadter's Law.")

So I don't know if I fully trust myself to decide which type annotations are obvious up front. Though maybe I'm just used to this way of working. I'm not aware of many solid studies of the differences that move far beyond personal preference. Especially studies that measure the performance of teams over time (vs virtuoso performances). Also, how happy would you be as the maintainer of a codebase if a new guy came along and submitted a pull request which was purely the addition of an already-inferred type?

Type declarations tend to be pretty low effort for me, these days. The way that I write code in Eclipse works like this:

  // I type out this part
  f(x)
Then I invoke auto-complete and ask Eclipse to automatically assign the expression to a new local variable or field. If f() doesn't exist, I might ask it to create that method. It will infer part of the signature from the type of x. Once I write the method, I can auto-complete its return type. Or if the method exists, taking the type of the variable "x", it can suggest that I probably want "x" as the parameter. I just type "f(" and invoke auto-complete. Anyway, once I ask it to assign to a variable, I end up with something like:

  Foo foo = f(x);
I'm given a choice of common variable names based on the type in question which are easy to choose between with good defaults. So most of the time I'm sort of minimally describing or hinting at what I want, and the IDE guesses extremely well what I mean -- better than a compiler ever could, because it's allowed to make multiple guesses and be wrong -- and then I capture or snapshot that by saving it as text.

The type inference is actually just as present as in the other languages, it's just happening up front at editing time, rather than at compile time. And these benefits don't only apply during original composition. If I edit the method f() and change its return type, I can just as easily auto-complete (or more broadly, use automated refactoring for) the change to the type declaration for the "foo" variable.

Maybe if you're a programmer of dynamic languages or heavily type-inferred languages, this will seem like "much ado about nothing". "Isn't it the same in the end?" The difference to me is that it's all there in the text, and on the screen, by default; and it's all comprehensible even without an IDE and without an understanding of all of the types involved. I still drop out of the IDE surprisingly often to read code at the command line or in a web browser.

I can also see where the alternative view is coming from though that, if I don't have to define the type of variable "foo", then I don't have to change the text when "f()" changes. I would support a "var" keyword in Java, and agree the problem is really about when type inference goes too far. But I see the effort of changing "foo" as much less of a problem than easily and frictionlessly understanding what the code means, immediately when reading it, with minimal reliance on context.

I would be interested to take a look at a large F# program and see how easily I can figure it out.


I certainly agree that code is usually more readable close to when it's written, in both personage and time. What I'm skeptical of is that redundant type annotations always (or usually) make code more readable. I do believe they often do, but I think they should ideally be reserved for those cases. I won't always get it right, but that's something that can be improved with experience and especially in code review.

"Also, how happy would you be as the maintainer of a codebase if a new guy came along and submitted a pull request which was purely the addition of an already-inferred type?"

Slightly happier than someone submitting a pull request which was purely the addition of a comment. In both cases, my response is to try to understand why they felt the code was less readable without it and see if that motivates other changes, but then probably to happily merge it.


Indeed, it's frustrating when you're reading flat files of code. When reading foreign F#, I pull the code into Visual Studio so I can hover over the variables. It'd be quite convenient for GitHub to provide the type information on hover...


> political power

This is the big one for me. I deal with assorted systems that I can't get migrated off ancient shitty middleware platforms because any comprehension of them vanishes into the ether somewhere around the upper management types.

"Functional programming" isn't even anywhere on the radar when I currently have no chance of even completely getting rid of PHP.


The Virtus issue is simply an example of bad design. When you add magic that can't be guaranteed by the language, you're eventually going to run into problems. Just look at Hibernate.

As far as performance goes, once you commit to a language with no mutable types, you have no recourse once you discover a bottleneck due to immutability. That's a big worry.

And advanced type systems are not unique to functional languages.

I'm also not clear on why the example testing code is better. It doesn't look much different from what I'd write in any language (excepting boilerplate).

So yeah, correctness and testability. It's available in most languages.

Also, what's up with all the single-letter variable names?


> a language with no mutable types

If we are talking about Haskell, mutability is absolutely an option when it is the only way to write fast code. This is exactly what is done here: http://www.serpentine.com/blog/2015/05/13/sometimes-the-old-...
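
For instance (a minimal sketch, not the code from the linked post): the ST monad gives you genuinely mutable state inside a function that is still pure from the outside.

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Mutable accumulator inside, pure function outside:
    -- runST guarantees the mutation cannot escape.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref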

> And advanced type systems are not unique to functional languages.

"Advanced type system" does not really mean much. A bit like the weak/strong static typing distinction. However there are still significant differences between the c#/java/scala family and the haskell one. The fact that objects are not nullable by default and the absence of subtyping are considered as two big advantages for many people.

> Also, what's up with all the single-letter variable names?

When the code is abstract and the scope of a variable is less than three lines, it can be OK to use one letter for its name.


> Also, what's up with all the single-letter variable names?

Words are not useful if your variables don't really have semantically meaningful names. Also, if your code is a simple one-liner, naming everything does not add much benefit when it is obvious from the (very local) context.

For example, when I look at something like:

    ordered (x:y:xs) = x <= y && ordered (y:xs)
I could do something like:

    ordered (first:second:remainingList) = first <= second && ordered (second:remainingList)
... but if anything, that will make it harder to read.


Then why did you call your variables x and y, and not y and x? Surprise: because there IS a semantic meaning to the order.

The Haskell variable naming convention is bad. What is worse is that when someone points it out, there is ALWAYS this one or two variable example being put forward, when the problem exists in the 90% of functions that have 5 or more variables in scope.

I see a lot of Haskell code that looks like FORTRAN. Haskell code is not yet written in large companies where there is a readability requirement. I just hope more programmers with better taste for readability join the Haskell ranks. The readability story needs to improve (and I do see it improving a bit).


It is a lot less of a problem than you make out: most of the time, the single-letter naming conventions are actually very useful. It's not like all the Haskellers just forgot the usefulness of names from the other languages they frequently use; it is an educated and measured choice.

That said, it certainly can be taken too far and turn into a clusterfuck.

    permutations            :: [a] -> [[a]]
    permutations xs0        =  xs0 : perms xs0 []
      where
        perms []     _  = []
        perms (t:ts) is = foldr interleave (perms ts (t:is)) (permutations is)
          where interleave    xs     r = let (_,zs) = interleave' id xs r in zs
                interleave' _ []     r = (ts, r)
                interleave' f (y:ys) r = let (us,zs) = interleave' (f . (y:)) ys r
                                         in  (y:us, f (t:y:us) : zs)


    addart a = array ((-1,0),(n,m+n)) $ z ++ xsi ++ b ++ art ++ x
      where z = ((-1,0), a!(0,0)) : [ ((-1,j),0) | j <- [1..n] ] ++ [ ((-1,j+n),a!(0,j)) | j <- [1..m] ]
            xsi = ((0,0), -colsum a 0) : [ ((0,j),0) | j <- [1..n] ] ++ [ ((0,j+n), -colsum a j) | j <- [1..m] ]
            b = [ ((i,0), a!(i,0)) | i <- [1..n] ]
            art = [ ((i,j), if i == j then 1 else 0) | i <- [1..n], j <- [1..n] ]
            x = [ ((i,j+n), a!(i,j)) | i <- [1..n], j <- [1..m] ]
            ((_,_),(n,m)) = bounds a

From Matrix.Simplex https://hackage.haskell.org/package/dsp-0.2.1/docs/src/Matri...


https://hackage.haskell.org/package/dsp-0.2.1/docs/src/Matri... link to the function itself

Never really seen something like this before, really hope someone will clean it up


To be fair it looks to be a relatively complex operation. Do you know how any other languages implement this?


This is not about the complexity of the code, but about variables with or without meaningful names.


I feel like xsi and art mean something domain specific that might be obvious to a domain expert.


> What is worse is that when someone points it out, there is ALWAYS this one or two variable example being put forward, when the problem exists in the 90% of functions that have 5 or more variables in scope.

I would bet a fair amount of money that 90% of Haskell functions do not have 5 or more variables in scope.

> Haskell code is not yet written in large companies where there is a readability requirement.

Haskell code is not yet written in large amounts in large companies (or anywhere else). Haskell code is, however, written in large companies.

Also: Large companies have a readability requirement? I'm not so sure that they do, that it's enforced, or that it's effective. I'm especially not sure that it's effective to the point that an outsider can read it.


> Haskell code is not yet written in large amounts in large companies (or anywhere else).

Haskell code isn't written in large amounts anywhere? Define large amounts.

For instance the GHC codebase itself is pretty large.


>> Haskell code is not yet written in large companies where there is a readability requirement

As a counterargument: Haxl at Facebook. https://github.com/facebook/Haxl

The convention isn't terrible; it's just a convention and takes a bit of getting used to. The (x:y:xs) pattern is useful for when you could at best describe x and y as thing1 and thing2 and xs as theRestOfTheThings. Since the extra letters add no real clarity, it's not a bad way to keep things concise. Once you get used to the pattern it's better, because if you see (x:y:xs) you know that the things themselves are less important than their order in the list.


In my Python code I find that readability and naming conventions are very important. Usually in my Haskell code I'm writing more abstract functions where a good specific name isn't possible or it's obvious what it's referring to because of the type signature.


> Haskell code is not yet written in large companies where there is a readability requirement

Yes it is.


> As far as performance goes, once you commit to a language with no mutable types, you have no recourse once you discover a bottleneck due to immutability. That's a big worry.

There are always escape hatches for when you absolutely need mutability.

> And advanced type systems are not unique to functional languages.

True. It's just that a lot of progress (but not all, of course) in advanced type systems is pioneered by languages which are often described as functional.

> So yeah, correctness and testability. It's available in most languages.

Correctness and testability are properties of the software, not the language. As such, of course you can achieve them with your favorite one. But some languages provide better tools to achieve some degree of correctness and testability which would require a greater effort with other languages.

> Also, what's up with all the single-letter variable names?

This is standard practice in many functional languages. As the abstractness goes up (the "more general" something is), the identifiers become shorter, since the name itself "means less". Hence you see code that does something to every x in some kind of container xs; what would you gain with longer identifiers, except maybe misleading the reader?
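
A small sketch of that principle, using the textbook definition of map: in a fully polymorphic function, the types already say everything there is to say about the values, so longer names would only pretend to knowledge the function doesn't have.

    -- 'a' and 'b' can be anything; names like "inputElement"
    -- would imply knowledge this function doesn't possess.
    map :: (a -> b) -> [a] -> [b]
    map _ []     = []
    map f (x:xs) = f x : map f xs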


Also, any imperative algorithm can be translated into a functional algorithm with at most an O(log n) factor increase in complexity, and usually with none.


<< As far as performance goes, once you commit to a language with no mutable types, you have no recourse once you discover a bottleneck due to immutability. That's a big worry.

All of the FP languages I'm familiar with provide easy access to mutable collections.


"A nice side effect (hmm) of immutability is you end up with pure functions."

No, immutability doesn't magically give you purity. E.g. Rust gives you immutability by default, yet you don't need anything to be mutable to get IO.
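
To make the distinction concrete, a minimal Haskell sketch (the names are mine): every binding below is immutable, yet only one of the two functions is pure, and it is the IO type, not the absence of mutation, that tracks this.

    -- Pure: same argument, same result, no observable effects.
    double :: Int -> Int
    double x = 2 * x

    -- Every binding here is immutable too, but the function is
    -- impure: reading a file is an effect, visible in the IO type.
    firstLine :: FilePath -> IO String
    firstLine path = do
      contents <- readFile path
      return (head (lines contents))  -- assumes a non-empty file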


IO is by definition mutation.


Virtually all of the concepts that the author listed are completely orthogonal to functional programming, and moreover FP is conflated with statically typed FP.


It seems pretty obvious from context that the author is discussing Haskell's flavor of functional programming in particular. It's somewhat lazy phrasing, but it doesn't seem unclear to me if you read more than the headline (and if you didn't read more than the headline, you wouldn't know he was talking about something more specific anyway).


What are the best resources you would recommend for someone to really wrap their head around functional programming, foundations and practical use?


Learn You a Haskell For Great Good is a great introduction, if you're willing to just pretend you know nothing of programming for a little while.


I was uncertain at first if I was reading a book title lol. Haskell is usually a tough learning experience. I'll keep the link and check it out. Might need to learn it anyway given that seL4 and a lot of Galois's work (e.g. the Ivory language) use Haskell. Figure it's better to start with it than OCaml, given that Haskell doesn't give an imperative programmer many outs. Gotta learn the concepts or leave haha.

"if you're willing to just pretend you know nothing of programming for a little while."

Always good advice for learning a new paradigm. I learned this when smart people taught me how to learn a foreign language. They said to learn the concepts fresh, do no mental translations, and immerse yourself where you're forced to solve problems in that language. It's how we learn our native ones, so why not new ones?


> Haskell doesn't give an imperative programmer many outs.

What do you mean?

Check these out:

http://www.haskellforall.com/2012/01/haskell-for-c-programme...

http://www.haskellforall.com/2013/05/program-imperatively-us...


Closer, but not quite. One commenter there pointed out that you still need foundational stuff to really get it. Otherwise, you're just seeing templates without really understanding what they mean. That's kind of what I meant.

Nothing wrong with it, necessarily. I'm just saying people learning it say there aren't many shortcuts around learning it.


That's a good thing in my book. It takes you longer to get productive, of course, but when you DO get there, you're on much more solid footing.


I hear people find my Clojure Distilled guide helpful for understanding the general concepts: https://yogthos.github.io/ClojureDistilled.html


Thanks for the link. Not quite what I'm looking for but I'll keep it for when I try Clojure.


If your program has state, it doesn't matter if it's mutable or immutable state; you will still run into the same problems when working with concurrency. You either have to use locks, make sure functions are executed in the right order, or, the preferred way, designate central state.

So functional vs object-oriented is mostly a matter of preference.

One advantage of functional programming, though, is that it will be much easier to serialize the state.


> If your program has state, it doesn't matter if it's mutable or immutable state; you will still run into the same problems when working with concurrency. You either have to use locks, make sure functions are executed in the right order, or, the preferred way, designate central state.

What about using STM:

https://www.fpcomplete.com/school/advanced-haskell/beautiful...
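
A minimal sketch of the idea from that article, assuming GHC's stm package: transactions over TVars compose, commit atomically, and leave no lock ordering to get wrong.

    import Control.Concurrent.STM

    type Account = TVar Int

    -- The whole transfer commits atomically or retries;
    -- no explicit locks anywhere.
    transfer :: Account -> Account -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)  -- retry until funds suffice
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print  -- 60
      readTVarIO b >>= print  -- 40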


I respectfully disagree. I also encourage you to take a look at http://clojure.org/state


> you have to be wary of using Java and .NET types which are still prone to NPEs

Java 8 Optionals are a Maybe monad

http://stackoverflow.com/a/19932439/4658666


I think your post only takes the non-null case into account, while the null case is exactly what the OP seems to be talking about. What would this code do?

    Optional<Integer> foo = null;
    foo.flatMap(x -> Optional.of(x));
Obeying the Monad laws would mean getting null as a result, via `m.flatMap(x -> Optional.of(x)) = m` for all m (or `m >>= pure = m` if you prefer Haskell syntax).
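
For comparison, a small Haskell sketch of why that law holds unconditionally for Maybe: Nothing is an ordinary value rather than a null receiver, so `m >>= return` never blows up.

    -- Right identity, m >>= return == m, for both constructors:
    lawHolds :: Bool
    lawHolds = (Nothing >>= return) == (Nothing :: Maybe Int)
            && (Just 3  >>= return) == Just 3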





