"Simple" is a cop-out word. Things can be simple along a lot of vectors. The vector you've chosen seems to be "does less for you" which taken ad absurdum would have you using assembly. Go does have elegant abstractions, and they aren't the simplest along this vector, nor would anyone want them to be. Coroutines, for example, are actually quite conceptually complicated in some ways.
I prefer "understandable"--it appears this is what you're trying to get at when you say "simple", but I think you're drastically overselling the understandability of go code. Sure, you understand any given line easily (what you described as "readable"), but you're not usually trying to understand one line of code. Since go's sparse feature set provides few effective tools for chunking[1] your mental model of the program, complex functionality ends up in chunks too large to be understood easily. This problem gets worse as programs grow in size.
Another poster mentioned that they start running into problems and wishing they had explicit types with Python programs over 10K LOC, which approximately matches my experience. But comparing to go, you've got to realize that 10K LOC of Python does a whole lot more than 10K LOC of Go; you'd have to write a lot more Go to achieve the same functionality because of all the boilerplate. That's not necessarily a downside because that boilerplate is giving you benefits, and I don't think entering your code into the computer is the limiting factor in development speed. But it does mean that a fair comparison of equally-complex programs is going to be a lot more lines of Go than Python, i.e. a fair comparison might be 10K LOC of Python vs 50K LOC of Go. I say "might be" because I don't know what the numbers would be exactly.
How many people have written or worked on projects in Go of that complexity? How many people have written or worked on programs of equivalent complexity in other languages to compare? I'm seeing people discuss how easy it is to start a project in Go, but nobody is talking about how easy it is to maintain 50K LOC of Go.
I've worked on projects of >200K LOC in Python, and the possibly-equivalent >500K LOC in C#. I think the C# was easier to work with, but that's largely because the 200K lines of Python made heavy use of monkey patching, and I've worked in smaller C# codebases that made heavy use of dependency injection to similar detriment. I'm honestly not sure which feels more maintainable to me, given a certain level of discipline to not use certain misfeatures.
I haven't written as much Go, and I wouldn't, because the features of C# which make it viable for projects of this complexity simply aren't present, and unlike Python, Go doesn't provide good alternatives. I suspect the reason we don't have many people talking about this is that not many projects have grown to this complexity, and when they do these problems will become apparent.
The real weak point is Go's type system--it's genuinely terrible, because features that came standard in other statically-typed languages decades before Go was invented were bolted onto Go after the fact. For years, Gophers claimed they didn't need generics at all. As a result you've got conflicting systems developed before `go generate` (using casts), after `go generate` but before generics (using `go generate`), and after generics (using generics). It's telling that you seemingly reject generics ("clever library using elegant abstraction and generics") even though Go has them now.
Attacking Haskell is sort of a straw man--so far I haven't seen anyone in this thread propose Haskell as a go alternative. I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).
Fwiw I have worked on multiple projects of many hundreds of thousands of lines of code, in multiple languages, several of which we rewrote from python, perl, php, and ruby to go, where I first maintained the existing code and then worked on a rewrite. I've also walked into existing large Go projects, and worked in elixir and some limited js.
In each and every case except one (some contractors did something really odd, trying to write go like java or ruby, I can't recall which, but the code was terribad), the go version was both more performant and easier to maintain. This was measured by counting bugs, development velocity, and mean time to remediation.
Meaningful comparisons between programming languages are difficult.
I've done rewrites of Python programs in Python, and the rewrites were more performant and easier to maintain.
My point is, is it the language? Or is it the fact that when you rewrite something, you understand which parts of the program are difficult, you know the gotchas, and you eliminate all the misfeatures you thought you needed the first time but didn't. In short, I suspect the benefit of learning from your mistakes is probably far more valuable than switching languages in either direction.
Hands down, the language made the projects easier to maintain. I have also rewritten from php to python, python to python, and perl to perl, many greenfield projects in each, etc.
Why did the language matter? Largely, static typing, concurrency ergonomics, fast compilation, and easy to ship/run single binaries. The fact it also saved 10-20x in server costs was a great bonus.
Better design can absolutely improve a project and make it easier to maintain and more performant. And bad code can be written in any language. I am more and more convinced that dynamically typed code doesn't have a place in medium to large organizations where a codebase no longer fits in one person's head.
> I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).
Originally Haskell was designed to be a language providing
> faster communication of new ideas, a stable foundation for real applications development, and a vehicle through which others would be encouraged to use functional languages
That doesn't necessarily imply general purpose, of course, but today pretty much any language suitable for "real applications development" would be considered "general purpose", I think. In any case, regardless of what Haskell was originally intended to be, I would say it is a general purpose language (and in fact the best general purpose language).
> Attacking Haskell is sort of a straw man--so far I haven't seen anyone in this thread propose Haskell as a go alternative. I think we agree Haskell is far too dogmatic about its abstractions when it's impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language).
I'll be that guy. We like our stuff in Haskell. Watching the rest of the industry move forward is like a reverse trip around the Monopoly board of computer science progress.
When I joined up, everything was Java, which couldn't make a binary. Then the crowd jumped to JS, where we ditched integers and true parallelism. Python freed us from speed. Go came along, promising to remove generics and exceptions, and to finally give us back our boilerplate.
And whenever features progress in the forward direction again, there are two issues - firstly, they sometimes come out kind of crap. Secondly, arguments for or against this crapness tend to take up all the oxygen that could have facilitated discussions around goodness.
Exceptions or return values? Nope, monadic error handling, any day of the week.
Terse dynamic code, or bloated static code? Nope, terse code with full type inference.
Terse nulls or nulls & boilerplate Optionals? Nope, just terse Optionals.
First-order generics or no? Higher-kinded parametric polymorphism.
Multiprogramming via locking & shared memory or message passing? Hey how about I choose between shared-memory transactions or transactional message-passing instead?
There is little stuff happening outside of Haskell to be envious of. Java took a swing at the null problem with Optionals a decade ago. My IDE warns me not to use them. It's taking another swing with "Null-Restricted Value Class Types". I know your eyes glaze over when people rant about Haskell, but for two seconds, just picture yourself happily doing your day-to-day coding without the existence of nulls, and pretend you read a blog post about exciting new methods for detecting them.
The issue is not language semantics. The issue is readability. Having the best feature set in the world is useless if the code produced by others is a pain to decipher.
Haskell disqualified itself for general programming when its community decided that point-free style was desirable, despite being nearly impossible to read, and that custom operators were a good thing. I personally hate every Haskell code base I have ever seen, despite being relatively fluent in the language (an issue OCaml never had, amusingly, mostly because its community used to be very pragmatic).
The person you are responding to didn't say that, I did.
The abstractions I'm pointing at are cases where mutation or side effects are the desired result of execution. Ultimately this always runs up against having to grok a lot of different monads and that's simply never going to be as easy to understand as calling "print" or "break". Haskell works really well if the problems you're solving don't have a ton of weird edge cases, but often reality doesn't work like that.
The other thing is laziness which makes it hard to reason about performance. Note that I didn't say it's hard to reason about execution order--I think they did a good job of handling that.
Don't get me wrong, Haskell's dogmatic commitment to functional purity has led to the discovery of some powerful abstractions that have trickled out into other languages. That's extremely valuable work.
> The person you are responding to didn't say that, I did.
Ah, thanks, I got confused.
> Haskell works really well if the problems you're solving don't have a ton of weird edge cases, but often reality doesn't work like that.
In my experience it's completely the opposite, actually. I can only really write code that correctly handles a ton of weird edge cases in Haskell. It seems that many people think that Haskell is supposedly a language for "making easy code elegant". The benefit of Haskell is not elegance or style (although it can be elegant). The benefit is that it makes gnarly problems tractable! My experience trying to handle a ton of weird edge cases in Python is that it's really difficult, firstly because you can't model many edge cases properly at all because it doesn't have sum types and secondly because it doesn't have type checking. (As I understand it they have added both of these features since I last used Python, but I suspect they're not as ergonomic as in Haskell.)
> this always runs up against having to grok a lot of different monads and that's simply never going to be as easy to understand as calling "print" or "break"
Actually, I would say not really. The largest number of monads you "have to" learn is one, that is, the monad of the effect system you choose. Naturally, not every Haskell codebase uses an effect system, and those codebases can therefore be more complex in that regard, but that's not a problem with Haskell per se, it's an emergent property of how people use Haskell, and therefore doesn't say anything at all about whether Haskell is usable as a general purpose language. For example, consider the following Python code.
    def main():
        for i in range(1, 101):
            if i > 4:
                break
            print(i)
You can write it in Bluefin[1], my Haskell effect system as follows.
    main = runEff $ \ioe ->
      withJump $ \break -> do
        for_ [1..100] $ \i -> do
          when (i > 4) $ do
            jumpTo break
          effIO ioe (print i)
Granted, that is noisier than the Python, despite being a direct translation. However, the noise is a roughly O(1) cost so in larger code samples it would be less noticeable. The benefit of Haskell here over Python is
1. You don't get weird semantics around mutating the loop variable, and it remaining in scope after loop exit
2. You can "break" through any number of nested loops, not just to the nearest enclosing loop (which is actually more useful when dealing with weird edge cases, not less)
3. You can see exactly what effects are possible in any part of the program (which again is actually more useful when dealing with weird edge cases, not less)
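To make point 1 concrete, this is the Python behaviour I mean (a minimal sketch using only standard Python semantics):

```python
# In Python the loop variable is function-scoped, not loop-scoped:
# it survives the loop, holding whatever value it had when `break` ran.
for i in range(1, 101):
    if i > 4:
        break

print(i)  # i is still in scope here; prints 5
```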
> Granted, that is noisier than the Python, despite being a direct translation.
My complaint isn't the noise. My complaint is: can you explain what withJump does? Like, not the intention of it, but what it actually does? This is a rhetorical question--I know what it does--but if you work through the exercise of explaining it as if to a beginner, I think you'll quickly see that this isn't trivial.
> 1. You don't get weird semantics around mutating the loop variable, and it remaining in scope after loop exit
Is this an upside? It's certainly unintuitive, but I can't think of a case this has ever caused a problem for me in real code.
> 2. You can "break" through any number of nested loops, not just to the nearest enclosing loop (which is actually more useful when dealing with weird edge cases, not less)
Again, is this actually a problem? Any high school kid learning Python can figure out how to set a flag to exit a loop. It's not elegant or pretty, but does it actually cause any complexity? Is it actually hard to understand?
And lots of languages now have labeled breaks.
Arguably the Lua solution (gotos) is the cleaner solution here, but that's not popular. :)
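For reference, the flag pattern I mean looks something like this (a hypothetical search over a 3x3 grid):

```python
# Exiting a nested loop with a flag: `break` only leaves the innermost
# loop, so the outer loop has to check the flag and break again.
found = None
for row in range(3):
    for col in range(3):
        if row * 3 + col == 4:
            found = (row, col)
            break  # exits the inner loop only
    if found is not None:
        break

print(found)  # (1, 1)
```

Not pretty, but every reader knows exactly what it does.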
> 3. You can see exactly what effects are possible in any part of the program (which again is actually more useful when dealing with weird edge cases, not less)
What does this even mean? In concrete terms, why do you think I can't see what effects are possible in Python, and what problems does that cause?
In all three of the cases that you mention, I can see a sort of aesthetic beauty to the Haskell solution, which I appreciate. But my clients don't look at my code, they look at the results of running my code.
The fact that you need a blog post to tell people how to resolve an issue exemplifies my point that this is not resolved. Nobody needs to be told how to turn off laziness in Python, because it's not turned on.
The fact is, Haskell does the wrong thing by default here, and even if you write your code to evaluate eagerly, you're going to end up interfacing with libraries where someone didn't do that. Laziness still gets advertised up front as being one of the awesome things about Haskell, and while experienced Haskell developers are usually disillusioned with laziness, many Haskell developers well into the intermediate level still write lazy code because they were told early on that it's great, and haven't yet experienced enough pain with it to see the problems.
Haskell has a long history of a small base library, with a lot of essential functionality provided as third-party libraries, including mtl and transformers (monad transformers) and vector (an array library). Even time (a time library) and text (Unicode strings) are third party by some definitions (they aren't part of the base library, but they are shipped with the compiler).
Some people think that's fine, some people think it's annoying. I personally think it's great because it allows for a great deal of separate evolution.
Thanks for your detailed reply! As a reminder, my whole purpose in this thread is to try to understand your comment that Haskell is
> impractical to be used as a general-purpose language (because I don't think it's intended as a general-purpose language)
From my point of view Haskell is a general purpose language, and an excellent one (and in fact, the best one!). I'm not actually sure whether you're saying that
1. Haskell is a general purpose language, but it's impractical
2. Haskell is a general purpose language, but it's too impractical to be used as one (for some (large?) subset of programmers)
3. Haskell is not a general purpose language because it's too impractical
(I agree with 1, with the caveat I don't think it's significantly less practical than other general purpose languages, including Python. It's just impractical in different ways!)
That out of the way, I'll address your points.
> can you explain what withJump does? Like, not the intention of it, but what it actually does? This is a rhetorical question--I know what it does--but if you work through the exercise of explaining it as if to a beginner, I think you'll quickly see that this isn't trivial.
Yes, I can explain what it does! `jumpTo break` throws an exception which returns execution to `withJump`, and the program continues from there. Do you think explaining this to a beginner is more difficult than explaining that `break` exits the loop and the program continues from there?
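If it helps, here's a rough Python model of that explanation (illustrative only; it's not how Bluefin is actually implemented):

```python
# Rough model of withJump/jumpTo: jumping is throwing an exception
# that the enclosing withJump handler catches.
class Jump(Exception):
    pass

def with_jump(body):
    try:
        body()
    except Jump:
        pass  # control resumes here, just after the "loop"

printed = []

def loop_body():
    for i in range(1, 101):
        if i > 4:
            raise Jump()  # corresponds to `jumpTo break`
        printed.append(i)

with_jump(loop_body)
print(printed)  # [1, 2, 3, 4]
```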
> It's certainly unintuitive, but I can't think of a case this has ever caused a problem for me in real code.
> Again, is this actually a problem? Any high school kid learning Python can figure out how to set a flag to exit a loop. It's not elegant or pretty, but does it actually cause any complexity? Is it actually hard to understand?
Yes, I would say that setting flags to exit loops causes additional complexity and difficulty in understanding.
> And lots of languages now have labeled breaks.
I'm finding this hard to reconcile with your comment above. Why do they have labelled breaks if it's good enough to set flags to exit loops?
> Arguably the Lua solution (gotos) is the cleaner solution here, but that's not popular. :)
Sure, if you like, but remember that my purpose is not to argue that Haskell is the best general purpose language (even though I think it is) only that it is a general purpose language. It has at least the general purpose features of other general purpose languages. That seems good enough for me.
> What does this even mean? In concrete terms, why do you think I can't see what effects are possible in Python, and what problems does that cause?
    def foo(x):
        bar(x + 1)
Does foo print anything to the terminal, wipe the database or launch the missiles? I don't know. I can't see what possible effects bar has.
    foo1 :: e :> es => IOE e -> Int -> Eff es ()
    foo1 ioe x = do
      bar1 ioe x

    foo2 :: e :> es => IOE e -> Int -> Eff es ()
    foo2 ioe x = do
      bar2 (x + 1)
I know that foo2 does not print anything to the terminal, wipe the database or launch the missiles! It doesn't give bar access to any effect handles, so it can't. foo1 might though! It does pass an I/O effect handle to bar1, so in principle it might do anything!
But again, although I think this makes Haskell a better language, that's just my personal opinion. I don't expect anyone else to agree, necessarily. But if someone else says Haskell is not general purpose I would like them to explain how it can not be, even though it has all these useful features.
> In all three of the cases that you mention, I can see a sort of aesthetic beauty to the Haskell solution, which I appreciate. But my clients don't look at my code, they look at the results of running my code.
Me too, and the results they see are better than if I wrote code in another language, because Haskell is the language that allows me to most clearly see what results will be produced by my code.
> The fact that you need a blog post to tell people how to resolve an issue exemplifies my point that this is not resolved. Nobody needs to be told how to turn off laziness in Python, because it's not turned on.
Hmm, do you use that line of reasoning for everything? For example, if there were a blogpost about namedtuple in Python[1] would you say "the fact that you need a blog post to tell people how to use namedtuple exemplifies that it is not a solved problem"? I really can't understand why explaining how to do something exemplifies that that thing is not solved. To my mind it's the exact opposite!
Indeed in Python laziness is not turned on, so instead if you want to be lazy you need blog posts to tell people how to turn it on! For example [2].
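To illustrate what opting in looks like, Python's laziness story is mostly generators (a minimal sketch):

```python
import itertools

# A generator is lazy: no values are computed until a consumer asks
# for them, so an infinite sequence is fine to define.
def naturals():
    n = 1
    while True:
        yield n
        n += 1

first_four = list(itertools.islice(naturals(), 4))
print(first_four)  # [1, 2, 3, 4]
```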
> The fact is, Haskell does the wrong thing by default here
I agree. My personal take is that data should be by default strict and functions should be by default lazy. I think that would have the best ergonomic properties. But there is no such language. Does that mean that every language is not general purpose?
> even if you write your code to evaluate eagerly, you're going to end up interfacing with libraries where someone didn't do that.
Ah, but that's the beauty of the solution. It doesn't matter whether others wrote "lazy code". If you define your data types correctly then your data types are free of space leaks. It doesn't matter what anyone else writes. Of course, other libraries may use laziness internally in a bad way. I've fixed my fair share of such issues, such as [3]. But other libraries can always be written in a bad way. In Python a badly written library may cause an exception and bring down your worker thread when you weren't expecting it, for example. That's a weakness of Python, but it doesn't mean it's not a general purpose language!
> Laziness still gets advertised up front as being one of the awesome things about Haskell
Hmm, maybe. "Pure and functional" is the main thing that people emphasize as awesome. You yourself earlier brought up SPJ saying that the next Haskell will be strict, so we both know that Haskellers know that laziness is a double-edged sword. I'm trying to point out that even though one edge of the sword of laziness points back at you, it's not actually too hard to manage, and having to manage it doesn't make Haskell not a general purpose language.
> and while experienced Haskell developers are usually disillusioned with laziness, many Haskell developers well into the intermediate level still write lazy code because they were told early on that it's great, and haven't yet experienced enough pain with it to see the problems.
Hmm, maybe. I don't think people deliberately write lazy (or strict) code. They just write code. The code will typically happen to have a lot of laziness, because Haskell is lazy by default. I think that we agree that that laziness is not the best default, but we disagree about how difficult it is to work around that issue.
I would be interested to hear whether you have more specific ideas you can share about why Haskell is not a general purpose language, in light of my responses.
> everything was Java, which couldn't make a binary. Then the crowd jumped to JS, where we ditched integers and true parallelism. Python freed us from speed. Go came along, promising to remove generics and exceptions, and to finally give us back our boilerplate.
That paragraph made me chuckle, thanks.
> picture yourself happily doing your day-to-day coding without the existence of nulls
I've seen it, with Elm and Rust, and now I hate go's "zero values" too because it makes everything a bit more like PHP aka failing forward.
> Exceptions or return values? Nope, monadic error handling, any day of the week.
Ehhhh...
The thing is, there are a lot of cases where I can look at the code and know the error won't happen because I'm not calling it that way. Sure, sometimes I get that wrong, but not every application needs a level of reliability worth the effort of reasoning about error handling semi-explicitly, just to persuade the compiler that an error is handled when it really doesn't need to be.
> Terse dynamic code, or bloated static code? Nope, terse code with full type inference.
I think you're significantly overselling this. Type inference is great, but you can't pretend that you don't have to implicitly work around types sometimes, resulting in some structures that would be terser in a dynamic language. Type inference is extremely valuable and I really don't want to use static types without it, but there are some tradeoffs between dynamic types and static types with type inference that you're not acknowledging. I think for a lot of problems Haskell wins here, but a lot of problems it doesn't.
One area I'm exploring with the interpreter I'm writing is strong, dynamic typing. The hypothesis is that the strictness of the types matters more than when they are checked (compile time or runtime). Python and Ruby I think both had this idea, but didn't take it far enough in my opinion, making compromises where they didn't need to.
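To make that distinction concrete: "strong" here means no silent coercion, independent of when checks run. Stock Python already behaves this way (a minimal illustration):

```python
# Strong dynamic typing: mixing incompatible types raises at runtime
# instead of being silently coerced, as a weakly typed language would.
try:
    result = "1" + 1
except TypeError:
    result = "rejected"

print(result)  # "rejected"
```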
> Terse nulls or nulls & boilerplate Optionals? Nope, just terse Optionals.
100% with you on this.
> First-order generics or no? Higher-kinded parametric polymorphism.
Ehhh, I feel like this is getting overly excited about something that simply isn't all that useful. I'm sure that there's some problem out there where higher-kinded types matter, or maybe I just lack vision, but I'm just not coming across any problems in my career where this feels like the solution.
I feel like there's a caveat I want to add to this but I'm not able to put my finger on it at the moment, so bear with me if I revise this statement a bit later. :)
> Multiprogramming via locking & shared memory or message passing? Hey how about I choose between shared-memory transactions or transactional message-passing instead?
Ehh, the languages I like are all using transactional message-passing anyway, and I'm pretty sure Haskell didn't invent this.
> There is little stuff happening outside of Haskell to be envious of. Java took a swing at the null problem with Optionals a decade ago. My IDE warns me not to use them. It's taking another swing with "Null-Restricted Value Class Types". I know your eyes glaze over when people rant about Haskell, but for two seconds, just picture yourself happily doing your day-to-day coding without the existence of nulls, and pretend you read a blog post about exciting new methods for detecting them.
I mean sure, I'm 100% with you on Option types, as I said. But, imagine being able to insert `print(a)` into your program to see what's in the `a` variable at a specific time. Hey, I know that's not pure, but it's still damn useful.
> imagine being able to insert `print(a)` into your program to see what's in the `a` variable at a specific time. Hey, I know that's not pure, but it's still damn useful.
In Haskell that’s Debug.Trace.traceShow. You can use it in pure code too.
[1] https://en.wikipedia.org/wiki/Chunking_(psychology)