Well, Go seems to have enabled a new wave of software renaissance - Kubernetes, Docker, you name it - Go seems to have filled the space that Java was too fat for, JavaScript too light for, and C/C++ too difficult/fun-less to use for (which is everything).
After 20 years of coding, and having just recently discovered LISP and learned Clojure, I believe the perfect language will be a Clojure compiled to native with the ease of Go (e.g. w/ similar import system).
>Well, Go seems to have enabled a new wave of software renaissance - Kubernetes, Docker, you name it
Let's name them. I wouldn't call either of these a "software renaissance". Docker is a wrapper on top of Linux kernel containers, and Kubernetes is a 10,000-pound gorilla on top of that (and a port of a C++ app, some say even via a Java rewrite transpiled to Go).
Go was just at the peak of its fashionability at the time (and had some basic features unrelated to semantics or syntax, like easy static compilation and cross-platform builds), so it got adopted by these projects, the same way many new projects now use Rust.
Docker is a wrapper on top of kernel containers in the same way that Emacs is a wrapper on top of the kernel filesystem API.
I've written a 150-line container manager in shell before (because we had a use case where both Docker was not the right technical fit out of the box and the organization had barely any operational experience with even normal use of Docker), and it's great that containers are built into the kernel and you can just use unshare and a bunch of bind mounts if you'd like, but Docker is its own quite substantial product and brings its own approach to building systems (Dockerfiles, container networking, etc.) that aren't implied by or required by the kernel API.
This sounds like dogma.
While I do agree there are some pieces which are overcomplicated or just plain bad code, on the whole it's more that the APIs aren't quite right, specifically for people who want to integrate with it rather than just use it.
containerd exists pretty much because of this observation.
And what does it actually give that is new? What does Docker give that you can't get with a normal VM, with Terraform to spin up however many instances you need? Put an app on an AMI, save it, and whether you need one instance or 1,000 instances, spin them up with Terraform. You get to stick with normal operating systems, such as Linux and Windows. You don't have to learn a bunch of new technologies.
> What does Docker give that you can't get with a normal VM
There is a difference between a Linux container (what Docker uses under the hood) and a full-blown virtual machine. In the case of Docker you are not running a new operating system; you are just running new processes in a different namespace while still using the same kernel. Linux containers are lightweight: you can start them as easily and as quickly as other processes, without overhead. A VM (in most cases) is a separate running operating system, which means a different startup time and the overhead of a whole OS (CPU for keeping the OS and all its services running, additional RAM, additional disk space, etc.) plus the virtualization overhead itself (which in many cases can be minimized).
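To make that concrete, here is a minimal, hedged sketch in Go (Linux-only, and it needs root) of what "new processes in a different namespace" means: an ordinary fork/exec with some clone flags. The child below is just another process on the host kernel; no guest OS boots.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in fresh UTS, PID and mount namespaces.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// The child shares the running kernel with the host, so it starts
	// about as fast as any other process.
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Everything Docker adds on top (images, layered filesystems, networking, a daemon and API) sits above primitives like these.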
Yes, I know, but what does that give me? When I'm setting up the infrastructure for my company, what can I do with Docker that I can't do more easily with a standard VM and Terraform? In terms of security, isolation, orchestration of services, managing concurrency, I get everything I need with ordinary VMs and Terraform. I don't see what Docker gives me.
Docker was written in Go before Go was fashionable. It was mainly written in Go because Go wasn't Python or Ruby, and the authors didn't want to participate in the language wars.
K8s is written in Go because Docker was in Go and sharing that dev community made a whole lot of sense.
> Well, Go seems to have enabled a new wave of software renaissance - Kubernetes, Docker
Go turned out to be a pretty bad choice for container tech due to its awful multithreaded runtime. Things could have been different if it focused on a single threaded runtime or provided control over OS threads, but it did neither.
Also, neither of those projects was enabled by Go. One is a pretty low-quality but strategic push into the cloud, and the other is a pivot from an attempt to solve the real problems of building a PaaS hosting platform.
> Go turned out to be a pretty bad choice for container tech due to its awful multithreaded runtime.
Can you please elaborate on this? My experience with Go is that it is able to fork/exec programs and manage cgroups via sysfs on Linux at least as well as any other programming language can.
You pretty much need to do everything from a separate process, which is the main issue.
Certain things like setns(mountns) will straight up fail in a multithreaded program, so there are some nasty hacks in runc to handle such things in C prior to the go runtime firing up.
> You pretty much need to do everything from a separate process, which is the main issue.
I’m confused. fork()/spawn() are the proper means to launch child processes on Linux. What language would do things differently if the syscall determines the means here?
Because Go is always multithreaded.
You literally cannot do certain system-level things in Go.
For instance, trying to "setns" to a mount namespace will yield EINVAL from the kernel in all cases with Go.
In fact, all namespace-related operations are really annoying to do from Go because the runtime is always multithreaded and you have no control over the threading except for locking the active goroutine to its currently assigned thread. This also makes that thread unusable for any new goroutine (even goroutines spawned from code running on that thread), which has the side effect that any new goroutine will always be in the original namespace context.
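To illustrate, here is a minimal sketch (assuming Linux and the golang.org/x/sys/unix package; the enterNetNS function and its fd argument are made up for illustration) of the LockOSThread dance and the leak described above:

```go
// Package nsdemo sketches the namespace limitation discussed above.
package nsdemo

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/unix"
)

// enterNetNS joins the network namespace behind fd (an open handle on
// something like /proc/<pid>/ns/net) for the calling goroutine only.
func enterNetNS(fd int) error {
	// Pin this goroutine to its current OS thread; otherwise the
	// scheduler could move it onto a thread that never joined the
	// namespace. Deliberately no UnlockOSThread: the thread is now
	// "dirty" and the runtime discards it when the goroutine exits.
	runtime.LockOSThread()

	// Joining a network namespace works. Joining a mount namespace
	// (unix.CLONE_NEWNS) fails with EINVAL in a multithreaded process,
	// which is why runc does that part in C before the Go runtime
	// starts any threads.
	if err := unix.Setns(fd, unix.CLONE_NEWNET); err != nil {
		return fmt.Errorf("setns: %w", err)
	}

	go func() {
		// This new goroutine runs on some other OS thread, so it is
		// still in the ORIGINAL network namespace, not the one joined
		// above -- the leak described in the parent comment.
	}()
	return nil
}
```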
> Go turned out to be a pretty bad choice for container tech due to its awful multithreaded runtime.
What do you mean by that? What makes it a bad choice in the context of containers? I ask because I know how Go uses an M:N threading model in its runtime and what Linux containers really are, but I can't find a reason why I would use the word awful to describe using Go in this context.
Because container tech often works per-thread, and you have no control over threads in Go.
There is runtime.LockOSThread(), but if something then spins up a new goroutine from there, it will run on a different OS thread and not in the container context.
It's not totally impossible to work around, however it does make many things incredibly frustrating.
Modern Go solved the problem of control over native threads. But it is puzzling indeed that it took almost 10 years despite the language's popularity for containers.
Only partially... what's available at least prevents leaking the context unexpectedly to other goroutines (due to thread re-use), however anything that spins up a new goroutine from the locked goroutine will end up in a new thread in a totally different context.
Kubernetes was transpiled to Go from Java, so I wouldn’t use it as an example in support of your point. (Read the source code; it’s pretty obvious — and it’s been confirmed elsewhere.) Some components such as etcd, however, are pure Go.
Etcd may not be, but Consul is in my experience substantially easier(*) to run in production than ZK, and is also written in Go. I’d say this has nothing to do with implementation language and much more to do with a more understandable consensus model.
(*) Based on having spoken to hundreds of operators of both over the last few years, while (disclaimer) working for HashiCorp and other distributed system vendors, but also having personally run both at serious scale.
I’m not sure that people are arguing about “supremacy” as much as they are arguing that Go was an attractive choice for the authors vis-a-vis other languages.
Getting into language or tool superiority arguments without deeply analyzing the environmental factors and use cases seems like a pointless exercise to me, and ought to be discouraged here IMO.
Is the JVM's sandboxing still supported, patched and believed to be secure? I had assumed they gave up on it when they removed applet and webstart support.
Thought it over and you're right, Spark has some SecurityManager glue I've vaguely glanced at, but HDFS relies more on Kerberos than anything in-proc with the workers. IPC into an opaque worker binary would be a lot more painful but can be made to work.
aren't header files quite useful for yourself and especially other devs as a quick summary? I've found myself wishing my IDE would automatically make a header file equivalent for my python code (VS Code actually does this, but gives a lot of other info that's extraneous in my opinion)
I've been learning a bit of Go and I like what I've understood so far, but one thing that seems somewhat lacking in the Go ecosystem is mathy projects. I know there are some, but I couldn't find many that were both active and to my taste. I was looking to learn and hopefully contribute. I was mostly looking for something mature for either mathematical optimization (stuff like linear, quadratic or integer programming) or (non-DL) machine learning. For C++, Java, Scala, Python and Julia (less sure about this), there seems to be much more.
You can compare Go's gonum to Python's numpy, or Java's Weka, Scala's MLlib, Python's scikit-learn and C++'s Dlib to what Go is offering.
Maybe Go leans more on its C interop for these sorts of things, which is a bit like numpy.
What are you not sure of with Julia here? There are places in the ecosystem to not be sure of, but this isn't one. Julia has probably the most advanced mathematical optimization right now with JuMP (http://www.juliaopt.org/JuMP.jl/v0.19.2/) and some of the most advanced post-DL machine learning with the full language differentiable programming tools (Zygote, Tracker, ForwardDiff) which have showcased applications like quantum machine learning and neural stochastic differential equations (https://arxiv.org/abs/1907.07587). Some of it is still in flux, but in terms of ecosystem there's a lot of stuff there that you won't find in other languages.
I meant that I personally am less sure about Julia, in the sense that my own knowledge of it is less sure. I've neither used it myself nor spent a few hours browsing project source code, unlike most of the other stuff I mentioned.
Since I am not finding what I am looking for in Go, maybe I ought to try out Julia.
> I've been learning a bit of Go and I like what I've understood so far but one thing that seems somewhat lacking in the Go Eco system is mathy projects.
I think this has to do with Go's poor FFI performance.
Considering it took you that long to become a Lisp convert, I'd like to know more about how you came to see the light. I've been coding for over 15 years, too. And every time I tried to pick up a Lisp dialect (Clojure was the last), I couldn't help but wonder what all the buzz was about. I really wish I could see the light someday, though.
I wish I had a good answer to this. I suppose I've never looked at anything other than what I was working with (C, C++, Java, JavaScript and much later CoffeeScript), out of "religious" reasons. I guess being really good at what you do can do that to you.
I hold CoffeeScript in very high regard. It has freed me from two things: the spaghetti verbosity of JavaScript, and the abrupt syntax changes introduced in recent versions of the ECMA spec, which I don't particularly see as a positive development of the language, _especially_ compared to the timeless cleanliness of CoffeeScript.
I think CoffeeScript may have thawed me a little towards LISP-like syntax and behavior.
I don't know. I think at some point you realize that you have written (and managed) enough C/Java-like verbose code and that it's time to try something completely different.
And then it hits you: programming should be data-centric, not language-construct-centric. And a too-big codebase is, more often than not, the result of an ill-specified project scope.
Depends where you are coming from. What languages do you work in now? If the language you are working in now has excellent support for concurrency, then that part of Clojure is not going to impress you. Or if you currently work with a language that makes it easy to write DSLs, then that part of Clojure won't impress you.
I think it should always be mentioned when people call Java fat that the core language is usually fast enough. What makes Java "fat" is the mentality of "Frameworks".
For me, Go seems like java, but without the fucking frameworks.
Not just frameworks, but the JVM as well. Having go compile to a reasonably small statically linked binary (which can be shrunk even more by things like UPX if size really matters) is great.
Needing to bring a very large, very complex virtual machine to execute your program adds a lot of “fatness” IMO.
Something that any Java developer has been able to do since 2000 with commercial JDKs.
And for those who want the free-beer version instead, using a fat jar with a linked jvm.lib is hardly different from packing Go's runtime into an executable file.
I have been wondering for some time now, when exactly is a language "fat" or "light"?
The idea seems to be there and "obvious" in a way, but haven't been able to pinpoint exactly what it is.
Is it a feeling or an actual metric?
e.g. When I work with Java in IntelliJ, things don't feel significantly slower than developing Node in WebStorm. Is it perhaps the way the code looks, in terms of verbosity?
I do agree that frameworks contribute to this feeling of Java, but again, is it because a Spring Boot webserver boots slower than a Node one?
Or maybe, say a Java cli program vs a C++ one. Is it the speed at which things run?
Could it be that, because we know Java uses a VM then it "should" be slower than compiled code and thus it's fatter?
What is that "thing" that makes us determine that a language is fat or light?
Well,"fat" is an unprecise term anyway, but the main concern when evaluating languages like that is mandatory overhead.
What exactly constitutes overhead is debatable, but generally anything that is not strictly necessary for the functionality is considered as such... Even if it has useful tradeoffs.
Examples from Java are the GC, the fact that compound types need to be objects and thus waste space on type and vtable pointers, and the fact that the JVM is actually an interpreter (one with only stack semantics at that).
Since C++ has none of that, it is quite natural to say that it is more lightweight than Java.
But that in itself is not necessarily a judgement about execution speed, although it is heavily implied.
If you write Java in a way that it is heavily jitable and doesn't stress the GC, you can end up with a faster program than a naive C++ version would achieve.
But all else being equal, the C++ version will be ever so slightly faster, because less stuff to do means more stuff done.
The one area where Java feels fat is the memory usage overhead due to 64-bit pointers being used a lot. So if you end up holding and processing a lot of data in RAM, the process allocation and garbage collector metrics won't look awesome. That said, OpenJ9 has recently improved on this by miles. I agree with all the other points - Java and its tooling are very powerful, and by that nature they enable us to put that much more load on them, so yes, that's where it will be predominantly visible.
Edit: that said, Java was created when we didn't have containers. (Just as Clojure was created before Java had lambdas and aggregate streams). So it may be the time to come back closer to the metal again, but as you point out, we need to maintain a clear mind while contemplating that.
> What is that "thing" that makes us determine that a language is fat or light?
In the same way the word modern is thrown around in the context of languages that have evolved a bit, I think heaviness is a notion that is an indicator of how committed the person is to the tradeoffs made by the tech: a heavy language made tradeoffs erring on the side of technical debt and the light one has not.
But it's easier to present one's own preferences as technical facts when trying to flame or troll.
In the case of frameworks, they often make choices for you, they are opinionated. It's easier to get unhappy about it, in a bike shed sort of way
It's just that some people repeat what others say without fully comprehending them (this is especially evident in the golang community). People parrot what the golang authors claim, even though many of such claims are baseless and incorrect.
Every language has flaws. Some of the best languages are those that take existing languages and just throw out some of the bad legacy mistakes, making what’s often a slightly different but pretty much objectively “better” language.
For example: Kotlin is a better Java in almost every way.
The problem with almost all such languages is that they aren’t different enough from their “parent” languages, meaning they struggle to gain traction. Both D and Kotlin fall into this trap.
I think Kotlin has also leapfrogged D on this — Google blessing it for Android use and an Android community clearly hoping for something better than an aeons-old version of Java goes a very long way.
The Android success story might just be what the language needs to establish itself enough that it’ll become a backend staple too
On the JVM, just like on any other platform, it is safer to bet on the platform's systems language than on guest languages with their extra debugging layers, tooling and wrappers for idiomatic code.
Then there is the fact that Kotlin is trying to stretch across too many platforms, meaning that any portable Kotlin code either cannot depend on any platform or needs multiple implementations.
Finally, Kotlin/Native has special semantics for handling data structures, as it tries to be Swift-like, so special care is needed when the code is supposed to be used from Kotlin/Native as well.
> Finally Kotlin/Native has special semantics for handling data structures, as it tries to be Swift like
If you mean Swift's issue with having to use weak references to prevent cycles, Kotlin/Native doesn't do that – they have a cycle collector, so it should behave the same as Kotlin/JVM.
It does have a different threading model though, where you can't share mutable structures between threads. Very recently they've introduced a "relaxed" mode that does allow it, but it's extremely experimental and sounds like it would be slow (since it can't defer updating reference counts).
(Also, what are you referring to with "extra debugging layers, tooling and their own wrappers" for Kotlin on the JVM? I can use IntelliJ + Maven + Spring with either Java or Kotlin. I don't see any extra layers.)
Yep, I mean the semantic model for data structures.
Try to call idiomatic Kotlin code from Java and see what it looks like, especially when coroutines and similar higher-level constructs get added, or when you have to deal with compatibility between Java streams and Kotlin sequences, which don't build on top of the Java ones.
When Loom arrives, it will be a similar story with Java's proper fibers and Kotlin coroutines and how they might interoperate.
Nothing unique to Kotlin per se, all guest languages happen to have similar bumps. Calling Scala or Clojure from Java leads to similar lists.
Then one is required to buy into JetBrains tooling for Kotlin, and get an additional license for Clion as means to have a graphical debugger for Kotlin/Native.
There is an Eclipse plug-in, which is kind of 2nd tier, and none at all for Netbeans.
With Java it is just ready, set, go, no matter which IDE: one IDE for Java and native code, alongside mixed-mode debugging (a feature JetBrains doesn't see value in supporting). And not all Java shops are into IntelliJ idolatry.
Not a datapoint by any means, but I think Kotlin gained ground mostly on Android, judging by the questions on Stack Overflow. When people ask about Kotlin it's almost always in the context of Android, while Java questions seem to cover a wider range of fields and platforms.
So could it be more a reaction to Java-for-Android being sub-optimal, or to the fact that Google suggests Kotlin to new developers, than to Java itself being sub-optimal for the task?
Go is actually pretty close to the perfect language to me -- for "general purpose" computing, at least. Sure, it's plain and somewhat boring to write but it doesn't make me that much less productive. Error handling and generics are the only other features that I want, so I'm really looking forward to Go 2, if that ever comes around. I would be happy to see Go declared "feature-complete" at that point, similar to Elixir [1].
But even then, if I want to have fun and feel like I'm learning new things while I code, I'll always use something else that's more interesting.
Go and Rust are the latest languages I started to use. Neither is perfect, both are awesome, but more importantly, both are a step forward, both get a lot of things very right, and for the rest, most are at least acceptable. When I code, most of my frustrating moments come from underlying problems with cross compatibility, access to hardware, graphics, encodings, etc, which are rarely problems caused by languages themselves.
We are pretty new at this, we are still trying to figure out a lot of things. We are still experimenting a lot with many features. But step by step, new language after new language, we keep improving.
Trying to be a realist tends to be a painful exercise, but in the case of programming languages, I think the situation is looking pretty good. There might be a billion paths left to explore, but at least the ones we are exploring are bringing something useful to the table. I hope we soon start writing articles titled "the perfect languages and the exciting ways we are getting closer to them". The article was pretty good, by the way.
Go has some great 'under the hood' features, i.e. ones provided by its runtime, but man, the language syntax makes it really annoying to take advantage of them. The sweet spot for me would be something between Go and Rust, i.e. a language with generics, algebraic data types, pattern matching, garbage collection, and good concurrency primitives with a good work-stealing scheduler.
So I'm basically asking for OCaml with multithreading.
In case you haven't heard of it, you should check out Pony (https://www.ponylang.io/). It's definitely got its own flair, but characteristically it's a lot like what you describe.
Wait... we must have different definitions of perfect. I would say Go is a very good language, and probably one of the best in terms of simple and practical design, fast execution, fast compilation, etc.
Perfection, however, is not something Go was ever meant to achieve. The perfect language would perhaps support full dependent types like Idris while maintaining zero-cost abstractions like Rust, have precise syntax rules like Lisp and a powerful runtime like Erlang, and the list goes on.
The success of Go lies in its trade-offs: the designers are opinionated, know well what they want and what they don't, and executed things well in the implementation.
Golang inserts semicolons behind the scenes anyway, resulting in bizarre behaviors like not being able to put the brace that follows an "if" or "for" on a separate line.
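A tiny example of that behavior (the Allman-style variant is shown commented out because it is rejected at compile time):

```go
package main

import "fmt"

func main() {
	if true { // brace on the same line: compiles
		fmt.Println("ok")
	}

	// Brace on its own line does NOT compile: the lexer inserts a
	// semicolon after `true`, so the parser sees a malformed if
	// statement followed by a stray block.
	//
	// if true
	// {
	// 	fmt.Println("never")
	// }
}
```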
Every time I've tried to write some go, I've always started reaching for some things that aren't there (essentially generics). I'm looking forward to go 2.0 and hope the inclusion of generics will make it more pleasant to write.
Also, while it's said comically often in HN comments, Rust (and particularly the iterator api) is very pleasing to write. I'd say it sits in quite a sweet spot language-wise.
Context, I'm a C++ programmer who had been writing in dynamic languages for the last couple of years.
Curious about your use cases for generics. I write a lot of Go and don't often find myself missing generics. I also used to write a good deal of C++ and didn't find myself often needing templates either.
How often do you find yourself writing or using `interface{}`? It’s littered all over the go code I’ve seen, including the standard library.
`interface{}` is the equivalent to using `Object` in java land. It completely punts on any sort of type safety, relying on the developer to handle messy edge cases correctly. I think one of the only reasons that it hasn’t been as severe a problem in go as java is thanks to that explicit error checking that go makes developers do (and that gets so much hate online), forcing developers to consider the case that the type isn’t what they assume.
But generics provide a useful way to safely make those assumptions, and cut down on the boilerplate around them.
> It’s littered all over the go code I’ve seen, including the standard library.
From my experience, interface{} is incredibly uncommon in application code. In most instances where the standard library uses it, it's usually justified because the function in question really accepts literally any type of argument, e.g. json.Marshal() or reflect.TypeOf(). The only counterexample I can think of are sort.Sort() and sort.Slice().
Given all the implications of adding generics to the language, I think Go would be better off just adding the common higher-order list/map manipulation functions to the builtins, i.e. the likes of map(), filter(), maybe reduce().
I just wanted to make some small functions for logging different types of things. I wanted to use parametric polymorphism to dispatch to different functions depending on the type of the thing I passed in. I think I ended up creating a few `log_type_foo` and `log_type_bar` functions rather than the single `log` function I wanted. It looked really ugly to my eye.
The thing I really wanted was parametric polymorphism. I seem to remember reading something where the go people think this is equivalent to generics.
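For what it's worth, here is a rough sketch of the two shapes that kind of code tends to take today (the Foo/Bar types and the logFoo/logBar/logAny names are invented for illustration): either one function per type, or a single interface{} function with a type switch that gives up compile-time checking.

```go
package main

import "log"

type Foo struct{ ID int }
type Bar struct{ Name string }

// One function per type: verbose but checked at compile time.
func logFoo(f Foo) { log.Printf("foo id=%d", f.ID) }
func logBar(b Bar) { log.Printf("bar name=%s", b.Name) }

// One function for everything: concise, but mistakes only surface at runtime.
func logAny(v interface{}) {
	switch t := v.(type) {
	case Foo:
		log.Printf("foo id=%d", t.ID)
	case Bar:
		log.Printf("bar name=%s", t.Name)
	default:
		log.Printf("unhandled type %T", t)
	}
}

func main() {
	logFoo(Foo{ID: 1})
	logAny(Bar{Name: "x"})
}
```

Parametric polymorphism (generics) would let the single `log` function keep the compile-time guarantees of the per-type versions.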
Go isn’t the perfect language because it doesn’t intend to be. Likely, as this article seems to hint at, there’s no such thing and there never will be. Go has a well defined set of design goals (including simple and pragmatic) that it has achieved.
Yet the concrete proposals keep getting shot down or reworked. If anything, this makes me more sure that the Go developers care about their core design goals. They are listening to community feedback, but they don't allow a vocal minority to bury the original vision.
Let me break it to you: there is no perfect language, just the ones general-purpose enough to cover the most common problems and most suited to your applications. Each language has its own use, and one doesn't trump another in terms of functionality or usage; they only compete when they lie in the same paradigm, and that is where most language comparisons are drawn, whether it be execution time or performance. This is also why there is more than one language within the same paradigm, as each one tends to have its own intricacies and features, developed solely for the purpose of improving upon its predecessor. So there you are, stuck with more than one language to suit your needs, and hence the untenable need for a perfect language.
Also, trying to bend a language to adopt features and functionality unwarranted for its use cases is another reason some languages draw criticism from the community; it's best for a language to stick to what it does best and stop there, instead of trying to fit uses it was never intended for when it was first developed. A language can certainly evolve over time, but constraining itself to its actual purpose is better for both the language and its users.
Languages are software products like everything else, so they either improve to cater for their "customers" and survive against competition, or they wither and die into some kind of maintenance corner.
Even languages that supposedly no one uses like Fortran and Cobol, get their standards revisited every couple of years.
One might not want to re-write that surviving Cobol program into a reactive text UI with a couple of FactoryFactory classes, but it is surely possible as per latest language features.
A language can't be small if it's universal enough. Otherwise you end up with trade-offs for its size, which limit the scenarios where that language can be used.
So there is no "perfect" language. All languages have some trade-offs. If you want small, you pay for it.
I agree that every language has its trade-offs, but you can move lots of stuff into the standard library of the language.
Taken to the extreme, the result can be an extremely small language. If your language is expressive enough, the result can also look like it is a much larger language. For example, Forths implement if/else, for/do, while/do, repeat/until, switch/case in terms of some stack manipulation and ‘jump’. Similarly, Common Lisp builds its control structures, object oriented programming support, etc, out of a few primitives.
Theoretical cases that are unusable in practice aren't interesting in the above context. We are talking about practical languages. It's not about Turing completeness.
It's a futile quest to find the perfect programming language. Any language that gains popularity will face a vocal demand for new features at some point. But it's not just about adding new concepts. Programmers also want new shortcuts for existing features to save them a few keystrokes. That then entails issuing recommendations on when to use such shortcuts to keep code consistent.
There are plenty of small languages, but they lack popularity or rich libraries. The only language I can think of that remains in widespread use and is fairly small in language size is C.
I'm strongly in the camp that small languages are preferable to large ones and wish language designers would follow this principle more closely. But small does not necessarily mean more readable. And language syntax design remains a neglected aspect of programming language design.
It's FreePascal. Nobody believes me, but we have perfection already. No crap you don't need. No crazy corner cases. No C++ templated metaprogramming lambda auto pointer garbage. No "I can't write a linked list without a Grimoire" Rust.
It's great. You get a ton done, and simply ignore the language wars. No VM trash (Java). No web trash (JavaScript/"Webasm"). No crap.
Try all the others, and try FreePascal. It's old enough that people aren't bolting on stupid shit. The community is awesome. Everybody just gets shit done. There's a lot of art if you go looking, but none of it requires a kilopage book to describe.
As much as I like Pascal, and I have been a big fan since Turbo Pascal version 4 for MS-DOS, using every version until about Delphi 3, the lack of any kind of automatic memory management is a killer for anyone who cares about code security.
Delphi only supports it for COM and Objective-C/Swift interoperability; everything else is as manual as TP for MS-DOS, and I bet FP hasn't improved in that regard.
Ah, a young lad :) I started tinkering with Turbo Pascal 3 and even have some experience with Pascal on the Apple II. (Not because I am THAT old, but those were the only computers my school had at the end of the '80s.)
The successors of Pascal are also highly interesting. I used Modula-2 quite a lot, but never got deep into Oberon. I do think one big advantage of Go is that it draws so strongly on the Wirth languages that it can almost be considered a modern successor.
And I fully agree, one shouldn't use a language these days for general programming without some kind of automatic memory management. Which is the other aspect that drew me towards Go, a nice natively compiled language with a GC.
I was initially also drawn to Go, mainly due to its Oberon-2 influence. I even made a couple of contribution attempts during the pre-v1 days, but they weren't that good anyway.
However, back in the Oberon days, what I liked was the evolution that followed: Oberon-2, Component Pascal, Active Oberon and finally Zonnon.
As such, I never appreciated the minimalism discussions around Go and the lack of modern language features. For that we already have Wirth cutting down Oberon features in every revision of the Oberon-07 language specification document.
Still, I do look forward to some Go 2.0 roadmap items actually landing in Go, and I advocate for its use in applications that would otherwise be written in C, if Go weren't a thing.
And I also collect examples of actual systems work done in Go, regardless of it not being suitable in the opinion of the anti-GC crowd.
Honestly, I am quite happy with the role of Go as a better C. A lot of applications still get written in C because C++ isn't making things better. And Go has a lot of higher-level constructs than C, so it reaches into far more domains than C. Structs with methods and struct embedding, as well as interfaces, give a surprising amount of power. Also, having first-class functions and closures gives a ton of power that C (and even C++) lacks. Add to that the great GC, and there are surprisingly few tasks I consider Go unfit for.
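As a small, hedged illustration of those constructs (all names invented): methods on structs, embedding standing in for inheritance, an interface satisfied implicitly, and a closure capturing state, none of which plain C gives you.

```go
package main

import "fmt"

type Animal struct{ Name string }

func (a Animal) Describe() string { return "animal " + a.Name }

// Dog embeds Animal and gets Describe promoted "for free".
type Dog struct {
	Animal
	Breed string
}

// Describer is satisfied implicitly by anything with a Describe method.
type Describer interface{ Describe() string }

// makeCounter returns a closure that captures its own mutable state.
func makeCounter() func() int {
	n := 0
	return func() int { n++; return n }
}

func main() {
	var d Describer = Dog{Animal{"Rex"}, "collie"}
	fmt.Println(d.Describe()) // "animal Rex", via the embedded method

	next := makeCounter()
	fmt.Println(next(), next()) // 1 2
}
```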
I am certainly watching the work on Go 2.0; it is certainly worth continuing to enhance the language, but I am very happy with how carefully it is being done, as adding too much complexity to the language would be detrimental. There are only a very few things I miss generics for, and usually they can be worked around.
I might have to add the disclaimer that my paying job is mainly about programming in Lisp, so for me there is a high-level alternative to Go, but this also shows me how Go is a great solution for a lot of problems.
I'm a big fan of the Wirthian language family, and while I think FP is eminently usable, I still think Modula-3 would be a better fit for Gophers. An M3 frontend for FPC would be a great combination...
I really like F#, but it has a few major pain points that hinder adoption:
The first is that once a user reads over the various functional things like let statements, record types, union types, etc., you still don't really know how to code in F# unless you already have a .NET and in particular a C# background. Nearly all the documentation and books (I have 3 of them) assume you're a C# dev making the switch. This is similar to learning Clojure/Kotlin/Scala on the JVM without knowing any Java. Every time I post this someone tells me it is untrue, so I give it another shot and am disappointed. With Python, there are dozens of books written for beginners that show you the building blocks (lists, control flow, dictionaries, tuples, functions, classes, file IO, etc.) and how to use those blocks to build useful programs. With F# you get three sentences in and then hear how this is just like that thing in C# that I also know nothing about.
Microsoft also seems to not be very dedicated to providing F# tooling support.
On the plus side, I've found the community to be super smart and helpful and the language itself is very beautiful from an aesthetics point of view to me somehow (just looking at a bunch of code ). In comparison to C#, there is sooo much less noise (); everywhere.
Go is a pretty simple language that compiles to static binaries. F# ain't that as there is all the bloat and baggage of .NET. Even with .NET core, that is more confusion for me.
You are not wrong. In fact I would argue it would be very hard to learn F# without knowing C# even if resources did not assume you did, since one of F#'s major strengths is the ability to use the entire .NET ecosystem just like Clojure as you said, and the .NET ecosystem is written for C# semantics (usable from F# but generally with some slight syntax switching).
On the tooling front, Microsoft includes it as a prime language with C# and VB.NET in .NET Core, and ships F# tools with their latest IDEs, which I appreciate and is almost more than I would expect, since their flagship is C#. It would be nice if it were better, but that would require the language to be a lot more popular, which is fair enough.
Static binaries are something that would be really nice. I've done it with F# using CoreRT, but it's certainly not as simple as with Go.
Maybe I've been looking at it wrong. Those books don't exist; you have to go through C# first, unless maybe you're just gifted.
I've thought OCaml might be a good F# replacement as it has its own compiler and interpreter that doesn't need the CLR/JVM and F# was mostly based off of OCaml, but it's a little awkward to me on Windows and OCaml itself seems to be a little painful (more boilerplate than expected) for doing things like IO (which I do a lot of).
F# is pretty close to a first-class citizen in VS and VSCode (Ionide, etc.) at this point and the community seems good -- what support are you lacking?
One of Go’s strengths is the opinionated nature that makes it easy for me as a developer to read and understand others’ code. It’s trivial to jump into the implementation of the standard API to see how things work.
Simple things like gofmt have helped enormously. Leaving out exceptions and templates have made the code harder to write (for me at least) but easier to read. It's the code equivalent of "schema on write."
The frustrating thing about Go for me is that it lacks generics but is strictly typed, which encourages the passing of empty interfaces, which can cause problems at runtime. You have to be very careful about storing all your data properly in structs or you face a big foot gun.
Which, like, fair enough. It's not perfect. But it's still really really good. I mean, people still use Java...
The alternative approach here would be to generate code for the things that need generics rather than using the empty interface. Of course, as with all things, there are trade-offs... as there will be if/when generics are introduced.
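For context, a minimal sketch of that approach using nothing but the standard library (the mathx package, the Max* functions and the file names are all invented for illustration): a small generator stamps out one typed copy of a template per concrete type, typically wired up through a //go:generate directive.

```go
// gen.go: run with `go run gen.go` (or via //go:generate go run gen.go)
// to emit one typed Max function per listed element type.
package main

import (
	"log"
	"os"
	"text/template"
)

const tmpl = `// Code generated by gen.go; DO NOT EDIT.
package mathx

func Max{{.Name}}(a, b {{.Type}}) {{.Type}} {
	if a > b {
		return a
	}
	return b
}
`

func main() {
	t := template.Must(template.New("max").Parse(tmpl))
	for _, spec := range []struct{ Name, Type string }{
		{"Int", "int"}, {"Float64", "float64"}, {"String", "string"},
	} {
		f, err := os.Create("max_" + spec.Type + ".go")
		if err != nil {
			log.Fatal(err)
		}
		if err := t.Execute(f, spec); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}
```

The trade-off is exactly the one mentioned: more build machinery and generated files to maintain, in exchange for keeping static type safety without language-level generics.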
I really like Go (and Rust) and would like a job with one of them as the main language. However, I don't have enough experience with either. Does anyone know the job market and can recommend a path, or should I give up? I don't have time to spend a few hours a week just to learn enough to get experience.
Usually, people start building demos or other side projects at their main job using whatever new tech they want to pad their resume with. This is bad for the company and your coworkers, but for people jumping ship every 2-3 years, leaving a mess has few downsides.
A more ethical approach is to simply learn at work during downtime over browsing the web. It even looks like work, so few people complain.
Building demos to pad your resume is bad for the company, but building demos to achieve unmet business needs is good for the company, and usually there are plenty of those lying around.
What I've noticed is in a lot of big enterprise shops that are playing "Agile" there is a big gap in support/tooling software; that is, software that makes developers more productive by automating repetitive tasks. Since this type of software doesn't exist to management because it won't fit nicely onto a Jira board or it can't be bent to fit a specific OKR, it's the perfect type of software to explore other languages.
Yes! It's also software that matters very little if you can't get a good support story for it - it's not customer facing and there are no SLAs, so worst case, everyone's automation fails but they can still get their job done. If you want to build a CLI for handling code reviews or an automated bisection tool using OCaml on a Raspberry Pi or whatever, that's way more acceptable than building customer-facing software (either services or shipped software) using OCaml on a Raspberry Pi.
Also, writing this type of software makes you an actual 10x engineer (by making 10 other engineers more productive).
From the article? Not at all. It doesn't matter if it's easy or difficult, because it only needs to be coded once. And it's not a request to change the language at all, just the tooling.
I don't really understand how a REPL would help with coding Go. I think I use tests to do what I think people use REPLs for, and I don't get why I'd change that to a REPL if I could - having those tests permanent is a good thing.
A REPL allows you to work in real-time with the system as it's written.
It's nothing like unit testing. It's like running around in the system able to poke-prod things as you go.
Particularly important is that it doesn't mean working in a console. Most REPL-based dev is done in the source code file itself, and you send code from the document to the REPL process. Code is written in a file as usual, and you utilize the REPL to massage the system as you want.
yeah that would be interesting to try out, thanks for the link :)
I meant that I use tests as a way of poke-prodding things - because Go compiles so fast, and you can run individual tests, it's easy to tool around in test code setting up specific state in code and seeing what happens.
Go never intended to be perfect; it made no claim to be perfect. When people still write long articles explaining why Go is not perfect, for me that is a pretty strong indication that Go is actually pretty close to perfect.
"It's the economy, stupid", said a former US President.
It's the economic attributes of a language that contribute to its market share. Go is a very good language from an economic point of view. (Java was also a remarkably good language, from an economic point of view.)
As to your specific point, I suggest a sufficiently large slice of software workers will include the subset that write long articles defending their theoretically so-so but economically superior languages: those blog posts are a feature, not a bug, of economically viable programming languages.
The bottom line is no one has proven that "the bottom line" is positively affected by use of sophisticated languages. The killer app of ho-hum languages is their demonstrated economic value.
PHP has a couple attributes that make it easy to start using and deploy, and a massive pile of network effects.
Those attributes have nothing to do with why it's a garbage fire, and 'sophistication' also has nothing to do with it. You could redo PHP to be equally sophisticated, and still have the things that make it popular, but also be a hundred times more consistent and less buggy. Heck, fixing refs and inconsistent syntax could make it less sophisticated.
> After 20 years of coding, and having just recently discovered LISP and learned Clojure, I believe the perfect language will be a Clojure compiled to native with the ease of Go (e.g. w/ similar import system).
One such direction seems to be Carp (https://github.com/carp-lang/Carp), but I haven't tried it yet.