Brooks, Wirth and Go (fredrikholmqvist.com)
258 points by kiyanwang on Aug 31, 2021 | 167 comments


Author here.

This thread made my day.

To the people who have commented, thank you. I seldom get to interact with SE people who enjoy their craft, and to see that so many have not only read my post but spent time writing out (dis)agreements brings a smile to my face.

Some people have pointed out historical inaccuracies, for which I'm grateful. Most of these points come from random sources I've consumed over the years, including for instance the two-week turnaround, a hand-wavy quote from Brooks (a video interview whose source I don't remember). Using it was for effect, perhaps a bit too eager on my part.

I'm new to writing, and by extension, putting my thoughts to paper. This blog entry was written to concretize my thoughts on this subject, as I've found that writing them down helps to clarify them a lot, and to practice my English (non-native speaker).

On the subject of minimalism: I've spent a few years in C#, Java, Swift, C++, JavaScript and Haskell. In all of them, the passing of time has always been a problem. What worked two years ago is broken today, and what was standard then is outdated (or even deprecated) now. Throw in some IDEs, frameworks, licenses etc. and the amount of man-hours lost to auxiliary ad-hoc work has been both draining and expensive (for our customers). Switching several projects to Go mitigated this almost immediately, and the return on investment combined with the novelty factor has lifted the spirits of not only me but the people I've worked with. Going back is (for me) out of the question, at least for the time being.

Happy to see people sharing their thoughts on this.

Cheers :)


I really appreciated the article. I feel the same way about Go as you do. No other programming language has made me feel this way. It's the most practical programming language I have used over the last 40 years.


Agreed! Go has made a big difference for me personally; its simplicity and tooling design make it easier for me to actually get wrapped up in programming a solution instead of being paralyzed by the design and starting phase.

Much respect for polyglot programmers and using the right tool for the job, but Go personally clicks best for me. I prefer to focus on gaining a deeper understanding of writing effective and maintainable Go for a variety of projects, so using Go as my main language is actually really important for me. I'm thankful it's possible to require that these days.


How does Rust compare in that regard? Will 3 year old non-trivial rust code build/run without issues?


Yes, Rust releases have (arguably) kept better backwards compatibility than Go releases have. IIRC there have been a few Go releases that broke code in practice despite the change being considered non-breaking.


> I've spent a few years in C#, Java, Swift, C++, JavaScript and Haskell. In all of them, the passing of time has always been a problem. What worked two years ago is broken today

Java, C# and JavaScript are obsessive about backwards compatibility (one of the reasons Java's changes to the language are moving so slowly).

There's nothing magical about Wirth's languages or Go. What worked two years ago will be broken today, and the language has very little to do with it.

> Switching several projects to Go mitigated this almost immediately

As in: you haven't run into things that are broken yet.


> Java, C# and JavaScript are obsessive about backwards compatibility

This might be true for Java, but it has not been my experience with the other two, .NET especially. Incrementally migrating from .NET Framework 4.6.* to .NET Core 3 has been challenging, even when we've had external (expert) .NET consultants to help us.

> There's nothing magical about Wirth's languages or Go.

I don't think so either! On the contrary, they're anything but magical.

> What worked two years ago will be broken today, and the language has very little to do with it.

It's been two years almost on the dot since those rewrites happened, and nothing (to my knowledge) has broken. On the contrary, code examples and best practices from 2011 look exactly the same. Writing Swift, anything older than 2019 made me sweat.

> As in: you haven't run into things that are broken yet.

Longest record so far! :)


If you ignored the trend and stuck to a single version of any language you've mentioned, the churn would have been minimal. Therefore it is not a (core) language problem but a social or ecosystem problem. Not to say that it isn't a problem, but it probably has nothing to do with anything you've claimed in the OP.


You almost never want to get stuck on an old, unsupported language version. A language is defined by its culture and ecosystem as much as by its spec.


> It's been two years almost on the dot since those rewrites happened, and nothing (to my knowledge) has broken. On the contrary, code examples and best practices from 2011 look exactly the same. Writing Swift, anything older than 2019 made me sweat.

Currently working on a Java codebase that has been around for ~8 years, previously developed with JDK7, now migrated over to JDK8 where it's probably going to stick for the foreseeable future.

I think that there's a lot to be said about the orders of magnitude that people think of when talking about the longevity of the code:

  - someone may believe that code running with almost no changes for 2 years is good enough, which may indicate relatively stable libraries/frameworks or approaches like deprecating functionality without explicitly removing it
  - someone else may believe that code running with few changes for 10 years is good enough, which probably also indicates stability of the underlying platform as a whole (for example, JDK8 still receives updates, has been around since 2014, and will be maintained until 2026 or 2030 depending on the distribution) at the expense of a slower pace of change
  - someone else might expect their code to work as well in 40 years as it does now, perhaps statically compiled code in very particular domains (e.g. code that typically has few dependencies and runs on hardware directly), though I'd argue that this is a bit of a rarity
One can probably make observations about the different libraries, frameworks, platforms, ecosystems, ways of thinking and perhaps about us as a society based on that, though I doubt that I should necessarily be that person. Does anyone know of people who've made similar observations?

Regardless, I think it's interesting to look at all of this and to wonder about how long any particular language, piece of code or platform will survive. Personally, I really enjoy the ones that are developed at a slower pace and don't need constant churn to keep the code running.


Writing Elixir the last few years has been great from this perspective of stability. There are breaking changes occasionally but they're mostly small or relatively trivial to fix due to the functional programming nature of Elixir.

For embedded stuff Nim has been great as well, and code from pre-1.0 Nim often just works or requires a couple of module renames.

Alas, the Linux kernel seems to drop or swap API subsystems every time I hit refresh on LWN (ok, that's a bit exaggerated) but hey.

Doing embedded projects means I really don't want to rewrite code to keep up with fads for a device that's intended to work for years.


> It's been two years almost on the dot since those rewrites happened, and nothing (to my knowledge) has broken.

We have code in Java that's been running non-stop for two years. Undoubtedly we have .NET code that's been running just as long.

However, it's possible that switching from .NET Framework to .NET Core incurs costs because they are quite separate projects. We've been running on .NET Core.

> Writing Swift

Yes, Swift is an entirely different story, and it's... a weird story to say the least.

> Longest record so far!

You haven't lived through switching to the new modules structure then :) That's quite a big breaking change (and poorly executed in my opinion).


> You haven't lived through switching to the new modules structure then :) That's quite a big breaking change (and poorly executed in my opinion)

Ugh. I was on the other side of this argument until you brought this out. It’s a great point. The actual language is very good at compatibility, but dependency management did go through a big, painful change that the owners of some internal code bases I work with still haven’t worked up the motivation to deal with.


This bit me really hard as I only come into contact with Go occasionally.

First, I couldn't understand how GOPATH worked (many years ago :) ), then that changed, and recently (last year) I spent something like three days trying to figure out why some dependency wouldn't fetch (it was because of modules).

:)

For a person who works with a language daily such changes are not as obvious.


I really like Go and respect the authors and maintainers of the language. It is well designed and achieves its goals exceptionally well. The author of this article is right: give it a Go! You'll learn it quickly and will appreciate having learned it; there are many very considerate design decisions to discover, across the language implementation, the tooling and the standard library.

I'm sometimes jealous of Go's users; the language is very readable, light and refreshingly straightforward.

That said, I cannot bring myself to use it (anymore).

The language doesn't scale with my ability and in some sense seems almost patronizing. I'm not a particularly good programmer, but I'm experienced enough that I feel held back by languages like Go. When you are used to more power, expressiveness and clarity, then you just miss it at every step of the way.

On one hand, the language doesn't let me encode higher-level thinking directly (relational, functional, domain-specific...), so I have to act as the compiler. And on the other, it doesn't expose the low-level control that one would want to squeeze out performance and minimize resource usage either.

Again, Go still feels attractive in some sense, and I wish I had a reason to use it more, but I can't find it.


I think not letting you encode relational, functional and domain-specific higher level thinking is by design.

I review a lot of code, and have been programming for 20 years. I think most programmers go through a phase, after 2-5 years of experience, where they get really into higher-level thinking. Everything is written according to a mantra like TDD, functional design, or everything-is-an-object, and they spend most of their complexity budget on abstractions.

The code people write in that stage of their careers looks to them like an improvement over the simpler concepts they started with, but it ends up being so much technical debt 12 months later, because almost nobody is good at predicting which higher-level concepts you are going to need, and by the time you realize you were wrong, it’s too expensive to remove them.

It’s hard for a language to encourage this kind of you-ain’t-gonna-need-it thinking without seeming patronizing, but I promise you that the next 10 years of your career will change your mind.

(I realize I also sound patronizing and I’m sorry: I typed this on a phone, so left out some caveats, such as I don’t know you and your background.)


I agree with your points generally, but I already went through that phase and "recovered". I've had a bit of an allergic reaction to unnecessary abstractions, design patterns and purism for quite a while now.

But I think that "phase" is important, because it pushes us to dive more deeply into topics around abstraction and to make mistakes to learn from, and to keep the good parts so we can apply them accordingly.

I'm convinced that some of those things are almost timeless and just 'correct' for their given use-case. Avoiding side-effects and composing functions isn't a fad, nor does it add complexity but reduces it. The issue emerges when we act as if we can do that completely and absolutely.

Relational data modelling and APIs are not things you can even avoid; many seem to try, only to come back full circle and re-implement them ad hoc.

Similarly, there are a ton of other data structures and patterns like this that have more limited applicability, but that work really well where they do apply, reducing the mental overhead needed to understand and extend a program and making it simpler and more robust.

Another good example would be data-driven/oriented design: pulling concrete stuff out of procedures and putting it into plain, boring data, possibly into structured text (JSON, XML, YAML...). It's boring(!) to program like this once you have settled on the right structure/schema, but in a very good way: it is relaxing and productive. I would even go as far as to say that this is one of the most productivity-enhancing force multipliers in programming, but it requires a bit of up-front thinking, or at least rewriting/refactoring, as opposed to going full-on procedural.

Again, I hear you, I've been there too. Abstraction is hard and requires experience and constant learning. Making mistakes in that regard is very expensive (time and mental taxation). But let's not pretend abstraction is inherently evil or misguided, especially not if we choose patterns and data structures that have stood the test of time like the above.

So I think abstraction should be treated as a kind of investment: it has a cost (including opportunity costs) and a possible long-term payoff. So we need to be considerate and careful.


>(I realize I also sound patronizing and I’m sorry: I typed this on a phone, so left out some caveats, such as I don’t know you and your background.)

Yet your comment is extremely dismissive of young people’s experiences. I personally had to fight for what I considered to be trivial design decisions, because a senior dev was pushing programming techniques based on his experience with Lisp in the literal 80s.

Now I love Lisp (although I prefer to work in Haskell), but I had to rewrite this old fart’s shitty and slow Java application in a vaguely object-oriented way, so that we would have a hope in hell of attracting new devs.

And when we actually employed new devs, I had to simplify my code even further to help onboarding. Thankfully I had excised most of the idiotic code at that point, so we built a simplified codebase that was accessible to anyone who had a vague understanding of an MVC-style web app. (Today that would probably be some kind of basic React/Vue app.)


Many people love it on HN, but for me coding in Go is a chore that makes me disinterested in my job. After 3 years of working with it, I dispute the claims about readability and simplicity. In Go, you solve problems by outputting inhuman amounts of code. It can feel productive, but most of it is just noise.

Of course just because I'm not well suited for it doesn't mean it's not for others. But there's a lot that we've collectively learned about programming, that's not acknowledged in Go's design.

You can start coding in it pretty fast, that much is true.


This was my experience as well. I inherited a codebase of very simple, well-written Go. Anytime I needed to go to the source to find out how the business logic was implemented, I ended up scrolling... and scrolling... and scrolling... collecting the information bit by bit like a scavenger hunt.

Go seems like it would be great for motivating someone to figure out the simplest possible way of solving a computing problem. Unfortunately for software development (but fortunately for humanity) not all problems allow engineers the freedom to brutally simplify the solution. Many domains come with hundreds of rules and exceptions that were all added for nontechnical, human-driven reasons, and your company probably does not have the power to throw out all existing business practices, human expectations, and regulatory requirements in your domain and replace them with something simple to implement. You just have to suck it up and implement the requirements in the most programmer-friendly way you can, and Go doesn't offer very good tools for that.

Admittedly, Go sets a higher floor than more expressive languages. A smart programmer with poor judgment can do a lot worse in Java or Scala than they could in Go. But Go's ceiling isn't much higher than its floor.


It's not noise if you can understand every line written though. That's maintainable code that will last for decades, instead of being replaced with the next hip language 5-10 years down the line.

If you don't have to think too much about what code does, you can get a better overview of what's going on.

Plus, there are more developers available if your code is simple. That is one unspoken (yet written down!) reason why Go was devised - to be able to have mediocre developers still be productive.

Clever code is illegible by most people, unless they wrote it, are very smart themselves, and are actually interested in the code in front of them.

Go code is readable and comprehensible by everyone.


> It's not noise if you can understand every line written though.

If I can understand every line of a 10,000 line program, is it not still noise compared to a 100 line program that accomplishes the same goal?

> If you don't have to think too much about what code does, you can get a better overview of what's going on.

That's the problem with Go. I do have to think about what the code does, because the ability to abstract things to a higher level is virtually nonexistent.

Take the following:

    items
      .select  { |item| whitelist.include?(item) }
      .sort_by { |item| item.price }
      .map     { |item| item.name }
I don't have to think about what this code does at all. It's obvious. And I am extremely confident that there are no bugs. The Golang equivalent?

    var tmp []item
    var res []string

    for _, item := range items {
        for _, entry := range whitelist {
            if item == entry {
                tmp = append(tmp, item)
                break
            }
        }
    }

    sort.Slice(res, func(i, j int) bool {
        return tmp[i].Price < tmp[j].Price
    })

    for _, item := range items {
        res = append(res, item.Name)
    }
Sure I can understand every line of the latter. It's still noise. And it's only comprehensible by sitting down and carefully reading every single line. Are there bugs? I think not, but I'd have to think through it to be sure. Go took what should have been a simple problem and somehow turned it into something that would feel at home if I was writing a device driver.


Case in point, I just reread this and realized there are two bugs.
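
For reference, assuming the two bugs are passing res instead of tmp to sort.Slice and ranging over items instead of tmp in the final loop, a corrected version might look like this:

    var tmp []item
    var res []string

    // Keep only whitelisted items.
    for _, it := range items {
        for _, entry := range whitelist {
            if it == entry {
                tmp = append(tmp, it)
                break
            }
        }
    }

    // Sort the filtered slice, not the (still empty) result slice.
    sort.Slice(tmp, func(i, j int) bool {
        return tmp[i].Price < tmp[j].Price
    })

    // Collect names from the filtered, sorted slice.
    for _, it := range tmp {
        res = append(res, it.Name)
    }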


I’m simply not convinced about that talking point. It’s an extraordinarily strong claim that it’s comprehensible to everyone.

Sure if you have a 100 lines of pure Go, that should be readable. But take a look for instance at the source for go tool cover. How much time do you need to spend to understand it? Then imagine Rob didn’t write it, but a junior. How much more convoluted would it be?


I'm as torn as some of the other commenters here as Go can feel stifling to me frequently, but I have to say I find Go's readability comes with a culture of reading the source, which can be powerful. Often if I'm using a Go tool and I have questions about exactly what it's doing under the hood, unlike other languages where I have to learn the localized style/conventions/patterns (are you using annotations? Dependency Injection? Thread pools or async?), I can just pop open the source and read it. That doesn't necessarily mean I'll _understand_ what I'm reading easily but it does let me pick up subtleties under the hood.

Hashicorp's tools are great examples; before you could find endless posts online about using their tools I'd often dive into portions of Consul or Nomad with questions. Both of these are complicated pieces of software and being able to read the source helped a lot. On the other hand, trying to debug Werkzeug and Flask in Python is a nightmare of objects inheriting weird properties and all sorts of control flow weirdness, despite Flask ostensibly being a lightweight web framework.


You’re conflating the ease of understanding/reading Go code with the complexity of a hard problem. I can personally read the standard library, or basically any library, and I regularly do to understand how stuff works under the hood. But certain problems are hard and are going to take time to understand, both problem and solution, versus getting a basic idea of some things the code is doing. E.g.: reading how a RabbitMQ library is using a parameter versus understanding the entire protocol it’s handling. Go is very nice for being able to read other people’s code, where in other languages it’s extremely hard. (E.g. I’ve tried to read the code for SQLAlchemy in Python and it’s… very hard)


"It's not noise if you can understand every line written though. " It can still just be noise. Have you never seen getters and setters in Java? Error handling in Go is mostly just noise a well. I mean manually wrapping errors so that you can get a stack trace? Our codebase is littered with span calls that in Java could have been relegated to an AOP library.


> In Go, you solve problems by outputting inhuman amounts of code.

The standard library manages to pack a lot of functionality into little code. It's probably not easy to write simple, concise code in Go, but it is certainly possible.


The standard library is good. If you can solve your problem by mostly using tools from the standard library in a short main package, Go works well. The work that I do unfortunately doesn't have that property.


The point you are responding to is not about using the stdlib, but implementing it. If you look at the stdlib itself, it adds lots of functionality without too much code, from scratch.


It would strengthen your point if you specified which parts of the standard library you're talking about.

Crypto stuff - yes. Reflection, AST processing, concurrent hash map - I'm not convinced. Some parts will be terser than others.

The standard library doesn't implement typical business logic. So it's not the best benchmark.

Not to mention the lack of reusable data structures.


Doesn't it also use functionality not available to userland code? Generics come to mind, but they will finally be available to everyone.


Most of it doesn't. Take a look at net/http. I'm not aware of anything it does that your package could not. Certainly nothing that would move the needle on succinctness.


net/http is a great example.

The std lib is disciplined about using safe Go ideas except in the exceptional cases. I feel like it’s nothing that is enforced by the formal spec of the language, just that the std lib has an exceptional emphasis on making it really hard to walk away with misconceptions.


It really depends on what you're comparing Go to and on what kind of projects. Some languages will fare better for simplicity on some tasks. Other languages will fare better for simplicity on other tasks. Programming is a hugely vast topic with an infinite number of problems to be solved in an infinite number of ways.


But the language is just a tool; it means you can turn your brain off while churning out code (or telling others to churn out code) and focus your mind on the REAL problems, for which the code should just be a tool.

I've worked with too many developers and projects that made the problem needlessly more difficult: by using a difficult language (Scala), by adding complexity to the whole stack (microservices + cloud-based runtime). But they were just CRUD apps. Glue code between services. An API between a front-end and an existing mainframe service, which was abstracted away behind a Java service.

They all focused their energy on their craft, on flexing their skills, but not on solving the actual problem - that was boring. And they focused on THEIR skills and abilities, while, if they are as good as they think they are, they should use that to make sure others get up to that level as well.

There are some quotes out there; paraphrasing one of them: if you write code to the best of your abilities, you are, by definition, not smart enough to debug it. And others are not smart enough to comprehend and maintain it. It creates noise and churn that will eventually lead to stagnation and full replacement.

Choose boring technology. Write boring code. Focus on the real problems to be solved, and make sure it's not code you wrote yourself.


> But the language is just a tool; it means you can turn your brain off while churning out code (or telling others to churn out code) and focus your mind on the REAL problems, for which the code should just be a tool.

By its limitations, the tool in question requires churning out more code and adds more steps between the “REAL problems” you solved in your mind and the actual implementation of those solutions in code.

If languages are just irrelevant tools and you have but to focus on the “REAL problems” to solve it all, there’s no reason not to use a Turing tarpit. Why are you not doing that?


> I'm not a particularly good programmer, but I'm experienced enough that I feel held back by languages like Go. When you are used to more power, expressiveness and clarity, then you just miss it at every step of the way.

I fully agree with this sentiment, but I feel that it is a pro when programming in the large. Power and expressiveness are not always a good thing in team environments where each person's view of "expressiveness" is different. I say this as someone who worked on a large Perl code base - Perl is powerful, expressive, and if the stars align, may even have clarity. However, when it comes to maintenance, I'll take a 10-line Go function (or 2) over a clever Perl one-liner, any day.


Give it some time. You'll come to appreciate its simplicity (and how hard being simple is) once you've made more mistakes.

I don't mean that to sound patronising. It's been my journey over 40ish years of coding, 30ish years of professional software development.


That's quite an odd humblebrag, I must say. You feel good for the simpletons for whom Go's expressiveness is adequate, but it's just not up to your own lofty ideas.

Perhaps the language isn't a fit for your skills or preferences, which is fine. You want something that scales with your abilities, but then say that you're not a strong programmer. A bit confusing.


I feel misunderstood!

I wanted to convey that I'm torn. I like Go and I feel like it was made for people like me: Working programmers who want to get stuff done.

And in a sense I am; I'm attracted to Go, but there are just enough stumbling blocks to keep me from using it, or even considering it, in a professional manner over other options. Not because it is a bad language or because I consider its users bad, quite the contrary.

In a sense it doesn't scale with my abilities, but that doesn't mean at all that they are special or golden. It's just a set of things that I absorbed and tried with experience or learned from books and filtered and adopted (dumbed down) for my needs.

I think you might be right with:

> "the language isn't a fit for your skills or preferences"

Maybe I should have worded it that way. For me Go is really good 80% of the time and then super awkward to use for the 20% where I cannot write the code in the way I think. It blocks me from exploring those thoughts, if that makes sense. And that makes me unhappy.


> That's quite an odd humblebrag, I must say. You feel good for the simpletons for whom Go's expressiveness is adequate, but it's just not up to your own lofty ideas.

TBF, that's how Rob Pike presents the language.


Nitpick:

"[The System/360] was the worlds first real programmable mainframe computer, opening up the notion that computers could be reprogrammed to suit new problems instead of being replaced by newer models."

The S/360 was by no means the first programmable computer. Rather, it was the first computer product line where all systems were compatible with each other, and where future models would all retain backwards compatibility. The selling point was that you would not have to rewrite your code (or make significant changes) if you wanted to move to a larger system.


That’s some nit that you picked! What else might be there, I wonder now.


Perhaps someone will answer this related question. Having coded in Python for years, I'm underwhelmed by claims of Go's simplicity. Python, with its batteries included, seems to me simpler to code in, easier for understanding existing code, and easier to produce maintainable code in, as far as I can see (this may partly reflect my prior experience, to be fair). When I looked at Go, it looked like one has to do acrobatics and write a load of code to get around missing basic libraries and features in the language. So I can see Go is useful for being very lightweight, and a good replacement for C++ for many things, but as for it being simple... well, really? Does "simple" just mean relative to the likes of C++?


I was just recently writing a somewhat complex websocket client API in both Go and Python. They were both a bit tricky, but Go's support for goroutines and channels as concurrency primitives right in the language made the solution much more obvious. With Python, I had to figure out how to do concurrency and communication: threads, async/await, queue.Queue, something else? Go also has much better interfaces for I/O with its io.Reader and io.Writer, whereas in Python you have a "file-like object" or maybe typing.TextIO if you're lucky (but is the file-like object a reader or a writer?).

So even though neither Python nor Go have websocket support in their standard libraries, it was much more obvious in Go how to do concurrency and what form the readers/writers should take.

In short, Go has good concurrency primitives baked into the language, and a much better-designed and cohesive standard library (Python's was designed over the past 30 years by a much larger collection of people; it's more bazaar than cathedral, with all the pros and cons that come with that).
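
To make the contrast concrete, here is a minimal sketch of the goroutine/channel pattern being described; the socket reads are faked with a loop, and everything else is standard library:

    package main

    import "fmt"

    func main() {
        msgs := make(chan string)

        // One goroutine produces messages (standing in for socket reads)...
        go func() {
            defer close(msgs)
            for i := 0; i < 3; i++ {
                msgs <- fmt.Sprintf("message %d", i)
            }
        }()

        // ...while the main goroutine consumes them as they arrive.
        for m := range msgs {
            fmt.Println("received:", m)
        }
    }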


I'm looking forward to Java getting Project Loom for the concurrency aspect, inspired by Go I believe. It will be a nice middle ground between the extremes of Python and Go.


The thing with Java - and by extension Scala - compared to Go is that Loom adds yet another concurrency and language mechanism - so developers have yet another thing to decide and choose from, and a risk of mixing styles together.

Whereas in Go there's only one way, and that's pretty much it. Oversimplified maybe, but my point is, 'bigger' languages offer options, and options increase overhead.

I support having simplified subsets of languages - or rules in a company, whatever - that set standards as to which language, stdlib and library features to use. Fewer options reduce complexity. At the cost of code volume maybe, but code volume is not and never has been the problem - complexity is.


I think the idea with Loom is that you won't have to deal with the red/blue function bifurcation of sync/async code anymore. It's a return to basics where everything is once again a thread and can be written synchronously, with the runtime handling the scheduling of the userspace threads.

In theory this would be very similar to what Go offers; in practice I guess we'll see once it's released, whenever that is.


Loom is just lightweight threads so you don't have to use third party async libraries to get better performance.


I'm not super familiar with Loom, but I will say that unless Loom transparently converts all sync API calls to async API calls, it will probably suffer similar problems to Python's async framework.


You can have millions of loom threads. Why do they need to be async?


The question is how do those loom threads map to OS threads. If blocking I/O calls aren’t parked on designated OS threads then other work on those OS threads can’t progress.


Those are the low-level details Loom takes care of for you. It maps the fibers onto threads and keeps the OS threads active even if your fiber is blocked.


Been hearing about Loom for what seems like years.

Keen to know what extremes you're seeing in Python and Go unless you're referring to the static vs dynamic typing?


I keep hearing about this project over and over again. Do you know what is the current status of that project?


Right, but its concurrency primitives - which I think most people like - have little to do with its minimalism. Those seem like independent traits of the language as far as I see.


If you've coded in Python for years then you probably haven't been exposed to the problems that Go was designed to solve. It's a systems programming language, like C. It's designed to be high performance and highly concurrent without sacrificing readability or overcomplicating the syntax.

You can think of it as C with type safety and easy concurrency. That's it really.

Python is very "hackable", as in you can throw something together very quickly. If that's what simplicity is to you, then Python is simple. I don't think Python is that simple though; there's a lot of hidden behaviour due to things like operator overloading. It's not type safe, and type hints are relatively new. Shipping a Python application is an exercise in itself. What's worse is downloading a Python project, running it, and then getting a random crash 10 minutes later at runtime, only to find that you're using the wrong version of Python.

I can code Go as fast as I could code Python. I find Go simpler than Python. There's very little hidden behaviour, and the behaviour that does feel weird is easily explainable. Shipping a Go application is as easy as sharing an executable. I'd pick Go over Python for all of my problems today because it's what I like. There are many tasks where Python would be slightly better, but I don't think Python is as well suited/well-rounded for all the tasks Go is good at. Like, I wouldn't program a high performance websocket server in Python.


Emphasis on "easy concurrency", yet hardly simple. What C messed up with memory, Go seems to mess up with concurrency primitives. The ease with which you can introduce goroutine leaks and race conditions is just ridiculous.

Sure, they say use the race detector. But then I say: sure, but I need to run it first.

Of course the race detector isn't holy either. Goroutine leaks are a thing, and there's probably a detector for that somewhere too. But surely the parallel to messing up memory with C and its detectors (Valgrind?) can be made, and I thought we were all pretty done with that.

I never shot myself in the foot with Java and Scala as easily as with Go, yet you almost get ridiculed for mentioning those two languages.


> Shipping a Python application is an exercise in itself

This, good cross-platform support and single-file executables. I use https://www.py2exe.org, and it works great once you've figured out how to set it up. However, doing the same thing in Go is simply `GOOS=windows GOARCH=amd64 go build ./cmd/...`


Python I find a lot nicer to deploy on *nix. On Windows I agree it can be painful ;)


Thanks for the helpful explanation :). C++ was my "first love" that I got paid to code in and enjoyed, and I really felt shocked at the lack of classes in Go. With C++ you could happily code C-style without classes if that suited better, but you could also use classes nicely (e.g. like in Qt, well IMHO anyway) or badly, as I'm sure many people did. Regarding deployment, I've never found that too bad with Python, but I only use *nix. Agree Python seems a pain to deploy on Windows (like many things on Windows). I generally use Docker, and have used conda in the past, so I don't feel a lot of difference in effort deploying Python vs Go. But different people have different skills and backgrounds; perhaps I'm better at jumping through hoops for Python so don't even notice.


> Shipping a Python application is an exercise in itself

I love programming in python, so long as I don't have to ship it. That part is painful enough that it's worth considering other languages to avoid that headache alone.


Is this really the right way to think about Go?

My impression was that it doesn't really compete with C at all - it just can't given that it's garbage collected. How many projects that actually need to be written in C can be successfully rewritten in Go?

I see Go more as a replacement for Python that's compiled, statically typed, performant and has better concurrency. Great for backend, web APIs, CLI tools etc but you wouldn't build an OS kernel with it.


I never said it competes with C; it's just almost syntactically C with type safety and concurrency primitives. It's a good way to think of the language, as that's where its simplicity comes from. Also, pretty sure most things that "need" to be written in C can be written in Go, except things like real-time operating systems. They might not be as performant in a lot of cases, but there's nothing stopping them.

The other thing is you're saying _need_ to be written in C. Out of the millions of C applications out there, do you actually think the majority need to be written in C? C is good in a lot of places, like kernels, or very constrained systems, but would something like cURL need to be written in C? I'd argue not. A lot of developers swear by C because they like how basic it is. Go gives you almost the same levels of basic syntax and then stops you corrupting memory accidentally.

The other strength of Go is that it's trivial to target so many different platforms. I can take an application that runs on Windows and then run it on Linux. And then run that same application on an Arduino or ESP32 microcontroller with minor changes.

Maybe you wouldn't build an OS kernel to challenge Linux in Go. But people have built kernels in Go. There's nothing that prevents you from doing that other than the aims of your kernel. It's not stopped people writing kernels in Java or C# before.


> My impression was that it doesn't really compete with C at all - it just can't given that it's garbage collected. How many projects that actually need to be written in C can be successfully rewritten in Go?

That is the wrong way to ask the question. Almost no C program had to be written in C. The right way to phrase the question would be more like: how many C programs could be successfully rewritten in Go? And I think the answer is: most of them.

The main speed difference between a C program and the same program written in Go is that the Go compiler is less good at optimizing the assembly output. There are very few tasks where the presence of a GC is prohibitive - if you don't generate garbage, it won't run. The Go GC is even written in Go, carefully scrutinized to not allocate dynamic memory.


Go for me is simple in the wrong ways.

I'm torn on this article. On one hand I love the simplicity of Wirth's languages. Oberon-07's language spec is 17 pages, including a 1.5 page BNF specifying the syntax of the entire language.

On the other hand my current preference in terms of language is Ruby, and while I dislike Python I'd pick Python over Go any day if I had to choose between them.

My impression of Go is similar to yours - it's a better C or C++, and it's conceptually simple-ish. But on the one hand it doesn't match the simplicity of Oberon in terms of syntax and semantics, so if I wanted Wirth-like simplicity Go wouldn't even make the list, while on the other it doesn't match the simplicity of use of Ruby or Python.

And if I had to pick a C/C++ replacement today, I'd probably opt for Rust over Go.


FWIW, I have 15 years of experience with Python and only 10 with Go. In my opinion, Go is simpler in general, but the most important "domains" in which it is simpler are as follows:

1. Tooling. Things like package management, single-binary deployment, profiling, etc are far simpler in Go than Python.

2. Performance. Optimizing Python is a painful endeavor. And the usual platitudes (e.g., "just rewrite the slow parts in C!") have significant caveats. Naive Go tends to be hundreds of times faster than optimized Python, and when you must optimize Go, it's typically just moving allocations outside of a hot loop (see the sketch after this list) or something trivial by comparison.

3. Concurrency. Python's async framework leaves a lot to be desired. If anyone calls a library that makes a sync call under the hood or uses too much compute, the whole process gets hosed and it's really difficult to identify the culprit. I also regularly see people forget to await the result of an async function--yeah, these are type errors and yes if you have the forethought and unlimited time you can write tests for any kind of type error, but this isn't a good use of anyone's time.

4. Rails. Go guides people toward good code. It guides people away from the kind of code that dynamic typing purists tend to write. For example, in the Python standard library, the type of an object returned by the `open()` API varies based on the value of an input parameter. Similarly, matplotlib, pandas, sqlalchemy, etc do silly things like this (although specific APIs aren't coming to mind). And of course there's a whole universe of junior engineers who try to emulate these things. As an aside, these rails are largely a matter of static typing, which also means automatic documentation (compared with Python's "x is a file-like object" with no information about what "file-like" actually means--does it support read? write? close? seek? truncate?) and great static analysis tooling.
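
As a sketch of the kind of trivial optimization mentioned in point 2 (the names records and process are made up for illustration):

    // Before: allocates a fresh buffer on every iteration.
    for _, rec := range records {
        buf := make([]byte, 0, 1024)
        buf = append(buf, rec...)
        process(buf)
    }

    // After: one allocation, reused across iterations.
    buf := make([]byte, 0, 1024)
    for _, rec := range records {
        buf = buf[:0] // reset length, keep capacity
        buf = append(buf, rec...)
        process(buf)
    }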


FWIW I agree with you on most of this except for scientific computing libraries like matplotlib and pandas. A lot of these are used for exploration so the weird dynamic typing going on is specifically for the ergonomics of exploration. Everything else I agree with. I also think that Python is a particularly deplorable example of this though because of its longevity and its use by people who aren't career programmers.


I could buy this for those limited use cases (although since more and more we're productionizing scipy code, it would be nice to have sane, maintainable APIs as well as rapid-iterating APIs, but I digress...), but the fact that these are so common across the ecosystem is the more troubling bit (and I think we agree here). :)


Thanks for the helpful perspective :)


There's a talk called "Simple Made Easy" by Rich Hickey (you can find it on YouTube) that discusses the difference between simple and easy in software development.

Under those definitions, I agree some things in Go are not necessarily as easy to solve as in Python, but Go is definitely simpler.


That talk should be at the top of every programmer's list. At the risk of sounding cliché, it actually will change how you think about program design.


Most of Rich Hickey's talks fall into that category.


You have it backwards: Go gives you the illusion that it is easy (easy to start with, quick to start coding something in it), but it is not simple. Writing simple systems is hard, and Go works against you here.


There are a few notes in the article that touch on why Go is simple. FWIW, Go comes with plenty of batteries included, and as a day-time Go developer, most of what we build doesn't require external libraries (generally logging, database modules, etc. are external).

Also what makes Go simple? This (From the article):

> If it worked ten years ago, it works now.

I have a collection of blog posts from the Go blog and other sources which I revisit from time to time, and many are indeed from ten years ago. Less cognitive overhead on figuring out what's from version x or y, whether it's been deprecated, etc.


Couldn't agree more with you. I've seen people trying to write web applications and web services in Go go crazy reinventing the wheel (hand-written validations, shell scripts for migrations, raw SQL because "who needs ORMs, ThEYaReSloW", etc.) for things that, with Python and any of its battle-tested frameworks, you'd be done with in a fraction of the time.

Go can be a great replacement for C, C++, maybe some Java stuff. Not for what you'd otherwise use Python, Ruby, PHP, etc.... right tool for the job. The "But channels are amazing and concurrency and speed and memory" argument doesn't matter as much for 90% of projects using it.


Python used to be simple. V3 cleaned it up even more (adding incompatibilities and the whole drama). The recent additions are individually small, but there are many of them that you need to know to be able to understand code using them: the walrus operator, type annotations, pattern matching, the full range of parameter-passing possibilities (keyword-only parameters), ... Each of those in isolation is not so complex, is useful, and was welcomed by the community. But simple... less and less so.


I'm still not sure that list comprehensions in Python 2 were the right move...


Nah, list, and even more so dict comprehensions, are one of my favorite Python features. Any time you can avoid state-in-flight (e.g. mutating stuff inside a for loop) is a win in my book. Maybe if you aren't used to the syntax there is some mental overhead, but that goes away after (at most) a few months of writing Python.


Back in the day, I saw Python basically as the anti-Perl, Perl being the language where "There's More Than One Way To Do It" is the core mantra. Sure, a for loop is a blunt tool, but there's something to be said for not having too many different ways to approach things. Python as the new BASIC.

Back then, that was a unique selling point of the language for me. These days, things are quite different.


This one I love, and it's very useful and clear, but I'm sure each one of the C++ features is dear to someone...


I came to Go from Python and while I know they're not exactly similar, I view Go as a "better Python" for certain tasks. I also continue to use Python for many things. One aspect of Go I love is how it handles concurrency. To try to get the same benefit from Python is just not worth it.


Go is really simple. Took me a while to get that. The issue is it forces you to deal with things that need to be dealt with, rather than leaving them as an exceptional case (pun intended).


> The issue is it forces you to deal with things that need to be dealt with

Except for all the times where it does not, which is most of them.

It won’t force you to deal with errors (generally; it will in some cases as a side effect of erroring on unused variables, but there are plenty of cases where that’s going to be suppressed or irrelevant), it won’t force you to deal with nullable pointers (which is all of them), it won’t force you to deal with unsynchronised shared mutables (which is easy to do unwittingly: just send a map or a pointer over a channel), it won’t force you to lock the right mutex before manipulating those when you’ve thought of having one, it won’t force you to properly use debatable APIs like `append`, it won’t force you to deal with the possibility of typed nils, etc…

It will force you to remove or explicitly silence unused import though, because, you know, that’s what’s important.
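
For the curious, a minimal sketch of the typed-nil pitfall mentioned above:

    package main

    import "fmt"

    type myErr struct{}

    func (*myErr) Error() string { return "boom" }

    func do() error {
        var e *myErr // nil pointer...
        return e     // ...but the returned interface is non-nil
    }

    func main() {
        if err := do(); err != nil {
            // This branch runs even though e was nil.
            fmt.Println("err == nil:", err == nil) // prints: err == nil: false
        }
    }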


Software complexity also helps to establish or maintain the predominance of large corporations on the market. In contrast, a simple software system can be reimplemented by a small team or even a single person, so there is more competition. This is not only demonstrated by Wirth's systems, but also by several 7th edition Unix clones of the 80's [1-4] as well as current ones [5] and also reimplementations of the classic MacOS [6].

Back in the 1980s, operating systems and compilers were often seen as the most complex pieces of software. Nowadays, web browsers include (and reimplement) large parts of an OS and compiler and are probably even more complex than a current OS such as Linux or a current compiler such as clang/LLVM. Only rich (or well funded) companies such as Google, Apple and the Mozilla foundation can afford to build a browser today that can be used to access current web pages.

So a central question is whether we can turn back time and make software simpler again. Maybe this ship has already sailed - but it can't hurt to try. From experience with my students, it is extremely satisfying for them to build a complete system from scratch instead of mostly copying and pasting library calls or StackOverflow code snippets. Thus, I try to enable my students to experience this sense of achievement. They will probably never get the chance to do something similar in their later career in industry.

[1] One Man Unix for 68k - http://www.pix.net/mirrored/discordia.org.uk/~steve/omu.html

[2] Uzix for Z80 (link to the MSX port) - http://uzix.sourceforge.net

[3] Coherent Unix - https://en.wikipedia.org/wiki/Coherent_(operating_system)

[4] Minix - https://www.minix3.org

[5] Alan Cox' Fuzix - https://github.com/EtchedPixels/FUZIX

[6] Ardi Executor - https://en.wikipedia.org/wiki/Executor_(software)


> You cannot reduce the complexity of your problem by increasing the complexity of your language.

This strikes me as backwards. There’s a certain amount of irreducible complexity in any requirements, and every commonplace problem the language refuses to solve is one more problem for me to face with no help. Nearly every feature that makes a language more concise and powerful (reentrant functions, higher-order functions, tail recursion, continuations, dynamic dispatch, multiple dispatch, garbage collection, sum and product types, pattern matching, exceptions, concurrency, laziness, macros) makes it harder to specify and implement, but reusing a heavily tested implementation that everyone knows is always better than trying to roll my own.

> Language is (comparatively) easy to pick up.

This is not a good thing. The shorter the learning curve, the quicker you run out of ways to improve your work. It’s like boasting that your toolbox is easy to carry, because it’s empty.

> Deploy by running a single executable.

This is not a good thing. It guarantees you can’t reuse any code provided by the platform. Static linking was a problem that was solved in the 1980s, and being able to choose it is always better than being forced into it.

Anyway, Go does have a GCC-based toolchain with a normal linker, even if proponents prefer the one that reinvents everything incompatibly.

> Being stuck in the 70s means no breaking changes since flared pants.

I actually agree that languages should commit to “if you ask for the 2021 semantics that’s exactly what you get now and forever,” but any language can do that, we don’t have to settle for weak ones.


>This is not a good thing. The shorter the learning curve, the quicker you run out of ways to improve your work.

I disagree with this. There are many languages that are perfectly productive while being reducible to a very small core: Haskell, Standard ML, OCaml, Modula-2/3. The size of the ecosystem is a property extrinsic to the language.

Simple languages are easier to learn, easier to implement, and it's easier to understand code written in that language, since the semantics are simpler. Very rarely does language complexity buy you anything except painful surprises or obfuscated code contest entries.

For example: Rust is a much simpler language than C++ and you can use it to do essentially anything reasonable you'd want to do in C++.


Counterexample: Brainf*ck is very easy to learn and very difficult to program in.

You have a point that a large feature set doesn't necessarily mean a productive language, but the same can be said for a small feature set. Your examples only show that there is some set of features that can subsume other features, so the number of features can be effectively reduced. At some point however there would be an irreducible set of features.


There's obviously going to be a happy middle ground between too many features and too few. That middle ground is going to differ for different people too.

That's the beauty of having different languages. If Bob finds Go too simplistic then he can use something else. Personally I like Go and don't want the language to change into yet another C++-like jack of all trades.

The real problem with languages these days is people conflate personal preference with irrefutable fact. Probably because we're taught our profession is a science.


> The real problem with languages these days is people conflate personal preference with irrefutable fact.

This bears repeating. I've found language preference and style depend a huge amount on education, language exposure, and even neurotypes. For example, I personally dislike keeping pieces of state in my head. I have gravitated towards a very functional style: eschewing OOP, short functions, lots of static typing. I need it to stay sane. My kryptonite is long, scripty things with tons of mutable state and magic. Some people can work like that; they can keep a big chunk of varying state in their head. I don't claim that either type is better (though the former is much more approachable for those new to the codebase). It's just different tradeoffs.


> I disagree with this. There are many languages that are perfectly productive which are reducible to a very small core: Haskell, Standard ML, OCaml, Modula-2/3.

In regards to “small core” and Haskell, one of the complaints I’ve heard is that any real code will inevitably end up using all sorts of language extensions, which seems to be the case in the admittedly small amount of Haskell code I’ve seen.

Now I’m not a Haskell developer (unfortunately it seems, at the end of the day, the energy spent learning it would be wasted), so I want to stay away from citing this as fact. But those pragmas tend to scare me away from the language.


I wrote a nontrivial Haskell application this year [1] and use exactly 0 language extensions.

[1] https://github.com/tromp/ChessPositionRanking/tree/main/src/...


I agree that Haskell in practice is the Cartesian product of all sorts of language extensions. But nevertheless Haskell 98 and Haskell 2010 are, in and of themselves, perfectly productive languages without extensions.


>Simple languages are easier to learn, easier to implement, and it's easier to understand code written in that language, since the semantics are simpler.

But "simpler" language specifications can also be harder to use in the real world. I made a previous comment on how the simplicity causes extra complexity in real-world code bases: https://news.ycombinator.com/item?id=14561492

For example, a "simple" language I used did not have bitwise operators like C/C++ (|&^~). However, my problem still had irreducible complexity that required reading individual bits of a byte so I wrote a bit reader using math:

  function bitread     && returns .T. or .F. value
  parameters cByte, nPos
  return ! (int( asc(substr(cByte, nPos/8+1, 1))/(2^(nPos%8)) )%2 == 0)
By eschewing the so-called "extra complexity" of the "&" bit operator in C/C++, we end up using a mathematical combination of exponentiation and modulus with substring extraction.

Yes, one can argue that not having bitwise operators means it's "simpler to learn the language because it's one less piece of syntax to grok" -- but now you've caused extra complexity in the codebase. This extra complexity multiplies in other ways:

- Programmer John spelled his custom bit reader as "BitGet()"

- Programmer Jane spelled her custom bit reader as "bit_fetch()"

- in addition to different spellings in the wild not being interoperable, each may have subtle bugs. (Did the programmer implement the math correctly?!?)

Therefore, adding a bitwise operator adds complexity to the base language spec but also simplifies real-world coding.
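
For contrast, a minimal sketch of the same bit read in a language with bitwise operators (Go here, with illustrative names):

    // bitRead reports whether bit nPos (0-indexed, LSB first) of data is set.
    func bitRead(data []byte, nPos int) bool {
        return data[nPos/8]>>(nPos%8)&1 == 1
    }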

A lot of so-called extra complexity (extra keyword concepts in C#, Swift, Rust, Javascript ES6) in newer modern languages let you write simpler programs because the base language encodes a common pattern that a lot of people were re-inventing.

E.g. C Language doesn't have generics but that doesn't mean the need for expressing a concept of generics goes away in actual real-world C codebases. See comment by pcwalton: https://news.ycombinator.com/item?id=14561664


You can very justifiably move bitwise operators to the standard library given that they are all just functions of the form `(bitfield * bitfield) -> bitfield`.


I believe you are mixing up 'complex' and 'complicated'.

> productive which are reducible to a very small core

That's like saying first-class continuations are easier than coroutines or generators because they effectively subsume both. I don't think that's the case at all: you now need to understand continuations in addition to coroutines and generators.

> Rust is a much simpler language than C++

C++ is more complicated than Rust, but Rust is more complex than C++. If you don't understand RAII, lifetime annotations are going to be tough to figure out...


I think if both languages were fully formalized Rust would have a smaller formalization. I don't know if Rust's grammar is context-free but it's much closer than C++'s grammar.


The grammar is not a big issue. The assertion is trivially true given how many C++ features interact with one another, badly.

Just understanding which special member functions get generated depending on which ones you implement by hand is an 8x6 matrix, and that tells you nothing about how they misbehave when you fail to follow the “Rule of Whatever” (variously 0, 3, 5, 6) properly.


(Rust's grammar has one teeny tiny corner that's context sensitive, and so that makes the entire thing context sensitive in a formal sense, but in practice, it is much simpler than that in the vast, vast majority of cases.)


Haskell 2010 is a small language. GHC Haskell is a beast in comparison.


> Having created PASCAL, MODULA and MODULA-2, Wirth set out to develop the OBERON family of languages in order to build his operating system on his workstation.

This is not quite true.

Modula and Modula-2 originated from Wirth's first sabbatical at Xerox PARC, where he got acquainted with Mesa and the XDE development environment; back at ETHZ he created Modula/-2 and the Lilith OS.

On his second sabbatical at Xerox PARC, Mesa had evolved into Mesa/Cedar, and that was the genesis for Oberon, his second workstation OS.

Also although I appreciate Wirth's work, for me the best languages were Modula-2, and Active Oberon, while the best Pascal dialect is indeed what Apple and Borland did with it.

I am not a fan of Wirth's later pursuit of minimalist GC'd systems language design.


I love the piece in general, but one statement stands out to me:

> Now, perhaps OSX is more feature complete than Oberon, but certainly not by a factor of ~40 000X.

I am not at all sure that this is true. Unix alone has a vast feature set. Then consider just the features connected with something like displaying type, or accessibility.

OS X is complex because it has accreted a large collection of messy human requirements.

> Something was lost along the way.

This is 100% true. Even to me, as someone who thinks Apple does good work, OSX doesn’t feel like a general purpose computer in the way that something like Oberon or an Alto running Smalltalk does.

The Kay quote comes to mind: “Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.”

It's a bit unfair to Apple. There is structural integrity to OSX, but it’s like a civil engineering project where the integrity is gradually being retrofitted over time.


> There is structural integrity to OSX, but it’s like a civil engineering project where the integrity is gradually being retrofitted over time.

Despite the engineered "user-friendliness" of the OS, it still suffers from a simple problem, which is this: how can a user make a button for themselves? There are buttons all over the system. Users know how to interact with them. But the only consistently supported way to make a "real" button is to learn a full low-level language and all the dev tools and classes that go with it. That is not good design.


Shortcuts has really taken off in the last year or two on iOS and I think it provides the most user-friendly way of creating a button that automates something tedious. Certainly a lot of its popularity is owed to the ability to create custom Home Screen app icons with iOS 14, but it's great if that was the spark that encouraged people to explore how they can automate their devices.

Later this year Shortcuts will finally arrive on macOS as well—I bet it will get a lot more use than Automator or AppleScript does.


I agree with the sentiment, and indeed I’ve been disappointed with the rate of progress in this area by Apple, however they clearly are slowly moving in that direction.

But to me the question this raises is - do you think anyone has done better, and if so who?


> But to me the question this raises is - do you think anyone has done better, and if so who?

Yes. Apple in the past, for one. When they included Hypercard free with every Macintosh and also began integrating Applescript into the core of their OS, they were taking steps in the right direction.

Also, as you mentioned, Smalltalk and similar systems also had a good trajectory in this regard.

The incentives and visions of computing have changed drastically since the 80s.


Hmm. I don’t see HyperCard or Applescript as better than anything we have now. They are indicative of a direction, but neither were scalable. Shortcuts seems better than either of them.

I see Smalltalk as a great vision, but the fact it hasn’t gone further isn’t accidental.

It’s certainly not easier to build an app in Squeak, for example, than it is to use Xcode and build one using SwiftUI.

I am personally of the opinion that nobody has actually done better in reality, even if they have a preferable vision.


> I don’t see HyperCard or Applescript as better than anything we have now.

The way they worked holistically on their systems at the time and brought users into the fold was certainly better than what we have now. I don't know of anything at present that is equivalent to that relationship -- on OSX or any of the Unices, for that matter.

To be clear I'm not suggesting a Hypercard clone for the modern era, but more of the "spirit" of the thing. Regular users as authors is one part of that. Today all of computing seems geared towards users as consumers. To me the reasons for this are obvious.


I don’t see how there was anything holistic about HyperCard. It wasn’t really much different from Visual Basic except for the Card and Link metaphor, which was good for creating scaffolding without having to start coding first. Database builders like FileMaker or multimedia tools like Macromedia director, and Flash, are in the same family of tool. Even PowerPoint. There was certainly nothing specially integrated about it. It was just a tool.

The card metaphor was a good way to build quick interactive presentations, and that was better for a lot of tasks than breaking out a code editor, but that’s all it was.

AppleScript is a ridiculously awkward programming language, coupled with a cumbersome way for apps to publish an API, along with some simple coordination primitives.

You could imagine a great programming language and a simple mechanism for apps to expose their functionality, but this was not it.

> but more of the "spirit" of the thing.

Right. My point is that there is nothing these old technologies do that isn’t done way better today.

The spirit, I agree, is lost.

> Regular users as authors is one part of that.

I agree with this too.

> Today all of computing seems geared towards users as consumers.

Except for the giant stack of programming languages, creative tools, etc, all of which are vastly more end user programmable than anything from the HyperCard or even smalltalk era.

Think about blender, Pythonista, gnuradio, Swift playgrounds. Programmability is everywhere.

> To me the reasons for this are obvious.

That’s where we differ. I think there are enough people who want this thing that if it was that easy we’d have it by now.

We can argue that it’s not in Apple’s interest to make this thing (although I think that’s false, and they are trying as hard as anyone to make programming more accessible). Even if that were true, it doesn’t explain why things are no better on Linux.

The spirit of these things is some kind of ubiquitous and powerful and yet progressively accessible programmability and composability of the entire system.

That just isn’t something that they actually offered, even though they gestured towards it. It turns out to be a hard problem.


I don't want to drone on about this too much longer since we obviously disagree about some of the big points here (and agree about the "direction," which is more important anyway), but:

> I don’t see how there was anything holistic about HyperCard. It wasn’t really much different from Visual Basic except for the Card and Link metaphor, which was good for creating scaffolding without having to start coding first. Database builders like FileMaker or multimedia tools like Macromedia director, and Flash, are in the same family of tool. Even PowerPoint. There was certainly nothing specially integrated about it. It was just a tool.

If you look at, say, System 7 and the versions of HyperCard that ran on it, you'll see that this isn't true. One could control important "outer" functions of the whole operating system from within HyperCard using its own conception of the world, which I would say counts as special integration. The UI even looked quite similar to the rest of the system, making it "real". There was a kind of seamlessness there, and it came before things like PowerPoint etc.

Also it was more than just presentations. Non-"programmer" Mac users were building all sorts of things, from zines to point-of-sale systems for their local businesses. At one point in the early 1990s Apple estimated that there were 4 million authors creating their own stacks.

I definitely agree that today -- with the current systems we have and the environment that the companies who make them operate -- the problem is extra hard. My recurring thought on the matter is that we need to toss aside things like backward compatibility and software portability (ie, recreate a computing system from the ground up) in order to have what we are talking about. At the end of the day we are still in the world of C and Unix and I don't think we're going to find what we are looking for so long as that remains the case.


> One could control important "outer" functions of the whole operating system from within Hypercard using its own conception of the world, which I would say counts as special integration.

Ok - this sounds interesting, but what does it do that can’t be done by VB?

> The UI even looked quite similar to the rest of the system, making it "real".

This is an important quality, but absolutely one shared by VB and database builders.

> There was a kind of seamlessness there, and it came before things like PowerPoint etc.

Yes, that it was early and beloved is not in dispute. My point is that it’s not special beyond that, and nothing has been lost.

> Also it was more than just presentations.

Ok, but that straw-man’s the other tools I mentioned. I mentioned a bunch of things that go far beyond HyperCard in their programmability.

> Non-"programmer" Mac users were building all sorts of things, from zines to point-of-sale systems for their local businesses.

Zines are augmented presentations. Point of sale systems require programming, and are the canonical example of what database builders are used for today.

> At one point in the early 1990s Apple estimated that there were 4 million authors creating their own stacks.

Ok, but what point are you making with that?

Probably hundreds of millions of people have created a PowerPoint, but how many of them have programmed a behavior using the embedded basic?

It’s quite obvious that only a tiny fraction of those 4 million people did anything more than simple presentations.

> I definitely agree that today -- with the current systems we have and the environment that the companies who make them operate -- the problem is extra hard. My recurring thought on the matter is that we need to toss aside things like backward compatibility and software portability (ie, recreate a computing system from the ground up) in order to have what we are talking about.

I’m not sure about that because I can’t see what advantage that has over just building a VM that can leverage existing platform work, but I am open to being convinced.

However, what is not clear, and which nobody articulates, is how such a tabula-rasa would be different and not just dead end again.

> At the end of the day we are still in the world of C and Unix and I don't think we're going to find what we are looking for so long as that remains the case.

This isn’t clear to me. Given that the entire platform HyperCard ran on can be trivially emulated in a browser in JavaScript, C and Unix aren’t standing in the way of building something better.

Nobody being able to say what it would even look like is the real problem.


> neither were scalable

What do you mean by "scalable"?


From another reply:

> The spirit of these things is some kind of ubiquitous and powerful and yet progressively accessible programmability and composability of the entire system.

They gesture towards it but in practice are limited in how far they can get.


> > Now, perhaps OSX is more feature complete than Oberon, but certainly not by a factor of ~40 000X.

> I am not at all sure that this is true.

Well, the first 90% takes 90% of the time, and the last 10% takes the other 90% of the time. That is, the last 10% doubled the size of the project. And if you add another 10% to the scope...

That is, doubling the number of features much more than doubles the lines of code.

Well, does OSX have double the number of features of Oberon? Far, far more than double. It's almost laughable to even compare the two. So while Oberon is very tidy and efficient, the comparison is kind of misleading.


The main premise of the article seems to be:

'You cannot reduce the complexity of your problem by increasing the complexity of your language.'

Disagree; e.g. DSLs. Here "Simplicity" is born out of a large array of "Complicated Constructs" in some language. The size of the surface area encompassing the various constructs is directly proportional to the expression of the various possible models of computation, and thus to effective problem solving. This idea is what is behind the Mozart/Oz language (and of course C++ :-)


> 'one way is to make it so simple that there are obviously no deficiencies, and the other is to make it so complicated that there are no obvious deficiencies.'

I think this is also a premise of the article; I don't see that quote as "the main" premise, just another quote used to explore the overall points presented.


The article is trying to focus on "Simplicity" (in everything?) with Go being the exemplar according to the author. However, it confuses two major issues: Simplicity in the Language used for expression vs. Simplicity in the Solution of a Problem. They are two very different things, and the quote you list (by Tony Hoare) actually refers to the latter.

Simplicity in the solution of a problem is not to be argued against. This was the main thrust of Wirth's work (see also the book The School of Niklaus Wirth: The Art of Simplicity). But he presupposed that this is only possible by using a "Simple" language. This is what is debatable. I think with the experience that the community has gained now (in terms of volume of software written) we are realizing the need for "Complex" languages/libraries/frameworks.


    > Now, perhaps OSX is more feature complete than Oberon,
    > but certainly not by a factor of ~40 000X.
Not sure why the author is so certain of that. Isn't it conceivable that a modern industrial-grade OS is 40 000 times more feature-rich than its academic counterpart from 35 years ago?


No, it is not conceivable to me as a user. Though it could be for product managers creating release notes every month from thousands of resolved GitHub issues or JIRA tickets, listing features.


After a few decades away from programming I got back into it some and decided that Python was the easiest and closest to the languages I was fairly fluent in back in the old days. I did write a lot of Pascal code at one point and after going through the "give it a Go" link I can see the appeal. It looks like a good tool for some things. I look forward to working with it some more. Thanks for your article. I have always enjoyed Brooks and Wirth so that's some good "geek clickbait" that I can't resist!


Among the top popular languages (see https://redmonk.com/sogrady/2021/08/05/language-rankings-6-2...), only Go and Swift/Objective-C are statically typed and compile to native code. And Swift is very Apple-centric.

This has a huge impact on Go's popularity. As for the alternatives to Go:

- Java and the other JVM languages are heavyweight gorillas that need more resources and have no value types, and GraalVM Native Image is not a straightforward alternative.

- C# is a good alternative, but again JITted, and the open-source community has no counterpart to the Microsoft product ecosystem.

People chose Go as the default alternative to Java a few years ago.


I think you're discounting how much having Google backing you adds to popularity. D has static typing, compiles to native, works on every major platform, and has been around far longer, but doesn't have near the adoption of Go.


Same case for Nim, I really enjoy working with it.


Or Pascal, Wirth's most popular language.

It has a unique combination of features that I cannot find in any other language.

Unfortunately there is a disturbing lack of quality libraries. I have to write everything myself. Really everything. A few days ago I wrote my own string-to-int conversion.


Having actually tried using D, it has far more warts than Go.


.NET has had AOT support since its origin, although it only supports dynamic linking.

https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen...

Since Windows 8, .NET Native is also a thing, and then there are the other AOT toolchains like Xamarin and IL2CPP.

AOT compilation has been part of the Java world for those willing to pay for commercial JDKs, especially in the embedded domain.

In fact, the JIT caches with PGO optimization now available for free in HotSpot and OpenJ9 come from JRockit and IBM J9 respectively.


Did you forget about C++ and Rust?


No, I like Rust very much. But Rust requires me to dive deep into the language; it's not a "productive" language, it's a "correct from start to end" language. Languages with optional GC and value types, like Nim or D, are better for small hobby projects, but their communities, library ecosystems, and tooling are weak. On the other hand there are OCaml and F#, but OCaml on Windows is weak and its community is small, and F# inherits C#'s problems.


OK, but this:

> Among the top popular languages (see https://redmonk.com/sogrady/2021/08/05/language-rankings-6-2...), only Go and Swift/Objective-C are statically typed and compile to native code

is still false.


That's a fair point, but I don't find Rust really harder to use than Go for small hobby projects. Swift is nice as a language but has a huge lack of libraries.


You had me at "Deploy by running a single executable"


I'm a big advocate for simplicity. I think it's one of the most important guiding principles in software engineering. I don't care for Go mostly for reasons unrelated to its simplicity; I have respect for the latter, even for decisions like excluding generics. I think broadly that we need more simplicity-focused languages.

However, I find that simplicity's most vocal advocates often go off the deep end, dismissing extremely productive technologies and practices simply because they appear complex when viewed from a certain angle. To use the author's own quote:

> There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity.

Nothing is a cure-all. That includes a relentless focus on simplicity. If your language is minimal, it probably means you're going to have a lot more complexity in the code itself. Maybe that's worth it, maybe it's not. There's also no hard line between accidental and intrinsic complexity; the goal-posts shift when things get generalized at the language or library or service level. Both the complexity of the holistic system, and the complexity of what you and your team actually need (or don't need) to manage yourselves, have to be considered.

In short, the landscape is more complex (no pun intended) than the ideology represented here.


>> You have to completely comprehend your idea in order to fully realize it.

This is a gem.


Corollary: you will fully comprehend your idea once you have fully realized it :-).

My point is that iterative implementation often helps to better understand the problem, I think.


Indeed. Wirth's insistence on only adding to his language those things he fully understood was one of the things that inspired me to start writing a Ruby compiler (long dormant in an "almost self-compiling but wildly incomplete" state; I tinker with it every now and again, but judging by my notes I haven't had time since May). It struck me that fully understanding the true complexity of Ruby as a language required understanding how to implement it efficiently. I love using Ruby, but it's an absolute nightmare of a language to implement (and far worse to compile), and it has if anything driven home an appreciation for Wirth's insistence on simplicity.


The word “realize” has two meanings, one is “understand” and the other (perhaps less used), “make real” or “implement.”

(I would still disagree with the sentiment - better understanding often comes during implementation.)


> There is less magic, less hiding, which yields much, much greater clarity. No surprises, ‘it just works’.

I think the author of the following article would disagree: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...


Go values stability. But time passes and big changes do happen. Go modules. Generics.

I wonder how many of the same arguments were made about Java back in the day. Java got generics around 8 years after its 1.0 release in 1996. Go's getting them 9 years after its 1.0 release in 2012.

We should probably compare apples-to-apples. C++ has been around for 35+ years. Java has been around for 25 years since its 1.0 in 1996. Python for 26 years. C# for 20 years.

I wonder what people will think of Go in 10-20 years when its age matches more established ecosystems. Will devs be complaining about dealing with pre-generics code then?


It would be interesting to compare the success of languages by comparing those born in academia vs. those born in industry. For example, Python, I believe, can be said to have been born in academia, while Java was born in industry. Obviously, industry-born languages still borrow from academia, but what have been the extra ingredients that made them more successful (= more used) than "obscure" languages that remained in academia?


Too many leaps of faith and too many appeals to authority.

Wirth's quest for the one true small language led to an array of languages that only show that yes, a language can be too simple, and you end up with a system that is too complex because of the language. The ones I remember from a very long discussion a decade or so ago:

- the meaningless distinctions between functions and procedures

- the very many iterations on loop constructs (in some of his languages there are three loop constructs with different rules. For example, you can break early from only one of them)

IIRC in Oberon BlackBox the necessity for complexity is just swept under the rug where system modules are allowed operations that are not available to user modules (IIRC it was function overloading and exceptions? I could be wrong).

In many ways Go is following Wirth's path of confusing language complexity, problem complexity and solution complexity. Yes, you can have a wonderfully simple language with very few constructs. But if you have to write thousands of lines of boilerplate code to handle a problem, you haven't solved complexity. You've offloaded it onto the programmer.
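
Go's error handling is the stock example of that offloading; a minimal sketch (stepOne/stepTwo/stepThree are hypothetical):

    // The language offers no construct to abstract the repeated
    // check, so every call site carries it by hand.
    func pipeline() error {
        a, err := stepOne()
        if err != nil {
            return err
        }
        b, err := stepTwo(a)
        if err != nil {
            return err
        }
        return stepThree(b)
    }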

As for "OMG, Wirth wrote an OS in Oberon, he was onto something"... Menuet OS is written in assembly, fits on a floppy disk, and can boot near-instantly on a 200 Mhz processor [1]. Does this mean we should ditch all the bad complex languages and switch to assembly?

Though I agree: most of present-day software is extremely bloated and inefficient.

[1] https://en.wikipedia.org/wiki/MenuetOS


Blackbox / Component Pascal is not Wirth.

As for the distinction between functions and procedures, I halfway agree with you, with the caveat that having the compiler enforce that you don't try to use the return value from something that isn't producing one is pretty consistent with the static typing of those languages. If you don't like it, nothing stops you from declaring everything to be functions.

The iterations on loop constructs reflect trying to find the set that maximises clarity with the minimum amount of syntax. At the same time, the early-exit issue is complex: one of Wirth's guiding principles was to strip out things he was not sure had an unambiguously good implementation and/or that promoted bad practices. At the time of his earlier language designs, the issue of single exit vs. multiple exit was still hotly debated, and while mostly debated in terms of functions, it also matters for things like loops. When I studied CS, we got docked points on exercises if we didn't stick to single exit (which greatly annoyed me). The changes in Wirth's languages reflect changes in consensus on those issues.

I don't agree with all of Wirth's choices (or I'd have been an active Oberon user, rather than predominantly use Ruby), but in terms of the principles his languages are designed by his decisions are very much understandable. I wish more language designers had such a clear vision of what they wanted to achieve.

As for Menuet, Menuet is substantially larger (several times the number of lines of code) than the original Oberon in terms of source, so if anything it contributes to the point of just how compact Oberon is.


>Wirth's quest for one true small language led to an array of languages that only show that no, you can't have too simple of a language, or you will end up with a system that is too complex because of the language. The ones I remember from a very long discussion a decade or so ago:

>- the meaningless distinctions between functions and procedures

>- the very many iterations on loop constructs (in some of his languages there are three loop constructs with different rules. For example, you can break early from only one of them)

Arguably the problem here is that Wirth didn't go far enough. You can design a simpler version of, say, Modula-3 by unifying statements and expressions as in ML, replacing the multiple iteration constructs with recursion, and guaranteeing tail-call elimination.


One of the key requirements of Wirth in terms of language construction was chasing languages that were not just simple but that 1. could be implemented simply (e.g. if a feature would significantly complicate the compiler, chances are he'd reject it), 2. where the implementation of a language construct was relatively unambiguously a near-optimal way of implementing that construct (e.g. he'd strip out things that might well be good if he did not feel he knew of a clearly superior way of implementing a feature at the language level), and 3. could be read in a very straight-forward way.

You can make lots of changes that make for smaller and simpler languages, but Wirth was chasing a very specific set of goals that did not seek to minimise the syntax for its own sake, but to strike a balance between language simplicity, compiler simplicity and a rigidity that forces (his idea of) readability.

His languages make a lot more sense when evaluated in terms of those goals. Personally I admire his vision and execution a whole lot, even though I prefer to use a language (Ruby) that is an utter mess by Wirth's criteria.


> Arguably the problem here is Wirth didn't go far enough.

The quest for language simplicity in service of nothing is not actually useful.

There’s an entire class of esoteric languages called Turing tarpits, and their entire MO is to be so simple they’re theoretically universal while being practically unusable. Brainfuck is probably the most famous but by no means the best / worst.


Simplicity buys you a lot of things:

1) Ease of learning, which allows the community to grow.

2) Ease of implementation, which in turn allows you to have multiple implementations conforming to the same standard, which makes the language ecosystem more robust by removing implementation-specific bugs.

3) Ease of formal specification.

Obviously Brainfuck and other turing tarpits are esolangs and not meant for industrial use. There is however a golden mean that unites simplicity and pragmatism.


> Obviously Brainfuck and other turing tarpits are esolangs and not meant for industrial use.

You’re contradicting yourself here. Turing tarpits are literally the end point of the quest for simplicity, and are indeed easy to learn, easy to implement, and easy to formally specify.


...and what simplicity won't buy you, nothing else will.


Just a remark that Wirth had nothing to do with Modula-3.

The language grew out of Mesa/Cedar and Modula-2+, when some of the Xerox people went to the Olivetti Research Center.

At most Wirth might have given permission for the use of the Modula name.


I believe that it is very instructive for a programmer to spend some time studying the evolution of the many programming languages created by Wirth, from Algol W and PL/360 to Oberon.

Because all his languages are relatively simple it is feasible to understand why some features are included or not and why they are implemented in a certain way and not in another.

While all his languages are interesting, they also have various shortcomings that make their use difficult in many more complex applications.

The main importance of the languages designed by Wirth is not in what they have been used directly to do, but in their great influence over many other more widely used programming languages.

Frequently, the innovations made popular by Wirth languages had already been introduced much earlier in other programming languages, but nobody was aware of that until Wirth made them well known.

For example, Hoare introduced the "case" keyword, but with a syntax that was only a minimal improvement over the Algol "switch"; Wirth then introduced in Pascal (1970) the modern form of the "case" structure, with labeled alternatives (instead of selecting them by their ordinal position).

After that, practically all languages have included some variant of the labeled "case" (even when the older "switch" keyword was retained, like in C).
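
For readers who never met the Algol form: Algol 60's "switch" was essentially a list of jump targets selected by ordinal position, whereas the labeled form names the values directly, as in this Go sketch (assuming ch and kind are already declared):

    // Alternatives are selected by value (labels), not by their
    // position in a jump table.
    switch ch {
    case '0', '1':
        kind = "binary digit"
    case ' ', '\t':
        kind = "blank"
    default:
        kind = "other"
    }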

Nevertheless, it appears that neither Hoare nor Wirth nor anyone else was aware that McCarthy had introduced in LISP, already at the end of 1958, the special form "select", with a syntax that was practically identical to the Pascal labeled "case". The difference was that the LISP "select" had far fewer restrictions than the Pascal "case": the labels could be not only constants but also expressions, and "select" could also be used as an expression, not only as a statement as in Pascal.

There are also other examples like that, e.g. the modules of Modula have been much better known and influential than the modules of XEROX Mesa, which inspired Modula, and so on.

In any case the influence of Wirth over programming languages has been huge, even if many of the ideas propagated by him were not necessarily encountered for the first time in his work.


> "it has taken more than two weeks for the result to come back"

That sounds exceptionally long for that era.

I didn't easily find hard numbers. https://spiral.imperial.ac.uk/bitstream/10044/1/35672/2/Newl... (in a student environment in 1981) shows a turn-around from 20 minutes to 6 hours. See page 123 table 4.1.

Also in a student environment, early 1970s, https://ir.library.oregonstate.edu/downloads/8g84mp855 "Batch turnaround time averages somewhat less than a hour with a range of one-half to three hours."

At The Rand Corporation in 1974, "Normal batch turnaround for jobs run during the day had averaged 30 minutes for quite some time." - https://dl.acm.org/doi/pdf/10.1145/1500175.1500305

More generically, "Turnaround time may take minutes, hours, days or even more than a week before completed outputs are returned in response to job requests." - https://dl.acm.org/doi/pdf/10.1145/1468075.1468077 .

Two weeks therefore sounds very much like an outlier.

> "Meanwhile, another engineer programming in Smalltalk and Interlisp is writing and running their implementation directly against a system console."

Or BASIC, developed in 1964 alongside the Dartmouth Time-Sharing System. For example, by 1975, the HP 2000F was "the first minicomputer to offer time-shared BASIC", and it supported Fortran and other languages besides BASIC. https://en.wikipedia.org/wiki/HP_2100#HP_2000

Python is the modern BASIC.

Also, note that interactively starting a long job (eg, compiling a large FORTRAN program) only saves the manual batch submission overhead. The compilation might still take time to run, especially if it requires system resources that aren't currently available so is still put into a queue.

> Forget about a “10X programmer”, how about a “10 000X programmer”?

That is not supported by the research. See https://dl.acm.org/doi/pdf/10.1145/1468075.1468077 for one summary of several papers comparing batch vs. online computing.

The Grant-Sackman paper is the only one of those using professional programmers instead of students. The summary is "Time-sharing requires fewer man-hours to debug programs for highly experienced programers than a simulated batch system with a two-hour turnaround time." (19.3 hours instead of 31.2) and "Individual performance differences in 3 highly experienced group of programers are considerably larger than observed system differences between time-sharing and batch processing"


To add to your comment, the intro immediately reminded me that Forth was born even earlier than 1975, to solve the same punch card problem:

"The programming environment in the 50s was more severe than today. My source code filled 2 trays with punch cards. They had to be carried about to be put through machines, mostly by me [Chuck Moore, the inventor of Forth]. Compile took 30 minutes (just like C) but limited computer time meant one run per day, except maybe 3rd shift.

So I wrote this simple interpreter to read input cards and control the program. It also directed calculations. The five orbital elements each had an empirical equation to account for atmospheric drag and the non-spherical Earth. Thus I could compose different equations for the several satellites without re-compiling." [1]

I therefore take issue with the statement in TFA:

"This notion of reducing accidental complexity to the bare minimum is the key to a lot of our problems, and there is no greater champion of this principle than Niklaus Wirth."

As far as "bare minimum" goes, the true champion is Chuck Moore, no question asked. He eventually embedded the essence of his language in 144-cores chips [2]. The thing was designed by a CAD program he made with his language, too. This is actually the last chip of a series of "Forth chips" that did better than "Lisp machines" or the various similar "<language> chip", because a derivative of one of his designs, the RTX2010, has visited a comet not long ago.

[1] https://colorforth.github.io/HOPL.html [2] http://www.greenarraychips.com/index.html [3] https://en.wikipedia.org/wiki/RTX2010


> That is not supported by the research. See https://dl.acm.org/doi/pdf/10.1145/1468075.1468077 for one summary of several papers comparing batch vs. online computing.

Of course the 10 000X is skewed heavily by the presumption of a 2 week turnaround time.

But I also think people who haven't experienced anything close to batch programming don't appreciate how much you could do on paper. I used to bring printouts of code to school with me so I could work on my programs during free periods and breaks, and it was highly productive, though of course not comparable to sitting in front of the computer.


> Even cellphones are powerful enough to calculate every computation humanity had computed by the 20th century put together.

What does he mean? A single cellphone? Or all cellphones in the world combined?


A single cellphone.


It's unfortunate that few here will have used M2 in a commercial setting. Well-crafted M2 code led to feelings of deep satisfaction. It was like a codification of quality.


I want to know what the author or others think about Elixir. I recently started learning it and found it interesting, and totally different from what I already know.


OP links to my previous entry, 'Why Erlang'[0] near the end! :)

0. https://www.fredrikholmqvist.com/posts/why-erlang/


This article wants to portray Go as a "small" language, but the batteries-included standard library is definitely a part of the language, and Go is anything but a small language by this measure. It is not hard to have merely a large standard library (see Python), but a coherent one is difficult and requires a ton of effort -- with a large corporate backing in this case. By having a large and coherent standard library, Go effectively swept the complexity under the carpet. Which is fine while it lasts.


I'm not sure how the size of a standard library has any bearing on the size and perceived complexity of the language. To me, it's about how many language features/keywords/constructs etc. one must keep in their head to effectively write code, including writing the standard library.

I'd rather have a language with a very small core and an extensive set of libraries implemented with that core than one with a large core that tries to handle everything with features. There's something to be said about comprehensibility of libraries written in a language with a small and focused core, as well.


> To me, it's about how many language features/keywords/constructs etc. one must keep in their head to effectively write code, including writing the standard library.

This is only a partial measure. Imagine that you are working with a string. You must keep the basic properties of strings in your head: Unicode string, byte string, byte string with defined encodings, byte string that decodes as UTF-8 by default, null-terminated, C ABI compatibility, length-limited, can or cannot contain lone surrogate code points, mutable or immutable, ownership, thread safety, copy-on-write, tree-structured (e.g. ropes), locale dependent or independent, grapheme clusters and so on. These properties are not part of the language proper but are still something that occupies your consciousness and definitely relates to the complexity. And even more so if you want to do something with strings (we call these idioms, which are very important parts of the language that people don't normally perceive as such).


> You must keep the basic properties of strings in your head.

Actually I just call whatever the language's version of len, rest, first, strip, split etc. is and move on. Snark aside, when I'm using a language I don't keep the implementation details of its data structures in my head, I just use the provided API. I think the representation of data structures is a different discussion.

Maybe a more appropriate analogue would be how many features were used to implement a string library, rather than focusing on the details of how a string is represented in memory.

Do I need to be aware of 6 different potential ways to sequentially navigate the string? Is there a way to do it using a loop, iterator protocol, destructuring, pattern matching, coroutines, special string indexing syntax, etc? Or can I just use a simple, uniform consistent interface and build the library on top of that?


> [...] when I'm using a language I don't keep the implementation details of its data structures in my head, I just use the provided API. I think the representation of data structures is a different discussion.

The exact details of data structures used do not matter, but their implications should still be in your head. Depending on the implementation you may need a separate type for string builder, or can append efficiently only to the end, or can append efficiently to both ends but not in the middle, or can append or insert an arbitrary string at any position but everything takes log(n) time by default.

> Do I need to be aware of 6 different potential ways to sequentially navigate the string? Is there a way to do it using a loop, iterator protocol, destructuring, pattern matching, coroutines, special string indexing syntax, etc? Or can I just use a simple, uniform consistent interface and build the library on top of that?

There is nothing like a "simple, uniform consistent" interface for strings. Strings are conceptually a free monoid^W^W an array of string units with the following tendencies:

- The "string units" can be anything from bytes to UCS-2/UTF-16 units to code points (or Unicode scalar values if you don't like surrogate pairs) to grapheme clusters (whatever they are) to words to lines. Even worse, a single string may have to be accessible in multiple such units.

- Many common desired operations can be efficiently described as a linear scan across string units. There is a reason that regular expressions exist for strings but not for general arrays. (Regex-like tools for arrays would still be useful, but less so than for strings.)

- A slicing operation is very common and resulting slices generally do not have to be mutated (even though the original string itself can be mutable), suggesting an effective optimization.

As such there are multiple trade-offs in string interfaces across languages and there is hardly the single best answer.
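
Go, to take the language under discussion, illustrates the first point nicely: one string is already accessible in two units, and the immutability trade-off shows up as a separate builder type. A minimal runnable sketch:

    package main

    import (
        "fmt"
        "strings"
        "unicode/utf8"
    )

    func main() {
        s := "héllo"
        fmt.Println(len(s))                    // 6: indexing counts bytes
        fmt.Println(utf8.RuneCountInString(s)) // 5: counting runes (code points)
        for i, r := range s {
            fmt.Printf("%d %c\n", i, r) // range yields byte offset + rune
        }

        // Strings are immutable, so efficient appending goes through
        // a dedicated builder type.
        var b strings.Builder
        b.WriteString(s)
        b.WriteString("!")
        fmt.Println(b.String())
    }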


> By having a large and coherent standard library Go effectively swept the complexity under the carpet. Which is fine while it lasts.

Are you implying that it's a bad thing? Complex things that are frequently used got abstracted away so they can be easily reused - sounds great to me. Why wouldn't it last?


It is not necessarily bad, but a language with a large standard library isn't small. The article prominently features the benefits of small languages, which (I only partly agree, but notwithstanding) wouldn't apply to a language with a large standard library no matter the size of the core language. Therefore the article's final claim is invalid, whether it actually turns out to be true or not.


> If it worked ten years ago, it works now.

As long as it isn't a dependency management system. /s


Unfortunately, large parts of the IT industry thrive on excess complexity in software and languages. Which is probably why Wirth's languages went out of fashion at some point in the early 1990s.

As far as Go is concerned, it really started out well, but with generics it finally broke ties with Wirth's spirit because adding them to the language makes it unnecessarily bloated and will never reduce the complexity of your problems.



