Simplicity and the ideas Go left behind (sourcegraph.com)
79 points by calavera on Feb 24, 2015 | hide | past | favorite | 88 comments


One thing that I've appreciated now that I've spent a decent amount of time writing Go as an individual developer is that it's much easier to jump into open source code and make small improvements and fixes because the language is very "context-free." When you're reading code, the control flow is always spelled out, property accesses never magically invoke getters, and it's generally hard to make things too complicated. There are things that annoy me but the downside is almost always bounded.


Code readability is the original intent behind striving for simplicity.

Though I find some of the design choices a bit frustrating, e.g. the handling of generics and the lack of min/max, for each design choice I disagree with there are dozens of others I do agree with. Structural typing, built-in concurrency, a rich standard library, the syntax, the list goes on.

There is a certain brutal elegance to the way the Go standard library is designed; it is really easy to read and comprehend. It took me less than half an hour to make sense of how the net/http server was implemented.

This doesn't only apply to the standard library: because the language style is standardized--there are no coding styles, there is just a coding style--other libraries and programs are very easy to understand. If the program architecture is straightforward, no significant domain expertise is needed to follow the code.

Languages aren't supposed to be cargo cults, though some culture can be nice. Let's not kid ourselves: programming languages are means to an end. Go is pragmatic, it gets things done, it is a tool, and most importantly, it doesn't get in the way of design or architecture. You are free to build programs in any style or way you want.

Ultimately, languages are just expressions of grander designs that are far more important than arguments for and against a particular syntax or language feature.


This oversells golang's simplicity I think. Not a lot, but enough to rub me the wrong way a tiny bit.

---

> Go programs are built from just their source, which includes all the information needed to fully build the program.

Still have to deal with GOPATH, vendor your dependencies, and have everything a `go generate` comment wants to invoke. It's certainly better than makefiles, but it's hardly just the source.
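For the record, a `go generate` directive is just a specially formatted comment naming an external command; nothing in the toolchain runs it automatically, and the command must already be installed. A sketch (the `stringer` invocation is illustrative; stringer is a real tool from golang.org/x/tools):

```go
package main

import "fmt"

// `go generate` scans source files for //go:generate comments and runs the
// named command, which must be on the developer's PATH. The compiler
// ignores the directive entirely.

//go:generate stringer -type=State

type State int

const (
	Idle State = iota
	Running
	Done
)

func main() {
	// Until the generator has been run and its output committed,
	// State values are just integers with no String method.
	fmt.Println(int(Done))
}
```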

> C# is joined at the hip with Windows. Objective-C and Swift are for Apple. Java and Scala and Groovy might benefit from JVM bytecode and its independence… until you realize that Oracle isn’t interested in supporting Java on anything other than Intel hardware.

C# has Mono, you can use Objective-C with gcc, the JVM has a bajillion implementations. I suppose Swift is more or less unportable at the moment. 1 out of 4 ain't great.

> Go is helping pioneer a command-line renaissance that reintroduces a generation of programmers to the idea of writing tools that fit together like segments in a pipe (the original Unix philosophy).

This never went away. Heck we were going over this in college, which for me was in Scheme, Java, C, and C#.


> This oversells golang's simplicity I think. Not a lot, but enough to rub me the wrong way a tiny bit.

True. Go is a good language for server-side web stuff. That alone is enough to make it very useful. It is a good language for when you want to get server stuff done and it has to go fast. That's enough. It doesn't need to be overhyped.

> Go is helping pioneer a command-line renaissance that reintroduces a generation of programmers to the idea of writing tools that fit together like segments in a pipe (the original Unix philosophy).

That's inherently a batch processing approach. The "strings through a one-way pipe" approach was never that great. It's brittle. If anything goes wrong, the options are to abort the whole thing, or print a message to stderr which will probably be ignored.

Better ways to plug programs together are needed, but Go doesn't do much new in that direction. Today, plugging programs together probably involves a number of programs which use protocol buffers to communicate, running on, say, Amazon AWS. There's much work going on in tools for doing that. They aren't typically pipe-oriented.


> Better ways to plug programs together are needed

Yes! Maybe a language with direct support for software architecture, such as defining + (re-)using your own connectors.


I like the fact that the Go guys continue to avoid talking in detail about Rust and Clojure.

That tells me that they don't think they can win the comparison.


The core goals behind Rust and Clojure are very different from those behind Go. This article/presentation would not be appropriate for those languages. I don't think anyone would say that Rust or Clojure are simple languages, or that simplicity is a core goal for them. Rust's core goals are performance, memory safety, and lack of race conditions (AFAICT). And Clojure is a Lisp, which puts it in its own category, really.

It's like saying a jeep can't compare to a ferrari or a minivan - other than having 4 wheels, they're really not at all designed to do the same kinds of things... Sure, maybe driving to the corner store they're all pretty much the same, but which do you want in 8" of mud? Which do you want in a car chase on the highway? Which do you want to bring your 4 kids to soccer practice?


> I don't think anyone would say that .... Clojure is a simple language, or that simplicity is a core goal for it.

Good god you are so wrong.

Watch yourself some of Rich Hickey's trove of excellent presentations, including the one where he breaks down the detailed etymology of the word "simple" and how much he strives for that.

http://www.infoq.com/presentations/Simple-Made-Easy


> Rust's core goals are performance, memory safety, and lack of race conditions (AFAICT)

We usually formulate this as "memory safety without garbage collection," which has secondary implications on speed and concurrency, but yes. (Also, 'data races' rather than 'race conditions,' technically).


Thanks for clarifying :) Yes, I should be more careful about specifying data races vs. race conditions :)


Who can win against lisp anyways?

The reasons we aren't seeing more Lisp/Clojure are not technical/objective ones but attitude, social, and network factors - which are just as relevant, though.


> Still have to deal with GOPATH

Once, when you install Go.

Then for each project, put your stuff where you want and symlink from $GOPATH to where it is. Once per project.

There really is minimum hassle to integrate the recommended flow.


You also install runtimes once for other languages.

GOPATH isn't awful, but it's something more than "just source" as claimed in the article; which is my point. Go is simple, but not as simple as claimed.


note that symlinking is not actually the recommended flow ;) Just put everything in gopath.... I even put non-go code in my gopath... it's just a nice way to organize - by the VCS url.


Package users should not have to run "go generate". It's mainly for package writers to generate code and then distribute it.


And? Presumably some of these packages get updated, handed off, or are collaborative in the first place. Re-generating is part of continuing development.


Sure, but that's not specific to any language. Is it Go's fault that package writers work a certain way?


Go encourages working in that way, so yes, it is.

With other languages, you might do code generation through powerful macros (e.g. as rust has) or some other tooling which is not literally just "run a program in the user's path".


My point is that `go generate` requires more than just source, and thus isn't "simpler" in the manner claimed in the article.

I'm not judging the existence of `go generate` or its merits relative to some other environments. Except makefiles. Makefiles are worse.


Consider the following:

it is actually more difficult to code in a language that's simple

Why? Because a more feature-complete language allows you to ELIMINATE the concept of `nil` through an `Option` type.

An `Option` type is an `enum` that consists of either `Some(x)` or `None`. That means it is always checked. You can never accidentally use a value that is `None` because the type checker would not let you use `Option<T>` instead of `T` itself.

The code snippets on the websites are far from simple. You HAVE TO remember to do `if err != nil` in every single function. A more advanced type system would actually make this a requirement.
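The repetition being described, in a sketch (`parseSum` is a made-up helper, not from the article):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseSum adds two numeric strings. Every fallible call needs its own
// explicit check; nothing in the type system forces the caller to write one.
func parseSum(a, b string) (int, error) {
	x, err := strconv.Atoi(a)
	if err != nil {
		return 0, err
	}
	y, err := strconv.Atoi(b)
	if err != nil {
		return 0, err
	}
	return x + y, nil
}

func main() {
	n, err := parseSum("2", "3")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(n) // prints 5
}
```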

So what is more important, the simplicity of the language or lack of bugs?


Go depends on tools to check properties that the type system doesn't cover. See http://blog.golang.org/error-handling-and-go and https://github.com/kisielk/errcheck


Isn't this just layering another, non-standard type system on top of the language itself? After all, a type system is no more than a tool to check the correctness of programs. (And often, document the assumptions that the programmer made.)

I don't think this is a bad thing in and of itself - I like how languages like Python, Lisp, Javascript, and Erlang have been able to layer typesystems on top without building them into the language itself. But I wouldn't exactly hold it up as an example of simplicity, particularly since in those languages the community hasn't agreed on any one type system.


That's the answer for everything in the Go community. More ad-hoc tools to replace the type system. Have concurrency bugs? Use a tool that detects some types of data races. Have problems with errors? Have a tool that detects not checking for errors.
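To make the data-race case concrete: the race detector is opt-in tooling that instruments a build, not a static guarantee. A minimal sketch, shown here with the synchronization that keeps the detector quiet:

```go
package main

import (
	"fmt"
	"sync"
)

// count increments a shared counter from n goroutines. With the mutex the
// program is correct; delete the Lock/Unlock pair and `go run -race`
// reports a data race, while a plain `go run` may still print 100 by luck.
func count(n int) int {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		counter int
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(count(100)) // prints 100
}
```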

How is approaching every single problem with a different tool more simple than using the type system as the one tool for static checking? The Go ecosystem is creating an ad-hoc, informally-specified, bug-ridden, slow implementation of half of the type checker of Haskell.


The jury is out on whether, for large time frames and for large communities, having a centralized type system is better than having a decentralized set of independent tools.

I'm not very excited by GHC's model of language extensions, https://downloads.haskell.org/~ghc/6.12.2/docs/html/users_gu.... And in spite of the large number of Haskell extensions, there are still pragmatic niches that aren't covered, see for example Rust's encoding of memory management in the type system.

In a sense, GHC contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of the type checker of Coq ;) Which is to say that there are many flavors of sophisticated type systems, and it's not clear which flavor is most conducive for writing good software on a tight time budget.


One of the goals of the D programming language is to render add-on tools like 'lint' and 'coverity' redundant.


errcheck actually tries to give you what you get for free with a type system (if you have option types)


You and I have vastly different definitions of "free", I think.


Exactly. If simplicity were the most important thing, we'd all be writing code in Forth or some sort of macro assembler.


It's true that assembler is actually pretty simple. It's just that the simple instructions combine in hard to understand ways.


You could have said Lisp instead, and it's not hard to find people making that case.

It's a tradeoff between designing your own complex building blocks or having very complex tools with thousands of them ready-made.

I won't pretend to have an easy answer, but I do know I personally prefer when people err on the side of simpler tools.


This answer is slightly infuriating:

> Q: Lack of generic collection classes like in Java’s Guava library?
> A: For 90% of cases, slices and maps do what you need. For the other 10%, you might consider whether your package should own the logic of those special containers, instead of using an external package.

I read this as:

"We're not willing to put in the hard work on thinking of a decent generics implementation, despite decades of working solutions with a myriad of choices in tradeoffs, so you'll have to do the hard work of integrating a dozen different collections libraries and who knows how many different, after-the-fact, mediocre strategies to the generics issue, each slightly off, all of them conceptually incomplete somehow, and with the slight bugs that come from not having a well-trodden path for an essential component of most programming languages."


I am not a member of the Go team, just a contributor. I represent only my own opinions, please don't put words into my mouth.


This is my own personal opinion on core dev's position and no one else's, but to me generics and dependency management have always been the pink elephant in the room.

I can wait for features; I'm patient. But the downright refusal of Go's core team to even begin to address this issue is not just a technical problem, but a communications one. This is clearly big for a lot of people, and I have yet to see any serious responses aside from either "you domain model's wrong if you need generics" or "deal with it".

In contrast (just picking this language for its community outreach, not because of any perceived technical competition), Rust's core devs have been forthcoming about practically every objection they've gotten. Their answers are clear, detailed, not demeaning, and constructive. When they don't know, they're honest about it, and when answers are hard, they take the time to explain. But I have never heard of anyone being belittled for not understanding lifetimes or the borrow checker.

Yet it seems that somehow if I have a bone to pick with Go not being able to dispatch functions by argument or arity, or with how its simplicity ends up with codebases a lot of people would consider much more verbose than necessary, it's just that I don't "get it". The problem isn't even in the accusation; I just don't even receive examples or reasonable explanations, something I'm used to in related language discussions.

I was willing to give the language a pass on these things when it was just getting started and a lot of the classical, early product criticisms were abound. But it's gotten tiring; the number of unanswered questions is remarkable by now.


If you haven't found any "serious response", maybe it's because you haven't searched enough.

Here is a "clear, detailed, not demeaning, and constructive" answer of Ian Lance Taylor that addresses your concerns:

https://groups.google.com/d/msg/golang-nuts/smT_0BhHfBs/MWwG...

I would guess his answer will not suit you, but you can hardly argue the Go team ignores the issue or refuses to talk about it.


Several members of the Go team have put a lot of time into studying and prototyping various implementations of generics.

If you look through the mailing list archives you will find many emails from Ian Lance Taylor on the topic.


I'll be completely honest: it's one of the most BS reasons I've ever read in a technical discussion.

How can we talk about high performance when:

* The current "generics" mechanism (interface et al) does runtime introspection, which is just about as slow and unwieldy as it gets
* There is no option for pervasive, truly high-performance data structures since everything is a map (which comes with its own type parametrization as an exception to everything else), and if you don't like the hashing algorithm, tough luck.
* You have a garbage collector running in the background, which is barely tunable compared to the options other runtimes have

Talking about performance when it's convenient as an argument against generics but disregarding the other holes in the language is not reasonable, because I would say that the choice of implementation for native maps is probably far more important for high performance. Yet here we are, no one complaining.

So we can really discard the performance argument, thus there is now an ample, valid set of choices for generic programming, several of which don't go against the goals of fast compilation.

Speaking of fast compilation: pretty much everything is going to be faster than C++ templates, since the entire compilation chain in C++ is slow.


It would be unreasonable to have designed a generics implementation into Go 1 that did not cover the builtin polymorphic map, slice, and append. A simple set of orthogonal features is an important principle in Go.

For these, performance is most certainly critical.


He is not putting words in your mouth, he's stating his own interpretation.

No sane person would read "I read this as" and assume that this is in fact exactly what the speaker said, nor should any speaker assume "I read this as" is meant as a defamation or misquote.

In addition, you being a contributor not a core team member does not change what you said in the least. A language's design, even if it seems like there's a tight core, is ultimately decided in part by all contributors and the community around it as well.


The use of the opening phrase "We're not willing ... " implies that the OP interpreted my statements as a policy of the wider Go team.

I have no knowledge of any such policy and represent only myself on stage. You can think what you like about my statements, just don't generalise them to anyone else.


But isn't that effectively what the core team did indeed say? They just dismissed everyone's implementations of generics as having a downside, couldn't come up with anything better, then just left it at that. (Edit: This is just my knowledge from a while ago. There's some mailing list thread where every existing way to do generics is dismissed for one reason or another, and well, Go still doesn't have that one feature.)

Also, "Sans runtime"? When did that happen? Last I heard, Go had a fairly substantial runtime to it, making it unsuitable for many places, and not trivially possible to just link right into any old program. And without a runtime, it'd be hard to have a GC, eh?

I think you meant "Statically linked", and nothing at all about runtimes. Rather large difference. FWIW, you can statically link many things, including C#. Which is how C# is deployed e.g. in bestselling iOS apps (and running on iOS has gotta be far from being "tied to the hip" of Windows).

(As a comparison, Rust is actually what is generally meant by "sans runtime", as in you can just call right into a Rust function without setting up anything else (just don't like, call panic or something).)


> despite decades of working solutions

Which "working" solutions? Has anyone solved creating a type system that offers OO/inheritance, generics, mutability and isn't mind-bogglingly complex?

What is your definition for a "working" solution? I don't think any of the current languages we have fit this description, since all of them are capable of producing type errors that are way too complex to comprehend.

> with a myriad of choices in tradeoffs

Yeah, but how do you decide on which tradeoff to settle for? And why is the tradeoff to not engage in this mess not a just as valid one?

> so you'll have to do the hard work of integrating a dozen different collections libraries and who knows how many different, after-the-fact, mediocre strategies to the generics issue, each slightly off, all of them conceptually incomplete somehow

You're missing the point. Solving the generics issue for a specialized case, even if just on a library-level, is much easier and has much less impact on applications than solving the general case and forcing that solution onto every bit of code in that language.

I don't think the author is advocating using a general purpose 3rd-party library for containers, but rather writing your own special-purpose ones for the few cases where the on-board tools aren't enough.

> with the slight bugs that come from not having a well-trodden path

Localized bugs are easier to solve than the type errors common to that "well-trodden path" you talk about, which may span an entire application.

...not that there is any consensus on what exactly that "well-trodden path" is, since as you mentioned there's a myriad of tradeoffs.


> Which "working" solutions? Has anyone solved creating a type system that offers OO/inheritance, generics, mutability and isn't mind-bogglingly complex?

Pretty sure OCaml would match your definition of "not mind-bogglingly complex" as well as "type system that offers OO/inheritance, generics, mutability".


What would you want OO for? First-class functions + HM + typeclasses work fine.


Unification is not a substitute for semi-unification.


It is interesting that the OP thinks that static binaries are related to Go being born at Google. The thing is that Plan9 doesn't have shared libraries; all binaries are statically linked. And the reason for this is that Plan9 is a networked operating system: needing to load multiple files at runtime would severely harm startup time for a binary.

Run strace on a Linux binary and see the slew of "file not found" errors from syscalls looking for shared libs at startup, and then imagine each one of these taking place over a 9600 baud connection.

Good design realises benefits that authors never needed to consider.


Static binaries used to be common, the normal way of doing stuff. People tend to think that package management killed them, but it was actually glibc, which cannot make proper static binaries as it insists on dynamic functionality for some functions, such as name resolution. Now that we have Musl libc there may well be a revival of static binaries from C applications and the C-derived ecosystem.


As a demonstration I traced date(1) on CentOS. Here are the file accesses, if you are running diskless, each of these needs a round trip to the file server. (except the final 3, of course)

access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=54185, ...}) = 0
close(3) = 0
open("/lib64/librt.so.1", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@!\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=43880, ...}) = 0
close(3) = 0
open("/lib64/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\356\1\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1921176, ...}) = 0
close(3) = 0
open("/lib64/libpthread.so.0", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340]\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=142640, ...}) = 0
close(3) = 0
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=99158576, ...}) = 0
close(3) = 0
open("/etc/localtime", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\7\0\0\0\7\0\0\0\0"..., 4096) = 3661
lseek(3, -2338, SEEK_CUR) = 1323
read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\10\0\0\0\10\0\0\0\0"..., 4096) = 2338
close(3) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
write(1, "Tue Feb 24 08:57:58 GMT 2015\n", 29) = 29
close(1) = 0


Ok, here is strace of date(1), which is dynamically linked, on Alpine Linux which uses Musl libc not glibc.

execve("/bin/date", ["date"], [/* 16 vars */]) = 0
mprotect(0x7777dcd5a000, 4096, PROT_READ) = 0
mprotect(0xdc2d89a3000, 4096, PROT_READ) = 0
arch_prctl(ARCH_SET_FS, 0xdc2d89a4268) = 0
set_tid_address(0xdc2d89a4298) = 2439
clock_gettime(CLOCK_REALTIME, {1424769563, 611556639}) = 0
open("/etc/localtime", O_RDONLY|O_NONBLOCK|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
mmap(NULL, 118, PROT_READ, MAP_SHARED, 3, 0) = 0x7777dcd57000
close(3) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
writev(1, [{"Tue Feb 24 09:19:23 UTC 2015", 28}, {"\n", 1}], 2) = 29
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++


That is worlds apart.


OpenBSD's ktrace is similar

31144 date CALL readlink(0x3c001c16,0xcfbe7048,0x3f)
31144 date NAMI "/etc/malloc.conf"
31144 date RET readlink -1 errno 2 No such file or directory
31144 date CALL open(0x3c001967,0<O_RDONLY>)
31144 date NAMI "/etc/localtime"
31144 date RET open 3
31144 date CALL open(0xcfbde9d4,0<O_RDONLY>)
31144 date NAMI "/usr/share/zoneinfo/posixrules"
31144 date RET open 3


Yes, in many ways Musl is a BSD style libc for Linux (even down to the license). Add pkgsrc and it is pretty BSD-like.


Static binaries are bad design as soon as a library has a security flaw. Remember when there was a double free in zlib and Apple had to release a 1.3GB patch to update everything that linked it in statically - and even that only fixed the problem for Apple-official programs, not for anything the user had installed?


recompiling all of plan9 takes about 15 minutes


Well yeah, but plan9 has so little code because it doesn't really do anything. Recompiling any consumer-grade system takes much longer.


Actually recompiling Plan 9 takes about 60 seconds on my current Thinkpad.


A 15x speed up in a few short years, how delightful.


So how would one implement Erlang-like functionality in Plan9, where the code of a server can be hotswapped while still keeping all existing sockets to the clients open?


fork, I guess. Plan9 uses files so there is nothing special about sockets.


> until you realize that Oracle isn’t interested in supporting Java on anything other than Intel hardware.

Oracle does support the JVM on embedded hardware. It's right here: http://www.oracle.com/technetwork/java/embedded/embedded-se/...

Also since when was Oracle the only JVM vendor. There are plenty of others that support different hardware: http://en.wikipedia.org/wiki/List_of_Java_virtual_machines

And finally since when has the choice been between slow, interpreted languages and fast, compiled ones ? Plenty of options exist in the space between.


I'd argue that in most cases "easy > simple" with the definitions of Rich Hickey. Easy means you understand it quickly. Simple means it uses few concepts.

Go doesn't want to use the concept of generics. However, if your code uses "List<IP>" instead of "List", it is easier to understand, because it additionally tells you it is about IPs. Python is a language which tries to be easy by resembling pseudo code.

If you really want simple, you could as well use SML, TCL, or Lua.


Indeed, Lua came to mind with his quote: "Dave can’t think of any language in his lifetime that didn’t start out with simplicity as a core goal. Yet he can’t think of any language in his lifetime that didn’t eventually become more complex and “powerful”." Lua has stayed simple and even removed features.


> If you really want simple, you could as well use SML, TCL, or Lua.

And if you truly want simple, you use Smalltalk, Scheme or Forth.


I dunno about Smalltalk, but Scheme and Forth start off simple until a programmer writes tens of thousands of lines of code, at which point it gets harder to read and follow the code.


No matter how much convoluted code you write, it is still a "simple language". Your code, however, is not necessarily simple or easy.


That tends to be the flip side of the "simple language".

And at least the 3 languages quoted are so simple that they must provide the tools for building new abstractions (which are the tools for building the language in the first place), so you can cut down on code by building a reusable toolbox of abstractions.

Go is complex enough that they can get away without that, and even get praised for pushing the complexity to userland code and providing no way for users to manage that complexity.


> Go doesn't want to use the concept of generics.

How many times do we have to read this... It's not that Go doesn't want to use generics, it's that the use of generics doesn't come for free.

For generics to be introduced, the gains have to outweigh the costs. It seems we're not there yet (I'll be honest, I didn't follow all the arguments closely).


> He currently works at Canonical, where part of his work involves porting Go to ARM 64.

That's interesting. Has Canonical stated what their interest in Go is?


Go is an open source project, Canonical sponsor Go in part by paying my salary. Canonical have been building products in Go for three years.


Probably something to do with Docker...


IIRC, Canonical joined the Go community around 2010/2011, before Docker had been created. They are actually one of the early adopters of Go. Some major projects from Canonical using Go are juju[0], mgo[1], etc.

0. https://juju.ubuntu.com/

1. https://labix.org/mgo

Edit: format.


Also lxd https://github.com/lxc/lxd

and in the course of juju, canonical wrote client libraries for quite a few clouds as they didn't exist at the time.

openstack -> https://github.com/go-goose/goose (imho the best one out there.)

goamz -> http://github.com/go-amz/go-amz (many forks of this one, aws is going to use a different auto gen'd one for forthcoming official go sdk)

azure -> https://launchpad.net/gwacl (imho the best one out there.)


Also go-qml


Interesting.

Perhaps Go becomes the language for Ubuntu Phone apps ?


I believe that is an intended purpose of go-qml, yes.


I was using Go recently and I ran into some simplicity issues. It is not straightforward to create a map of net.IPs or manipulate netmasks. You will have to copy to and from a separate array/integer.


That is not a simplicity issue.

"Something that is simple may take longer to write and might be more verbose" -- from the article.

In a simple language, there should not be functions that do exactly what you want to do; that would be a sign of the language being complex and featureful, which are enemies of simple.

(and yes, this is a bit tongue-in-cheek)


Even in a simple language, simple things should be simple. Granted, it's easy to make complex things sound simple, but the example given seems simple.


net.IP is implemented as a []byte. Byte slices are not valid keys (you can't use the "==" operator).

An alternative would be to use [16]byte as your map keys and then subslice the array for net.IP.


Why isn't a byte slice a valid key? If anything, there should be a pretty straightforward equality check on an array of bytes.


A slice is not an array of bytes. That's exactly the point.

See http://blog.golang.org/go-slices-usage-and-internals to get a basic understanding


OK so I've read it. Slices sound exactly like one would guess, using the word from other languages. A view into an array.

So exactly why are they not suitable for equality? Even that article starts with "Slices are analogous to arrays in other languages". Please elaborate on this basic thing.


For an array (which are value types in go) you have the obvious element-wise equality.

For slices? Also element-wise? Even with different capacities? What if they refer to the same window in the underlying array? Is there a need to copy the slice when inserting it into the map? Probably yes, because otherwise you could mutate the key from outside. But then it would be inconsistent with assignment (slices are reference types).

I have no idea how this could be done concisely.


Yes, element wise, why anything else? Different lengths are not equal; if capacity is externally visible, then that needs to be part of the compare.

As far as the whole ownership issue, that's a bit more than just ==, isn't it? Equality was the only thing I was questioning.


My thoughts exactly


Congratulations. Welcome to why you don't use Go.

Lack of generic access to data structures is one of their bigger fails.

However, they don't see it that way. One point of Go was to prevent needing to describe things before being able to compile it. Most things that people regard as "failures" in Go were deliberate choices to enable large codebases.


No, that's not why you don't use go. What the parent comment is dealing with is the fact that you can't have maps keyed by a net.IP, which is implemented as a []byte. Byte slices are not valid keys. This isn't really about generics.


It is very much about polymorphism and generics.


Would we be better off if the big Apache projects were written in Go instead of Java? Would that have been harder (starting today, say)?



