Well, it's good to have a hard-compiled language that's (almost) memory safe. Three problems with Go:
- The Go mantra is "share by communicating, not by sharing". Then look at all the thread examples in "Effective Go". They all share memory, while trying to construct locks using message passing. Multi-threaded Go programs are not memory-safe. That's why Google won't let you use them on their AppEngine. Compare Erlang, which takes message passing seriously.
- The lack of exceptions is resulting in hacks using the "panic" mechanism to create an exception mechanism. This is where we were with "longjmp" in C. I know someone at Google who has constructed a language on top of Go mostly to deal with exceptions.
- The lack of generics is resulting in hacks using the reflection mechanism to create generics. This is painful and slow. Go has generics for built-in objects; channels and maps are parameterized types, so there's already syntax for instantiating a parameterized type. Extending that to user-defined types would not be too bad. Fear of the C++ template mess seems to have been the problem.
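To make the panic complaint concrete, here is a minimal sketch (not from any real codebase; names are made up) of the pattern being criticized: recover in a deferred function turning a panic back into an ordinary value, which is essentially a home-grown exception mechanism.

```go
package main

import "fmt"

// div panics on a zero divisor, standing in for code that "throws".
func div(a, b int) int {
	if b == 0 {
		panic("division by zero")
	}
	return a / b
}

// safeDiv uses recover in a deferred function to convert the panic
// back into an error value -- the exception-like control flow the
// comment above is describing.
func safeDiv(a, b int) (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return div(a, b), nil
}

func main() {
	if _, err := safeDiv(1, 0); err != nil {
		fmt.Println(err) // recovered: division by zero
	}
}
```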
Rust addresses all of these problems. It's memory safe through compile-time reference counting and borrow checking (not garbage collected), and it has exceptions and generics. It also has a more powerful type system with type inference.
Having tried out both Go and Rust, I don't see any reason to prefer Go, at least once Rust has a 1.0 release, which is supposed to happen in the next 2-3 months.
Both languages were designed to replace C++, but only Rust has the features to actually succeed at that, IMHO.
AFAIU Go was designed to replace C++/Java/Python as _the_ language to use at Google, not to replace those languages in general. Which is why there is so much focus on simplicity, tooling, and productivity (like compile speed and easy deployment).
Criticizing Go for not having enough features is missing the whole point: Go was created in part because C++ has too many features.
To my understanding, it was built for systems creators, not for programmers; it was built as a go-to language for people who have to write programs that are both efficient and simple to write and maintain. OTOH, I do see Rust as a C++ replacement, which makes it complementary to Go's focus (sadly, it seems that D is lost in the middle, though...)
This is the feeling I get. I don't write much Go (I'm just now getting into it), but all of the major features I hear about seem to be directed at creating code and projects that are tractable at "Google scale". Everything has been designed for massive teams, from fast compile times and a standard formatting scheme to choosing syntactically insignificant whitespace (specifically, the podcast mentioned finding instances where Python snippets were embedded in another language somewhere, which would shoot the whitespace to crap).
The language being dead simple also seems to be to that end, with things like removing the ternary operator or not having pre-increment being smaller sacrifices, and no generics being a larger one.
All in all it seems like the underlying philosophy of Go is focused on Google specific context and engineering goals, and it's a happy accident when they work out well for programmers in the general case, whereas something like C++ or Rust isn't catering to those specific parameters (though they're obviously capable of working in that role, perhaps just not in the "ideal" way that the Go language maintainers would prefer).
Go would be fantastic for microservices or targeted applications. But features like exceptions are critical when you have 100+ developers spread across the world working on the one codebase and you want to ensure consistent error handling behaviour across the application.
Well, Google's C++ Style Guide[1] disallows exceptions, so it makes sense that they wouldn't put exceptions in Go either.
Come to think of it, Go does seem pretty similar to the subset of C++ that Google allows their developers to use. If you think of it that way, it starts to make sense to think of Go as an intended replacement for C++ at Google.
People love to say this, but the observation falls apart quickly under scrutiny. People working in Java are doing so to retain access to the JVM and to the zillion libraries built on it. The JVM with its ecosystem is a colossal asset for Java; one of the greatest in all of commercial software development. (I say this as someone who does not enjoy Java).
Meanwhile: Golang neatly fills in a sweet spot just "below" Python and Ruby, where finer-grained control of memory and more predictable performance characteristics are required, but bare-metal performance isn't. That sweet spot actually describes a huge fraction of all the use cases for C/C++ in Internet software.
So this is a meme I'd like to see die. It somehow manages to get Golang, Java, and C++ wrong all at the same time. The only way to make it worse would be to work in some kind of Lisp comparison, based on Golang's parsimony with parentheses.
That observation is based on history. COBOL used to reign where Java sits now. The nature of corporations is changing, for the last nearly 20 years Java has enabled corporates to capitalise on investments in new hardware at a greater pace than COBOL did.
Golang is the new Java; you are watching it being born. It's not ready for the corporate world, and it's not on any large organisation's radar (apart from its creator's, of course). The point is that Golang solves the problems of the future: it offers parallel execution to normal programmers.
In the future, banks and large companies are going to stop having data centres of their own, and CPUs are going to have 32+ cores. Java does not solve those problems natively.
Java is great, it's got 40+ years of life left in it, but don't say that Java is strong because of its ecosystem. Ecosystems change from decade to decade.
Anyone who thinks Go will ever be a replacement for Java is frankly clueless about enterprise software development. Go has almost non-existent integration with enterprise systems, e.g. SAP, Hadoop. It lacks operational management capabilities, e.g. JMX. And seriously, the range of libraries on the JVM covers pretty much everything, e.g. banking/finance use cases.
And concerningly there is not a single reasonably sized Go project to get an understanding of how it works with 20, 50, 100 or 500+ developers working on the same codebase.
And I believe there are a number of projects internally within Google, but they tend not to talk about internal product technologies (at least not about how large they are).
No, Docker is not fairly large. It's tiny. 1M-10M LOC is the size of most of the codebases I've worked on.
Most people simply don't have an understanding of enterprise applications. The majority of the apps are single codebase, Servlet/Spring type monstrosities. They are 10+ years old and have had hundreds of contract developers who come in, add a few new features and then go onto the next contract.
And my point is that I am yet to see what Go would be like in these situations.
It seems a little unfair to ding Golang for not providing evidence of something Golang was designed to avoid. "Where's all the boilerplate? Where are all the 10 layers of XML mapping? What would an AbstractStrategyFactory even look like?"
If "enterprise applications" means over-architected monstrosities filled with boilerplate code, a hundred levels of indirection, and horrible XML configuration files, then thank god Go doesn't encourage that.
I think when people claim that Go is a replacement for Java, what they really mean is "Go has similar performance characteristics as the JVM and also doesn't require manual memory management".
Anyone who has ever worked in enterprise software can tell you, Go is (currently) exceptionally poorly suited for that style of development. I don't think that is an accident though. I think there is at least the implication that enterprise software development models and architectures are flawed at the core. Go seems to be designed from the start to prevent you from developing that way.
I may be reading more into the Go culture than I should, and I certainly don't think that there is proof that enterprise software can't be successful, but Go clearly steers you into building small, self contained servers that do one thing well and can go without change for a long time.
> there is not a single reasonably sized Go project
I'm not sure if I misunderstood your statement, but Go has Docker, CoreOS and numerous other projects which have more than 20 developers working on the same codebase.
Docker and CoreOS are tiny compared to many enterprise Java apps. And they don't have 20 full time, active developers all contributing at the same time (at least not from the commit log).
Take for example eBay or Amazon and the huge array of different use cases they support. Those are the typical sized Java applications we are talking about. Insurance, Banking, Finance etc. Legacy integration, payments, reporting, various web front ends. One codebase.
For what it's worth, a pristine checkout of Eucalyptus[1], that has parts written in Java, appears to be around 430k loc of java (comments and white space included).
Also, OT, didn't realize Eucalyptus had worked so much towards friendly development, and was moving towards including RiakCS as a S3 work-a-like backend[2].
What I've heard from a friend who's an SDE there is that because they have a service-oriented architecture, the individual teams can use just about whatever language they feel is best for the particular job. The other teams don't care because they're just making service calls. I'm sure that was mostly Perl in the 90s, but today it is mostly Java, because their HR strategy is basically to hoover up all the fresh CS grads, whose strongest language is almost always Java. And also because when Amazon started building their SOA, Java/XML/SOAP was all the rage.
Perhaps, but the problem is, with no generics or exceptions, it might just be too low-level and not expressive enough to really be a suitable Java replacement.
Yeah, I just see Go as an alternative for web technologies, whereas Rust is much more system-level focused. I could be wrong, but that seems to be the direction I see with Rust.
We could not have handled the load at Microcorruption, emulating CPU state for thousands of concurrent users, using Ruby --- which remains the front-end language for the site.
We could easily have done so in Java, but we'd be in Java's concurrency model, which is much more difficult to reason about.
I think this is exactly what people mean when they say Go is more of a replacement for Java than it is a replacement for C++: In the sense that when Ruby and Python become too slow, people normally reach for Java, but now they have Go as an alternative option, with a better concurrency model than Java's.
We never even considered writing it in Java. We wanted something with the performance characteristics of C, which is what we would have used had we not had Golang.
I'm starting to use Go as an arguably better Node. Very fast, easy (pseudo)concurrency, relatively nice HTTP capabilities in the stdlib. It wouldn't be my first choice for building a full library, but it works well as an internal component in a service oriented architecture or a background worker for a queuing system.
Better language? Not really. If you want types, TypeScript is a much better option. With node+typescript+react you can get end-to-end typechecked code with no FFI involved - that's really, really cool.
Unless/until Go gets generics, I stand by my statement. A horrible type system that severely limits expressiveness is worse than no type system. Even TypeScript's type system is miles ahead compared to the one in Go.
All the things Node is designed to do, it does at least as fast as Go.
We can argue that the Go concurrency model is better. I think Node.js's is good enough, and easier to understand and deploy.
If your goal is to write lightweight web servers that do a specific task, nothing beats Node in my opinion.
Why do you say Rust has exceptions? We have fail!(), which does roughly the same thing as Go's panic(). So much so that we decided to rename it to panic!().
People always forget that sane language choice isn't ticking checkboxes on lists of features. If you actually program in said languages, you'll learn to appreciate maintainability, the time it takes you to get something right and the general pleasantness doing so, which is where Go shines (but you'll have to actually use it to see that).
Perhaps you'll be one of the 5% users who will actually miss generics or cannot work without exceptions (even though Go's error handling makes perfect sense without them, it's just a different approach), but it's not very likely.
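For readers unfamiliar with the "different approach" mentioned here, a minimal sketch of Go's explicit error-value style (the function and values are made up for illustration): errors are ordinary return values checked at the call site rather than exceptions thrown up the stack.

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort is a made-up example of the explicit error-value style:
// every failure mode is returned as a value the caller must inspect.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("bad port %q: %v", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("listening on", p)
	}
	// The error is handled right here, locally, instead of unwinding
	// the stack to some distant catch block.
	if _, err := parsePort("http"); err != nil {
		fmt.Println("error:", err)
	}
}
```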
And you seem to forget that exceptions, generics etc all affect maintainability, time to get something right and general pleasantness. Those "list of features" are pretty important for many people.
Go is basically Java 1.0 + Quasar. At some point Go will "grow up" and start adding these features and people like you will fall by the wayside.
Even if Go added exceptions or generics, people would still complain because something would be different from how they expected generics or exceptions to be.
If you want a specific language, then just use that one, but don't try to impose your expectations on other languages.
> The Go mantra is "share by communicating, not by sharing". Then look at all the thread examples in "Effective Go". They all share memory,
That mantra is spoken in the context of concurrent code, not as a general axiom of the language. And I think it's remarkably well sustained in all of the concurrency examples I've seen so far. Can you be a bit more specific?
> The lack of exceptions is resulting in hacks using the "panic" mechanism to create an exception
> The lack of generics is resulting in hacks using the reflection mechanism to create generics.
At the risk of making a No True Scotsman fallacy, idiomatic Go code does neither of these things.
All that's sent over the channels is a flag message to start the process and report completion.
Yes, there's the fanboy answer that they're not "really sharing" because both threads don't access the list at the same time. I've heard that excuse. The threads are sharing data. Deal with it. Locking is (hopefully) provided by abusing the channel mechanism to simulate a semaphore. The channel mechanism isn't doing anything here that a semaphore couldn't do better.
Actually, the Effective Go page needs to be updated. They also comment next to `c <- 1` that the value does not matter, where a struct{} channel would be more idiomatic.
But still the Go mantra is valid, the means (channels) provided by the language allow for sharing by communicating and IT IS actually considered idiomatic where appropriate.
Also, your points about exceptions and generics are invalid, or at least very specific cases. I am working on backend stuff (mostly command-line tools) and I never felt any "lack" of them.
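A minimal sketch of the idiom under discussion: the goroutine shares the slice and the running sum with its caller, and the struct{} channel carries nothing but the completion signal (the "value doesn't matter" case mentioned above).

```go
package main

import "fmt"

// sumInBackground shares both data and sum with the worker goroutine;
// the struct{} channel carries no data at all, only the signal that
// the worker is done. The channel close happens-before the receive,
// so reading sum afterwards is safe.
func sumInBackground(data []int) int {
	done := make(chan struct{})
	sum := 0
	go func() {
		for _, v := range data {
			sum += v
		}
		close(done)
	}()
	<-done // wait for completion; this is the only "message"
	return sum
}

func main() {
	fmt.Println(sumInBackground([]int{1, 2, 3, 4})) // 10
}
```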
From what I've seen lately, Golang programs take up too much disk space to replace small C utilities that are used in lightweight or resource-constrained environments. Memory usage also tends to be much higher, but that may be less of an issue since it can be optimized through careful profiling. I haven't seen any way to really optimize disk usage yet...
I hope I'm wrong though. It'd be exciting to see Golang equivalents of optimized C utilities that come close to using the same disk and memory. Or maybe there's an embedded golang compiler in development that does a good job of optimizing for these use cases...?
I don't see how that's all surprising. I'd be surprised if a Go program without the explicit goal of being tiny managed to compete on disk size and similar metrics with something like Busybox, which has the explicit goal of being tiny.
Thankfully the go language designers were more focused on practicality than purity. Shared nothing concurrency is nice in theory, but it is so slow by comparison to just sharing memory between threads. But then I write high performance servers and concurrent data structures using lock free algorithms (including in go), where even a mutex is a luxury, so maybe the problems you work on are very different to what I work on. But I'm very happy go can accommodate the evil things I do in the name of performance.
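As a toy illustration of the shared-memory style described here (nothing like a real high-performance server, but the same shape): several goroutines hammer the same word of memory, coordinated by an atomic add rather than a mutex or a channel.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// count increments one shared int64 from several goroutines using
// sync/atomic -- shared memory with no mutex and no channels.
func count(workers, perWorker int) int64 {
	var n int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < perWorker; j++ {
				atomic.AddInt64(&n, 1)
			}
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&n)
}

func main() {
	fmt.Println(count(8, 1000)) // 8000
}
```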
Fear of copying overhead can introduce more performance problems than copying overhead. Modern CPUs are really good at copying, which, after all, is completely parallelizable. If you just created some data, and then pass it to something else by copying it, and it's immediately used there, it will probably still be in the fastest level of cache. At least if the message passing and CPU dispatching are properly connected.
QNX gets this. Almost nobody else does. The reason for having subroutine-like IPC, rather than "send on channel A, then wait on channel B for reply", is that the scheduler can immediately transfer control from sender to receiver. If two unidirectional channels are used, you have the sender and receiver threads both in ready-to-run state, which means a pass through the scheduler for somebody, and possibly a handoff to another CPU, with all the attendant cache misses.
A good test of an IPC system is to have one thread calling another as a service, with control going back and forth rapidly, while other threads are compute-bound.
If the presence of compute-bound threads kills IPC performance, it was done wrong.
Single-threaded Go programs also run faster for some (many?) cases. The runtime is able to bypass a number of locks when running in a single-threaded mode. And depending on the profile of your code, turning on multiple threads for a Go program can slow it down even if you use channels (as moving data from one CPU core to another can be much slower than just doing a context switch in a single core).
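The request/reply test described above can be sketched as a pair of channels. Whether GOMAXPROCS(1) actually helps depends on the workload and the Go version, so treat this purely as the shape of the test, not a benchmark:

```go
package main

import (
	"fmt"
	"runtime"
)

// pingPong makes rounds of "service calls" over a req/rep channel
// pair, the back-and-forth pattern discussed above. Under
// GOMAXPROCS(1) each send can hand control directly to the other
// goroutine on the same core instead of bouncing between CPUs.
func pingPong(rounds int) int {
	req := make(chan int)
	rep := make(chan int)

	go func() { // the "service" goroutine
		for n := range req {
			rep <- n * 2
		}
	}()

	total := 0
	for i := 0; i < rounds; i++ {
		req <- i       // call the service
		total += <-rep // wait for the reply
	}
	close(req)
	return total
}

func main() {
	runtime.GOMAXPROCS(1)
	fmt.Println(pingPong(1000)) // 2 * (0+1+...+999) = 999000
}
```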
The ability to use type parameterization to create a type with one or more "holes" in it that are filled in at declaration time. For example, a type that is a List of ints would often have almost exactly the same code as a type that is a List of strings. But without generics, you have to write it twice (or copy/paste, use code generation, etc.). With generics, you say "List<T>", and the compiler makes it work for both List<int> and List<string> (and pretty much any other type you substitute for T). (There are various ways the compiler can make this work under the hood.)
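For what it's worth, Go eventually added exactly this in Go 1.18, years after this thread; a sketch of the "type with a hole" idea in that later syntax:

```go
package main

import "fmt"

// List[T] is a type with a "hole" T that is filled in at use: one
// definition serves ints, strings, and any other element type. (Go
// gained this syntax in 1.18; at the time of this discussion only
// maps, slices, and channels were parameterized.)
type List[T any] struct {
	items []T
}

func (l *List[T]) Push(v T) { l.items = append(l.items, v) }
func (l *List[T]) Len() int { return len(l.items) }

func main() {
	var ints List[int] // the compiler fills the hole with int
	ints.Push(1)
	ints.Push(2)

	var strs List[string] // ...and here with string
	strs.Push("a")

	fmt.Println(ints.Len(), strs.Len()) // 2 1
}
```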
Reading this I feel Go is boring, and that's an asset. Let me explain.
See [what's happening in Haskell (GHC)](https://www.haskell.org/pipermail/ghc-devs/2014-October/0065...), which is soooo much more exciting; but then I totally understand that "exciting" is what you want to stay away from in some cases. In those cases, Go is a much better choice, I guess.
I often describe Go as being boring as a positive asset. For example, when people ask what makes Go special, I say "it's not better than any specific language at specific things, but generally better than most languages at general things." Which is extremely dull, but most of the time you do just want something straightforward, stable, and ordinary for normal development projects.
So I think Go is an extremely dull language - but personally I think that's what makes it so good.
I tend to think like that about C#; the thing is, I feel there are already many dull, private-company-managed programming languages out there, so I'm not curious enough to use it in real life.
I wrote a couple of large projects in Go when 1.0 came out. Since then I've kinda kept up with the language, but not very closely. When I notice there's a new version, I recompile my existing projects with it, and they tend to speed up a bit and use a bit less memory, but they have never broken. I love boring. :D
Q: There are several dependency management tools in the wild: godep, gpm, etc. Are there any plans to provide this functionality in the core?
Brad Fitzpatrick: We don’t want to dictate a policy, so we hope the community fights it out and a victor emerges. Then maybe we’ll bless that one. Then if everyone likes it and it has been stable for a couple of years, maybe we’ll add it to the core.
Brad Fitzpatrick: Part of the reason why we don’t care as much about dependency management inside Google is that we don’t use the go tool inside Google.
Andrew Gerrand: The lack of versioning built into the Go tool incentivizes library authors to provide good, stable APIs.
Are you kidding me??? So if some invariant behavior of an otherwise stable API changes, I don't know until runtime that a bunch of code I shipped, haven't changed, and then recompiled a month later has changed its behavior?
So, uh, why is the go-supported option encouraging me to download it from github at compile time without so much as crude versioning support?
Lots of people run internal Maven repos, they have every dependency vendored, but they also want the ability to choose between versions of that dependency.
Sure, but manual vendoring is just one (tedious, error-prone) means to that end. Any sane package manager with something akin to lockfiles gives you the same guarantee without all of the headaches of vendoring (and unvendoring when no longer needed!) all of your transitive dependencies.
If "vendoring" precludes one from using a dependency installed on the system, then this practice is static linking all over again. Static linking has its place, but dynamic linking with sane versioning solves many problems.
People like Go because you have a single executable to drop on a box, no worrying about what version of what libraries are installed. Though, the Go team realizes there is a need for dynamic linked libraries, so they at least have a proposal for it[0].
I've been following that proposal, and I'm a bit surprised that it isn't mentioned on the roadmap. Looks like dynamic linked libraries will come sometime in 2.x land?
Sure, but that's orthogonal to what's being discussed here. You don't check your compiled binary into source control, but proponents of vendoring do check their (transitive) dependencies in.
Yeah, this seems crazy to me. Also, honestly, while stable APIs can be nice, I would rather have people have the option to change the API in later versions, if it makes things better. But I want to be able to consciously decide to upgrade when I'm ready.
You do have that option. As I explained at the time (but this was not included in the live blog), the convention is to change the import path when the package API changes.
This is quite interesting, because after the Python fiasco in 2012/13 I thought a new language like Go would have learned from that and done that right from the beginning and I always thought Go did it. "We don't care" is even worse than "we just don't know how, yet".
Different people have different requirements. It makes no sense to declare some tool to be THE STANDARD(TM) if it's of no use for a large proportion of their users.
The way $GOPATH works and dependencies are managed is well-documented. Everybody is free to develop their own tooling. Right now, many people flock towards godep, but that might change in the future. Also, since this is only relevant during development and for compilation, getting it right is not as important as if you had to deploy every single one of your dependencies. With Go, you get one binary, and that's what you deploy.
And there are people like me who think that the programming language tooling shouldn't have to be tracking dependencies. If, for no other reason, than it is common to use multiple languages for a single project, and then a language-agnostic method should be used.
I've leaned towards just having everything in our DVCS (git in our case). External libraries are handled using git subtree.
So there is a protocol that is standard, but not the tool which implements it? That is not what I understood after I've read the article. But that would definitely mean it's actually not bad to have different tools.
There must be some standard, though. If you have neither a defined protocol, nor a defined tool, then you have a broken community. That you need to support different package managers for different Linux distros is one of the reasons people don't support Linux for their tools, and I personally would even argue that it is one of the reasons languages like Ruby, Python and Go need to solve that themselves.
It's only an issue for where source code is put, not binaries or shared libs. All Go programs are statically linked, so end users of a program don't care at all about dependencies. The only people that will care about it are those that want to build the binary themselves.
Also, if you are unaware, Go has great cross-platform building. I can generate the executables for all platforms/architectures Go supports from my single dev machine.
I didn't get the sense they were celebrating their lack of care, or that they chose not to care. (They didn't create Google's build system.) They were simply stating the reality.
Yes, we do vendor everything, in that we have a snapshot of all of our dependencies in our source control system. But we do it across the entire codebase, not per project. That is, we typically only ever have a single version of a library for the entire company (with a few exceptions). If a project needs to update to a later version, they basically update everyone using that library. For widely used packages, this can sometimes be a time consuming process, but we've found it to be preferable to the alternative of having version conflicts all over the place. This is generally true not just for Go, but all languages. So the idea of a project needing to pin to a very specific version of a dependency and never update doesn't really fly.
It sounds similar to the Linux kernel. If all the providers and consumers of an API are in the same repository, then you can change an API at any time as long as you update all consumers of the API at the same time.
It also results in having a huge version control repo. From what I've heard, Google can't move to git because of this... they're stuck with Perforce because it's the only VCS that can support a repo of their size.
We have good tools for this. A Googler can create a patch that upgrades a library and run tests for all projects to see what breaks. The project owners review the changes. Of course, some upgrades are easier than others.
In some ways this is similar to what Linux distros do, but sharing common source control, build system, and test runner makes it easier.
I'm going to guess you're a Rubyist. I had to look up what "vendor everything" referred to. There's a reasonably well referenced blog post by a Rubyist -- but only referenced by other Rubyists. I see that the Bundler tool looks for a 'vendor' subdirectory, so I'm going to guess that's how it became a part of the Ruby lexicon.
Anyway, just wanted to point out that it's possible you're using Ruby lingo and not general-purpose tech terms. I could be wrong, maybe I just missed the boat.
I think "bundle everything" might make sense to everybody, including Rubyists.
I remember reading about "vendor" branches in the cvs documentation (probably in the early 2000s), which if I recall correctly were trying to solve a similar problem to sub-modules in git. So I don't think the Rubyists invented the term.
That's where I first read the term, but it predates svn. As far as I know, it's been the common term for committing external dependencies to your own VCS for as long as there's been such a thing as a VCS.
"Vendor everything" was my conclusion to solving dependencies as generally as possible, while trying to knit PostgreSQL modules (locally produced schema chunks) and extensions with a base of versioned Rails apps.
> Andrew Gerrand: The lack of versioning built into the Go tool incentivizes library authors to provide good, stable APIs.
It certainly encourages stable APIs, but I don't see how making it very painful to iterate on your API is a strategy for good ones. My experience is that doing anything well requires the ability to gather and respond to feedback. If you can't iterate, then you're doomed to be stuck at 0.0.1 levels of quality.
So, do you consider the Go1 compatibility guarantee to be a mistake? Most suggested stdlib or lang improvements are DOA for the time being as a result, which is annoying, but of course so too would be frequent breakages.
I'm curious about your take, since you have some relevant experience.
> So, do you consider the Go1 compatibility guarantee to be a mistake?
No, I think a language basically has to declare that level of compatibility for a major version. The important part is to get as much real-world feedback and iteration in as possible before you bang the 1.0 gong.
This is a summary of a talk from the dotGo conference in Paris last week. Lots of good stuff was presented there. Recommend reading the other posts by sourcegraph on this and watching the videos when they come out.
How about a better optimizing compiler? I've been using Go and everything feels quite snappy, but justifying its use over Java on the server side, for example, might require a little more supporting data.
Regarding the article's mention of GopherJS: The Google Dart project should adopt Go as its language and reboot. Dart hasn't gone anywhere. GopherJS is a great low-budget transpiler and its source could be incorporated into a Go-based Dart.
It would be great to see what could be accomplished with a big-budget GopherJS.
While I really like Go's approach to concurrency, I don't think the rest of the language would map well to the browser whose core DOM API is designed around objects, inheritance, and exceptions.
PureScript and ClojureScript both have type systems better than the one in Dart. The simplest way to get a type system is TypeScript; I'm not sure how it compares.
"Better" is subjective (and I certainly wouldn't consider any dynamic system to be better than dart's) - Dart hits a certain sweet spot IMO, being simpler than a typeclass-based approach but far more usable than Go or Java. TypeScript is pretty nice but gradual, which is its own set of tradeoffs.
Dart has other selling points than the type system, sure, but degrading it to Go-style types would be a serious loss.
If this is the reason to finally add generics to Go, all the better, and a lot of people puzzled by the community's copy&paste attitude to code reuse might reconsider the language.
Also, this would require Google to finally polish up the Android NDK, which would be great even for non-Go users.
This is the highest and most important thing I want from Go: a proper debugger that is not insanely hard to set up (on any machine!) and gives a complete stack trace and such info. I am not sure how people can live without a debugger in 2014. GDB is not the answer to this, for sure. There are so many things that can be done (tooling-wise!), and this is, I personally think, the first thing the golang guys need to do.
That's true, I am great at setting things up... but Go+GDB was zero set up. `go build` then `gdb ${executable_file}`, and nothing else. I suspect that it's Windows being in the mix that gave you trouble?
Just an FYI, you can use MacPorts to `sudo port install gdb`, and it installs gdb as 'ggdb' without the signing mess. To use with ddd invoke as `ddd --debugger ggdb` and it works like a charm.
Depends how you define "proper" and how you imagine a debugger interface to be with which you can handle a running application with hundreds, if not thousands of goroutines.
I've been writing Go for the past couple weeks and have really enjoyed it. My day job is in JavaScript land all day long, and while I really love JS I was looking for something more. Go has helped me think differently about how I approach my code in JS now and continues to be a great source of knowledge.
Anybody up for backporting Go's stdlib to C? Doing so in an automated way would be all the better.
There are just so many things in Go that feel like 80% solutions - they make great demos, but in everyday use you have to fight them (looking at you, import system and GOPATH; the magical make() function; magical overloaded accessors; no if-expressions, or at least a ternary; pre- and post-increment being statements rather than expressions; lack of coercion to more precise types; no templating/generics/preprocessing; the need for an equivalent to realloc; having to go through reflect/unsafe to get things done; lack of proper type resolution for complex types).
There are many things I do like about Go, but much of the time it feels like a very pretty prison compared to the (admittedly less pretty) freedom of C.
> GOTRACE: emits Chrome trace viewer and will allow for us to visualize scheduler actions and more in Chrome
I'm very excited about this, but I wonder if it will scale to visualize hundreds to thousands of goroutines in a useful way. That's where existing inspection tooling, like logging and snapshotting goroutine dumps, falls apart.
I see anything that ties Go to a specific browser as a bad thing. Surely they should be browser agnostic if they want to encourage universal adoption of their language (or perhaps they don't really care about this)?
Remember that Go is coming out of Google, and therefore was initially run almost exclusively on desktops and servers. For a desktop or server, ARM is in fact exotic hardware.
I see he mentions "beginnings of Android support". This is something I would love to see. Does anyone know if there is a product roadmap that provides a timeline on this?
Thanks for the link. That's pretty interesting, albeit also disappointing. I'm focused on enterprise applications rather than games, so it looks like I'll have to stick to Java. I understand the complexities since this is being built onto the NDK; I guess I was envisioning it more as an implementation of Go on top of the JVM.