Ignoring the obvious difference in reach and success, in what way are Go, Dart and Android not equivalent to C, C++ and Unix in terms of open software projects?
* Go is just a less crufty C with a few niceties (packages, garbage collection, goroutines).
* Dart, imho, is just an attempt to capitalise on Java expats, like they did with Android, but this time in the browser. It's sadly a much less interesting language than Javascript, which it aims to replace. It does however fit the bill of getting more enterprise friendly software running in the browser and in the cloud.
* Android is a pretty crappy Java runtime running on top of Linux. It's not at all interesting.
What all of these things lack in comparison to C, C++, Unix, and indeed Java for that matter, is broad, industry-wide repercussions. None of them are the culmination of years of careful research. If you look for the huge public-facing industry epochs out of Google, you're looking at marketing and social change, not individual technologies.
Android has industry-wide repercussions. I'm too young to remember, but my history says that UNIX was regarded as a crappy, uninteresting operating system for years.
Go's dealing with a much more established computing environment than C, so it's not going to take over the industry overnight, but a lot of us think it hits the niche between C and Python very well, and could (with enough time) dislodge Java as the choice for most server-side development.
Agreed on Dart.
More importantly, look at how much Google stuff is not public: their whole distributed computing infrastructure. That's why they're not Bell Labs.
I always loathe these attempts to compare how "innovative" the past is to the present, when the relative ease of mining entirely new fields of endeavor is not considered. One might as well complain that Humanity's days of invention are clearly behind us, because in the past thousand years we've come up with at most one invention on par with Fire, The Written Word, and Agriculture. And piffle, Computing is hardly anything more than an obvious extension of other things anyhow.
We do not get to discover brand new fields of endeavor every day. Of course Go isn't as "innovative" as C... probably no computer language can be as innovative as C ever again. (Or Lisp, or a couple of others.) Even if one were somehow constructed (or simply pulled from the future), it would almost certainly still have some sort of pedigree that could be traced, whereby people could pooh-pooh its innovativeness. This is not a weakness; it's a testament to the richness of the field and how much exploration we've done. We don't get to discover new fundamental things every day precisely because we've done such a good job of exploring so many fields, not because we've lost the ability to explore.
I disagree that there is no room for PL innovation left. I think the situation with Go is that the authors were deliberately trying not to innovate too much in the interests of familiarity.
There's lots of programming language innovation going on, you just don't see it in languages like Dart and Go. Which, by the way, is fine; Go has plenty of reason to exist without being innovative from a PL standpoint. (I would point to languages like Clojure, Kotlin, Scala, and C# as examples of industry innovation in programming languages.)
I guess this is sort of flame-baity, but what did C# really innovate in? Even the highly-lauded LINQ is a few functional features + reified code (and rather ad-hoc at that). I know LINQ is award-winning, and probably changed the industry by putting functional concepts in front of tons of people that wouldn't have otherwise used it. But is there anything even remotely new in C#'s actual language design? (Yes, anyone can be a critic.)
Async workflow was in F# about 5 years before C#, and implemented purely as a library - no hardcoded keywords needed. As I understand, any language with monad syntax can create such a feature.
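For a loose illustration of "async as a library" (in TypeScript rather than F#, and with `fetchUser` as a made-up helper): Promise chaining expresses asynchrony as ordinary method calls on a library type, no dedicated keywords required.

```typescript
// Hypothetical fetch-like helper, invented for this example.
function fetchUser(id: number): Promise<string> {
  return Promise.resolve(`user-${id}`);
}

// Ordinary method calls on a library type: no async/await keywords needed.
fetchUser(1)
  .then((first) => fetchUser(2).then((second) => [first, second]))
  .then((pair) => console.log(pair))   // ["user-1", "user-2"]
  .catch((err) => console.error(err));
```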
The new generic variance is actually interesting. AFAIK, the MSR team had that as part of the spec, and it's been sitting in the CLR since 2.0. It's curious that C# is the only language besides MSIL to expose the feature.
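For anyone who hasn't bumped into it, here's a rough sketch of what declaration-site variance buys you, written with TypeScript 4.7+ variance annotations as a stand-in for C#'s `out`/`in` modifiers (all the types here are invented for the example):

```typescript
interface Animal { name: string }
interface Dog extends Animal { breed: string }

// `out T`: T appears only in output positions, so Source is covariant.
interface Source<out T> {
  next(): T;
}

// `in T`: T appears only in input positions, so Sink is contravariant.
interface Sink<in T> {
  put(value: T): void;
}

const dogs: Source<Dog> = { next: () => ({ name: "Rex", breed: "Lab" }) };
const animals: Source<Animal> = dogs; // OK: a Source<Dog> is a Source<Animal>

const anySink: Sink<Animal> = { put: (a) => console.log(a.name) };
const dogSink: Sink<Dog> = anySink;   // OK: a Sink<Animal> accepts every Dog
```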
I agree, but I do tend to think true innovation moves around in ebbs and flows, comes in spurts, and is mostly the product of hard work and necessity.
Computing has reached a point where resources have become so cheap we don't know what to expend them all on. There's still a hard compsci core and masses of unsolved problems, but most of us live in this nasty ecosystem of commodity "innovations" where it's not even clear there's an incremental improvement (and if there is, it can be a matter of opinion), let alone something that's going to indisputably change the way most humans live for the next half-century.
I don't suppose it's a bad thing; it's just that once a technology reaches a certain level of awesome we expect the flow of goodness to continue, and we blinker up and squint at it long after, looking for novelty and ingenuity in really small details.
The only way I can get a sense of perspective sometimes is to talk to my grandparents, who washed in a tin bath and witnessed TV, washing machines, microwaves and even modern soaps, shampoos and detergents come into the home. I swear my gran still revels in modern food processes that I take for granted, and sometimes even despise. Somehow incremental improvements in programming language semantics don't seem to compare to the fact that 2 years ago I had laser eye surgery, or that we have the capability to print chips at unfathomable nanoscopic sizes.
You're right that the past was not 'more innovative', but I think it's misguided to think we are not discovering new things every day - we certainly are. They're just not noticed immediately because of their relative 'obscurity' next to the existing paradigm we have built around imperative and structured models of computing. This is the real difference between the past and now: in the past they were free to explore without restraint, but now we are carrying baggage that can't simply be dropped (the requirement to remain compatible with C, JS or whatnot, and worse, the unwillingness of many programmers to learn new things).
Even C had baggage, from (B)CPL and Algol. C wasn't some magical innovation that appeared out of nowhere, but the result of incremental improvements to an existing paradigm, and the popularity of C++, Java and the like happened for the same reason.
Conversely, LISP was hugely innovative because it introduced a new paradigm for thought and experiment, which led to even more new ideas that wouldn't have been discovered if we were stuck in a purely imperative world. Some of those ideas could then be brought back into the imperative world, which brings about more innovations. Likewise, integrating procedural ideas into the mathematical world has brought about new innovations too.
I don't see languages like Dart as very innovative because they barely add anything new to the mindset. They're largely existing ideas that have been tried before, shedding some of the bad ideas of the past and putting the rest together in a neat package. However, they're still very much within the same existing paradigm, and incremental improvements to an existing paradigm only tend to solve small problems.
Bigger problems will be solved by completely new ways of thinking, but one needs to be able to abandon existing frameworks in order to explore them, then worry later about how to integrate the good ideas into existing models. This can be seen with LINQ - an innovation which cleverly linked ideas from FP into C#, but which no imperative-only programmer could've ever imagined within their limited paradigm.
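For a concrete taste of that cross-pollination, here's a minimal sketch in TypeScript rather than C# (the data and names are invented for the example): a LINQ-flavoured query built from higher-order functions instead of explicit loops.

```typescript
interface Order { customer: string; total: number }

const orders: Order[] = [
  { customer: "alice", total: 250 },
  { customer: "bob", total: 40 },
  { customer: "alice", total: 90 },
];

// Roughly "from o in orders where o.total > 50 select o.customer" in LINQ terms.
const bigSpenders = orders
  .filter((o) => o.total > 50)
  .map((o) => o.customer);

console.log(bigSpenders); // ["alice", "alice"]
```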
I think the reason people are so disappointed with Dart, and somewhat with Go, is that there's a huge amount of innovation in programming languages outside the popular paradigms, which these languages have largely ignored for the sake of making 'simple' languages that appeal to popularity; but many hackers are more concerned with discovery and with advancing our subject.
I'm old enough to remember, and as far as I can recall it was never considered "crappy" or "uninteresting". The command-line interface was considered a bit dangerous (rm /<return> OOPS!). A lot of people were trying to get their hands on it to escape the stranglehold of IBM and the other non-research-friendly OSes.
Anything on a billion devices (in five years) is very interesting. Android lacks broad repercussions, wut. Because it wasn't crafted by CS titans at the dawn of computing?
No, because it's a kernel google didn't invent with a runtime that uses a language they didn't create and has borrowed a lot of ideas from a competing OS that beat them to market.
Android is important. Is it really innovative? I don't see how.
"Ignoring the obvious difference in reach and success, in what way are Go, Dart and Android not equivalent to C, C++ and Unix in terms of open software projects?"
You replied with:
"What all of these things lack in comparison to C, C++, Unix, and itself Java for that matter, is broad, industry wide repercussions."
Which is exactly the one thing the original comment wasn't asking for. Your criticisms may or may not be true. However, I think it's fair to ask what differences exist while IGNORING the magnitude of the success of each company's projects. The article asserted that Google would never release its work to the public in the way that Bell Labs did because it is a publicly traded company. To counter this assertion, the original comment listed present day examples of open source projects that Google has released. The success (or failure) of those projects doesn't negate the fact that Google releases a lot of code to the public.
I reaffirmed the differences in scope because I lean toward the opinion that the achievements at Bell Labs were substantial on technical merit alone, to such a degree that I don't think they would necessarily have been less historic if they had been private inventions. I don't think you can make a meaningful comparison while avoiding this aspect.
Go, Dart and Android, to a great extent, have also been 'set free' yet they will never match some of the technologies out of Bell that we haven't even mentioned.
What's so interesting about Javascript? It's prototype-based OOP that raises the shittiness of dynamic typing to a whole new level? :-)
I mean, seriously, just the fact that Dart brings static typing and sane class-based OOP to web front-end is enough for it to be considered worthwhile.
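To make that contrast concrete, here's a rough sketch in TypeScript rather than Dart (the names are invented, and Dart's class syntax differs in detail but is similar in spirit): prototype wiring as in pre-class JavaScript next to a class-based, statically checked declaration.

```typescript
// Prototype style, roughly as written in pre-class JavaScript:
const pointProto = {
  double(this: { x: number }): number { return this.x * 2; },
};
const p1 = Object.create(pointProto) as { x: number; double(): number };
p1.x = 21;

// Class-based, statically typed style: one checked declaration.
class Point {
  constructor(public x: number) {}
  double(): number { return this.x * 2; }
}

console.log(p1.double(), new Point(21).double()); // 42 42
```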
Under the heading, "Background: Dart is a dynamically typed language, and proud of it":
"Dart is dynamically typed, in the tradition of LISP, Smalltalk, Python, and JavaScript. Users of dynamically typed languages, and in particular users of JavaScript for web programming, will understand why we chose this. If you are more of a static typing person, you may not be convinced—but let’s save this for another discussion. For now, let’s take this as a starting point: Dart is dynamically typed."
You should really read the link that I posted there (note as well that this is documentation from the official Dart site). Scroll down to the heading, "Why is the static typing unsound?" to determine why Dart is really and truly not statically typed by any stretch of the imagination. The TL;DR is that, by definition, static type systems are pessimistic and dynamic type systems are optimistic, and Dart's type system is profoundly optimistic.
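To illustrate what "optimistic" means in practice, here's a small sketch that uses TypeScript's `any` escape hatch as a stand-in for Dart's optional types (an analogy only, not Dart's actual semantics): the annotation documents intent, but the checker waves the bad call through.

```typescript
function upper(s: string): string {
  return s.toUpperCase();
}

const mystery: any = 42;      // dynamic value: the checker stops tracking it
const shout = upper(mystery); // compiles fine; the checker assumes the best
// At runtime this throws, because (42).toUpperCase is not a function.
```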
If you truly want a Javascript alternative with optional typing, try TypeScript instead.