I don't see how working remotely can ever replace the human interaction of working in one place. People forget, humans are not machines. They are biologically wired to work better in groups than alone, and our biology includes the ability to signal others using our face, body, voice, and even the way we perform an action. Voice is still a faster way of communicating than anything else, despite all our '60s-sci-fi-level technology.
Remote work will never replace face-to-face contact for jobs where high amounts of collaboration are needed; too many parts of the signal are lost.
Take for example a few jobs ago when our servers went down because of a local internet outage. The IT guy literally stood up from his desk and said "holy shit anyone that knows our infrastructure get in the meeting room now." We could hear the warning bleeps from the server room merging into an incomprehensible chorus, the death rattle of a company hours from implosion. The seriousness was obvious just from his actions and the sound of his voice. The response time was a few seconds.
How long would this take if we were spread across timezones with different hours and variable lag time between communication? If the internet is down is there any backup? How much would it cost to give every single employee a backup method of communicating? How does an employee separate a desperate plea for help from the endless stream of BS emails and messages that aren't terribly important?
Working in one place gives you a very powerful and natural means of communication that's nearly instant and can't be stopped by any hardware or software failure, save multiple employees dying simultaneously.
I don't see how working remotely can ever replace the human interaction of working in one place.
For things that require physical presence, like IT, sure. Most people don't work on stuff like that. For those people, video meetings, Slack, etc. can be fine for keeping in touch and having the entire team feel connected.
And, yes, a working internet connection is the single point of failure for remote work. It's reasonable for an office to have redundant connections, but not for individuals. In the 16 years I've lived in my current place, I've probably been without a connection for no more than 24-48 hours total.
And during those outages, the telephone generally works.
Everyone running to a conference room feels very Michael Scott to me -- and almost always just as unnecessary.
I wonder how those who claim face-to-face is best explain how GitHub and Basecamp seem to do just fine. Others point to Yahoo eliminating remote work as some kind of supporting argument, yet I'd hardly consider Yahoo a good example of anything. If "good enough for Yahoo" is an argument, then count me out.
Unless you physically have to touch something, remote can always work (and even be better.) It's a question of establishing processes that work. The 'genre' of remote isn't the problem -- it's always the implementation.
> How long would this take if we were spread across timezones with different hours and variable lag time between communication?
Most of us work in Europe, but our ops team has been created specially to be distributed across timezones. They have rotating pager duty, as you do.
If I am fixing our staging setup -- which holds no customer-accessible data and has no 99.85% uptime requirement, but can still bring a team of 50 devs to a halt, especially during release testing -- it takes ~5 minutes to get somebody to help via Slack/IRC/e-mail.
> If the internet is down is there any backup?
Internet? No. But we do have several VPNs. Getting someplace where there is internet has been reasonably simple so far.
> How much would it cost to give every single employee a backup method of communicating?
Hey, if there is real trouble, I can always call my manager.
> How does an employee separate a desperate plea for help from the endless stream of BS emails and messages that aren't terribly important?
We are split into sub-teams, where the 3-8 people you are closely working with would definitely read your message.
>Take for example a few jobs ago when our servers went down because of a local internet outage.
Having the option of your servers going down because of a local internet outage is a luxury your company had. As you correctly point out, a company with remote employees spread across timezones just couldn't allow itself such a luxury, and to me that sounds like a good thing. It forces people and the company to think and work in a distributed, fault-tolerant way.
I've found that working all in one place can lead to "over-meeting". What I mean by this is the tendency to have meetings for the sake of meetings and people not having time to get stuff done because they are stuck in meetings all day that are dubiously productive. It takes the same discipline that remote working takes to cut down those interactions to only what is important.
> Take for example a few jobs ago when our servers went down because of a local internet outage.
That's not a great example, because that problem was itself caused by over-centralizing. If you had used colocated servers or cloud hosting, a local Internet outage wouldn't have taken your servers down.
More broadly: at some point, every successful company will need to handle workflows of people who aren't in the same physical space. It's much easier to bake these habits in as a small company and grow with them than it is to retrofit them onto a company once it needs to open a second office in a different city or country.
People are used to working with others far away constantly in daily life. I don't think it's much effort to say "yeah, the guy that sits across from you -- from now on you'll have to call him".
Especially if most of the employees in the new office are new, I don't think it's much or any real work needed to teach people to work with remote employees.
I'm just arguing that decentralizing has a fixed cost of more difficult communication, not something you can save up for or build into your company from the start.
> I don't think it's much effort to say "yeah, the guy that sits across from you -- from now on you'll have to call him".
Let me start by saying that I'm not speaking hypothetically. I'm speaking from experience here, having worked for companies both remotely and not-remotely, and seeing where that's succeeded and failed.
It is actually difficult to make that adjustment, because working in the same space breeds a lot of bad communication habits which don't scale.
Working remotely from the start, you end up being forced to document work (even minimally), or to make decisions over a medium that is readily archive-able like email or Slack[0]. This is particularly true if you're working across timezones, even a small difference like east/west coast in the US (3 hours). Reading through an email thread to reconstruct history isn't the ideal form of company documentation, no, but it sure beats not having it at all because all of the discussions happened in real-life and nobody felt the need to send out an email afterwards to formalize it.
If you develop a collective habit of never making important (or irreversible) decisions without some sort of asynchronous and archiveable communication, and always having canonical internal documentation and runbooks for internal systems (because some of the people who may need to operate them are working different hours), you don't run into the situation where your company has suddenly hit 300 people and needs to open an office in Europe, but can't break the bad habits of relying on information held in people's heads and exchanged in ephemeral, synchronous form.
[0] Lest anyone misinterpret what I'm saying: Slack is emphatically not a replacement for proper documentation. It can, however, be a helpful forcing function to bootstrap proper documentation, and it serves that role far better than meatspace interactions or even video conferencing ever do.
>I don't see how working remotely can ever replace the human interaction of working in one place.
What "ever"? It has already replaced that for thousands of companies, including some valued at billions...
>People forget, humans are not machines.
Thankfully there's this great new technology called IM, where they can ask somebody (and in an even less intrusive manner than going to their desk and disrupting their flow).
So, in your anecdote above, what was the resolution by utilizing "all hands on deck" to restore the servers?
I would also argue that McDonald's employees require a large amount of collaboration, but this is built into the operating processes of the company, not face-to-face contact.
I love your strawman example of the "war room" meeting.
On my remote team this has happened 3 or 4 times during production outages. The situation was nearly identical, except for lack of physical server room (we use AWS). Someone wrote an @team message in Flowdock, "Things are broken, we need a war room." Within seconds, people were in a video chat room. Within minutes (or, in one terrible case, hours), the issue was resolved.
The way everyone knew it was serious is because we rarely use @team-mentions and rarely call "war rooms". Also, our automated monitoring was firing and triggering pagers in PagerDuty for the on-call engineer, the log of which was also plainly visible to everyone in our Flowdock Inbox.
So yes, your example here is a nice one, but a strawman. Remote teams know how to take outages seriously.
I would say that Google is trying to replace JavaScript with Dart in any way they possibly can. The reason is simple: JavaScript is an open standard; Dart is owned by Google.
Their reasons that "dart is better" is the typical google koolaid before they attempt a market takeover. As we've seen over and over with Android, chrome, and AMP especially. Google loves to make glass house open source projects you can't touch. You're free to look at how great it is, feel it's well refined curves and admire the finish, but God help you if you don't like how the project is going and want to fork it for yourself.
Don't bother trying to commit a new feature to any of Google's software that they don't agree with. It will languish forever. Don't bother forking either, because they'll build a small proprietary bit into it that grows like a tumor until it's impossible to run the "open source" code without it.
Fuck Dart, I don't care how great it is. Microsoft is being the good one in this case by extending JS with TypeScript; Google is trying to upend it into something that they control.
Eh? Dart was trying to replace JavaScript in some fashion, but that obviously failed (they had good intentions, but bad execution). Seeing that Chrome hasn't included it yet, I doubt it will. So that dream is dead.
Dart is a replacement for GWT at this point; see AdWords being written in Dart now[0]. Though it's not clear how Flutter.io will play into all this (that's targeting mobile with no web target).
As for TypeScript, Google actually embraced that fairly heavily, with Angular 2 being written in it.
Give me a break. It's just as accurate to say "Dart is an open standard, Javascript is owned by Mozilla". There may be valid technical, pragmatic or moral reasons to prefer Javascript, but this is just FUD.
Really? Google's once-rosy history with open source projects isn't looking too friendly these days.
And yeah, ECMA is a totally open standard with committee members from all sorts of companies and backgrounds. Dart is not. I don't care if JS is slightly worse; at least I know that, for now and the foreseeable future, I won't be paying a Google tax to use it.
After the open source community "stole" MapReduce and HBase, Google has begun offering Maglev and Spanner as "services" rather than giving them to the OSS community. Maglev was supposed to be open sourced a while ago, and Google now offers a DDoS protection service on Google Cloud instead, most famously with their Krebs PR stunt. Maybe they forgot about it? Did I mention they removed "don't be evil" as their motto a while back because it was "immature"?
Google has gone down a decidedly different path since the Alphabet transition a while back. It's no longer the brainchild of Sergey and Larry; it's losing its soul and becoming a shareholder cash machine. Maybe the floundering of some of their moonshot projects is taking a toll on the company's confidence that it can remain a market leader while maintaining its traditional values of openness and shunning questionable marketing tactics? I'll admit that's pure speculation, but I really wish I knew what happened to the Google I remember.
Since I'm being accused of FUD, I might as well throw in a bunch more speculation for the hell of it. Their most recent papers are conspicuously lacking enough detail to build your own implementation, and read more like marketing whitepapers about how to use their services and how great they are. Their TensorFlow library was probably released as truly open only because they couldn't hire enough devs with machine learning experience to meet their needs. They needed to introduce the world to enough of the secret sauce to meet their own demand, and they remain completely silent on how their real moneymakers work.
My extreme speculation? They started using machine learning for search a few years back and found out just how easily their previous search algorithms, developed and perfected over years, were utterly outclassed within months. A startup with these techniques could have been their undoing. This oversight cannot be repeated; they cannot offer too much of their technology back to the world anymore, lest they risk being beaten to death by their own weapons. Thus Google threw away a lot of what made them Google, and rebuilt themselves as a semi-monopolistic oligarch much more in line with traditional too-big-to-fail companies.
They now spend more on political lobbyists than any other tech company, by far. They like to release nice things for free when a competitor just happens to be making a decent living charging for the same thing. They engage in a lot of the typical corporate warfare now, which doesn't seem natural for a company with a nice, playful exterior and an original motto of "don't be evil".
As far as the FUD accusation goes, does it count that I don't work for or with any company that has anything to do with Google or the other tech giants? These are just my opinions based on observations, and a lot of those opinions are backed by verifiable facts.
You're free to put the same data together and make your own conclusions, which would lead to more interesting discussion than dismissing my points just because.
One addition I would like to make: every corporation is a shareholder cash machine, and Google has always been one; it didn't suddenly become one. The problem is in the institution of the corporation itself, which has a lot of flaws.
"Don't be evil" is the first and last thing stated.
Why do you post stuff that's trivially searchable and trivially called out as bullshit? Why would I bother reading any of your rant if you can't get trivial details right?
JavaScript sucks because it has a weak standard library, ugly syntax, and its monopoly in web development has the industry stuck in a state of mediocrity, in my opinion. I have a VERY hard time believing that the apex of engineering intelligence and ingenuity is found in JavaScript. Also, as much as I love Elm, for instance, languages that transpile to JavaScript are just lipstick on a pig, and do little to solve the underlying problem. I'm not a fan of Dart either, but at least Google made an attempt to solve the JavaScript issue in the best way possible with Dart; by aiming to get rid of it.
I agree with you that JavaScript sucks balls in far more ways than is reasonable for such a widely used language. The design is seriously shit when compared directly to really any popular language, even PHP.
I disagree with transpilers not being a reasonable answer. Eventually JavaScript will be okay to work with, some day. Until then, transpilers offer nearly unlimited freedom in redesigning the bad parts of the language while maintaining 100% forwards and backwards compatibility. It's really as good as it can get.
Since they compile down to a Turing-complete language, there's really no limit to the heaps of dog shit they can abstract away. Historically, C++ began as nothing more than an insanely complicated C preprocessor, and it has more than proven that such a strategy can be viable long term. In fact, the first C++ compiler, cfront, is still available and literally outputs raw C code from C++.
TypeScript is easily my favorite since it's designed to compile down to very human-friendly JS. Getting TypeScript out of your stack requires nothing more than one last compilation with optimizations turned off. Unlike most transpilers (looking at you, Babel), the output JavaScript uses standard JS workarounds like the Crockford privacy pattern for classes. This gives TypeScript fairly practical forwards and backwards compatibility. You can always target output to a newer version of JS, or convert your codebase from TypeScript back to JS at any time.
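(To make the "human-friendly JS" claim concrete -- a rough sketch; the commented block approximates what an ES5-targeting tsc emits for a class, and exact formatting varies by compiler version:)

    // TypeScript source:
    class Greeter {
        constructor(private greeting: string) {}
        greet() { return "Hello, " + this.greeting; }
    }

    // `tsc --target ES5` emits plain, readable JS along these lines:
    //
    // var Greeter = (function () {
    //     function Greeter(greeting) {
    //         this.greeting = greeting;
    //     }
    //     Greeter.prototype.greet = function () {
    //         return "Hello, " + this.greeting;
    //     };
    //     return Greeter;
    // }());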
If it catches enough traction, browsers will begin implementing native TypeScript parsing, since it offers many potential performance optimizations on top of what JS is capable of. At that point you just maintain your TypeScript codebase and use some library to serve your legacy clients transpiled JS on the fly.
If TypeScript gets enough adoption it will fix JavaScript for good, in the same way the original C++ compiler (which just transformed to C) led to native support, so I'm really rooting for it.
I can see your point about transpilers. Of all the transpilers I've used, I like Elm the best, due to its functional nature, syntax, strong typing, compiler, and debugger. It isn't fully stable yet, as a language, and there have been breaking changes in each release since I started using it, but it offers the most promising departure from JavaScript. I guess anything that facilitates the de-turding of web development in general is a good thing.
Haha, de-turding is a great way to put it. I just don't think a new language is a reasonable option. There are, what, maybe 50,000 different versions of the ~500 web browsers from different eras still running out there somewhere?
If having code work almost everywhere is important for a project, that project will be using vanilla ES3-5 JavaScript for the next 10+ years. Maybe not at the latest startups, but all sorts of enterprisey ancient stuff still needs some path forward. If TypeScript can provide that, it will become the lowest common denominator at any company that ships both new and legacy codebases.
TypeScript-to-JS transpilation is extremely similar to the strategy that produced C++ from C. We know it will work; it's been done before to great success. C++ isn't perfect, but I think everyone agrees it's definitely a lot nicer to work with than C, and that's exactly how I'd describe TypeScript as well.
I do see your point about C++. I was programming when it first came out, back in 1985. However, I've always thought C++ felt "Band-Aid-y". It never felt elegant and cohesive to me, the way Objective-C does. C++ is like a chainsaw-hang-glider-shotgun-bat: badass, to say the least, but still a clumsy work of baling wire and duct tape. TypeScript feels the same way, only not quite so badass. It's more like the Robin to C++'s Batman.
Having said that, my only exposure to TypeScript has been in Angular 2. Having used other tools like Ember, React, and Elm, Angular 2 seems like a magic step backwards to me. I will concede that my opinions on TypeScript may be tinted by my experience with Angular 2, though, so I'll give TypeScript a stand-alone, honest evaluation and adjust my opinions as necessary.
Looks like alternative facts have reached the tech world too?
You can take as hard a look at Google as you would like, but choosing Microsoft over Google (one for-profit company over another) while not caring how the technology, the licensing, or the workflow compares is a bit hypocritical (e.g. they are both open, and they both have rules for commits).
I'm wondering, why do you need a throwaway for such heavily invested FUD? Your other comments here are in similar tone, and I'm surprised to see such hatred without any obvious trigger. Maybe if you would come forward with your story, it would be easier to discuss it?
disclaimer: ex-Googler, worked with Dart for 4+ years, I think it is way ahead of the JS/TS stack in many regards.
>I think it is way ahead of the JS/TS stack in many regards.
In what ways do you consider it ahead of Typescript? Personally as someone who's particularly fond of static type systems (Haskell and the like), Typescript's type system seems way more advanced and powerful than Dart's (union and intersection types, in particular, and non-nullable types). Map types (introduced in Typescript 2.1) also seem pretty interesting.
Some of my earlier notes are in this thread (it is more about the day-to-day features I actually use and like, and less about the fine details of the type system):
https://news.ycombinator.com/item?id=13371009
Personally, I don't get the hype around union types: at the point where you need to check which type you are working with, you may as well use a generic object (and maybe an assert if you are pedantic).
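(For context, the feature union-type fans usually point to is control-flow narrowing, which removes the casts and asserts -- a minimal TypeScript sketch:)

    // The compiler narrows the union inside each branch, so no casts are needed:
    function describe(id: string | number): string {
        if (typeof id === "string") {
            return id.toUpperCase(); // here `id` is statically a string
        }
        return id.toFixed(2);        // here it can only be a number
    }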
Intersection types may be a nice subtlety in an API, but I haven't encountered any need for them yet. Definitely not a game-changer.
I longed for non-nullable types, but as soon as Dart got the Elvis operator (e.g. a?.b?.c evaluates to null if any link is null), working with nulls became easy. Also, there is a lot of talk about non-nullable types (either as an annotation for the Dart analyzer or as a language feature), so it may still happen.
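(For comparison, the same null-safe traversal in TypeScript's optional-chaining syntax, which modern TS also supports -- a minimal sketch, not Dart code:)

    interface C { value?: number }
    interface B { c?: C }
    interface A { b?: B }

    // a?.b?.c stops at the first missing link and yields undefined,
    // mirroring the Dart Elvis-operator example above.
    function read(a?: A): number | undefined {
        return a?.b?.c?.value;
    }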
Mapped types are interesting indeed. In certain cases it really helps if you are operating with immutable objects, and mapping helps with that (although it does not entirely solve it, because the underlying runtime does allow changes to the object).
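(A mapped type in the sense discussed -- a minimal sketch of a hand-rolled Readonly; note the protection is compile-time only, matching the runtime caveat above:)

    // Every property of T is mapped to a readonly version of itself:
    type MyReadonly<T> = { readonly [K in keyof T]: T[K] };

    interface Point { x: number; y: number }

    const p: MyReadonly<Point> = { x: 1, y: 2 };
    // p.x = 3;  // compile error: x is readonly
    //           // (the underlying JS object is still mutable at runtime)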
I agree about union types. They can quickly result in insane variable declaration statements that are hard to understand.
I dislike nulls, though; I always wish people would just use a flag or error handling when objects are undefined, instead of "hey, this object is the flag, and sometimes it's not actually an object!"
You'd think language designers would learn after dealing with null pointers :)
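(What the alternative looks like in practice: with TypeScript's strictNullChecks, the maybe-missing case is part of the type, and the compiler forces you to handle it. A minimal sketch; findUser is a made-up example function:)

    // The possible absence is spelled out in the return type...
    function findUser(id: number): { name: string } | undefined {
        return id === 1 ? { name: "alice" } : undefined;
    }

    const u = findUser(2);
    // console.log(u.name); // compile error under strictNullChecks:
    //                      // `u` is possibly undefined
    if (u !== undefined) {
        console.log(u.name); // fine: narrowed to the object type
    }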
So Dart hasn't really incorporated any lessons from 20 years of Java, has it? Google's answer to Tony Hoare's billion-dollar mistake is... "The Elvis operator"?
TypeScript has some really cool type system features. Union and intersection types are fun and really handy when interacting with dynamically typed code. (If you go back through history, you'll find almost every language with union types also has a mixture of static and dynamic typing. See: Pike, Typed Racket, etc.)
Self types (the "this" in the return type) is handy.
I can see us adding some of those to Dart eventually.
Non-nullable types are great, which I've said for a very long time[1]. We are finally working to try to add them into Dart[2]. It's early still, but it looks really promising so far. It kills me that I've been saying we should do them for Dart since before TypeScript even existed and still they beat us to the punch, but hopefully we can at least catch up.
The main difference between TypeScript and Dart's type systems (and by the latter I mean strong mode[3], not the original optional type system) is that Dart's type system is actually sound.
This means a Dart compiler using strong mode can safely rely on the types being correct when it comes to dead-code elimination, optimization, etc. That is not the case with TypeScript, and at this point likely never will be: there is too much extant TypeScript code, and JS interop is too important, for TypeScript to take the jump all the way to soundness. They gain a lot of ease of adoption by forgoing soundness, but they give up some things too.
In addition to the above, it means they'll have a hard time hanging new language features on top of static types, because the types can be wrong. With Dart, we have the ability to eventually support features like extension methods, conversions, etc., which all require the types to be present and correct.
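(To make "the types can be wrong" concrete, here is a standard illustration of a TypeScript soundness hole, not taken from the comment above: arrays are covariant, so the checker accepts code that fails at runtime.)

    class Animal { name = "generic"; }
    class Cat extends Animal { meow() { return "meow"; } }

    const cats: Cat[] = [new Cat()];
    const animals: Animal[] = cats;  // accepted: arrays are covariant in TS
    animals.push(new Animal());      // also accepted: it's "just" an Animal

    cats[1].meow(); // type-checks, but throws at runtime:
                    // cats[1] is a plain Animal with no meow()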
TypeScript is definitely an improvement over type-free JS, but it's still wedded to the JS type system, so unfortunately it will still let you shoot yourself in the foot in ways you might not anticipate if you have experience with languages that have stronger type systems.
For example, if you have a string-typed foo and a number-typed bar, "foo + bar" is still a valid statement in TS, because they have to maintain backwards compatibility with JS's unfortunate language design choices.
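(Concretely -- a minimal sketch; the compiler accepts this because "+" on a string and a number is defined by JS's coercion rules:)

    const foo: string = "total: ";
    const bar: number = 42;

    // Valid TypeScript: `+` falls back to JS coercion,
    // so `result` is the string "total: 42" rather than a type error.
    const result = foo + bar;
    console.log(result);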
TypeScript and Dart are completely different animals. I can leave TypeScript for good by compiling to JS once, and it's designed to output human-readable code. The JS it produces will be immediately usable as JavaScript, and I'm totally free of the semi-open language that MS controls.
Dart is a different language; it has no fallback to something familiar. I don't doubt that it's many years ahead of TS in every way, but it's still rather proprietary compared to TS, which I can shut off at any time with minimal effort.
The openness of TypeScript and Dart is comparable: both are run primarily by their champion companies, with code free to review and fork but limited ability to commit changes. They both require you to sign over copyright of committed code, which I don't like for my own reasons, but the license is open source.
The big difference to me is that TypeScript offers an escape hatch and Dart does not, because one is pretty much a JavaScript enhancement and the other is a completely different language. I hate vendor lock-in and the loss of the open web in general, and you will see this as a common theme in most of my more flamey (controversial) comments. The web is closing off in so many directions, and as an open source developer in my free time this is of great personal concern. I don't like that Hacker News and Reddit can be echo chambers, and posting contrary opinions usually makes the discussion more balanced even if a lot of people don't like it.
I'm not ex-Google, MS, or any of the tech giants. I'm not smart or dedicated enough to work anywhere you've heard of :). Most of my comments on throwaway accounts are unpopular; that's why I don't use my normal account. I'm not some invisible super shill -- Hacker News knows all the accounts I use, and I'm fine with that.
I've just got my own opinions and when they're controversial it's not in my best interest to comment using my normal account. It wouldn't be for anyone. It would be utterly stupid to hurt my open source projects or reputation as a developer just because somebody doesn't like my opinions. My code and my work have no opinions, and I like to keep it that way. Throwaways are my way of keeping my opinions to myself, and I don't see anything wrong with that. Separation of church and state if you will.
I'm not totally against Google or any company in general. Microsoft in particular has an extremely rocky history when it comes to open source projects. They've probably done more harm to Linux than any company in existence. If typescript and dart both had equal migration paths I would choose dart in a heartbeat. I love tsickle and the closure compiler and the fact that the angular team is using typescript. Still, I feel like my criticism of dart has some truth to it at least.
I've taken aim at Google for the past week over what they've done to the openness of Android, AMP, and Dart. Am I wrong? It's hard to argue that any of Google's platforms are as open as they were a few years ago. Some of my really unpopular opinions were posted in response to other posters calling me a FUD-spreader or a shill, and can you blame me? It's one thing to say "I disagree and this is why", but pretty rude to just say "I don't believe you because you're obviously lying or getting paid to say that". To that I say: well, screw you, I'll post what I want without being polite at all if you're going to be so rude. I'm replying nicely to you because you genuinely asked why I used a throwaway and said that you worked with Dart at Google -- way more than most would admit.
Having an unpopular opinion just gets you labelled as a shill or FUD and that's a lot of the reason I use throwaways. I've actually gotten death threats before for disagreeing with people on the internet. It's hard to say I would be better off getting death threats from people that can easily find my name, occupation, and address. Look at more of my post history and you'll see that I'm probably not a shill, or a really sneaky one if you don't want to believe that.
I just bashed Intel for removing ECC support from their desktop lines a few days ago, and Facebook and Microsoft a day before that for their shittastic timeline algorithm and irrational fear of Linux taking over corporate clients respectively. A day before that I trashed PayPal for some of their recent cronyism and praised the quality of Google's Guava libraries.
In my more ancient post history I mention how much C#'s ecosystem sucks compared to Java's, and how even open sourcing the language doesn't mean much when it only targets Windows using Visual Studio. I bitched about the baby boomers screwing the millennials and asked how it's possible to start a side business. I said I don't like Python because it's slow, and gave people online marketing tips on how I write high-ranking blog articles. I mentioned that .NET Core sounds great but is alpha quality, and that it sucks that I have to use the new version of Windows Server to have HTTP/2 support. I talked about how WordPress is absolute shit for anything even medium-traffic. Mentioned a quip about using monotonic Gray codes. Brought up some arguments about Monsanto and GM windblown crops. Mentioned that Uber doesn't give a fuck about their drivers, with some examples of this despite their press releases saying otherwise.
I'm getting a lot of comments about how I'm some full of shit corporate mouthpiece again so I guess it's time to cycle the old throwaway again.
I'm not asking you to agree, but keep in mind that some people will have opinions completely contrary to your own. Sometimes their reasoning will be logical even though you come to a different conclusion; people are just different.
Your worries about being locked in may have been valid 2-3 years ago (1), but things have changed a lot since:
Dart has an ongoing project (the Dart Developer Compiler) whose goals include producing readable, idiomatic ECMAScript 6. That is as close to your TypeScript fallback as it can get. (2)
Somebody also demonstrated that Dart-to-LLVM compilation is possible. The language has a decent library for parsing Dart sources; worst case, if you are that heavily invested in your product, you could write something that transpiles your codebase yourself. I did try it on small-scale, specific examples, and it is actually not _that_ hard to do; if my business relied on it, it would certainly be within reach.
(1) I'm not sure you can call it lock-in, as it is entirely open source: you can fork it, build it yourself, change it if you have special needs. The same goes for Perl, PHP, Python, Go, whatever language you prefer. Yeah, most people don't do it. Why? Because most people don't need it. If you become Facebook-sized, it may look better to invest in the PHP toolchain and VM than in transpilers. YMMV.
(2) From a purely technical point of view, I wouldn't call it reassuring that the default fallback platform is JavaScript for so many people (even on the server side). It is sure depressing that we are stuck with "1" == 1 being true and the wrong ordering of "[1, 2, 10].sort()" for as long as we fall back to JS, and TypeScript does not improve on that.
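(Both warts are easy to reproduce. A minimal sketch -- the `any` annotation mimics plain JS, since strict TypeScript would actually reject the first comparison:)

    const one: any = "1"; // `any` mimics plain JS's loose typing

    console.log(one == 1);          // true: loose equality coerces "1" to 1
    console.log([1, 2, 10].sort()); // [1, 10, 2]: the default sort compares
                                    // elements as strings
    console.log([1, 2, 10].sort((a, b) => a - b)); // [1, 2, 10]: fixed with an
                                                   // explicit comparator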
I didn't know about the developer compiler; when that reaches production I won't have any real criticism of Dart.
For now, a JS fallback is the only realistic option for running code on the web. Even if we get native TypeScript or Dart support tomorrow, we will still need to put up with JavaScript for something like 7-10 years. For this reason a readable JS fallback seems like a vital feature, to me at least. It's depressing, but it's reality for the majority of web projects.
Does Dart have a pluggable compiler framework similar to Roslyn or ANTLR ASTs? That would make it a lot easier to write your own conversions.
One more point in TypeScript's favor, though: it would be a lot easier to modify the JS VMs in browsers to support native TypeScript than Dart. In my mind it's a lot more likely to happen because of this (less work).
> when that reaches production I won't have any real criticism of Dart.
It's still got bugs, of course, but we have internal customers working on real projects using it on a daily basis.
I agree totally that picking a language is a huge commitment and you want to do that with an organization (company, standards committee, group of open source hackers, whatever) that you trust.
Google is a huge company and has done lots of good and bad things, so it's easy to find enough evidence to support assertions that we should or shouldn't be trusted based on whichever view you want to demonstrate.
One way I look at it is that instead of answering the absolute question "Can I trust Google to shepherd the language well?", consider the relative question "Can I trust it to shepherd the language as well or better than the maintainers of other languages I might choose?"
Assuming you've got some code to write, you have to pick some language, so the relative question is probably the pertinent one. I hope that we on the Dart team are a trustworthy pick, but different reasonable people have different comfort zones.
> Does Dart have a pluggable compiler framework similar to Roslyn or ANTLR ASTs?
All of our stuff is open source[1], including all of our compilers and the libraries they are built on. Most of it isn't explicitly pluggable because plug-in APIs are hard and Dart in particular doesn't do dynamic loading well.
But it's all hackable, and much of it is reusable. In particular, the static analysis package[2] that we use in our IDEs also exposes a set of libraries for scanning, parsing, analyzing, etc. that you can use.
Yeah, you can fork. Take AMP or Android for example.
With AMP, the instant you fork and change a single character of code, it becomes incompatible, because part of AMP is a verifier that makes sure only the official version is used with Google's cache. Without being able to serve your custom AMP pages from Google's AMP cache, the entire point of its existence goes away. The reason? Typical "security": "tampered" versions of AMP could "do bad things". Laughable, considering that vanilla web pages are allowed to do absolutely anything JavaScript allows, and Google has no problem showing those pages in search results or letting them freewheel in a Chrome tab. If Google wanted AMP to be open, they would have built the restrictions into their Chrome browser, so the browser could enforce them while allowing users to run whatever customized AMP implementation they want.
And Android. Android used to honor the promise of being open -- years ago. This was before every manufacturer was encouraged to lock bootloaders, and back when platform SDKs and hardware drivers were generally available, even if they were kinda hard to get. This was also before the Android kernel heavily diverged from mainline Linux, and before Google Play Services grew from a tiny app into a framework that powers half the OS features.
Nowadays you can only run your own Android on devices specifically built for it. Open distributions like CyanogenMod are dead or dying. Google Play Services is closed and proprietary, and probably about 95% of popular apps require it to work. Even if you manage to get your own Android distribution built and running, you will need to sideload all your apps, and most apps just won't work, because they've been built to depend on proprietary bits that Google has snuck in all over the place.
Google is better at the "embrace, extend, extinguish" strategy than Microsoft ever was. So good, in fact, that they have many well intentioned people defending them to the death even as they choke off the very open source projects they created. Virtually every platform that Google runs for more than about 5 years goes from completely open to something impractical to run yourself. If you don't believe me look into any of their older projects that are "open source".
After a certain point it's free software as in "free coupons". Somewhere in the mix, eventually, the price of their "charity" is passed on to you.
Hardware made a big difference with Linux and its growth. It could run on PC/x86, Alpha, and SPARC because those are platforms. ARM is a spec sold to manufacturers that all have their own SoCs that attach random shit to random pins and implement the worst kernel hacks, which can never be upstreamed.
We can't have the 90s Linux revolution on handhelds because they each need customized kernels and drivers. Many fall into disrepair and go unmaintained, even in projects like Cyanogen. (On two phones I tried running newer CM images on old hardware and ran into speed and performance issues.)
This is why things like Plasma and Ubuntu mobile have such limited phone support. Porting is difficult.
Also notice that I said "PC" above. There are plenty of x86 systems that are just as difficult to port to (PS4, WonderSwan, those old T1 cards with 4x486 processors on them). At least Microsoft forced their ARM manufacturers to use UEFI. Too bad those platforms have locked bootloaders. I'd love to see some Lumia running Plasma.
First up: I'm not arguing with you. Android is not as open as it was in the old days. But I think it's not fair to blame only Google here.
Google making Google Play Services was a natural reaction to manufacturers never updating Android on their phones for years, leading to all kinds of vulnerabilities and bugs that kept Android far behind iOS in quality and features. Let's face it: Android used to be sneered at, the red-headed problem-child OS that used to be the butt of jokes, until it grew out of puberty and pimples in Ice Cream Sandwich. If manufacturers had truly honoured OS updates, Play Services may never have been built -- it allows Google to update Android without updating the OS. And yes, they will retain full control over Play Services; I completely understand the need to fully possess it and ensure a high level of quality assurance.
Also, blaming the fall of CyanogenMod on Google is ridiculous. CM fell because of mistakes made by Kirk McMaster and several others. He attempted to be a dictator, even going to the extent of trying to ban OnePlus from selling phones in India -- this was fought and resolved in the courts. All goodwill for CM was destroyed. OnePlus ditched CM and moved to OxygenOS. CM had a stroke and died. Now Lineage is the new shiny OS rising from the cooling corpse of CM.
Interesting info about CM, didn't know there was more to it.
I still disagree with Play Services, because it wouldn't be that hard to force manufacturers to support updates when you command such a large part of the market.
There are these things called contracts, and if there are clauses an OEM must meet to be allowed access to Google services, Google's lawyers could certainly add a few more sentences regarding compulsory updates.
Android is open source, and OEMs see long-term updates as a money-losing proposition. If the "lost sales" costs (as they see them) outweigh the benefit of shipping Google services, OEMs will fork/ship AOSP and call it a day -- they want profits more than they respect/fear Google. Google's negotiating position is not unassailable.
It sounds like you have no idea how deep Google's MADA contracts with OEMs already go. =) Shipping devices with Google services included requires signing over your entire business decision-making to Google: they have approval/veto power over every single device and software update you release that contains their services, and they also forbid you from selling any Android devices that don't contain their services, just to make sure you don't try to exert any independence on the side.
Android isn't open source, except in the hearts and minds of fanboys everywhere. =)
Thanks so much. I get a lot of downvotes and use throwaways because of comments like this, so it's nice to hear some praise every once in a while.
Google's projects all seem very inviting from a distance. Usually it's not until you're ready to implement something that you find out that you're fucked, and how.
Serious ranting below but something I never get a chance to say:
I'm a born skeptic and avoid the Silicon Valley mindset even though I'm a driven person. I used to find myself often in disagreement with others because they don't, or refuse to, see the truth. Some people don't like to be told they're wrong. Many of those will fight other opinions just to justify their own decision, but will secretly reconsider. Others will hang onto beliefs with every ounce of strength as their mistake builds into a maelstrom that consumes everything they care about.
Some people, when you challenge their beliefs, will end a friendship rather than admit you were right in the first place -- especially if you refused to do something their way and it saved them from disaster. Some can't stand to be THAT wrong. As if I were some asshole who saved them from their fate, and now they're a spirit left wandering the earth until they can fulfill their original destiny. It's like I helped them cheat without telling them about it, stealing the joy from victory. This is something I learned the hard way more than once.
In real life I keep my opinions to myself to avoid this nastiness, and offer opinion only when asked. The people open to advice even if they disagree learn to ask my opinion since I always tend to have one. The majority of people I know, including some good friends, have no idea what my personal opinions are on many subjects. It would cause pointless pain and argument with people I care about regardless of their beliefs.
I'm not loyal to any platform or company and I will freely throw a strongly held notion to the wind if I find disturbing evidence that I was mistaken. Most people are not so malleable.
A lot of people take their beliefs too seriously, to the detriment of society. At least on the internet I can express my opinion, however "uncool", using throwaways.
In the real world, the best and most meticulously researched advice I've ever given has been at exit interviews -- the one time you can be open, honest, and politically incorrect with coworkers. Multiple companies made serious operational changes after my exit interviews. Others have told me, in nicer words, "that's really fucking great to hear; I'm pretty happy I never have to talk to you again".
The problem is, you never know how somebody will respond. During exit interviews I'm treated more like a person than a subordinate since the boss relationship is formally over, which helps I'm sure.
In real life, the way to influence a strongly held opinion is best described by watching the movie Inception: you introduce nothing more than minor inconsistencies while outwardly expressing little opinion, then wait to see if your clues are enough to lead them toward the promised land.
My other common tactic is to do things without asking for any opinions first. You at most come off as insensitive or aloof, rather than as someone who intentionally disregarded their advice. Usually the opinion matters less in practice than if you had asked in the first place. Classic "forgiveness is easier than permission".
I've sometimes wondered if this makes me a psychopath or if that's just how some people tick. Anyway, god bless throwaways and the internet.
At the risk of downvotes, I'll be as blunt and honest as you claim to be: after reading this screed, you mostly come off as someone with an inflated view of your own importance and abilities. While I might agree with you that the Silicon Valley mindset is harmful, anyone who would rather keep a friendship and watch a friend go over a cliff in a barrel isn't really worth keeping around -- either as a friend or an employee.
Fighting the good fight -- fighting for the things that are just, and true, and good -- is nearly always worth it; the key is to back off before it becomes a Pyrrhic victory.
I think the parent poster is using the term "friend" in a more liberal sense. I would counter that anyone you're afraid would berate you for honest feedback can't honestly be considered a "true" friend. In most cases, anyway. Even then, some people just have to learn the hard way.
I'll save somebody but only if it's worth the cost of losing a friend. The better the friend the more I let them learn from their mistakes. The truth is that losing a good friend would hurt us both more than helping them mend the wounds after smaller stuff.
It's not being evil or that I'm always right. The comment was mostly in reference to those that have been calling me a shill the past few days and how they should keep in mind that their opinion is not fact.
I gave up the good fight years ago. The worst was when I helped turn around a failing small business. We all wanted the same goal, the company to be successful. It sucked so bad that I learned that it's better to be nice to your friends than to dedicate yourself to a cause or try to fix all their problems.
If that means letting them fall sometimes that's okay, as long as you don't let them get any deeper than you can reach. If you help pull them out in the end you're still a good friend.
So the company turnaround: it worked in the long run, but at great cost. Cutting employees who sucked at their jobs but were friends and had helped us with the initial plan. Cutting moochers I loved but who were sucking the company dry with constant unscheduled time off and freebies. Redoing our systems to automate as much as possible made us our first profit in years, but a lot of that came from jobs eliminated. Hiring people of a higher caliber than existing employees by raising application requirements above what most of the current employees could meet. Offering our new, more qualified people more money than Bob, who'd been there for 15 years but did our financials on pieces of scrap paper.
By the end of that process a few years later, my lesson was that I made the owners a lot of money at the expense of losing about half my friends. Most of the other half resented me for what I had done and thought I was a traitor, even though I had just helped implement exactly what we had agreed upon a few years back.
We planned to cut dead weight and streamline and automate operations. To add new talent with up-to-date skills. To cut our benefits slightly to free up money to invest in the company's future. Everyone wanted this until it was their benefits or their job being automated. I followed through with the cause, and at the end I felt like a Judas figure and packed up and left in shame.
You could say it was a Pyrrhic victory for sure, but after that I'm very wary of setting anything in motion that's too heavy for me to stop on my own.
I agree with a lot of your sentiments and am only responding to rubbish your psychopath claims. I don't think you need to worry, particularly if you don't exhibit cruel or violent tendencies. You demonstrate concrete moral reasoning, even if at odds with others, so not sociopathic. "Psychotic", perhaps, but your reasoning seems lucid enough. The one reservation I have is about your "omega man" mentality: you could be suffering unnecessary mental anguish as a result. Also, if I were operating an online community, I'd be somewhat concerned by your overt circumvention of moderation checks and balances using throwaways etc. However, I think you're submitting perfectly valid opinions in a respectful way, and I share your unease at how there seems to be a groupthink at play shaping the quality of discussion.
Hahaha, psychotic-ish ramblings are fun to write sometimes though :). I'm about to retire this throwaway, so IDGAF about what I'm writing even more than usual.
Omega Man is an interesting term, never heard of that before. You're totally right that it's how I try to operate but only when I'm doing controversial things. Perhaps I'm doing it right if I seem to be going about it in the most quiet and passive way possible :) .
You don't have to worry about me running any communities online. I'm a productive member of a bunch of online communities, including HN, and I don't use my throwaways to respond to, upvote, or otherwise sockpuppet my regular account -- except a couple of times I admittedly may have upvoted the same thread on different accounts by mistake. Most of my less opinionated stuff is under my real name.
The only reason I respond sometimes is because I disagree. Sometimes my controversial opinions prove to be a lot more popular than I thought. And possibly miraculously, all of my throwaways eventually gather substantial positive karma despite the fire and brimstone rained upon some of my comments :)
This reduces to the old argument that the Four Freedoms model of open-source software is basically moot in a world where the value of software is dominated by network effects, not modifiability.
It continues to be a weakness of the Four Freedoms model.
> This was also before the Android kernel heavily diverged from mainline Linux, and before Google Play Services grew from a tiny app into a framework that powers half the OS features.
How diverged is it? Would they ever be merged back together?
Last I checked, some devices were running mainline kernels many years old -- 2011-era -- with zero code contributed back to mainline. One of the other posters mentioned rampant hacks to the kernel to get things to work in stupid ways, which I've heard a lot about as well.
Android is missing a ton of new Linux features on many devices, and the kernel is becoming increasingly unusable on ARM devices in vanilla form because of these badly done third-party modifications.
If Dart ever became the real thing, it would have to be supported not just in Google Chrome, but also in Firefox, Edge, Safari, etc. At that point Google would lose its control.
Microsoft is lobbying to get their favorite syntax into ES6/ES7. Who wanted the class syntax? Etc. TypeScript and WebAssembly are part of their plan. Ultimately, they want to recompile their 27-year-old Office codebase from C/C++ to the web browser.
To be fair, it seems to me like the typical webdev coming from C#/Java really wants the class syntax. I disagree with it, but I don't think it's just MS that's pushing it through, and even if it is, there's definitely an audience for it.
I've used a ton of languages over the years and vastly prefer Java-style syntax when working on larger projects. The forced organization tends to lead to some level of mandatory code clarity -- something greatly lacking in JS land.
OO is a bad word these days and functional is all the rage, even though functional languages were largely superseded by OO languages eons ago, for many reasons people are slowly rediscovering.
There's a huge push to put more structured language concepts into js now that it's being used for substantial projects and it's out of necessity more than convenience.
When I'm hacking together a quick Python script, all that stuff gets in the way, but when working on larger systems, strong typing and object syntax are practically a necessary evil for maintaining readability.
> strong typing and object syntax are practically a necessary evil for maintaining readability
No, it's not like that.
You can write readable code in any language as long as you can write readable code. It sounds tautological, but what I mean is that the ability to write readable code is a skill separate from writing code or knowing a particular language.
Strong static typing -- like just about any tool or language feature -- can have both good and bad effects on code readability. In the end, readability (and with it maintainability and other related metrics) depends for the largest part on the skill of the particular developer.
Both OO and FP techniques, as well as all the language features, are the same. You can misuse (or ignore) them all.
What we need is to make an "average developer" better at writing code, not more bondage and discipline in our tools. The latter is (a lot) easier, so that's where we focus our efforts, but - in my opinion - it's not going to solve the problem.
Erlang and Common Lisp have been around for a long time, and functional programming is nothing new.
The reality is that most business problems map conceptually to communication between objects, and that IDEs -- which greatly help developer productivity -- work a lot better with objects.
Functional programming has origins in lambda calculus and academia because mathematical problems map more easily from pure math to functional programming. It's really popular in the circles where it's more useful/easier than OO.
Honestly, I don't think the people who chose OO for most business languages 20 years ago did so out of ignorance. They had a choice and decided that OO was better for business-problem-solving languages like Java, even though a large majority of programmers from that era were math majors familiar with functional syntax.
I feel like we're in one of those cycles where a large number of a previous generation have retired and it's time to learn some of these lessons all over again.
Notice how many wooden commercial buildings have gone up in the last 15-20 years? A lot -- just long enough after the great city fires of WW2 for everyone involved to be too dead to object.
Who "choose" OO 20 years ago and (much more importantly) why?
I'm going to ignore the social component... That said, we work in a wonderful profession where the world changes completely every decade, and many design decisions from the previous generation make no sense anymore. The business case for developing your application in COBOL rather than Common Lisp may have been sound 20 years ago, but today many of the reasons you didn't choose Lisp are invalid (e.g., garbage collection takes milliseconds rather than seconds).
Note that this is not the case in more mature fields such as construction.
The movement from ownership to renting on the web is absolutely terrifying to me. Within the span of a few years we've gone from owning our technology to renting it from big players for monthly fees that we cannot completely predict or control.
The advantages of owning your own hardware will never go away, but soon this will be made quite intentionally impossible as the big players coalesce and continue building their walled gardens.
This is already happening. All the big players own their hardware and rent it out to everyone else, while trying to convince everyone it's not worth owning your own hardware at the same time.
These companies have already begun closing off server platforms by developing custom hardware and software systems that cannot be bought for any price, only rented. These systems represent a new breed of technology with unbreakable vendor lock-in.
These same companies compete with each other and countless other companies across the space. Take for example a start-up that wants to run its own app store. Google, Amazon, and Microsoft all run app stores. Where will this company go for cloud services? Their only big-name options are to host their software on the hardware of a direct competitor. Their host has full visibility into how their system works, and control over the pricing and reliability of their machines.
It's laughable to think their "cloud partner" will give them any chance to compete if they enter the same market.
We've seen UEFI BIOS and un-unlockable mobiles enter the market in droves over the last few years. A lot of new PCs can't run anything except Windows. A lot of new phones can only run the carrier's version of Android. We have all these general-purpose CPUs that can no longer run general-purpose programs because "security", and a lot of lobbyists pushing to make it actually illegal to run your own software on them with "anti-tampering" laws -- again, for "security". Soon the big guys (the same companies, MS and Google) will make it impossible to run your own software on any reasonably inexpensive device, and the walled market will be complete.
Mark my words: I've never seen an industry with a couple of big players where growth and innovation doesn't eventually turn into collusion, higher prices, and market stagnation. Once MS, Google, and Amazon have their slice of the pie and they've killed off everyone else, we will see the death of general-purpose computers and mobile devices. Everything you buy will be an "Android computer", a "Windows computer", or an "Apple computer". Anything general purpose will be massively more expensive, because individual companies can't get the kind of volume discounts available to the giant behemoths that increasingly control large swaths of the world's computing power.

We've already seen the endgame, with Amazon trialing an "on-premises" version of their compute platform, which is basically a super locked-down server that you can't buy, only rent endlessly. The future of on-premises will be a cloud in a black box if these companies have anything to do with it. Why? Because once they've got you locked in, it makes no sense to sell you anything for keeps. Why keep improving their product so you buy the new version when they can just make it incompatible with everything else and force you to rent it forever, for whatever price they feel like charging?
One day running your own servers will be like running your own ISP: massively impractical because the free market has been manipulated to the point that it effectively no longer exists.
> One day running your own servers will be like running your own ISP: massively impractical because the free market has been manipulated to the point that it effectively no longer exists.
What? People use cloud computing because it already is massively impractical to run your own servers. Hardware is hard to run and scale on your own, and it benefits from economies of scale. This principle is seen everywhere and can hardly be viewed as controversial. Walmart, for instance, can sell things at a really low price because of the sheer volume of their sales. Similarly, data centers experience economies of scale.
As someone who cares about offering the best possible, reliable user experience, I see cloud computing as the next logical step from bare-metal on-prem servers. When your system experiences load beyond what it can handle, a properly designed app with independently scaling microservices simply scales horizontally.
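To make "independently scaling microservices" concrete, here's a toy sketch. The names and numbers are made up, but the proportional rule is the same shape as the one Kubernetes' horizontal autoscaler uses:

    // Toy sketch of independent horizontal scaling (my illustration, not
    // any particular provider's API). Each service watches its own load
    // metric and adjusts its own replica count, so only the hot service
    // scales out.

    interface Service {
      name: string;
      replicas: number;
      cpuUtilization: number; // current average utilization per replica (0-1)
    }

    const TARGET_UTILIZATION = 0.6; // assumed target: 60% CPU per replica

    // Same proportional shape as the Kubernetes HPA rule:
    // desired = ceil(current * currentUtilization / targetUtilization)
    function desiredReplicas(svc: Service): number {
      return Math.max(
        1,
        Math.ceil(svc.replicas * (svc.cpuUtilization / TARGET_UTILIZATION))
      );
    }

    const checkout: Service = { name: "checkout", replicas: 4, cpuUtilization: 0.9 };
    const catalog: Service = { name: "catalog", replicas: 4, cpuUtilization: 0.3 };

    console.log(desiredReplicas(checkout)); // 6 -> scales out under load
    console.log(desiredReplicas(catalog));  // 2 -> scales in, independently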
Even if you had a state-of-the-art microservice architecture running on a Kubernetes cluster on your own hardware, you still wouldn't be able to source disk/CPU fast enough if your service happens to experience loads beyond what you provisioned.
And there's the rub: buying your own hardware costs money, and no one wants to buy hardware they may never use. Another advantage of cloud computing.
You are seeing the peak of the free market right now because of cloud computing, which enables people with little upfront cash to form real internet businesses and scale massively.
You think a game like Pokemon Go could exist and do the release they did without cloud computing?
"Even if you had the state of the art microservice architecture running on a kubernetes cluster on your own hardware, you still wouldn't be able to source disk/CPU fast enough if your service happens to experience loads beyond what you provisioned." That basically means you never planned. As everyone moves to cloud what makes you think AWS, Azure wont have same issue. If entire region is down do you think other regions can handle the load. If you think so you're kidding yourself. Unless you have business where you dont know your peak number then cloud does not matter.
You can plan all you'd like; failures happen not necessarily due to poor planning but because, in real life, shit happens. Pokemon Go, for instance, experienced something like 50x the traffic they planned for.
Secondly, software companies like Microsoft, Google and IBM might know a thing or two about running data centers. Due to economies of scale, these companies are inherently in a better position to supply hardware at scale.
> If an entire region is down, do you think the other regions can handle the load? If you think so, you're kidding yourself
Netflix routinely does just this to test the resilience of their systems. They pick a random AWS region, and they evacuate it. All the traffic is proxied to the other regions and eventually via DNS the traffic is routed entirely to the surviving regions. No interruption of service is experienced by the users.
Here's a visualization of Netflix simulating a failure in the us-east-1 region and failing over to us-west-1/us-west-2.
The top right node is the one that fails. As the error rate climbs, traffic starts getting proxied over to the surviving nodes, until a DNS switch redirects all traffic to the surviving nodes. Netflix does this monthly, in production. They also run https://github.com/Netflix/SimianArmy on production.
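For what it's worth, the sequence is simple enough to sketch. This is my own reconstruction of the two-step mechanism described above; the regions, error rates, and threshold are illustrative, not Netflix's actual tooling:

    // Sketch of the evacuate-a-region sequence: proxy first, then flip DNS.

    type Region = "us-east-1" | "us-west-1" | "us-west-2";

    const errorRate: Record<Region, number> = {
      "us-east-1": 0.42, // the failing region
      "us-west-1": 0.01,
      "us-west-2": 0.02,
    };

    const ERROR_THRESHOLD = 0.1; // assumed trip point

    function evacuate(failing: Region): void {
      if (errorRate[failing] < ERROR_THRESHOLD) return; // nothing to do
      const survivors = (Object.keys(errorRate) as Region[]).filter(
        (r) => r !== failing
      );
      // Step 1: proxy in-flight traffic from the sick region to the
      // survivors, so existing sessions keep working while DNS caches drain.
      console.log(`proxying ${failing} traffic to ${survivors.join(", ")}`);
      // Step 2: repoint DNS so new clients resolve only to the survivors.
      console.log(`DNS now resolves to: ${survivors.join(", ")}`);
    }

    evacuate("us-east-1");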
The cloud enables fault tolerance, resiliency and graceful degradation.
I think you missed the point, Netflix evacuating a region is not the same thing as that region failing. If the whole region goes down, their (AWS's) total capacity just took a major hit and unless they have obscenely over-provisioned (they haven't), shit is going to hit the fan when people start spinning up stuff in the remaining regions to make up for the loss.
Have you run your own servers in a colo? I've done it myself.
One person, with maybe 3 hours a week of time investment after a few weeks of setup and hardware purchase. Using containers I can move between the cloud and my own servers seamlessly, as long as I never bite the golden apple and use any of the cloud's walled-garden "services" like S3. If I need more power I can spin up some temporary servers at any cloud provider in a few hours. For me the cloud is a nice thing because I don't use too much of it. If AWS disappeared tomorrow it would be a mild inconvenience, not devastating like it would be to many newer unicorns.
Go ahead and try to use the cloud you're paying for as a CDN or DDoS shield, or anything amounting to a bastion of free speech. You'll quickly find out that your cloud provider doesn't like you using all the bandwidth and CPU you pay for, and they don't like running your servers when they disagree with your views. They quietly oversubscribe everything, pulling the same crap as consumer ISPs who sell you a 100 Mbps line and punish you if you use more than 10 of that on average. That's the main reason the cloud is so cheap.
Hardware is cheap, colos are cheap, and software is largely easy to manage. The economy of scale they enjoy comes from vendor lock-in and oversubscription more than anything else.
Is it really that hard to double the number of servers you own every few weeks? No! If you're using containers or managed KVM you can mirror nodes basically for free over the network as soon as the Ethernet is plugged in. Your time amounts to what it takes to put the thing in a rack, plug in the Ethernet, and hit the "on" button. Everybody in SV land thinks you have to use the cloud to "scale massively", but they forget that all of today's technology behemoths were built years ago when the cloud didn't exist. Oh yeah, they all still run their own hardware too, and have from the early days. Using their model as a template, you should own every single server you use and start selling your excess capacity once you get big enough.
Did you ever read about how Netflix tried to run their own hardware but can't, because they have so much data in AWS that it would basically bankrupt them to extract it? Look at how these cost models work. Usually inbound bandwidth is extremely cheap or free, but outbound is massively more expensive than a dedicated line at a datacenter, 50-100 times the cost if you're saturating that line 24/7. The removal fees from a managed store like S3 or Glacier are even more ludicrous. The cloud is like crack, and as soon as you start using it more than a few times a year you will get locked in, unable to leave without spending massive $$$. Usually companies figure out this shell game once they're large enough, but by then it's far too late to do anything about it.
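A rough back-of-envelope, where every price is an assumption for illustration rather than anyone's actual rate card, shows how a figure in that 50-100x range can fall out:

    // Back-of-envelope: saturated 1 Gbps outbound for a month, per-GB
    // cloud egress vs. a flat-rate unmetered colo line. All prices are
    // assumptions for illustration.

    const SECONDS_PER_MONTH = 30 * 24 * 3600;
    const gbPerMonth = (1 / 8) * SECONDS_PER_MONTH; // 1 Gbps = 0.125 GB/s -> 324,000 GB

    const cloudEgressPerGB = 0.09; // assumed $/GB cloud egress
    const coloFlatRate = 500;      // assumed $/month, unmetered 1 Gbps at a colo

    const cloudCost = gbPerMonth * cloudEgressPerGB;
    console.log(`cloud: ~$${Math.round(cloudCost)}/mo vs colo: ~$${coloFlatRate}/mo`);
    console.log(`ratio: ~${Math.round(cloudCost / coloFlatRate)}x`); // ~58x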
Why are they marketing these things so heavily to startups? Because lock-in is how they make their money. They make little or nothing on pure compute power, but since you don't have low-level hardware access they can charge whatever the hell they want for things like extra IPs, DDoS protection, DC-to-DC peering, load balancing, and auto scaling. They give massive discounts to new players using these systems, and inevitably some of those players will become the next Uber or Netflix. Then they are free to charge whatever exorbitant rates they please, once moving is so impractical that it would require a major redesign of the business.
I see it a lot like franchising. By building on Amazon's cloud services you become "Uber company brought to you by Amazon". Like franchising, your upside is limited because any owner with a significant share of total franchises will begin to put pressure on the service owner itself.
To be honest, you sound like a conspiracy nut hell-bent on hating the Cloud. Maybe you should take a deep breath and try to open up to the possibility that the Cloud is actually a good thing, and Cloud providers aren't the Illuminati trying to "lock you in". Well, maybe they are. Of course every cloud provider wants you to use their services.
You can architect your system in a way that it'll run on any cloud provider. All the major cloud providers support Kubernetes for orchestration.
To be honest, I don't think you know what you're talking about. You should refrain from posting uninformed opinions on Hacker News, especially from a throwaway.
> Did you ever read about how Netflix tried to run their own hardware but can't because they have so much data in AWS that it would basically bankrupt them to extract it?
Where did you read this? You can have Amazon send you a truck full of hard drives. I doubt it costs more than Netflix can afford.
Never mind, I misremembered the story I read about them. They moved the main site to AWS, with the huge omission of their movie streaming system. Their own Open Connect servers are far cheaper to use for that because of massive AWS outbound data costs.
Also, the truck is for data in, not data out. Getting data out of AWS is far more expensive than putting it in. That's the lock in.
You never owned your own globally consistent, massively scalable, replicated database. The fact that you can now rent one by the hour is strictly an improvement for you, if you need that kind of thing.
Cassandra also does that, without requiring the "magic" of a system you can only get from a single vendor and never buy. Over the same period that these walled gardens have come up, free software has grown to fill the gaps.
Spanner is unique in a lot of ways, but it still trades off speed for that consistency.
The most distinctive thing about Spanner is its use of globally synchronized clock timestamps to guarantee "comes before" consistency without the need to actually synchronize everything.
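The trick is sketchable in a few lines. This is my loose reconstruction of the commit-wait idea, not Google's code; the epsilon value and names are assumptions:

    // TrueTime-style commit wait, loosely reconstructed. ttNow() returns
    // an interval guaranteed to contain true time; epsilon is the clock
    // uncertainty bound.

    interface TTInterval {
      earliest: number;
      latest: number;
    }

    const EPSILON_MS = 4; // assumed uncertainty bound (GPS + atomic clocks)

    function ttNow(): TTInterval {
      const t = Date.now(); // stand-in for a disciplined hardware clock
      return { earliest: t - EPSILON_MS, latest: t + EPSILON_MS };
    }

    // Pick a timestamp no earlier than any moment the transaction could
    // have committed, then wait until that timestamp is definitely in the
    // past on every machine. Any transaction that starts afterwards must
    // get a strictly larger timestamp, so "comes before" holds globally
    // with no cross-datacenter coordination on the commit path.
    async function commitWait(): Promise<number> {
      const commitTs = ttNow().latest;
      while (ttNow().earliest <= commitTs) {
        await new Promise((resolve) => setTimeout(resolve, 1));
      }
      return commitTs;
    }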
There is nothing stopping startups and open source developers from building the same thing in a few years. The missing ingredient is highly stable GPS and local time sources, which will hopefully be available on cloud instances sometime soon. This is a new piece of hardware, so it will be interesting to see if cloud providers make one available or use the opportunity to sell their own branded "service" version you can't buy. Unfortunately I think we'll see the latter far before the former, if the former ever exists at all. Without a highly stable time source, doing what Spanner does is completely impossible.
Yes, Spanner is special right now, but that's even more reason not to go near it. Google has a complete monopoly on it, the strongest vendor lock-in you can possibly have.
Only "new" in the sense that it is currently not commonly offered, the devices themselves have been available for ages. (If you are a large enough customer you apparently can get at least some colo-facilities to provide you with the roof-access and cabling needed for the antennas). If cloud providers make precise time available I don't see much potential for locking you in with their specific way of providing it, as long as it ends up as precise system time in some way.
I'm saying I doubt they will ever offer it, precisely because it would conflict with their paid offerings. The fact that it takes its own hardware is a great excuse not to give your customers the option.
I know GPS time sources have been available forever, but a fault-tolerant database needs a backup. The US GPS is incredibly reliable, but there have been multiple issues with both GLONASS and Galileo.
It sounds like Google has an additional time source making this possible, probably a highly miniaturized atomic clock, possibly on a single chip. There's no way they're running on GPS alone.
Yes, they clearly say that they use atomic clocks in addition, but those are commercially available as well: an atomic clock for short- to mid-term frequency stability, GPS to keep it synced to global time. In many cases, mobile-phone base stations contain just such a setup, and the data-center versions should fit in a few rack units.
A system built on top of it? Possibly, but that's the trade-off if you don't want to pay for/be locked in to somebody else running it. For just the timing stuff: not really. Of course it adds complexity, but these things are established and should be quite stable.
You can get around this with some fairly simple hacks. Write some JavaScript that evals a part of your page, or something crazy like loading part of itself from a ROT13 text file. Have this JS generate an ID you can identify as 'real' or 'fake'. Filter your analytics by this ID. If you want to be extra funny, make real and fake IDs look indistinguishable to human eyes.
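Something like this, as a rough sketch. The HMAC approach and all the names here are my own illustration; the eval'd/ROT13'd JS is just one way to make the real-ID generator annoying for a bot to find:

    // Sketch: "real" IDs carry a keyed checksum; "fake" ones are plain
    // random bytes. Both are 24 hex chars, indistinguishable by eye.

    import { createHmac, randomBytes } from "crypto";

    const SECRET = "rotate-me-sometimes"; // assumption: server-side secret

    function checksum(payload: string): string {
      return createHmac("sha256", SECRET).update(payload).digest("hex").slice(0, 8);
    }

    // Issued only by the obfuscated JS on a real page load.
    function realId(): string {
      const payload = randomBytes(8).toString("hex"); // 16 hex chars
      return payload + checksum(payload);             // + 8 hex chars
    }

    // What a bot that skips the JS ends up fabricating.
    function fakeLookingId(): string {
      return randomBytes(12).toString("hex"); // also 24 hex chars
    }

    // Analytics side: keep only hits whose checksum verifies.
    function isReal(id: string): boolean {
      return id.length === 24 && checksum(id.slice(0, 16)) === id.slice(16);
    }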
99.9% of spammers are too lazy to spend any time figuring this out for a single site, and their tools won't even tell them their spam isn't working. I've gotten away with adding a simple static ID to everything, and except for really large, juicy targets, spammers don't even waste time on this.
I think the reputation of whoever clicked it also matters, even though it isn't publicly stated. If a few people with a lot of upvotes got to it, it could hit the front page fast.
The spillway is kind of "around the corner" from the main dam, so I would guess the latter is probably not "doomed," at least at this point.
A realistic disaster scenario would be if the erosion retreats all the way back to the main spillway gates, or if it undermines the emergency spillway. The berm there, which holds back maybe 15 or 20 feet of lake altitude, would then fail. A volume of water equal to that depth times the surface area of the lake would then exit over the hillside, completely uncontrolled. That would be more than enough to cause a serious downstream disaster.
The water flows would be very high (~a million CFS? total guess) which would cause even more severe erosion on that hillside. At that point the main dam might be at risk if the erosion traveled far enough laterally.
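To put very rough numbers on that depth-times-area reasoning (the surface area here is an assumption for scale, not an official figure):

    // Back-of-envelope for an uncontrolled release: depth times surface area.
    const lakeSurfaceAcres = 15_000;      // assumed rough lake surface area
    const releasedDepthFt = 20;           // the ~15-20 ft the berm holds back
    const CUBIC_FT_PER_ACRE_FT = 43_560;  // 1 acre = 43,560 sq ft

    const volumeFt3 = lakeSurfaceAcres * releasedDepthFt * CUBIC_FT_PER_ACRE_FT;
    // => ~1.3e10 cubic feet (~300,000 acre-feet)

    const flowCfs = 1_000_000; // the "~a million CFS" guess above
    const hours = volumeFt3 / flowCfs / 3600;
    console.log(hours.toFixed(1)); // ~3.6 hours of uncontrolled flow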
Indeed, 13 times more dams than France (which prides itself on 12% hydroelectricity), while the USA is only 5 times bigger in population. China has the most large dams in the world: more than twice as many as the USA, for 4x the population.
The damage to the main spillway is extreme. Look at the pictures; it's more of a semi-controlled waterfall at this point. If erosion moves upstream on the spillway, which it eventually will, once it reaches the gates they will need to shut the spillway off or risk the dam itself.
Their emergency spillway is already unusable, so this means just getting everyone out and waiting for the dam to blow once the main spillway erodes too far back.
Holy shit look at the pictures. Both the regular and emergency spillway are heavily damaged and even more rain is expected Tuesday and in the coming weeks.
They shut the main spillway off because of damage, and now they're running it at maximum flow because the emergency spillway is even more damaged. This feels every bit like a Hail Mary pass to drain the lake as much as possible before the dam breaks. In another day or two they won't have any spillway left.
This dam is doomed, and it seems like one of the rare times the media is downplaying the risk... probably because a lot of people are going to die :(
If the emergency spillway goes, that's 30ish lake-feet of water that would drain uncontrolled. That would likely wipe out a few towns down the river.
I think gp was getting at the fact that the main spillway is eroding as well, and it could reach a point where it is unusable. In that case, with significant rainfall expected in the next week and a good deal of snowmelt to follow, there could be a huge amount of water coming into the basin with no way to control the outflow, which could result in much worse flooding.
Hopefully that all can still be avoided. It's a bit dramatic to say it's inevitable. It's a possible scenario, but it's also possible the right plan avoids it.
The emergency spillway was used because the main one was already badly damaged, so they shut it off. Now they're saying the damage to the emergency spillway is even worse than the main spillway, so they're turning the main spillway on full tilt to drain below the level at which the emergency spillway activates.
The thing is... they drained below the emergency spillway level many hours ago and it's off, but they're still running the heavily damaged main spillway at full tilt. I think it's a last-ditch attempt to drain the lake before the dam fails; they just don't want to say it because everyone will panic.
The biggest scam is actually the other way around. Intel pulled ECC support from desktop processors a few years ago to force datacentres to buy their Xeons instead.
It's nearly impossible to find a desktop CPU that supports ECC ram now even though 5 years ago it was commonplace.
Trying to run a NAS with some sensitive data is now impossible unless you buy their server chips.
Sorry, maybe not a NAS, but my point stands. Almost any chip that has a server equivalent has ECC disabled either in the chip or in the motherboard chipset. This was never the case until a few years ago, and it's the only example I've ever found of Intel removing a feature exclusively from their desktop chips. I don't see any other reason for this except to force people to buy Xeons for their servers.
I keep Facebook around so old friends can contact me, and that's it. I used to use it every day, back when it was a way to communicate with friends and family and share pics of your experiences.
Eventually it became a place for people to show off or bitch at each other because it's easier than doing it in person.
The ranking algorithm prioritizes annoying shit that will maximize ads and "engagement" over things I actually want to see, like pics from my friend's wedding.
Fuck Facebook and Messenger. I log in about once a week to check my messages and immediately log out. Snapchat now is a lot of what Facebook used to be.