I've been at Google for 3 years and have 20%ed the entire time I've been there on: grpc-go, Drive, and Go tools (gopls, etc).
I think it's fantastic. The whole 120% thing is up to the individual: there have been times I've made it a 120%, and there are times when it's been just "take a friday off to work on other stuff". You end up getting less of your "job" done but my managers have always been supportive.
It's been great for sanity: some weeks/months feel just, like, meetings and chore work. It's great having that one day a week to work on a rockstar feature request in some fun project. It's also cool to work on your dream projects without the luck/physical move/whatever to get on the actual team. (you can effectively work on anything since no project is going to say no to free headcount)
It's also nice because it spreads your professional network in the directions you choose to spread it, rather than the more organic spread that your normal job entails (assuming luck and availability are big drivers of which projects you "end up" working on, rather than it being 100% your choice). So, maybe I don't work on project X today, but I can 20% on it and build up those connections, and later in my career I have a much better shot at getting on the project. That agency is a nice feeling.
So, as far as the employee happiness goes, I think it's fantastic.
The myth that 20% time is dead was started by mchurch. He was at Google for six months and was let go. He then went on to badmouth the company here and elsewhere online.
20% time at Google exists and managers are supposed to adjust the workload - it's not supposed to be 120% time. That said, I think it would be hard (but not impossible) to launch a 20% project to external users that doesn't have people working on it full-time.
I wasn't aware of anyone saying it was dead, but the narrative has shifted over time from "work on whatever you want one day a week" to "your team might let you spend up to one day a week working on something of your choosing if it's in the interest of the company". I'm not at Google, and never have been, but it's almost certainly the case that the narrative has changed - whether or not there was ever a change in policy.
I don't think things have changed that much, only Google grew so big that 20% projects became a potential damper on your career (the incentives of perf and promo and all that horrible stuff). But the freedom has always technically been there.
I am a tech lead and find it surprising that 20% projects could dampen your career, perf & promo etc. Where I work, a well-executed 20% project is seen as very positive during the perf review. In fact, doing zero 20% projects for a long period of time could actually dampen your career.
But if nothing came of the 20% work, you could've done 25% more on the core job, doing positive performance things there. I'm pretty sure that's what's meant about it being a career dampener, not that it wouldn't be great to hit on something fantastic and accepted as a new core product with thousands working on it full-time. (IANAG.)
Everything is obvious in hindsight. 20% projects carry a risk; failing one or two is fine. Failing many is seen as a lack of judgment and does dampen your career eventually. My point was that, with reasonable judgment, an engineer can get good stuff done with a 20% project. Also, not every 20% project needs to become a fantastic new core product to be considered a success.
Also implies a widely varying experience with the program; levels of promotion politics must vary across the company, and levels of recognition for anyone's particular project must also vary.
> the narrative has shifted over time from "work on whatever you want one day a week"
This was never the narrative, except maybe when the company was a few hundred people. The project has to have a relevance to the company, just not to your primary project. Most 20% has always been chipping in on a team whose problem space interests you and that you might want to transition to. That's definitely what it has been for me.
It was a shame that he overstepped the mark that one time, because he had a lot of sensible and deeply insightful things to say. He is literally one of the best essayists in tech.
I will say that usually it is easy to move teams, but if the expertise of the team veers strongly from your background (web dev wants to work on C++) they might ask if you want to do 20% first to get a taste. It can also help convince a team to take a more junior hire, for example.
I’m not sure if it’s ever been an explicit requirement and if you have strong performance reviews it’s usually pretty well-oiled.
Real 20% time though I don’t think I’ve ever encountered.
Maybe if you work on a team without a high-traffic product (internal tooling?).
YMMV, that’s all I’m saying. Don’t come in expecting it or you’ll be disappointed. And that’s resonated with other coworkers of mine on other teams, too.
I spent 3.5 years at Google, and, like most things at large companies, 20% is very team dependent. Mchurch may very well have had the experience of being on such a crappy team, not that his record is a good indication.
She was lying to Yahoo employees to make them shut up. There's no nice way to put that. My opinion of Mayer just halved reading that, I had no idea she had been misleading Yahoo staff. What a pity.
I worked at Google for quite a long time and had multiple 20% projects. One of them went into production and now has (I'm told) a team of more than 20 people working on it full time, so the idea they can't go live to users or become real products is totally wrong.
A few things were consistently true when I was there even in 2006:
• Some people would claim 20% time didn't exist or was theoretical
• Other people would be simultaneously taking it and launching new products based on it
News started as a 20% thing. So did GMail, if I recall correctly. Google Sets, if you remember that. There were many, I'm just picking whatever examples spring to mind quickly.
Now, can I believe that at times teams were put under pressure and some managers asked people not to take it? Yeah, absolutely. I spent my time at Google on teams that were doing maintenance and operational work, first as an SRE and later doing a tour on the front line fighting spam and hijacking. Those are the sorts of things where there are no product-driven "crunch times" (except when there's an attack). So the culture there is maybe more conducive to side projects.
But the idea that it never existed at all is just a lie, sorry. It existed for me across multiple parts of the company and a span of nearly 8 years.
We should ignore the word of one of the company's earliest employees, who worked there for over a decade and became a VP just because she no longer works there?
Yes, because context is key. In that statement she was attempting to shape Yahoo!'s work culture as its CEO.
In contrast, I've been at Google since 2006 and have been utilizing 20% time since I started, and other Googlers here echo the same experience, and many of our projects are publicly available.
Not sure how her statement jibes with the experience and artifacts of others' work, other than to say it's not correct and was shaped by the context of the statement.
Google employs over 100K people. With that many people there are statistically going to be people with the most amazing and dreadful experiences of working at the place - none of them representative.
Well. They are representative of a wide variety of different work cultures within Google. And if one were to join it would depend on the team/manager one ends up with what part of the stories told would be representative to you.
As always in big corporations there is no one culture, but an amalgamation of many different subcultures. They can have a shared core, but even that becomes more unlikely the more a company grows. At least in my experience.
It was amazing how some people lapped up everything he wrote about Google (case in point: apparently a lot of people don't believe 20% time exists at all any more), I guess because he was so prolific (it was hard for actual Googlers to keep up and rebut everything), and he was playing into some confirmation bias.
The confirmation bias still exists today, too. On HN I frequently see users claim that Google sells user data, but that isn't true; user data is only used for ad personalization.
I also frequently see the claim that "Don't be evil" has been removed from Google's code of conduct, but it's still clearly visible in the document.
I hope we develop better tools to combat misinformation in the next few years, because social media is far too effective at propagating it. For every person that verifies and corrects a claim, there's ten others who will gladly repeat it. This is especially problematic when there's a bias to exploit, be it political or otherwise.
"A lie can travel halfway around the world while the truth is still putting on its shoes."
> Google sells user data, but that isn't true; user data is only used for ad personalization.
That is selling user data. It’s just got a layer of bullshit in between.
As long as Google offers ways to target ads at specific demographics, they are selling that demographic indicator about you whenever you click on one of those ads.
If your data isn't being sold, then no, they're not selling user data. That's changing the meaning of language to make it sound worse than it really is.
You might argue that this data is being leaked, but that isn't clear either. Ads work on an auction system, and there's no guarantee that a user clicking an ad meets a demographic. You'd need to make a case that user data can be accurately built from buying ads alone.
Even if you could prove that, that's _still_ different than the claim that Google is selling user data directly.
> If your data isn't being sold, then no, they're not selling user data.
That’s just phrasing to attempt to weasel out of the fact that it’s selling the ability to derive the user data. It’s effectively the same thing with a layer of indirection.
> Ads work on an auction system, and there's no guarantee that a user clicking an ad meets a demographic.
Auction system is irrelevant. Unless Google is ignoring your target demographic and keywords entirely, they are selling you a stream of traffic that matches that demographic on average.
That's not a social media problem. The claim that Google sells user data came from the traditional news Big Media outlets like the NYT and WSJ. I remember when it started actually: it happened around the time Google News launched. The narrative in the media industry became very quickly "Google is making tons of money whilst we're going through huge layoffs" and the media spin went from highly positive to very negative at dizzying speed.
A major inflection point was when Rupert Murdoch gave a speech proclaiming the iPad was the future of news and Google was some sort of parasite sucking the blood of journalists. The tone of the output from his newspapers changed overnight and they immediately started digging around for largely fake 'scandals'. The rest of the news industry didn't need much persuading and the rest is history.
Claims you see on social media since then are largely just repeating the media's talking points. It's not like it originated there.
> Google sells user data, but that isn't true; user data is only used for ad personalization.
I mean, i guess that's a lot better in principle, but it still seems like basically the same thing with an extra layer of indirection. It certainly doesn't give me warm fuzzy feelings about google.
> Google sells user data, but that isn't true; user data is only used for ad personalization.
And other companies getting referrals can’t connect the dots when they know the user clicking on the AdWords Ad and what the Ad was for?
Companies that work with big data companies run AdWords campaigns in Google for other companies, right?
Even if Google isn’t evil and does everything in the interest of privacy, if their ads, based on your preferences from your emails and browsing history, are used to direct you to some product, there is a way that other companies will learn of those preferences, store them, sell them, and use them.
Google is targeting the ads, with each targeted ad they leak personal information about the users. Advertisers only have to pay and Google tells them who's matching whatever demographic they want. On the other hand print and billboard ads don't reveal personal information to the advertisers.
Also (and keep in mind I'm just repeating the rumor) wasn't it supposedly replaced with the slogan "do the right thing"? It was something else fairly harmless.
I’d never heard of 120% until this thread, but I immediately interpreted it to mean “you’re free to do your self directed 20% work on nights/weekends/your “own time”.”
It actually took me a second to see yours, but I guess you mean more of a “ya you can do whatever you want on fridays as long as you get 100% of work done by Thursday” vibe?
I think of 20% time as error correction for management. Engineers can route 20% of their time to what they think is important, and not what management thinks.
Nobody from management told me to rewrite the primitive array handling in the proto libraries for Java. It just irritated me that they didn't do what I thought they should, so I spent some time fixing it.
In my experience, 20% time is something that has management blessing. Otherwise it's just a risky side project where you're still accountable for 100% on your main assignment.
Similar experience. I 20% on Python... things. Some Python tooling; Python 3 was a major one; etc. I backed off recently just because most things I was involved in are wrapping up, but yeah, it was an active area for me. And I'm also sort of 20%ing on my own project that is greenfield tooling with only dubious management support.
It's given me exposure to lots of stuff, I've learned more about a favorite language, met some cool people, etc. Not to mention earned a handful of bonuses directly related to my 20% work.
There's also some tech debt reduction work, now done, that I've been tangential to that's clearly a value add, and it was primarily 20%ers.
Wondered if I could ask... is Google a relatively normal place to work? I currently do my own thing, but I would love to work on projects that go far due to the amount of talent involved... but at the end of the day I'm just concerned about "office politics". I get the vibe that you have to "pick sides", which might just be availability bias due to the news stories. I guess, relatively speaking, when talking with Amazon I get a different feeling. Recruiters probably aren't the best way to judge a company though, so figured I'd ask.
It's fairly normal for a big company these days: extremely slow pace, red tape all over the place, your level matters more than your skill. It's an engineering-led culture so there's a lot of focus on code nitpicking and purity (arguments over whether mocks are evil, etc).
Great place to work if you're high level (5+) and land on a good team with a good manager. Mind numbingly boring if not.
Evil enough that new mocks should be banned by fiat and enforced by tooling that requires rarely-granted permission to override, against projects that are 5 years old, already have hundreds of them in place, and don't have architectures that support better ways of testing?
It might be a nice thing to avoid them, but you're always working with legacy code, and I've literally had "I need to add a button to accept new permissions" blow up into "I need to refactor our entire class structure across 200 files because I'm not allowed to add a new test that follows our old patterns nor commit this code without coverage, and oh yeah, nobody wants to review that CL in one go so I have to figure out how to break it into 20 bite sized changes. There goes my quarter...". That's just dumb, and that type of dumb is very fashionable at Google.
This is a vast oversimplification of mocks. Mocks themselves aren't evil, they just enable subpar programming. I'd much rather spend 2 minutes making a functional test with a mock than 2 days rewriting a bunch of core functionality in the name of code purity.
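To make that concrete, here's a minimal sketch of the "2 minutes" version in Go; the Fetcher interface and all the names are made up, and everything is in one _test.go file for brevity:

    // greet_test.go (hypothetical)
    package greet

    import "testing"

    // Fetcher is the dependency we want to isolate in tests.
    type Fetcher interface {
        Fetch(id string) (string, error)
    }

    // Greet is the code under test.
    func Greet(f Fetcher, id string) (string, error) {
        name, err := f.Fetch(id)
        if err != nil {
            return "", err
        }
        return "hello, " + name, nil
    }

    // fakeFetcher is the two-minute mock: no framework, just the interface.
    type fakeFetcher struct{ name string }

    func (f fakeFetcher) Fetch(id string) (string, error) { return f.name, nil }

    func TestGreet(t *testing.T) {
        got, err := Greet(fakeFetcher{name: "gopher"}, "42")
        if err != nil || got != "hello, gopher" {
            t.Fatalf("Greet() = %q, %v; want %q, nil", got, err, "hello, gopher")
        }
    }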
I would never try to judge a work environment by the recruiters (unless the recruiters are unusually bad, which could suggest a toxic management culture). If at all possible, talk to actual engineers.
My two cents on Google and Amazon: Both companies are large enough that making blanket statements about company culture is a dangerous game. What you need to do is find out which silo you would be working in, and try to figure out what the culture is like in that silo.
One thing you can grill the recruiters about is what the employee review process looks like. Both Google and Amazon use stack ranking, which is about the most brutal system there is. At Google, feedback from fellow engineers factors heavily into your performance reviews. Amazon seems to be more focused on measurable performance goals and demonstrable contributions to the company's bottom line.
> Both Google and Amazon use stack ranking, which is about the most brutal system there is.
This isn't really true (at least at Google, IDK about Amazon). "Stack ranking" historically has two components:
1. Being rated relative to your peers as opposed to a rubric.
2. (Usually fast) removal of the lowest performing individuals.
When combined, these become "fire and replace the bottom 3% of people on every team every year." The downsides to this approach are that if you're on a strong team, you may be a median employee, but be the weakest on your team, and be forced out for this reason. It's clearly problematic.
IDK about amazon, but the first is only half present at Google, and the second isn't really at all.
In some sense you are rated relative to your peers, not a rubric, but only in the aggregate (that is, in an organization of 1000 people, you'd expect ~30 people to be in the "worst 3%" category, so if only 10 are, that may raise some questions). This removes the competition and potential animosity with your direct peers and aligns more with the rubric-based idea, though it is possible for an organization to have across-the-board above-average (or below-average) performance on occasion.
For the second, people with NI ratings aren't immediately fired (I know of a few people who got NI and found new teams and much more happiness), though obviously consistent NI could result in that.
IMO it's easy to avoid politics, activism, and the associated drama if it doesn't interest you. Most people seem uninterested in that stuff. I think you're spot on with the availability bias assessment. It exists and there are conversations happening all the time, but nobody expects you to participate.
Amazon has a more singleminded focus on succeeding in the business and making money. This can be good or bad depending on your personality and goals.
I think it's incredibly healthy to encourage that sort of "visitor" committer in an enterprise codebase, for a couple of reasons.
The problems of getting an existing employee up to speed with the codebase are a subset of what you need to do to get a new hire up and running. And so you really should have a decent story for this regardless.
And culturally, it's super-valuable to have a certain percentage of visitors, since it helps break down the silos, one commit at a time.
The counterpoints, of course, are that no team really wants to have to maintain a bunch of crap written by someone who's moved on after a quick stab at something, and it sorta sucks for the sustaining team if all the "rockstar feature requests" (as the GP put it so succinctly) are picked up by folks in their 20% time.
The former can be mitigated with a good model for managing incoming pull requests, in my experience. The latter is a tougher nut to crack.
Particularly at Google, programming languages, code styles, common libraries, and tools are incredibly similar between teams. There are really low barriers to jumping into someone else's codebase. It's not uncommon to review changes to your team's codebase from people you have never met (on average, maybe weekly?).
Anecdotally I also think a lot of code was generally really well documented. There is some effort to document things for a general Googler audience (i.e. you should understand most of the documentation on any given page without having to read all the other docs).
All in all, once you have onboarded into the Google ecosystem, it's fairly painless to jump around the codebase.
Others have answered this better than I, but I'll tell you what I've seen:
- grpc-go and gopls had great "Contributors welcome" issues set up. For both of them, I had to email/chat various people for help getting used to the code, but everyone was extremely pleasant and helpful (and happy to help).
- Drive treated it like an internship, where a TL curated a set of low-priority issues and I just went and chatted with folks when I had questions. Again, everyone was very helpful.
- Other Go tools I've worked on have had really intuitive codebases, or were quite small, and have been easy to dive right in. That's helpful too, though I do end up chatting with people a lot.
If I had to make a general statement I guess I would say: don't allow 20%ers if you don't have the time for an intern. Treat the 20% like an internship: free labour with a little bit of extra onboarding. It's ok to not have that time, but I think most people are usually happy and excited to provide that help! =)
Oh, actual tip: having a myproject-users@ / myproject-devs@ mailing list, or a #myproject-devs chat channel, goes a long way. Then, chatting and asking questions can be informal and ad-hoc.
Well, Google values independence and so almost everyone is pretty independent. There is also consistency in the internal tools, libraries and infra used, so onboarding to the new team is just understanding their project. The 20% project that you work on is also usually a task that someone can work on independently and isn't hi-pri stuff which makes it pretty easy.
TL;DR - nobody has to hand-hold anybody onboarding as a 20%er - just answer basic questions and point to the right resources.
Having formerly worked on 20% projects at Google and now working at a small company, I'd say that 20% time can be valuable at any size.
For smaller companies it's easy to essentially always have feature work which means never really having time for code health; anything that wasn't done right the first time will be hard pressed to justify getting done ever unless it's either breaking something or preventing features. 20% time can allow engineers to do something about issues they see and care about which could improve things for everyone else in a way "one more feature" may not.
We do something like this: Maintenance Monday. The dev team is just working on the bits of code they think need it, cleanup, test case, formatting, all those little things - that have no direct customer facing value.
And then we do a Feature Friday so folks can work on the fun new stuff (of their choice). It's all company-project related, tho - sometimes loosely - we don't yet have a business case for AR/VR or using templates on a Remarkable 2, but maybe - and they're used to spread knowledge around the team.
I imagine it's not appropriate for startups, since they're burning money fast and mental health appears to be less of a priority? I've not worked at one, so hard to say.
Besides startups, I think it's fair play at companies of all sizes. Employees are the most important thing to companies, and the overhead of losing trained, context-carrying talent tends to be heavy. So why not let them scratch that itch and keep them at your company?
I've seen some companies allow 20% on any team within the company (rather than any project whatsoever): that could be a nice middleground for a company that is unsure about the whole thing.
Not really; the consensus view within the company was that the justin.tv website wasn’t long-term sustainable and the company needed a narrower focus.
There were three segments with sizeable viewer counts at the time: sports, video games, and social streams. The people streaming sports probably didn’t have the necessary permission from the copyright holders, so that was only possible through the DMCA safe-harbor provisions; obtaining the rights ourselves didn’t appear to be a viable option.
The entire company basically split into two divisions: the social division turned into SocialCam, which had some moderate success, and the gaming division turned into Twitch.
Wow, that is great insight on their part. Usually my managers want to do MORE disparate things, and spread more thinly. It's nearly impossible to get them to cut anything and focus on a mission.
It definitely has more impact, IMO, in organizations that tend to specialize and get too big to know everyone.
I worked in a larger technology/shared services team where the execs set an expectation of “no email Friday” policy and encouraged peer learning and training on Friday afternoons.
It wasn’t 100% effective, but helped establish a personal development culture, got SMEs talking outside their “turf” and sparked a few good projects and staff transitions. (We discovered we had a change management guy who was passionate about kubernetes)
Where I work we don't get 20% but we do get 1 day a month (so about 5%). We're a lot smaller than Google.
When I first started with the company I used the time to gain experience using the products we make. That gave me more insight into our users' perspectives and paid dividends in future tasks like bug fixing.
Sometimes people use the time for small things that they want to add but aren't important enough to be scheduled. I work with people that are passionate about what we do so it's nice to be able to make improvements in the spots we care about.
Sometimes people will use the time to throw together a proof of concept for a bigger feature. The real feature might take months to get working to the point where it is stable and polished enough for an end user but you can get a lot of management buy in if you have something tangible that can show people what the feature could do.
I don't really understand the focus on the 20% thing. I have worked at different sized tech companies, from startup to medium sized, to giant. And I have worked in several capacities, from line engineer, to middle manager, to director. In my experience, it is impossible to measure an engineer's performance with an error of less than +/- 20%. I have worked on side projects that large, and my manager never noticed until I demoed the results.
Generally, if your manager is not shit and not under unusual time pressure, they should have no problem with a project that doesn't significantly impact the primary work and has some chance of benefiting the company.
Google is interesting as they have an explicit 20% policy, but in practice it doesn't really matter one way or another.
We used to do it at my previous company, maybe 30-40 engineers, but only 10%.
It didn't really work, in the sense that no project went anywhere.
Based on the feedback here, it seems to work if you can contribute to external projects, where it's easier to add marginal value.
In our case, everyone worked on the same project, so it was either develop some new features for the existing project, or create something brand new.
We would develop some POC, and every time the answer by management was "that's cool, but we don't have the resources to polish it up/bring it to market and it's low priority". It was pretty depressing to me.
I guess I was asking this because I've never worked at Google and wondered about any downsides - e.g. distraction (the 20% project would seem to be the "cool" project, which makes the 80% boring) or politics (could be anything from jockeying for position on a hot 20% project to something else I can't even imagine, because humans are petty).
I really don't get what you're saying here. You get paid for 100% of your time, you just get to work on something beyond your direct responsibilities that still benefits Google for 20% of it.
If you're alluding to "120% time", fair enough, but that's not what GP was referring to.
> If you're alluding to "120% time", fair enough, but that's not what GP was referring to.
From other comments, it's obvious that Google expects you to do your 100%, and then a 20% on top of that on "Google related projects", most likely owned by Google, for free.
I've never ever heard of that actually becoming a requirement.
What happens is that some managers really don't buy into the 20% concept, and don't provide time for it. Employees in those teams determined to do it anyway end up doing 120%.
There is absolutely no requirement at Google to put in time on a 20% project.
That said, what happens in practice is that 20% projects are the way that Google engineers are able to move across teams: You pick a team you want to join. You do something for that team as your 20% project with the goal of getting that manager to request your transfer to their team.
So if you're desperate to get off your team without quitting Google, you could get backed into committing 120%.
Former Google employee and Yahoo CEO Marissa Mayer once stated “I’ve got to tell you the dirty little secret of Google's 20% time. It's really 120% time.”[6]
Yahoo CEO and former Googler Marissa Mayer once bluntly denied its true existence.
“It’s funny, people have been asking me since I got here, ‘When is Yahoo going to have 20% time?'” she said on stage during an all-employee meeting at Yahoo. “I’ve got to tell you the dirty little secret of Google’s 20% time. It’s really 120% time.”
I don't see what Marissa Mayer's comments actually add to the discussion. She was obviously acting in PR mode as the CEO of a competitor. It's barely information.
At the time she made this statement, she represented Yahoo, not Google, and was making major changes in its culture to fit her direction. I wouldn't put much stock into these statements.
As a current Googler, I've been spending 20% of my time on self-directed work, and 80% on what is assigned since I started, and my contributions have been to both open/released software and also internal stuff, and I joined in 2006.
The 20% Project is an initiative where company employees are allocated twenty-percent of their paid work time to pursue personal projects. The objective of the program is to inspire innovation in participating employees and ultimately increase company potential. The 20% Project was influenced by a comparable program, launched in 1948, by manufacturing multinational 3M which required employees to dedicate fifteen-percent of their paid hours to a personal interest.[1]
Nothing there says anything about company-related projects; it only mentions personal projects. If you're talking about company projects, then it's not a 20% project. Even less so if you're talking about doing 100% of your company work besides.
I've got a feeling SV companies have hijacked the term 1) for PR bs, and 2) to get more work from employees.
It looks to me like the Wikipedia text is misleading, because I agree with your interpretation of the text.
Somebody should update Wikipedia...
I think that originates with Google's PR, because I remember the PR when the 20% initiative was introduced was misleading at the time too. To outsiders, I remember (~20 years ago) the PR gave the impression you could work on anything of personal interest, such as running or contributing to open source projects, your own programming language or editor or whatever for 20% of the week if you joined Google. Google would own what you did there so they would benefit (much like the 3M post-it notes thing), but other than that it was like paid personal-development and mind-refreshment time. But it's not like that and probably never was.
Denver voter here: we're very excited about this! Our neighbours in Longmont and Fort Collins both have similar situations, and we're jealous over here. Excited for some competition for the ISPs, and excited for our government to be, ya know, building things that help its people. =)
Fort Collins just started rolling our broadband out and we voted on it three years ago. It's been a little bit of a boondoggle, behind schedule and over budget, like most of these kind of projects tend to be I guess. It's been just within the last quarter that Fort Collins is finally getting the number of paying customers into the thousands which should help the financials some. They pulled the fiber throughout my neighborhood a couple of months ago but then it seemed to stall. Occasionally I see a work truck doing something at one of the boxes, but we haven't heard anything as far as being able to get hooked up yet. Our cost is $59.95/month for gigabit up and down so there are many of us chomping at the bit. For anyone interested in the rollout of a municipal broadband program, the quarterly financial reports Fort Collins has been releasing during their rollout are kind of interesting:
> Package-Side Problems
> I also find it problematic because it breaks (in my mind) one of the most useful things about module names – they reflect the file path.
Sorry, this is incorrect: https://golang.org/ref/mod#module-path. A module path describes where to find it, starting with its _repository root_ (github.com, golang.org, etc) and then the subdirectory in the repository that the module is defined in (if not the root).
So, if a module lives in golang.org/username/reponame/go.mod, its module path is likely golang.org/username/reponame. If a module lives in golang.org/username/reponame/dirname/go.mod, its module path is likely golang.org/username/reponame/dirname. (and so on with a /v2 folder, etc)
I mention this because it appears that OP's major gripe in "package side problems" is that the /v2 dir "breaks" the (mis)conception that a module path describes _only_ the repo root.
(see also: multi module repositories)
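To make that concrete, here's a sketch using the hypothetical golang.org/username/reponame paths above; each go.mod's module line mirrors where the file lives, including the /v2 directory case:

    // File golang.org/username/reponame/go.mod:
    module golang.org/username/reponame

    // File golang.org/username/reponame/dirname/go.mod:
    module golang.org/username/reponame/dirname

    // File golang.org/username/reponame/v2/go.mod (major version 2 via a /v2 directory):
    module golang.org/username/reponame/v2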
> In other words, we should only increment major versions when making breaking changes
No, you can increment major versions whenever you want (though it's painful to your users). But, you _should_ increment a major version when you make a breaking change.
> I think a simple console warning would have been a better solution than forcing a cumbersome updating strategy on the community.
Could you elaborate on how a console warning solves the problem of users becoming broken when module authors make incompatible changes within a major version?
> Another problem on the client-side is that we don’t only need to update go.mod, but we actually need to grep through our codebase and change each import statement to point to the new major version:
What if you need to use v2 and v4 of golang.org/foo/bar? How would you import them both without one having a /v2 suffix and a /v4 suffix? Are you proposing that users should only be able to use one major version of a library at a time?
(I assume you are talking about a user upgrading to a new major, not the package author bumping to a new major. If the latter, a grep and replace is quite approachable and is shown in the blog you linked :) )
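For reference, a sketch of what side-by-side majors look like today; golang.org/foo/bar is from your example, and Do() is a hypothetical function:

    package main

    import (
        "fmt"

        bar2 "golang.org/foo/bar/v2"
        bar4 "golang.org/foo/bar/v4"
    )

    func main() {
        // The /vN suffix keeps the import paths distinct, so both major
        // versions can coexist in one build. Without the suffix, the
        // toolchain would have no way to tell them apart.
        fmt.Println(bar2.Do()) // Do() is hypothetical
        fmt.Println(bar4.Do())
    }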
> Go makes updating major versions so cumbersome that in the majority of cases, we have opted to just increment minor versions when we should increment major versions.
I'm reminded of when the "unused imports not allowed" rule was lambasted, and then goimports was released and the conversation was snuffed out. This situation feels analogous.
You praise the toolchain and the good decisions in modules, but then hinge your thesis against it on "it's cumbersome". That's a valid concern, but it's likely that a tool that makes major version upgrades easier will resolve your issue. A wholly different design certainly is not needed.
Check out https://godoc.org/golang.org/x/exp/cmd/gorelease for one tool that's under development aimed at helping version bumps. It sounds like you also need something that will create the v2 branch/directory, change all the self-referencing import paths in that branch/directory, and change the go.mod path. That sounds like an easy tool to write - I expect something like that should come out quite quickly if it doesn't already exist.
-------------------
Side note: In my opinion, dependency management is a rat's nest of choices that seem good at the outset and end up with terrible consequences later on. Go modules make super well thought out decisions, and is a very very simple design, built to last for a long time without regrets. Sometimes the right answer is to work around a small problem with some tooling or documentation rather than go a totally different direction that will have large, sad consequences later.
That is: all choices have downsides, but it's good to choose the best choice whose downsides can largely be tackled with easy solutions like tools and docs. Decisions like "we should build a SAT solver" have sad, sad, sad consequences that can't be tool'd and doc'd away, for example.
One of the takeaways from this article was, "there needs to be more documentation", and I think I can speak to that:
First, thanks for the feedback. We also want there to be a loooot more documentation, of all kinds.
To that end, several folks on the Go team and many community members have been working on Go module documentation. We have published several blog posts, rewritten "How to write Go code" https://golang.org/doc/code.html, have been writing loads of reference material (follow in https://github.com/golang/go/issues/33637), have several tutorials on the way, are thinking and talking about a "cheatsheet", are building more tooling, and more.
If you have ideas for how to improve modules, module documentation, or just want to chat about modules, please feel free to post ideas at github.com/golang/go/issues or come chat at gophers.slack.com#modules.
I noticed that a lot of the documentation seems to combine "how this works" with "why this works the way it does"; the why is great when you're interested in diving deeper, but it's frustrating when all you're interested in is the how.
For example, the linked blog post spends a lot of time talking about diamond dependencies and other package managers. This is just noise that gets in the way when you're trying to figure out how does this work?
If you did want to combine both in a single reference doc, I would move the why out into separate, skippable sections.
When I first was learning Go, I was really impressed by how easy it was to understand the language just by reading the spec. I've found the opposite to be true for Go modules. (Which also, as near as I can tell, doesn't have a spec, but just various scattered blog posts, for various different iterations of the idea.)
Googlers tend to explain the "why" of their decisions.
I'd suggest just taking the advice and deciding on the best course of action. The "why" part is generally only meaningful to the decision maker and not something people care about or have enough context to appreciate.
That way a lot of potential misunderstanding is avoided.
As an everyday user of Go perplexed by this making it into the mainline, I'd like to second the request to make this feature optional. More documentation would be nice, but I'd prefer the default to change.
The assumptions in that v2 go modules article around the meaning of major semantic versions do not jibe with the way the majority of software in use today uses version numbers - they are most often used to denote new features, which may or may not have breaking changes large or small, and small breaking changes are tolerated all the time, often in minor versions. This assertion in particular seems wrong to me for most software in use today:
> By definition, a new major version of a package is not backwards compatible with the previous version.
Semver is very clear on what a minor vs a major change means.
> the majority of software in use today uses version numbers - they are most often used to denote new features, which may or may not have breaking changes large or small, and small breaking changes are tolerated all the time, often in minor versions
We're getting into opinion here. Let's be clear: semver very strictly, objectively disagrees with this approach. In general, this approach of "what's a few breaking changes in a minor release amongst friends" leads to terrible user experiences.
Go modules takes the cost of churn, which in some languages gets externalized to all users, and places it on the module author instead. That is far more scalable and results in much happier users, even though module authors sometimes have to be more careful or do more work.
Thanks for working on the docs and engaging here, I know it can sometimes be a thankless task.
I don't think it's a matter of opinion that the vast majority of software in common use does not use strict semantic versioning, most likely including the web browser and operating system you are using to read this comment, popular packages like kubernetes, and the Go project itself in the mooted 2.0 with no breaking changes. It is highly desirable to avoid significant breakage, even to the point of ignoring strict semver and avoiding it across major version changes! So I'm not arguing for encouraging packages to break, but rather the reverse, I prefer the status quo pre go mod where packages are assumed not to break importers, though sometimes small breakage happens and/or is acceptable.
Most packages use a weaker version of semver than the one you describe, which is still useful, so I'm not clear why the go tools have to impose the very strong version which is not commonly used. The difficulties introduced seem to outweigh any benefit to me.
Because the version of your web browser or kernel more or less doesn't matter: they care way more about backwards compatibility than the "vast majority of software" does.
The kernel doesn't break user space. Web browsers' APIs generally remain backwards compatible (how long did it take to remove Flash - and you can still install it if you want!)
You mention "major semantic versions", but semver itself explicitly says that the point of major version changes is backwards incompatibility.
> This assertion in particular seems wrong to me for most software in use today: By definition, a new major version of a package is not backwards compatible with the previous version.
It is true for any package manager using semver, cargo, npm, pub, etc.
In practice, that is not true for many of the packages on those managers or even for the mooted Go 2.0. Major versions are often used for major feature releases or changes of direction, which may or may not break importers, minor versions sometimes break things. There is more chance of breakage in a major version but it's not a given. And that's ok.
The meaning of versions is a negotiation between producer and consumer, not a fixed rigid set of rules as strict semver would have you believe. In practice the definitions are more fluid, something like major: big changes, may be breakage, minor: minor changes, should be no or minimal breakage, patch: small fix, no breakage.
Putting versions in the import path is not something any of the popular package managers do AFAIK, and they certainly don't force you to do that, nor do they force you to use strict semantic versioning.
> There is more chance of breakage in a major version but it's not a given. And that's ok.
I think you are looking at this from the perspective of the package consumer, but versioning is controlled by the package maintainer, and it's their notion of "breaking" that determines the versioning story.
Yes, many users of a package will in practice not be broken by a "breaking change". I could, for example, depend on your package but not actually call a single function in it. You could do whatever you want to the package without breaking me.
But the package maintainer does not have awareness of all of the actual users of their code. So to them, a "breaking change" means "could this change break some users". If the answer is yes, it is a breaking change.
> not a fixed rigid set of rules as strict semver would have you believe.
Semver is a guideline, so it is naturally somewhat idealistic. Yes, there are certainly edge cases where even a trivial change could technically break some users. (For example, users often inadvertently rely on timing characteristics of APIs, which means any performance change, for better or worse, could break them.)
But, in general, if you're a package maintainer, your definition of "breaking change" means "is it possible that there exists a reasonable user of my API that will be broken by this?", not "is there actually some real code that is broken?" Package maintainers sort of live as if their code is being called by the quantum superposition of all possible API users and have to evolve their APIs accordingly. Package consumers live in a fully-collapsed wave function where they only care about their specific concrete code and the specific concrete package they use.
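A toy sketch of that producer-side view, with a hypothetical package and function:

    package parse // v2.0.0 of a hypothetical module

    import "strconv"

    // In v1 the signature was ParseInt(s string) (int64, error).
    // Adding the base parameter breaks every existing call site at
    // compile time, so it warrants a major bump, even though the
    // maintainer can't know how many real users it actually affects.
    func ParseInt(s string, base int) (int64, error) {
        return strconv.ParseInt(s, base, 64)
    }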
> Putting versions in the import path is not something any of the popular package managers do AFAIK, and they certainly don't force you to do that,
That's correct. Go is the odd one out.
> nor do they force you to use strict semantic versioning.
The package manager itself doesn't necessarily care if package maintainers strictly follow semantic versioning. The version resolution algorithm usually presumes packages do. But if they don't, the package manager doesn't care.
Instead, this is a relationship between package consumers and maintainers. If a consumer assumes the package maintainer follows semver but the maintainer does not, the consumers are gonna have a bad time when they accidentally get a version of the package that breaks them. This is acutely painful when this happens deep inside some transitive dependency where none of the humans involved are aware of each other.
When consumers have a bad time, they tend to let package maintainers know, so there is fairly strong effective social pressure to version your packages in a way that lets consumers reliably know what kinds of version ranges are safe to use. Semver just enshrines a consensus agreement on how to do that.
> The package manager itself doesn't necessarily care if package maintainers strictly follow semantic versioning. The version resolution algorithm usually presumes packages do. But if they don't, the package manager doesn't care.
This was the core point I was trying to make - other package managers correctly leave this negotiation on how much breakage is acceptable to producers and consumers, they do not impose strict semver but a looser one, and importers choose how strict they want to be on tracking changes in their requirements file (go.mod or similar), while producers choose how strict they are going to be with their semver (strict semver is almost never used in its pure form for good reasons, versions communicate more than breaking changes).
The result of this change, with Go imposing strict semver on both parties, will IMO be far more breaking changes, because it explicitly encourages them and forces importers to always choose a major version. It's a change of culture from previous Go releases and will have significant impact on behaviour.
We'll also end up with a bifurcated ecosystem with producers who don't like the change staying on v1 and others breaking their packages all the time and leaving frustrated consumers behind on older versions without bug fixes.
The 90% number probably comes from Mozilla's financial report, last released for 2018 [1].
"Approximately 91% and 93% of Mozilla’s royalty revenues were derived from these contracts for 2018 and 2017, respectively, with receivables from these contracts representing approximately 75% and 79% of the December 31, 2018 and 2017 outstanding receivables, respectively."
Since for 2018 'Royalties' income was about 430 million, and reports are[2] that the deal was 300 million per year from Google, it's still a massive chunk of Mozilla's operating income, but you're right, it's not 90%. It's more like two thirds of their revenue coming from their market-dominant direct competitor.
Also note that while this specific contract is a four-year deal, Google had been paying Mozilla millions even before then; in 2006 most of Mozilla's money came from Google (note the source is a now-deleted blog post from the current CEO of Mozilla)[3]. It is indeed a decades-old source of revenue, but it remains to be seen if it's dried up; word is they're renewing the contract, but we'll have to wait for Mozilla's next financial report to get more information.
Let's please not upvote this. It is _not_ a "standard" go project - it doesn't even have a go.mod. Furthermore, it is in an org called "golang standards", when in reality it and its projects are not standards - they're just some opinions of some programmers.
Let's please not make getting started with golang more confusing to beginners by falsely claiming standards.
While I do agree that this particular case may have some issues I still think that having a good starting point for people newly introduced to the ecosystem is invaluable.
Cookiecutter's pypackage repo [0] isn't necessarily standard either but it does provide a lot of helpful boilerplate and 'best-practices' that are hard to integrate after the fact.
Although I respect that this shouldn't be confused as a standard, as a Go beginner with some very small attempts at creating Go projects[1] I would appreciate some sort of guidance on organizing code for a larger project. I've considered rewriting some of my larger NodeJS side projects in Go and I, possibly misguidedly, feel like the biggest barrier to entry is understanding the correct way to organize everything (i.e. directory structure). Is there any documentation, tutorials, anything that you or others would recommend for larger Go projects?
I'm curious - what you feel is specifically the barrier? Which parts of your Node project you want to port to Go and don't because you don't know where to put them?
> The point of antitrust regulations is to stop the anti-innovation practices
Is that true? That doesn't match the definition of any antitrust regulation I'm aware of. AFAIK antitrust regulation is intended to enforce _fairness_, not _innovation_. Often the two go hand in hand, of course, but I think it's worth not conflating the two.
The idea of anti-trust regulation is to break up companies that cornered a market, that is, (mostly) prevented competition on it. Breaking them up serves to make more, smaller companies (out of the split giant) that would start to compete again.
Anti-trust laws can be seen as pro-market laws that try to prevent long periods of monopolized markets without waiting for a naturally occurring disruption, instead providing a mandated disruption.
Whether it's _efficient_, and whether it works as intended, can be discussed.
I used "fairness" because it appears in most definitions of the anti trust regulation. A "level playing field" is 100% not the point of antitrust laws. Sidenote: I don't believe any "playing field" in business is "level".
I think something for folks to keep in mind is that much of the US antitrust laws were made back in the early 1900s to combat _literal_ monopolies, objective collusion between companies to harm consumers, and so on. We're talking price fixing here.
> Anti-trust laws can be seen as pro-market laws that try to prevent long periods of monopolized markets without waiting for a naturally occurring disruption, instead providing a mandated disruption.
This sentence is dangerous: it is very close to saying that any long-term, successful company should be "disrupted". Interpreted differently it could be read that startups should have some inherent right to evenly compete with large companies (by fining or splitting up large companies to be "beatable" by startups).
Again, that is not at all the point of antitrust laws. I won't argue whether there should be laws like that (as you can tell, I think not), but antitrust regulation in the USA is definitely squarely aimed at _actual_ monopolies and collusion.
> This sentence is dangerous: it is very close to saying that any long-term, successful company should be "disrupted". Interpreted differently it could be read that startups should have some inherent right to evenly compete with large companies (by fining or splitting up large companies to be "beatable" by startups).
No it isn't. "Successful company" and "company that has cornered/monopolized their market" are not even close to the same thing.
Anti-trust laws don't aim to make competition fair, they aim to make it possible.