It would require Ma Bell. As the article correctly deduces (IMO), it was related to their quasi-monopoly. It was a partial and insecure monopoly; they did need to innovate or they would eventually be eaten. Telecoms was their business, in the very broadest sense. New ways of drawing wire more efficiently would benefit their bottom line. New forms of amplifier technology would almost certainly benefit them ten or twenty years down the line. Same with a new kind of treatment to make telephone poles last longer. So basic metallurgy, forestry, and cutting-edge semiconductor physics were all within the remit. In hindsight it's almost like a public/national research laboratory, that happened to be privately held.
Rather than a monopoly, I'd say it takes 2 things (that are often present in monopolies):
1) Big piles of money
2) Enough of a moat that there's no immediate threat from competition
#2 drives a lot of the desire to innovate, the hope being that by the time competition catches up, you've got so much new stuff that they're back where they started.
The closest thing I can think of in the tech space now is Apple, with its consumer hardware moat. The other big players are still cannibalizing and copying each other's business segments.
> You just described Google 10 years ago. And I would argue they have been on a downhill slope ever since.
Don't forget Microsoft ever since the early 90s or so. And I would argue they haven't produced much usable stuff in the research realm at all.
So I think it's more than just #1 and #2. There's also a #3: a national culture where doing great things is seen as more important than simply maximizing shareholder returns. The US had that many decades ago when it sent people to the Moon, but it lost it sometime after the 1980s.
I'd love to see an alternate timeline where one of these giant companies took their trillions of dollars and said "You know what, fuck you, shareholders! We're going to be pure research now! We're going to turn our offices back into a college-campus-like playhouse of innovation! We're going to hire the best and brightest again and give them amazing things to work on rather than improving click-rates by 0.1%! We're going to do only moonshots! We're not going to maximize shareholder value!" The stock price would go to $0.01 but it would be a paradise for researchers and developers.
It would be such a breath of fresh air from all of the dollar-maximizer companies that we seem to be stuck with now.
You need a playhouse of innovation, but you also need a pipeline of taking that innovation and making lots of money/practical applications. And you need the interaction between people doing practical work and feeling limitations and the playhouse trying stuff out.
Bell Labs worked because AT&T had huge field operations, a huge automation problem (telephone switching and billing), and a huge manufacturing arm for telco equipment and customer equipment, and a reputation for solving problems related to communication. Small innovations easily contributed to the bottom line, and large innovations enabled massive expansion of communication. Also, there's a huge benefit from research being connected to scale --- Bell labs innovations were likely to be applied in AT&Ts businesses quickly which informs future innovation; if you're an unconnected playhouse of ideas, it's hard to know if your innovations can be applied.
For better or worse, things are less integrated now. Telcos still have large field operations, but they don't usually manufacture the equipment. Automation of telephone switching is so complete, I don't know if there are any operators anymore. Billing for usage is very limited.
You see some companies developing the processors in equipment they use and sell, which is more integrated than when it was all Intel/AMD/Qualcomm.
As much as I dislike Elon these days, this is basically how all his recent companies have been set up. Tesla, SpaceX, Neuralink, and arguably even Boring are all moonshots, the latter 3 being so anti-shareholder that they aren't even publicly traded.
There seems to be some amnesia about the sort of company AT&T was. AT&T was first and foremost a profit maximizer. They were the poster child for abusive monopolies. Remember, until the 1980s, literally no one owned a phone. All hardware that connected to a phone line was owned by AT&T.
> I'd love to see an alternate timeline where one of these giant companies took their trillions of dollars and said "You know what, fuck you, shareholders! We're going to be pure research now!
If this alternate timeline where giant companies are autonomous creatures were to be, why would there be shareholders?
Because that's what was asserted to be in this alternate timeline. An alternate-alternate timeline could have no companies, sure, but the discussion is centred around a specific alternate timeline.
An alternate-alternate timeline might have companies run by LLMs, but the alternate timeline under discussion states that the giant companies act on their own accord, with their own dollars – which, again, questions where the shareholders fit.
The corporate tax structure was different back then as well. Much higher tax rates and it was far more difficult to simply ship your profits off to another more tax-advantaged nation. If the choice is spend the money hoping to find the next great innovation or hand it to your government and hope they do something productive with it, companies are usually going to choose the former.
Today, you can license your profits to Ireland or some other off-shore holding company and never have to choose.
Other than Haskell: OSes in memory-safe languages whose System C# and Bartok features started arriving in .NET Native and .NET Core, Haskell contributions, plenty of cloud stuff on Azure, Azure Sphere with Pluton, an LLVM-like compiler stack using MSIL, contributions to the OpenJDK JIT, Q# and quantum stuff, ...
If you count OpenAI, Microsoft has produced a lot. Bell Labs existed in an era of conglomeration, whereas Microsoft exists in an era of startup culture, but the idea of monopolies spending cash on research is still there.
The government allowed Bell to maintain its monopoly position with the understanding that Bell Labs would work for the public good. In other words, Bell was unable to capitalize on what Bell Labs created. Bell Labs, in hindsight, is so significant because everyone else was able to take their research and do something with it.
Google has given us things here and there (the LLM craze stems from papers published by Google), but it seems most of the research they are doing stays within Google. For example, I expect you will struggle to find what you need to start a self-driving car company based on Waymo’s work.
> Google certainly had the talent
Including Bell Labs alumni. They used their time to create Go. If that isn't game changing, perhaps Bell Labs was also a product of its time?
It seems like there was something special about the innovation derived from collective spirit during that time, the 1940-1960s. Post-depression and -war momentum combined with major scientific and technology milestones that had to be passed, with a dynamic caused by an unprecedented global situation.
Some major drivers of innovation for that time period:
A lot of people were very recently very upskilled on the government's dime. A shitload of Navy personnel had just been trained on things like electrical engineering to better understand the radios and radars they were operating, and others were taught how computers worked and accidentally turned into early software developers.
Labor was in short supply, so companies had to provide significant benefits and eventually pay to make their business work. Low-level labor making more money directly enriched a majority of Americans at the time, and a rich base like that has more resources to devote to buying goods, but also to investigating WHICH goods are worth it, i.e. making market competition actually work instead of just being about who has the lowest sticker price (which is a direct result of modern Americans being dirt fucking poor in money, time, attention, resources, etc.). A shortage of labor, and therefore an increase in the wealth of the everyman, may have also contributed to the renaissance.
The US was still PUMPING money into basic sciences and basic technology research. This is especially useful for materials science which feeds a lot of engineering.
So many people had been recently put into such high positions that there wasn't as much of a "Management class"; workers were managing workers. People could focus on doing things instead of office politics, ass covering, and pretending to look busy to people who have no freaking clue what they are managing.
Germany lost IP protections for many things. This resulted in fertile ground for innovation, because everyone could freely make a widget that matched the now-invalid German patent X, so there was high compatibility in widgets, while there was also a high incentive to improve your version of Widget X in some way so you could patent that improvement, meaning there was a lot of exploration going on in the solution space. Patents explicitly freeze investment into large solution spaces, especially with how vague modern patents are, how incompetent and undertrained/understaffed the patent office is, and how hard it is to get a court to invalidate bullshit patents.
Basically all those things businesses insist are bad made the world better for consumers, and more importantly, people. State investment into the population paid huge dividends. Who could have guessed.
What was most special is that it was new, few were doing it, and so any achievements stood out and were able to make a splash. Nowadays you can't even step outside in a rural area and not bump into someone who is trying to make their mark.
We now see more achievements made each year than Bell Labs achieved in its entire history, but because of that there is no novelty factor anymore. It is now just the status quo, which doesn't appeal to the human thirst for something new. Mind-blowing discoveries made today are met with "meh" as a result.
It's kind of like a long-term relationship. In the beginning feelings are heightened and everything feels amazing, but as the relationship bonds start to grow those feelings start to subdue back down to normal levels. Without halting innovation for a time (perhaps a long time), to the point that we start to forget, I'm not sure there is any way to bring back the warm fuzzies.
Go is basically Limbo, with a bit of Oberon-2 sprinkled on top.
They got more lucky with Go than with either Limbo or Oberon-2, thanks to Google's moat, Docker pivoting from Python to Go, Kubernetes pivoting from Java to Go, and both projects hitting gold with the devops community.
Google itself keeps being a Java and C++ shop for most purposes outside Kubernetes.
It doesn't seem that they ever really used Plan 9, though. Are you trying to imply that Bell Labs was just a few-trick pony, or what? I'm still not clear on what connection your story about Go/Limbo/Oberon-2 has to the topic at hand. It seems you forgot to include the conclusion?
Who are also the Go guys, but we already know that as it was already talked about much earlier in the thread. If you have some reason to return to that discussion, let it be known that you have not made yourself clear as to why.
> Punchline is without Google's moat, Go would have gone the way of Plan 9 and Inferno.
And that relates to the topic at hand, how? I am happy to wait. No need to rush your comments. You can get back to us when your conclusion that connects this all back to what is being talked about is fully written.
"Including Bell Labs alumni. They used their time to create Go. If that isn't game changing, perhaps Bell Labs was also a product of its time"
Failed twice to create anything else, and only succeeded yet again thanks to Google's moat and a set of lucky events caused by Docker and Kubernetes being rewritten in Go, followed by their commercial success.
Bold claim that Go succeeded. A couple of software projects used by a tiny fraction of the population (hell, a tiny fraction of the software development population!) is of dubious success. Just about anyone's pet language can achieve that much. What makes you consider it to be more?
Also interesting that you consider UTF-8 to be a failure. From my vantage point, that was, by far, the most successful thing they created. Nearly the entire world's population is making use of that work nowadays. Most people can only dream of failing like that.
That conclusion, though... We still have no idea what this has to do with the topic at hand. Again, don't let me make you feel rushed to get your replies out. I am happy to wait until you are complete in writing that.
Given that you don't consider UTF-8 to be success, perhaps nothing is?
Explaining what any of this has to do with the topic at hand is definitely not a success. Is this supposed to be your admission that you have no idea what you are trying to say?
Part of the problem, I think, is that Google's relentless focus on information was just never going to end up being as sexy as the stuff Bell Labs worked on over the years.
Even when you create the conditions there's always some luck factor. Whether it is leadership with the right vision or the right people in other key roles to create innovation. But I feel like the 2 points above are sort of the starting point that makes it possible for the other things to matter (or more likely, flukes can occur).
And they did innovate in those days, e.g., Gmail, and they purchased and integrated very innovative things like Writely (Docs) and Grand Central (Voice).
At some point they got the idea that, because they made so much money, new innovations could not be valuable if the profits would likely be insignificant compared to the existing oil gusher... which is stupid, because nothing is ever gonna look that good early on. So they took on the persona of a flaky ADD schemer -- like Cosmo Kramer from Seinfeld -- always scheming with some new idea, but never with the perseverance to stick with it for more than 25 minutes before moving on to something else -- because what can possibly ever compete with ads?
Google actually tried to be a "fuck you investors, we're gonna do research" company with the supershares, but they screwed up:
A big mistake they made, which then got replicated throughout the industry, was issuing stock to employees. It meant that even though Larry and Sergey had supershares and no shareholders could countermand their beliefs, the employees (especially middle managers) effectively could, by changing their work decisions to try to boost the stock. That inoculated the company indelibly with the Wall Street short-term-thinking disease that the supershares were supposed to have prevented.
It was pathetic how interested in the share price the average L4 Google employee was when I was there in the mid-teens, at least around the quarterly vest date.
Whoops. Should have listened to Buffett-- never give out stock. Give performance bonuses, but never stock. You cannot depend on a large mass of people like that to stick to a higher purpose and not get tempted by the money.
We individual employees had little control over stock and I don't remember anybody being particularly delighted when Patrick left and Ruth took over, despite the stock going up up up. I don't think our GSUs were influencing work decisions on aggregate.
What was influencing decisions was just the same as always: ad revenue. I think people who worked outside of ads at Google probably didn't get the sense of it. I came into Google as part of a competition crushing, monopoly focused acquisition, so I started off cynical. I could see right away what Google was, just a money printing machine with everything else they did a way to flush some of the excess gravy away. And because of that it all lacked purpose or real motivation.
And into that void just crept more and more careerism and empire building.
Agree about the ADD thing, but I think it's more a product of Perf economy than stock price.
I lasted there 10 years for lack of anything else worthwhile to go to in my geo-region, and because the money was too good to pass up. But what a weird, weird, weird place. Wandering around the MTV campus was like being in a real life Elysium (movie) or something. I could never figure out what all those people were doing between free meals. I felt guilty taking the money, the whole time.
> I could see right away what Google was, just a money printing machine with everything else they did a way to flush some of the excess gravy away. And because of that it all lacked purpose or real motivation.
This is precisely how I would summarize it too. Very well stated.
> Agree about the ADD thing, but I think it's more a product of Perf economy than stock price.
That I agree with, and I already regret implying stock was a bigger part of it than Perf/Promo-- I just wasn't thinking about them when I wrote that.
Perf was a bigger factor for sure from our ground level (L3/L4/L5 presumably) and was the main reason I left, as all I saw around me on my team was people doing things that added zero value or even subtracted it, but at least they created a new project that would sound good to the Promo committee. Managers helpfully shepherded their team in that direction, which was kind and all for growing our careers and helping us get more slop from the trough, but it still felt gross. I couldn't bear seeing that as my future.
Sometimes, when I look at the housing market, or think about colleges for my daughter though, I regret not playing along a little longer. I don't know.
THAT SAID: I don't think the stock incentives affected decision making at your or my level, but at higher middle management levels, where GSUs started to become huger both in absolute value and percentage of compensation. At the levels of the people actually deciding what products to close down or pursue. I wasn't up at that level so this is just a theory I admit.
Yeah, fair enough -- I still don't know if I've done a good job of successfully explaining to myself what the fuck it was I experienced working there and why, but it sounds like you and I had very very similar experiences.
Everything about the internal incentive structure was fucked up and incentivized very low value producing behavior.
And yet, I did learn some really valuable things there
- napa cabbage is delicious
- celery root is a thing
- fresh made pasta is better
- quiche and arugula pair great together
- oh my god legit ramen is amazing
Oh wait, outside of the food:
- monorepos, with the right tooling, are unbeatable. all the monorepo haters just don't have the right tooling. google did
- doing meetings right, with agendas and minutes, is also unbeatable, strictly better and more efficient than the alternative
- doing planning right, by circulating proposed design plans for comments, is strictly more efficient than the alternative
- doing cloud deployment right -- this is irrelevant now, most everyone knows how to do this now while google is stuck with legacy borg configs, but for a while, google actually was far ahead of the industry on this
Despite having some fundamentals like that that made certain things way way more effective, it was squandered pursuing cockeyed goals.
Yep. Monorepos and fully vendored third-party deps, the two things I wish other people were doing... because the way the bulk of the industry is doing it is a stupid productivity killing, security, and reliability nightmare.
I was always a square peg in the round hole there, and I have no idea how I lasted 10 years. I was at L4 the whole time and never made the move to even try for L5 because the perf process seemed like such stressful garbage to me and I was perfectly fine with my already generous compensation.
I miss that kind of $$, but oh well. It got to the point I couldn't stand it anymore, and after watching the insane growth in hiring... I knew layoffs would be coming.
Oddly, I agree-- that section was just kind of an obvious hurried joke about the great food. I really did learn a lot of culinary ideas due to eating a lot of food I could have afforded in Manhattan, but normally would never have bought due to my frugality.
> Give performance bonuses, but never stock. You cannot depend on a large mass of people like that to stick to a higher purpose and not get tempted by the money.
Well, if you gathered this big mass of people by dangling a nice pay and stock options in front of them then yeah, it does seem a bit odd to lament they are tempted by money.
> It was pathetic how interested in the share price the average L4 Google employee was when I was there in the mid-teens, at least around the quarterly vest date.
How dare someone look out for their own financial well-being!
(1) I was earning over triple what I had been earning just a couple years earlier, so it was hard to sympathize with people who felt that still wasn't enough.
(2) It is very unpleasant to know that what you are doing day in and day out adds no value to anyone else's life and is just busywork to trick the system into giving you a paycheck. It's not that different from being on unemployment and having to do the busywork of proving you applied to N jobs per week.
I’ve also seen it with biotech companies like Genentech. They are making money hand over fist, more and more each year, so they can try for moonshots and give researchers a lot more leeway to do things their way.
But once the money dried up, it was mostly gone. Got bills to pay and need to show returns on R&D in the near term.
They did this Project 10^100 ("ten to the hundredth") thing where everyone got to submit ideas. They hired an army to look at it all before it went into the incinerator.
I'm kinda curious what machine learning or LLMs could make from such a dataset. If enough people proposed something similar, funding shouldn't be an issue. Perhaps there is even an apples-vs-oranges metric for quality to be had.
You don't need to mandate a short-term view when the executives' massive pay packages depend on stock valuations: they will chase the money all by themselves, rather than trying to build a company that might make major discoveries eventually.
One of the commenters on the original article said "Bell Labs was funded through a 1% tax on Bell's overall revenue, as well as a claim on a portion of Western Electric's revenue, so yes, AT&T was able to recoup Bell Labs' expenses."
This would protect it from (at least certain types of) C-suite predation presumably.
“The American Telephone and Telegraph Company (AT&T) was listed on the NYSE on Sept. 4, 1901. Only 11 other companies have been listed longer than AT&T.“
Yeah, impressively wrong gp! 1984 was when they were broken up by the Justice Department.
It's not like corporate executives have carte blanche to make these decisions. They report to the board, who represent the shareholders, to whom they have a legal duty to maximize profits. The entire construct is set up to promote the alternative to building the next Bell Labs.
They don't have a legal duty to maximize profits, they have a legal duty to represent the fiduciary interests of the shareholders. If the company doesn't make as much profit next quarter but it will be healthier in the long run for it, that is perfectly OK.
I think it requires a third thing: long-term thinking.
Other commenters noted that Google and Microsoft had big piles of money and a decent moat. They have not created anything like Bell Labs. Google tried, with DeepMind.
The problem is that companies used to think about 5,10,20 years out. Modern companies are gauged and gauge themselves by metrics that are quarterly or annual. They, to borrow from The Fast and the Furious, are living their life one business quarter at a time.
I don't think you can build something like Bell Labs, PARC, or the Lockheed-Martin Skunkworks with short-term thinking.
Based on the book "The Idea Factory" (https://en.wikipedia.org/wiki/The_Idea_Factory), it sounds like the regulated monopoly forced certain constraints on their business, such that they had to prove they were a utility. This meant not having infinite profits, and that the engineers were generally allowed to think long term since any good idea in the future could benefit the company without short term profitability risk
3) Politically savvy managers with a technical background, who treat money/sustainability as an enabling factor and see the primary output as being technical.
The closest I ever came to this was Australia's DSTO in the late 1980s. The management was technical and cared deeply about their people, the development of those people, and making a positive technical contribution.
I think it's not about recreating Bell Labs, but doing something better that will fit in with today's technical ecosystem. Are entry barriers lower today, or is that part of the problem: low entry barriers mean people don't have to congregate to get sufficient resources?
Google or Microsoft are pretty close nowadays. Look at all the infrastructure both of these sponsored over the years. It may not be as foundational as what BL did, but then again there is not as much low hanging fruit today either.
Ma Bell was regulated and they used the labs partly as a slush fund to smooth out their apparent earnings. When they got a rate increase, the labs would be well funded for a while, so AT&T wouldn't have to show an unseemly spike in profits. When they went for a long while without an increase, the labs would run lean until the next increase came. At least that's what someone told me back then.
Also Google and Microsoft for the most part are not even trying to be innovative. Rather they try to milk out every last cent of existing products, driving people to buy their cloud services, or directly or indirectly sell user data to third parties.
There is an insane amount of innovation happening at Google and Microsoft et al. The amount of investment going into efforts like making data centers more power efficient, making better cooling systems, reducing latency or using fiber more efficiently etc is incredible and rivals the work done at Bell Labs back in the day. You just don’t hear about it because these private companies have no incentive to share it.
And that’s entirely separate from the fact that Generative AI wouldn’t even be a thing if not for the research that Google published.
So if Microsoft discovered something very useful (e.g. new battery technology, or more efficient air conditioning system), but decided that it was something they didn't want to develop and market; would they share the knowledge or just bury it in case they might want to use it someday?
These innovations don’t always have to be “marketed” to be shared. Things like this get developed and used internally, and then sometimes the company likes to brag about their accomplishments, even if it’s not an externally facing product.
Literally just watched an internal talk about this topic at work (Microsoft). Lots of cool internal research to make things better in every domain, but as you said, it won't be shared that much.
The Open Compute Project is great though, and MSR does awesome research across many domains too, as does Alphabet.
Was the Bell System Technical Journal shared outside of Bell Labs contemporaneously? I have original copies of the Unix issue, for example, but have no idea if that was 'generally available' back when it came out...
Without deeply researching the topic, my understanding is that Bell Labs didn't really "open source" everything or really most things. Just look at the later law suit over Unix.
"Due to a 1956 consent decree in settlement of an antitrust case, the Bell System was forbidden from entering any business other than "common carrier communications services", and was required to license any patents it had upon request. Unix could not, therefore, be turned into a product. Bell Labs instead shipped the system for the cost of media and shipping.
...
In 1983, the U.S. Department of Justice settled its second antitrust case against AT&T, causing the breakup of the Bell System. This relieved AT&T of the 1956 consent decree that had prevented the company from commercializing Unix. AT&T promptly introduced Unix System V into the market. The newly created competition nearly destroyed the long-term viability of Unix, because it stifled the free exchanging of source code and led to fragmentation and incompatibility. The GNU Project was founded in the same year by Richard Stallman."
I'm well aware of the history of Unix. My point was simply that Bell Labs was historically not a particularly open organization outside of the bounds that they were required to be by law.
Do you happen to know why the tremendous progress at BL was shared (did it take a long time?) whereas the progress that happens at today's datacenters is mostly secret? I fear that no matter how much progress they make, if it's not eventually shared, it'll just be lost/wasted and others will have to reinvent it.
My take is that it’s related to the parent commenter’s thoughts on the relative monopoly that Bell had.
If you’re a monopoly with no practical competition, sharing your accomplishments gets you good will and has little downsides. But if you’re Microsoft, and one of your big moats and competitive advantage is the massive fleet of data centers you’ve been building up over the years, you don’t want to hurt yourself by giving your competition the information they need to build new, more efficient data centers.
There has always been a lot of good research happening in MS, but what we see as end-users are Recall, and the latest Bing news in Edge. Seriously, who are the PMs that allow this cr*p?!?
Someone from The Valley told me a couple (~8) of years ago that he didn't like the way SV startup culture had transformed. He argued that in the past, the culture of startups in the valley focused on innovation and disruption. VCs would fund moonshots and small teams doing crazy new things.
But nowadays, the valley looks more for "scalability" projects, things that will sell to millions of people.
He blamed the cost of living/hiring in the area. He mentioned that with lower wages and cost of living, a pre-seed startup could go a long way with family money funding the moonshot.
I think it's sad that, as you said, even companies with large chunks of money aren't willing to spend on R&D as much as before. I guess a war is needed for that, unfortunately.
You're saying Google's long work in AI, Quantum Computing, etc... are not trying to be innovative? Their researchers certainly put out a lot of influential papers.
This makes me think of Meta's approach in open sourcing a lot of their AI efforts. I can't find the exact snippet from the Zuckerberg interview, but the reasoning was:
If Meta open sources their models/tools and it gains wide adoption, ways will be found to run the models more efficiently or infrastructure/research built on top of Meta's work will ultimately end up saving them a lot of costs in future. Release the model that cost $10bn to make now, and save yourself billions when others build the tooling to run it at 1/10th the cost.
> In any case, their hypothesis is testable: which open source innovations from Llama1/2 informed Llama3?
I am not sure, but I agree that it is definitely testable.
If I had to guess/answer, I would argue that the open source contributions to Pytorch have a downstream contribution to the performance, and maybe the preparation and release of the models required an amount of polish and QA that would otherwise not have been there.
> In hindsight it's almost like a public/national research laboratory, that happened to be privately held.
In more ways than one. Sandia National Laboratories (mentioned in the article) is a national research laboratory, and since it was managed by AT&T from its inception in 1948 it was organized like Bell Labs. It's no longer managed by AT&T and its culture today feels a lot less like Bell Labs, but in the early days the employees called it "Bell Labs West." Sandia is the only national lab that has an AT&T origin story.
Thank you - that's a spot on answer. Bell Labs was the only way Ma Bell could stay relevant given that the rest of the world was making progress on their own.
Perhaps you mean the only way they could maintain their monopoly? Bell Labs was Bell's bargaining chip with the government to allow it to maintain its market dominance. The government allowed, at least for a long time, the monopoly to persist so long as Bell Labs was acting as a public good. As part of that social contract, Bell was not allowed to capitalize on what Bell Labs produced. I expect Bell Labs is so notable because the public at large was able to take their discoveries and turn them into productive ventures, which they did. That's not completely unheard of today – LLM-based businesses being a recent example that stem from Google opening up research to the public – but it is unusual for today's research labs to give away everything. Today, not even the research labs with that explicit intent (OpenAI) are willing to give away everything (or much of anything).
I think, ironically, the "Tax Cuts and Jobs Act of 2017" has gutted tech jobs and R&D in a profound way.
> The TCJA amended I.R.C. §174 such that, beginning in 2022, firms that invest in R&D are no longer able to currently deduct their R&D expenses. Rather, they must amortize their costs over five years, starting with the midpoint of the taxable year in which the expense is paid or incurred. For costs attributable to research conducted outside the U.S., such costs must be amortized over 15 years. This will be the first time since 1954 that companies will have to amortize their R&D costs, rather than immediately deduct those expenses.
The act actually took us back by many decades in terms of R&D incentives, and devastated US competitiveness vs China by disincentivizing R&D across the board.
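To make the §174 midpoint rule quoted above concrete, here is a minimal sketch in Go with made-up numbers (a hypothetical $1M of domestic R&D spend); the figures are illustrative only, not the statute text or tax advice:

```go
package main

import "fmt"

// Hypothetical illustration of post-2022 §174 amortization for domestic R&D:
// the spend is spread over 5 years, but the clock starts at the midpoint of
// the first taxable year, so only half of one year's share is deductible up
// front and the remainder trails into a sixth year.
func main() {
	spend := 1_000_000.0     // assumed R&D spend incurred in year 1
	annualShare := spend / 5 // 200,000 per full amortization year

	schedule := []float64{
		annualShare / 2, // year 1: half-year under the midpoint convention
		annualShare,     // year 2
		annualShare,     // year 3
		annualShare,     // year 4
		annualShare,     // year 5
		annualShare / 2, // year 6: remaining half-year
	}

	for i, d := range schedule {
		fmt.Printf("year %d: deduct $%.0f\n", i+1, d)
	}
	// Before the change, the entire 1,000,000 would have been deductible in year 1.
}
```

If the numbers above are roughly right, the company's taxable income in the year it actually does the research rises sharply even though its cash outlay is unchanged, which is the disincentive being described here.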
Probably, given that they've probably done more open research than anyone else (Open Compute, Llama; basically all of the Big Data stuff came from papers by Google and code from Facebook).
They're unlikely to be praised around here, but I think history will potentially be kinder to them.