I can attribute jumping several economic classes to the social skills I honed in high school and college. I have many friendships that have lasted decades-plus, and 150+ of my invited friends and family attended my wedding.
Emotionally, I do not long for new friends. It's a lot of work to maintain the relationships I have with my friends, family, wife and daughter.
I find aimless socialization these days to be laborious. I just do not give a shit.
I recently moved to NYC. I am at a point in my career where it's networking and politics that will get me ahead. I see a lot of my net-new socialization moving in this direction.
I agree with you. This is brilliant. Tele-operation of the humanoid vs. waiting for AI is the key here. Then off-shore the tele-operation when you've smoothed out the edges.
I'll personally wait to own the best hardware (Unitree) and purchase my own 3P tele-operation service contract.
But what I’ll say is, ideally they would demonstrate whether this model can perform any better than simple linear models for predicting gene expression interactions.
We’ve seen that some of the single cell “foundation” models aren’t actually the best at in silico perturbation modeling. Simple linear models can outperform them.
So this article makes me wonder: if we take this dataset they’ve acquired, and run very standard single cell RNA seq analyses (including pathway analyses), would this published association pop out?
My guess is that yes… it would. You’d just need the right scientist, right computational biologist, and right question.
However, I don’t say this to discredit the work in TFA. We are still in the early days of scSeq foundation models, and I am excited about their potential.
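For what it's worth, "very standard analyses" here is basically a scanpy pass plus a trivial regression baseline. A minimal sketch, assuming an AnnData file `adata.h5ad` with an obs column `perturbation` (both names are hypothetical stand-ins for whatever the dataset actually ships with):

```python
# Standard scRNA-seq workflow plus a trivial linear baseline.
# "adata.h5ad" and the "perturbation" column are hypothetical.
import scanpy as sc
import pandas as pd
import numpy as np
from sklearn.linear_model import Ridge

adata = sc.read_h5ad("adata.h5ad")

# Bog-standard preprocessing: filter, normalize, log-transform, keep HVGs.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

# Clustering + marker genes: where a real association would typically "pop out".
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.rank_genes_groups(adata, "leiden", method="wilcoxon")

# Trivial linear baseline for in silico perturbation:
# one-hot perturbation labels -> expression, fit jointly across genes.
X = pd.get_dummies(adata.obs["perturbation"]).to_numpy(dtype=float)
Y = adata.X.toarray() if hasattr(adata.X, "toarray") else np.asarray(adata.X)
baseline = Ridge(alpha=1.0).fit(X, Y)
print("baseline in-sample R^2:", baseline.score(X, Y))
```

If a foundation model can't beat that Ridge fit on held-out perturbations, the extra machinery isn't buying much.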
Cellular-level computational simulation has existed for a very long time, and it gets more impressive by the day because of the large collections of experimental datasets now available.
However, to infer or predict cellular activities you need a ton of domain knowledge and expertise about particular cell types, biological processes, and specific environments. Typically the successful ones are human-curated and validated (e.g. large interaction networks based on literature).
In cancer it's even more unpredictable because of the lack of good experimental models, in vivo or in vitro, representing what actually happens clinically and biologically underneath. At single-cell resolution the uncertainty is amplified further by how heterogeneous tumours are, both inter- and intra-tumour.
Having said that, a foundation model is definitely the future for further development. But as with all of these things, the bigger the model, the harder the validation process.
Thank you for spending the time to write this comment. As a solution architect, I don't write production-level code. I didn't know of anything in verification automation beyond unit/functional testing.
FB does not have the flywheel of running data centres for others: all three of those mentioned run hyperscale data centres that they can then juice by “investing” billions in AI companies, who then turn around and hand those billions back to the investors as revenue.
OpenAI takes money from MSFT and buys Azure services
Anthropic takes Amazon money and buys AWS services (as do many robotics companies, etc.)
I am fairly sure it's not illegal, but it's definitely low-quality revenue.
How is it free equity? Spending money to invest it somewhere involves risks. You might recover some of it if the investment is valued by others, but there is no guarantee.
You do not need cash in hand to invest. Instead, you print your own money (AWS credit) and use that to drive up the valuation, because this money costs you nothing today.
It might cost you tomorrow, though, when the company starts to use your services. However, depending on the deal structure, they might not use all the credit, might go belly up before the credit is used, or might get bought up by someone with real cash.
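A toy version of the round-trip, with completely invented numbers, just to show why the mechanics are attractive:

```python
# Toy illustration of the credit round-trip; every number here is invented.
credits_granted = 1_000_000_000        # "$1B investment", paid in cloud credits
redemption_rate = 0.6                  # startup only ever redeems 60% of them
marginal_cost_ratio = 0.3              # real cost of serving that usage

booked_revenue = credits_granted * redemption_rate   # shows up as cloud revenue
cash_cost = booked_revenue * marginal_cost_ratio     # the only real outlay
equity_stake = 0.10                    # say the credits bought 10% of the startup

print(f"revenue booked:  ${booked_revenue:,.0f}")
print(f"actual cash out: ${cash_cost:,.0f}")
print(f"equity for that: {equity_stake:.0%}")
```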
Neither did AWS when they started. They were just building out data centers to run their little book website and decided to start selling the excess capacity. Meta could absolutely do the same, but in the short term, I think they find using that capacity more valuable than selling it.
> Neither did AWS when they started. They were just building out data centers to run their little book website and decided to start selling the excess capacity.
This is a myth. It simply isn't true. AWS was conceived as a greenfield business by its first CEO. Besides, S3 and SQS were the first AWS services; EC2 didn't appear till a few years later. And it wasn't built from excess Amazon server capacity; it was totally separate.
Unless you've worked at Amazon, Microsoft, Google, and Facebook, or a whole bunch of datacenter providers, I'm not sure how you could make that claim. They don't really share that information freely, even in their stock reports.
Heck, I worked at Amazon, and even then I couldn't tell you the total datacenter space; they don't even share it internally.
This would be an interesting dataset to use for trading decisions (or sell to hedge funds).
But I wonder how much of their infrastructure is publicly mappable, compared to just the part of it that's exposed to the edge. (Can you map some internal instances in a VPC?)
That said, I'm sure there are a lot of side channels in the provisioning APIs, certificate logs, and other metadata that could paint a decently accurate picture of cloud sizes. It might not cover everything but it'd be good enough to track and measure a gradual expansion of capacity.
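As one concrete example of the certificate-log side channel, here's a minimal sketch against crt.sh's public JSON endpoint. The query domain is just an illustrative guess, and crt.sh truncates and rate-limits large queries, so treat it as a toy:

```python
# Sketch: count newly logged certificates per month for a provider domain
# via crt.sh's public JSON endpoint. Cert issuance growth is a crude,
# edge-only proxy for infrastructure growth; internal VPC hosts won't show up.
import requests
from collections import Counter

resp = requests.get(
    "https://crt.sh/",
    params={"q": "%.compute.amazonaws.com", "output": "json"},
    timeout=60,
)
by_month = Counter(cert["not_before"][:7] for cert in resp.json())  # "YYYY-MM"
for month, count in sorted(by_month.items()):
    print(month, count)
```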
Then you should be aware that, for the longest time, Google was against multiple floors, until they suddenly switched to four floors in many locations.
A decade ago, there was a burst in construction and in some places the bottleneck was not getting the machines or electricity, but how fast they could deliver and pour cement, even working overnight.
To date, Facebook has built, or is building, 47,100,000 sq ft of space, totaling nearly $24bn in investment. Based on available/disclosed power numbers and extrapolating per sq ft, I get something like 4,770 MW.
Last I updated my spreadsheet in 2019, Google had $17bn in investments across their datacenters, totaling 13,260,000 sq ft of datacenter space. Additional buildings have been built since then, but not to the scale of an additional 30 million sq ft.
Amazon operates ~80 datacenter buildings in Northern Virginia, each ~200,000 sq ft -- about 16,000,000 sq ft total in that region. The other regions are much, much smaller, perhaps another 4 million sq ft. When I'm bored I'll go update all my maps and spreadsheets.
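For anyone who wants to sanity-check the extrapolation, here's the implied density from those Facebook figures applied to the AWS estimate above (purely illustrative; gross sq ft says nothing about compute density):

```python
# Back-of-envelope from the figures above; gross sq ft, so treat as crude.
fb_sqft = 47_100_000
fb_mw = 4770

watts_per_sqft = fb_mw * 1e6 / fb_sqft
print(f"implied density: ~{watts_per_sqft:.0f} W per sq ft")  # ~101 W/sq ft

# Same density applied to the AWS Northern Virginia estimate:
aws_nova_sqft = 16_000_000
print(f"AWS NoVA implied: ~{aws_nova_sqft * watts_per_sqft / 1e6:,.0f} MW")
```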
Does the square footage take into account multiple floors? What's the source? It can be misleading, because you don't know the compute density of what's inside. Using just public data, power is a more accurate proxy. Until at least 5-6 years ago, Google was procuring more electricity than Amazon. Before that, it had a further advantage from lower PUE, but I bet the big names are all comparable on that front by now. Anyone who has worked at several of them can infer that FB is not the largest (but it's still huge).
As for the dollars, were they just in 2019 or cumulative? The Google ones seem low compared to numbers from earnings.
Google certainly has more compute density than Amazon; the number I was able to find from the local power company was 250 MW at Council Bluffs back in 2015 or so.
Amazon builds out 32 MW shells, and the most utilized as of 5 or 6 years ago was at 24 MW or so, with most drawing much less than that.
At this point, power companies (a la PG&E, etc.) should be investing in AI companies in a big way. Then they make money off the AI companies to build out power infra, and vice versa.
I am surprised we haven't heard about private electrical grids built out by such companies.
Surely they all have some owned power generation, but if they do, then in the local areas where they DO build out power plants, they should have to build capacity for the local area too, mayhaps in exchange for the normal tax subsidies they seek for all these large capital projects.
Can't wait until we have pods/clusters in orbit, with radioisotope batteries to power them along with the panels. (I wonder how close to a node an RI battery can be? Can each node have its own RI?) (Supposedly they can produce up to "several kW" -- but I can't find a reliable source for the max wattage of an RI...)
SpaceX should build an ISS module that's an AI DC cluster.
And have all the ISS systems feed the data they create into an LLM built there?
I updated my map for AWS in Northern Virginia -- came up with 74 buildings (another source says 76, so I'll call it directionally correct). If I scale my sq ft by ~5% to account for missing buildings, we get 11,500,000 sq ft in the Northern Virginia area for AWS.
Yeah, Google buys servers in public datacenters like those from Equinix. One "region" needn't be one datacenter, and sometimes AWS and GCP will even have computers in the same facility. It's actually quite annoying that "region" is such an opaque construct and they don't have any clear way to identify what physical building is hosting the hardware you rent from them.
Those are almost lost in the noise, compared to the big datacenters. (I've been inside two Atlanta facilities, one leased and one built from scratch, and the old Savvis one in Sunnyvale).
Meta could build their own cloud offering. But it would take years to match the existing offerings of AWS, Azure and GCP in terms of scale and range of cloud solutions.
And then there's sales. All of those three - and more you haven't considered, like the Chinese mega-IT companies - spend huge amounts on training, partnerships, consultancy, etc to get companies to use their services instead of their competitors. My current employer seems all-in on Azure, previous one was AWS.
There was one manager who worked at two large Dutch companies and sold AWS to them, as in, moving their entire IT, workloads and servers over to AWS. I wouldn't be surprised if there was a deal made there somewhere.
The real question is: why aren't they? They had the infrastructure needed to seed a cloud offering 10 years ago. Heck, if Oracle managed to be in 5th (6th? 7th?) place, Facebook for sure could have been a top 5 contender, at least.
Because they make more money using their servers for their own products than they would renting them to other people. Meta has an operating margin of 41% AFTER they burn a ton on Reality Labs, while AWS has a 21% margin with more disciplined spending. Social media is a more profitable business than infrastructure.
> Advertising (over 97.8% of revenues): the company generated over $131 billion in advertising, primarily consisting of displaying ad products on Facebook, Instagram, Messenger, and third-party.
TensorFlow and Keras have gotten better, but PyTorch historically had better flexibility than Keras and was much easier to debug/develop in than TensorFlow.
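A minimal sketch of what that debuggability difference means in practice: PyTorch's eager execution lets you treat the model as ordinary Python, which graph-mode TF1 famously did not.

```python
# Minimal sketch of eager-mode debugging in PyTorch: forward() is plain
# Python, so prints and breakpoints just work mid-model.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Ordinary Python here: inspect shapes, check for NaNs, drop into pdb.
        print(h.shape, torch.isnan(h).any().item())
        return self.fc2(h)

out = TinyNet()(torch.randn(4, 16))
```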
Aww, those existing offerings are overcomplicated as hell. A fresh look could yield a substantially simpler cloud developer experience, and that alone would compete well against the other cloud offerings.
When evaluating use cases where blockchain technology is leveraged to disintermediate, I came to the same conclusions as you. Technically novel? Yes, sure. But for what?