ITEP gets the number by dividing Meta’s current federal tax expense ($2.82B) by its domestic pretax income ($79.64B), which is about 3.5–3.6%.
But Meta’s total 2025 GAAP effective tax rate was actually 29.6%, because it also booked a huge $15.93B charge tied to the Corporate Alternative Minimum Tax and valuation allowances.
Both numbers are wrong. Meta's actual income before tax was about $83 billion and its actual tax cost was $25 billion. This is an effective tax rate of ~30%.
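A quick sketch of the two divisions being argued about above, using the figures quoted in this thread (Meta FY2025 numbers as stated by the commenters, not independently verified):

```python
# ITEP-style rate: current federal tax expense / domestic pretax income
current_federal_expense = 2.82   # $B, per the thread
domestic_pretax_income = 79.64   # $B, per the thread
itep_rate = current_federal_expense / domestic_pretax_income
print(f"ITEP-style rate: {itep_rate:.1%}")   # ~3.5%

# Reply's rate: total tax cost / total income before tax
total_tax_cost = 25.0        # $B, per the reply
total_pretax_income = 83.0   # $B, per the reply
gaap_rate = total_tax_cost / total_pretax_income
print(f"Effective rate: {gaap_rate:.1%}")    # ~30.1%
```

Both calculations are internally consistent; the disagreement is over which numerator and denominator are the right ones to use.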
I have no idea what you mean by "AGI, however, is mathematically impossible."
Further, your point about political pushback is short-sighted. As AI becomes more lucrative there will be more impetus to "pay" locations to host data centers, and once that becomes too expensive, space is clearly the next answer.
The development of AGI assumes zero constraints, when constraints exist at every layer of the stack. That's why it's mathematically impossible.
In a system driven by capital, manufacturers can ramp to an extent, but they generally can't ramp exponentially because of the dependencies they have.
When you ramp one layer of the stack, the other layers are pressurized. We're seeing a small preview of that now with memory pricing. But these break points for AGI are everywhere: power capacity, power infrastructure, DC labor, cooling systems, memory, motherboards, GPUs. All of these have dependencies that cannot be scaled exponentially, or quickly. As you pressure each of these dependencies, prices rise exponentially.
Let's take memory, for instance: it is merely one block in the Jenga tower, but it's a good example. Memory is already at close to 100% capacity. Spinning up new capacity is highly constrained, and money can't really make it faster. Lead times are 4+ years on new plants, which cost billions.
The same is true for other components, and in some cases the situation is worse.
"Won't happen for 4+ years" and "mathematically impossible" are quite different. Given that humans apparently exhibit the "GI" part of "AGI", I find "mathematically impossible" difficult to believe. "Extremely unlikely with current LLM architecture", sure, but that's a very different statement from "mathematically impossible".
If you are making a prediction on the viability of AGI assuming that an entirely new technology will make the efficiency problem of LLMs moot then you're essentially engaging in mysticism, aren't you?
It is correct to say it is mathematically impossible. Everyone making AGI claims relies on advances that are not even theoretical; they have not been discovered yet, and many scientists question whether they are even possible.
LLMs have hard and soft limits all over the place preventing AGI. You aren't gonna train and loop yourself to AGI because the compute does not exist, and will not exist.
My 4+ year point was for a single memory fab. Increasing capacity by merely 5% (generous assumption) takes 4 years and $10bn. It's starting to sound like the path to AGI in the current paradigm will cost infinite dollars and take infinite years of build-out.
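A back-of-envelope version of the build-out claim above, taking the commenter's own assumptions at face value (~5% capacity gain per fab, ~$10B and ~4 years per fab; these are the thread's figures, not verified industry data):

```python
# Thread's assumptions for a single new memory fab
capacity_gain_per_fab = 0.05   # fraction of current global capacity added
cost_per_fab_bn = 10           # $B per fab
years_per_fab = 4              # lead time (builds could run in parallel)

# How many fabs to merely double global capacity?
fabs_to_double = round(1.0 / capacity_gain_per_fab)
total_cost_bn = fabs_to_double * cost_per_fab_bn
print(f"{fabs_to_double} fabs, ~${total_cost_bn}B")  # 20 fabs, ~$200B
```

Even with fully parallel construction, that's a ~$200B, 4+ year program just to double one component of the stack, which is the scale of constraint the comment is pointing at.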
Even with a transformational efficiency breakthrough, you still have hard limits all over the place. Where are you going to store all the data? Memory constraints again.
Given that comparative advantage offers an off-ramp from this for much of what we currently understand as "economics", if the author is positing that we will be beyond this, then your response is missing the forest for the trees.
There is no indication that the surplus extracted by automated labour will be distributed to the population's advantage. If we look at how things are going at the moment, the trend is a further concentration of power and capital. And I don't see any reason why the billionaire class should give this up. You could, of course, make an argument for why things will be different this time.
If comparative advantage will not hold, then that's really something: no one understands what happens in that future, and proposing some random solution at this point is unbelievably premature.
Seriously Guardian, this has to be the least interesting question possible "if AI makes human labor obsolete", I mean FFS talk about a lack of understanding.
Given depreciation, this is expected. And... it's the intent of the law: to prompt capital investment.
What's the scandal, exactly?