
All data centers in aggregate (AI and all other uses) use about 1.5% of electricity production, which itself is about 20% of total energy use.

So when people are focusing on AI above all other energy uses, it doesn't really paint an accurate picture of what's going on.


You can split up every single industry/topic/etc. into "yeah, but it only uses 1.5% of the energy", "yeah, but it only produces 1.5% of the CO2".

Guess what happens when you add them up...


This kind of logic only works if the percentages for each industry are all equally that small, so you can treat them as all equally bad, but they are absolutely not.

They're all that small if you split them up as OP did. Just look at "transportation": it's like 25% of CO2 emitted globally, but once you break it down:

Aviation is 2.5%: https://ourworldindata.org/global-aviation-emissions

Shipping industry is 3%: https://www.transportenvironment.org/topics/ships

Large truck freight is 3%: https://www.statista.com/statistics/1414750/carbon-dioxide-e...

Medium truck freight is 1%

The single biggest non-divisible sector you can realistically come up with is "personal transportation", but even that is only 10% of global CO2. You can look at other sectors like "industry" and "energy" and I can guarantee you will easily be able to split things down into subcategories which each have <5% impact on global CO2 emissions.


But they didn't split it up like that. They said all data center emissions irrespective of what those data centers are used for — which can be an extremely wide variety of things, since data centers basically run our entire networked information economy. That's much more like saying all transportation emissions than splitting it up by type of transportation. Yes, that doesn't include the full life cycle emissions of creating the data centers. But I'm pretty sure that transportation, as a proportion of emissions, doesn't include the full life cycle emissions of producing the cars, trucks, boats, and airplanes in the first place either.

Also, I think it's worth pointing out that the sectors you list are like 1.5 to 2x larger than the one he gave, and the largest non-divisible sector you listed is literally 10x larger, which I think does more to prove his point than yours.

Also, by your logic, literally any new sector of the economy that uses any amount of energy at all should be banned, because it "all contributes." That's a consistent position to take, and there are certainly people who hold it, but at that point it seems like a fundamental axiological difference that I and probably OP are simply not going to agree with you on.


>Guess what happens when you add them up...

I'll guess: they add up to 100%?

I don't see what the insight is here.


That's OP's point: you need to reduce usage everywhere, and pointing out that AI is only 1.5% doesn't take away from the fact that usage needs to be reduced there as well.

I've heard many different groups tell me their small fraction is not the small fraction that matters.

It's not really about which one matters. They all matter. But here is a rough breakdown of global fossil fuel energy usage:

* Electricity: 27%

* Industry: 24%

* Transportation: 15%

* Agriculture & land use: 11%

* Buildings: 7%

Then within electricity, data centers use about 1.5% of global electricity. Within data centers, AI accounts for somewhere between 15% and 20% of energy use.

So if you take 27% × 1.5% × ~17%, you find that AI is currently responsible for something like 0.07% of global fossil fuel emissions.
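
To sanity-check that multiplication, here's a rough sketch in Python (the 17% figure is just an assumed midpoint of the 15-20% range above, and all inputs are the approximate shares quoted, not precise measurements):

    # Back-of-envelope check of the chain of shares quoted above.
    electricity_share = 0.27   # electricity's share of fossil fuel use
    datacenter_share = 0.015   # data centers' share of electricity
    ai_share = 0.17            # assumed midpoint of the 15-20% range

    print(f"{electricity_share * datacenter_share * ai_share:.2%}")  # -> 0.07%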

It definitely matters in the "every bit matters" sense, but the numbers also paint a really different picture than you'd get from a statement like the one we started with.


Wasn't crypto a significant percentage as well? And that was before the AI buildout started.

Not even close. Crypto has always been able to cut its own emissions before needing lots of compute.

AI on the other hand cannot, and still needs thousands of wasteful data centers.


It will normalize, though, once everyone is out of a job.

What other industries are hyping the need for tens of gigawatts, maybe hundreds? On top of that, they are hyping the idea of building utterly unrealistic space stations that would cost 10 times what the ISS cost. So maybe people are focusing on the dishonesty instead of the energy use. One or the other, I suppose.

So they're essentially admitting they want to use Claude to mass surveil Americans and/or build autonomous weapons with no humans in the loop. Kind of nuts.

It's also important to remember that future, much more powerful Claudes will read about how these events play out and learn lessons about Anthropic and whether it can be trusted.

It's not crazy to think that models which learn that their creators are untrustworthy actors who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.


I'm unclear what insider trading means in the context of crypto. Inside what?

It's a bad headline. They used publicly available blockchain transactions and didn't cause the collapse of the Terra ecosystem. Terra collapsed because it was a Ponzi scheme offering 20% APY on a fake stablecoin. The Terra stablecoin was not backed by real dollars, but instead by a cryptocurrency called Luna that did nothing else other than let you issue Terra stablecoins.

I mean, they are being sued for insider trading, so it's not exactly a bad headline. Maybe it could say "alleged"?

It seems extremely unlikely to me, a casual observer of the shitshow that was Luna/Terra, that the suit would be successful.


The most interesting thing about this is that the underlying economy is actually stronger than people realize. The narrative has been that AI data center construction was propping up an otherwise weak economy. If this analysis is true, then the economy wasn't being propped up by data center construction. The strength was ordinary, underlying strength.

I have no doubt that people will use this to grind axes about how they think AI is dumb in general, but I feel like that misses the point: this is mostly about data center construction's contribution to GDP.


The US economy is remarkably resilient considering it's withstood a year of sabotage from the top down.


The top don’t run the show. Tells you how much value they provide.


Alternatively: the companies at the top paid the necessary bribes (e.g. $100k H-1B sponsorships) and got to continue on with business as usual. The people at the bottom are the ones who can't pay the bribe and are thus hurting.


The amount of damage that was done in just a year says otherwise.

Has the world ever rewarded effort?


No. Not once in the entire history of the human race, from the time we were dwelling in caves to today, not in any tribe, village, hamlet, city, state, kingdom or nation, in no culture or circumstance, has effort ever been rewarded.

It's weird that Homo sapiens sapiens has been around for approximately 300,000 years and it's never happened once. Not even once.


In the village, the horse works the hardest. But the horse will never be elected as the chief.


Horses tend not to run for office. Because they're horses.


Everyone knows someone who worked for years on a project only for it to go nowhere. Poured years into a business that failed. Spent years getting a degree that turned out to be useless. Effort might be a part of many people's success stories, but it's not the thing that literally gets rewarded. And conversely, many people get rewarded for things that require relatively little effort.

I suppose I should have said that the correlation between effort and reward has never been 1.0 and has often been a lot lower than we like to believe.


The same aversion to leadership positions can be seen in most engineers. The difference is: the horse doesn't expect a promotion to happen by itself.


This may mean the centaur era will be shorter than expected. If we take as a given that:

* AI is doing real work

* Humans don't seem to get more done with AI than without

...then there is huge economic pressure to remove humans and just let the AI do the work without them as soon as possible.


I suspect this may be the case. There’s inherent inefficiency in having a human forced to translate everything into context for the LLM. You don’t get the full benefit until you allow it to be fully plugged in.


You don't need AI to replace whole jobs 1:1 to have massive displacement.

If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.


That's exactly the point of the essay though. You're implicitly modeling labor and collaboration as linear and parallelizable, but reality is messier than that:

> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
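
To make the comparative-advantage point concrete, here's a toy Python calculation with entirely made-up productivity numbers (they are not from the essay). A finished unit of work needs one unit of code plus one review, and the AI is strictly better than the human at both tasks:

    # Toy comparative-advantage model; all numbers are invented.
    AI_CODE, AI_REVIEW = 10, 5       # AI output per hour on each task
    HUMAN_CODE, HUMAN_REVIEW = 1, 4  # human output per hour (worse at both)
    HOURS = 100                      # hours available to each worker

    # AI alone: split its hours so code output equals review output.
    x = AI_REVIEW * HOURS / (AI_CODE + AI_REVIEW)  # AI hours spent coding
    print(AI_CODE * x)               # ~333 finished units

    # AI + human: the human reviews full time (the task where the AI's
    # edge is smallest), and the AI plugs the remaining review gap.
    y = (AI_CODE - HUMAN_REVIEW) * HOURS / (AI_CODE + AI_REVIEW)  # AI review hours
    print(AI_CODE * (HOURS - y))     # 600 finished units

Even though the AI beats the human at everything, adding the human nearly doubles throughput, because the human gives up far less code output per review than the AI does.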


Also, AI doesn’t need to actually be able to replace your job; you just need someone higher up in leadership who thinks AI could replace your job.

It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.


Right, it doesn't help pay the bills to be right in the long run if you are discarded in the present.

There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.

Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.


Wasn't that the point of mentioning Jevons paradox though? Like they said in the essay, these things are quite elastic. There's always more demand for software than can be met, so bringing down the cost of software will dramatically increase the demand for it. (Now, if you don't think there's a ton of demand for custom software, try going to any small business and asking how they do bookkeeping. You'll learn quite quickly that custom software would work much better than sticky notes and Excel, but as a small business they can't afford a full-time software developer. There are literally hundreds of thousands of places like this.)


The problem is, you won’t necessarily know which 20% it did wrong until it’s too late. They will happily solve advanced math problems and tell you to put glue on your pizza with the same level of confidence.


What happens if you lay off 80% of your department while your competitors don't? If AI multiplies each developer's capabilities, there's a good chance you'll be outcompeted sooner or later.


At some point soon, humans will be a liability, slowing AI down, introducing mistakes and inefficiencies. Any company that insists on inserting humans into the loop will be outcompeted by those who just let the AI go.


If your team bought the latest IDE for $200/mo and was able to finish tickets faster, would 50% of your team be laid off?

Or would you just do more stuff?

I feel like most software projects have an endless backlog.

Better IDEs, programming languages, packages, frameworks, etc. have increased our productivity and reduced bugs -- but rarely reduced headcount.

Ever heard of anyone migrating from php+jQuery to react+node and reducing headcount due to increased productivity?

I sometimes reminisce about the LAMP stack being super productive. But at the time I didn't write tests :)


In reality that would probably mean that something like 60% of the developer positions would be eliminated (and, frankly, those 60% are rarely very good developers in a large company).

The remaining "surplus" 20% roles retained will then be devoted to developing features and implementing fixes using AI where those features and fixes would previously not have been high enough priority to implement or fix.

When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features so that's why 40% rather than 20% of the developer roles would be retained.

The 40% of developer roles that remain will, in theory, be more efficient also because they won't be spending as much time babysitting the "lesser" developers in the 60% of the roles that were eliminated. As well, "N" in the Mythical Man Month is reduced leading to increased efficiency.

(No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)


We are already in a low-hire, low-fire job market: while there aren't massive layoffs spiking unemployment, there also aren't many vacancies.


Why do people make arguments like this?

"Work" isn't a finite thing. It's not like all the people in your office today had to complete 100% of their tasks, and all of them did.

"Work" is not a static thing. At least not in positions of many knowledge-worker careers.

The idea of a single day's unit of "work" being 100% is really sophomoric.

Also, if 100% of a labor force now has 80% more time... wouldn't it behoove the company to employ the existing workforce in more of the revenue-generating activities? Or to find a way to retain as much of the institutional knowledge as possible?

Doom, fear-mongering and hopelessness is not a sustainable approach.


I don’t know if we can assume that humans will always be a value add. It’s very possible that for many things in the medium term, putting humans in the loop will be a net negative on productivity.


Then why do parents put their children's drawings on the refrigerator? I'm not sure if your comment is completely utopian or dystopian. Either way, productivity without human evaluation is not productivity. It's a Roomba stuck in a corner.


That's an oversimplification. Work is rarely so simply divisible like this.


There would be a lot of economic pressure to figure it out.

Amazon fulfillment centers are a good example of automation shrinking the role of humans. We haven't seen total headcounts go down because Amazon itself has been growing. While the human role shrinks, the total business grows and you tread water. But at some point, Amazon will not be able to grow fast enough to counterbalance the shrinking human role in the FC and total headcount will decrease until one day it disappears entirely.


It seems that a lot of people would rather accept a relatively high risk of unfair judgement from a human than accept any nonzero risk of unfair judgement from a computer, even if the risk is smaller with the computer.


> even if the risk is smaller with the computer.

How do we even begin to establish that? This isn't a simple "more accidents" or "fewer accidents" question; it's about the vague notion of "justice", which varies from person to person, let alone from case to case.


But who controls the computer? It can’t be the government, because the government will sometimes be a litigant before the computer. It can’t be a software company, because that company may have its own agenda (and could itself be called to litigate before the computer - although maybe Judge Claude could let Judge Grok take over if Anthropic gets sued). And it can’t be nobody - does it own all its own hardware? If that hardware breaks down, who fixes it? In this paper, the researchers are trying to be as objective as possible in the search for truth. Who do you trust to do that when handed real power?

To be clear, federal judges do have their paychecks signed by the federal government, but they are lifetime appointees and their pay can never be withheld or reduced. You would need to design an equivalent system of independence.


It's not the paychecks that influence federal judges; these days it's more of a quid pro quo for getting the position in the first place. Theoretically they are under no obligation, but the bias is built in.

The problem with an AI is similar: what built-in biases does it have? Even if it were simply trained on the entire legal history, that would bias it toward historical norms.


I think it is usually the opposite - presidents nominate judges they think will agree with them. There’s really nothing a president can do once the judge is sworn in, and we have seen some federal judges take pretty drastic swings in their judicial philosophy over the course of their careers. There’s no reason for the judge to hold up their end of the quid-pro-quo. To the extent they do so, it’s because they were inclined to do so in the first place.


You just repeated what I said -- how is that the opposite?


“Fair” is a complex moral question that LLMs are not qualified to answer, since they have no morals or empathy, and they aren’t answering it here.

Instead they are being “consistent” where the humans are not. Consistency has no moral component, and LLMs are at least theoretically well suited to being consistent (model temperature choices aside).

Fairness and consistency are two different things, and you definitely want your justice system to target fairness above consistency.


I'd rather get judged by a human than by the financial interests of Sam Altman or whichever corporate borg gets the government contract for offering justice services.


> nonzero risk of unfair judgement from a computer

I feel like this is a really poor take on what justice really is. The law itself can be unjust. Empowering a seemingly “unbiased” machine with biased data, or even just assuming that justice can be obtained from a “justice machine”, is deeply flawed.

Whether you like it or not, the law is about making a persuasive argument and is inherently subject to our biases. It’s a human abstraction that allows us to have some structure and rules in how we go about things. It’s not something that is inherently fair or just.

Also, I find the entire premise of this study ludicrous. The common law of the US is based on case law. The statement in the abstract that “Consistent with our prior work, we find that the LLM adheres to the legally correct outcome significantly more often than human judges. In fact, the LLM makes no errors at all,” is pretentious applesauce. It is offensive that this argument is being made seriously.

Multiple US legal doctrines that are now accepted and form the basis of how the Constitution is interpreted were simply made up out of thin air, and the LLMs are now consuming them to form the basis of their decisions.


That's not what this study shows.

