Humans run at approximately 100W; for two H100s you're looking at 600W-1400W. Plus humans have a much larger variety of capabilities. And they're more fun.
So you're paying ~10x the power cost to get worse, unverified, illogical answers faster with LLMs than with humans, answers which then have to be checked and revised by humans anyway.
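For concreteness, here's a minimal sketch of the power-ratio arithmetic behind that "~10x", using only the rough estimates above:

```python
# Back-of-the-envelope power ratio (all figures are the rough estimates above).
human_w = 100                      # whole-body human power draw, ~100W
gpu_w_low, gpu_w_high = 600, 1400  # two H100s, depending on SKU and load

print(f"GPU/human power ratio: {gpu_w_low / human_w:.0f}x to {gpu_w_high / human_w:.0f}x")
# -> "GPU/human power ratio: 6x to 14x", i.e. roughly 10x at the midpoint
```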
That is an interesting question. Where I live, the cost of electricity is 0.2276 €/kWh.
So the two H100s, at 1kW, cost 0.2276 × 24 ≈ €5.5 ($6) per day, which is close to my average daily grocery bill.
(My meals are powering my whole body though, which consumes about five times what my brain alone requires, so all in all the human still seems a bit more power-efficient than the GPUs.)
However, an LLM doing inference on two H100s can easily exceed 10x the content-generation rate of a single human.
Regarding output quality, it obviously depends on the task, but we are benchmarking the models on real-world metrics and they are already beating most humans.
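A minimal sketch of the arithmetic in this comment (the 1kW draw and electricity price are taken from above; the 10x throughput ratio is an assumption, not a measurement):

```python
# Daily electricity cost of two H100s at a steady 1kW draw,
# at the price quoted above (0.2276 EUR/kWh).
price_eur_per_kwh = 0.2276
gpu_kw = 1.0
daily_cost_eur = gpu_kw * 24 * price_eur_per_kwh
print(f"GPU daily cost: {daily_cost_eur:.2f} EUR")  # -> 5.46 EUR (~$6)

# Energy per unit of output: if the GPUs draw ~10x a human's power
# (1000W vs 100W) but also generate ~10x the content (an assumed
# throughput ratio), the per-output energy roughly cancels out.
human_w, gpu_w = 100, 1000
throughput_ratio = 10  # assumed GPU-to-human content-generation rate
energy_per_output_ratio = (gpu_w / human_w) / throughput_ratio
print(f"GPU energy per unit of output vs. human: {energy_per_output_ratio:.1f}x")  # -> 1.0x
```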
This is a pretty neat comparison that I haven't seen before. It's probably worth including the rest of the server required to run two H100s, since that isn't trivial either... Though I think the 100W might just be an estimate for the human brain, so maybe it's an equivalent comparison as-is.
I know this isn't the spirit you meant it in, but I'm also impressed with humanity: we've managed to develop something this capable (admittedly still significantly less reliable and capable than a person) at only an order of magnitude more power consumption.
The brain is literally liquid-cooled (blood); it could totally dissipate 100W of heat if it had to. (That said, other commenters are correct: the brain uses ~20W.)