A large telecommunications satellite operates at about 15 kW. A Blackwell GPU consumes about 1 kW, so you'd be at roughly 15 Blackwells per satellite. The cooling (radiator) surface has to scale linearly with the power dissipated, so there's little economy of scale.
The author was frustrated that the error message identified him as an organisation (one that had been disabled), and he mockingly refers to himself as the (disabled) organisation in the post.
At least, that’s my reading, but it appears to confuse about half of the commenters here.
I understand your logic, but I've found LLMs to be quite strong at C#. They make small mistakes, and the mistakes seem related to the complexity of what I'm doing, not the language itself.
I agree this is easy enough to follow, but I'd like to quibble about something else:
The comments should answer why you aren't using some kind of hash set to do a single pass over the data, and why it's OK to reorder the strings. One could reasonably expect Dedupe to return first occurrences in order.
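For concreteness, here's a minimal single-pass, order-preserving sketch of what I have in mind (the Dedupe name and the string element type are just assumptions for illustration):

    using System;
    using System.Collections.Generic;

    static class DedupeSketch
    {
        // One pass; keeps the first occurrence of each string, in order.
        // HashSet<string>.Add returns false if the value was already seen.
        public static List<string> Dedupe(IEnumerable<string> items)
        {
            var seen = new HashSet<string>();
            var result = new List<string>();
            foreach (var item in items)
                if (seen.Add(item))
                    result.Add(item);
            return result;
        }

        static void Main()
        {
            var deduped = Dedupe(new[] { "b", "a", "b", "c", "a" });
            Console.WriteLine(string.Join(", ", deduped)); // prints: b, a, c
        }
    }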
This can have another explanation as well: the moment a block is found, the miner immediately starts building on top of it but hasn't yet constructed a full block of transactions, since that takes a bit of time to compute and distribute. In that window, a new block can be found.
Blocks are Merkle trees; only the head transaction contains the global seed. So, to mine a block, one needs to walk the Merkle tree up from the head and then finish the work with the small amount of data in the block header.
Thus, the time spent mining a block is directly dependent on the logarithm of the number of transactions in the block.
If one can mine a block with 3000 transactions (11-12 hashes to the header) in 10 minutes, one can mine a block with one transaction (1 hash to the header) about ten times as fast.
The construction of the block is negligible compared to the complete block mining time.
>If one can mine a block with 3000 transactions (11-12 hashes to the header) in 10 minutes, one can mine a block with one transaction (1 hash to the header) about ten times as fast.
Huh? Surely the attempts for both take exactly the same amount of time after you've initially constructed the block; you're calculating only a single hash for each attempt.
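To make that concrete, here's a toy sketch (assumed names, simplified difficulty check, not real Bitcoin serialization): the transactions are folded into the Merkle root once when the header is built, and each mining attempt then re-hashes only the fixed-size 80-byte header with a fresh nonce, so the per-attempt cost is independent of the transaction count.

    using System;
    using System.Security.Cryptography;

    class MiningSketch
    {
        static void Main()
        {
            // 80-byte header: version, prev block hash, Merkle root, time, bits, nonce.
            // The Merkle root is computed once, before the loop; it never changes per attempt.
            var header = new byte[80];
            using var sha = SHA256.Create();

            for (uint nonce = 0; nonce < 1_000_000; nonce++)
            {
                // The nonce occupies the last 4 bytes of the header.
                BitConverter.GetBytes(nonce).CopyTo(header, 76);
                // Each attempt: double SHA-256 of the same fixed-size header.
                byte[] hash = sha.ComputeHash(sha.ComputeHash(header));
                // Toy difficulty check: require two leading zero bytes.
                if (hash[0] == 0 && hash[1] == 0)
                {
                    Console.WriteLine($"Found nonce {nonce}");
                    break;
                }
            }
        }
    }

Changing the transaction set only changes the Merkle root bytes inside the header; the per-attempt work stays the same.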
Maybe, but buffer overflows would occur in assembler written by experts as well. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer.
I even believe it likely that the C programmer would write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster, but it will contain more bugs (IME).
This doesn't sound like a good idea to me.