Hacker News | new | past | comments | ask | show | jobs | submit | ForHackernews's comments

Doesn't show the comparative energy waste of bitcoin?

This source[0] says

> One Bitcoin now requires 854,400 kilowatt-hours of electricity to produce. For comparison, the average U.S. home consumes about 10,500 kWh per year, according to the U.S. Energy Information Administration, April 2025, meaning that mining a single Bitcoin in 2026 uses as much electricity as 81.37 years of residential energy use.

[0] https://www.compareforexbrokers.com/us/bitcoin-mining/
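The arithmetic in the quote checks out. A quick sanity check using only the figures given in the quoted source:

```python
btc_kwh = 854_400           # electricity to mine one bitcoin (quoted source)
home_kwh_per_year = 10_500  # average U.S. home annual consumption (EIA, per the quote)

years = btc_kwh / home_kwh_per_year
print(round(years, 2))      # 81.37
```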


About 1,200 kWh per transaction, currently[0]. I wrote about this back in 2022, when it was about 2,200 kWh per transaction[1].

Edit: made a chart with this data, adding in a bitcoin transaction[2]

[0]: https://digiconomist.net/bitcoin-energy-consumption

[1]: https://rollen.io/blog/crypto-climate/

[2]: https://imgur.com/a/ggAGylW


LLMs cannot, as you put it, "properly, correctly think".

So-called reasoning models are hallucinating, their self-reported "reasoning" does not reflect their inner state https://transformer-circuits.pub/2025/attribution-graphs/bio...

(before someone comes at me, yes, humans can also lie about their inner state but we are [usually] aware of it. Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination)


> LLMs cannot, as you put it, "properly, correctly think".

"My theory trumps your experience." ... okay!

You'll keep working with what you have and I'll keep working with what I have.

> Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination

Yes and no. Humans have the capability of doing so, but all evidence suggests it rarely happens.

I have a huge background in psychoanalysis and neurolinguistic programming. The lack of evidence you perceive doesn't stem from incapability, but from lack of exposure to evidence proving you wrong ... and I'm not going to give it to you, because that'd be dumb.

If you don't want to believe me, that's not my problem.


But here at HN we have also historically called your experience "anecdata" and taken it with a grain of salt. Don't take offense. Provide more data.

I humbly suggest that a more hacker response would be, "That's really interesting that my experience doesn't agree with that study. Let's figure out what's going on."


I linked you a paper from one of the leading AI shops in the world demonstrating that the "Chain of Thought" reported doesn't match up with the actual activation inside the model, and you replied that you're an expert on some human psych stuff that may or may not even be real[0].

Forgive me if I don't immediately bow to your expertise.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC11293289/


They are not a major OEM, but the Hiroh phone is going to offer hardware cutoff switches and a de-googled OS: https://www.notebookcheck.net/Murena-taking-pre-orders-for-t...

I think this is great news, but I thought GrapheneOS considered unlocked bootloaders to be a terrible security risk? What's changed?

Unlocked bootloaders are mandatory to install GrapheneOS, but so is the ability to re-lock the bootloader.

Not if it comes preinstalled though. Isn't that the point of the partnership?

Doesn't seem to be; the announcement only talks about GrapheneOS compatibility.

It has always been a hardware requirement to be able to unlock the device, install GrapheneOS and lock the device again. Verified boot has been a requirement since it was introduced for Pixels and is the main benefit of locking the device. There are additional security features enabled by verified boot. The overall hardware requirements are listed at https://grapheneos.org/faq#future-devices.

You always have to temporarily unlock your bootloader to install graphene.

The key point is being able to lock it again after installation.


Counterpoint: No one ever gets fired or goes to jail when big tech firms break the law. Companies will put out an apology, pay whatever small fine is imposed, and continue with illegal AI usage at scale.

You could try https://iode.tech/iodeos/ or https://e.foundation/e-os/ but neither is as minimalist as you'd like.

Rust is harder for the bot to get "wrong" in the sense of running-but-does-the-wrong-thing, but it's far less stable than Go and LLMs frequently output Rust that straight up doesn't compile.

LLMs outputting code that doesn't compile is the failure mode you want. Outputting wrong code that compiles is far worse.

Setting aside the problems of wrong-but-compiling code, wrong and non-compiling code is also much easier to deal with. For training an LLM, you have an objective fitness function to detect compilation errors.

For using an LLM, you can embed the LLM itself in a larger system that checks its output and either re-rolls on errors, or invokes something to fix the errors.
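That outer loop is simple to sketch. The names here (`fake_llm`, `fake_rustc`) are toy stand-ins for a real model call and a real compiler invocation; the point is only the re-roll-with-feedback structure:

```python
def generate_until_valid(generate, check, max_attempts=3):
    """Re-roll `generate` until `check` accepts its output.

    `generate(feedback)` returns candidate code; `check(code)` returns
    (ok, errors). On failure, the errors are fed back into the next attempt.
    """
    feedback = None
    for _ in range(max_attempts):
        code = generate(feedback)
        ok, errors = check(code)
        if ok:
            return code
        feedback = errors
    raise RuntimeError("no compiling candidate after max_attempts")

# Toy demo: a "model" whose first attempt is broken, but which produces
# valid code once it sees an error message.
def fake_llm(feedback):
    return "fn main() {}" if feedback else "fn main() {"

def fake_rustc(code):
    ok = code.count("{") == code.count("}")
    return ok, ("" if ok else "error: unclosed delimiter")

print(generate_until_valid(fake_llm, fake_rustc))  # fn main() {}
```

In a real system, `check` would write the candidate to a temp file and shell out to `rustc` (or `cargo check`), returning its stderr as the feedback.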


If you use the stable version of Rust, it's stable. There's a very strong commitment from the Rust folks on that specific point.

The only thing I see is the LLM not being aware of new features, so I have to specify the Rust version.

Having a rust-toolchain.toml file and the edition in Cargo.toml should achieve this - and both of these should be done anyway!
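For anyone unfamiliar, a minimal sketch of that pinning (the version numbers are illustrative, not a recommendation):

```toml
# rust-toolchain.toml — pins the toolchain for everyone building the crate
[toolchain]
channel = "1.77.0"

# Cargo.toml — the edition fixes which language idioms and features apply
[package]
name = "example"
version = "0.1.0"
edition = "2021"
```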

/e/OS also supports locked bootloaders for devices that have official builds (a smaller subset than the ones with community builds)

I have used /e/OS for years and it's been good. ¯\_(ツ)_/¯

On the other hand, I would trust a randomly chosen organization more than the world's largest adtech firm.
