> every miner would have to win every block to stay in business
To my understanding that'd be true if the energy cost were $2M per miner, but I think jpcfl was suggesting $2M total, i.e. that an individual miner would spend $100 on energy for a $100 expected return, which might be a 10% chance at $1000 or a 100% chance at $100.
Could adjust the $2M up slightly for whatever portion was done by crypto-mining malware or dabblers whose energy costs exceed their expected return, and down for the portion done by professional miners, who will be expecting a slightly positive return even after factoring in other costs.
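As a toy sanity check of that break-even arithmetic (a sketch only; the probabilities and dollar amounts are just the illustrative numbers from above):

fn main() {
    let energy_cost = 100.0_f64; // $ spent on energy per block attempt
    let ev_long_shot = 0.10 * 1000.0; // 10% chance at $1000
    let ev_sure_thing = 1.00 * 100.0; // 100% chance at $100
    // Both strategies return the $100 energy cost in expectation.
    println!("cost {energy_cost}, EV {ev_long_shot} or {ev_sure_thing}");
}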
If I spend $2M to mine $2M of gold, and the value of that gold increases by 2x over the next 10 years, then I think I could stay in business. That would actually be a pretty solid business.
The value of BTC has increased roughly 200x in the last 10 years. I don't anticipate it will do this again, but I'm sure some of these miners think it will continue to grow at some rate (plus, they have already capitalized on the last 10 years of growth).
That wasn't my point. My point was that they could stay in business.
My original point was that it was in the ballpark of $2M. Could be more, could be less, all depending on a number of variables--I believe one of those variables just doubled.
I'm not sure what the cost is now, but back in 2012-ish when I briefly looked into mining, it cost about $1 in energy to mine $1 worth of BTC. I used your logic and decided it didn't make sense to invest in mining, so I didn't. I wish I had, or at least purchased some BTC, but I was a broke college student just looking to capitalize on the hardware I already owned, and I didn't really know a thing about investing (other than investing in a 6 pack of beer to meet girls at parties--my ROI was not great, BTW, so I don't recommend this strategy).
I sympathize! I considered running a miner in 2012, with the positive side effect of heating my office. Decided I didn't want to listen to the fan noise for negligible returns.
> it cost about $1 in energy to mine $1 worth of BTC
This is the crux of it (we may be saying the same thing!). There is no 1:1 relationship between mining cost and reward on a single block. If there were, no one would do it, because they don't win every block. All the blocks you compete on cost the same to attempt, win or lose, but of course the blocks you don't win pay zero.
There's a (very) roughly 1:1 relationship between a miner's overall cost and reward, averaged over many blocks. If reward increases (BTC price spikes), more competition comes online and your win frequency drops.
So if the question is "how much energy (cost) did it take to mine this specific $2MM block", the answer is closer to "the average block reward multiplied by the winning miner's win frequency", which is e.g. 5% for one of the bigger miners (I did not check this block or this miner). This was a high-reward block, so the real miner cost might have been more like $50K. Less for energy alone.
But it's like the guy who buys a $10 lottery ticket every day. He needs to win a few hundred dollars per month to maintain the habit (gambling addiction notwithstanding!). Today he got lucky and won $700 on the $10 ticket.
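Putting the formula and the lottery analogy together in a quick sketch (all numbers are illustrative, not data about this block or miner):

fn main() {
    let avg_block_reward = 1_000_000.0_f64; // $, hypothetical network average
    let win_frequency = 0.05; // this miner wins ~5% of blocks
    // At roughly break-even economics, the miner's cost per block competed
    // is about its expected revenue per block:
    let cost_per_block = avg_block_reward * win_frequency; // ~$50K
    // Across the ~20 blocks between wins, that adds up to the average reward:
    let cost_per_win = cost_per_block / win_frequency; // ~$1M
    println!("per block: ${cost_per_block}, per win: ${cost_per_win}");
}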
> There is no 1:1 relationship between mining cost and reward on a single block
Ah, I see what you mean.
I think I've been mixing up my costs. When I wrote the original comment, I was thinking about the total energy cost for mining this block, i.e., the energy cost of all the nodes that worked on it. But the truth is, I have no idea about the economics of BTC mining. I was just being facetious :)
Safe Rust has no undefined behavior. Undefined behavior doesn't mean "no crashing"; it means that the semantics of the program are undefined.
Rust's semantics are to abort on a stack overflow. Languages like C and C++ have no such semantics; they may abort, or they may continue running and produce gibberish.
The fact that this program results in reading/writing an unmapped memory address means it’s doing an out-of-bounds access. It segfaults on macOS because the runtime/OS has allocated the stack such that the overflow results in a bad memory access, but that is a behavior of the runtime/OS/hardware, not the language.
I guarantee I could exploit this on a system that does not have virtual memory, or a runtime that does not have unmapped addresses at the end of the stack, to, say, manipulate the contents of another thread’s stack. Therefore, this behavior is undefined.
The language runtime can require that the OS & hardware always result in an exception on stack overflow (or, alternatively, compile in explicit checks for it). Running the program in an environment without that is, technically, just as wrong as running it on a system where integer addition does multiplication.
Now perhaps this means that there are real rust deployments that are "wrong", but that shouldn't include regular sane standard systems, and embedded users should know the tradeoffs.
.LBB3_1:
sub rsp, 4096           # step the stack pointer down one page
mov qword ptr [rsp], 0  # touch the page, forcing a fault if it's unmapped
cmp rsp, r11            # r11 holds the target (final) stack pointer
jne .LBB3_1             # keep probing page by page until we get there
That's a loop at the start of your 'main' that probes the stack specifically to ensure a segfault definitely happens if your array didn't fit on the stack.
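For reference, a program shaped like this (a hypothetical reconstruction; the array size is whatever the original snippet used) makes rustc emit that kind of probe loop, which you can inspect with `rustc -O --emit asm main.rs` (the exact probe sequence varies by rustc version and target):

fn main() {
    // Multi-megabyte local array: the frame is far larger than a guard
    // page, so the compiler inserts page-by-page stack probes. At runtime
    // this will likely abort with a stack overflow on a default 8 MiB stack.
    let var = [0u8; 8 * 1024 * 1024];
    // Keep the array live so the optimizer can't remove it (and the probes).
    std::hint::black_box(&var);
}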
> It segfaults on macOS because the runtime/OS has allocated the stack such that the overflow results in a bad memory access, but that is a behavior of the runtime/OS/hardware, not the language.
Stack overflows are checked in C on macOS not because of guard pages but because the compiler emits stack checks (with cookies). Probably the same is true here.
> I guarantee I could exploit this on a system that does not have virtual memory, or a runtime that does not have unmapped addresses at the end of the stack, to, say, manipulate the contents of another thread’s stack. Therefore, this behavior is undefined.
Software stack checking does not guarantee protection from stack overflows wreaking havoc. E.g., your thread could blow its stack, then get preempted before the stack checker can run.
Mandating guard pages/MPU protection would rule out targeting embedded platforms which lack sufficient hardware support.
What does preemption change here? Before the stack checker has finished, nothing else should hold a reference to any of the yet-unchecked stack. That's plenty trivial to ensure. (unless you mean preemption somehow breaking the stack checker itself, in which case, well, that's a broken stack checker and/or preemption, and should be fixed)
If you can't have hardware support, it's trivial for the compiler to do it in software - just an "if (stack_curr - stack_end < desired_size) abort();". I can't imagine a platform where you cannot reasonably get a lower bound for the range of stack available. Worst-case, you ditch the architectural stack pointer and manage your own stack on the heap, if that's what you need to ensure correct Rust behavior on your funky platform (or accept the non-compliant compromise of no stack checking).
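A purely illustrative version of that check in Rust might look like the following; `stack_end` stands in for a lower bound that the OS or runtime would have to supply (a sketch, not how any real compiler implements its prologue check):

fn ensure_stack(stack_end: usize, desired_size: usize) {
    // Approximate the current stack pointer with a local's address.
    let marker = 0u8;
    let stack_curr = &marker as *const u8 as usize;
    // Assumes a downward-growing stack, as on most platforms.
    if stack_curr.saturating_sub(stack_end) < desired_size {
        std::process::abort();
    }
}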
> What does preemption change here? Before the stack checker has finished, nothing else should hold a reference to any of the yet-unchecked stack.
If your thread overflows the stack, it could start writing into memory for which it does not hold a reference. If the thread is preempted before the stack checker can run (see below*) and detect the overflow, and another thread runs which accesses the now-corrupted memory, then you're hosed.
> just an "if (stack_curr - stack_end < desired_size) abort();"
That's not how the compiler-emitted stack checking works AFAIK (*I believe it uses canaries on the stack, which are checked at certain points in the code). But I could see this solving the problem. Basically, for every instruction that manipulates the stack pointer (function calls, allocas, and, on some archs, interrupts that use the current stack), the resulting address would need to be checked. That would be costly and require OS awareness, but I think it would be safe. Is this an option that the compiler provides? It would save me a lot of time debugging.
Canaries are a separate unrelated thing solving a different problem - buffer overruns, i.e. writing out-of-bounds. (canaries are a best-effort thing and don't guarantee catching all such problems, and they're also useless for safe Rust where unchecked OOB indexing is not a thing; whereas stack overflow checking can be done precisely)
In my sibling comment showing the assembly that your Rust program generates, it is writing a "0" every 4096 bytes of the stack range that is intended to be later used as the buffer (this "0" is independent from the "0" in your "[0; N]"; it's just an arbitrary value to ensure that the page is writable). It does this, once, at the very start of the function, before everything else (i.e. before the variable "var" even exists, much less is accessible by anything or even initialized). This is effectively exactly the same as my "if (stack_curr - stack_end < desired_size) abort();", just implemented via guaranteed page faults. You can enable this on clang & gcc with -fstack-clash-protection where supported.
Indeed, stack checking can have overhead (so do other requirements Rust makes!), but in general it's not that large. If you don't have stack-allocated VLAs, it's a constant amount of machine code at the start of every function, checking that all possible stack usage the function may do is accessible. And on systems with guard pages (i.e. all of non-embedded) the overhead is trivially none for functions with frame size below 4096 bytes (or however big the guard range is; and for larger frame sizes the overhead of this check will be minuscule compared to whatever actually uses the massive amount of stack).
I don't know if it's technically UB or well defined. The crash is a SEGFAULT and not a panic/abort, but it's probably a SEGFAULT due to guard pages. Still, it's possible to evade guard pages so if you access var[X] such that X points to the heap, it's possible you're reading aliased memory which would be UB in safe Rust.
EDIT: Going to take it back. I'm unable to create a situation where I create a large stack array that doesn't result in an immediate stack overflow. I even tried nightly MaybeUninit::uninit_array but that crashed explicitly with a "fatal runtime error: stack overflow" so it seems like the standard library has improved reporting instead of the old SEGFAULT. So no UB.
Panics are not quite the same as aborts in Rust. Most notably, a panic can be caught and execution can resume so as to gracefully terminate the application, but an abort is an immediate termination - a go-to-jail, do-not-pass-Go kind of situation.
An out-of-bounds access in Rust will result in a panic, but a stack overflow is an abort.
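For example (a minimal sketch; assumes the default panic=unwind setting):

use std::panic;

fn main() {
    // The out-of-bounds index panics, and the panic can be caught:
    let result = panic::catch_unwind(|| {
        let v = vec![1, 2, 3];
        v[99] // index out of bounds -> panic
    });
    assert!(result.is_err()); // we recovered and can keep running

    // A stack overflow, by contrast, aborts the process outright;
    // catch_unwind never gets a chance to intercept it.
}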
The stack guards would normally be setup by the system runtime (e.g. kernel in the case of the main thread stack, libc for thread stacks), not Rust's runtime. Likewise, stack probes that ensure stack operations don't skip guard pages are usually (always?) emitted by the compiler backend (e.g. GCC, LLVM), not Rust's instrumentation, per se.
In this sense Rust isn't doing anything different than any other typical C or C++ binary, except that automagically hijacking SIGSEGV (or any other signal) from non-application code as Rust does is normally frowned upon, especially when it's merely for aesthetics--i.e. printing a pretty message in-process before dying. Also, attempting to introspect current thread metadata from a signal handler gives me pause. I'm not familiar enough with Rust to track down the underlying implementation code. I presume it's using some POSIX threads interfaces, but POSIX threads interfaces aren't async-signal safe, and though SIGSEGV would normally be sent synchronously (sometimes permitting greater assumptions about the state of the thread), that doesn't mean the Rust runtime isn't technically relying on undefined behavior.
EDIT: To get the guard page range it's using pthread_self, pthread_getattr_np, pthread_attr_getstack, and friends, of which only pthread_self is async-signal safe. See https://github.com/rust-lang/rust/blob/411f34b/library/std/s...
I have no concrete evidence to believe the reliance isn't safe in practice on the targeted platforms (OTOH, I could imagine the opposite), but it's a little ironic that it's depending on undefined behavior.
The runtime thing is the easy part. I was wondering about the stack probes, which require LLVM support. There's a comment in the sources that suggest it's still x86-only, but that may be outdated:
“
//! Finally it's worth noting that at the time of this writing LLVM only has
//! support for stack probes on x86 and x86_64. There's no support for stack
//! probes on any other architecture like ARM or PowerPC64. LLVM I'm sure would
//! be more than welcome to accept such a change!
”
I don't see where those methods are getting called from a Unix signal handler, but the code is complex enough that it's easy to miss, especially perusing it on GitHub instead of in vscode.
AFAICT those methods are called from `guard::current`. In turn, `guard::current` is used to initialize TLS data when a thread is spawned before a signal is generated (& right after the signal handler is installed): https://github.com/rust-lang/rust/blob/26907374b9478d84d766a...
It doesn't look like there's any UB being relied upon, but I could very easily be misreading. If I missed it, please give me some more pointers, because this should be a GitHub issue if that's the case - calling non-async-signal-safe functions from a signal handler can typically result in a deadlock, which is no bueno.
x86_64 macOS has tier 1 Rust platform support, which I believe means it's guaranteed that you get a crash on stack overflow and you can't evade stack protection in safe Rust.
It's not possible on all platforms, hence the tiers.
Apparently ARM64 macOS has tier 2 Rust platform support, which might mean that this is not true there, but maybe safe Rust has some different, unrelated soundness issue on this platform.
I only have very surface knowledge about the tier stuff, so maybe someone can correct me.
I also tend to remember where a tidbit of information was physically. For some reason, details like "~50 pages back around the end of the paragraph in the top-right corner of the right page" will stick in my brain. Then I can quickly scan and parse out keywords to find what I'm looking for. This doesn't work reliably with e-books for me.
This is exactly the theory I proposed in a previous post about handwriting aiding memory - that it's the paper and spatial memory assisting, not the handwriting itself.
I did this often during school, looking up information in textbooks - I'd remember roughly the chapter/page, recognize the exact page, and know where on the page something was, even if I couldn't directly remember the thing itself.
That’s how I “studied” physical chemistry. Before the test, I’d flip through the book and review the equations. My brain encoded each one as a location on the page. Then at the beginning of the test, I’d scribble down all of the relevant equations from the pictures in my mind. During the test, I’d flip back to my index of equations to solve the problems.
> Barack Obama ran for president on the platform that marriage was between a man and a woman. Is he homophobic?
No; nationwide gay marriage was legalized during his administration, with his support. He's a straight person too, and that doesn't stop him from empathizing with the gay community and promoting marriage equality.
> Dr. Jay Bhattacharya has spoken out against vaccine-related policies. Is he anti-vax?
Vaccine-related policies are not vaccines. So you could say he's anti-vaccine-politics, I guess. I've never heard of the guy before, though, so I can't really say for sure where he stands.
> I come to Hacker News to avoid this kind of divisive rhetoric.
I'm sorry to hear that you're disappointed by the curation of a user-moderated forum.
I would surmise that a great deal of this is biological, but a radical gender theorist would likely rebut that these differences are due to social conditioning. He would say that men/males are conditioned to take more risk, conditioned to desire working with _things_, and conditioned to desire prestige in their jobs.
How would one go about designing a study that eliminates the variable of social conditioning when studying sociological differences between the sexes? Or is that even necessary to indicate whether these preferences are or aren't linked to biology?
Thank you! I have to say, I find the gender-neutral toy layout here in CA stores very annoying when I go shopping for children's toys. It's bizarre to me that the state has mandated it, and I was really disappointed when Newsom glossed over this question in his interview with Maher last week.
A lot of people buy tools and then never use them, just like people buy trucks and 4x4's, but never use them to haul cargo or go off-road. When you buy a tool, you generally want to have a job in mind, and then have the follow-through to do that job.
> Saying that one isa is faster or more energy efficient is like saying that c++ syntax is faster than java syntax.
I think that could be a valid statement. APIs can influence performance by constraining the implementation. For instance, constructing an object in C++ will generally yield faster code than in Java, because Java objects are almost always allocated on the heap, while C++ objects can be allocated on the stack. Compare:
// C++: constructed in place on the stack
MyObj o{};
// vs. Java: 'new' allocates on the garbage-collected heap
MyObj o = new MyObj();
Sure, it's possible to write a Java allocator/GC that will yield similar performance to the C++ code, but in general, that will practically never be the case. The syntax of the language has constrained the implementation so that Java will practically always be slower. Presumably, similar design choices in an ISA could have the same effect.
Java code is generally 2x slower than C++ code, unless you're operating entirely on primitive types. The JIT usually can't remove enough of the gratuitous memory accesses the language forces.
My guess is that a lot of these features are hard-coded, and they are deprecating them in order to replace them with a more generic LLM-based assistant.
Perhaps, but it seems like a strange approach. I would assume they would retain these features and then gradually replace each one with an LLM-based approach.