This wouldn't do any noticeable damage. Modern CPUs have excellent thermal management. As far as wear goes, a hot spot in the chip would in theory slightly decrease the CPU's long-term lifespan.
If you expand your question to other hardware in the computer, then yes, you can easily cause damage. BIOSes can be flashed to make the system unbootable or to overclock/stress components. Back in the bad old days of Linux, you could easily damage your monitor with the wrong xorg.conf settings.
Your question got me thinking: what’s the MTBF of modern CPUs? My google-fu failed to turn up a reliable source on this, but I’m sure it’s long, 10+ years.
> Back in the bad old days of Linux, you could easily damage your monitor with the wrong xorg.conf settings
You could also damage a floppy drive by making it repeatedly read/write a few sectors beyond the normal limits. Been there, done that.
But after so many discussions on online forums insisting it was impossible to cause physical damage using software (other than overwriting firmware), I gave up and kept this (and the asm code) deep inside my heart.
And bringing it up still gives me chills that those discussions will start up again right now...
> Your question got me thinking: what’s the MTBF of modern CPUs? My google-fu failed to turn up a reliable source on this, but I’m sure it’s long, 10+ years.
Probably decreasing, and soon not much longer than the warranty period... the transistors have gotten so small that they're on the threshold of barely working even in normal operation.
As for older CPUs, they could definitely last many decades because of the lower stresses of larger process sizes, and they were designed with much higher margins.
According to the paper linked in another comment (https://news.ycombinator.com/item?id=12373015), the high-k dielectric nodes used at 45nm and below apparently show ~5x worse NBTI ageing than non-high-k 45nm PMOS gates, which determines the tolerances selected to provide X years of life.
> Back in the bad old days of Linux, you could easily damage your monitor with the wrong xorg.conf settings.
Back when a certain kind of line printer was commonplace (it had a circulating ribbon with the typeface repeated, and n hammers in a line across the entire width), programmers could sabotage the printer by printing the exact pattern on the ribbon. This would cause all of the hammers to fire at once, which the machine wasn't designed to withstand.
I've also heard of monitors being broken by having the speaker output the resonant frequency of the glass cover. However, I can't vouch for this one.
Wow, thanks, that's a fascinating paper! Direct link to pdf: [1].
So it turns out that if a transistor is kept on continuously its threshold voltage gradually increases (Negative-Bias Temperature Instability, NBTI), increasing the switching delay. This attack targets transistors along the critical path, increasing the path's delay until it exceeds the allowed tolerance (guardband). Turning the transistor off "heals" it; as a workaround they suggest periodically executing certain nop instructions to ensure critical path transistors spend at least 0.05% of their time turned off. They perform simulations using models of 45nm high-k PMOS transistors to produce their results. A good quote about processor reliability:
Guardbanding is the current industrial practice to cope with transistor aging and voltage droops [Agarwal et al. 2007]. It entails slowing down the clock frequency (i.e., adding timing margin during design) based on the worst degradation the transistors might experience during their lifetime. The guardbands ensure that enough current passes through the processor to keep it above the threshold voltage and in turn ensure that the processor functionality is intact for an average period of 5 to 7 years [Tiwari and Torrellas 2008]. However, inserting wide guardbands degrades performance and increases energy consumption. Hence, processor design companies usually have small guardbands, typically 10% [Agarwal et al. 2007]. However, the MAGIC-based attack can deteriorate the critical path by 11% and cause erroneous results in 1 month.
This also explains why overclocking a CPU may be a bad idea, although they also show that random instructions don't come close to the worst-case ageing.
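
For fun, here's what that workaround might look like as a minimal sketch. This assumes GCC/Clang on x86; REST_PERIOD and the plain nop burst are placeholders I made up, whereas the paper proposes specific nop sequences chosen so the targeted critical-path transistors actually switch off:

    /*
     * Conceptual sketch only, not from the paper's artifacts: give the
     * stressed critical-path gates a tiny amount of "off" time by
     * interleaving a short nop burst into a hot loop. The ~0.05% duty
     * cycle comes from the summary above; REST_PERIOD and the generic
     * nops are placeholders.
     */
    #include <stdint.h>

    #define REST_PERIOD 2000  /* placeholder: rest once every ~2000 iterations */

    uint64_t hot_loop(uint64_t x, uint64_t iters)
    {
        for (uint64_t i = 0; i < iters; i++) {
            /* "work" that keeps exercising the same logic path */
            x = x * 2862933555777941757ULL + 3037000493ULL;

            if (i % REST_PERIOD == 0) {
                /* short de-stressing burst so the stressed gates spend a
                   small fraction of time switched off (the NBTI "healing") */
                __asm__ volatile("nop\n\tnop\n\tnop\n\tnop");
            }
        }
        return x;
    }

Obviously a literal nop won't necessarily toggle the right transistors; the point is just the duty cycle: per the paper, even ~0.05% of off-time on the critical path is enough to keep the degradation inside the guardband.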
Well, I think this can be answered by considering that even under normal conditions there are transistors that are used as much as in your hypothetical scenario. For example, the instruction decoding logic is invoked for every instruction. Since all logic transistors are the same (afaik), I don't think that using one type of instruction would significantly reduce the lifetime of your CPU.