It seems complicated to let our materials get damaged and then try to fix them on the fly. The Earth protects itself from charged particles via a gigantic dipole field. Could the same be done with nano-electronics?
Boards could be designed to generate magnetic fields via embedded current loops. Instead of having a wire connect two components along the shortest straight-line path, the trace could deliberately meander, forming loops with large curl. Since we're talking about scales of 1e-9 m, these fields would probably be pretty strong close to the loops.
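Back-of-envelope, and very much a sketch: the field at the center of a single circular loop is B = mu0*I/(2R), so at nanometer radii even small on-chip currents give surprisingly large central fields. The radii and currents below are assumed values, not from any real layout:

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def loop_center_field(current_A, radius_m):
    """Field at the center of a circular current loop (Biot-Savart result)."""
    return mu0 * current_A / (2 * radius_m)

for radius in (1e-9, 1e-6):        # hypothetical 1 nm and 1 um loops
    for current in (1e-6, 1e-3):   # hypothetical 1 uA and 1 mA drive currents
        B = loop_center_field(current, radius)
        print(f"R = {radius:.0e} m, I = {current:.0e} A -> B ~ {B:.2e} T")
```

The catch is that the field is only that strong right at the loop; a few radii away it falls off roughly like a dipole, ~1/r^3.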
Now, I don't know too much about superconductors, but vacuum tends to be pretty fucking cold (~2.7 K -- surely lower than the critical temperature of many superconducting materials). It might even be possible to create a Meissner cage around the important components, in a way that keeps our own fields from harming the components while still shielding them from external charged particles.
Has this theory been tested? After all, it works for the Earth. I'm afraid the fields themselves might also be detrimental to the electronics (unless we can somehow create a diamagnetic cage to selectively shield the components).
I think the main problem is one of scale. The Earth's field isn't that strong, but it is huge in extent -- that's a lot of distance over which to deflect and decelerate. Going from a diameter of roughly ten million meters down to a billionth of a meter likely requires a honking big field strength to deflect just as hard.
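To put rough numbers on that intuition: the gyroradius r = m*v/(q*B) sets how tightly a field can bend a charged particle. A quick sketch using typical textbook values for a solar-wind proton (assumed values, nothing measured here):

```python
# Rough scaling check: gyroradius r = m*v/(q*B) for a slow (non-relativistic) proton.
# Solar-wind speed (~400 km/s) and Earth's surface field (~3e-5 T) are ballpark values.
m_p = 1.67e-27   # proton mass, kg
q   = 1.60e-19   # elementary charge, C
v   = 4e5        # solar-wind bulk speed, m/s

def gyroradius(B):
    return m_p * v / (q * B)

def field_for_radius(r):
    return m_p * v / (q * r)

print(f"gyroradius in Earth's ~3e-5 T field: {gyroradius(3e-5)/1e3:.0f} km")
print(f"field needed to bend the same proton within 1 um: {field_for_radius(1e-6):.1e} T")
print(f"field needed to bend it within 1 nm: {field_for_radius(1e-9):.1e} T")
```

So Earth gets away with tens of microtesla because it has of order a hundred kilometers to work with; turning the same proton around within a nanometer would take a field of millions of tesla.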
Also, while space has a very low temperature, it's an impressively terrible conductor of heat and thus a garbage heat sink. To keep things cool you either have to wait for the heat to radiate away or off-gas your coolant. You could add radiative fins to the chips, but I don't know how effective that is in space; I'd assume they'd have to be so gossamer it would be hard to keep them from eroding in the solar wind.
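For a sense of how little a small fin can shed: net radiated power goes as emissivity * sigma * A * (T_fin^4 - T_env^4). A sketch with made-up fin areas and temperatures:

```python
# Net radiative power from a fin in vacuum (Stefan-Boltzmann). Fin area,
# temperature, and emissivity below are illustrative assumptions only.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiated_power(area_m2, T_fin, T_env=2.7, emissivity=0.9):
    return emissivity * sigma * area_m2 * (T_fin**4 - T_env**4)

for area in (1e-4, 1e-2):  # 1 cm^2 chip-scale fin vs 100 cm^2 panel
    print(f"A = {area*1e4:.0f} cm^2 at 300 K: {radiated_power(area, 300):.3f} W")
```

A chip-sized fin at room temperature only sheds a few tens of milliwatts per face, which is why radiator area (and those gossamer structures) ends up being the constraint.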
It looks like these people have patented a classical/macroscopic version of a similar idea. They talk about using a completely separate solenoid to deflect incoming ions. Unfortunately they don't do any calculations (none that I could find) to see whether it would even work.
What I'm suggesting is a bit different - a change to the transistor and microscopic wire structure so that components would be protected from external charges via self-produced magnetic fields.
So it would look like a bunch of little spins on a grid, some activated, some not -- like a 2D Ising model, pretty much, but not random: the directions are defined by the logic running on the board.
I think you are right, though. These B fields act over too short a distance to let a relativistic particle radiate an appreciable amount of energy before it does damage.
There's definitely an effect, but my intuition says it would have to be fairly strong. But maybe if they're cheap enough to produce and easy enough to crank out in volume, the shielding doesn't even have to be that effective.
Here's where my intuition is anchored, and you can judge whether it's applicable (I worked in a very different field). In magnetron sputtering, you ionize a gas as it passes through a strong magnetic field (hard-drive-magnet strength). Once a molecule ionizes (it's moving at sonic speeds in the vacuum), it whips around and slams into a target, blasting material off the target like a shotgun fired into a pile of dirt. These sputtered molecules/bits of material may be slightly charged, and when they hit the substrate they carry that charge with them and deposit it there. Over time an insulating substrate (like glass) charges up a bit, and this starts to repel incoming sputtered charged bits. It's called something like 'biasing' and it slows down the deposition rate, since the incoming material may slow too much to properly embed itself in the substrate. The magnets behind the target also trap electrons (in what is often called a racetrack), which helps amplify ionization of the gas (a strong field alone may not ionize the incoming gas molecules, but plonking an extra electron onto one will).
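A rough way to see why those magnets trap the electrons but barely steer the ions (ballpark numbers, not from any particular machine): at the same few-eV energy, an electron's gyroradius in a ~0.1 T field is tens of microns while an argon ion's is centimeters.

```python
# Gyroradius r = m*v/(q*B), with v taken from the particle's kinetic energy.
# The ~0.1 T field and ~5 eV energies are rough assumptions for illustration.
import math

q = 1.60e-19  # elementary charge, C

def gyroradius(mass_kg, energy_eV, B_T):
    v = math.sqrt(2 * energy_eV * q / mass_kg)
    return mass_kg * v / (q * B_T)

B = 0.1  # T, roughly hard-drive-magnet strength near the target
print(f"electron (5 eV): r ~ {gyroradius(9.11e-31, 5, B)*1e6:.0f} um")
print(f"Ar+ ion  (5 eV): r ~ {gyroradius(6.63e-26, 5, B)*1e2:.1f} cm")
```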
It tends to be a game of small effects in big numbers. Any individual interaction can vary wildly from one event to the next, but over many trillions of trillions of events it averages out the way you'd expect at scales approaching Avogadro's number. So if we could fabricate stupefying amounts of these gizmos, you wouldn't need the shielding to be completely effective; just enough biasing or field strength to tilt the odds a bit in your favor, and a lot of the little chips could survive. And if you have enough, you win! Sure, the solar wind may clobber a few chips, but if you have billions of them, perhaps enough survive to stay effective.
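The arithmetic is brutally simple, and that's kind of the point: with N independent gizmos each surviving with probability p, you expect about N*p survivors, so even a small nudge to p multiplies the survivor count proportionally. Illustrative numbers only:

```python
# Expected survivors out of N independent gizmos, each with survival probability p.
# The launch count and probabilities are made-up numbers for illustration.
def expected_survivors(N, p):
    return N * p

N = 1_000_000_000  # a billion cheap gizmos (assumed)
for p in (0.001, 0.01, 0.05):
    print(f"survival probability {p:.3f}: ~{expected_survivors(N, p):,.0f} survivors")
```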
This is part of the approach in some MEMS gizmos, where you play the odds: make huge numbers of the gizmos with a cheap, fast process and filter out the working ones. Ideally you'd be smart and just have a really good process, but if your fabrication process is messy or too hard to control and the gizmo is valuable enough, you can take the losses. (I mean, you don't do that -- you engineer good processes! But... well, that can be expensive. And sometimes waste is cheaper.)
EDIT: I'm thinking of the spherical-cow equivalent of a gizmo here, too. You probably wouldn't make billions of little computers and hork them at another star system unless you could essentially replicate them chemically -- not with the photolithographic processes typical of chip design, which get you millions, not trillions of trillions. Have something that self-assembles in a beaker and then perhaps it becomes an option. But that's sort of like hurling a viral infection at another star system, and... well, that kinda gunks up the idea too.
Your example (sputtering charge onto an insulator) is of a similar flavor (but with E fields). The OP article seems like a monkey-patch approach to "self-repair". Should we also create a self-repairer for the self-repairer? Little arms repairing other little arms repairing other little arms, which are repairing a Raspberry Pi. It's a cute, steampunk-y image -- the kind of thing I imagine Leonardo da Vinci would have jotted down in a notebook somewhere.
Alternatively, as you said: we could just send out a large number of duplicate devices. This might be easier, but more expensive.
How are computers even designed? If a single part fails, the entire thing can still function, more or less -- true or false? What about a brain? A body? Organs? My hand would still work even if I ripped off a significant chunk of skin and flesh. It would even repair itself over time (the flavor of the OP article). If I cut a biofilm or an earthworm in half, both halves will "work".
Now, I am not a computer scientist, and computers obviously aren't like biofilms -- but I am not familiar with the extent of the dissimilarity. For example, if I randomly removed 2,000 transistors from a computer (simulating solar wind), what is the probability that it would be rendered useless?
The second way I interpret what you say is that we actually redesign our computers, inspired more by biology, to be able to still work even if some computing elements or memory elements are flipped or destroyed. This is probably way outside of my lifetime, though.
But we are already seeing this in the ML community, with things like dropout. If I removed a random set of nodes from a deep learning network, the network would still work fine, more or less...
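As a toy illustration of that intuition (layer sizes and the 10% knockout fraction are arbitrary assumptions): zero out a random chunk of a layer's inputs and the output drifts rather than collapses.

```python
# Knock out a random ~10% of a layer's input units and compare the output
# to the undamaged case. Sizes, seed, and knockout rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))   # one dense layer's weights
x = rng.normal(size=128)         # some input vector

full = np.tanh(x @ W)

mask = rng.random(128) > 0.10    # "destroy" roughly 10% of the input units
damaged = np.tanh((x * mask) @ W)

# The two outputs stay highly correlated despite the damage.
cos = full @ damaged / (np.linalg.norm(full) * np.linalg.norm(damaged))
print("cosine similarity:", cos)
```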
Edit 2: So, if we had something like that glass with the charged layer (i.e. maybe transistors that each created a small magnetic dipole, so the entire device was effectively a ferromagnet), some elements would still fail, and the system would be rendered useless unless it was designed to be robust, like an organ losing some cells. But that sounds really hard. We want the transistor or component to maintain some internal state while acting as a magnetically shielded box. Hmm... almost like a topological insulator with a doped body.
Robustness really depends on the computer these days, I think. Flash drives already route around bad sectors, and most computer chips can gracefully degrade as sections fail (one of the ways chip yields go up: overbuild the chip so failed sections can be routed around and the part sold as a cheaper variant).
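The routing-around idea is conceptually as simple as a remap table. Here's a toy sketch of the shape of it (nothing like a real flash translation layer, class and sizes are invented for illustration):

```python
# Toy "route around bad blocks" model: a remap table redirects reads and writes
# from blocks marked as failed to spare blocks. Purely illustrative.
class RemappedStorage:
    def __init__(self, n_blocks, n_spares):
        self.blocks = [b"\x00" * 512 for _ in range(n_blocks + n_spares)]
        self.remap = {}                                        # failed block -> spare block
        self.free_spares = list(range(n_blocks, n_blocks + n_spares))

    def mark_bad(self, block):
        """Retire a failed block by pointing it at a spare."""
        self.remap[block] = self.free_spares.pop()

    def write(self, block, data):
        self.blocks[self.remap.get(block, block)] = data

    def read(self, block):
        return self.blocks[self.remap.get(block, block)]

disk = RemappedStorage(n_blocks=100, n_spares=8)
disk.mark_bad(42)              # block 42 wore out
disk.write(42, b"still here")  # transparently lands on a spare
print(disk.read(42))
```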
To your first line: the micromachine-for-micromachines idea always reminds me of Feynman's talk "There's Plenty of Room at the Bottom" [0], which is remarkable if only for how long ago it was given. Turns out it's a really hard problem. There have been some functional versions of the micro arm-on-an-arm designs, but they're just stupendously hard to control. I'd love to see some progress on that front, though, because it really is cool when it works.
Was this the talk that made Feynman the father of "nanotechnology"? Its spirit reminds me of Leonardo da Vinci's futuristic drawings. Incredible that it dates from 1959. I keep thinking to myself while reading it -- if only Feynman had been around to see AFM and STM. I think those were invented just before he died. I wish he'd had more time to play around with them; he probably would have loved them. Feynman truly had a deep imagination.
I intended the arm-on-an-arm expression as conceptual imagery, more in line with your link than with the OP article. What do you mean by "some functional versions" of this? Please link!
I've been out of the field for a while, so I don't really know the state of the art anymore. But I had the pleasure of visiting what was then called the Texas Microfactory. It's basically a sort of microvoxel printer, capable of moving components around on a small stage.
I haven't read it, but it does have some cool pictures around page 40.
As you might imagine, control is the hard part. The damn thing was wobbly like all get-out, to the point that its building anything at all was the impressive part. And it did -- it wasn't practical for mass production, but it certainly could do cool things.
This question always pops up when I read stuff like this: why do we never consider creating an artificial magnetic field? You could protect spaceships or settlements on Mars. How much power would be required for either of those things, and what's the deal-breaker? Getting the power?
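For a first-order sense of the cost (shield size and field strength below are pure assumptions): the energy stored in the field is B^2/(2*mu0) per cubic meter, and with a superconducting coil you mostly pay that once, plus cryocooler power and losses, rather than continuously.

```python
# Stored magnetic energy for a shield "bubble": u = B^2 / (2*mu0) times the volume.
# The 10 m radius and the field strengths are assumptions for an order-of-magnitude feel.
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def stored_energy(B_T, radius_m):
    volume = 4.0 / 3.0 * math.pi * radius_m**3
    return B_T**2 / (2 * mu0) * volume

for B in (0.01, 0.1, 1.0):
    print(f"B = {B:4.2f} T over a 10 m radius bubble: {stored_energy(B, 10):.2e} J")
```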