> It was a real time computer NOT designed for speed but real time operations.
More than anything, it was designed to be small and use little power.
But these little ARM Cortex-M4F cores we're comparing it to are also designed for embedded, possibly hard-real-time operation. And the dominant factors in the listening experience on earbuds are response time and jitter.
If the AGC could get a capsule to the moon doing hard real-time tasks (and shedding low-priority tasks as necessary), a single STM32F405 with a Cortex-M4F could do it better.
Actually, my team is going to fly an STM32F030 for minimal power-management tasks -- but still hard real-time -- on a small satellite. Cortex-M0. It fits in 25 milliwatts vs the AGC's 55 W. We're clocked slow, but still exceed the throughput of the AGC by ~200-300x. Funnily enough, the amount of RAM is about the same as the AGC :D It's 70 cents in quantity, but we have to pay three whole dollars at quantity 1.
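Back-of-envelope on what that buys you, using the rounded figures above (250x is an assumed midpoint of the 200-300x range):

```python
# AGC vs. STM32F030 efficiency gap, from the rounded figures in this
# comment (all assumptions, not measurements).
agc_power_w = 55.0        # AGC power draw
mcu_power_w = 0.025       # STM32F030 as flown, 25 mW
throughput_ratio = 250.0  # ~200-300x the AGC's throughput, midpoint

# Perf/watt improves by the throughput gain times the power reduction.
power_ratio = agc_power_w / mcu_power_w          # 2200x less power
perf_per_watt_gain = throughput_ratio * power_ratio
print(f"~{perf_per_watt_gain:,.0f}x better performance per watt")
```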
> NASA used a lot of supercomputers here on earth prior to mission start.
Fine, let's compare to the CDC 6600, the fastest computer of the late 60's. An M4F @ 300MHz is good for a couple hundred single-precision megaflops; the CDC 6600 was something like 3 not-quite-double-precision megaflops. The hacky "double-single" precision techniques have comparable precision -- figure they're probably about 10x slower on average, so each M4F could do about 20 CDC-6600-equivalent megaflops, i.e. it's roughly 5-10x faster. And the amount of RAM is about the same on this earbud.
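For anyone curious what the "double-single" trick looks like: you carry a value as an unevaluated sum of two singles. A minimal sketch, using numpy's float32 as a stand-in for the MCU's hardware singles and Knuth's TwoSum as the error-free building block:

```python
import numpy as np

f32 = np.float32

def two_sum(a, b):
    # Knuth's TwoSum: returns (s, e) with a + b == s + e exactly,
    # where s is the rounded float32 sum and e the rounding error.
    s = f32(a + b)
    bp = f32(s - a)       # the part of b that made it into s
    ap = f32(s - bp)      # the part of a that made it into s
    return s, f32(f32(a - ap) + f32(b - bp))

def ds_add(x, y):
    # Add two double-single numbers, each a (hi, lo) pair of float32s.
    s, e = two_sum(x[0], y[0])
    e = f32(e + f32(x[1] + y[1]))
    return two_sum(s, e)  # renormalize so |lo| is tiny vs. hi
```

Plain float32 silently drops a 1e-9 term added to 1.0; the (hi, lo) pair keeps it. Each "double" add costs a handful of single-precision ops, which is where the ~10x slowdown figure comes from.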
His 486-25 -- if a DX model with the FPU -- was probably roughly twice as fast as the 6600, probably had 4x the RAM, used 2 orders of magnitude less power, and massed 3 orders of magnitude less.
Control flow, integer math, etc., are much faster still.
Just a few more pennies gets you a microcontroller with a double-precision FPU, like a Cortex-M7 with the FPv5-D16, which at 300MHz is good for maybe 60 double-precision megaflops -- compared to the 6600, 20x faster and with more precision.
I have thought about this a little more, and looked into things. Since NASA used the 360/91, and had a lot of 360's and 7090's... all of NASA's 60's computing couldn't quite fit into a single 486DX-25. You'd need something more like the 486DX4-100 era to replace everything comfortably, and you'd want a lot of RAM -- like 16MB.
It looks like NASA had 5 360/75's plus a 360/91 by the end, plus a few other computers.
The biggest 360/75's (I don't know that NASA had the highest-spec model for all 5) were probably each roughly 1/10th of a 486-100, plus 1 megabyte of RAM. The 360/91 they had at the end was maybe 1/3rd of a 486-100, plus up to 6 megabytes of RAM.
Those computers alone would be about 85% of a 486-100. Everything else was comparatively small. And, of course, you need to include the benefit of getting results on individual jobs much faster, even if sustained max throughput is about the same. So all of NASA, by the late 60's, probably fits into one relatively large 486DX4-100.
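Spelling out that sum (the fractions are the rough guesses above; it actually comes to just over 83%, rounded up a little in the comment):

```python
# Rough totals from the estimates above: five 360/75's at ~1/10 of a
# 486-100 each, plus one 360/91 at ~1/3. All figures are guesses.
frac_360_75 = 1 / 10
frac_360_91 = 1 / 3
total = 5 * frac_360_75 + frac_360_91
print(f"{total:.0%} of a 486-100")
```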
Incidentally, one random bit of my family lore: my dad was an IBM man and knew a lot about 360's and OS/360. He received a call one evening from NASA during Apollo 13, asking for advice on how they could get a little bit more out of their machines. My mom was miffed about dinner being interrupted until she understood why :D
Ps. Try the MSP430F series for low power. These can be CRAZY efficient.
Ps. Don't forget to wire the solar panel directly to the system bus: then your satellite might still be talking even 50 years from now, like some ham satellites from the Cold War (OSCAR 7, I think).
NyanSat: I'm the PI and mentor for a team of high school students that were selected by NASA CSLI.
> Ps. Try the MSP430F series for low power. These can be CRAZY efficient.
Yah, I've used the MSP430 in space. The STM32F0 fits what we're using it for here. We designed the main flight computer ourselves; it's an RP2350 with MRAM. Some of the avionics details are here: https://github.com/OakwoodEngineering/ObiWanKomputer
> Ps. Don't forget to wire the solar panel directly to the system bus: then your satellite might still be talking even 50 years from now, like some ham satellites from the Cold War (OSCAR 7, I think).
Current ITU guidelines make it clear this is something we're not supposed to do to ensure that we can actually end transmissions by the satellite. We'll re-enter/burn up within
Yeah, I mean, what do you expect the alternative to be? If you have a process that needs access to something only root can typically do, and the solution has been to give that process root so it can do its job, you usually need root to be able to grant that process permission to do that thing without becoming root. Doesn't that make sense? What alternative are you suggesting?
Uhm, no. Podman is a different product that's pretty much a drop-in replacement for Docker but lets you run containers as non-root.
You have to be root to set it up, but after that you don't need any special privileges. With Docker, the only option is to basically give everyone root access.
It's true that it requires root for some setup, though. Unclear if OP was complaining about that.
Now, I was at Red Hat at the time, in the BU that built Podman, and Docker was largely refusing any of Red Hat's patches around rootless operation; this was one of the top 3 motivations, if not the top one, for Red Hat spinning up Podman.
You'd have to point me to those PR's, I don't recall anything specifically around rootless.
I recall a lot of things like a `--systemd` flag to `docker run`, and in general things that reduced container security to make systemd fit in.
I think "in a VM" was elided. It's easy to tune qemu + Linux to boot up a VM in 150ms (or much less in fact).
Real hardware is unfortunately limited by the time it takes to initialize firmware, some of which could be solved with open source firmware and some of which (e.g. RAM training) is not easily fixable.
And most importantly -- TFA mentions it several times -- stripping unused drivers (and even the ability to load drivers/modules) and other bloat brings very real security benefits.
I know you were responding about the boot times but that's just the icing on the cake.
At that time laptops could still take memory upgrades, and memory was pretty cheap compared to today. The first thing I did when I bought a new laptop was buy two 8GB SO-DIMMs; it was way cheaper than ordering the upgrade from the factory.
The thing is, memory in personal computers has plateaued for quite some time. 16GB was not uncommon in 2010. Things are not like the crazy 90s and early 2000s, when a PC configuration became obsolete in less than two years.