
I agree with your point! Old electronics aren't going to be appropriate for every situation, and modern alternatives are superior for lots of situations. But that doesn't mean that it isn't worth maintaining projects to keep the old ones useful. Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient. It just suits their situation better. Putting one of these things in something like a tractor or a dam or anything that has enough energy to spare is exactly the use case. And the relative simplicity of old technology can be a benefit if someone is trying to apply it to a new situation with limited resources or knowledge.



Well, I disagree with yours!

What cases are you thinking of when you say "Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient"? I considered hand sewing, cultivation with digging sticks instead of tractors, cooking over wood fires, walking, execution by stoning, handwriting, and several other possibilities, but none of them fit your description. In most cases the modern alternatives are less efficient but easier to use, but in every case I can think of where the efficiency ratio reaches a thousand or more in favor of the new technology, the thousands-of-years-old technology is abandoned, except by tiny minorities who are either impoverished or deliberately engaging in creative anachronisms.

I don't think "the relative simplicity of old technology" is a good argument for attempting to control your tractor with a Z80 instead of an ATSAMD20. You have to hook up the Z80 to external memory chips (both RAM and ROM) and an external clock crystal, supply it with 5 volts (regulated with, I think, ±2% precision), provide it with much more current (which means bypassing it with bigger capacitors, which pushes you towards scarcer, shorter-lived, less-reliable electrolytics), and program it in assembly language or Forth. The ATSAMD20 has RAM, ROM, and clock on chip and can run on anywhere from 1.62 to 3.63 volts, and you can program it in C or MicroPython. (C compilers for the Z80 do exist but for most tasks performance is prohibitively poor.) You can regulate the ATSAMD20's voltage adequately with a couple of LEDs and a resistor, or in many cases just a resistor divider consisting of a pencil lead or a potentiometer.
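For scale, here's a minimal bare-metal sketch (mine, not Microchip's; the register addresses are from the SAM D20 datasheet and the choice of pin PA17 is arbitrary) of a complete ATSAMD20 program that toggles a pin, running on the internal RC oscillator the chip boots from, with no external memory, crystal, or clock circuitry:

    /* Toggle PA17 on an ATSAMD20.  Out of reset the chip runs from
       its on-chip 8 MHz RC oscillator (prescaled to 1 MHz), so no
       external parts are needed at all. */
    #include <stdint.h>

    #define PORTA_DIRSET (*(volatile uint32_t *)0x41004408u)  /* PORT DIRSET */
    #define PORTA_OUTTGL (*(volatile uint32_t *)0x4100441Cu)  /* PORT OUTTGL */

    int main(void) {
        PORTA_DIRSET = 1u << 17;             /* make PA17 an output */
        for (;;) {
            PORTA_OUTTGL = 1u << 17;         /* flip the pin */
            for (volatile uint32_t i = 0; i < 50000; i++)
                ;                            /* crude busy-wait delay */
        }
    }

The Z80 equivalent needs a ROM holding the code, address decoding, and an output latch before it can wiggle a single pin.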

It would be pragmatically useful to use a Z80 if you have an existing Z80 codebase, or if you're familiar with the Z80 but not anything current, or if you have Z80 documentation but not documentation for anything current, or if you can get a Z80 but not anything current. (One particular case of this last is if the microcontrollers you have access to are all mask-programmed and don't have an "external access" pin like the 8048, 8051, and 80C196 family to force them to execute code from external memory. In that case the fact that the Z80 has no built-in code memory is an advantage instead of a disadvantage. But, if you can get Flash-programmed microcontrollers, you can generally reprogram their Flash.)

Incidentally, the Z80 itself "only" uses about 500 milliwatts, and there are Z80 clones that run on somewhat less power and require less extensive external supporting circuitry. (Boston Scientific's pacemakers run on a Z80 softcore in an FPGA, for example, so they don't have to take the risk of writing new firmware.) But the Z80's other drawbacks remain.


The other draw of an established "old architecture" is that it's fairly fixed and sourceable.

There are a bazillion Z80s and 8051s, and many of them are in convenient packages like DIP. You can probably scavenge some from your nearest landfill using a butane torch to desolder them from some defunct electronics.

In contrast, there are a trillion flavours of modern MCUs, not all drop-in interchangeable. If your code and tooling are designed for an ATSAMD20, great, but I only have a bag of CH32V305s. Moreover, you're moving towards finer pitches and more complex mounting: going from DIP to TSSOP to BGA, I'd expect each step to represent a significant drop-off in how many devices can be successfully removed and remounted by low-skill scavengers.

I suppose the calculus is different if you're designing for "scavenge parts from old games consoles" versus proactively preparing a hermetically sealed "care package" of parts pre-selected for maximum usability.


It's a good point that older hardware is less diverse. The dizzying number of SKUs with different pinouts, different voltage requirements, etc., is potentially a real barrier to salvage. I have a 68000 and a bunch of PALs I pried out of sockets in some old lab equipment; not even desoldering was needed. And it's pretty common for old microprocessors to have clearly distinguishable address and data buses, with external memory. And I think I've mentioned the lovely "external access" pin on the 8048, 8051, and 80C196 family, though on the 80C196 it's active low.

On the other hand, old microcontrollers are a lot more likely to be mask-programmed or OTP PROM programmed, and most of them don't have an EA pin. And they have a dizzying array of NIH instruction sets and weird debugging protocols, or, often, no debugging protocol ("buy an ICE, you cheapskate"). And they're likely to have really low speeds and tiny memory.

Most current microcontrollers use Flash, and most of them are ARMs supporting OCD. A lot of others support JTAG or UPDI. And SMD parts can usually be salvaged by either hot air or heating the board up on a hotplate and then banging it on a bucket of water. Some people use butane torches to heat the PCB but when I tried that my lungs were unhappy for the rest of the day.

I was excited to learn recently that current Lattice iCE40 FPGAs have the equivalent of the 8051's EA pin. If you hold the SPI_SS pin low at startup (or reset) it quietly waits for an SPI master to load a configuration into it over SPI, ignoring its nonvolatile configuration memory. And most other FPGAs always load their configuration from a serial Flash chip.
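From the host side, that boot-time loading looks roughly like the sketch below. To be clear, this is a hedged illustration, not Lattice's code: the gpio_write/spi_send/delay_us helpers and the pin numbers are hypothetical stand-ins for whatever your board provides, and the timing follows my reading of Lattice's programming guide.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical board-support functions, assumed to exist elsewhere. */
    void gpio_write(int pin, int level);
    void spi_send(const uint8_t *buf, size_t len);
    void delay_us(unsigned us);

    enum { PIN_CRESET = 1, PIN_SS = 2 };     /* hypothetical pin numbers */

    void ice40_load(const uint8_t *bitstream, size_t len) {
        uint8_t pad[13] = {0};               /* >= 100 trailing clocks */
        gpio_write(PIN_SS, 0);               /* SS low at reset: ignore flash */
        gpio_write(PIN_CRESET, 0);           /* hold the FPGA in reset */
        delay_us(1);
        gpio_write(PIN_CRESET, 1);           /* release reset, SS still low */
        delay_us(1200);                      /* let it clear config memory */
        spi_send(bitstream, len);            /* clock in the whole image */
        spi_send(pad, sizeof pad);           /* extra clocks until CDONE rises */
        gpio_write(PIN_SS, 1);
    }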

The biggest thing favoring recent chips for salvage, though, is just that they outnumber the obsolete ones by maybe 100 to 1. People are putting 48-megahertz reflashable 32-bit ARMs in disposable vapes and USB chargers. It's just unbelievable.

In terms of hoarding "care packages", there is probably a sweet spot of diversity. I don't think you gain much from architectural diversity, so you should probably standardize on either Thumb1 ARM or RISC-V. But there are some tradeoffs around things like power consumption, compute power, RAM size, available peripherals, floating point, GPIO count, physical size, and cost, that suggest that you probably want to stock at least a few different part numbers. But more part numbers means more pinouts, more errata, more board designs, etc.


I appreciate the thought and detail you put into these responses. That's beyond the scope of what I anticipated discussing.

The types of things I had in mind are old techniques that people use for processing materials, like running a primitive forge or extracting energy from burning plant material or manual labor. What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor? Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher, but it relies on a lot of infrastructure to get to that point. The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.

In the same way, while old computers are much less efficient, models like these that have been manufactured for decades and exist all over might end up being a better fit in some cases. I can appreciate that the integration of components in newer chips like the ATSAMD20 can reduce complexity in many ways, but projects like CollapseOS are specifically meant to create code that can handle the low-level complexity and make these things easier to use and maintain.

The Z80 voltage is 5V ±5%, so right around what you were thinking. Considering the precision required for voltage regulation is smart, but if you were having to replace crystals, they are simple and low-frequency (2–16 MHz), lots of them have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.

Your point about documentation is a good one. It does require more complicated programming, but there are plenty of paper books out there (also digitally archived) that in many situations might be easier to locate because they have been so widely distributed over time. If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like this: https://archive.org/details/Programming_the_Z-80_2nd_Edition...

Anyway, thank you again for taking so much time to respond so thoughtfully. You make great points, but I'm still convinced that it's worthwhile to make old hardware useful and resilient for people who have limited access to resources and may still want to deploy some forms of automation using what's available.

Projects like this one will hopefully never be used for their intended purpose, but they may form a basis for other interesting uses of technology and finding ways to take advantage of available computing resources even as machines become more complicated.


In my sibling comment about the overall systems aspects of the situation, I asserted that there was in fact enormously more information available for how to program in the 32-bit ARM assembly used by the ATSAMD20 than in Z80 assembly. This is an overview of that information, starting, as you did, from the Internet Archive's texts collection.

Searching the Archive instead for [arm thumb programming] I find https://archive.org/details/armassemblylangu0000muha https://archive.org/details/digitaldesigncom0000harr_f4w3 https://archive.org/details/armassemblyforem0000lewi https://archive.org/details/SCE-ARMref-Jul1996 (freely available!) https://archive.org/details/armassemblylangu0000hohl https://archive.org/details/armsystemarchite0000furb https://archive.org/details/learningcomputer0000upto https://archive.org/details/raspberrypiuserg0000upto_i5z7 etc.

But the Archive isn't the best place to look. The most compact guide to ARM assembly language I've found is chapter 2 of "Archimedes Operating System: A Dabhand Guide" https://www.pagetable.com/docs/Archimedes%20Operating%20Syst..., which is 13 pages, though it doesn't cover Thumb and more recently introduced instructions. Also worth mentioning is the VLSI Inc. datasheet for the ARM3/VL86C020 https://www.chiark.greenend.org.uk/~theom/riscos/docs/ARM3-d... sections 1 to 3 (pp. 1-3 (7/56) to 3-67 (45/56)), though it doesn't cover Thumb and also includes some stuff that's not true of more recent processors. These are basically reference material like the ARM architectural reference manual I linked above from the Archive; learning how to program the CPU from them would be a great challenge.

There's a lovely short tutorial at https://www.coranac.com/tonc/text/asm.htm as well (43 pages), and another at https://www.mikrocontroller.net/articles/ARM-ASM-Tutorial (109 pages). And https://azeria-labs.com/writing-arm-assembly-part-1/ et seq. is probably the most popular ARM tutorial. None of these is as well written as Raymond Chen's introductory Thumb material: https://devblogs.microsoft.com/oldnewthing/20210615-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210616-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210617-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210625-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210624-46/?p=10... https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210601-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210602-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210603-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210604-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210607-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210608-00/?p=10... (I'd link an index page but I couldn't find one.) Chen covers most of the pragmatics of using the Thumb instruction set well.

There's an ARM Thumb assembler in μLisp (which can itself run on embedded ARMs) at https://github.com/technoblogy/lisp-arm-assembler, which of course explains all the instruction encodings, documented at http://forum.ulisp.com/t/an-arm-assembler-written-in-lisp/12.... Lots of free software already runs on the chip, including FreeRTOS.

https://mcuoneclipse.com/2016/08/14/arm-cortex-m-interrupts-... covers the Cortex-M interrupt system, and lcamtuf has written an excellent tutorial for getting the related ATSAMS70J21 up and running https://lcamtuf.substack.com/p/mcu-land-part-3-baby-steps-wi....

Stack Overflow has 12641 questions tagged [arm] https://stackoverflow.com/questions/tagged/arm, as opposed to 197 for [z80]. Most of these are included in the Kiwix ZIM files of SO like https://download.kiwix.org/zim/stack_exchange/stackoverflow.... (see https://library.kiwix.org/?lang=eng&q=&category=stack_exchan...).


I also appreciate your responses! I especially appreciate the correction about the Z80's power supply requirements.

> What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor?

A hand crank is about 95% efficient. An electromechanical generator is about 90% efficient. Your muscles are about 25% efficient. Putting it together, the energy efficiency of generating electricity with a hand crank is about 21%. Nuclear reactors are about 40% efficient, though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc. The advantages of the nuclear reactor are that it's more convenient (requiring less human attention per joule) and that it can be fueled by uranium rather than potatoes.
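Multiplying those out:

    0.95 (crank) × 0.90 (generator) × 0.25 (muscle) ≈ 0.21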

> Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher. (...) The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.

The term for that ratio, which I guess is a sort of efficiency, is "ERoEI" or "EROI". https://en.wikipedia.org/wiki/Energy_return_on_investment#Nu... says nuclear power plants have ERoEI of 20–81 (that is, 20 to 81 joules of output for every joule of input, an "efficiency" of 2000% to 8100%). A hand crank is fueled by people eating biomass and doing work at energy efficiencies within about a factor of 2 of the best power plants. Biomass ERoEI varies but is generally estimated to be in the range of 3–30. So ERoEI might improve by a factor of 30 or so at best (≈81 ÷ 3) in going from hand crank to nuclear, and possibly get slightly worse. It definitely doesn't change by factors of a thousand or more.

Even if it did, I don't think hand-crank-generated electricity is used by "plenty of people".

> projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.

I don't think CollapseOS really helps you with debugging the EMI on your RAM bus or reducing your power-supply ripple, and I don't think "ease of use" is one of its major goals. Anti-goals, maybe. Hopefully Virgil will correct me on that if he disagrees.

> if you were having to replace crystals, they are simple and low frequency, 2-16Mhz, and lots have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.

I don't think a widely-distributed crystal makes assembly or maintenance easier than using an on-chip RC oscillator instead of a crystal. It does have real advantages for timing precision, but you can use an external crystal with most modern microcontrollers just as easily as with a Z80, the only drawback being that the cheaper ones are rather short on pins. Sacrificing two of the six I/O pins of an ATtiny13 to your clock really reduces its usefulness by a lot.
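For what it's worth, "no crystal" is the default on these parts, not an extra feature. A hedged sketch (the register offset is from the SAM D20 datasheet; out of reset the chip is already running from this oscillator, prescaled to 1 MHz) of bringing an ATSAMD20's internal RC oscillator up to its full 8 MHz:

    #include <stdint.h>

    #define SYSCTRL_OSC8M (*(volatile uint32_t *)0x40000820u)

    static void run_from_internal_rc(void) {
        uint32_t v = SYSCTRL_OSC8M;
        v &= ~(3u << 8);       /* PRESC = 0: divide by 1, i.e. 8 MHz */
        v |= 1u << 1;          /* ENABLE (already set out of reset)  */
        SYSCTRL_OSC8M = v;
    }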

> If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like...

Oh, that's because you're looking for the part number rather than the CPU architecture. If you don't know that the ATSAMD20 is a Cortex-M0(+) running the ARM Thumb1 instruction set, you are going to have a difficult time programming it, because you won't know how to set up your C compiler.
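Concretely, that's the difference between flailing and knowing to invoke the standard GNU toolchain with something like:

    arm-none-eabi-gcc -mcpu=cortex-m0plus -mthumb -Os -ffreestanding -c main.c

(The linker script and startup code are a separate chore, but the architecture flags come straight from knowing "Cortex-M0+, Thumb".)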

There is in fact enormously more information available for how to program in 32-bit ARM assembly than in Z80 assembly, because it's the architecture used by the Acorn Archimedes, the Newton, the Raspberry Pi, almost every Android phone ever made, and old iPhones. See my forthcoming sibling comment for information about ARM programming.

Aside from being a much better compilation target for high-level languages like C, ARM assembly is much, much easier than Z80 assembly. And embedded ARMs support a debugging interface called OCD (on-chip debugging, usually reached over SWD) which dramatically simplifies the task of debugging broken firmware.
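With a cheap SWD adapter this is a two-command affair; for example, with a CMSIS-DAP probe and OpenOCD's shipped configuration files (the exact interface and target scripts depend on your hardware), something like:

    openocd -f interface/cmsis-dap.cfg -f target/at91samdXX.cfg
    arm-none-eabi-gdb -ex 'target extended-remote :3333' firmware.elf

after which you can set breakpoints and read memory on the live chip.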

> models like [Z80s and 6502s] that have been manufactured for decades and exist all over might end up being a better fit

There are definitely situations where Z80s or 6502s, or entire computers already containing them, are more easily available than current ARM microcontrollers. (For example, if you're at my cousin's house—he's a collector of obsolete computers.) However, it's difficult to overstate how much more ubiquitous ARM microcontrollers are. The heyday of the Z80 and 6502 ended in about 01985, at which point a computer using one still cost about US$2000 and only a few million such computers were sold per year. The most popular 6502 machine was the Commodore 64, whose total lifetime production was 12 million units. The most popular 8080-family machine (supporting a few Z80 instructions) was probably the Gameboy, with 119 million units. We can probably round up the total of deployed 8080 and 6502 family machines to 1 billion, most of which are now in landfills.

By contrast, we find ARMs not just in the Gameboy Advance but in things like the Anker PowerPort Atom PD 2 USB-C charger http://web.archive.org/web/20250101181745/https://forresthel... and disposable vapes https://ripitapart.com/2024/04/20/dispo-adventures-episode-1... https://old.reddit.com/r/embedded/comments/1e6iz4a/chinese_c... — and, as of 02021, ARM tells us 200 billion ARMs had been shipped https://newsroom.arm.com/blog/200bn-arm-chips and were then being produced at 900 ARMs per second.

That means about as many ARMs were being produced every two weeks as 8080 and 6502 machines in history, a speed of production which has probably only accelerated since then. Most of those are embedded microcontrollers, and I think that most of those microcontrollers are reflashable.
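Working that out from the figures above:

    900 ARMs/s × 86,400 s/day × 14 days ≈ 1.09 billion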

Other microcontroller architectures like the AVR are also both more pleasant to program and more abundant than Z80s and 6502s. They also feature simpler and more consistent sets of peripherals than typical Z80 and 6502 machines, in part because the CPU itself is so fast that a lot of the work these obsolete chips need special-purpose hardware for can instead be done in software.

So, I think that, if you want something useful and resilient for people who have limited access to resources but still want to deploy some forms of automation using what's available, you should focus on ARM microcontrollers. Z80s and 6502s are rarely available, much less useful, fragile rather than resilient, inflexible, and unnecessarily difficult to use.


> though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc.

Rereading this, I don't know in what sense it could be true.

What I was thinking of was that the cost of energy from a nuclear power plant is on the order of ten times as many dollars as the cost of the fuel, largely as a result of the costs of building it, which represents a sort of inefficiency. However, what's being consumed inefficiently there isn't energy; it's things like concrete, steel, human attention, bulldozer time, human lives, etc., collectively "money".

If, as implied by my 4% figure, what was being consumed by the plant construction were actually 22.5x as much energy as comes out of the plant over its lifetime, rather than money, its ERoEI would be about 0.044. It would require the lifetime output of twenty or thirty 100-megawatt power plants to construct a single 100-megawatt nuclear power plant. That is not the case. In fact, as I explained later down in the same comment, the ERoEI of nuclear energy is generally accepted to be in the range of about 10 to 100.
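Spelling out the arithmetic: if the plant converts fuel heat to electricity at 40%, but the all-in figure were 4%, then

    fuel energy in      = output / 0.40 = 2.5 × output
    fuel + construction = output / 0.04 = 25 × output
    construction alone  = 22.5 × output  →  ERoEI ≈ 1/22.5 ≈ 0.044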


This is some quality information!

About the return on investment, the methodology is interesting, and I'm surprised that going from a hand crank to nuclear improves efficiency so little. But although the difference in direct EROI might be small, I wonder about this part from that article:

“It is in part for these fully encompassed systems reasons, that in the conclusions of Murphy and Hall's paper in 2010, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability,[22] while a value of 12–13 by Hall's methodology is considered the minimum value necessary for technological progress and a society supporting high art.”

So different values of EROI can yield vastly different civilizational results: the difference between bare sustainability and a society with high art and technology. The direct energy outputs might not be thousands of times different, but the informational output of different EROI levels could be considered thousands of times different. Even without a massive efficiency increase, society's output over the last few thousand years became far more complex. I'm not trying to change terms here just to win an argument but trying to qualify the final results of different capacities for harnessing energy and technology.

I think this gets to the heart of the different arguments we’re making. I’m not in any way arguing that these old architectures are more common in total quantity than ARM. That difference in production is only going to increase. I wouldn’t have known the specific difference, but your data is great for understanding the scope.

My argument is that projects meant to make technology that has been manufactured for a long period of time and has been widely distributed more useful and sustainable are worthwhile, even when we have more common and efficient alternatives. This doesn’t in any way contradict your point about ARM architecture being more common or useful, and I’d be fully in favor of someone extending this kind of project to ARM.

In response to some of the other points: using an external crystal is just an example of how you could use available parts to maintain the Z80 if it needed fixing but you had limited resources. In overall terms, it might be easier to throw away an ARM microcontroller and find 100 replacements for it than even trying to use an external crystal for either one, but again I’m not saying it’s a specific advantage to the Z80 that you could attach a common crystal, just something that might happen in a resource-constrained situation using available parts. Better than the kid in Snowpiercer sitting and spinning the broken train parts at least.

Also, let me clarify the archive.org part. I wasn’t trying to demonstrate the best process for getting info. I just picked that because they have lots of scanned books to simulate someone who needed to look up how to program a part they found. I know it’s using ARM, but the reason I mentioned that had to do with the distribution of paper books on the subject and how they’re organized. The book I linked to starts with very basic concepts for someone who has never programmed before and moves quickly into the Z80, all in one old book, because it was printed in a simpler time when no prior knowledge was assumed.

There are plenty of paper books on ARM too, and probably easier to find, but now that architectures are becoming more complicated, you’re more likely to find sources online that require access to a specific server and have specialized information requiring a certain familiarity with programming and the tools needed for it. More is assumed of the reader.

If you were able to find that one book, you could probably get pretty far in using the Z80 without any familiarity with complex tools. Again, ARM is of course popular and well-documented, but the old Z80 stuff is still out there and simple enough to understand and even analyze with your bare eyes in more detail than you could analyze an ARM microcontroller without some very specific tools.

So all that info about ARM is excellent, but this isn't necessarily a competition. It's the passion project of someone who chose a few old, simple, and still-in-production technologies to develop a resilient and translatable operating system for. It makes sense to start with the earlier technology because it's simpler and less proprietary, but it would also make sense to extend it to modern architectures like ARM or RISC-V. I wouldn't be surprised if sometime in the future some person or AI did just that. This project just serves as a nice starting point for an idea on resilient electronics.



