Hacker News

I remember a conversation I had with a friend's father, an HVAC engineer. He brought home a control board one time and showed me the sensor inputs and control outputs. I don't remember how many there were, but let's say a dozen of each.

He described how a sensor input would connect to this spot and feed the system temperatures from a sensor in some remote part of the building. The control output would then go to the appropriate HVAC system and turn the A/C, heat or whatever on or off to get the temperature to where it needed to be.

The system would spider out around the building with these sensors and regulate the building's temperature.

At the heart of the board (which looked like a PC motherboard) was the CPU, a Z80, already hilariously ancient when he showed this to me. So I asked him, "Why not use a more modern CPU?"

He responded, "Why? This Z80 can control an office building's entire HVAC system and poll each sensor 200 times a second. How many times per second do you need? Temperature in a zone doesn't change that fast."
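As a toy illustration (hypothetical setpoints and thresholds, nothing like the actual board's firmware), the per-zone control logic is simple enough that even a slow CPU polling it hundreds of times a second is loafing:

```python
# Hypothetical bang-bang (on/off) thermostat for one HVAC zone.
# All values are made up for illustration.

class Zone:
    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint      # desired temperature
        self.hysteresis = hysteresis  # dead band around the setpoint
        self.heating = False          # current actuator state

    def update(self, temp):
        """Poll one temperature reading; return whether heat should be on."""
        if temp < self.setpoint - self.hysteresis:
            self.heating = True       # too cold: heat on
        elif temp > self.setpoint + self.hysteresis:
            self.heating = False      # warm enough: heat off
        # inside the dead band: keep the previous state
        return self.heating

zone = Zone(setpoint=21.0)
assert zone.update(19.0) is True     # cold -> heat turns on
assert zone.update(21.2) is True     # dead band -> stays on
assert zone.update(22.0) is False    # warm -> heat turns off
```

The dead band is the only subtlety: without it, readings hovering right at the setpoint would cycle the equipment rapidly.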

It was my introduction to the concept of "Lateral Thinking with Withered Technology" https://en.wikipedia.org/wiki/Gunpei_Yokoi#Lateral_Thinking_...



This is so true, and even today with new designs you end up with overkill. A Cortex M0 (a 32-bit ARM system) with 128K of flash and 64K of RAM is 75 cents in quantity. That means the processor complex is essentially "free" with respect to the cost of the other bits (sensors, actuators, communication over distance). The risk is that with all that "extra power" the programmer decides to use it creatively for something like "a built-in web server to show you the status of everything". That "feature" requires connecting it to the wider network, the "web server" never gets patched, and now you have an HVAC system which becomes the exploit vector into a much bigger facility/network.

All because designing in a "limited" computer didn't make economic sense, and programmers couldn't help but use the extra CPU capacity that was available.

That is what makes IoT a challenge / bad-idea to a lot of people.


Spot on. Programmers in general are not well-versed in security. I don't mean you, the reader of this comment. But as a collective. The other people who write code through which you can pass buses full of black hats. Not you. Web applications with huge budgets get owned by common mistakes. And those web applications don't have access to anything but data. Imagine having all those same, eh, security issues, in devices that can interact with the real world. I really don't want my microwave to suddenly turn on and keep going for hours at a time while I'm out of the house.

This comment is purely fictional. IoT is perfect.


Fnord.


I've noticed the opposite thing. Most of the hardware around is built on the weakest specs that still let the thing run. Those various 10 cent savings on flash and μC tend to add up quickly when you go into mass production.

But the primary problem, which is not limited to but obviously visible in IoT, is that companies ask themselves "what sells?" instead of "what is good and useful?". All the crap that is being created, with useless "features" that introduce security holes and foster the fragmentation of the ecosystem, is pushed because someone out there figures out that people will buy it. But almost no one understands the implications of all these "features", so the buying decision they make is usually wrong and stupid.

I wish someone would cut sales people out of the design process. You should be able to get designers and engineers together, have them ask themselves what an optimal, actually useful smartwatch/smartfridge/smarttoilet/whatever would be and how to build it, and then tell sales people to figure out how to sell it. But no optimizing for better sellability.


I too have seen the intense penny pinching. Here in California a soda bottler removed one thread from the tops of plastic bottles; it saves probably a fraction of a cent in plastic, but makes the detached retaining ring for the cap rub on your lips when drinking, which makes it uncomfortable to sip from those bottles. Such a huge price to pay in user dissatisfaction for such a small savings.

Can't go this far though:

   > I wish someone cut out sales people from the design 
   > process. ... no optimizing for better sellability.
In my experience, actually doing things this way leads to less economic success for the product, and eventually it gets outsold by a competitor without those restraints. At FreeGate I told sales people "you have to sell what we have, not what we don't have" and still had them come back with complaints about how the competitor could install their box in a data center, etc. Not a productive conversation (or a fun one, for that matter).

There does seem to be a minimally required feature set for selling things these days. "High Quality" isn't the compelling feature it once was.


There's some penny pinching, for sure - I had a coworker whose brother is on the iPhone hardware team, and they have a lot of trouble with samples coming back from manufacturing with the wrong resistor here or a missing capacitor there to save a few bucks, because the factory sees it as overengineering but doesn't understand the purpose it's built for.

That said, relying on an older processor may actually not save money. Sure, there's a premium on the absolute newest processor, but in general what's cheapest is what is most mass produced Right Now(tm).

I think a z80 on something like this was likely similar to the reasons that NASA control systems typically use the most reliable hardware they can, which means something that has been in use for many years.

For HVAC, maybe a little of each, but also the software may have been written to the Z80, and if you change that out, you have to do all the testing you'd have to do if you built a new machine.

I often think back on this old chat I had with my grandfather, where he kind of tilted his head at something I was explaining about 90s tech and said something like:

   "Interesting.  In my day, we programmed the software to the hardware, it kind of seems like now you all are programming the hardware to the software."


> I had a coworker whose brother is on the iPhone hardware team and they have a lot of trouble with samples coming back from manufacturing with the wrong resistor here or a missing capacitor there to save a few bucks, because the factory sees it as overengineering, but doesn't understand the purpose it's built for.

I find that story utterly implausible.

The day Foxconn makes unapproved changes to Apple designs is the day that...well, never.


I think you have an interesting point but it ignores humanity. People want what they want for different reasons. As a marketer, it's probably easier to give people what they want than to change people's minds to accept what they need. I blame neither person in this situation only I'd try to change the system which surrounds them.


I know that salespeople can often be the source of bad decisions, but determining market fit is still vitally important. Who wants to build (or, more importantly, fund) something that no one wants?


But then won't you be outsold by the products that have focused on sellability and features?


Totally off the top of my head, but:

It seems like a service discovery system for IoT devices might be a good idea where the service discovery system is tracking what is actually allowed to run a particular service - like an HVAC system running embedded webserver.

For example, imagine if the industry had it so that the HVAC system announced that it had the capability to run an embedded web server for status, but first checked whether there is a different host it should send its metrics to instead. This way you could control which host is the core host for that website, and have all systems in the community basically ask for direction on self-hosting or publishing...
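A minimal sketch of that flow (all names hypothetical; this is just the shape of the idea, not any real protocol):

```python
# Toy model of the proposed discovery flow: a device announces a capability,
# then asks a (hypothetical) registry whether it should self-host or forward
# its output to a designated central host.

class Registry:
    def __init__(self):
        self.delegates = {}          # capability -> designated central host

    def delegate(self, capability, host):
        """Designate a central host for a capability."""
        self.delegates[capability] = host

    def resolve(self, capability):
        """Return the central host for a capability, or None to self-host."""
        return self.delegates.get(capability)

registry = Registry()
registry.delegate("status-web", "dashboard.internal")   # hypothetical host

# An HVAC unit with an embedded-web-server capability asks before serving:
target = registry.resolve("status-web")
assert target == "dashboard.internal"    # forward metrics, don't self-host

# With no delegate registered, the device would fall back to self-hosting:
assert registry.resolve("metrics-push") is None
```

The point is that the risky default (every device running its own never-patched web server) becomes an explicit, centrally controllable decision.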


You have to not do that. Cycles you don't use cost absolutely nothing. Indeed, they may actually reduce jitter if you're careful (they shouldn't, but that's another story).

This is the ultimate YAGNI.


It seems insane to think that a programmer would just "decide" to build a feature like that. It would have to be decided by the product people, who probably think it's a good idea.


That is exactly right. A product person who wants to sell this system wants as many bells and whistles as possible because, hey who knows what the one thing is that will push the customer over the edge into the "buy zone" right? So you end up with all sorts of stuff in there. In my experience it is rare to have a programmer who will push back on that request.


[deleted]


Constrained resources encourage simplicity by design. Use only what's necessary, no more, no less.


Well, "no less" is not always true. How many controllers exposing vulnerable interfaces are out there because encryption added too much overhead?


I'd consider security, and the resources to enable it, necessary.


Yes, and the constraints encouraged those developers to use less than necessary.


Z80s are still available new and in IP-core form for integration into SoCs... as are the 6502, 8051, and several other "classic" MCUs designed in the late 70s/early 80s.

As I'm typing this my keyboard's controller is an 8051 variant, the touchscreen of my phone also uses an 8051, the mouse has a 6502, and the monitors in front of me have an 80186 core in them.

They are fast enough for the task, cheap, and development tools are widely available, so they really don't need to be replaced.


Interesting, considering the article: it was a defining part of Commodore hardware design that they compensated for slow CPUs by using co-processors all over the place, including putting a 6502-compatible CPU in the keyboard controller for the A500 and A2000...

(But you'd also find this spilling over into 3rd-party hardware: my HD controller for my Amiga 2000 back in the day had a Z80 on it.)

That machine was totally schizophrenic: in addition to the 6502 core on the keyboard, the M68k main CPU and the Z80 on the HD controller, it also had an x86 on a bridge board - the A2000 had both Amiga-style Zorro slots and ISA slots, including one slot where both connectors were in line, where you could slot in a "bridge board" with an 8086 that effectively gave you a PC inside your Amiga, with the display available in a window on the Amiga desktop.


It's kind of mind blowing that our cheap peripherals are driven by what used to be top-of-the-line processors only a few decades ago. I guess all that firmware has to run on something.


As I mentioned in another comment, the ca. 1987 Amiga 2000 in this article already had a 6502-compatible core on the keyboard controller, and some same-era HD controllers had Z80s on them - they were cheap even then.


Do you think it's better to teach uni students on new processors and tools, e.g. Freescale, ARM, etc., or on older Z80 or 80186 CPUs?


I think university students should definitely start with older processors, and then gradually work up through the levels. I agree there is an architectural change in the newer processors, plus the additional cores. But working with an older processor with limited memory and processing power ensures the programmer realizes how important each line of code is, and appreciates the comfort provided by newer processors, and thus their complexity.


The first computer I programmed was a Z80 micro-controller connected to some basic peripherals (LED readout, sensors, actuators, stepper motors, potentiometers, etc...). There was no compiler, no assembler; nothing but a keypad to enter the instructions into memory and a button to start execution.

The CPU was less powerful than any of the x86 32bit chips that were widely available at the time, but as a kid it still really gave me the idea that whatever I could think of, I could make a computer do.

I'd agree, understanding things at a really basic level first helped me to better understand things at a higher level later on. It probably helps me to keep in mind what a computer actually needs to do to run code as well. I think it's probably one of the reasons Knuth uses MIX in TAOCP.


Kind of a "which students" sort of question.

I'd say with the older ones. With those, you can put a logic analyzer on the memory bus and see what's going on - if the pins aren't on a BGA under the chip and the board has no vias.


Working on the older CPUs is more approachable to understanding all the low level details plus it makes you appreciate all that the newer CPUs offer. However when actually working, I don't think one should work with an older CPU unless it really makes sense (sufficient computer power, low power requirements, etc.) Working with a powerful CPU lets you focus on the job at hand instead of the idiosyncrasies.


I don't think this is true at all. Older CPUs are not a "more purified" and "cleaner" version of today's; they have the same, and often considerably more, cruft and craziness.

To work with them is to teach bad habits and useless skills.


Some older CPUs, maybe, but you can't seriously look at e.g. the 68000 next to an x86 CPU and tell me the 68000 is not cleaner.

It's not that they don't have craziness, it's that the functionality that mere mortals need to use to write efficient code is simpler.

The M68k's 8 general-purpose data registers and 8 general-purpose address registers alone are enough to make a huge difference.

For me, moving to an x86 machine was what made me give up assembler - in disgust - and it is something I've heard many times over the years: it takes a special kind of masochist to program x86 assembly; for a lot of people who grew up with other architectures, it's one step too far into insanity.


I have the pleasure of working with PowerPC in my day job. Also a relatively clean architecture. I really do wish that Apple had been more successful with it, that Microsoft would have continued supporting it in NT, that Motorola / IBM had kept up with Intel in raw performance, and that it had a larger user base than it does today.


Not to mention the m68k flat address space. A clean architecture for clean code.


Just look at the 6502. No two instructions follow the same pattern - every one is a moss-covered three-handled family credenza, to quote the good Doctor.


The 6502's instruction set is pretty regular, with most instructions of the form aaabbbcc. For instance, if cc==01, aaa specifies the arithmetic operation and bbb specifies the addressing mode. Likewise with cc==10, aaa specifies the operation and bbb the addressing mode. See http://www.llx.com/~nparker/a2/opcodes.html

The regularity of the 6502's instruction set is partially a consequence of using a PLA for instruction decoding. If you can decode a bunch of instructions with a simple bit pattern, it saves space.
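A quick way to see the regularity is to pull the fields apart in code. This only tables the cc==01 (ALU) group from the page linked above; the other groups have their own tables:

```python
# Decode the aaa/bbb/cc fields of a 6502 opcode byte (bit layout: aaabbbcc).
# Only the cc == 0b01 (ALU) group is tabled here.

ALU_OPS = ["ORA", "AND", "EOR", "ADC", "STA", "LDA", "CMP", "SBC"]
CC01_MODES = ["(zp,X)", "zp", "#imm", "abs", "(zp),Y", "zp,X", "abs,Y", "abs,X"]

def decode(opcode):
    aaa = (opcode >> 5) & 0b111   # operation selector (bits 7-5)
    bbb = (opcode >> 2) & 0b111   # addressing-mode selector (bits 4-2)
    cc = opcode & 0b11            # instruction group (bits 1-0)
    if cc == 0b01:
        return ALU_OPS[aaa], CC01_MODES[bbb]
    return None                   # other groups use different tables

assert decode(0xA9) == ("LDA", "#imm")   # LDA #$nn
assert decode(0x6D) == ("ADC", "abs")    # ADC $nnnn
```

Eight ALU operations times eight addressing modes, all falling out of three bit fields - that's the pattern the PLA exploits.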


After arithmetic, instructions have little or no regularity. They omit addressing modes and swap encodings for modes. There are internal hardware reasons for this, but for the programmer it's chaotic.


Not that it's in the least bit relevant to the discussion, but the moss-covered three-handled family credenza is not a Dr. Seuss quote found anywhere in his books; it came from the '70s-era 'Cat in the Hat' TV adaptation, authored by Chuck Jones.


Cool! I never knew. I guess it shouldn't be considered 'canon' then.


That's just not true. It has irregularities, but most of the instructions fit into a small set of groups that follow very simple patterns.

But secondly, where the 6502 deviates from a tiny set of regular patterns it is largely by omitting specific forms of instructions, either because the variation would make no sense, or to save space - the beauty of the 6502 is how simple it is:

You can fit the 6502 instruction set on a single sheet of paper with sufficient detail that someone with some asm exposure could understand most of it.


The x86 family is the same.


Oh there is quite a lot of consistency in the structure of instructions across the basic set - register numbering, many instructions allow full register and addressing modes. The 6502 had pretty much no two instructions the same.


What mouse uses a 6502?


This one:

http://www.mcuic.com/bookpic/200811516244620817.pdf

(Look at page 9. This IC is found in a lot of generic mice.)


TI runs all its low-level calculator stuff on Z80 emulators which are then helpfully run by whatever actual chip they are putting in the calculators these days.


Nope; with one exception, the Z80-family calculators are still run by real, bona-fide Z80s (or, in the case of the new TI-84 Plus CE, an eZ80).

(That "one exception" was the TI-84 Keyboard for the original Nspire, which did run the 84's firmware in a Z80 emulator on the Nspire's ARM processor.)


From time to time, I am greeted with looks of shocked disbelief when a younger employee finds out how much of my employer's business gets done on OpenVMS Alphas and IBM Mainframes. They think it's stupid that we're not running it all on HP servers in the data center.

The thought never occurs to them that it's rock solid, only needs quarterly patching (at most) and has had 20+ years of tweaks that make it fit our needs. We don't need to replace it, yet.


When people really need something that's reliable, there's really no limit to how much effort can be put into producing a system with unfailing integrity and availability.

Take, for example, the lockstep facility on certain IBM processors:

https://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/29...

You can take two or more of them, run identical software on them, and compare their output on a cycle-by-cycle basis.
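In software terms it's the same idea as running redundant replicas and refusing to trust any step where they disagree - a toy model (not IBM's actual lockstep facility, which does this in hardware at the cycle level):

```python
# Toy model of lockstep execution: run two redundant copies of the same
# computation and flag the first step where their outputs diverge.

def lockstep(step_a, step_b, inputs):
    """Run both replicas over the inputs; raise on the first mismatch."""
    for i, x in enumerate(inputs):
        a, b = step_a(x), step_b(x)
        if a != b:
            raise RuntimeError(f"lockstep mismatch at step {i}: {a} != {b}")
        yield a

healthy = lambda x: x * 2
faulty = lambda x: x * 2 if x != 3 else 99   # simulated single-unit fault

assert list(lockstep(healthy, healthy, [1, 2, 3])) == [2, 4, 6]
try:
    list(lockstep(healthy, faulty, [1, 2, 3]))
    assert False, "mismatch should have been detected"
except RuntimeError:
    pass
```

The hardware version buys you something the software sketch can't: the comparison happens every cycle, so a fault is caught before any corrupted state can leak out.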

Now, the 750GX series may be a bit out of date in the modern era ... but good luck achieving that level of paranoid system integrity with just about any truly modern system.

One thing that I think they don't teach so well in most colleges is that a system's compute performance is not always the most important measurement of the system's capability.


Not to mention there are VMS clusters with 20+ years of uptime. The systems, especially Alphas, were so reliable that at least one sysadmin forgot how to restart them and had to consult the manual lol. I wish they made a good desktop. I'd have used it and probably lost less work. ;)

A link for you: http://h71000.www7.hp.com/openvms/brochures/commerzbank/comm...

Notice how "all systems crashed" from heat except the AlphaServer. That's great engineering, right there. It's why I wish they were still around outside eBay. That plus PALmode: most benefits of microprogramming without knowing microprogramming. :)


>It was my introduction to the concept of "Lateral Thinking with Withered Technology" https://en.wikipedia.org/wiki/Gunpei_Yokoi#Lateral_Thinking_....

Thanks for sharing this, I love finding creative new ways to take advantage of 'tried & true' technology and it's something that regularly feeds into how I build software–sometimes to the displeasure of colleagues who are most interested in the shiniest new tools. It's interesting to read about how this sort of thinking worked for Nintendo.


Why write your own 10-line function to do it when you could use this library and do it with 3 (not including the 3k-line lib)?


Why? So you can use modern, friendly UIs to control it instead of scary there-be-dragons only-one-guy-in-facilities-is-allowed-to-touch-it Win32 apps.


The counter to lateral thinking nowadays is power efficiency. Imagine underclocking the iPhone 6 processor behind a 1st-gen iPhone screen. I imagine you'd get quite a bang for your buck on that one.

Though I am a big proponent of lateral thinking in general, for battery powered devices the optics change a little I think.


At some point it may become cost-prohibitive (or unwieldy in some other way) to continue to manufacture such chip designs, even though many applications may not require additional raw horsepower.


At best you can replace it with a Raspberry Pi instead of a full-on PC.



