That is how I was convinced to take a microwave engineering course. I was an EE undergrad and had just struggled through the electrodynamics course. My professor asked me to take his microwave engineering course the following semester. I said I was not planning on it, since I'd had enough trouble getting through the prerequisite and had no plans to become a microwave engineer. He asked me the following questions:
Professor: "How fast is your computer at home?" - Me:"2.8GHz.",
Professor: "OK, and what is considered the start of the microwave band?" - Me: "1 GHz"
Professor: "So, I will see you next semester?"
It turned out to be a great course: we built antennas and resonators, ran simulations and compared them to physical results, and talked through a lot of different situations. In my career as a software engineer I have made great use of that knowledge, since I can better understand the issues that my EE counterparts are working on.
> At high enough frequencies digital design becomes analogue design.
I've heard this stated as "the difference between digital engineers and analog engineers is that the analog engineers KNOW when they're designing an antenna".
I once read a great how-to where someone described a device that blinks in the dark - and only in the dark - using the LED itself as the light sensor.
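For anyone curious, the trick (roughly, as I remember it) is to reverse-bias the LED briefly to charge its junction capacitance, then time how long the photocurrent takes to discharge it: bright light discharges it quickly, darkness slowly. Here is a rough sketch, assuming an Arduino-style API; the pin numbers and the threshold are made up and would need tuning per LED.

```cpp
const int LED_ANODE   = 2;   // hypothetical pin assignments
const int LED_CATHODE = 3;

// Reverse-bias the LED, then count how long the junction takes to discharge.
// More light -> faster discharge -> smaller count.
long measureDarkness() {
  pinMode(LED_ANODE, OUTPUT);
  pinMode(LED_CATHODE, OUTPUT);
  digitalWrite(LED_ANODE, LOW);
  digitalWrite(LED_CATHODE, HIGH);   // reverse bias: charge the junction
  delayMicroseconds(100);

  pinMode(LED_CATHODE, INPUT);       // let the cathode float
  digitalWrite(LED_CATHODE, LOW);    // make sure the internal pull-up is off

  long count = 0;
  while (digitalRead(LED_CATHODE) == HIGH && count < 30000) {
    count++;                         // dark -> slow discharge -> large count
  }
  return count;
}

void setup() {}

void loop() {
  if (measureDarkness() > 20000) {   // threshold is a guess; tune per LED
    pinMode(LED_ANODE, OUTPUT);      // it's dark: blink the LED normally
    pinMode(LED_CATHODE, OUTPUT);
    digitalWrite(LED_CATHODE, LOW);
    digitalWrite(LED_ANODE, HIGH);
    delay(50);
    digitalWrite(LED_ANODE, LOW);
  }
  delay(1000);
}
```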
At work we replaced an obsolete diode in part of a circuit in front of a high-gain amplifier. The new equivalent diode came in a glass package, unlike the old one. It caused a bit of head scratching to figure out why the circuit wasn't behaving as it should and also appeared to be acting as a human proximity detector.
At high enough frequencies digital design becomes analogue design.
It's a great time to be a digital engineer. Projects like the PocketVNA and the NanoVNA-V2 make it feasible for you to do professional grade S parameter measurements of digital traces at home.
Pair them with QucsStudio, and you have a near professional grade high speed simulation package available for less than $500. So cool.
And, speaking of Rules of Thumb - Embedded Artistry's page calls out Eric Bogatin's Signal Integrity Rules of Thumb as well, which is a fantastic series of blog posts on high speed digital design and signal integrity. Worth a read if high speed digital design is your thing.
Had a GPS that started jabbering crap when it got cold... it showed up as the system dying because it was being flooded with events, i.e. some other task died because it wasn't getting enough CPU time.
Missing the most important Rule of Thumb.
This is an embedded device. It has a _known_ purpose.
Every software design decision should be based around this.
This clears up any "does this code go here (ISR/task)?" or "use this algo" questions.
Not so sure for larger devices with possibly several ECUs and in-the-field upgrades. All of a sudden the whole thing becomes a tiny data center with several nodes, and the architect needs to value some flexibility over a clear-cut single purpose.
In automotive, embedded is transformed in this way.
I disagree. If you have interrupt priorities then you can treat your interrupt service routines as the highest-priority tasks. As an example, let's imagine a drone controller which has two functions it needs to perform in real time: it needs to vary the PWM signal to the motor controller, and it needs to acknowledge radio packets. An interrupt is raised when the accelerometer has new data, and another is raised when the radio has received a packet. You could write ISRs which just queue the accelerometer data and radio packets so they can be dealt with by a flight control task and a radio task respectively. Alternatively, you could run the whole of the flight controller inside the accelerometer ISR. This reduces copying of data and context switches. Because the accelerometer ISR now takes longer, you need the radio ISR to acknowledge the radio packets itself. You split the radio task into two: a low-priority task for processing commands and the ISR, which does the acknowledgements.
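To make the two layouts concrete, here is a minimal sketch; every type and HAL/RTOS call in it (accel_read, queue_put, pwm_set, radio_ack, ...) is a hypothetical placeholder, not a real API, and the queue is assumed to copy the item it is given.

```cpp
// Hypothetical types and hooks shared by both layouts.
struct AccelSample { float ax, ay, az; };
struct MotorCmd    { unsigned duty[4]; };
struct Packet      { unsigned char buf[32]; int len; };
struct Queue;                                   // opaque RTOS queue (copies items)
extern Queue flight_queue, cmd_queue;
AccelSample accel_read();
Packet      radio_read();
void        radio_ack(const Packet &p);
void        queue_put(Queue &q, const void *item);
void        pwm_set(const MotorCmd &cmd);
MotorCmd    run_flight_controller(const AccelSample &s);

// Layout A: thin ISR, the real work happens in a flight-control task.
void accel_isr_thin() {
    AccelSample s = accel_read();
    queue_put(flight_queue, &s);        // wake the flight task; it runs the loop
}

// Layout B: the whole flight controller runs inside the accelerometer ISR.
void accel_isr_fat() {
    AccelSample s = accel_read();
    pwm_set(run_flight_controller(s));  // no extra copy, no context switch
}

// Because the fat ISR takes longer, the radio ISR must acknowledge packets
// itself; only the command processing stays in a low-priority task.
void radio_isr() {
    Packet p = radio_read();
    radio_ack(p);                       // time-critical part handled right here
    queue_put(cmd_queue, &p);           // command processing happens later
}
```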
> Avoid blocking function calls
This can be restated as: never call functions that block on an action of a lower-priority task. Doing so leads to priority inversion in normal tasks and to deadlocks in ISRs.
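A tiny sketch of the ISR case (all names hypothetical, not a real API): the ISR preempts every task, so if it blocks on a lock that a lower-priority task holds, that task can never run to release it.

```cpp
// Hypothetical primitives; this illustrates the failure mode only.
struct Mutex      { volatile int held; };
struct RingBuffer { char data[64]; unsigned head, tail; };
extern Mutex      log_mutex;   // sometimes held by a low-priority logging task
extern RingBuffer rx_buf;
void mutex_lock(Mutex &m);
void mutex_unlock(Mutex &m);
void ringbuffer_put(RingBuffer &b, char c);
char uart_read_char();

void uart_isr_bad() {
    mutex_lock(log_mutex);     // the task holding this cannot run while we wait
    /* ... log the byte ... */ // here -> deadlock
    mutex_unlock(log_mutex);
}

void uart_isr_good() {
    ringbuffer_put(rx_buf, uart_read_char());  // lock-free hand-off;
}                                              // a task does the logging later
```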
Re: ISR priorities. I agree with you somewhat. Technology is always changing. Many architectures don't have a rich interrupt controller. A system built to take advantage of the Cortex-M NVIC will have different characteristics than one that doesn't.
For those who are unaware, the NVIC effectively turns the interrupt handling system into a hardware-based, priority-driven real-time task scheduler. The only hard limit is that, since all of the handlers execute on the same stack, you need an interrupt stack large enough for the maximum nesting depth. You can safely do much more work in an interrupt handler with such a system than you can in a classical non-nested interrupt-handling system.
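For example (a sketch assuming CMSIS-Core on a Cortex-M part; the IRQ names are STM32-style placeholders and the priority values are just illustrative), you give the short latency-critical handler a numerically lower priority so it can preempt the longer one:

```cpp
#include "stm32f4xx.h"   // assumption: an STM32F4 device header providing CMSIS

void configure_irq_priorities(void) {
    // Cortex-M: lower number = higher priority; higher-priority interrupts
    // nest on top of lower-priority ones, all on the same stack.
    NVIC_SetPriority(EXTI0_IRQn, 1);  // radio "packet received" pin: short, urgent
    NVIC_SetPriority(TIM2_IRQn,  3);  // control-loop timer: longer handler
    NVIC_EnableIRQ(EXTI0_IRQn);
    NVIC_EnableIRQ(TIM2_IRQn);
}
```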
> I disagree. If you have interrupt priorities then you can treat your interrupt service routines as the highest priority tasks. ...
I think "Keep ISRs small" is still a good guideline for most cases. Big interrupt handlers require more system-wide knowledge to verify (e.g., stack space, maximal latency, priority problems, ...). They introduce more coupling into the system, in other words.
Of course, this should not be some hard rule, rather just a rule of thumb. I wouldn't like it if some policy like this prevented you from choosing the optimal solution in a case like you suggested.
> Big interrupt handlers require more system-wide knowledge to verify
I think the point I'm trying to make is that sometimes following the rule of "make your ISR small" actually makes this worse: your bigger high-priority task can now be interrupted, or you end up turning interrupts off or masking them, and you end up with even more complexity.
Sometimes following rules-of-thumb blindly will make things worse. This is one reason why we call them "rules of thumb" rather than e.g. "laws."
Another example: if you are shipping over a million units, then adding hardware to simplify software starts to look like a bad trade-off, since BOM starts to dominate NRE (non-recurring engineering) costs.
Fat ISRs are often really hard to debug. And the more complex state an ISR has, the harder it is to communicate with it: with an ISR you are in a state of sin, because you always end up with duplicated state.
I think the idea of "keep ISRs small" is that it is easier to spend time polishing your ISRs to be as small as possible than to deal with long ISRs. As soon as you have long-running ISRs (long is a relative thing), it is extremely difficult to guarantee any behavior of the system.
> As soon as you have long-running ISRs (long is a relative thing), it is extremely difficult to guarantee any behavior of the system.
I understand your sentiment, but simply pushing the work into a task doesn't make guaranteeing the behavior of the system any easier. And I'd argue that in some cases it makes it more difficult: if you can eliminate an entire task by putting its processing in an ISR, that reduces the complexity. Handling priority inversion correctly is the example that springs to mind.
I think that in some cases, large ISR tasks make sense. But, ISRs by nature occur at times not known at compile time, meaning that the longer an ISR runs, the less you'll know about system behavior at compile time. So, if you know that you need a long running process triggered by an ISR, that's fine, make it so; but as a general rule of thumb, the shorter the ISR, the more deterministic your system is. Those ISR runtimes need to be designed to be good fits for the system, or they can cause problems.
Just a small correction, related to your specific example.
Controlling the motor is a real-time digital control system task, and it needs to be executed at exact time intervals, otherwise you introduce noise and instability into the control loop. "Accelerometer has new data" is a terrible trigger for the control loop. You always want to execute the controller at fixed intervals, triggered by a timer interrupt.
Once you enter that timer ISR though, you are correct. You need to run the whole thing, starting with adjusting the PWM according to the result of the previous control loop, then sampling the sensors and finally running the control algorithm, all before exiting the ISR. This is the epitome of a "hard real-time" requirement.
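A bare-bones sketch of that ordering (the types and helper names here are hypothetical placeholders):

```cpp
// Hypothetical types and helpers for a periodic control-loop timer ISR.
struct AccelSample { float ax, ay, az; };
struct MotorCmd    { unsigned duty[4]; };
AccelSample sample_sensors();
MotorCmd    run_control_loop(const AccelSample &s);
void        pwm_set(const MotorCmd &cmd);
void        timer_clear_interrupt_flag();

static MotorCmd next_cmd;                  // output computed on the previous tick

void control_timer_isr() {
    pwm_set(next_cmd);                     // 1. actuate first, at an exact instant
    AccelSample s = sample_sensors();      // 2. then sample the sensors
    next_cmd = run_control_loop(s);        // 3. then compute the next command
    timer_clear_interrupt_flag();          // acknowledge the timer interrupt
}
```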
> Controlling the motor is a real-time digital control system task, and it needs to be executed at exact time intervals
This is correct but also the accelerometer data needs to be sampled at the same interval. For this reason, running the control algorithm in lock step with the sensor may be sensible.
True, but sensors do not generally output data at fixed intervals, which is why the most common method is to have the control loop run on a timer interrupt and to have it sample the sensor.
I assume the command processing was simply not a predictably small enough workload to fit into the time budget for an ISR. Besides, that is irrelevant to his point: he already gave an example of a "big" ISR; it doesn't have to be even bigger.
I mean, I (and most people on this thread...) agree with you: I believe it is almost always "nicer" to be consistent and have minimal ISRs. But her/his point (as I interpret it, at least) was that in some situations a somewhat larger ISR may make sense.
That's not a good argument. That an extreme is bad doesn't make the opposite extreme good. I could easily have a house that is too large, but I wouldn't want to live in a box.
This is a great list of some basic rules of thumb! Over the years I have encountered many of these in practice.
I think it is important as engineers to always try and keep an eye on the big picture. Many times it is easy to get bogged down in an interesting technical problem in embedded engineering, and we can forget how that problem fits in with the overall state of the project. One example: I found that I could not send more than one USB bulk packet per millisecond using Texas Instruments' USB library, even though in theory you could send 19 per millisecond. I wasted a few days trying to fix that, but it was a waste of time, because speeding up USB only sped up firmware updates, which were not a big deal! There are many examples like this, unfortunately.
I think the biggest thing missing from this list is how important it is to thoughtfully design the hardware to make the firmware easier to design. This requires a less agile approach, or more flexible prototype hardware until requirements are better understood. Just one example is using separate SPI/I2C buses for separate peripherals, even though technically they can be shared. This helps you not have to worry at all about collisions or about managing Direct Memory Access (DMA).
I don't think solving the USB issue was a total waste of time. Knowing what is going on is always valuable.
The "My USB throughput is not what it should be... Oh, well. Must be the USB cable..." attitude is something that can bite you later, because it can be the symptom of a bug (e.g. packets lost because of a buggy ISR) that's just waiting the worst moment to show its ugly face.
Definitely agree with thoughtful hardware design. Simple things like good placement of test points to attach scopes on relevant signals is extremely helpful.
Yes, test points are important for absolutely everything on the first iteration of a new hardware design; you just have no idea which ones you'll need before starting the project!
I think it is also important to have a big enough test point to actually use too! There have been many times I have been trying to hold a probe to a tiny little exposed circle pad, just wishing it was something big enough to clip a lead to.
Agreed, but the intention appears to be to serve as a launchpad for digging into the rules yourself. Links to original sources would make this much easier, of course.
Thanks for the link to Ganssle’s blog. I’ll be sure to have a read today.
One of the things from the list that could be improved is this:
> Algorithmic optimizations have a greater impact than micro optimizations
This is kind of tautological, as micro optimizations can be an important part of algo optimization. We don't execute algorithms on formal automata, but on real machines with real architectures, limitations and opportunities.
> * “Real efficiency gains come from changing the order of complexity of the algorithm, such as changing from O(N^2) to O(N*logN) complexity”
Only if the constant factor that multiplies the lower-complexity expression is small enough and N is large enough.
Yeah... this O() obsession seems to miss the "as n approaches infinity" part. It is just a polynomial, and the constant term (or whichever term) can dominate for small n. Especially relevant in embedded, where searching small static arrays is a thing.
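As a throwaway illustration (mine, not from the article): for the tiny static tables typical of embedded code, the "worse" O(N) scan is usually the right answer, because N never grows and the constant factors (no hashing, no extra memory, predictable branches) dominate.

```cpp
#include <cstdint>
#include <cstddef>

struct Entry { std::uint8_t id; std::uint8_t value; };

// Eight entries: a plain O(N) scan is a handful of compares with no setup cost.
static const Entry table[] = {
    {0x01, 10}, {0x04, 20}, {0x07, 30}, {0x0A, 40},
    {0x10, 50}, {0x22, 60}, {0x31, 70}, {0x40, 80},
};

int lookup(std::uint8_t id) {
    for (std::size_t i = 0; i < sizeof(table) / sizeof(table[0]); ++i) {
        if (table[i].id == id) return table[i].value;
    }
    return -1;  // not found
}
```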
As an electronics enthusiast, this really sheds a lot of light on different aspects of hardware/software development that I hadn't been exposed to before. I feel like there is a lot of tribal knowledge that people in the business or going to school for EE stuff are privy to that isn't always out there for the rest of us, so kudos for aggregating some of that.
Along the same line, does anyone have further articles on hardware topics, like sourcing parts for commercial products? It's another part of electrical engineering and bringing products to market that I haven't seen much of online but would love to learn more about.
For sure, I totally realize those are huge topics, and many people have full-time jobs just focusing on one aspect of that. I guess what I'm really looking for is a good overview of standard operating procedures and rules of thumb for purchasing parts and inventory management.
I've seen a lot of really cool hobby projects turned small-run commercial products in the music/synthesizer world, and I guess I'm curious how you would go about sourcing parts for something like that in a way that's close to how it's done professionally.
Also, more specifically, what sort of tools do people typically use for managing inventory? How about comparing prices/vendors? I imagine something more sophisticated than a simple spreadsheet would be in order, and I have to figure it's different from company to company, but is there any widely accepted industry-standard tool?
For small to mid-level production (singles to mid-thousands per build), most shops will use a distributor like Digi-Key, Mouser or Arrow to source parts. Above that, you can start talking to manufacturers directly in some cases if you can't get good enough prices through distribution.
What throws this off is that a lot of shops these days use Contract Manufacturers (CMs). Since CMs are building boards for many different shops, they can purchase parts in larger volumes and get better pricing. They make it easier to outsource some of your manufacturing since you can just give them a BOM (Bill of Materials) and your PCB files and get complete, tested boards back. So you don't even need to deal with parts shopping, etc.
As far as inventory management, if you're not big enough to be using an ERP (Enterprise Resource Planning) system, it's likely a Quickbooks plug-in, a spreadsheet or something custom built.
IME, as long as people can meet their BOM price target, they just buy from whoever they're used to dealing with.
I have a couple products in low-volume production manufactured in my basement. 90% of my components are purchased from Digi-Key because we're both in Minnesota (and Digi-Key is the best vendor I've ever dealt with), so most things I order arrive the next day without having to pay extra. They also have BOM tools on their site: I can create a bill of materials for a board to make ordering parts easier. Just bring up the BOM and enter a number of units, and it will create a shopping cart for me.
I'll occasionally look around for better prices on some high-cost parts, but it's hardly worth it at my low volumes: there really isn't enough price variation on most things.
I feel like half of what's here points to using Rust or a Rust-like language as a good thing. Even C++, I suppose, if you can use the pointer types to help yourself.
> I feel like half of what's here points to using Rust or a Rust-like language as a good thing.
Which points exactly?
Also, neither the Rust standard library nor C++'s std::shared_ptr was designed with embedded in mind. And Rust is kind of experimental, not really a good choice for a system that is difficult to update. Not to mention that only Intel and AMD64 platforms have Rust Tier 1 support.
One thing to keep in mind with embedded is size. In some projects you do not get to have the std lib; it just does not fit. This is less and less of an issue as time goes on, but a few years ago 2k of total memory (flash and RAM) was a real constraint you had to work within. Even that could seem huge for some platforms.
Depending on the platform, Rust or C++ may not be the first thing I reach for.
C is pretty compact if you strip the libs out. But usually at that point you may have to go to ASM just to make it fit.
Had one project which had to be in Python (platform dictated by the customer). Did all the right style-guide things: classes, the works. Had to toss all of that out, because just making things classes subjected each object to about 300 bytes of overhead, and I had hundreds of the things. Half my memory was being used by object management, and I needed that space for data. Out went all the cool by-the-book things that seemed right to do; it was borderline a total rewrite. I took those lessons to the C version, right up until the new guy decided everything needed to be C++ and use the std lib and would not listen to me. Suddenly it did not fit in flash and RAM even combined, off by about 5x, much less leave room for any data. He had to spend weeks backing the changes out again, and even after re-stripping it, the code size dictated that we just could not use some platforms.
Another thing to keep in mind is compiler maturity. C++ in the past 10 years has come a long way. But on these embedded platforms you may be working with a C++ toolchain from the late 1990s (if you are lucky). Some of the toolchains shipped with these chips are in poor shape, and they will never change unless you spend a lot of time bringing them up to something semi-current. Getting a Rust toolchain on them? Maybe if you spent months messing around with it. Months you could spend shipping product.
C++ can be written in a compact way; in fact some of its features are useful in that regard. But you will probably want to turn off exceptions and RTTI (two features which are unfortunately not "zero-cost").
Like you say, it can be done in a compact way. There are some gotchas with memory though, like the RTTI and exceptions you point out. Another one people forget about is the per-object overhead itself. That is not zero-cost once you use virtual functions: there is usually a hidden pointer in each object to a struct that holds the vtable, which is what lets you do those cool C++ things like inheritance and virtual dispatch. Most of the time when people say "C++" I have found they do not mean the language, which is decently compact; they mean the C++ std lib. It is a distinction many do not make, unfortunately.
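A quick, self-contained way to see that per-object cost (exact sizes vary by compiler and target):

```cpp
#include <cstdio>

struct PlainSensor {            // no virtual functions: just the data
    int raw;
    int read() { return raw; }
};

struct VirtualSensor {          // same data, but polymorphic
    int raw;
    virtual int read() { return raw; }
};

int main() {
    // Typically prints 4 and 8 on a 32-bit MCU, 4 and 16 on a 64-bit host,
    // because every VirtualSensor instance carries a hidden vtable pointer.
    std::printf("%zu %zu\n", sizeof(PlainSensor), sizeof(VirtualSensor));
}
```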
It is becoming less and less of an issue as the more powerful SoCs have come down in price and gained more memory/flash. The newer ones also usually come with a somewhat modern compiler stack. Also, if you follow some of the MISRA standards, in some projects that can even more radically change what you can and cannot do.
Rust avoids a lot of the C++ pitfalls out of the gate with a split core vs std library set.
Panic handling and message formatting are, however, a known drawback in debug builds. In release builds this gets optimized away in most cases.
Rust actually has a better standard library story than C++ here, since it has a segmentation between the "core" and "std" libraries, where "core" contains the stuff which doesn't require heap allocation. Makes it a bit easier to know what will be supported. (And tbh, the standard library is low on the list of reasons to use C++, especially in embedded.)
I think Rust has a lot of potential in the embedded space. I'm especially interested in whether the async stuff will help make writing small and safe state machines easier: I think it could be extremely good at combining the strengths of stackful tasks (which are wasteful of memory and less predictable) and state machines (which can be a pain to write, especially long sequential ones). If it lets you ship a product sooner, with fewer bugs, on a cheaper micro, I think it's a strong proposition. Unfortunately, the industry is often unwilling to even consider C++ (and not for good reason: the reasons given for not using it, or not even considering it, are often based on misconceptions or outright falsehoods).
Some components can become sensors under the right conditions. Example: Components becoming microphonic.