
I might be wrong, but the key difference between a microcontroller and an application processor is deterministic execution. When controlling a motor, say, it might be vital that your interrupt handler finishes in less than 100 clock cycles.
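For instance, something like this (just a sketch, not anyone's actual code; I'm assuming a Cortex-M4-class STM32 with CMSIS headers and the DWT cycle counter, and next_duty is a made-up variable):

    #include "stm32f4xx.h"               // CMSIS device header (STM32F4 assumed)

    volatile uint16_t next_duty;         // computed elsewhere by the control loop
    volatile uint32_t worst_case_cycles;

    // Enable the DWT cycle counter once at startup so ISR cost can be measured.
    static void cyccnt_init(void) {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
        DWT->CYCCNT = 0;
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;
    }

    // Timer update interrupt: push the new PWM duty cycle to the motor.
    // The whole body has to fit inside the cycle budget, every time.
    void TIM1_UP_TIM10_IRQHandler(void) {
        uint32_t start = DWT->CYCCNT;

        TIM1->SR = ~TIM_SR_UIF;          // clear the update flag
        TIM1->CCR1 = next_duty;          // load the new compare value

        uint32_t spent = DWT->CYCCNT - start;
        if (spent > worst_case_cycles)
            worst_case_cycles = spent;   // track worst-case handler time
    }

On a simple in-order core with no caches, worst_case_cycles barely moves between runs; add caches, branch prediction or an OS and that number starts to wander.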

A microcontroller usually[1] doesn't have fancy out-of-order execution, fancy caches, etc., as those would make the execution timing less deterministic. An MMU would as well.

Lacking these features also makes microcontrollers a lot slower, and I'm guessing he's thinking about cost or watts per MIPS, something like that. Yes, the application processors draw more power overall, but (I assume) they are so much faster that it more than makes up for it in dollars per MIPS or watts per MIPS.

So it appears to be more of a symptom than a cause. But again, I might be wrong.

[1]: https://en.wikipedia.org/wiki/ARM_Cortex-M#Silicon_customiza...



None of these application processors I reviewed have out-of-order execution. Cortex-M7 microcontrollers have icache/dcache. You can run bare-metal code on any of these application processors and it will behave more or less like a microcontroller. The lines are really pretty blurry, but the MMU is the big dividing line in my opinion (but it's obviously open for discussion).
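To illustrate the cache point, roughly what it looks like on an M7 (again just a sketch using the standard CMSIS-Core calls; dma_start is a made-up placeholder for whatever actually kicks off the transfer):

    #include "core_cm7.h"    // CMSIS-Core for Cortex-M7 (normally pulled in via the device header)

    extern void dma_start(const uint8_t *buf, uint32_t len);   // hypothetical DMA driver

    uint8_t tx_buf[64] __attribute__((aligned(32)));           // cache-line aligned buffer

    void setup_caches(void) {
        SCB_EnableICache();
        SCB_EnableDCache();
    }

    void send_buffer(void) {
        // With the D-cache on, the buffer has to be cleaned to memory before
        // the DMA engine can see it -- bookkeeping a cacheless M0/M3 never needs.
        SCB_CleanDCache_by_Addr((uint32_t *)tx_buf, sizeof tx_buf);
        dma_start(tx_buf, sizeof tx_buf);
    }

So even within the "microcontroller" camp you already get cache behaviour to reason about, which is part of why the line is blurry.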


> None of these application processors I reviewed have out-of-order execution.

You're right, I was thinking of the A8, A9 and similar.

> the MMU is the big dividing line in my opinion

I'm no expert, but seems like a reasonable line in the sand to me.



