Hacker News
How to run C programs on the BeagleBone's PRU microcontroller (righto.com)
112 points by dwaxe on Sept 20, 2016 | hide | past | favorite | 46 comments



> How to compile C programs with Code Composer Studio

Oh god! Ack, ew, retch.

I had to use CCS a few years ago, and it was a buggy monstrosity. I don't understand why silicon vendors have to wrap their dev tools in such horrid IDEs. One reason I loved working with AVRs is that I could get a gcc-like interface, so I knew exactly which options and parameters were used, rather than having some godforsaken IDE do magic for me and get in my way. Having to hunt down options in modal windows and tabs is enough to make my blood boil just thinking about it.

Embedded development requires very precise control and knowledge about what you're doing, and IDEs more often than not get in the way of doing that.


For a while I worked with Bada, the Korean smartphone OS pushed by Samsung which nobody has ever heard of.

(It was actually not bad; apps written in C++ with a Java-like class library which all ran inside a single process, targeted at fairly small devices. Our test hardware was this massive plastic brick with daughterboards poking out in all directions all wrapped round this utterly gorgeous curved OLED screen.)

You developed on it using a custom IDE which totally wasn't Eclipse (despite being Eclipse) which used gcc as the backend compiler and a bunch of custom tools to generate the deployment packages. Naturally, we ignored the IDE and just used makefiles, so that we could integrate the Bada build into our existing build automation.

...then one update they removed the custom tools and moved their functionality into Eclipse.

This meant that the only way to create applications was to open Eclipse, load the project, and then manually press the 'build' button.

Naturally, we complained, along with everyone else who used it, and never got an answer. The platform died soon after.


I use CCS and the TI compilers for a processor at work. It's not bad. I was able to move us to a Make-based build system for the pile of artifacts we build with that compiler and gcc in less than a week. TI's command-line tools are pretty useful.

(Yes, I have tried CMake, and it is entirely the wrong kind of cross-platform.)


I visited Samsung HQ in Suwon, Korea during that period. As I entered I recall being shown "that building is Bada OS", "that building is Windows", "that building is Android". (We were preparing the flagship application for the Galaxy device series launch in the US market, an on-device DRM'd digital video market with pre-loaded/licensed content and carrier billing integration). I guess that was 2010.


Hey, I used a Samsung Wave, because it came with Bada OS, which looked pretty nice (and ran much better) compared to Android on similar hardware! Too bad it was pretty much DOA as an ecosystem. Interesting story anyway. :)


> ...This meant that the only way to create applications was to open Eclipse...

Surely not. Now, this may have been much more trouble than it was worth, but I very nearly guarantee there was a way to do it from a command line.

It's been a while since I Eclipsed, but I (probably erroneously) recall a "generate Makefile" option on most adaptations of it.


We could run gcc fine. The problem was (from memory, this was a while ago) the bespoke tool which bundled up the binary and resource data and signed it to create the installable package. Previously it had been an external binary; but they rewrote it in Java and linked it into the Eclipse binary, and the only hooks were from the Eclipse project build system. The generated makefiles didn't call this because there was nothing to call.

However --- good news! I've just found references from 2012 to a thing called MakePackage.jar, which seems to do exactly what we wanted. There certainly wasn't any such thing while I was working on it, so it looks like they changed their mind later. Which would be great, but we'd abandoned the platform by then.

Here's what seems to be left of the Bada documentation.

http://static.bada.com/contents/tutorials/bada_SDK_1.0.0/bad...


I was also intrigued after reading that. I was wondering if gcc and friends were integrated directly into Eclipse? It's possible, but it would be a complete waste of engineering resources.


I think a problem with a lot of Eclipse-based embedded studios is that the driver and board support package code is generated from templates via an Eclipse plugin written in Java. Also, good luck trying to figure out an Eclipse project file without retching.

Eclipse + Embedded => Cancer


Currently doing that.

Retch indeed.


CCS has come a long way since version 3. Ever since TI started using Eclipse as the base for CCS, things have been getting better. Out of all the IDEs I've used over the past 5 years, CCS is my favorite.

> Embedded development requires very precise control and knowledge about what you're doing, and IDEs more often than not get in the way of doing that.

Embedded development requires precise control and knowledge about what you're doing, and that's precisely why you would want to use the manufacturer's recommended IDE. Have you ever tried managing development with an RTOS? Or streaming data out of RAM while your code was running? Or fiddling with the processor registers when debugging? IDEs make all of that much simpler; you may want to give CCS another chance. It's a really powerful IDE.


> Have you ever tried managing development with an RTOS? Or streaming data out of RAM while your code was running? Or fiddling with the processor registers when debugging?

Yes to all of the above, and had to script it too, which I couldn't do with an IDE at the time.

I indeed haven't used CCS-Eclipse in years. Has it finally eliminated its lag and RAM usage? I was always amazed how it managed to continuously stay just ahead of Moore's Law in terms of resource usage.


> I indeed haven't used CCS-Eclipse in years. Has it finally eliminated its lag and RAM usage? I was always amazed how it managed to continuously stay just ahead of Moore's Law in terms of resource usage.

You're right, the lag was bad. Recently I haven't noticed it much; maybe I got used to it, or maybe it's been improved. Currently my instance of CCS 6.0.1 uses around 286MB of RAM, and CCS 6.1.0 uses about 480MB. In comparison, PyCharm (2016.1.4) is using 927MB of RAM :(

How did you debug code running on the hardware without an IDE? Is there an easy-ish way to do that?


> How did you debug code running on the hardware without an IDE? Is there an easy-ish way to do that?

gdb :)

Most in-system-debuggers expose a gdb-server interface, that a client gdb can then connect to. GDB is very scriptable.


What you don't want is a bunch of opaque gnosis in the IDE itself. You want the process to be transparent, in an SEI Level 2 sort of way.

I used to use ObjecTime, and it added random stuff to its C++/'C' code generation process that meant you were never fully SEI Level 2. We had to keep a Golden Build machine alive because of it.


I worked on a project about a year ago on a TI TMS470M using CCS and thought it was pretty nice. It had live RAM view which was really useful for the project I was working on at the time and it was stable.


I rarely find embedded developers in forums praising gcc-like interfaces and complaining about IDEs. This should happen more often.

Can you recommend any boards that allow embedded development without the need for Windows (Microsoft or X11)?


Because the majority of them are just happy using Windows and IDEs, which is why the hardware vendors target them with their tooling.


My dream for the BeagleBone was to be able to develop for the PRU (and why not the CPU) from within the very Linux distro running on the Beagle itself.

Just give me gcc and header files.

All those IDEs will do nothing other than drive away novice and experienced devs alike. Only the ones that evaluated every other option and still think your hardware is the best will stick around. And for the Beagle, a fully hobbyist board with no dream of being used in a product, that amounts to nobody.


You can compile for the PRU from within the Linux distro on the BeagleBone itself; see kens' comments about /usr/bin/clpru below.


There are many experienced devs who have worked on both sides of the fence and would rather use an IDE than a CLI.


There is a nascent GCC port, may be good for hobby projects:

https://github.com/dinuxbg/gnupru


Since the author cares about saving a few cycles per check by using the interrupt instead of directly reading the PWM register, they should look up how to access the constants table (4.4.1.1).

The compiler can be coaxed, with some rather ridiculous clpru compiler-specific variable definitions, into referencing addresses from the constants table by a C number rather than loading up a general-purpose register.

It is in the clpru compiler documentation under section 5.14.4. It is also discussed in some TI slides (Flash required) in part 18: http://software-dl.ti.com/public/hpmp/software/pru_compiler_... PDF slides: http://software-dl.ti.com/public/hpmp/software/pru_compiler_... (PDF page 28)

TI states in their slides that this constant register table access "Barely counts as C code." I think they're right. It is, however, slightly faster. When trying to do cycle-perfect timing, such micro optimizations start to actually matter.


Thanks for mentioning the constants table. In my case, saving a few cycles isn't important but eliminating jitter is very important, which is why I use interrupts.

Reading the PWM register took 5 cycles, so I ended up with 25 nanoseconds of jitter, which caused bit errors in my Alto Ethernet emulator. Polling interrupts take 1 cycle, so I end up with no jitter. (Does anyone else think it's bizarre that TI calls them "interrupts" when you need to poll them?) You might think 1 cycle would cause 5 ns jitter, but the timer uses the same clock so everything is synchronized.

I considered using the constants table if I needed to shave off some cycles, but as you point out the code required is pretty hideous. And you also need to define all the constants in the linker file too? That's just crazy.


Where do all the PRU programmers hang out? Is there a forum to ask questions? Based on this HN discussion, there seem to be lots of people using the PRUs.


I too would be interested in such a place.


If you're doing anything big with the BeagleBone's PRUs, it's also worth checking LEDscape (https://github.com/osresearch/LEDscape).

It provides a good reference on how to do complex tasks on the PRU.


Just curious if the PRU compiler is some sort of gcc variant or something proprietary.


The PRU compiler (clpru) is a TI proprietary compiler, free but licensed. It is pre-installed on BeagleBones. There's also a gcc compiler‡, but I don't know anything about it.

https://github.com/dinuxbg/gnupru


I didn't know about this! My last experience messing around with compiling for the PRUs was really frustrating - I gave up after about three days of screwing around with freeware tools that didn't work.

Could you point to where the PRU compiler is located on the BeagleBone file tree? I'd like to try this out!


The TI PRU compiler is at /usr/bin/clpru on my BeagleBone (from Adafruit, with Debian). If it's not on your BeagleBone, you can download it here: http://elinux.org/Beagleboard:BeagleBoneBlack_Debian#TI_PRU_...

I'd recommend using the CCS IDE (as I describe in the article) rather than the command line compiler, but it's up to you. You'll still probably encounter frustration - it's not an Arduino experience for sure.


I think a lot of my frustration stemmed from NOT using CCS. Maybe I should just bite the bullet and do that...


Texas Instruments provides Code Composer Studio (CCS). It's about $800 a seat (the version that I have), but there is a free version to use with the BeagleBone. See here: http://processors.wiki.ti.com/index.php/Download_CCS


Also, you don't necessarily need the IDE to compile for the PRU. The compiler itself is freeware, and is some sort of fork of the compiler line they developed for their DSPs.


I played with the idea of implementing a little Forth for the PRU, but wasted a weekend looking for fast multiply and divide algorithms (IIRC, it can only do addition, subtraction & left/right-shift). It's a neat little platform, but at the end of the day I've not found any really pressing need to use it.

That doesn't mean others haven't, of course — I'm very much a software guy these days. Kinda wish I'd had a BeagleBone back when I was a kid, designing & soldering my own stuff: I bet it'd have been awesome. Can't even imagine what Forrest M. Mims could come up with using one of them as a platform!


[Edit: see child comments, this isn't right]

Rather than using CCS, one can use gcc or clang targeted to cortex-m3 (gcc target: arm-none-eabi + `-march=cortex-m3`, clang:thumbv7-eabi) to build the code (with the usual caveats that you're targeting a bare metal system).

The PRU is just a Cortex-M3 core which is connected in interesting ways to the Cortex-A8 on which Linux runs. Nothing too exotic.


The PRU is a proprietary TI architecture that is totally different from the Cortex M3. The PRU is a low-level deterministic real-time 32-bit microcontroller, while the Cortex M3 is a "real processor" with interrupts, memory management and so forth. The instruction sets and architectures have nothing in common.

Maybe you're thinking of the Cortex M3 that's on the chip for power management?


Different chip/vendor but Freescale's i.mx7 is an AMP design with a Cortex A7 + an M4 presumably for the realtime bits. I had the same thought w/r/t development on the PRU.

http://www.nxp.com/products/microcontrollers-and-processors/...


Ah, you're right, it does look like that's the case. It's a shame it's not something better supported (like a cortex variant).


I think newer chips have the option for a Cortex-M companion. This goes a bit beyond my skillset, but I think the point of the proprietary PRU ISA was that they could ensure super specific and regular instruction timings, with every instruction executed in a single cycle and no pipelining or other stuff that can make real-time programming more tricky. That is something that afaik you simply can't do with Cortex-M cores.


Cortex-R series is the equivalent to the PRU.

'R' for realtime.... in-order scheduling, highly optimised for determinism over performance.

(Having said that.... 'M' series is pretty good too really).


I would use an FPGA for this kind of project. One of the ARM+FPGA hybrids could be easiest.


I looked at FPGAs, but that's not just a learning curve, but a whole new world to learn. I started reading about Verilog and decided that an FPGA was overkill for what I wanted to do (essentially controlling a few signals at 5.9MHz for an Ethernet emulator). On the other hand, for the Xerox Alto disk emulator, Carl (on the restoration team) is using an FPGA.


Is the disk controller using an FPGA because that's what Carl was most familiar with or is there a technical reason? Does data come in too fast for the PRU to keep up or are the timing requirements too stringent?


Carl is using a FPGA because that's what he is most familiar with and he's built disk emulators with FPGAs before. The Alto's Diablo disk provides data at 1.6 MHz, so the 200 MHz PRU could probably keep up. The FPGA can make the timing more exact, and Carl likes to make sure all the signals have precise timing. A complication with disk as compared to Ethernet is there are a lot of signals to deal with (track address, sector address, status signals, read clock, read data, write data, unit select, etc). He goes into details at his blog: https://rescue1130.blogspot.com/


If you can use the PRU instead, that's a lower barrier to entry - in some ways. For stuff that needs to be more or less +/- 1 msec in latency, an Arduino might be an easier choice, depending on size, weight and power.

The real problem with the PRU is that it's more or less single-vendor.



