
I'm working on a few projects at home with ESP and RPi Zero. It's so much fun to do your own "IoT" devices.

There's still a problem I don't know how to solve well, holistically. When you do something with a UI, there are inputs from humans (buttons, knobs, etc.), from the network, and from timers. I'm completely lost on how to coordinate all of that, especially when I use an LED/OLED display.

How do I make sure a single screen is displayed long enough to be readable? But on the other hand, how do I react to all inputs? And how do I deal with some inputs/triggers being more or less important (e.g. "modal" screens over static screens)? Also, how do I make sure not to get lost in the threading mess I just created? The cherry on top: some inputs require "short-term memory", like patterns of button presses (double/triple press, long hold) or rotary encoders.

In the end I give up on implementing a lot of features.



You might want to look into an RTOS.

Each task has its own stack and can communicate with other tasks via mailboxes or shared memory (with a lock).

You'd have a task for each of the things you mention: debouncing a button, driving the display from a buffer, and running the application logic.

The application logic then doesn't have to worry about bounces, how long it'll take to drive the display, etc.
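Roughly like this, sketched in Python because it reads like pseudocode. On a real RTOS these would be tasks and queues; every name and timing here is invented, and handle() is a placeholder for whatever your app does:

    import asyncio

    events = []   # the "mailbox": debounce task -> app task

    def handle(ev, fb):
        print(ev)              # placeholder for your app logic

    async def button_task(pin):
        # Debounce: only report edges that survive a settle time.
        # Assumes an active-high, machine.Pin-like object.
        last = pin.value()
        while True:
            cur = pin.value()
            if cur != last:
                await asyncio.sleep(0.02)      # 20 ms settle
                if pin.value() == cur:
                    last = cur
                    events.append("down" if cur else "up")
            await asyncio.sleep(0.005)

    async def display_task(display, fb):
        # Only this task touches the hardware; everyone else
        # just writes into the shared framebuffer fb.
        while True:
            display.show(fb)
            await asyncio.sleep(0.05)          # ~20 fps is plenty

    async def app_task(fb):
        # Application logic sees clean events, never raw pins.
        while True:
            while events:
                handle(events.pop(0), fb)
            await asyncio.sleep(0.01)

    async def main(pin, display, fb):
        asyncio.create_task(button_task(pin))
        asyncio.create_task(display_task(display, fb))
        await app_task(fb)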

Modal screens over static screens are orthogonal to this, though. You'd need to build a priority scheme and only pass events down to the current on-screen view.
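A made-up sketch of that (not from any library): keep a stack of screens, give each a priority and a minimum on-screen time, and hand events only to the top one:

    import time

    class Screen:
        priority = 0        # higher = modal, harder to cover
        min_show_s = 2      # readability floor
        shown_at = 0
        def handle_event(self, ev): pass
        def draw(self, fb): pass

    stack = []              # the top entry owns the display

    def push(screen):
        # A new screen may only cover one of lower or equal
        # priority, so background noise can't hide an active modal.
        if stack and screen.priority < stack[-1].priority:
            return False
        screen.shown_at = time.time()
        stack.append(screen)
        return True

    def pop():
        # Refuse to dismiss a screen before it's been readable.
        top = stack[-1] if stack else None
        if top and time.time() - top.shown_at >= top.min_show_s:
            return stack.pop()
        return None

    def dispatch(ev):
        if stack:
            stack[-1].handle_event(ev)   # current on-screen view only

The pop() refusal is also what guarantees a screen stays up long enough to be readable.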


Thanks, ipoopatwork.

I'm using MicroPython and that's what I want to stick with. But I'll research how an RTOS does things and build an equivalent.
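
For the "short-term memory" part (double press / long hold), I'm thinking of something along these lines, assuming a debounced stream of "down"/"up" strings like the one you sketched. The thresholds are guesses I'd tune on the device:

    import asyncio

    try:
        from time import ticks_ms, ticks_diff       # MicroPython
    except ImportError:
        import time
        def ticks_ms(): return int(time.monotonic() * 1000)
        def ticks_diff(a, b): return a - b

    async def gesture_task(events, emit):
        # Folds debounced "down"/"up" edges into gestures. The
        # "short-term memory" is just two timestamps. Timestamping
        # on arrival is off by at most one poll period, which is
        # fine for thresholds this coarse.
        LONG_MS, DOUBLE_MS = 800, 300
        down_at = None      # when the current press started
        last_up = None      # when the previous press ended
        while True:
            while events:
                kind, t = events.pop(0), ticks_ms()
                if kind == "down":
                    down_at = t
                elif kind == "up" and down_at is not None:
                    if ticks_diff(t, down_at) >= LONG_MS:
                        emit("long")
                    elif last_up is not None and \
                            ticks_diff(t, last_up) <= DOUBLE_MS:
                        emit("double")
                    else:
                        # Simplification: a double press also emits a
                        # "short" for its first click; buffer it for
                        # DOUBLE_MS if the UI can't live with that.
                        emit("short")
                    last_up = t
                    down_at = None
            await asyncio.sleep(0.01)

emit() would just append to another list/mailbox that the app task drains, the same pattern all the way up.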



