
Immediate-mode GUIs are the way to go, in my opinion. I've been playing around with some prototypes, and they are dead simple compared to event-driven GUIs with their multi-stage rendering pipelines and so on.

The reduction in complexity alone is a massive feature. Managing view state is almost fun with this approach. The performance can be incredible too if you do a little bit of hacking to cache certain expensive resources between frames (e.g. texture atlas computation).
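
In case it helps, here's a minimal sketch of that between-frames caching idea in C++. The Atlas/AtlasCache names are hypothetical stand-ins, not any particular library's API: the cache keys an expensive resource on a hash of its inputs and only rebuilds it when that hash changes.

    // Minimal sketch (assumptions, not any particular library): cache an
    // expensive resource -- here a hypothetical texture atlas -- keyed on a
    // hash of its inputs, and rebuild it only when that hash changes instead
    // of recomputing it every immediate-mode frame.
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    struct Atlas {
        std::vector<std::uint8_t> pixels;  // packed glyph bitmap data
        int width = 0, height = 0;
    };

    class AtlasCache {
    public:
        // 'key' should hash everything the atlas depends on (font, size, glyph set).
        const Atlas& get(std::size_t key, const std::function<Atlas()>& build) {
            if (!valid_ || key != cached_key_) {  // inputs changed -> rebuild once
                cached_atlas_ = build();
                cached_key_ = key;
                valid_ = true;
            }
            return cached_atlas_;  // otherwise reuse last frame's result
        }
    private:
        bool valid_ = false;
        std::size_t cached_key_ = 0;
        Atlas cached_atlas_;
    };

Each frame, the UI code just calls get() with the current input hash and a builder lambda; the expensive work only runs when something actually changed.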

One trick I have employed is to process user events in "micro" batches that are 10-100 µs wide. This lets me consolidate redraws across multiple logical user events, which I have found incredibly helpful when inputs are sampled more frequently than output frames are produced, or otherwise arrive at a higher rate. Driving redraws off individual events is probably a mistake if you want any level of concurrency. You need to batch up the events, process them in bulk, and then redraw on the edge of each batch.
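
A rough sketch of that micro-batch loop. poll_event/apply_event/redraw are placeholder hooks I made up for illustration, not a real platform API, and a production loop would block on the first event rather than spinning on the clock; but the shape is the same: drain everything that arrives within the window, apply it all, then redraw once on the batch boundary.

    // Sketch of the micro-batching idea with stubbed platform hooks.
    #include <chrono>
    #include <optional>
    #include <vector>

    struct Event { /* platform-specific payload */ };

    std::optional<Event> poll_event() { return std::nullopt; }  // stub: non-blocking poll
    void apply_event(const Event&) {}                           // stub: update view state
    void redraw() {}                                            // stub: render one frame

    void run_loop() {
        using clock = std::chrono::steady_clock;
        for (;;) {
            // Collect everything that arrives within a ~100-microsecond window
            // so a burst of high-rate input produces one redraw instead of many.
            std::vector<Event> batch;
            const auto deadline = clock::now() + std::chrono::microseconds(100);
            while (clock::now() < deadline) {
                if (auto ev = poll_event()) batch.push_back(*ev);
            }
            // Apply the whole batch, then redraw once on the batch boundary.
            for (const auto& ev : batch) apply_event(ev);
            if (!batch.empty()) redraw();
        }
    }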




There are some big challenges here. One is IME (input method editor support), which requires storing intermediate state about the editing session and communicating it to the platform; another is accessibility, which requires keeping a tree of elements the user can currently navigate to, which may include elements that are not currently on screen.
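
For what it's worth, a minimal sketch of the kind of retained tree an accessibility backend wants (my own illustration, not tied to any particular platform API). The point is that the nodes persist across frames and can describe elements that aren't currently drawn, so an immediate-mode UI has to build and maintain this state explicitly rather than deriving it from whatever widgets happened to be rendered.

    // Illustration only: a retained accessibility tree kept alongside an
    // immediate-mode UI, handed to the platform rather than re-created per frame.
    #include <memory>
    #include <string>
    #include <vector>

    struct AccessNode {
        std::string role;        // e.g. "button", "textbox"
        std::string label;       // what a screen reader announces
        bool on_screen = false;  // off-screen nodes still exist and stay navigable
        std::vector<std::unique_ptr<AccessNode>> children;
    };

    struct AccessTree {
        AccessNode root;  // persists between frames; patched as the UI changes
    };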

So yes, maybe okay for games, but I’m unconvinced about imgui for GUI applications of the sort described in this post.



