
> and why all complicated rendering ends up on separate GPU layers.

No, it doesn't. The app renders to a single surface in a single GPU render pass unless the app uses a SurfaceView, which is generally only for media uses (camera, video, games).

Multiple layers are only used when asked for explicitly (View.setLayerType) or when required for correct blending. They are otherwise avoided, as using multiple layers is generally slower.
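For illustration, a minimal Kotlin sketch of that explicit opt-in (the helper name and the 200px distance are made up; setLayerType and the LAYER_TYPE_* constants are the real View API):

    import android.view.View

    // Hypothetical helper: promote a view to its own GPU layer only for
    // the duration of one animation, then drop it. Layers are opt-in via
    // setLayerType; the default is LAYER_TYPE_NONE.
    fun slideWithLayer(view: View) {
        view.setLayerType(View.LAYER_TYPE_HARDWARE, null)
        view.animate()
            .translationX(200f)
            .withEndAction {
                // Release the layer so it stops costing texture memory.
                view.setLayerType(View.LAYER_TYPE_NONE, null)
            }
            .start()
    }

ViewPropertyAnimator.withLayer() does the same layer bookkeeping for you automatically, which is why hand-managed layers are rarely worth it.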

You can absolutely do the "bad" things in the linked article in a native Android app and still hit 60fps pretty trivially. The accelerated properties, like View.setTranslationX/Y, only bypass what's typically a small amount of work (and they don't use a caching layer). It's an incremental improvement, not something that's absolutely required. Scrolling in a RecyclerView or ListView, for example, doesn't even use them: it just moves the left/top of every view and re-renders, and that's plenty fast to hit 60fps.
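As a rough sketch of the two paths described above (the function name and the 40px offset are hypothetical; both calls are real View methods):

    import android.view.View

    fun nudge(view: View) {
        // Accelerated property: updates a transform on the existing
        // display list instead of re-recording the view's drawing.
        view.translationX = 40f

        // What list scrolling effectively does instead: shift the
        // view's left/right edges and re-render. Still easily 60fps.
        view.offsetLeftAndRight(40)
    }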



This used to be true, but since Android M and N, which added a lot more animations, much of that animation now happens on separate GPU layers (and is rendered, when necessary, by separate threads).

This was made necessary in particular by the many ripple animations that were introduced.


I think you're confusing the RenderThread with GPU layers. There's only one render thread per app, and it handles all rendering work done by that app. It's really no different from pre-M rendering, other than that a chunk of what used to be on the UI thread is now on a different thread. The general flow is the same.

The new part is that some animations (basically just the ripple animation) can run on their own on that thread, but this doesn't use a GPU layer for it, nor a different OS composition layer.
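For context, here's roughly how such a ripple gets attached (a hedged sketch; the colors are placeholders, RippleDrawable is the real framework class):

    import android.content.res.ColorStateList
    import android.graphics.Color
    import android.graphics.drawable.ColorDrawable
    import android.graphics.drawable.RippleDrawable
    import android.view.View

    // On M+ the ripple's expansion animates on the RenderThread,
    // independent of UI-thread work, but it still draws into the app's
    // single surface: no extra GPU layer, no extra composition layer.
    fun addRipple(view: View) {
        view.background = RippleDrawable(
            ColorStateList.valueOf(Color.argb(64, 0, 0, 0)), // placeholder ripple color
            ColorDrawable(Color.WHITE),                      // content drawn under the ripple
            null                                             // no mask
        )
    }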


> but it doesn't use a GPU layer for it nor a different OS composition layer.

Really? As so often happens, there was a lot of talk about doing that beforehand, and it wasn't discussed at all later on, so I had assumed it had been done. Interesting that it didn't happen.

What would be the reason for that? Animating objects over a static background seems like a prime case for GPU layers. Or was it the issue of the framebuffers being too large again?


Think about what the static background actually is. It's probably either an image (which is already just a static GL texture, so there's no need to cache the bitmap in another bitmap), or something like a round rect, which can actually be rendered faster than sampling from a texture (it's a simple quad plus a simple pixel shader, with no texture fetches slowing things down). In such a scenario a GPU layer just ends up making things slower and using more RAM.
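To make the round-rect case concrete, a small hedged sketch (GradientDrawable is the usual way to get such a background; the radius and color are placeholders):

    import android.graphics.Color
    import android.graphics.drawable.GradientDrawable
    import android.view.View

    // A background like this is drawn from geometry each frame (a quad
    // plus a cheap pixel shader), so re-rendering it is cheaper than
    // caching it into a layer and sampling that texture back.
    fun setRoundRectBackground(view: View) {
        view.background = GradientDrawable().apply {
            shape = GradientDrawable.RECTANGLE
            cornerRadius = 24f        // placeholder radius, in pixels
            setColor(Color.LTGRAY)    // placeholder fill color
        }
    }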



