In the case where we fade a 32-bit surface (i.e., GL_MODULATE with (a,a,a,a) plus blending), we first make a copy of the background into an RGB buffer, then blend the 32-bit surface as usual (without the alpha component), and finally blend the copy of the background on top with weight 1-a. This uses a lot of bandwidth, but no CPU time.
Use EGLImageKHR instead of copybit directly.
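A rough sketch of the EGLImageKHR path, assuming 'dpy' and 'buffer' (an android_native_buffer_t*) come from elsewhere and that the headers declare the extension prototypes (otherwise the entry points have to be fetched with eglGetProcAddress):

    // Wrap a gralloc buffer in an EGLImageKHR and bind it to a texture,
    // instead of going through copybit.
    EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
            EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)buffer, attrs);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Bind the EGLImage as the texture's storage; no pixel copy is made.
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);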
We now have the basis to use streaming YUV textures (in fact, we already do). When/if we use the GPU instead of the MDP, we'll need to make sure it supports the appropriate YUV format.
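One way to probe the GPU at runtime is a plain extension-string query; GL_OES_EGL_image is what glEGLImageTargetTexture2DOES requires, but whether a given YUV layout is accepted additionally depends on the driver, so the string checked here is only illustrative:

    static int hasGLExtension(const char* name) {
        const char* exts = (const char*)glGetString(GL_EXTENSIONS);
        return exts != NULL && strstr(exts, name) != NULL;
    }

    // e.g. hasGLExtension("GL_OES_EGL_image")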
Also make sure the code still compiles when EGL_ANDROID_image_native_buffer is not supported.
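A compile-time guard along these lines should work, assuming eglext.h defines the EGL_ANDROID_image_native_buffer macro only when the extension is declared; 'dpy' and 'buffer' are the same placeholders as above, and the fallback branch is just a stand-in:

    EGLImageKHR image = EGL_NO_IMAGE_KHR;
    #ifdef EGL_ANDROID_image_native_buffer
        image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)buffer, 0);
    #else
        // Extension headers not available: keep using the existing
        // copybit / glTexImage2D upload path.
    #endif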