We lost the concept of vertical stride when moving video playback to EGLImage.
Here we bring it back in a somewhat hacky way that will work only for the
softgl/mdp backend.
In the case where we fade a 32-bit surface (i.e. GL_MODULATE with (a,a,a,a) plus blending),
we first make a copy of the background into an RGB buffer, then we blend the 32-bit
surface as usual (without the alpha component), and finally blend the copy of
the background on top with 1-a. This uses a lot of bandwidth, but no CPU time.
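The per-pixel arithmetic works out as follows (a sketch assuming premultiplied alpha; the names are illustrative, not the actual libagl code, and f is the fade factor):

    /* pass 1: save the background into an RGB buffer */
    copy = bg.rgb;
    /* pass 2: composite the surface at full opacity (ONE, ONE_MINUS_SRC_ALPHA) */
    fb = surf.rgb + bg.rgb * (1 - surf.a);
    /* pass 3: blend the saved copy back on top, weighted by 1-f */
    out = fb * f + copy * (1 - f);
    /* expanding: out = surf.rgb*f + bg.rgb*(1 - surf.a*f), which is exactly
       the faded composite, with the extra passes done by the blitter */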
Use EGLImageKHR instead of copybit directly.
We now have the basis to use streaming YUV textures (in fact, we
already do). When/if we use the GPU instead of the MDP we'll
need to make sure it supports the appropriate YUV format.
Also make sure we compile if EGL_ANDROID_image_native_buffer is not supported.
This change makes SurfaceHolder.setType(GPU) obsolete (it's now ignored).
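A minimal sketch of that path, assuming the extension entry points are exported by the EGL library (the helper name and the absent error handling are illustrative); the same #ifdef keeps the file building when the extension is missing:

    #define EGL_EGLEXT_PROTOTYPES 1
    #define GL_GLEXT_PROTOTYPES 1
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES/gl.h>
    #include <GLES/glext.h>

    #ifdef EGL_ANDROID_image_native_buffer
    /* wrap a native buffer in an EGLImage and attach it to the currently
       bound texture, instead of going through copybit directly */
    static EGLImageKHR bindNativeBuffer(EGLDisplay dpy, EGLClientBuffer buf)
    {
        EGLImageKHR img = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                EGL_NATIVE_BUFFER_ANDROID, buf, NULL);
        if (img != EGL_NO_IMAGE_KHR)
            glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)img);
        return img;
    }
    #endif /* compiles to nothing when the extension is absent */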
Added an API to android_native_window_t that allows extending its functionality without ever breaking binary compatibility. This is used to implement the new set_usage() API, which software renderers must call because the default usage flags are suited to h/w rendering.
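For example, a software renderer would opt in along these lines (a sketch; the header locations have moved around between releases):

    #include <system/window.h>       /* historically ui/egl/android_natives.h */
    #include <hardware/gralloc.h>

    /* the default usage flags are suited to h/w rendering; a software
       renderer must request CPU-accessible buffers before dequeueing */
    static void set_sw_usage(ANativeWindow* window)
    {
        native_window_set_usage(window,
                GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN);
        /* internally this routes through the new extensible hook:
           window->perform(window, NATIVE_WINDOW_SET_USAGE, usage) */
    }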
There were several issues:
- when a surface was made non-current, the last frame wasn't shown and the buffer could stay locked
- when a surface was made current the 2nd time, it would not dequeue a new buffer
Now, queue/dequeue are done when the surface is made current.
For this to work, a new query() hook had to be added to android_native_window_t; it allows retrieving some attributes of a window (currently only width and height).
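Usage is straightforward; the hook takes an attribute enum and an out-parameter (here window is an android_native_window_t pointer):

    /* retrieve the window's dimensions through the new hook;
       query() returns 0 on success */
    int width = 0, height = 0;
    window->query(window, NATIVE_WINDOW_WIDTH, &width);
    window->query(window, NATIVE_WINDOW_HEIGHT, &height);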
* commit 'goog/readonly-korg-master':
Fixed Android issue #400, where the Intent documentation was inaccurate in a number of places, undoubtedly causing untold grief to innumerable masses.
Bug fix for libagl.
Merge commit '46e28db8818332e3cda4cc410cc89a1ed7ce4db6'
* commit '46e28db8818332e3cda4cc410cc89a1ed7ce4db6':
fix for [1969185] valgrind errors in new gl stuff
This bug was introduced when lighting computations were changed from eye-space to object-space.
The light position needs to be transformed back to object-space each time the modelview matrix changes, which requires us to compute the inverse of the modelview matrix. This computation was done with the assumption that normals were being transformed (which was the case when the computation was done in eye-space); however, normals only require the inverse of the upper 3x3 matrix, while transforming positions requires the inverse of the whole matrix.
This caused the interesting behavior that lights were more or less transformed properly, but not translated at all, which caused improper lighting with directional lights in particular.
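In other words, writing the modelview matrix in block form (a sketch of the algebra, not the actual code):

    /* modelview M = [ R  t ]     R: upper 3x3 (rotation/scale)
     *               [ 0  1 ]     t: translation
     *
     * normals are direction vectors (w == 0), so bringing them back to
     * object-space only needs the inverse of R; t drops out entirely.
     *
     * positions (like the light position) have w == 1 and need the full
     * inverse:   M^-1 = [ R^-1  -R^-1 * t ]
     *                   [  0        1     ]
     *
     * the -R^-1*t term is what the 3x3-only inverse was missing, which
     * is why lights rotated with the scene but never translated. */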
There was also another, smaller bug affecting directional lights: when vertices are read, only the active components are read and the other ones are ignored; later, the transformation operations are set up to ignore the unset values. However, in the case of lighting, we use the vertex in object space (that is, before it is transformed), and therefore were using uninitialized values, in particular w.
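The fix amounts to padding the unset components with sane defaults when vertices are fetched (an illustrative fragment; 'array' and 'size' are hypothetical names, and libagl works in 16.16 fixed point):

    /* defaults: x = y = z = 0, w = 1.0, so object-space lighting never
       reads garbage even when the array supplies fewer components */
    GLfixed v[4] = { 0, 0, 0, 0x10000 };
    for (int i = 0; i < size; i++)
        v[i] = array[i];    /* only the active components come from the array */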