Simplified things a lot. The biggest change is that the concept
of "ClientID" is now gone; instead we simply use references.
Change-Id: Icbc57f80865884aa5f35ad0d0a0db26f19f9f7ce
The new native_window_set_buffers_geometry call makes it possible
to specify a size and format for all buffers to be
dequeued. The buffer will be scaled to the window's
size.
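For s/w clients this looks roughly like the following (a minimal sketch; it assumes a valid android_native_window_t* and the HAL pixel-format constants):

    // All buffers dequeued after this call will be 320x240 RGB_565,
    // independent of the window's on-screen size; SurfaceFlinger scales
    // the queued buffers to the window during composition.
    static int requestFixedGeometry(android_native_window_t* win) {
        return native_window_set_buffers_geometry(
                win, 320, 240, HAL_PIXEL_FORMAT_RGB_565);
    }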
Change-Id: I2c378b85c88d29cdd827a5f319d5c704d79ba381
This change introduces R/W locks in the right places.
On the server side, it guarantees that setBufferCount()
is synchronized with "retire" and "resize".
On the client side, it guarantees that setBufferCount()
is synchronized with "dequeue", "lockbuffer" and "queue".
The new TextureManager class now handles texture creation and upload,
as well as EGLImage creation and binding to GraphicBuffers. It is
used indirectly by Layer and directly by LayerBuffer.
The new BufferManager class handles the set of buffers used for a
Layer (Surface); it abstracts how many buffers there are as well as
the use of EGLImage vs. regular texture ops (glTexImage2D).
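Roughly, the split is as follows (a sketch of intent with illustrative names, not the real declarations):

    #include <EGL/egl.h>

    // TextureManager: GL/EGL mechanics only, no notion of buffer counts.
    class TextureManager {
    public:
        int loadTexture(unsigned texName, const void* pixels,
                        int width, int height);          // glTexImage2D path
        int initEglImage(unsigned texName, EGLDisplay dpy,
                         EGLClientBuffer buffer);        // EGLImageKHR path
    };

    // BufferManager: owns a Layer's buffer set and picks the texture path.
    class BufferManager {
    public:
        int setBufferCount(int count);   // callers never see the count
        int loadTexture();               // dispatches to one path or the other
    };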
Change-Id: I2da1ddcf27758e6731400f6cc4e20bef35c0a39a
This hack was used for GPUs that don't support cached buffers
for s/w clients. Currently we have no GPU with this issue.
Removing it eliminates quite a bit of complexity.
Change-Id: I72564669f124f92805030e61983711f61c76b6d9
Get rid of the "fake RTTI" code and use polymorphism instead.
Also simplify how we log SF's state (using polymorphism).
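For instance (an illustrative sketch), dumping state becomes a virtual call instead of a switch on a hand-rolled type id:

    #include <string>

    // Before: call sites did "switch (layer->getType())".
    // After: each subclass knows how to describe itself.
    class LayerBase {
    public:
        virtual ~LayerBase() {}
        virtual const char* getTypeId() const { return "LayerBase"; }
        virtual void dump(std::string& result) const {
            result += getTypeId();
        }
    };

    class LayerBuffer : public LayerBase {
    public:
        virtual const char* getTypeId() const { return "LayerBuffer"; }
        virtual void dump(std::string& result) const {
            LayerBase::dump(result);
            // ... append LayerBuffer-specific state ...
        }
    };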
Change-Id: I2bae7c98de4dd207a3e2b00083fa3fde7c467922
Use EGLImageKHR instead of copybit directly.
We now have the basis to use streaming YUV textures (well, in fact
we already are). When/if we use the GPU instead of the MDP we'll
need to make sure it supports the appropriate YUV format.
Also make sure we compile if EGL_ANDROID_image_native_buffer is not supported.
Instead of using glTex{Sub}Image2D() to refresh the textures, we're using an EGLImageKHR object
backed by a gralloc buffer. The data is updated using memcpy(). This is faster than
glTex{Sub}Image2D() because the texture is not swizzled. It also uses less memory because
EGLImageKHR is not limited to power-of-two dimensions.
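The binding itself is the standard EGL_ANDROID_image_native_buffer recipe (sketch; extension entry points are normally resolved with eglGetProcAddress, and error handling is elided):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES/gl.h>
    #include <GLES/glext.h>

    // Wrap the gralloc buffer once; afterwards, a memcpy() into the
    // buffer's mapping updates the texture with no GL upload at all.
    void bindNativeBuffer(EGLDisplay dpy, EGLClientBuffer nativeBuffer,
                          GLuint texName) {
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, NULL);
        glBindTexture(GL_TEXTURE_2D, texName);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
    }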
A window is created and the browser is about to render into it for the
very first time; at that point it does an IPC to SF to request a new
buffer. Meanwhile, the window manager removes that window from the
list, and the shared memory block it uses is marked as invalid.
However, at that point, another window is created and is given the
same index (which was just freed) but a different identity, and it resets
the "invalid" bit in the shared block. When we go back to the buffer
allocation code, we're stuck because the surface we're allocating for
is gone, and we don't detect that it's invalid because the invalid bit
has been reset.
It is not sufficient to check for the invalid bit; we must
also check that the identities match.
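In sketch form (hypothetical field names; the real state lives in the shared control block):

    #include <stdint.h>

    // Hypothetical layout for illustration only.
    struct SharedBlockSketch {
        bool    invalid;    // may be reset when a new surface reuses the slot
        int32_t identity;   // unique per surface, never reused
    };

    static bool surfaceStillValid(const SharedBlockSketch& s,
                                  int32_t expectedIdentity) {
        // The invalid bit alone races with slot reuse (see above); the
        // identity comparison catches that case.
        return !s.invalid && s.identity == expectedIdentity;
    }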
When the EGLImage extension is not available, SurfaceFlinger will fall back to using
glTexImage2D and glTexSubImage2D instead, which requires 50% more memory and an
extra copy. However, this code path has never been exercised and had some bugs,
which this patch fixes.
Mainly the scale factor wasn't computed right when falling back on glDrawElements.
We also fall back to this mode of operation if a buffer doesn't have the
usage bits required for EGLImage.
This changes only code that is currently not executed. Some refactoring was needed to
keep the change clean. This doesn't change anything functionally.
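The path selection amounts to this (sketch with hypothetical helper names; GRALLOC_USAGE_HW_TEXTURE is the real gralloc flag meaning the GPU can sample from the buffer):

    void updateTexture(const GraphicBuffer* buffer, bool haveEglImageExt) {
        if (haveEglImageExt && (buffer->usage & GRALLOC_USAGE_HW_TEXTURE)) {
            bindAsEglImage(buffer);         // zero-copy EGLImage path
        } else {
            uploadWithTexImage2D(buffer);   // extra copy, POT-padded texture
        }
    }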
The ANR is caused by SurfaceFlinger waiting for buffers of a removed surface to become available.
When a Surface is removed from the current list, it is marked as NO_INIT, which causes SF to return
immediately in the above case. For some reason, the surface here wasn't marked as NO_INIT.
This change makes the code more robust by always (regardless of errors) setting the NO_INIT status
in all code paths where a surface is removed from the list.
Additionally, more information was added to the logs, should this happen again.
Rewrote SurfaceFlinger's buffer management from the ground-up.
The design now supports an arbitrary number of buffers per surface; however, the current implementation is limited to four, and only two buffers are used in practice.
The main new feature is the ability to dequeue all buffers at once (very important when there are only two).
A client can dequeue buffers until none are available, and it can lock all buffers except the last one, which is in use for composition. The client will then block until a new buffer is enqueued.
The current implementation requires that buffers are locked in the same order they are dequeued and enqueued in the same order they are locked. Only one buffer can be locked at a time.
e.g. Allowed sequence: DQ, DQ, LOCK, Q, LOCK, Q
e.g. Forbidden sequence: DQ, DQ, LOCK, LOCK, Q, Q
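In terms of the client-side hooks, the allowed sequence looks like this (sketch; error handling elided):

    android_native_buffer_t *a, *b;
    win->dequeueBuffer(win, &a);   // DQ
    win->dequeueBuffer(win, &b);   // DQ   -- all buffers may be out at once
    win->lockBuffer(win, a);       // LOCK -- must follow dequeue order
    // ... render into a ...
    win->queueBuffer(win, a);      // Q    -- must follow lock order
    win->lockBuffer(win, b);       // LOCK -- only one buffer locked at a time
    // ... render into b ...
    win->queueBuffer(win, b);      // Q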
This change makes SurfaceHolder.setType(GPU) obsolete (it's now ignored).
Added an API to android_native_window_t to allow extending the functionality without ever breaking binary compatibility. This is used to implement the new set_usage() API. This API needs to be called by software renderers because the default is to use usage flags suitable for h/w.
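In practice a s/w renderer opts in like this (sketch; the inline wrapper routes through the new generic perform() hook, so the struct layout never has to change):

    // The default usage flags are tuned for h/w rendering and may map to
    // uncached or write-combined memory; s/w clients must ask for CPU access.
    native_window_set_usage(win,
            GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN);
    // roughly equivalent to:
    //     win->perform(win, NATIVE_WINDOW_SET_USAGE, usage);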
First, the window manager tells us when a surface is no longer needed. At this point, several things happen:
- the surface is removed from the active/visible list
- it is added to a purgatory list, where it waits for all clients to release their reference
- it destroys all data/state that can be spared
Later, when all clients are done, the remains of the Surface are disposed of: it is removed from the purgatory and destroyed.
In particular its gralloc buffers are destroyed at that point (when we're sure nobody is using them anymore).
Surfaces are now destroyed once all references from the clients are gone, but they go through a partial destruction as soon as the window manager requests it.
This last part is still buggy; see the comments in SurfaceFlinger::destroySurface().
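In sketch form (illustrative names around Android's sp<> strong pointers; the real logic lives in SurfaceFlinger::destroySurface()):

    // Window manager says the surface is no longer needed:
    void removeSurface(const sp<LayerBaseClient>& layer) {
        mVisibleLayers.remove(layer);  // stop compositing it
        mPurgatory.add(layer);         // keep one strong ref for late clients
        layer->ditch();                // free everything that can be spared
    }

    // The last client reference is gone:
    void finishDestroy(const sp<LayerBaseClient>& layer) {
        mPurgatory.remove(layer);      // the last sp<> drops here, so the
    }                                  // gralloc buffers are finally freed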