If the virtual display surface is being consumed by the CPU, it can't
use HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED, since there is no way for
the CPU consumer to find out which format gralloc chose. So
for CPU-consumer surfaces, just use the BufferQueue's default format,
which can be set by the consumer.
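A minimal sketch of the resulting selection, assuming a hypothetical
helper (only the gralloc usage masks and the pixel format are real):

    #include <hardware/gralloc.h>   // GRALLOC_USAGE_SW_* masks
    #include <system/graphics.h>    // HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED

    // If the sink sets any CPU read/write bits it will map the buffer,
    // and it can't query the format gralloc picked, so fall back to the
    // BufferQueue's default format (which the consumer can set).
    static uint32_t chooseOutputFormat(uint32_t consumerUsage,
                                       uint32_t defaultFormat) {
        if (consumerUsage & (GRALLOC_USAGE_SW_READ_MASK |
                             GRALLOC_USAGE_SW_WRITE_MASK)) {
            return defaultFormat;
        }
        return HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED;
    }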
A better but more invasive change would be to let the consumer require
a certain format (or set of formats?), and disallow the producer from
requesting a different format.
Bug: 11479817
Change-Id: I5b20ee6ac1146550e8799b806e14661d279670c0
When GLES isn't writing to the output buffer directly, request an
implementation-defined format with minimal usage flags, leaving the
format choice up to gralloc. On some hardware this allows HWC to do
format conversions during composition that would otherwise need to be
done (with worse power and/or performance) by the consumer.
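For example, configuring the output surface might look roughly like
this (the two ANativeWindow calls are real; the wrapper is illustrative):

    #include <hardware/gralloc.h>
    #include <system/window.h>

    // With no GLES writer, request an implementation-defined format and
    // only the usage HWC needs, so gralloc is free to pick a layout it
    // can convert from cheaply during composition.
    static int configureHwcOnlyOutput(ANativeWindow* win) {
        int err = native_window_set_buffers_format(win,
                HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED);
        if (err == 0) {
            err = native_window_set_usage(win, GRALLOC_USAGE_HW_COMPOSER);
        }
        return err;
    }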
Bug: 8316155
Change-Id: Iee6ee8404282036f9fd1833067cfe11dbadbf0bf
We weren't dequeueing and setting the output buffer until just before
set(). This didn't allow HWC to make decisions in prepare() based on
the output buffer format, dimensions, etc.
Now we dequeue the output buffer at the beginning of the composition
loop and provide it to HWC in prepare(). In GLES-only rendering, we may
have to cancel the buffer and acquire a new one if GLES requests a
buffer with properties different from the one we already dequeued.
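In outline, the reordered loop looks like this (a standalone model;
none of these types are the real SurfaceFlinger/HWC classes):

    struct Props {
        int w, h, format;
        bool operator!=(const Props& o) const {
            return w != o.w || h != o.h || format != o.format;
        }
    };
    struct Buffer { Props props; };

    Buffer dequeueOutput(Props p) { return Buffer{p}; }
    void cancel(Buffer&) {}

    void composeFrame(Props defaults, bool glesOnly, Props glesWants) {
        Buffer out = dequeueOutput(defaults);  // dequeue at frame start...
        // ...so hwc.prepare() can see the output format and dimensions.
        if (glesOnly && glesWants != out.props) {
            cancel(out);                       // wrong properties for GLES
            out = dequeueOutput(glesWants);    // re-dequeue to match
        }
        // hwc.set() then consumes the buffer at the end of the loop.
    }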
Bug: 10365313
Change-Id: I96b4b0a851920e4334ef05080d58097d46467ab8
this means consumers only have access to the consumer end of
the interface. we had a lot of code that assumed consumers
were holding a BufferQueue (i.e.: both ends), so most of
this change is untangling and fixing that.
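after the untangling, a consumer can be built from just the
consumer-side interface, e.g. (the subclass is illustrative):

    #include <gui/ConsumerBase.h>
    #include <gui/IGraphicBufferConsumer.h>

    using namespace android;

    // receiving only IGraphicBufferConsumer means producer-side calls
    // are unreachable by construction.
    class SinkConsumer : public ConsumerBase {
    public:
        explicit SinkConsumer(const sp<IGraphicBufferConsumer>& consumer)
            : ConsumerBase(consumer) { }
    };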
Bug: 9265647
Change-Id: Ic2e2596ee14c7535f51bf26d9a897a0fc036d22c
we can now queue/dequeue a buffer in asynchronous mode by using the
async parameter to these calls. async mode is only specified
with those calls (it is not modal anymore).
as a consequence it can only be specified when the buffer count
is not overridden; an error is returned otherwise.
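usage then looks roughly like this (signature simplified; exact
parameter lists vary across releases):

    #include <gui/IGraphicBufferProducer.h>

    using namespace android;

    // async is now an argument to each call, not queue-wide state.
    status_t dequeueAsync(const sp<IGraphicBufferProducer>& producer,
                          uint32_t w, uint32_t h, uint32_t format,
                          uint32_t usage) {
        int slot = -1;
        sp<Fence> fence;
        // fails with an error if the buffer count was overridden via
        // setBufferCount(), since the queue then can't reserve the
        // extra buffer async mode needs.
        return producer->dequeueBuffer(&slot, &fence, true /*async*/,
                                       w, h, format, usage);
    }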
Change-Id: Ic63f4f96f671cb9d65c4cecbcc192615e09a8b6b
this is the first step of a series of improvements to
BufferQueue. A few things happen in this change:
- setSynchronousMode() goes away as well as the SynchronousModeAllowed flag
- BufferQueue now defaults to (what used to be) synchronous mode
- a new "controlled by app" flag is passed when creating consumers and producers
those flags are used to put the BufferQueue in a mode where it
will never block if both flags are set. This is achieved by:
- returning an error from dequeueBuffer() if it would block
- making sure a buffer is always available by replacing
the previous buffer with the new one in queueBuffer()
(note: this is similar to what asynchronous mode used to be)
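a standalone model of the resulting policy (stand-in types, not the
real BufferQueue):

    #include <deque>

    struct Slot { int id; };

    // with both sides app-controlled, neither call may ever block:
    struct NonBlockingModel {
        std::deque<Slot> queued;
        size_t maxQueued = 2;

        // dequeueBuffer: report "would block" instead of stalling.
        bool dequeueWouldBlock() const { return queued.size() >= maxQueued; }

        // queueBuffer: when full, the new buffer replaces the most
        // recently queued one, so a free buffer is always available
        // (this mirrors the old asynchronous mode).
        void queue(const Slot& s) {
            if (queued.size() >= maxQueued) queued.back() = s;
            else queued.push_back(s);
        }
    };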
Note: in this change EGL's swap-interval 0 is broken; this will be
fixed in another change.
Change-Id: I691f9507d6e2e158287e3039f2a79a4d4434211d
The previous implementation assumed that the HWC could read and write
the same buffer on frames that involved both GLES and HWC composition.
It turns out some hardware can't do this. The new implementation
maintains a scratch buffer pool to use on these mixed frames, but on
GLES-only or HWC-only frames still does composition directly into the
output buffer.
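Per frame, the target selection reduces to something like this
(illustrative names and stand-in types):

    enum class Composition { Gles, Hwc, Mixed };

    struct Buffer { /* stand-in for the real buffer handle */ };

    // On mixed frames GLES renders into a scratch buffer that HWC then
    // reads while writing the real output buffer; on GLES-only frames
    // GLES composes straight into the output buffer.
    Buffer* chooseGlesTarget(Composition type, Buffer* output,
                             Buffer* scratch) {
        return (type == Composition::Mixed) ? scratch : output;
    }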
Bug: 8384764
Change-Id: I7a3addb34fad9bfcbdabbb8b635083e10223df69
If we're using a HWC that doesn't support virtual displays, or we have
more virtual displays than HWC supports concurrently, the
VirtualDisplaySurface should simply be a passthrough from source
(GLES) to sink.
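The fallback decision amounts to (illustrative names):

    // Use HWC for a virtual display only if the HAL supports virtual
    // displays at all and has a free slot; otherwise the
    // VirtualDisplaySurface degenerates to a GLES -> sink pass-through.
    static bool useHwcVirtualDisplay(bool hwcHasVirtualDisplays,
                                     int activeVirtualDisplays,
                                     int maxHwcVirtualDisplays) {
        return hwcHasVirtualDisplays &&
               activeVirtualDisplays < maxHwcVirtualDisplays;
    }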
This change also tries to distinguish between display types and HWC
display IDs a little better, though there's more to do here. Probably
needs a higher-level rethink; it's too error-prone now.
Bug: 8446838
Change-Id: I708d2cf262ec30177042304f174ca5b8da701df1
HWComposer didn't allow the virtual display output buffer to be set
directly; instead, it always used the framebuffer target buffer.
DisplayDevice was only providing the framebuffer release fence to
DisplaySurfaces after a commit.
This change fixes both of these, so both HWComposer and DisplayDevice
should continue to work if VirtualDisplaySurface changes to use
separate framebuffer and output buffers. It's also more correct since
VirtualDisplaySurface uses the correct release fence when queueing the
buffer to the sink.
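In sketch form (stand-in types; method names only approximate the real
HWComposer/DisplaySurface interfaces):

    struct Fence {};
    struct Buffer {};

    struct HwcModel {
        // new: the output buffer can be set directly, independent of
        // the framebuffer target buffer.
        void setOutputBuffer(int display, const Buffer&, const Fence&) {}
        Fence outputReleaseFence(int display) { return Fence{}; }
    };

    struct SurfaceModel {
        // after commit, queue to the sink with the *output buffer's*
        // release fence rather than the framebuffer target's.
        void onFrameCommitted(const Fence& outputRelease) {}
    };

    void afterCommit(HwcModel& hwc, SurfaceModel& surface, int display) {
        surface.onFrameCommitted(hwc.outputReleaseFence(display));
    }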
Bug: 8384764
Change-Id: I95c71e8d4f67705e23f122259ec8dd5dbce70dcf
Previously we only queued a virtual display buffer to the sink when
the next frame was about to be displayed. This may delay the "last"
frame of an animation indefinitely. Now we queue the buffer as soon as
HWC set() returns and gives us the release fence.
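The timing change in outline (stand-in types):

    struct Fence {};
    struct Buffer {};
    struct Sink { void queueBuffer(const Buffer&, const Fence&) {} };

    // old: the buffer sat until the next frame kicked off composition.
    // new: hand it to the sink the moment set() yields the fence.
    void onHwcSetDone(Sink& sink, const Buffer& buf, const Fence& release) {
        sink.queueBuffer(buf, release);
    }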
Bug: 8384764
Change-Id: I3844a188e0f6ef6ff28f3e11477cfa063a924b1a
BufferQueueInterposer allows a client to tap into an
IGraphicBufferProducer-based buffer queue, and modify buffers as they
pass from producer to consumer. VirtualDisplaySurface uses this to
layer HWC composition on top of GLES composition before passing the
buffer to the virtual display consumer.
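Conceptually (a standalone model; the real class sits on the
IGraphicBufferProducer path):

    #include <functional>
    #include <queue>
    #include <utility>

    // Buffers pass through modify() on their way from producer to the
    // real consumer.
    template <typename Buf>
    struct InterposerModel {
        std::queue<Buf> toConsumer;
        std::function<void(Buf&)> modify;   // e.g. add HWC composition

        // called where the producer would normally queue to the consumer:
        void queueFromProducer(Buf b) {
            if (modify) modify(b);          // tap into the stream
            toConsumer.push(std::move(b));  // then forward to the sink
        }
    };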
Bug: 8384764
Change-Id: I61ae54f3d90de6a35f4f02bb5e64e7cc88e1cb83
DisplayDevice now has a DisplaySurface instead of using
FramebufferSurface directly. FramebufferSurface implements
DisplaySurface, and so does the new VirtualDisplaySurface class.
DisplayDevice now always has a surface, not just for virtual displays.
In this change VirtualDisplaySurface is just a stub; buffers still go
directly from GLES to the final consumer.
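The shape of the abstraction (simplified; the real DisplaySurface
interface has more methods):

    // Both concrete surfaces plug into DisplayDevice the same way.
    class DisplaySurfaceSketch {
    public:
        virtual ~DisplaySurfaceSketch() {}
        virtual void onFrameCommitted() = 0;   // fence/buffer bookkeeping
    };

    class FramebufferSurfaceSketch : public DisplaySurfaceSketch {
        void onFrameCommitted() override { /* release HWC fb buffer */ }
    };

    class VirtualDisplaySurfaceSketch : public DisplaySurfaceSketch {
        void onFrameCommitted() override { /* stub for now */ }
    };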
Bug: 8384764
Change-Id: I57cb668edbc6c37bfebda90b9222d435bf589f37