Refactored the code to eliminate potential deadlocks due to re-entrant
calls from the policy into the dispatcher. Also added some plumbing
that will be used to notify the framework about ANRs.
Change-Id: Iba7a10de0cb3c56cd7520d6ce716db52fdcc94ff
The old dispatch mechanism has been left in place and continues to
be used by default for now. To enable native input dispatch,
edit the ENABLE_NATIVE_DISPATCH constant in WindowManagerPolicy.
Includes part of the new input event NDK API. Some details TBD.
To wire up input dispatch, as the ViewRoot adds a window to the
window session it receives an InputChannel object as an output
argument. The InputChannel encapsulates the file descriptors for a
shared memory region and two pipe end-points. The ViewRoot then
provides the InputChannel to the InputQueue. Behind the
scenes, InputQueue simply attaches handlers to the native PollLoop object
that underlies the MessageQueue. This way MessageQueue doesn't need
to know anything about input dispatch per se; it just exposes (in native
code) a PollLoop that other components can use to monitor file descriptor
state changes.
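As a rough illustration (hypothetical names, not the actual PollLoop API), the
idea of letting components watch file descriptors through a shared poll loop
looks roughly like this:

    // Hypothetical sketch of a poll loop that watches fds and invokes
    // callbacks when they become readable; the real PollLoop differs.
    #include <poll.h>
    #include <functional>
    #include <map>
    #include <vector>

    class SimplePollLoop {
    public:
        using Callback = std::function<void(int fd)>;

        // Register a callback to run when 'fd' becomes readable.
        void setCallback(int fd, Callback cb) { mCallbacks[fd] = std::move(cb); }

        // Wait up to timeoutMillis for any registered fd to become
        // readable, then invoke the matching callbacks.
        void pollOnce(int timeoutMillis) {
            std::vector<pollfd> fds;
            for (auto& e : mCallbacks)
                fds.push_back({ e.first, POLLIN, 0 });
            if (poll(fds.data(), fds.size(), timeoutMillis) <= 0) return;
            for (auto& p : fds)
                if (p.revents & POLLIN) mCallbacks[p.fd](p.fd);
        }

    private:
        std::map<int, Callback> mCallbacks;
    };

InputQueue's handler for the InputChannel receive fd would be one such callback.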
There can be zero or more targets for any given input event. Each
input target is specified by its input channel and some parameters
including flags, an X/Y coordinate offset, and the dispatch timeout.
An input target can request either synchronous dispatch (for foreground apps)
or asynchronous dispatch (fire-and-forget for wallpapers and "outside"
targets). Currently, finding the appropriate input targets for an event
requires a call back into the WindowManagerServer from native code.
In the future this will be refactored to avoid most of these callbacks
except as required to handle pending focus transitions.
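For illustration only, the per-target parameters described above amount to
something like the following sketch (all names are made up, not the real
dispatcher structures):

    #include <stdint.h>

    // Hypothetical sketch of the per-target dispatch parameters; the real
    // dispatcher structures are different.
    struct InputTargetSketch {
        int      channelFd;      // receive end of the target's InputChannel
        uint32_t flags;          // e.g. synchronous vs. asynchronous dispatch
        float    xOffset;        // offset applied to event X coordinates
        float    yOffset;        // offset applied to event Y coordinates
        int64_t  timeoutNanos;   // dispatch timeout used for ANR detection
    };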
End-to-end event dispatch mostly works!
To do: event injection, rate limiting, ANRs, testing, optimization, etc.
Change-Id: I8c36b2b9e0a2d27392040ecda0f51b636456de25
Surfaces can now be parcelized and sent to remote
processes. When a surface crosses a process
boundary, it loses its connection with the
current process and gets attached to the new one.
Change-Id: I39c7b055bcd3ea1162ef2718d3d4b866bf7c81c0
Opaque 32-bit windows are now allocated as RGBX_8888 buffers, and
SurfaceFlinger always uses GL_MODULATE instead of trying to
optimize to GL_REPLACE when possible (which makes no sense on
h/w-accelerated GL).
We still have a small hack for devices that don't support
RGBX_8888 in their gralloc implementation, where we revert to
RGBA_8888.
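The selection boils down to something like this sketch (function and enum
names are made up for illustration):

    // Hypothetical sketch of the format choice described above.
    enum PixelFormatSketch { RGBA_8888_FMT, RGBX_8888_FMT };

    PixelFormatSketch pickFormatForOpaque32BitWindow(bool grallocSupportsRgbx) {
        // Prefer RGBX_8888 for opaque windows; revert to RGBA_8888 on
        // devices whose gralloc can't allocate RGBX_8888.
        return grallocSupportsRgbx ? RGBX_8888_FMT : RGBA_8888_FMT;
    }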
The framebuffer implementation doesn't do anything special with this,
but the SurfaceFlinger implementation makes sure the surface is not used
by two APIs simultaneously.
Change-Id: Id4ca8ef7093d68846abc2ac814327cc40a64b66b
We've gotten lucky to date: the previous calculation of bitmask array
sizes, (maxval+1)/8 only works properly when 'maxval' is one less than
a multiple of 8. Fortunately, this has either been the case for us,
or there has been sufficient 'unused' space at the end of the defined
max value range that we haven't wound up overreading/overwriting the
allocated buffers.
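In other words, the byte count has to round up. A sketch of the corrected
arithmetic (the helper name is made up):

    #include <stddef.h>

    // Bytes needed to hold bits 0..maxval inclusive: round up instead of
    // truncating. (Sketch; the real code may express this differently.)
    static inline size_t bitmaskBytes(size_t maxval) {
        return (maxval + 1 + 7) / 8;   // == ceil((maxval + 1) / 8)
    }
    // e.g. bitmaskBytes(10) == 2, whereas (10 + 1) / 8 == 1 is one byte short.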
Change-Id: I563a93a86644ab9f19489565e06c28e06bb53abc
We now only consider a device to be a default keyboard if its name
contains "-keypad". A hack, but whatever.
Also add some debug logging for the input state to help identify such
issues in the future.
Add a Flattenable interface to libutils which can be used to flatten
an object into a byte stream plus a file-descriptor stream.
Parcel is modified to handle Flattenable, and GraphicBuffer implements
Flattenable.
Except for the overlay classes, libui is now independent of libbinder.
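Roughly, the contract looks like the following sketch (method names are
illustrative and may not match the real Flattenable.h exactly):

    #include <stddef.h>

    // Illustrative sketch of a "flattenable" contract: an object reports how
    // many bytes and file descriptors it needs, writes itself into
    // caller-provided storage, and can be rebuilt from those streams on the
    // other side of the Binder transaction.
    class FlattenableSketch {
    public:
        virtual ~FlattenableSketch() {}
        virtual size_t getFlattenedSize() const = 0;              // bytes needed
        virtual size_t getFdCount() const = 0;                    // fds needed
        virtual int flatten(void* buffer, size_t size,
                            int fds[], size_t count) const = 0;   // serialize
        virtual int unflatten(const void* buffer, size_t size,
                              const int fds[], size_t count) = 0; // rebuild
    };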
Merge commit '425324e97bba75cd69bb6c81de6248529540e6fe'
* commit '425324e97bba75cd69bb6c81de6248529540e6fe':
Fix failure to open AVRCP input device due to EPERM.
Sleep for 100us and try to open the input device again if it fails, with a
maximum of 10 attempts.
We need the retry logic because setting permissions on a new input device is
racy. The init process watches for new input devices (via uevent) and sets the
permissions on them in devices.c:make_device(). However, at the same time
EventHub.cpp watches for new input devices from the system_server process, and
immediately tries to open them. I can't see a simple way to avoid this race
condition.
As best I can tell, this race condition has always existed.
There must have been some timing change that happened recently that causes us
to hit this race condition much more often. See repro notes in referenced bug.
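The retry itself is simple; something along these lines (a sketch, not the
exact EventHub code):

    #include <fcntl.h>
    #include <unistd.h>

    // Sketch of the retry described above: if open() fails (e.g. EPERM while
    // init is still fixing up permissions), wait 100us and try again, up to
    // 10 attempts in total.
    static int openInputDeviceWithRetry(const char* path) {
        for (int attempt = 0; attempt < 10; attempt++) {
            int fd = open(path, O_RDONLY);
            if (fd >= 0) return fd;
            usleep(100);  // give init a chance to set the device permissions
        }
        return -1;  // all attempts failed
    }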
Bug: 2375632
Surface::validate() could sometimes dereference a null pointer before checking it wasn't null.
This prevents the application from crashing when it is given bad parameters or used incorrectly.
However, the bug above probably has another cause.
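The defensive pattern is just to test the pointer before using it; a trivial
sketch with made-up names:

    // Sketch of the fix pattern: validate the pointer before dereferencing.
    // 'SharedState' and its field are hypothetical.
    struct SharedState { int status; };

    int validateSketch(const SharedState* s) {
        if (s == nullptr) {
            return -1;        // bad parameter: fail instead of crashing
        }
        return s->status;     // only dereference after the null check
    }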
We lost the concept of vertical stride when moving video playback to EGLImage.
Here we bring it back in a somewhat hacky way that will work only for the
softgl/mdp backend.
Use EGLImageKHR instead of copybit directly.
We now have the basis to use streaming YUV textures (well, in fact
we already are). When/if we use the GPU instead of the MDP we'll
need to make sure it supports the appropriate YUV format.
Also make sure we compile if EGL_ANDROID_image_native_buffer is not supported.
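For reference, the EGLImage path amounts to something like this sketch
(assumes EGL_KHR_image_base and EGL_ANDROID_image_native_buffer are present;
error handling trimmed):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES/gl.h>
    #include <GLES/glext.h>

    // Sketch: wrap a native buffer in an EGLImageKHR and bind it to a GL
    // texture, instead of copying through copybit.
    static bool bindNativeBufferAsTexture(EGLDisplay dpy,
                                          EGLClientBuffer nativeBuffer,
                                          GLuint texName) {
        // Extension entry points have to be looked up at runtime.
        PFNEGLCREATEIMAGEKHRPROC createImage =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC imageTargetTexture2D =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
                eglGetProcAddress("glEGLImageTargetTexture2DOES");
        if (!createImage || !imageTargetTexture2D) return false;

        EGLImageKHR image = createImage(dpy, EGL_NO_CONTEXT,
                                        EGL_NATIVE_BUFFER_ANDROID,
                                        nativeBuffer, NULL);
        if (image == EGL_NO_IMAGE_KHR) return false;

        glBindTexture(GL_TEXTURE_2D, texName);
        imageTargetTexture2D(GL_TEXTURE_2D, (GLeglImageOES)image);
        return true;
    }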
When running out of memory, a null handle is returned but the error code may not be set.
In that case we need to return NO_MEMORY instead of NO_ERROR, so that the calling code
won't try to dereference the null pointer.
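Sketch of the fix pattern (names are made up):

    // Hypothetical sketch: translate a null handle into an explicit
    // NO_MEMORY error instead of reporting NO_ERROR with nothing allocated.
    enum StatusSketch { NO_ERROR_S = 0, NO_MEMORY_S = -12 };

    struct BufferHandleSketch;  // opaque

    StatusSketch checkAllocation(const BufferHandleSketch* handle,
                                 StatusSketch err) {
        if (handle == nullptr && err == NO_ERROR_S) {
            return NO_MEMORY_S;  // the allocator failed but didn't say so
        }
        return err;
    }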
This also fixes [2152536] ANR in browser
when buffers are enqueued faster than SurfaceFlinger dequeues them.
The update flag in SF is not counted, so in some situations SF will only
dequeue the first buffer. The state at this point is not technically
corrupted; it's valid, just delayed by one buffer.
In the case of the Browser ANR, because the last enqueued buffer was delayed,
the resizing of the current buffer couldn't happen.
The system would always land back on its feet if anything else tried to
draw, because the "late" buffer would be picked up then.
A window is created and the browser is about to render into it the
very first time, at that point it does an IPC to SF to request a new
buffer. Meanwhile, the window manager removes that window from the
list and the shared memory block it uses is marked as invalid.
However, at that point, another window is created and is given the
same index (which was just freed) but a different identity, and it resets
the "invalid" bit in the shared block. When we go back to the buffer
allocation code, we're stuck because the surface we're allocating for
is gone and we don't detect it's invalid because the invalid bit has
been reset.
It is not sufficient to check the invalid bit; we also need to
check that the identities match.
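In sketch form (field names are hypothetical):

    #include <stdint.h>

    // Sketch of the stronger validity test: the shared-block slot must both
    // be marked valid *and* still belong to the surface we started with.
    struct SharedSlotSketch {
        uint32_t flags;       // includes an "invalid" bit
        int32_t  identity;    // identity of the surface currently owning it
    };
    static const uint32_t INVALID_BIT = 0x1;

    bool slotStillBelongsTo(const SharedSlotSketch& slot, int32_t identity) {
        if (slot.flags & INVALID_BIT) return false;  // explicitly invalidated
        return slot.identity == identity;            // not recycled for a new
                                                     // window at the same index
    }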
When the EGLImage extension is not available, SurfaceFlinger will fall back to using
glTexImage2D and glTexSubImage2D instead, which requires 50% more memory and an
extra copy. However, this code path has never been exercised and had some bugs,
which this patch fixes.
Mainly, the scale factor wasn't computed correctly when falling back on glDrawElements.
We also fall back to this mode of operation if a buffer doesn't have the adequate
usage bits for EGLImage.
This changes only code that is currently not executed. Some refactoring was needed to
keep the change clean. This doesn't change anything functionally.
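For reference, the fallback path boils down to something like this sketch
(simplified, not the actual SurfaceFlinger code):

    #include <GLES/gl.h>

    // Sketch of the non-EGLImage path: allocate a (possibly POT-padded)
    // texture once, then upload the CPU-side pixels with glTexSubImage2D
    // whenever the buffer changes. The extra copy and the padding are where
    // the ~50% memory overhead comes from.
    static void uploadRgba(GLuint tex, GLsizei texW, GLsizei texH,
                           GLsizei bufW, GLsizei bufH, const void* pixels,
                           bool firstTime) {
        glBindTexture(GL_TEXTURE_2D, tex);
        if (firstTime) {
            // Allocate texture storage (may be larger than the buffer).
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        }
        // Copy only the buffer-sized region; texture coordinates must then
        // be scaled by bufW/texW and bufH/texH (the kind of scale factor
        // referred to above).
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bufW, bufH,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }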
The ANR is caused by SurfaceFlinger waiting for buffers of a removed surface to become available.
When it is removed from the current list, a Surface is marked as NO_INIT, which causes SF to return
immediately in the above case. For some reason, the surface here wasn't marked as NO_INIT.
This change makes the code more robust by always (regardless of errors) setting the NO_INIT status
in all code paths where a surface is removed from the list.
Additionally, added more information to the logs, should this happen again.
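Sketch of the invariant (hypothetical names): every removal path must end by
marking the shared state NO_INIT so that waiters bail out instead of blocking.

    #include <stdint.h>

    // Hypothetical sketch: the removal path always marks the surface's shared
    // status as NO_INIT, even if an earlier step failed, so any thread
    // waiting on its buffers returns immediately instead of ANRing.
    enum RemovalStatusSketch { OK_S = 0, NO_INIT_S = -19 };

    struct SharedSurfaceStateSketch { volatile int32_t status; };

    void removeSurfaceSketch(SharedSurfaceStateSketch* shared) {
        // ... detach from the display list, free per-surface resources ...
        // Regardless of whether any of the above failed, mark the surface
        // dead so waiters see NO_INIT instead of blocking forever.
        shared->status = NO_INIT_S;
    }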