Refactored the input reader so that each raw input protocol is handled
by a separate subclass of the new InputMapper type. This way, behaviors
pertaining to keyboards, trackballs, touchscreens, switches and other
devices are clearly distinguished for improved maintainability.
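As a rough illustration of that split (hypothetical names, not the actual
InputReader code), each protocol gets its own mapper subclass:

    // Hypothetical sketch of the per-protocol mapper split described above.
    struct RawEvent;  // raw event as delivered by EventHub (shape assumed)

    class InputMapper {
    public:
        virtual ~InputMapper() { }
        // Each mapper consumes the raw events for one input protocol.
        virtual void process(const RawEvent* rawEvent) = 0;
    };

    class KeyboardInputMapper : public InputMapper {
    public:
        virtual void process(const RawEvent* rawEvent);  // key up/down handling
    };

    class TrackballInputMapper : public InputMapper {
    public:
        virtual void process(const RawEvent* rawEvent);  // relative motion handling
    };

    class TouchInputMapper : public InputMapper {
    public:
        virtual void process(const RawEvent* rawEvent);  // absolute touch handling
    };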
Added partial support for describing capabilities of input devices
(incomplete and untested for now, will be fleshed out in later commits).
Simplified EventHub interface somewhat since InputReader is taking over
more of the work.
Cleaned up some of the interactions between InputManager and
WindowManagerService related to reading input state.
Fixed handling of a finger swiped from the screen edge into the display area.
Added logging of device information to 'dumpsys window'.
Change-Id: I17faffc33e3aec3a0f33f0b37e81a70609378612
Added several new coordinate values to MotionEvents to capture
touch major/minor area, tool major/minor area and orientation.
Renamed NDK input constants per convention.
Added InputDevice class in Java which will eventually provide
useful information about available input devices.
Added APIs for manufacturing new MotionEvent objects with multiple
pointers and all necessary coordinate data.
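As a rough sketch of the per-pointer data involved (field names are
illustrative and may not match the final structure exactly):

    // Hypothetical per-pointer coordinate record showing the new axes.
    struct PointerCoords {
        float x, y;         // position
        float pressure;     // normalized pressure
        float size;         // normalized overall size
        float touchMajor;   // major axis of the contact area
        float touchMinor;   // minor axis of the contact area
        float toolMajor;    // major axis of the approaching tool (e.g. finger)
        float toolMinor;    // minor axis of the approaching tool
        float orientation;  // rotation of the tool relative to vertical
    };

A MotionEvent built with multiple pointers would then carry one such record
per pointer, alongside a pointer id for each.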
Fixed a bug in the input dispatcher where it could get stuck with
a pointer down forever.
Fixed a bug in the WindowManager where the input window list could
end up containing stale removed windows.
Fixed a bug in the WindowManager where the input channel was being
removed only after the final animation transition had taken place,
which caused spurious WINDOW DIED log messages to be printed.
Change-Id: Ie55084da319b20aad29b28a0499b8dd98bb5da68
Sometimes the wrong fd was accessed when the device was addressed
by device id.
The earlier implementation assumed that two arrays were kept in sync,
but one of them was compacted when devices were removed. Instead of
relying on that, each device now keeps track of its own file descriptor.
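Roughly, the shape of the fix (names are illustrative, not the actual
EventHub code):

    #include <stdint.h>
    #include <stddef.h>

    // Each device record now owns its file descriptor, so a lookup by id
    // never depends on two parallel arrays staying in sync.
    struct Device {
        int32_t id;  // stable device id handed out when the device was opened
        int fd;      // file descriptor of this device's /dev/input node
        // ... name, classes, etc.
    };

    static Device* getDeviceById(Device* const* devices, size_t count, int32_t id) {
        for (size_t i = 0; i < count; i++) {
            if (devices[i]->id == id) return devices[i];
        }
        return NULL;
    }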
Change-Id: I2b8a793d76b89ab464ae830482b309fe86031671
The old dispatch mechanism has been left in place and continues to
be used by default for now. To enable native input dispatch,
edit the ENABLE_NATIVE_DISPATCH constant in WindowManagerPolicy.
Includes part of the new input event NDK API. Some details TBD.
To wire up input dispatch: when the ViewRoot adds a window to the
window session, it receives an InputChannel object as an output
argument. The InputChannel encapsulates the file descriptors for a
shared memory region and two pipe end-points. The ViewRoot then
provides the InputChannel to the InputQueue. Behind the
scenes, InputQueue simply attaches handlers to the native PollLoop object
that underlies the MessageQueue. This way MessageQueue doesn't need
to know anything about input dispatch per se; it just exposes (in native
code) a PollLoop that other components can use to monitor file descriptor
state changes.
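In rough terms, the PollLoop side of this looks something like the following
(approximate names and signatures, shown only to illustrate the idea):

    // Components such as InputQueue register a file descriptor plus a
    // callback; the PollLoop invokes the callback when the fd becomes ready.
    typedef bool (*PollCallback)(int fd, int events, void* data);

    class PollLoop {
    public:
        // Monitor 'fd' and invoke 'callback' when it has pending input.
        void setCallback(int fd, PollCallback callback, void* data);
        void removeCallback(int fd);
        // Drives poll() across all registered fds; called from the message loop.
        void pollOnce(int timeoutMillis);
    };

InputQueue would hand the receive end of each InputChannel to a callback
registration of this kind, and MessageQueue itself never needs to know what
the fds are for.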
There can be zero or more targets for any given input event. Each
input target is specified by its input channel and some parameters
including flags, an X/Y coordinate offset, and the dispatch timeout.
An input target can request either synchronous dispatch (for foreground apps)
or asynchronous dispatch (fire-and-forget for wallpapers and "outside"
targets). Currently, finding the appropriate input targets for an event
requires a callback into the WindowManagerService from native code.
In the future this will be refactored to avoid most of these callbacks
except as required to handle pending focus transitions.
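Concretely, an input target amounts to something like the following record
(illustrative names; the real structure may differ):

    #include <stdint.h>

    struct InputChannel;  // wraps the shared memory fd and the two pipe fds

    struct InputTarget {
        InputChannel* inputChannel; // channel the event should be sent through
        int32_t flags;              // e.g. synchronous vs. asynchronous dispatch
        float xOffset, yOffset;     // offset added to the event's X/Y coordinates
        int64_t timeoutNanos;       // how long to wait before considering an ANR
    };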
End-to-end event dispatch mostly works!
To do: event injection, rate limiting, ANRs, testing, optimization, etc.
Change-Id: I8c36b2b9e0a2d27392040ecda0f51b636456de25
We've gotten lucky to date: the previous calculation of bitmask array
sizes, (maxval+1)/8, only works properly when 'maxval' is one less than
a multiple of 8. Fortunately, this has either been the case for us,
or there has been sufficient 'unused' space at the end of the defined
max value range that we haven't wound up overreading/overwriting the
allocated buffers.
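To make the arithmetic concrete (a sketch, not necessarily the exact helper
used in the fix): a bitmask covering values 0..maxval needs maxval+1 bits,
rounded up to whole bytes.

    #include <stdio.h>

    // Bytes needed for a bitmask representing values 0..maxval.
    static inline size_t bit_array_size(int maxval) {
        return (maxval + 8) / 8;   // == ((maxval + 1) + 7) / 8, rounded up
    }

    int main() {
        // maxval = 15 (one less than a multiple of 8): both formulas agree.
        printf("%zu vs %zu\n", (size_t)((15 + 1) / 8), bit_array_size(15)); // 2 vs 2
        // maxval = 16: the old (maxval+1)/8 undercounts by a byte.
        printf("%zu vs %zu\n", (size_t)((16 + 1) / 8), bit_array_size(16)); // 2 vs 3
        return 0;
    }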
Change-Id: I563a93a86644ab9f19489565e06c28e06bb53abc
We now only consider a device to be a default keyboard if its name
has "-keypad". A hack, but whatever.
Also add some debug logging for the input state to help identify such
issues in the future.
If opening an input device fails, sleep for 100us and try again, up to
a maximum of 10 attempts.
We need the retry logic because setting permissions on a new input device is
racy. The init process watches for new input devices (via uevent) and sets
the permissions on them in devices.c:make_device(). However, at the same time
EventHub.cpp watches for new input devices from the system_server process and
immediately tries to open them. I can't see a simple way to avoid this race
condition.
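A sketch of that retry (constants taken from this description; the actual
EventHub code may differ in details such as open flags and error handling):

    #include <fcntl.h>
    #include <unistd.h>

    static int openInputDeviceWithRetry(const char* devicePath) {
        static const int MAX_ATTEMPTS = 10;
        static const int RETRY_DELAY_US = 100;  // 100 microseconds between tries
        int fd = -1;
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            fd = open(devicePath, O_RDWR);
            if (fd >= 0) break;       // opened successfully
            usleep(RETRY_DELAY_US);   // give init a chance to fix permissions
        }
        return fd;  // still -1 if every attempt failed
    }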
As best as I can tell, this race condition has always existed.
Some recent timing change must be causing us to hit it much more often.
See the repro notes in the referenced bug.
Bug: 2375632
This addresses a few parts of the bug:
- There was a small issue in the window manager where we could show a window
too early before the transition animation starts, which was introduced
by the recent wallpaper work. This was the cause of the flicker when
starting the dialer for the first time.
- There was a much larger problem that has existed forever where moving
an application token to the front or back was not synchronized with the
application animation transaction. This was the cause of the flicker
when hanging up (now that the in-call screen moves to the back instead
of closing and we always have a wallpaper visible). The approach to
solving this is to have the window manager go ahead and move the app
tokens (it must in order to keep in sync with the activity manager), but
to delay the actual window movement: perform the movement to front when
the animation starts, and to back when it ends. Actually, when the
animation ends, we just go and completely rebuild the window list to
ensure it is correct, because there are ways windows can be added while
in this intermediate state that would leave them in the wrong place once
we do the delayed movement to the front or back. And it is simply
reassuring to know that every time we finish a full app transition,
we re-evaluate the world and put everything in its proper place.
Also included in this change are a few little tweaks to the input system,
to perform better logging, and completely ignore input devices that do not
have any of our input classes. There is also a little cleanup of evaluating
configuration changes to not do more work than needed when an input
device appears or disappears, and to only log a config change message when
the config is truly changing.
Change-Id: Ifb2db77f8867435121722a6abeb946ec7c3ea9d3
The major things going on here:
- The MotionEvent API is now extended to include "pointer ID" information, for
applications to keep track of individual fingers as they move up and down.
PointerLocation has been updated to take advantage of this.
- The input system now has logic to generate MotionEvents with the new ID
information, synthesizing an identifier as new pointers go down and trying to
keep pointer ids consistent across events by looking at the distance between
the last and next set of pointers (see the sketch after this list).
- We now support the new multitouch driver protocol, and will use that instead
of the old one if it is available. We do NOT use any finger id information
coming from the driver, but always synthesize pointer ids in user space.
(This is simply because we don't yet have a driver reporting this information
on which to base an implementation.)
- Increase maximum number of fingers to 10. This code has only been used
with a driver that reports up to 2, so no idea how more will actually work.
- Oh and the input system can now detect and report physical DPAD devices.
This will be used to avoid unnecessarily listening to data from sensors
that function as event devices.
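A sketch of the id-matching idea mentioned above (hypothetical names; the
real heuristic in the input reader is more involved):

    #include <stdint.h>

    struct Pointer { int32_t id; float x, y; };

    // Keep pointer ids stable across events by giving each new pointer the id
    // of the nearest unmatched pointer from the previous event; brand new
    // pointers get freshly synthesized ids.
    static void assignPointerIds(const Pointer* last, int lastCount,
                                 Pointer* next, int nextCount,
                                 int32_t* nextFreeId) {
        bool used[16] = { false };  // previous pointers already matched (<= 16)
        for (int i = 0; i < nextCount; i++) {
            int best = -1;
            float bestDist = 0;
            for (int j = 0; j < lastCount; j++) {
                if (used[j]) continue;
                float dx = next[i].x - last[j].x;
                float dy = next[i].y - last[j].y;
                float dist = dx * dx + dy * dy;  // squared distance is enough
                if (best < 0 || dist < bestDist) { bestDist = dist; best = j; }
            }
            if (best >= 0) {
                next[i].id = last[best].id;  // reuse the nearest previous id
                used[best] = true;
            } else {
                next[i].id = (*nextFreeId)++;  // a new finger: synthesize an id
            }
        }
    }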
Signed-off-by: Mike Lockwood <lockwood@android.com>
The kernel can now publish a property describing the layout of virtual
hardware buttons on the touchscreen. These lie outside of the display
area (outside of the absolute x and y controller range the driver
reports), and when the user presses one of them a key event will be
generated rather than a touch event.
This also includes a number of tweaks to the absolute controller
processing to make things work better on the new screens. For
example, we now reject down events outside of the display area.
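As a sketch of how such a press might be classified (illustrative only; the
layout property format itself is not shown here):

    #include <stdint.h>

    // One virtual hardware button, positioned in raw touch coordinates
    // outside the display portion of the controller's range.
    struct VirtualKey {
        int32_t keyCode;           // key to report when this area is pressed
        int32_t centerX, centerY;  // center of the button
        int32_t width, height;     // size of the button
    };

    // Returns the key code of the virtual key hit by (x, y), or 0 if none.
    // A down that is outside the display area and not over any virtual key
    // is simply rejected.
    static int32_t findVirtualKeyHit(const VirtualKey* keys, int count,
                                     int32_t x, int32_t y) {
        for (int i = 0; i < count; i++) {
            const VirtualKey& k = keys[i];
            if (x >= k.centerX - k.width / 2 && x <= k.centerX + k.width / 2 &&
                y >= k.centerY - k.height / 2 && y <= k.centerY + k.height / 2) {
                return k.keyCode;
            }
        }
        return 0;
    }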
Still left to be done is the ability to cancel a key down event,
so the user can slide up from the virtual keys to the touch screen
without causing a virtual key to execute.