* changes:
update most GL tests to use EGLUtils
add two EGL helpers for selecting a config matching a certain pixel format or native window type (sketched below)
add NATIVE_WINDOW_FORMAT attribute to android_native_window_t
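A minimal sketch of what a pixel-format-matching config selector can look like, assuming the format is exposed as the config's EGL_NATIVE_VISUAL_ID; the helper name and plumbing here are illustrative, not the actual EGLUtils code:

    #include <stdlib.h>
    #include <EGL/egl.h>

    // Pick the first EGLConfig (among those matching attrs) whose native
    // visual ID equals the requested pixel format.
    static EGLBoolean selectConfigForPixelFormat(EGLDisplay dpy,
                                                 const EGLint* attrs,
                                                 EGLint format,  // e.g. the window's NATIVE_WINDOW_FORMAT
                                                 EGLConfig* outConfig) {
        EGLint numConfigs = 0;
        if (!eglChooseConfig(dpy, attrs, NULL, 0, &numConfigs) || numConfigs <= 0)
            return EGL_FALSE;

        EGLConfig* configs = (EGLConfig*)malloc(numConfigs * sizeof(EGLConfig));
        if (configs == NULL)
            return EGL_FALSE;

        eglChooseConfig(dpy, attrs, configs, numConfigs, &numConfigs);
        for (EGLint i = 0; i < numConfigs; i++) {
            EGLint visualId = 0;
            eglGetConfigAttrib(dpy, configs[i], EGL_NATIVE_VISUAL_ID, &visualId);
            if (visualId == format) {
                *outConfig = configs[i];
                free(configs);
                return EGL_TRUE;
            }
        }
        free(configs);
        return EGL_FALSE;
    }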
The major things going on here:
- The MotionEvent API is now extended to include "pointer ID" information, so
applications can keep track of individual fingers as they move up and down.
PointerLocation has been updated to take advantage of this.
- The input system now has logic to generate MotionEvents with the new ID
information, synthesizing an identifier as new points go down and trying to
keep pointer ids consistent across events by looking at the distance between
the last and next set of pointers (a sketch of the idea follows this list).
- We now support the new multitouch driver protocol, and will use it instead
of the old one when it is available. We do NOT use any finger id information
coming from the driver, but always synthesize pointer ids in user space.
(This is simply because we don't yet have a driver reporting this information
on which to base an implementation.)
- Increase the maximum number of fingers to 10. This code has only been used
with a driver that reports up to 2, so it is unclear how more will behave in practice.
- Oh and the input system can now detect and report physical DPAD devices.
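The distance-based id matching can be pictured as a simple greedy nearest-neighbor assignment; the structures and the matching below are a rough sketch of the idea, not the actual input-system code:

    #include <stdint.h>

    struct Point { int32_t x, y, id; };

    // Give each new point the id of the closest unclaimed previous point;
    // points with no previous match get a fresh id.
    static void assignPointerIds(const Point* last, int lastCount,
                                 Point* next, int nextCount,
                                 int32_t* nextFreeId) {
        bool used[32] = { false };              // assumes lastCount <= 32
        for (int i = 0; i < nextCount; i++) {
            int best = -1;
            int64_t bestDist = 0;
            for (int j = 0; j < lastCount; j++) {
                if (used[j]) continue;
                int64_t dx = next[i].x - last[j].x;
                int64_t dy = next[i].y - last[j].y;
                int64_t d = dx * dx + dy * dy;  // squared distance is enough
                if (best < 0 || d < bestDist) { bestDist = d; best = j; }
            }
            if (best >= 0) {
                next[i].id = last[best].id;     // keep the previous id
                used[best] = true;
            } else {
                next[i].id = (*nextFreeId)++;   // a new finger gets a fresh id
            }
        }
    }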
There were several issues:
- when a surface was made non-current, the last frame wasn't shown and the buffer could stay locked
- when a surface was made current the 2nd time, it would not dequeue a new buffer
Now, queue/dequeue are done when the surface is made current.
For this to work, a new query() hook had to be added to android_native_window_t; it allows retrieving some attributes of a window (currently only width and height).
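Roughly, the shape of such a hook and how a caller uses it; the attribute names and values below are assumptions based on the description above, not the actual header:

    // Sketch: android_native_window_t gains a query() function pointer.
    struct android_native_window_t {
        // ... other fields and hooks elided ...
        int (*query)(struct android_native_window_t* window, int what, int* value);
    };

    enum {
        NATIVE_WINDOW_WIDTH  = 0,   // assumed attribute ids
        NATIVE_WINDOW_HEIGHT = 1,
    };

    // Caller side: retrieve the window dimensions through the hook.
    static int getWindowSize(android_native_window_t* win, int* w, int* h) {
        if (win->query(win, NATIVE_WINDOW_WIDTH, w) != 0) return -1;
        if (win->query(win, NATIVE_WINDOW_HEIGHT, h) != 0) return -1;
        return 0;
    }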
This will be used to avoid unnecessarily listening to data from sensors
that function as event devices.
Signed-off-by: Mike Lockwood <lockwood@android.com>
The kernel can now publish a property describing the layout of virtual
hardware buttons on the touchscreen. These sit outside of the display
area (outside of the absolute x and y controller range the driver
reports), and when the user presses on them a key event will be
generated rather than a touch event.
This also includes a number of tweaks to the absolute controller
processing to make things work better on the new screens. For
example, we now reject down events outside of the display area.
Still left to be done is the ability to cancel a key down event,
so the user can slide up from the virtual keys to the touch screen
without causing a virtual key to execute.
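As an illustration of the mapping, a hit test along these lines decides whether a raw point becomes a key or a touch; the structure and key handling below are assumptions, not the actual property format or input code:

    #include <stdint.h>

    struct VirtualKey {
        int32_t keyCode;             // key to report for this button
        int32_t centerX, centerY;    // in raw absolute controller units
        int32_t width, height;
    };

    // Returns the key code if the raw point lands on a virtual key outside
    // the display range, or -1 if it should be handled as a normal touch.
    static int32_t hitTestVirtualKey(const VirtualKey* keys, int count,
                                     int32_t rawX, int32_t rawY,
                                     int32_t displayMaxX, int32_t displayMaxY) {
        if (rawX <= displayMaxX && rawY <= displayMaxY)
            return -1;               // inside the display area: not a virtual key
        for (int i = 0; i < count; i++) {
            const VirtualKey& k = keys[i];
            if (rawX >= k.centerX - k.width / 2  && rawX <= k.centerX + k.width / 2 &&
                rawY >= k.centerY - k.height / 2 && rawY <= k.centerY + k.height / 2)
                return k.keyCode;
        }
        return -1;                   // outside the display but not on any key
    }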
Merge commit 'c7396025e59524e7ef639fd86fc23123939ee91c'
* commit 'c7396025e59524e7ef639fd86fc23123939ee91c':
Return CAMERA_ERROR_SERVER_DIED to camera app when camera service dies (bug 1956726)
Merge commit 'c44989d6c7bcc761fb37f54fd37aac2070ba8e5e'
* commit 'c44989d6c7bcc761fb37f54fd37aac2070ba8e5e':
move ui/Time.cpp to core/jni, since this is the only place it is used
Merge commit '3d7b8d1aa6a362292f56defbe8fb2d5653f79282'
* commit '3d7b8d1aa6a362292f56defbe8fb2d5653f79282':
Use a ref-counted callback interface for Camera.
This allows the camera service to hang onto the callback interface
until all callbacks have been processed. This prevents problems
where pending callbacks in binder worker threads are processed
after the Java camera object and its associated native resources
have been released.
Bug 1884362
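A sketch of the ownership pattern this describes, using a ref-counted callback interface held with sp<>; the class names are stand-ins, not the real camera classes, and a real implementation would guard the member with a mutex:

    #include <utils/RefBase.h>

    using android::RefBase;
    using android::sp;

    // Ref-counted callback interface: the service keeps a strong reference
    // for as long as a callback may still be delivered, so the client's
    // native resources cannot vanish under a binder worker thread.
    struct CameraCallbacks : public RefBase {
        virtual void onFrame(/* frame data */) = 0;
    };

    class CameraServicePeer {
    public:
        void connect(const sp<CameraCallbacks>& cb) { mCallbacks = cb; }
        void deliverFrame() {
            sp<CameraCallbacks> cb = mCallbacks;   // local strong ref for this call
            if (cb != 0) cb->onFrame();            // safe even if the client disconnects now
        }
        void disconnect() { mCallbacks.clear(); }  // drops the service-side reference
    private:
        sp<CameraCallbacks> mCallbacks;
    };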
- return "const" objects for overloaded operators to disallow constructs like: (a+b) = c;
- don't return references to non-static members, since it's not always safe.
- Point.cpp was empty, so get rid of it
- make sure that all binder Bn classes define a ctor and dtor in their respective library.
This avoids duplication of the ctor/dtor in libraries where these objects are instantiated.
This is also cleaner, should we want these ctor/dtor to do something one day.
- same change as above for some Bp classes and various other non-binder classes
- moved the definition of CHECK_INTERFACE() into IInterface.h instead of having it everywhere.
- improved the CHECK_INTERFACE() macro so it calls a single method in Parcel, instead of inlining its code everywhere
- IBinder::getInterfaceDescriptor() now returns a "const String16&" instead of a String16 by value, which saves a String16 constructor/destructor pair per call
- implemented a cache for BpBinder::getInterfaceDescriptor(), since this does an IPC. HOWEVER, this method never seems to be called.
The cache makes BpBinder bigger, so we need to figure out if we need this method at all.
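A small illustration of the first point in this list; Point2 is a hypothetical value type:

    struct Point2 {
        int x, y;
    };

    // Returning a const value means the temporary produced by "a + b"
    // cannot be assigned to.
    const Point2 operator+(const Point2& a, const Point2& b) {
        Point2 r = { a.x + b.x, a.y + b.y };
        return r;
    }

    // Point2 a, b, c;
    // (a + b) = c;   // now rejected by the compiler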
This is the second half of bug 1837832. Modifies the camera client
and camera service to use the new binder interface. Removes the
old binder interface. There will be one more part to this change
to surface the undefined callbacks to the Java layer so that
partners can implement new features without having to touch the
stack.
This is the first step in a multi-step change to move from the old
specific callbacks to a generic callback. This will allow future
flexibility in the interface without requiring binder rewrites.
Bug 1837832
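For illustration, a generic callback can be as small as a single notification entry point carrying a message type plus arguments; the name and signature below are assumptions, not the actual binder interface:

    #include <stdint.h>

    struct CameraListener {
        virtual ~CameraListener() {}
        // One entry point for all notifications; new message types can be
        // added later without changing the interface itself.
        virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    };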
- Currently the lock/unlock path is naive and is done for each drawing operation (glDrawElements and glDrawArrays). This should be improved eventually.
- factor all the lock/unlock code in SurfaceBuffer.
- fixed "showupdate" so it works even when we don't have preserving eglSwapBuffers().
- improved the situation with the dirty-region and fixed a problem that caused GL apps to not update.
- make use of LightRefBase() where needed, instead of duplicating its implementation (see the sketch after this list)
- add LightRefBase::getStrongCount()
- renamed EGLNativeWindowSurface.cpp to FramebufferNativeWindow.cpp
- disabled copybits test, since it clashes with the new gralloc api
- Camera/Video will be fixed later when we rework the overlay apis
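A sketch of the LightRefBase pattern referred to in this list; the class below is hypothetical:

    #include <utils/RefBase.h>

    using android::LightRefBase;
    using android::sp;

    // Inheriting from LightRefBase<T> provides lightweight incStrong()/
    // decStrong() so the object can be managed with sp<>, instead of
    // hand-rolling a reference counter.
    class BufferHolder : public LightRefBase<BufferHolder> {
    public:
        void lock()   { /* ... */ }
        void unlock() { /* ... */ }
    };

    // Usage:
    //   sp<BufferHolder> h = new BufferHolder();
    //   int refs = h->getStrongCount();   // the accessor added above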
Add a factory method that creates a Camera object from a remote client
Next:
The changes in authordriver.cpp and android_camera_input.cpp will come,
and the constructor for the Camera object will be removed.
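A rough sketch of the factory-from-remote-client idea; every name below is a stand-in rather than an actual camera class:

    #include <utils/RefBase.h>

    using android::RefBase;
    using android::sp;

    struct IRemoteCamera : public RefBase {    // stand-in for the binder proxy
        virtual void connectClient() = 0;
    };

    class CameraWrapper : public RefBase {
    public:
        // Factory: wrap an already-connected remote client instead of
        // constructing and connecting a new one.
        static sp<CameraWrapper> create(const sp<IRemoteCamera>& remote) {
            if (remote == 0) return 0;         // nothing to wrap
            sp<CameraWrapper> c = new CameraWrapper();
            c->mRemote = remote;               // adopt the remote client
            remote->connectClient();           // attach our callbacks, etc.
            return c;
        }
    private:
        CameraWrapper() {}
        sp<IRemoteCamera> mRemote;
    };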
The WindowManager side of Surface.java holds a SurfaceControl, while the client-side holds a Surface. When the client is in the system process, Surface.java holds both (which is a problem we'll try to fix later).
SurfaceControl is used for controlling the geometry of the surface (for the WM), while Surface is used to access the buffers (for SF's clients).
SurfaceFlingerClient now uses the SurfaceID instead of Surface*.
Currently Surface still has the SurfaceControl API and is implemented by calling into SurfaceControl.
To deal with Java's lack of destructors and delayed garbage collection, we used to duplicate Surface.cpp objects in some cases; this caused some issues because Surface is supposed to be reference-counted and unique.
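To make the split concrete, a tiny, purely illustrative sketch of the division of responsibilities (stand-in classes, not the real ones):

    // The window manager side only manipulates geometry.
    class SurfaceControlSketch {
    public:
        void setPosition(int x, int y) { /* forward to SurfaceFlinger */ }
        void setSize(int w, int h)     { /* forward to SurfaceFlinger */ }
        void setLayer(int z)           { /* forward to SurfaceFlinger */ }
    };

    // The client side only deals with buffers.
    class SurfaceSketch {
    public:
        int lockBuffer(void** vaddr)   { *vaddr = 0; return 0; }  // dequeue + map
        int unlockAndPost()            { return 0; }              // queue back to SF
    };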