This is due to a regression introduced by change 24114: when no audio tracks are ready for mixing, 0s are written to the audio hardware. However, this should only happen if tracks have already been mixed since the AudioFlinger thread woke up.
Also do not write 0s to audio hardware in direct output threads when audio format is not linear PCM.
Appears to have been broken by:
commit 9779b221e999583ff89e0dfc40e56398737adbb3
Author: Mathias Agopian <mathias@google.com>
Date: Mon Sep 7 16:32:45 2009 -0700
fix [2068105] implement queueBuffer/lockBuffer/dequeueBuffer properly
For some reason we don't like to have "-lpthread" globally -- it's a no-op
on device builds, but required for many host tools and all sim binaries --
so adding the use of pthread calls requires adding the library explicitly.
AudioFlinger: verify that mCblk is not null before using it in Track and RecordTrack constructors.
IAudioFlinger: check result of remote transaction before reading IAudioTrack and IAudioRecord.
IAudioTrack and IAudioRecord: check result of remote transaction before reading IMemory.
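For illustration, a fragment of the proxy-side pattern this describes (simplified, not the verbatim AOSP code): nothing is read out of the reply Parcel unless the transaction itself succeeded.

    // Fragment of the defensive pattern on the proxy side (simplified).
    sp<IAudioTrack> track;
    Parcel data, reply;
    // ... marshal the createTrack() arguments into 'data' ...
    status_t status = remote()->transact(CREATE_TRACK, data, &reply);
    if (status == NO_ERROR) {
        // only read the binder back if the remote call succeeded
        track = interface_cast<IAudioTrack>(reply.readStrongBinder());
    }
    return track;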
We could have several threads waiting on the condition, and they all need to wake up.
Also added a debug "mTid" field in the class, which contains the tid of the thread (as opposed to the pthread_t); this
is useful when debugging under gdb, for instance.
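The underlying pattern, sketched here with plain pthreads for illustration (the framework code uses its own Condition wrapper): once more than one thread can be blocked on the condition, the waker must broadcast rather than signal, or some waiters stay asleep.

    #include <pthread.h>

    static pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  gCond = PTHREAD_COND_INITIALIZER;
    static int gDone = 0;

    // waiter: several threads may block here at the same time
    static void waitForDone(void)
    {
        pthread_mutex_lock(&gLock);
        while (!gDone)
            pthread_cond_wait(&gCond, &gLock);
        pthread_mutex_unlock(&gLock);
    }

    // waker: wake *all* waiters, not just one
    static void signalDone(void)
    {
        pthread_mutex_lock(&gLock);
        gDone = 1;
        pthread_cond_broadcast(&gCond);   // signal() would wake only one thread
        pthread_mutex_unlock(&gLock);
    }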
We ended up locking a Mutex that had already been destroyed.
This happened because we gave an sp<Source> to the outside world,
and were called after LayerBuffer had been destroyed.
Instead we now give a wp<LayerBuffer> to the outside and have it
do the destruction.
Add a parameter to ToneGenerator.startTone() allowing the caller to specify the tone duration. This is used by the phone application to get precise control over the DTMF tone duration, which was not possible with delayed messages.
Also modified AudioFlinger output threads so that 0s are written to the audio output stream when no more tracks are ready to mix instead of just sleeping. This avoids an issue where the end of a previous DTMF tone could stay in audio hardware buffers and be played just before the beginning of the next DTMF tone.
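A hedged caller-side sketch of the new duration parameter described above (constructor arguments and constant names are from memory and should be treated as assumptions):

    // Play DTMF "1" for exactly 120 ms; the duration is enforced by
    // ToneGenerator itself, so no delayed "stop" message is needed.
    ToneGenerator toneGenerator(AudioSystem::DTMF, 0.8f);
    toneGenerator.startTone(ToneGenerator::TONE_DTMF_1, 120);  // durationMs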
Rewrote SurfaceFlinger's buffer management from the ground-up.
The design now supports an arbitrary number of buffers per surface; however, the current implementation is limited to four, and only two are used in practice.
The main new feature is to be able to dequeue all buffers at once (very important when there are only two).
A client can dequeue buffers until none are available, and it can lock all buffers except the last one, which is used for composition. The client then blocks until a new buffer is queued.
The current implementation requires that buffers are locked in the same order they are dequeued and enqueued in the same order they are locked. Only one buffer can be locked at a time.
e.g. Allowed sequence: DQ, DQ, LOCK, Q, LOCK, Q
e.g. Forbidden sequence: DQ, DQ, LOCK, LOCK, Q, Q
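For illustration, the allowed sequence written against the android_native_window_t hooks named in the commit above (a sketch only; exact hook signatures are assumptions, error handling omitted):

    // Allowed: DQ, DQ, LOCK, Q, LOCK, Q -- buffers are locked in dequeue
    // order and queued in lock order, with at most one buffer locked at a time.
    android_native_buffer_t* a;
    android_native_buffer_t* b;

    win->dequeueBuffer(win, &a);   // DQ
    win->dequeueBuffer(win, &b);   // DQ (all buffers may be dequeued at once)

    win->lockBuffer(win, a);       // LOCK a
    // ... render into a ...
    win->queueBuffer(win, a);      // Q a

    win->lockBuffer(win, b);       // LOCK b
    // ... render into b ...
    win->queueBuffer(win, b);      // Q b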
Do not ramp volume if the first frame of a track is processed after the track was stopped.
In the case of very short sounds, the track stop request can be received by AudioFlinger just after the start request, before the first frame is mixed by AudioMixer. The track is then already in the stopped state when its first frame is processed, and the initial volume is applied with a ramp, which should not be the case: an initial volume change must always be applied immediately.
This addresses a few parts of the bug:
- There was a small issue in the window manager where we could show a window
too early before the transition animation starts, which was introduced
by the recent wallpaper work. This was the cause of the flicker when
starting the dialer for the first time.
- There was a much larger problem that has existed forever where moving
an application token to the front or back was not synchronized with the
application animation transaction. This was the cause of the flicker
when hanging up (now that the in-call screen moves to the back instead
of closing and we always have a wallpaper visible). The approach to
solving this is to have the window manager go ahead and move the app
tokens (it must in order to keep in sync with the activity manager), but
to delay the actual window movement: perform the movement to front when
the animation starts, and to back when it ends. Actually, when the
animation ends, we just go and completely rebuild the window list to
ensure it is correct, because there can be ways people can add windows
while in this intermediate state where they could end up at the wrong
place once we do the delayed movement to the front or back. And it is
simply reassuring to know that every time we finish a full app transition,
we re-evaluate the world and put everything in its proper place.
Also included in this change are a few little tweaks to the input system,
to perform better logging, and completely ignore input devices that do not
have any of our input classes. There is also a little cleanup of evaluating
configuration changes to not do more work than needed when an input
device appears or disappears, and to only log a config change message when
the config is truly changing.
Change-Id: Ifb2db77f8867435121722a6abeb946ec7c3ea9d3
In practice, no one ever writes an apostrophe in an aapt string with the
intent of using it to quote whitespace -- they always mean to include a
literal apostrophe in the string and then are surprised when they find
the apostrophe missing. Make this an error so that it is discovered
right away instead of waiting until late in QA or after the strings have
already been sent for translation. (And fix a recently-introduced string
that has exactly this problem.)
Silence the warning about an empty span in a string, since this seems to
annoy people instead of finding any real problems.
Make the error about having a translated string with no base string into
a warning, since this is a big pain when making changes to an application
that has already had some translations done, and the dead translations
should be removed by a later translation import anyway.
In AudioFlinger::MixerThread::putTracks(), change the mFillingUpStatus flag to FS_FILLING for active tracks so that the mute request is executed without ramping the volume down when the track is moved from A2DP to hardware output.
Also modified AudioFlinger::setStreamOutput() so that the notification of the change is sent only once to AudioSystem.
(in this case the state is dumped without the proper locks held, which could result in a crash)
In addition, the last transaction and swap times are printed to the dump, as well as the time spent
*currently* in these functions. For instance, if SF is unresponsive because eglSwapBuffers() is stuck,
this will show up here.
What happened is that the effective pixel format is calculated by SF, but Surface never had access to it directly.
In particular, this caused query(FORMAT) to return the requested format instead of the effective format.
This would happen if the window is made visible but the client hasn't rendered into it yet. This happens often with SurfaceView.
Instead of filling the window with solid black, SF would simply ignore it, which could lead to more disturbing artifacts.
In theory, the window manager should not display a window before it has been drawn into, but it does happen occasionally.
Apparently the problem is caused by the fact that A2dpAudioStreamOut::standby() calls a2dp_stop() after the headset has been powered down.
The workaround consists in indicating to the A2DP audio hardware that a close request is pending and that standby() must be bypassed.
This change makes SurfaceHolder.setType(GPU) obsolete (it's now ignored).
Added an API to android_native_window_t to allow extending the functionality without ever breaking binary compatibility. This is used to implement the new set_usage() API. This API needs to be called by software renderers because the default is to use usage flags suitable for h/w.
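A hedged sketch of the caller side for a software renderer (the helper and flag names are my recollection of window.h/gralloc.h and should be treated as assumptions):

    // A software renderer overrides the default (hardware-oriented) usage so
    // that dequeued buffers are CPU-mappable; this is implemented on top of
    // the new extension hook, so existing binaries remain compatible.
    native_window_set_usage(window,
            GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN);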
This is because the AudioFlinger duplicating thread is closed while the output tracks are still active. This causes the output track objects to be destroyed at a time when they can still be in use by the destination output mixer.
The fix consists in adding the OutputTrack to the track list (mTracks) of its destination thread so that a strong reference is held while the mixer is processing, and the track is destroyed by the destination thread only when it is safe to do so.
Also added detection of problems when creating the output track (e.g. no more tracks available in the mixer). In this case, the output track is not added to the output track list of the duplicating thread.
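A minimal, generic sketch of the ownership pattern being described (types simplified and hypothetical; the real code uses AudioFlinger's strong-pointer track lists):

    #include <algorithm>
    #include <memory>
    #include <vector>

    struct OutputTrack { /* ... */ };

    struct DestinationThread {
        // Holding a strong reference in the destination thread's track list
        // keeps the OutputTrack alive for as long as its mixer may touch it.
        std::vector<std::shared_ptr<OutputTrack>> mTracks;

        void add(const std::shared_ptr<OutputTrack>& t) { mTracks.push_back(t); }

        // Only the destination thread drops the last reference, i.e. the
        // track is destroyed only when it is safe to do so.
        void remove(const std::shared_ptr<OutputTrack>& t) {
            mTracks.erase(std::remove(mTracks.begin(), mTracks.end(), t),
                          mTracks.end());
        }
    };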
When changing the audio output stream sampling rate with setParameters(), make sure that all tracks have a sampling rate less than or equal to 2 times the new output sampling rate.
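A small sketch of the check being enforced (hypothetical helper, not the AudioFlinger code itself):

    #include <stdint.h>
    #include <vector>

    // The new output rate is only accepted if no attached track would need
    // more than a 2:1 resampling ratio, per the rule above.
    static bool outputRateAcceptable(const std::vector<uint32_t>& trackRates,
                                     uint32_t newOutputRate)
    {
        for (uint32_t rate : trackRates) {
            if (rate > 2 * newOutputRate) return false;
        }
        return true;
    }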
Merge change 7419 from master that may help eliminate the problem.
This change was for a different use case (when disabling A2DP to switch output to SCO) but without a repro case it is worth trying.
The BT headset detection now makes the difference between car kits and headsets, which can be used by audio policy manager.
The headset connection is also detected earlier, that is, when the headset is connected rather than when the SCO socket is connected, as was the case before. This allows the audio policy manager to suspend A2DP output while ringing if a SCO headset is connected.
There was no guarantee that the corresponding thread destructor had already been called when exiting the closeOutput() or closeInput() functions.
This destructor could be called by the thread after the exit condition is signaled. As a consequence, closeOutputStream() could be called after
we had exited the closeOutput() function.
To solve the problem, the call to closeOutputStream() or closeInputStream() is moved to closeOutput() or closeInput().
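A hedged sketch of the resulting shutdown order (member and method names are approximations of the AudioFlinger ones, not verbatim):

    void AudioFlinger::closeOutput(int output)
    {
        sp<PlaybackThread> thread = mPlaybackThreads.valueFor(output);
        mPlaybackThreads.removeItem(output);
        thread->exit();                // ask the thread loop to stop
        thread->requestExitAndWait();  // join: the thread cannot run anymore
        // Close the stream here, after the thread has exited, instead of
        // from the thread destructor whose timing we cannot rely on.
        mAudioHardware->closeOutputStream(thread->getOutput());
    }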
The function checkForNewParameters_l() is called with the ThreadBase mutex mLock locked. In the case where the parameter change implies
an audio parameter modification (e.g. sampling rate), the function sendConfigEvent() is called, which tries to lock mLock, creating a deadlock.
The fix consists in creating a function equivalent to sendConfigEvent() that must be called with mLock locked and does not lock mLock.
Also added the possibility to have more than one set parameter request pending.
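A minimal sketch of the resulting pair of functions, following the framework convention that a trailing _l marks a function that must be called with the lock already held (exact member names assumed):

    // Public entry point: takes mLock, then delegates.
    void AudioFlinger::ThreadBase::sendConfigEvent(int event, int param)
    {
        Mutex::Autolock _l(mLock);
        sendConfigEvent_l(event, param);
    }

    // Variant for callers that already hold mLock, such as
    // checkForNewParameters_l(); it must NOT try to lock mLock again.
    void AudioFlinger::ThreadBase::sendConfigEvent_l(int event, int param)
    {
        ConfigEvent configEvent;
        configEvent.mEvent = event;
        configEvent.mParam = param;
        mConfigEvents.add(configEvent);   // several requests can be pending
        mWaitWorkCV.signal();             // wake the thread loop to process them
    }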
Use integers instead of void* as input/output handles at IAudioFlinger and IAudioPolicyService interfaces.
AudioFlinger maintains an always-increasing count of opened inputs or outputs and uses it as a unique ID.
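A small sketch of the handle scheme (member name hypothetical):

    // Each newly opened output or input gets a fresh integer ID that is
    // never reused, so a stale handle can never be mistaken for a live
    // object the way a recycled pointer could.
    int AudioFlinger::nextUniqueId()
    {
        Mutex::Autolock _l(mLock);
        return ++mNextIoHandle;
    }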
* changes:
update most gl tests to use EGLUtils
added two EGL helpers for selecting a config matching a certain pixel format or native window type (see the sketch after this list)
added NATIVE_WINDOW_FORMAT attribute to android_native_window_t
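For reference, a hedged sketch of what a pixel-format helper of that kind boils down to with standard EGL calls (the RGB565 sizes here are illustrative only; a real helper maps each PixelFormat to its channel sizes):

    #include <EGL/egl.h>

    // Pick an EGLConfig whose color channel sizes exactly match a 16-bit
    // RGB565 window surface.
    EGLConfig selectConfigForRGB565(EGLDisplay dpy)
    {
        const EGLint attribs[] = {
            EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
            EGL_RED_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_BLUE_SIZE, 5,
            EGL_NONE
        };
        EGLConfig configs[32];
        EGLint n = 0;
        eglChooseConfig(dpy, attribs, configs, 32, &n);
        // eglChooseConfig() only guarantees "at least" these sizes, so
        // verify the exact match a 565 window needs.
        for (EGLint i = 0; i < n; i++) {
            EGLint r, g, b;
            eglGetConfigAttrib(dpy, configs[i], EGL_RED_SIZE, &r);
            eglGetConfigAttrib(dpy, configs[i], EGL_GREEN_SIZE, &g);
            eglGetConfigAttrib(dpy, configs[i], EGL_BLUE_SIZE, &b);
            if (r == 5 && g == 6 && b == 5) return configs[i];
        }
        return 0;
    }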
The major things going on here:
- The MotionEvent API is now extended to include "pointer ID" information, for
applications to keep track of individual fingers as they move up and down.
PointerLocation has been updated to take advantage of this.
- The input system now has logic to generate MotionEvents with the new ID
information, synthesizing an identifier as new pointers go down and trying to
keep pointer ids consistent across events by looking at the distance between
the last and next set of pointers (see the sketch after this list).
- We now support the new multitouch driver protocol, and will use that instead
of the old one if it is available. We do NOT use any finger id information
coming from the driver, but always synthesize pointer ids in user space.
(This is simply because we don't yet have a driver reporting this information
on which to base an implementation.)
- Increase maximum number of fingers to 10. This code has only been used
with a driver that reports up to 2, so no idea how more will actually work.
- Oh and the input system can now detect and report physical DPAD devices.
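For the pointer-id synthesis mentioned above, a minimal self-contained sketch of the idea: each new point inherits the id of the nearest unclaimed previous point, and genuinely new points get fresh ids (greedy nearest-neighbour matching; names hypothetical, and the real input-system code is more involved):

    #include <cstdint>
    #include <vector>

    struct Point { float x, y; int32_t id; };   // id == -1 means unassigned

    void assignPointerIds(const std::vector<Point>& prev,
                          std::vector<Point>& cur, int32_t& nextId)
    {
        std::vector<bool> claimed(prev.size(), false);
        for (Point& c : cur) {
            int best = -1;
            float bestDist = 0;
            for (size_t i = 0; i < prev.size(); i++) {
                if (claimed[i]) continue;
                float dx = prev[i].x - c.x, dy = prev[i].y - c.y;
                float d = dx * dx + dy * dy;
                if (best < 0 || d < bestDist) { best = (int)i; bestDist = d; }
            }
            if (best >= 0) { c.id = prev[best].id; claimed[best] = true; }
            else           { c.id = nextId++; }   // a finger that just went down
        }
    }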
If the output stream handle passed was not the A2DP output stream, the request was ignored instead of being forwarded downstream to the hardware interface.
There were several issues:
- when a surface was made non-current, the last frame wasn't shown and the buffer could stay locked
- when a surface was made current the 2nd time, it would not dequeue a new buffer
Now, queue/dequeue are done when the surface is made current.
For this to work, a new query() hook had to be added to android_native_window_t; it allows retrieving some attributes of a window (currently only width and height).
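A hedged sketch of the client side of the new hook (the exact enum spellings are assumptions):

    // Retrieve the current dimensions of a window through the query() hook.
    int w = 0, h = 0;
    win->query(win, NATIVE_WINDOW_WIDTH,  &w);
    win->query(win, NATIVE_WINDOW_HEIGHT, &h);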
The current gralloc allocates buffer memory for render targets that will typically have NPOT dimensions. Assuming that the vendor driver supports converting the resulting NPOT android_native_buffer_t to a NPOT EGLImage, SurfaceFlinger calls glEGLImageTargetTexture2DOES(), and uses glGetError() to test whether the GL can support creating an EGL target texture with the specified NPOT EGLImage. If it is supported, the DIRECT_TEXTURE flag remains set, otherwise it is cleared.
Tangentially, if the driver advertises the GL_ARB_texture_non_power_of_two extension, the NPOT_EXTENSION flag is set, otherwise it is cleared.
If the driver supported creating an EGL target texture from a NPOT source EGLImage, it implicitly creates a NPOT texture. This does not need any glScalef() texture coordinate correction in LayerBase::drawWithOpenGL(). However, the same driver may not advertise the GL_ARB_texture_non_power_of_two extension nor generally support NPOT textures that were not derived from EGLImages. So SurfaceFlinger may flag only DIRECT_TEXTURE, not NPOT_EXTENSION.
Therefore, the test in LayerBase::drawWithOpenGL() should only perform the glScalef() if neither NPOT_EXTENSION nor DIRECT_TEXTURE is flagged. Otherwise, scaling is applied to NPOT EGL target textures when none is required.
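A hedged sketch of the corrected test (the flag and member names are assumptions based on the description above):

    // Only compensate the texture coordinates when the texture really was
    // rounded up to power-of-two sizes, i.e. when we have neither real NPOT
    // support (NPOT_EXTENSION) nor an EGLImage-backed NPOT texture
    // (DIRECT_TEXTURE).
    if (!(flags & (NPOT_EXTENSION | DIRECT_TEXTURE))) {
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glScalef(float(bufWidth)  / potWidth,
                 float(bufHeight) / potHeight, 1.0f);
        glMatrixMode(GL_MODELVIEW);
    }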
It turns out we were not returning the density for anything retrieved from a
TypedArray... which basically means any bitmap references from a layout or style...!!!
This is now fixed.
Also fiddle with the density compatibility mode to turn on smoothing in certain situations,
helping the look of things when they need to scale and we couldn't do the scaling at
load time.
Merge commit '1521cd6e657ba4efa9382ab73d3cbba3bdf50ead'
* commit '1521cd6e657ba4efa9382ab73d3cbba3bdf50ead':
Reset the mDpiX and mDpiY values when qemu.sf.lcd_density is defined.