always rescale videos to their target size using copybit during yuv->rgb
conversion. this improves performance of the GPU pass and doesn't require
linear filtering to be enabled. Also always use 16-bit buffers.
the average processing time for 720p dropped from ~50ms to ~30ms
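As a rough sketch of the idea, assuming the copybit HAL interface declared in hardware/copybit.h (the stretch() entry point and struct fields are taken from that header and may differ slightly); convertAndScale() and SingleRect are made-up helpers, not the real SurfaceFlinger code:

    #include <hardware/copybit.h>

    // A trivial single-rectangle clip region: copybit walks regions through a
    // next() callback.
    struct SingleRect {
        copybit_region_t region;
        copybit_rect_t   rect;
        mutable int      emitted;
    };

    static int nextRect(copybit_region_t const* self, copybit_rect_t* out) {
        SingleRect const* s = reinterpret_cast<SingleRect const*>(self);
        if (s->emitted) return 0;          // no more rectangles
        *out = s->rect;
        s->emitted = 1;
        return 1;
    }

    // Scale the YUV source to the final display size in the same copybit pass
    // that converts it to the 16-bit RGB target, instead of blitting 1:1 and
    // letting the GPU rescale with linear filtering.
    static int convertAndScale(copybit_device_t* dev,
                               copybit_image_t const& src,   // YUV source frame
                               copybit_image_t const& dst)   // 16-bit RGB target
    {
        copybit_rect_t srcRect = { 0, 0, int(src.w), int(src.h) };
        copybit_rect_t dstRect = { 0, 0, int(dst.w), int(dst.h) }; // target size,
                                                                   // not source size
        SingleRect clip;
        clip.region.next = nextRect;
        clip.rect = dstRect;
        clip.emitted = 0;
        return dev->stretch(dev, &dst, &src, &dstRect, &srcRect, &clip.region);
    }
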
The ToneGenerator failed to initialize because no more tracks were available in the AudioFlinger mixer.
All tracks were used because the duplicating output failed to free the tracks on the audio hardware output mixer when exiting, due to a misplaced test on output activity: output tracks were only freed if the duplicating output was active when exiting.
The fix consists in freeing the output tracks unconditionally when the duplicating thread is destroyed.
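A minimal sketch of the shape of the fix, with hypothetical class and member names (the real AudioFlinger duplicating thread is more involved):

    #include <vector>
    #include <cstddef>

    // Hypothetical stand-in for a track on the hardware output mixer.
    struct OutputTrack {
        void destroy() { delete this; }   // release the track on the HW mixer
    };

    class DuplicatingThread {
    public:
        ~DuplicatingThread() {
            // Before the fix this loop was guarded by an "is the output still
            // active?" test, so inactive duplicating outputs leaked their
            // tracks and eventually exhausted the hardware mixer.
            for (size_t i = 0; i < mOutputTracks.size(); i++) {
                mOutputTracks[i]->destroy();   // free unconditionally
            }
            mOutputTracks.clear();
        }
    private:
        std::vector<OutputTrack*> mOutputTracks;   // hypothetical
    };
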
Fixed AudioFlinger::openInput(), broken in change ddb78e7753be03937ad57ce7c3c842c52bdad65e,
so that an invalid IO handle (0) is returned in case of failure.
Applied the same correction to openOutput().
Modified the RecordThread start procedure so that a failure occurring during the first read from the audio input stream is detected and causes
the record start to fail.
Modified RecordThread stop procedure to make sure that audio input stream fd is closed before we exit the stop function.
Fixed the AudioRecord Java and JNI implementation to take the status of native AudioRecord::start() into account
and not change mRecordingState to RECORDSTATE_RECORDING if start fails.
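A sketch of the intended control flow, written in C++ rather than the actual Java/JNI code; the wrapper type and native stub are hypothetical, only the state names come from the commit:

    #include <cstdio>

    typedef int status_t;
    static const status_t NO_ERROR = 0;

    enum RecordingState { RECORDSTATE_STOPPED, RECORDSTATE_RECORDING };

    // Hypothetical stand-in for the native AudioRecord.
    struct NativeAudioRecord { status_t start() { return NO_ERROR; } };

    struct AudioRecordWrapper {
        NativeAudioRecord native_;
        RecordingState    mRecordingState;

        status_t startRecording() {
            status_t status = native_.start();
            if (status == NO_ERROR) {
                // Only flip the state once the native layer actually started.
                mRecordingState = RECORDSTATE_RECORDING;
            } else {
                std::fprintf(stderr, "AudioRecord::start() failed: %d\n", status);
            }
            return status;
        }
    };
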
The image buffer used by glTexImage2D() would be uninitialized when no copybit engine
can be found.
We now always initialize images, since the absence of copybit is not necessarily fatal.
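The gist of the defensive initialization, sketched with placeholder dimensions and a 16-bit RGB565 format (the real code lives in SurfaceFlinger's layer setup):

    #include <GLES/gl.h>
    #include <cstdlib>
    #include <cstring>

    // Allocate and zero-fill the client-side buffer before handing it to GL,
    // so the texture has defined (black) contents even when no copybit engine
    // is available to fill it later.
    static GLvoid* allocTextureStorage(GLsizei w, GLsizei h)
    {
        const size_t size = size_t(w) * size_t(h) * 2;   // 16-bit pixels
        void* pixels = std::malloc(size);
        if (pixels) {
            std::memset(pixels, 0, size);                // always initialized
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                         GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);
        }
        return pixels;
    }
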
There was a bug in the logic that calculated the relative timeout: the start time was
reset each time an event was received, which caused the timeout to never occur if
an application was constantly redrawing.
Now we always check for a timeout when we come back from the waitEvent() and
process the "anti-freeze" if needed, regardless of whether an event was received.
The problem comes from a deadlock on the AudioPolicyService mutex: when the second ringtone starts,
this mutex is locked by AudioPolicyService::startOutput() which in turn calls setParameters() to change the output device.
AudioFlinger::ThreadBase::setParameters() signals the parameter change to the AudioFlinger mixer thread and waits for a condition
indicating that the parameter change has been processed.
At the same time, the mixer thread detects that the audio track corresponding to the first ring tone has been killed and calls its destructor.
This calls AudioPolicyService::releaseOutput() which tries to lock the AudioPolicyService mutex.
If this happens before the mixer thread can process the setParameters() command we are deadlocked.
The deadlock ends because setParameters() uses a timeout when waiting for the condition.
This regression was introduced by change 33736 fixing issue 2265163.
The fix consists in calling AudioPolicyService::releaseOutput() from Track::destroy() instead of from the Track destructor: as destroy() is never called from the mixer thread loop (as opposed to the destructor), the deadlock described above cannot occur.
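The shape of the fix, reduced to a heavily simplified, hypothetical sketch (the real classes and locking are in AudioPolicyService and AudioFlinger):

    #include <mutex>

    struct AudioPolicyService {
        std::mutex mLock;                       // same mutex startOutput() holds
        void releaseOutput(int /*output*/) {
            std::lock_guard<std::mutex> l(mLock);
            // ... output bookkeeping ...
        }
    };

    struct Track {
        AudioPolicyService* mPolicy;
        int mOutput;

        // Notify the policy manager from destroy(), which is never invoked
        // from the mixer thread loop, so it cannot deadlock against a binder
        // thread that holds mLock while waiting on the mixer.
        void destroy() {
            mPolicy->releaseOutput(mOutput);
            delete this;
        }

        // The destructor, which the mixer thread may run, no longer talks to
        // AudioPolicyService.
        ~Track() {}
    };
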
Binary XML file line #37: Error inflating class <unknown> after adding a secondary account
Now that I have these debug logs, I want to keep them since they will make
debugging these kinds of issues a lot easier in the future. (Note in this
case there was no problem in the framework.)
Change-Id: If2b0bbeda4706b7c5dc1ba4a5db04b74f40e1543
This is a second attempt to fix the audio routed to earpiece syndrome.
The root cause identified this time is the crash of an application having an active AudioTrack playing on the VOICE_CALL stream type.
When this happens, the AudioTrack destructor is not called and the audio policy manager is not notified of the track stop.
This results in a situation where the VOICE_CALL stream is considered always in use by the audio policy manager, which causes audio to be routed to the earpiece.
The fix consists in moving the track start/stop/close notification to the audio policy manager from AudioTrack to the AudioFlinger Track object.
The net result is that, in the case of a client application crash, the destructor of the AudioFlinger TrackHandle object (which implements the remote side of the IAudioTrack binder interface) is called; this in turn destroys the Track object, so we can notify the audio policy manager of the track stop and removal.
The same modification is made for AudioRecord although no bug related to record has been reported yet.
Also fixed a potential problem if record stop is called while the record thread is exiting.
since we're using the GPU for composition, don't use a texture for dimming,
instead simply use an alpha-blended quad.
also work around what looks like a GL driver bug by calling glFinish() before
glReadPixels().
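A sketch of the two pieces in plain GLES 1.x calls; the dim level and full-screen geometry are placeholders:

    #include <GLES/gl.h>

    // Draw the dim layer as an alpha-blended solid-color quad, no texture needed.
    static void drawDimQuad(GLfloat alpha /* 0..1, e.g. 0.5f */)
    {
        static const GLfloat verts[4 * 2] = {
            -1.f, -1.f,   1.f, -1.f,   -1.f, 1.f,   1.f, 1.f,   // full-screen strip
        };

        glDisable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glColor4f(0.f, 0.f, 0.f, alpha);             // black, scaled by alpha

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisable(GL_BLEND);
    }

    // Driver-bug workaround: force the GL to finish before reading pixels back.
    static void readbackWorkaround(GLint x, GLint y, GLsizei w, GLsizei h, void* out)
    {
        glFinish();                                  // wait before glReadPixels()
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, out);
    }
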
2206097: Broken suggestions while composing message
2166583: Color artifacts with MDP dithering
2261119: Passion transition animations are rough
2216759: Screen flicker when dropdown list in background window shows or hides
This is part of enabling GPU composition instead of using the MDP. This change
is dependent on another change in the vendor project.
Specifically this change disables the use of EGLImageKHR for s/w buffers
for cache coherency reasons. memcpy is used instead.
Surface::validate() could sometimes dereference a null pointer before checking it wasn't null.
This will prevent the application from crashing when given bad parameters or when used incorrectly.
However, the bug above probably has another cause.
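The defensive pattern, sketched with hypothetical fields (the members the real Surface::validate() checks differ):

    #include <errno.h>
    #include <stdio.h>

    typedef int status_t;

    struct SharedClient { int status; };

    struct Surface {
        SharedClient* mClient;   // may legitimately be null for an invalid surface

        status_t validate() const {
            // Check the pointer before touching anything it points to; the old
            // code dereferenced it first and only then tested for null.
            if (mClient == NULL) {
                fprintf(stderr, "Surface::validate: invalid client\n");
                return -EINVAL;
            }
            return mClient->status;
        }
    };
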
in the kernel requires a guard page, so 1M allocations fragment memory very
badly. Subtracting a couple of pages so that they fit in a power of
two allows the kernel to make more efficient use of its virtual address space.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
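A small numeric illustration of the idea, assuming 4 KB pages and a nominal 1 MB buffer (the exact sizes are not taken from the change):

    #include <cstddef>
    #include <cstdio>

    int main()
    {
        const size_t kPageSize = 4096;               // assumed page size
        const size_t kOneMb    = 1 << 20;

        // A 1 MB request plus its guard page no longer fits in a 1 MB-aligned
        // slot, so the kernel's virtual address space gets fragmented.
        // Shaving a couple of pages off keeps request + guard page within the
        // power-of-two slot.
        const size_t trimmed = kOneMb - 2 * kPageSize;

        std::printf("1 MB request + guard page = %zu bytes (> 1 MB slot)\n",
                    kOneMb + kPageSize);
        std::printf("trimmed request + guard   = %zu bytes (fits in 1 MB)\n",
                    trimmed + kPageSize);
        return 0;
    }
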
This builds on the EGLImage solution. We simply use copybit to convert from the
YUV frame into an EGLImage created for that purpose and proceed with the
regular EGLImage code.
We need to do this because "regular" GL doesn't support YUV textures.
We could improve upon this by detecting exactly what the GL supports and bypass
this extra step if not required, but we'll do this later if needed.
This change goes with a kernel driver change that reduces the audio buffer size from 4800 bytes (~27ms) to 3072 bytes (~17ms).
- The AudioFlinger modifications in change 0bca68cfff161abbc992fec82dc7c88079dd1a36 have been removed: the short sleep period was counterproductive when the AudioTrack uses the callback thread, as it causes too many preemptions.
- The AudioFlinger mixer thread now detects a long standby exit time and, in that case, anticipates the start by writing 0s as soon as a track is enabled, even if it is not ready for mixing.
- AudioTrack::start() is modified to start the callback thread before starting the IAudioTrack, so that the thread startup time is masked by the IAudioTrack start and mixer thread wakeup time.
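For reference, the two buffer sizes translate to latency roughly as follows, assuming 44.1 kHz, 16-bit stereo output (4 bytes per frame):

    #include <cstdio>

    int main()
    {
        const double rateHz        = 44100.0;               // assumed sample rate
        const double bytesPerFrame = 2 /*ch*/ * 2 /*bytes*/; // 16-bit stereo

        const double oldMs = 4800.0 / (rateHz * bytesPerFrame) * 1000.0;
        const double newMs = 3072.0 / (rateHz * bytesPerFrame) * 1000.0;

        std::printf("4800-byte buffer: %.1f ms\n", oldMs);   // ~27 ms
        std::printf("3072-byte buffer: %.1f ms\n", newMs);   // ~17 ms
        return 0;
    }
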
To prevent buggy command implementations from poisoning binder threads'
scheduling class & priority for future command execution, we now reset the
cgroup and thread priority to foreground/normal when a binder service thread
finishes executing the designated command.
Change-Id: Ibc0ab2485751453f6dc96fdb4eb877fd02796e3f
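A sketch of the reset step using plain POSIX setpriority(); the cgroup move is Android-specific (e.g. libcutils' set_sched_policy()) and is only indicated in a comment, and restoreBinderThreadDefaults() is a made-up name:

    #include <sys/time.h>
    #include <sys/resource.h>

    // After a binder command returns, put the servicing thread back to its
    // default scheduling so a badly behaved command can't leave it in the
    // background (or an elevated) class for the next caller.
    static void restoreBinderThreadDefaults()
    {
        // Reset the nice level of the calling thread (who == 0) to normal.
        setpriority(PRIO_PROCESS, 0, 0);

        // On Android the thread would also be moved back to the foreground
        // cgroup here, e.g. via libcutils' set_sched_policy(); omitted in this
        // portable sketch.
    }
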
we lost the concept of vertical stride when moving video playback to EGLImage.
Here we bring it back in a somewhat hacky way that will work only for the
softgl/mdp backend.
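What the vertical stride buys, sketched for a YCbCr 4:2:0 semi-planar frame; the helper and struct names are hypothetical:

    #include <cstddef>

    // Plane addressing for a YUV 4:2:0 semi-planar buffer whose allocated
    // height (vertical stride) may be larger than the visible height, e.g.
    // when the decoder rounds the height up to a multiple of 16 or 32.
    struct YuvLayout {
        size_t yOffset;
        size_t uvOffset;
        size_t totalSize;
    };

    static YuvLayout layoutYuv420sp(size_t stride, size_t vstride)
    {
        YuvLayout l;
        l.yOffset   = 0;
        // The chroma plane starts after stride * vstride luma bytes, NOT after
        // stride * height; losing the vertical stride is what broke playback.
        l.uvOffset  = stride * vstride;
        l.totalSize = l.uvOffset + (stride * vstride) / 2;
        return l;
    }
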
Reduce sleep time in AudioFlinger mixer thread when no data has been written to output to speed up startup time when exiting standby.
The rest of the modifications for this issue is in the kernel driver:
commit 0dbb0ee136ed8de757df1ae26d84556c1751deae for buffer size modification from 8192 to 4800 bytes.
Another kernel improvement that is not submitted yet will reduce the delay when the audio output is exiting standby.
add a way to convert a mapped "pushbuffer" buffer to a gralloc handle
which then can be safely used by surfaceflinger, without including
gralloc_priv.h
Temporarily make a function public that doesn't need to be. When
host gcc-4.0.3 is gone from the build servers we can undo this.
(Cherry-picked from eclair-mr2.)
Use EGLImageKHR instead of copybit directly.
We now have the basis to use streaming YUV textures (well, in fact
we already are). When/if we use the GPU instead of the MDP we'll
need to make sure it supports the appropriate YUV format.
Also make sure we compile if EGL_ANDROID_image_native_buffer is not supported
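The basic shape of the EGLImage path, guarded so it still builds when EGL_ANDROID_image_native_buffer is not available; bindNativeBuffer() is a made-up helper, and 'buffer' stands in for a gralloc/ANativeWindow buffer handle:

    #define EGL_EGLEXT_PROTOTYPES 1
    #define GL_GLEXT_PROTOTYPES 1
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES/gl.h>
    #include <GLES/glext.h>

    #ifdef EGL_ANDROID_image_native_buffer

    // Wrap a native buffer in an EGLImage and bind it to the currently bound
    // texture: the texture storage then aliases the buffer, no glTexImage2D copy.
    static EGLImageKHR bindNativeBuffer(EGLDisplay dpy, EGLClientBuffer buffer)
    {
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                              EGL_NATIVE_BUFFER_ANDROID,
                                              buffer, NULL);
        if (image != EGL_NO_IMAGE_KHR) {
            glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
        }
        return image;
    }

    #else  // fall back cleanly when the extension is absent at build time

    static void* bindNativeBuffer(EGLDisplay, EGLClientBuffer) { return 0; }

    #endif
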