Audio
This page explains how to implement the audio Hardware Abstraction Layer (HAL) and configure the shared library.
The audio HAL is composed of three different interfaces that you must implement:

- hardware/libhardware/include/hardware/audio.h - represents the main functions of an audio device.
- hardware/libhardware/include/hardware/audio_policy.h - represents the audio policy manager, which handles things like audio routing and volume control policies.
- hardware/libhardware/include/hardware/audio_effect.h - represents effects that can be applied to audio, such as downmixing, echo, or noise suppression.
See the implementation for the Galaxy Nexus at device/samsung/tuna/audio for an example.
In addition to implementing the HAL, you need to create a
device/<company_name>/<device_name>/audio/audio_policy.conf file
that declares the audio devices present on your product. For an example, see the file for
the Galaxy Nexus audio hardware in device/samsung/tuna/audio/audio_policy.conf.
Also, see
the system/core/include/system/audio.h and system/core/include/system/audio_policy.h
header files for a reference of the properties that you can define.
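For orientation only, the fragment below sketches what a minimal audio_policy.conf declaration can look like, with one primary module exposing a speaker output and a built-in-mic input. The module name, devices, sampling rates, and channel masks here are placeholder assumptions and must be replaced with the values your hardware actually supports:

```
global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC
      }
    }
  }
}
```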
If your hardware and driver support multichannel audio via HDMI, you can output the audio stream directly to the audio hardware. This bypasses the AudioFlinger mixer, so the stream is not downmixed to two channels.
The audio HAL must expose whether an output stream profile supports multichannel audio capabilities. If the HAL exposes its capabilities, the default policy manager allows multichannel playback over HDMI.
For more implementation details, see the device/samsung/tuna/audio/audio_hw.c file in the Android 4.1 release.
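One way a HAL of this vintage exposes its channel capabilities is by answering the framework's get_parameters() query for the AUDIO_PARAMETER_STREAM_SUP_CHANNELS key ("sup_channels", defined in hardware/audio.h). The standalone C sketch below only illustrates the shape of that reply: query_sup_channels is a hypothetical helper, and the hard-coded masks stand in for what a real implementation would derive from the attached HDMI sink's EDID.

```c
#include <string.h>

/* Key the framework uses to query supported channel masks
 * (AUDIO_PARAMETER_STREAM_SUP_CHANNELS in hardware/audio.h). */
#define SUP_CHANNELS_KEY "sup_channels"

/* Hypothetical stand-in for an output stream's get_parameters hook.
 * A real HAL would inspect the HDMI sink's EDID; this sketch
 * hard-codes stereo and 5.1 purely for illustration. */
const char *query_sup_channels(const char *keys)
{
    if (strstr(keys, SUP_CHANNELS_KEY) == NULL)
        return "";  /* key not requested: empty reply */
    return SUP_CHANNELS_KEY
           "=AUDIO_CHANNEL_OUT_STEREO|AUDIO_CHANNEL_OUT_5POINT1";
}
```

When the reply advertises multichannel masks, the default policy manager can select the direct HDMI output instead of the stereo mixer path.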
To specify that your product contains a multichannel audio output, edit the audio_policy.conf file to describe the multichannel output for your product. The following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the audio policy manager queries the actual channel masks supported by the HDMI sink after connection. You can also specify a static channel mask, such as AUDIO_CHANNEL_OUT_5POINT1.
audio_hw_modules {
  primary {
    outputs {
      ...
      hdmi {
        sampling_rates 44100|48000
        channel_masks dynamic
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_AUX_DIGITAL
        flags AUDIO_OUTPUT_FLAG_DIRECT
      }
      ...
    }
    ...
  }
  ...
}
When content is sent to an audio device that does not support multichannel audio, AudioFlinger's mixer automatically downmixes it to stereo.
Ensure the audio codecs your hardware and drivers support are properly declared for your product. See Exposing Codecs to the Framework for information on how to do this.
You need to package the HAL implementation into a shared library and copy it to the appropriate location by creating the following:

- A device/<company_name>/<device_name>/audio directory to contain your library's source files.
- An Android.mk file to build the shared library. Ensure that the Makefile contains the following line:
LOCAL_MODULE := audio.primary.<device_name>
Note that your library must be named audio.primary.<device_name>.so so that Android can correctly load the library. The "primary" portion of this filename indicates that this shared library is for the primary audio hardware located on the device. The module names audio.a2dp.<device_name> and audio.usb.<device_name> are also available for Bluetooth and USB audio interfaces. Here is an example of an Android.mk from the Galaxy Nexus audio hardware:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := audio.primary.tuna
LOCAL_MODULE_RELATIVE_PATH := hw
LOCAL_SRC_FILES := audio_hw.c ril_interface.c
LOCAL_C_INCLUDES += \
    external/tinyalsa/include \
    $(call include-path-for, audio-utils) \
    $(call include-path-for, audio-effects)
LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
LOCAL_MODULE_TAGS := optional
include $(BUILD_SHARED_LIBRARY)
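Inside the library's source (audio_hw.c in the example above), the HAL becomes loadable by exporting a module descriptor named HAL_MODULE_INFO_SYM, which the system resolves after dlopen()-ing audio.primary.<device_name>.so. The sketch below uses simplified stand-in structs rather than the real hardware/hardware.h types, purely to illustrate the shape of that descriptor; the field set and adev_open body are illustrative assumptions.

```c
/* Illustrative stand-ins for the real types in hardware/hardware.h. */
struct hw_module_methods_t {
    int (*open)(const char *name);
};

struct hw_module_t {
    const char *id;
    const char *name;
    struct hw_module_methods_t *methods;
};

/* In a real HAL this allocates and fills an audio_hw_device. */
static int adev_open(const char *name)
{
    (void)name;
    return 0;  /* 0 indicates success */
}

static struct hw_module_methods_t hal_module_methods = {
    .open = adev_open,
};

/* The loader looks up this exported symbol (HAL_MODULE_INFO_SYM)
 * to bootstrap the HAL after loading the shared library. */
struct hw_module_t HAL_MODULE_INFO_SYM = {
    .id = "audio",  /* AUDIO_HARDWARE_MODULE_ID */
    .name = "Example audio HAL",
    .methods = &hal_module_methods,
};
```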
If your product supports low-latency audio as specified by the Android CDD, copy the corresponding XML feature file into your product. For example, in your product's device/<company_name>/<device_name>/device.mk Makefile:

PRODUCT_COPY_FILES := ...

PRODUCT_COPY_FILES += \
    frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml \
Copy the audio_policy.conf file that you created earlier to the system/etc/ directory in your product's device/<company_name>/<device_name>/device.mk Makefile. For example:

PRODUCT_COPY_FILES += \
    device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
Declare the shared modules of your audio HAL that are required by your product in the device/<company_name>/<device_name>/device.mk Makefile. For example, the Galaxy Nexus requires the primary and Bluetooth audio HAL modules:
PRODUCT_PACKAGES += \
audio.primary.tuna \
audio.a2dp.default
The Android platform provides audio effects on supported devices in the android.media.audiofx package, which is available for developers to access. For example, on the Nexus 10, the following pre-processing effects are supported:

- Automatic Gain Control (agc)
- Acoustic Echo Cancellation (aec)
- Noise Suppression (ns)
Pre-processing effects are always paired with the use case mode in which the pre-processing is requested. In Android app development, a use case is referred to as an AudioSource, and app developers request the AudioSource abstraction instead of the actual audio hardware device.
The Android Audio Policy Manager maps an AudioSource to the actual hardware with AudioPolicyManagerBase::getDeviceForInputSource(int
inputSource). The following sources are exposed to developers:
- android.media.MediaRecorder.AudioSource.CAMCORDER
- android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION
- android.media.MediaRecorder.AudioSource.VOICE_CALL
- android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK
- android.media.MediaRecorder.AudioSource.VOICE_UPLINK
- android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION
- android.media.MediaRecorder.AudioSource.MIC
- android.media.MediaRecorder.AudioSource.DEFAULT

The default pre-processing effects that are applied for each AudioSource are
specified in the /system/etc/audio_effects.conf file. To specify
your own default effects for every AudioSource, create a /system/vendor/etc/audio_effects.conf file
and specify any pre-processing effects that you need to turn on. For an example,
see the implementation for the Nexus 10 in device/samsung/manta/audio_effects.conf.
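The AudioSource-to-hardware mapping that AudioPolicyManagerBase::getDeviceForInputSource() performs can be pictured with a standalone C sketch. The enum values and the device_for_input_source name below are simplified, hypothetical stand-ins for the real audio_source_t and audio_devices_t constants in system/core/include/system/audio.h; the actual framework code is C++ and handles many more sources and devices.

```c
/* Simplified stand-ins for the real constants in system/audio.h
 * (illustrative only; not the real values). */
enum audio_source {
    AUDIO_SOURCE_DEFAULT,
    AUDIO_SOURCE_MIC,
    AUDIO_SOURCE_CAMCORDER,
    AUDIO_SOURCE_VOICE_COMMUNICATION,
    AUDIO_SOURCE_VOICE_RECOGNITION,
};

enum audio_device {
    AUDIO_DEVICE_IN_BUILTIN_MIC,
    AUDIO_DEVICE_IN_BACK_MIC,
};

/* Sketch of the kind of switch getDeviceForInputSource() performs:
 * each AudioSource is routed to a physical input device. */
enum audio_device device_for_input_source(enum audio_source src)
{
    switch (src) {
    case AUDIO_SOURCE_CAMCORDER:
        return AUDIO_DEVICE_IN_BACK_MIC;  /* camera-facing mic */
    default:
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    }
}
```

Because apps only name an AudioSource, the policy manager is free to pick the device and the pre-processing effects configured for that source.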
Warning: For the VOICE_RECOGNITION use case, do not enable the noise suppression pre-processing effect. It should not be turned on by default when recording from this audio source, and you should not enable it in your own audio_effects.conf file. Turning on the effect by default causes the device to fail the compatibility requirement, regardless of whether the effect was enabled by default through the configuration file or through the audio HAL implementation's default behavior.
The following example enables pre-processing for the VoIP AudioSource and Camcorder AudioSource. By declaring the AudioSource configuration in this manner, the framework automatically requests the audio HAL to use those effects.
pre_processing {
  voice_communication {
    aec {}
    ns {}
  }
  camcorder {
    agc {}
  }
}
For AudioSource tuning, there are no explicit requirements on audio gain or audio processing
with the exception of voice recognition (VOICE_RECOGNITION).
The following are the requirements for voice recognition:

- "Flat" frequency response (+/- 3 dB) from 100 Hz to 4 kHz
- Close-talk configuration: 90 dB SPL reads RMS of 2500 (16-bit samples)
- Level tracks linearly from -18 dB to +12 dB relative to 90 dB SPL
- THD < 1% (90 dB SPL in the 100 Hz to 4 kHz range)
- Effects and pre-processing must be disabled by default
Examples of tuning different effects for different sources are:

- Noise suppression tuned for wind noise for CAMCORDER
- Noise suppression tuned for stationary noise for VOICE_COMMUNICATION
- Automatic gain control tuned for close-talk for VOICE_COMMUNICATION and the main phone mic
- Automatic gain control tuned for far-talk for CAMCORDER

For more information, see:

- The device/samsung/manta/audio_effects.conf file for the Nexus 10