page.title=Audio
@jd:body

Android's audio HAL connects the higher level, audio-specific framework APIs in android.media to the underlying audio driver and hardware.

The following list describes how audio functionality is implemented, from the application framework down to the driver, and the relevant source code involved at each level:

Application framework
At the application framework level is the app code, which uses the android.media APIs to interact with the audio hardware. Internally, this code calls corresponding JNI glue classes to access the native code that interacts with the audio hardware.
JNI
The JNI code associated with android.media is located in the frameworks/base/core/jni/ and frameworks/base/media/jni directories. This code calls the lower level native code to obtain access to the audio hardware.
Native framework
The native framework is defined in frameworks/av/media/libmedia and provides a native equivalent to the android.media package. The native framework calls the Binder IPC proxies to obtain access to audio-specific services of the media server.
Binder IPC
The Binder IPC proxies facilitate communication over process boundaries. They are located in the frameworks/av/media/libmedia directory, and their names begin with the letter "I".
Media Server
The audio services in the media server, located in frameworks/av/services/audioflinger, are the actual code that interacts with your HAL implementations (a loading sketch follows this list).
HAL
The hardware abstraction layer defines the standard interface that the audio services call into and that you must implement for your audio hardware to function correctly. The audio HAL interfaces are located in hardware/libhardware/include/hardware.
Kernel Driver
The audio driver interacts with the hardware and your implementation of the HAL. You can choose to use ALSA, OSS, or a custom driver of your own at this level. The HAL is driver-agnostic.
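
To make the layering concrete, the following simplified C sketch shows how a service such as AudioFlinger binds to a HAL module through libhardware. It illustrates the mechanism only; it is not the actual AudioFlinger code.

#include <stddef.h>
#include <hardware/hardware.h>
#include <hardware/audio.h>

/* Simplified sketch: load the primary audio HAL module and open its device. */
static struct audio_hw_device *load_primary_audio_hal(void)
{
    const struct hw_module_t *module;
    struct audio_hw_device *dev;

    /* Resolves to a library such as audio.primary.<device>.so. */
    if (hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, "primary", &module) != 0)
        return NULL;
    /* Invokes the module's open() method with AUDIO_HARDWARE_INTERFACE. */
    if (audio_hw_device_open(module, &dev) != 0)
        return NULL;
    return dev;
}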

Note: If you do choose ALSA, we recommend using external/tinyalsa for the user portion of the driver because of its compatible licensing (the standard user-mode ALSA library is GPL-licensed).
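
For example, a tinyalsa-based HAL opens a PCM device and writes 16-bit frames to it. The following is a minimal sketch; the card and device numbers and the open_output() helper are hypothetical and board-specific.

#include <stddef.h>
#include <tinyalsa/asoundlib.h>

/* Hypothetical, board-specific identifiers. */
#define CARD_DEFAULT      0
#define DEVICE_MULTIMEDIA 0

static struct pcm *open_output(void)
{
    struct pcm_config config = {
        .channels     = 2,
        .rate         = 44100,
        .period_size  = 1024,
        .period_count = 4,
        .format       = PCM_FORMAT_S16_LE,
    };

    struct pcm *pcm = pcm_open(CARD_DEFAULT, DEVICE_MULTIMEDIA, PCM_OUT, &config);
    if (pcm == NULL)
        return NULL;
    if (!pcm_is_ready(pcm)) {
        /* pcm_get_error() describes why the open failed. */
        pcm_close(pcm);
        return NULL;
    }
    return pcm;  /* Frames can now be written with pcm_write(). */
}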

Implementing the HAL

The audio HAL is composed of three different interfaces that you must implement:

  * hardware/libhardware/include/hardware/audio.h - represents the main functions of an audio device
  * hardware/libhardware/include/hardware/audio_policy.h - represents the audio policy manager, which handles things like audio routing and volume control policies
  * hardware/libhardware/include/hardware/audio_effect.h - represents effects that can be applied to audio, such as downmixing, echo cancellation, or noise suppression

See the implementation for the Galaxy Nexus at device/samsung/tuna/audio for an example.
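
Every audio HAL library also exports the standard module symbol that libhardware looks up when loading it. The following is a minimal sketch of that boilerplate, with a stub adev_open(); it is illustrative only, not the tuna implementation.

#include <errno.h>
#include <hardware/hardware.h>
#include <hardware/audio.h>

static int adev_open(const hw_module_t *module, const char *name,
                     hw_device_t **device)
{
    /* A real HAL allocates a struct audio_hw_device here, fills in its
       function pointers (open_output_stream, set_mode, and so on), and
       returns it through *device. */
    (void)module; (void)name; (void)device;
    return -ENOSYS;  /* Stub only. */
}

static struct hw_module_methods_t hal_module_methods = {
    .open = adev_open,
};

struct audio_module HAL_MODULE_INFO_SYM = {
    .common = {
        .tag                = HARDWARE_MODULE_TAG,
        .module_api_version = AUDIO_MODULE_API_VERSION_0_1,
        .hal_api_version    = HARDWARE_HAL_API_VERSION,
        .id                 = AUDIO_HARDWARE_MODULE_ID,
        .name               = "Example audio HAL",
        .author             = "Example Corp.",
        .methods            = &hal_module_methods,
    },
};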

In addition to implementing the HAL, you need to create a device/<company_name>/<device_name>/audio/audio_policy.conf file that declares the audio devices present on your product. For an example, see the file for the Galaxy Nexus audio hardware in device/samsung/tuna/audio/audio_policy.conf. Also, see the system/core/include/system/audio.h and system/core/include/system/audio_policy.h header files for a reference to the properties that you can define.
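
For illustration, a minimal audio_policy.conf has a global_configuration section naming the attached devices, plus one hw module per HAL library. The device lists and profile values below are product-specific assumptions, not a template to copy verbatim:

global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      speaker {
        sampling_rates 44100|48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC
      }
    }
  }
}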

Multi-channel support

If your hardware and driver support multi-channel audio via HDMI, you can output the audio stream directly to the audio hardware. This bypasses the AudioFlinger mixer, so the stream is not downmixed to two channels.

The audio HAL must expose whether an output stream profile supports multi-channel audio. If the HAL exposes this capability, the default policy manager allows multichannel playback over HDMI.

For more implementation details, see device/samsung/tuna/audio/audio_hw.c in the Jelly Bean release.
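
One way to expose this capability is to answer the policy manager's AUDIO_PARAMETER_STREAM_SUP_CHANNELS ("sup_channels") query from the output stream's get_parameters() hook. The sketch below hard-codes the reply for brevity; a real HAL would build the list from the channel masks advertised in the HDMI sink's EDID.

#include <string.h>
#include <cutils/str_parms.h>
#include <hardware/audio.h>

/* Simplified sketch: report supported channel masks to the policy manager. */
static char *out_get_parameters(const struct audio_stream *stream, const char *keys)
{
    struct str_parms *query = str_parms_create_str(keys);
    struct str_parms *reply = str_parms_create();
    char value[256];
    char *str;

    (void)stream;
    if (str_parms_get_str(query, AUDIO_PARAMETER_STREAM_SUP_CHANNELS,
                          value, sizeof(value)) >= 0) {
        /* A real HAL derives this list from the sink's EDID after connection. */
        str_parms_add_str(reply, AUDIO_PARAMETER_STREAM_SUP_CHANNELS,
                          "AUDIO_CHANNEL_OUT_STEREO|AUDIO_CHANNEL_OUT_5POINT1");
        str = str_parms_to_str(reply);
    } else {
        str = strdup("");
    }

    str_parms_destroy(query);
    str_parms_destroy(reply);
    return str;
}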

To specify that your product contains a multichannel audio output, edit the audio_policy.conf file to describe the multichannel output for your product. The following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the audio policy manager queries the actual channel masks supported by the HDMI sink after connection. You can also specify a static channel mask such as AUDIO_CHANNEL_OUT_5POINT1.

audio_hw_modules {
  primary {
    outputs {
      ...
      hdmi {
        sampling_rates 44100|48000
        channel_masks dynamic
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_AUX_DIGITAL
        flags AUDIO_OUTPUT_FLAG_DIRECT
      }
      ...
    }
    ...
  }
  ...
}

If your product does not support multichannel audio, AudioFlinger's mixer automatically downmixes the content to stereo when it is sent to an audio device that does not support multichannel audio.

Media Codecs

Ensure that the audio codecs that your hardware and drivers support are properly declared for your product. See Exposing Codecs to the Framework for information on how to do this.

Configuring the Shared Library

You need to package your HAL implementation into a shared library and install it in the appropriate location by creating an Android.mk file:

  1. Create a device/<company_name>/<device_name>/audio directory to contain your library's source files.
  2. Create an Android.mk file to build the shared library. Ensure that the Makefile contains the following line:
    LOCAL_MODULE := audio.primary.<device_name>
    

    Notice that your library must be named audio.primary.<device_name>.so so that Android can correctly load the library. The "primary" portion of this filename indicates that this shared library is for the primary audio hardware located on the device. The module names audio.a2dp.<device_name> and audio.usb.<device_name> are also available for Bluetooth and USB audio interfaces. Here is an example of an Android.mk from the Galaxy Nexus audio hardware:

    LOCAL_PATH := $(call my-dir)
    
    include $(CLEAR_VARS)
    
    LOCAL_MODULE := audio.primary.tuna
    LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
    LOCAL_SRC_FILES := audio_hw.c ril_interface.c
    LOCAL_C_INCLUDES += \
            external/tinyalsa/include \
            $(call include-path-for, audio-utils) \
            $(call include-path-for, audio-effects)
    LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
    LOCAL_MODULE_TAGS := optional
    
    include $(BUILD_SHARED_LIBRARY)
    
  3. If your product supports low latency audio as specified by the Android CDD, copy the corresponding XML feature file into your product. For example, in your product's device/<company_name>/<device_name>/device.mk Makefile:
    PRODUCT_COPY_FILES := ...
    
    PRODUCT_COPY_FILES += \
            frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml
    
  4. Copy the audio_policy.conf file that you created earlier to your product's system/etc/ directory with a rule in your product's device/<company_name>/<device_name>/device.mk Makefile. For example:
    PRODUCT_COPY_FILES += \
            device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
    
  5. Declare the shared library modules of your audio HAL that are required by your product in the product's device/<company_name>/<device_name>/device.mk Makefile. For example, the Galaxy Nexus requires the primary and Bluetooth audio HAL modules:
    PRODUCT_PACKAGES += \
            audio.primary.tuna \
            audio.a2dp.default
    

Audio preprocessing effects

The Android platform supports audio effects on supported devices through the android.media.audiofx package, which is available for developers to access. For example, on the Nexus 10, the following pre-processing effects are supported:

  * Acoustic Echo Cancellation (AEC)
  * Automatic Gain Control (AGC)
  * Noise Suppression (NS)

Pre-processing effects are paired with the use case in which the pre-processing is requested. In Android app development, a use case is referred to as an AudioSource, and app developers request the AudioSource abstraction instead of a specific audio hardware device. The Android Audio Policy Manager maps an AudioSource to the actual hardware with AudioPolicyManagerBase::getDeviceForInputSource(int inputSource). In Android 4.2, the following sources are exposed to developers:

  * android.media.MediaRecorder.AudioSource.CAMCORDER
  * android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION
  * android.media.MediaRecorder.AudioSource.VOICE_CALL
  * android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK
  * android.media.MediaRecorder.AudioSource.VOICE_UPLINK
  * android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION
  * android.media.MediaRecorder.AudioSource.MIC
  * android.media.MediaRecorder.AudioSource.DEFAULT
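
The mapping itself is essentially a switch over the requested source. The following simplified sketch conveys the idea; the helper name device_for_input_source() is hypothetical, and the real AudioPolicyManagerBase logic also honors forced configurations such as Bluetooth SCO.

#include <system/audio.h>

/* Hypothetical, simplified AudioSource-to-device mapping. */
static audio_devices_t device_for_input_source(audio_source_t source,
                                               audio_devices_t available)
{
    switch (source) {
    case AUDIO_SOURCE_VOICE_RECOGNITION:
    case AUDIO_SOURCE_VOICE_COMMUNICATION:
        /* Prefer a wired headset mic when one is attached. */
        if (available & AUDIO_DEVICE_IN_WIRED_HEADSET)
            return AUDIO_DEVICE_IN_WIRED_HEADSET;
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    case AUDIO_SOURCE_CAMCORDER:
        /* A camcorder use case typically records with the back mic if present. */
        if (available & AUDIO_DEVICE_IN_BACK_MIC)
            return AUDIO_DEVICE_IN_BACK_MIC;
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    default:
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    }
}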

The default pre-processing effects that are applied for each AudioSource are specified in the /system/etc/audio_effects.conf file. To specify your own default effects for every AudioSource, create a /system/vendor/etc/audio_effects.conf file and specify any pre-processing effects that you need to turn on. For an example, see the implementation for the Nexus 10 in device/samsung/manta/audio_effects.conf.

Warning: For the VOICE_RECOGNITION use case, do not enable the noise suppression pre-processing effect. It should not be turned on by default when recording from this audio source, and you should not enable it in your own audio_effects.conf file. Turning on the effect by default causes the device to fail the compatibility requirement, regardless of whether it was enabled by default through the configuration file or through the audio HAL implementation's default behavior.

The following example enables pre-processing for the VoIP AudioSource and Camcorder AudioSource. By declaring the AudioSource configuration in this manner, the framework automatically requests the use of those effects from the audio HAL.

pre_processing {
  voice_communication {
    aec {}
    ns {}
  }
  camcorder {
    agc {}
  }
}

Source tuning

For AudioSource tuning, there are no explicit requirements on audio gain or audio processing with the exception of voice recognition (VOICE_RECOGNITION).

The following are the requirements for voice recognition:

  * "Flat" frequency response (+/- 3 dB) from 100 Hz to 4 kHz
  * Close-talk configuration: a 90 dB SPL reading at 1000 Hz yields an RMS of 2500 for 16-bit samples
  * Level tracking linear from -18 dB to +12 dB relative to 90 dB SPL
  * THD less than 1% for 1 kHz at 90 dB SPL input level
  * Effects and pre-processing must be disabled by default

Examples of tuning different effects for different sources are:

  * Noise suppressor
    * Tuned for wind noise suppression for CAMCORDER
    * Tuned for stationary noise suppression for VOICE_COMMUNICATION
  * Automatic gain control
    * Tuned for close-talk for VOICE_COMMUNICATION and the main phone mic
    * Tuned for far-talk for CAMCORDER
