# Audio

## Introduction

The audio framework is used to implement audio-related features, including audio playback, audio recording, volume management, and device management.

**Figure 1** Architecture of the audio framework

![](figures/en-us_image_0000001152315135.png)

### Basic Concepts

-   **Sampling**

    Sampling is the process of obtaining discrete-time signals by extracting samples from an analog signal in the continuous time domain at a specific interval.

-   **Sampling rate**

    The sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. The human hearing range is generally from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.

-   **Channel**

    Channels are the independent audio signals recorded or played back at different spatial positions. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.

-   **Audio frame**

    Audio data is a stream with no inherent frame boundaries. For convenience in audio algorithm processing and transmission, a data unit of 2.5 to 60 milliseconds is conventionally treated as one audio frame. This duration is called the sampling time, and its length is determined by the codec and the application requirements.

-   **PCM**

    Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time, continuous-valued analog signals into discrete-time, discrete-valued digital samples.

## Directory Structure

The structure of the repository directory is as follows:

```
/foundation/multimedia/audio_standard  # Service code of the audio framework
├── frameworks                         # Framework code
│   ├── native                         # Internal native API implementation
│   └── js                             # External JS API implementation
│       └── napi                       # External native API implementation
├── interfaces                         # API code
│   ├── inner_api                      # Internal APIs
│   └── kits                           # External APIs
├── sa_profile                         # Service configuration profile
├── services                           # Service code
├── LICENSE                            # License file
└── bundle.json                        # Build file
```

## Usage Guidelines

### Audio Playback

You can use the APIs provided in the current repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following describes how to use the **AudioRenderer** class to develop the audio playback feature:

1.  Call **Create()** with the required stream type to create an **AudioRenderer** instance.

    ```
    AudioStreamType streamType = STREAM_MUSIC; // Stream type example.
    std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
    ```

2.  (Optional) Call the static APIs **GetSupportedFormats()**, **GetSupportedChannels()**, **GetSupportedEncodingTypes()**, and **GetSupportedSamplingRates()** to obtain the supported parameter values.
3.  Prepare the device and call **SetParams()** to set the audio parameters.

    ```
    AudioRendererParams rendererParams;
    rendererParams.sampleFormat = SAMPLE_S16LE;
    rendererParams.sampleRate = SAMPLE_RATE_44100;
    rendererParams.channelCount = STEREO;
    rendererParams.encodingType = ENCODING_PCM;

    audioRenderer->SetParams(rendererParams);
    ```

4.  (Optional) Call **GetParams(rendererParams)** to verify the parameters that were set.
5.  Call **Start()** to start the audio playback task.
6.  Call **GetBufferSize()** to obtain the length of the buffer to be written.

    ```
    audioRenderer->GetBufferSize(bufferLen);
    ```

7.  Call **Write()** to write rendering data. Read the audio data from the source (such as an audio file) into a byte buffer, and call this API repeatedly until all the data is written.

    ```
    bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t written = audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
        if (written < 0) { // Write() returns a negative value on error.
            break;
        }
        bytesWritten += written;
    }
    ```

8.  Call **Drain()** to drain the playback stream.
9.  Call **Stop()** to stop the output.
10. After the playback task is complete, call **Release()** to release resources.
11. (Optional) Call **SetVolume(float)** and **GetVolume()** to set and obtain the audio stream volume, which ranges from 0.0 to 1.0.

The preceding steps cover the basic development scenario of audio playback. For details, see [**audio_renderer.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiorenderer/include/audio_renderer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

### Audio Recording

You can use the APIs provided in the current repository to record audio via an input device, convert the audio into audio data, and manage recording tasks. The following describes how to use the **AudioCapturer** class to develop the audio recording feature:

1.  Call **Create()** with the required stream type to create an **AudioCapturer** instance.

    ```
    AudioStreamType streamType = STREAM_MUSIC;
    std::unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(streamType);
    ```

2.  (Optional) Call the static APIs **GetSupportedFormats()**, **GetSupportedChannels()**, **GetSupportedEncodingTypes()**, and **GetSupportedSamplingRates()** to obtain the supported parameter values.
3.  Prepare the device and call **SetParams()** to set the audio parameters.

    ```
    AudioCapturerParams capturerParams;
    capturerParams.sampleFormat = SAMPLE_S16LE;
    capturerParams.sampleRate = SAMPLE_RATE_44100;
    capturerParams.channelCount = STEREO;
    capturerParams.encodingType = ENCODING_PCM;

    audioCapturer->SetParams(capturerParams);
    ```

4.  (Optional) Call **GetParams(capturerParams)** to verify the parameters that were set.
5.  Call **Start()** to start the audio recording task.
6.  Call **GetBufferSize()** to obtain the length of the buffer to be read.

    ```
    audioCapturer->GetBufferSize(bufferLen);
    ```

7.  Call **Read()** to read the captured audio data into a byte buffer. The application calls this API repeatedly to read data until recording is manually stopped.

    ```
    // Set isBlocking to true for a blocking read or false for a non-blocking read.
    while (numBuffersToCapture) {
        bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlocking);
        if (bytesRead < 0) { // Read() returns a negative value on error.
            break;
        } else if (bytesRead > 0) {
            fwrite(buffer, size, bytesRead, recFile); // This example writes the recorded data to a file.
            numBuffersToCapture--;
        }
    }
    ```

8.  (Optional) Call **Flush()** to clear the recording stream buffer.
9.  Call **Stop()** to stop the recording.
10. After the recording task is complete, call **Release()** to release resources.

For details, see [**audio_capturer.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocapturer/include/audio_capturer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

### Audio Management

You can use the APIs provided in [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) to control the volume and devices.

1. Call **GetInstance()** to obtain an **AudioSystemManager** instance.

    ```
    AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance();
    ```

#### Volume Control

2. Call **GetMaxVolume()** and **GetMinVolume()** to obtain the maximum and minimum volume levels allowed for an audio stream.

    ```
    AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC;
    int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType);
    int32_t minVol = audioSystemMgr->GetMinVolume(streamType);
    ```

3. Call **SetVolume()** and **GetVolume()** to set and obtain the volume of the audio stream, respectively.

    ```
    int32_t result = audioSystemMgr->SetVolume(streamType, 10);
    int32_t vol = audioSystemMgr->GetVolume(streamType);
    ```

4. Call **SetMute()** and **IsStreamMute()** to set and obtain the mute status of the audio stream, respectively.

    ```
    int32_t result = audioSystemMgr->SetMute(streamType, true);
    bool isMute = audioSystemMgr->IsStreamMute(streamType);
    ```

5. Call **SetRingerMode()** and **GetRingerMode()** to set and obtain the ringer mode, respectively. The supported ringer modes are the enumerated values of **AudioRingerMode** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

    ```
    int32_t result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT);
    AudioRingerMode ringMode = audioSystemMgr->GetRingerMode();
    ```

6. Call **SetMicrophoneMute()** and **IsMicrophoneMute()** to set and obtain the mute status of the microphone, respectively.

    ```
    int32_t result = audioSystemMgr->SetMicrophoneMute(true);
    bool isMicMute = audioSystemMgr->IsMicrophoneMute();
    ```

#### Device Control

7. Call **GetDevices()** and read the **deviceType_** and **deviceRole_** fields to obtain information about the audio input and output devices. For details, see the enumerated values of **DeviceFlag**, **DeviceType**, and **DeviceRole** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

    ```
    DeviceFlag deviceFlag = OUTPUT_DEVICES_FLAG;
    vector<sptr<AudioDeviceDescriptor>> audioDeviceDescriptors
        = audioSystemMgr->GetDevices(deviceFlag);
    sptr<AudioDeviceDescriptor> audioDeviceDescriptor = audioDeviceDescriptors[0];
    cout << audioDeviceDescriptor->deviceType_;
    cout << audioDeviceDescriptor->deviceRole_;
    ```

8. Call **SetDeviceActive()** and **IsDeviceActive()** to activate or deactivate an audio device and to obtain its activation status, respectively.

    ```
    ActiveDeviceType deviceType = SPEAKER;
    int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true);
    bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType);
    ```
9. (Optional) Call other APIs, such as **IsStreamActive()**, **SetAudioParameter()**, and **GetAudioParameter()**, provided in [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) if required.
10. Call **AudioManagerNapi::On** to subscribe to system volume changes. When the system volume changes, the application is notified with the following parameters:
    - **volumeType**: type of the system volume that changed.
    - **volume**: current volume level.
    - **updateUi**: whether to show the change on the UI. (**updateUi** is set to **true** for a volume increase or decrease event and to **false** for other changes.)

    ```
    const audioManager = audio.getAudioManager();

    export default {
      onCreate() {
        audioManager.on('volumeChange', (volumeChange) => {
          console.info('volumeType = ' + volumeChange.volumeType);
          console.info('volume = ' + volumeChange.volume);
          console.info('updateUi = ' + volumeChange.updateUi);
        });
      }
    }
    ```

#### Audio Scene

11. Call **SetAudioScene()** and **GetAudioScene()** to set and obtain the audio scene, respectively.

    ```
    int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL);
    AudioScene audioScene = audioSystemMgr->GetAudioScene();
    ```

For details about the supported audio scenes, see the enumerated values of **AudioScene** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

#### Audio Stream Management

You can use the APIs provided in [**audio_stream_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_stream_manager.h) to implement stream management.

1. Call **GetInstance()** to obtain an **AudioStreamManager** instance.

    ```
    AudioStreamManager *audioStreamMgr = AudioStreamManager::GetInstance();
    ```

2. Call **RegisterAudioRendererEventListener()** to register a listener for renderer state changes. The registered callback is invoked whenever the renderer state changes; implement it by overriding **OnRendererStateChange()** in the **AudioRendererStateChangeCallback** class. When the listener is no longer needed, call **UnregisterAudioRendererEventListener()** to unregister it.

    ```
    const int32_t clientPid = getpid(); // PID of the client process.

    class RendererStateChangeCallback : public AudioRendererStateChangeCallback {
    public:
        RendererStateChangeCallback() = default;
        ~RendererStateChangeCallback() = default;
        void OnRendererStateChange(
            const std::vector<std::unique_ptr<AudioRendererChangeInfo>> &audioRendererChangeInfos) override
        {
            cout << "OnRendererStateChange entered" << endl;
        }
    };

    std::shared_ptr<AudioRendererStateChangeCallback> callback = std::make_shared<RendererStateChangeCallback>();
    int32_t state = audioStreamMgr->RegisterAudioRendererEventListener(clientPid, callback);
    int32_t result = audioStreamMgr->UnregisterAudioRendererEventListener(clientPid);
    ```

3. Call **RegisterAudioCapturerEventListener()** to register a listener for capturer state changes. The registered callback is invoked whenever the capturer state changes; implement it by overriding **OnCapturerStateChange()** in the **AudioCapturerStateChangeCallback** class. When the listener is no longer needed, call **UnregisterAudioCapturerEventListener()** to unregister it.

    ```
    const int32_t clientPid = getpid(); // PID of the client process.

    class CapturerStateChangeCallback : public AudioCapturerStateChangeCallback {
    public:
        CapturerStateChangeCallback() = default;
        ~CapturerStateChangeCallback() = default;
        void OnCapturerStateChange(
            const std::vector<std::unique_ptr<AudioCapturerChangeInfo>> &audioCapturerChangeInfos) override
        {
            cout << "OnCapturerStateChange entered" << endl;
        }
    };

    std::shared_ptr<AudioCapturerStateChangeCallback> callback = std::make_shared<CapturerStateChangeCallback>();
    int32_t state = audioStreamMgr->RegisterAudioCapturerEventListener(clientPid, callback);
    int32_t result = audioStreamMgr->UnregisterAudioCapturerEventListener(clientPid);
    ```
4. Call **GetCurrentRendererChangeInfos()** to obtain information about all running renderers, including the client UID, session ID, renderer information, renderer state, and output device details.

    ```
    std::vector<std::unique_ptr<AudioRendererChangeInfo>> audioRendererChangeInfos;
    int32_t currentRendererChangeInfo = audioStreamMgr->GetCurrentRendererChangeInfos(audioRendererChangeInfos);
    ```

5. Call **GetCurrentCapturerChangeInfos()** to obtain information about all running capturers, including the client UID, session ID, capturer information, capturer state, and input device details.

    ```
    std::vector<std::unique_ptr<AudioCapturerChangeInfo>> audioCapturerChangeInfos;
    int32_t currentCapturerChangeInfo = audioStreamMgr->GetCurrentCapturerChangeInfos(audioCapturerChangeInfos);
    ```

    For details, see **audioRendererChangeInfos** and **audioCapturerChangeInfos** in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

6. Call **IsAudioRendererLowLatencySupported()** to check whether low latency is supported for a given stream configuration.

    ```
    AudioStreamInfo audioStreamInfo; // Fill in the stream parameters to be checked.
    bool isLatencySupport = audioStreamMgr->IsAudioRendererLowLatencySupported(audioStreamInfo);
    ```

#### Using JavaScript APIs

JavaScript applications can call the audio management APIs to control the volume and devices.
For details, see [**js-apis-audio.md**](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis/js-apis-audio.md#audiomanager).

### Bluetooth SCO Call

You can use the APIs provided in [**audio_bluetooth_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/services/include/audio_bluetooth/client/audio_bluetooth_manager.h) to implement Bluetooth calls over synchronous connection-oriented (SCO) links.

1. Implement **OnScoStateChanged()** to listen for SCO link state changes.

    ```
    void OnScoStateChanged(const BluetoothRemoteDevice &device, int state);
    ```

2. (Optional) Call the static API **RegisterBluetoothScoAgListener()** to register a Bluetooth SCO listener, and call **UnregisterBluetoothScoAgListener()** to unregister the listener when it is no longer required.
## Supported Devices

The following lists the device types supported by the audio framework.

1. **USB Type-C Headset**

    A digital headset with its own digital-to-analog converter (DAC) and amplifier that function as part of the headset.

2. **Wired Headset**

    An analog headset that does not contain a DAC. It can have a 3.5 mm jack or a USB-C socket without a DAC.

3. **Bluetooth Headset**

    A Bluetooth Advanced Audio Distribution Profile (A2DP) headset for wireless audio transmission.

4. **Internal Speaker and MIC**

    A device with a built-in speaker and a microphone, which are used as the default devices for playback and recording, respectively.

## Repositories Involved

[multimedia\_audio\_framework](https://gitee.com/openharmony/multimedia_audio_framework)

README_zh.md

1# 音频组件<a name="ZH-CN_TOPIC_0000001146901937"></a>
2
3-   [简介](#section119mcpsimp)
4    -   [基本概念](#section122mcpsimp)
5
6-   [目录](#section179mcpsimp)
7-   [使用说明](#section112738505318)
8    -   [音频播放](#section1147510562812)
9    -   [音频录制](#section295162052813)
10    -   [音频管理](#section645572311287)
11        -   [音量控制](#section645572311287_001)
12        -   [设备控制](#section645572311287_002)
13        -   [音频场景](#section645572311287_003)
14        -   [音频流管理](#section645572311287_004)
15        -   [JavaScript 用法](#section645572311287_005)
16    -   [铃声管理](#section645572311287_006)
17    -   [蓝牙SCO呼叫](#section645572311287_007)
18-   [支持设备](#section645572311287_008)
19-   [相关仓](#section340mcpsimp)
20
21## 简介<a name="section119mcpsimp"></a>
22
23音频组件用于实现音频相关的功能,包括音频播放,录制,音量管理和设备管理。
24
25**图 1**  音频组件架构图<a name="fig483116248288"></a>
26
27
28![](figures/zh-cn_image_0000001152315135.png)
29
30### 基本概念<a name="section122mcpsimp"></a>
31
32-   **采样**
33
34采样是指将连续时域上的模拟信号按照一定的时间间隔采样,获取到离散时域上离散信号的过程。
35
36-   **采样率**
37
38采样率为每秒从连续信号中提取并组成离散信号的采样次数,单位用赫兹(Hz)来表示。通常人耳能听到频率范围大约在20Hz~20kHz之间的声音。常用的音频采样频率有:8kHz、11.025kHz、22.05kHz、16kHz、37.8kHz、44.1kHz、48kHz、96kHz、192kHz等。
39
40-   **声道**
41
42声道是指声音在录制或播放时在不同空间位置采集或回放的相互独立的音频信号,所以声道数也就是声音录制时的音源数量或回放时相应的扬声器数量。
43
44-   **音频帧**
45
46音频数据是流式的,本身没有明确的一帧帧的概念,在实际的应用中,为了音频算法处理/传输的方便,一般约定俗成取2.5ms\~60ms为单位的数据量为一帧音频。这个时间被称之为“采样时间”,其长度没有特别的标准,它是根据编解码器和具体应用的需求来决定的。
47
48-   **PCM**
49
50PCM(Pulse Code Modulation),即脉冲编码调制,是一种将模拟信号数字化的方法,是将时间连续、取值连续的模拟信号转换成时间离散、抽样值离散的数字信号的过程。
51
52## 目录<a name="section179mcpsimp"></a>
53
54仓目录结构如下:
55
56```
57/foundation/multimedia/audio_standard  # 音频组件业务代码
58├── frameworks                         # 框架代码
59│   ├── native                         # 内部接口实现
60│   └── js                             # 外部接口实现
61│       └── napi                       # napi 外部接口实现
62├── interfaces                         # 接口代码
63│   ├── inner_api                      # 内部接口
64│   └── kits                           # 外部接口
65├── sa_profile                         # 服务配置文件
66├── services                           # 服务代码
67├── LICENSE                            # 证书文件
68└── bundle.json                        # 编译文件
69```
70
71## 使用说明<a name="section112738505318"></a>
72
73### 音频播放<a name="section1147510562812"></a>
74
75可以使用此仓库内提供的接口将音频数据转换为音频模拟信号,使用输出设备播放音频信号,以及管理音频播放任务。以下步骤描述了如何使用 **AudioRenderer** 开发音频播放功能:
76
771.  使用 **Create** 接口和所需流类型来获取 **AudioRenderer** 实例。
78
79    ```
80    AudioStreamType streamType = STREAM_MUSIC; // 流类型示例
81    std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
82    ```
83
842.  (可选)静态接口 **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), **GetSupportedSamplingRates**() 可用于获取支持的参数。
853.  准备设备,调用实例的 **SetParams** 。
86
87    ```
88    AudioRendererParams rendererParams;
89    rendererParams.sampleFormat = SAMPLE_S16LE;
90    rendererParams.sampleRate = SAMPLE_RATE_44100;
91    rendererParams.channelCount = STEREO;
92    rendererParams.encodingType = ENCODING_PCM;
93
94    audioRenderer->SetParams(rendererParams);
95    ```
96
974.  (可选)使用 audioRenderer->**GetParams**(rendererParams) 来验证 SetParams。
985.  (可选)使用 **SetAudioEffectMode** 和 **GetAudioEffectMode** 接口来设置和获取当前音频流的音效模式。
99    ```
100    AudioEffectMode effectMode = EFFECT_DEFAULT;
101    int32_t result = audioRenderer->SetAudioEffectMode(effectMode);
102    AudioEffectMode mode = audioRenderer->GetAudioEffectMode();
103    ```
1046.  AudioRenderer 实例调用 audioRenderer->**Start**() 函数来启动播放任务。
1057.  使用 **GetBufferSize** 接口获取要写入的缓冲区长度。
106
107    ```
108    audioRenderer->GetBufferSize(bufferLen);
109    ```
110
1118.  从源(例如音频文件)读取要播放的音频数据并将其传输到字节流中。重复调用Write函数写入渲染数据。
112
113    ```
114    bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
115    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
116        bytesWritten += audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
117        if (bytesWritten < 0)
118            break;
119    }
120    ```
121
1229.  调用audioRenderer->**Drain**()来清空播放流。
12310.  调用audioRenderer->**Stop**()来停止输出。
12411. 播放任务完成后,调用AudioRenderer实例的audioRenderer->**Release**()函数来释放资源。
125
126以上提供了基本音频播放使用场景。
127
128
12912. 使用 audioRenderer->**SetVolume(float)** 和 audioRenderer->**GetVolume()** 来设置和获取当前音频流音量, 可选范围为 0.0 到 1.0。
130
131提供上述基本音频播放使用范例。更多接口说明请参考[**audio_renderer.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiorenderer/include/audio_renderer.h) 和 [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h)132
133### 音频录制<a name="section295162052813"></a>
134
135可以使用此仓库内提供的接口,让应用程序可以完成使用输入设备进行声音录制,将语音转换为音频数据,并管理录制的任务。以下步骤描述了如何使用 **AudioCapturer** 开发音频录制功能:
136
1371.  使用Create接口和所需流类型来获取 **AudioCapturer** 实例。
138
139    ```
140    AudioStreamType streamType = STREAM_MUSIC;
141    std::unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(streamType);
142    ```
143
1442.  (可选)静态接口 **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), **GetSupportedSamplingRates**() 可用于获取支持的参数。
1453.  准备设备,调用实例的 **SetParams** 。
146
147    ```
148    AudioCapturerParams capturerParams;
149    capturerParams.sampleFormat = SAMPLE_S16LE;
150    capturerParams.sampleRate = SAMPLE_RATE_44100;
151    capturerParams.channelCount = STEREO;
152    capturerParams.encodingType = ENCODING_PCM;
153
154    audioCapturer->SetParams(capturerParams);
155    ```
156
1574.  (可选)使用 audioCapturer->**GetParams**(capturerParams) 来验证 SetParams()。
1585.  AudioCapturer 实例调用 AudioCapturer->**Start**() 函数来启动录音任务。
1596.  使用 **GetBufferSize** 接口获取要写入的缓冲区长度。
160
161    ```
162    audioCapturer->GetBufferSize(bufferLen);
163    ```
164
1657.  读取录制的音频数据并将其转换为字节流。重复调用read函数读取数据直到主动停止。
166
167    ```
168    // set isBlocking = true/false for blocking/non-blocking read
169    bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlocking);
170    while (numBuffersToCapture) {
171        bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead);
172        if (bytesRead <= 0) {
173            break;
174        } else if (bytesRead > 0) {
175            fwrite(buffer, size, bytesRead, recFile); // example shows writes the recorded data into a file
176            numBuffersToCapture--;
177        }
178    }
179    ```
180
1818.  (可选)audioCapturer->**Flush**() 来清空录音流缓冲区。
1829.  AudioCapturer 实例调用 audioCapturer->**Stop**() 函数停止录音。
18310. 录音任务完成后,调用 AudioCapturer 实例的 audioCapturer->**Release**() 函数释放资源。
184
185提供上述基本音频录制使用范例。更多API请参考[**audio_capturer.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocapturer/include/audio_capturer.h)和[**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h)186
187### 音频管理<a name="section645572311287"></a>
188可以使用 [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) 内的接口来控制音量和设备。
1891. 使用 **GetInstance** 接口获取 **AudioSystemManager** 实例.
190    ```
191    AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance();
192    ```
193#### 音量控制<a name="section645572311287_001"></a>
1942. 使用 **GetMaxVolume** 和  **GetMinVolume** 接口去查询音频流支持的最大和最小音量等级,在此范围内设置音量。
195    ```
196    AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC;
197    int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType);
198    int32_t minVol = audioSystemMgr->GetMinVolume(streamType);
199    ```
2003. 使用 **SetVolume** 和 **GetVolume** 接口来设置和获取指定音频流的音量等级。
201    ```
202    int32_t result = audioSystemMgr->SetVolume(streamType, 10);
203    int32_t vol = audioSystemMgr->GetVolume(streamType);
204    ```
2054. 使用 **SetMute** 和 **IsStreamMute** 接口来设置和获取指定音频流的静音状态。
206    ```
207    int32_t result = audioSystemMgr->SetMute(streamType, true);
208    bool isMute = audioSystemMgr->IsStreamMute(streamType);
2095. 使用 **SetRingerMode** 和 **GetRingerMode** 接口来设置和获取铃声模式。参考在 [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h)  定义的 **AudioRingerMode** 枚举来获取支持的铃声模式。
210    ```
211    int32_t result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT);
212    AudioRingerMode ringMode = audioSystemMgr->GetRingerMode();
213    ```
2146. 使用 **SetMicrophoneMute** 和 **IsMicrophoneMute** 接口来设置和获取麦克风的静音状态。
215    ```
216    int32_t result = audioSystemMgr->SetMicrophoneMute(true);
217    bool isMicMute = audioSystemMgr->IsMicrophoneMute();
218    ```
219#### 设备控制<a name="section645572311287_002"></a>
2207. 使用 **GetDevices**, **deviceType_** 和 **deviceRole_** 接口来获取音频输入输出设备信息。 参考 [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h) 内定义的DeviceFlag, DeviceType 和 DeviceRole 枚举。
221    ```
222    DeviceFlag deviceFlag = OUTPUT_DEVICES_FLAG;
223    vector<sptr<AudioDeviceDescriptor>> audioDeviceDescriptors
224        = audioSystemMgr->GetDevices(deviceFlag);
225    sptr<AudioDeviceDescriptor> audioDeviceDescriptor = audioDeviceDescriptors[0];
226    cout << audioDeviceDescriptor->deviceType_;
227    cout << audioDeviceDescriptor->deviceRole_;
228    ```
2298. 使用 **SetDeviceActive** 和 **IsDeviceActive** 接口去激活/去激活音频设备和获取音频设备激活状态。
230     ```
231    ActiveDeviceType deviceType = SPEAKER;
232    int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true);
233    bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType);
234    ```
2359. 提供其他用途的接口如 **IsStreamActive**, **SetAudioParameter** and **GetAudioParameter**, 详细请参考 [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h)
23610. 应用程序可以使用 **AudioManagerNapi::On**注册系统音量的更改。 在此,如果应用程序监听到系统音量更改的事件,就会用以下参数通知应用程序:
237volumeType : 更改的系统音量的类型
238volume : 当前的音量等级
239updateUi : 是否需要显示变化详细信息。(如果音量被增大/减小,将updateUi标志设置为true,在其他情况下,updateUi设置为false)。
240    ```
241    const audioManager = audio.getAudioManager();
242
243    export default {
244      onCreate() {
245        audioManager.on('volumeChange', (volumeChange) ==> {
246          console.info('volumeType = '+volumeChange.volumeType);
247          console.info('volume = '+volumeChange.volume);
248          console.info('updateUi = '+volumeChange.updateUi);
249        }
250      }
251    }
252    ```
253
254#### 音频场景<a name="section645572311287_003"></a>
25511. 使用 **SetAudioScene** 和 **getAudioScene** 接口去更改和检查音频策略。
256    ```
257    int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL);
258    AudioScene audioScene = audioSystemMgr->GetAudioScene();
259    ```
260有关支持的音频场景,请参阅 **AudioScene** 中的枚举[**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h)261#### 音频流管理<a name="section645572311287_004"></a>
262可以使用[**audio_stream_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_stream_manager.h)提供的接口用于流管理功能。
2631. 使用 **GetInstance** 接口获得 **AudioSystemManager** 实例。
264    ```
265    AudioStreamManager *audioStreamMgr = AudioStreamManager::GetInstance();
266    ```
267
2. Use **RegisterAudioRendererEventListener** to register a listener for renderer state changes. The callback is invoked whenever the state of a renderer stream changes; provide it by overriding **OnRendererStateChange** in the **AudioRendererStateChangeCallback** class.
    ```
    const int32_t clientPid = getpid();

    class RendererStateChangeCallback : public AudioRendererStateChangeCallback {
    public:
        RendererStateChangeCallback() = default;
        ~RendererStateChangeCallback() = default;
        void OnRendererStateChange(
            const std::vector<std::unique_ptr<AudioRendererChangeInfo>> &audioRendererChangeInfos) override
        {
            cout << "OnRendererStateChange entered" << endl;
        }
    };

    std::shared_ptr<AudioRendererStateChangeCallback> callback = std::make_shared<RendererStateChangeCallback>();
    int32_t state = audioStreamMgr->RegisterAudioRendererEventListener(clientPid, callback);
    int32_t result = audioStreamMgr->UnregisterAudioRendererEventListener(clientPid);
    ```
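    The register/unregister pair above is a per-client observer registry keyed by PID. The following standalone sketch models that pattern with hypothetical, simplified types (`ListenerRegistry` and a trimmed-down `RendererChangeInfo`; this is not the framework's code): one listener per client PID, fan-out notification, and 0/-1 status codes.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for the change-info payload; the real
    // AudioRendererChangeInfo (audio_info.h) carries far more fields.
    struct RendererChangeInfo {
        int32_t sessionId;
        std::string state;
    };

    class RendererStateListener {
    public:
        virtual ~RendererStateListener() = default;
        virtual void OnRendererStateChange(
            const std::vector<std::unique_ptr<RendererChangeInfo>> &infos) = 0;
    };

    // Minimal registry modeling Register/UnregisterAudioRendererEventListener:
    // one listener per client PID; notifications fan out to every client.
    class ListenerRegistry {
    public:
        int32_t Register(int32_t clientPid, std::shared_ptr<RendererStateListener> cb)
        {
            if (cb == nullptr) {
                return -1; // a null callback is rejected
            }
            listeners_[clientPid] = std::move(cb);
            return 0;
        }

        int32_t Unregister(int32_t clientPid)
        {
            return listeners_.erase(clientPid) == 1 ? 0 : -1;
        }

        void Notify(const std::vector<std::unique_ptr<RendererChangeInfo>> &infos)
        {
            for (auto &entry : listeners_) {
                entry.second->OnRendererStateChange(infos);
            }
        }

        std::size_t Count() const { return listeners_.size(); }

    private:
        std::map<int32_t, std::shared_ptr<RendererStateListener>> listeners_;
    };
    ```

    Registering the same PID twice simply replaces the previous listener, which keeps the registry bounded by the number of clients.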

3. Use **RegisterAudioCapturerEventListener** to register a listener for capturer state changes. The callback is invoked whenever the state of a capturer stream changes; provide it by overriding **OnCapturerStateChange** in the **AudioCapturerStateChangeCallback** class.
    ```
    const int32_t clientPid = getpid();

    class CapturerStateChangeCallback : public AudioCapturerStateChangeCallback {
    public:
        CapturerStateChangeCallback() = default;
        ~CapturerStateChangeCallback() = default;
        void OnCapturerStateChange(
            const std::vector<std::unique_ptr<AudioCapturerChangeInfo>> &audioCapturerChangeInfos) override
        {
            cout << "OnCapturerStateChange entered" << endl;
        }
    };

    std::shared_ptr<AudioCapturerStateChangeCallback> callback = std::make_shared<CapturerStateChangeCallback>();
    int32_t state = audioStreamMgr->RegisterAudioCapturerEventListener(clientPid, callback);
    int32_t result = audioStreamMgr->UnregisterAudioCapturerEventListener(clientPid);
    ```
4. Use **GetCurrentRendererChangeInfos** to obtain information about all the running renderer streams, including the client UID, session ID, renderer info, renderer state, and output device details.
    ```
    std::vector<std::unique_ptr<AudioRendererChangeInfo>> audioRendererChangeInfos;
    int32_t currentRendererChangeInfo = audioStreamMgr->GetCurrentRendererChangeInfos(audioRendererChangeInfos);
    ```

5. Use **GetCurrentCapturerChangeInfos** to obtain information about all the running capturer streams, including the client UID, session ID, capturer info, capturer state, and input device details.
    ```
    std::vector<std::unique_ptr<AudioCapturerChangeInfo>> audioCapturerChangeInfos;
    int32_t currentCapturerChangeInfo = audioStreamMgr->GetCurrentCapturerChangeInfos(audioCapturerChangeInfos);
    ```
    For details about the **audioRendererChangeInfos** and **audioCapturerChangeInfos** structures, see [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).
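    Both queries fill a caller-provided vector of `std::unique_ptr` change-info objects. The sketch below models that out-parameter style with a hypothetical, trimmed-down `ChangeInfo` struct (the real structures carry renderer/capturer info and device details) and shows a typical traversal: counting the streams that are in a given state.

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <memory>
    #include <vector>

    // Hypothetical, simplified stand-in for AudioRendererChangeInfo.
    struct ChangeInfo {
        int32_t clientUID;
        int32_t sessionId;
        int32_t rendererState; // illustrative state code, not the real enum
    };

    // Models the framework's query style: append results into the
    // caller-provided vector and return 0 on success. The two entries
    // here are made-up sample data.
    int32_t GetChangeInfos(std::vector<std::unique_ptr<ChangeInfo>> &out)
    {
        out.push_back(std::make_unique<ChangeInfo>(ChangeInfo{20010041, 1, 2}));
        out.push_back(std::make_unique<ChangeInfo>(ChangeInfo{20010041, 2, 2}));
        return 0;
    }

    // Typical traversal: count how many returned streams are in a state.
    int CountInState(const std::vector<std::unique_ptr<ChangeInfo>> &infos, int32_t state)
    {
        int count = 0;
        for (const auto &info : infos) {
            if (info->rendererState == state) {
                ++count;
            }
        }
        return count;
    }
    ```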

6. Use **IsAudioRendererLowLatencySupported** to check whether low-latency playback is supported for a given stream configuration.
    ```
    AudioStreamInfo audioStreamInfo;
    bool isLatencySupport = audioStreamMgr->IsAudioRendererLowLatencySupported(audioStreamInfo);
    ```
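    A check like this typically compares the requested stream parameters against the narrow set of configurations the fast (low-latency) path can handle. The sketch below is a standalone illustration with assumed capability values (48 kHz, mono or stereo); it is not the framework's actual capability table, and the real **AudioStreamInfo** uses enums rather than raw integers.

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Hypothetical, simplified stream description.
    struct StreamInfo {
        int32_t samplingRate;
        int32_t channels;
    };

    // Illustrative capability check: accept the configuration only when
    // both the sampling rate and channel count fall inside the assumed
    // fast-path capability set.
    bool IsLowLatencySupported(const StreamInfo &info)
    {
        static const std::vector<int32_t> fastRates = {48000}; // assumption
        const bool rateOk = std::find(fastRates.begin(), fastRates.end(),
                                      info.samplingRate) != fastRates.end();
        const bool channelsOk = info.channels == 1 || info.channels == 2;
        return rateOk && channelsOk;
    }
    ```

    Callers that fail this check would fall back to the normal-latency renderer path rather than treating it as an error.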
7. Use **GetEffectInfoArray** to query the audio effect modes supported for a given [**StreamUsage**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).
    ```
    AudioSceneEffectInfo audioSceneEffectInfo;
    int32_t status = audioStreamMgr->GetEffectInfoArray(audioSceneEffectInfo, streamUsage);
    ```
    For details about the supported audio effect modes, see the **AudioEffectMode** enum in [**audio_effect.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_effect.h).

#### JavaScript Usage<a name="section645572311287_005"></a>
JavaScript applications can use the audio management APIs provided by the system to control the volume and devices.\
For details about how to use the volume and device management JavaScript APIs, see [**js-apis-audio.md**](https://gitee.com/openharmony/docs/blob/master/zh-cn/application-dev/reference/apis/js-apis-audio.md#audiomanager).

### Bluetooth SCO Call<a name="section645572311287_007"></a>
You can use the APIs provided in [**audio_bluetooth_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/services/include/audio_bluetooth/client/audio_bluetooth_manager.h) to implement Bluetooth calls over a synchronous connection-oriented (SCO) link.

1. To listen for SCO state changes, use **OnScoStateChanged**.
```
void OnScoStateChanged(const BluetoothRemoteDevice &device, int state);
```

2. (Optional) Use the static APIs **RegisterBluetoothScoAgListener**() and **UnregisterBluetoothScoAgListener**() to register and unregister a Bluetooth SCO listener.
## Supported Devices<a name="section645572311287_008"></a>
The following lists the device types supported by the audio subsystem.

1. **USB Type-C Headset**\
    A digital headset that contains its own DAC (digital-to-analog converter) and amplifier as part of the headset.
2. **Wired Headset**\
    An analog headset that does not contain a DAC. It can have a 3.5 mm jack or a Type-C jack without a DAC.
3. **Bluetooth Headset**\
    A Bluetooth A2DP (Advanced Audio Distribution Profile) headset used for wireless audio transmission.
4. **Internal Speaker and MIC**\
    The built-in speaker and microphone are supported and used as the default devices for playback and recording, respectively.

## Repositories Involved<a name="section340mcpsimp"></a>

[multimedia\_audio\_framework](https://gitee.com/openharmony/multimedia_audio_framework)