# Audio<a name="EN-US_TOPIC_0000001146901937"></a>

- [Introduction](#introduction)
  - [Basic Concepts](#basic-concepts)
- [Directory Structure](#directory-structure)
- [Usage Guidelines](#usage-guidelines)
  - [Audio Playback](#audio-playback)
  - [Audio Recording](#audio-recording)
  - [Audio Management](#audio-management)
  - [Ringtone Management](#ringtone-management)
- [Supported Devices](#supported-devices)
- [Repositories Involved](#repositories-involved)

## Introduction<a name="introduction"></a>
The **audio\_standard** repository implements audio-related features, including audio playback, recording, volume management, and device management.

**Figure 1** Position in the subsystem architecture<a name="fig483116248288"></a>

### Basic Concepts<a name="basic-concepts"></a>

- **Sampling**

  Sampling is the process of obtaining discrete-time signals by extracting samples from an analog signal in a continuous time domain at a specific interval.

- **Sampling rate**

  The sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. The human hearing range is roughly 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, and 96 kHz.

- **Channel**

  Channels are the distinct spatial positions at which independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.

- **Audio frame**

  Audio data is in stream form. For the convenience of audio algorithm processing and transmission, a chunk of data covering 2.5 to 60 milliseconds is conventionally treated as one audio frame; the exact duration depends on the codec and the application requirements. For example, one 20 ms frame of 48 kHz stereo 16-bit PCM contains 48,000 × 0.02 = 960 samples per channel, which is 960 × 2 channels × 2 bytes = 3,840 bytes.

- **PCM**

  Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.

## Directory Structure<a name="directory-structure"></a>

The structure of the repository directory is as follows:

```
/foundation/multimedia/audio_framework  # Audio code
├── frameworks                          # Framework code
│   ├── native                          # Internal native API implementation
│   │                                   # (PulseAudio, libsndfile build configuration and pulseaudio-hdi modules)
│   └── js                              # External JS API implementation
│       └── napi                        # JS NAPI implementation
├── interfaces                          # Interfaces
│   ├── inner_api                       # Internal native APIs
│   └── kits                            # External JS APIs
├── sa_profile                          # Service configuration profile
├── services                            # Service code
├── LICENSE                             # License file
└── ohos.build                          # Build file
```
## Usage Guidelines<a name="usage-guidelines"></a>
### Audio Playback<a name="audio-playback"></a>
You can use the APIs provided in this repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following steps describe how to use **AudioRenderer** to develop the audio playback function:

1. Use the **Create** API with the required renderer configuration to get an **AudioRenderer** instance.
    ```
    AudioRendererOptions rendererOptions;
    rendererOptions.streamInfo.samplingRate = AudioSamplingRate::SAMPLE_RATE_44100;
    rendererOptions.streamInfo.encoding = AudioEncodingType::ENCODING_PCM;
    rendererOptions.streamInfo.format = AudioSampleFormat::SAMPLE_S16LE;
    rendererOptions.streamInfo.channels = AudioChannel::STEREO;
    rendererOptions.rendererInfo.contentType = ContentType::CONTENT_TYPE_MUSIC;
    rendererOptions.rendererInfo.streamUsage = StreamUsage::STREAM_USAGE_MEDIA;
    rendererOptions.rendererInfo.rendererFlags = 0;

    unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(rendererOptions);
    ```
2. (Optional) Use the static APIs **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), and **GetSupportedSamplingRates**() to query the values supported for these parameters (see the sketch after step 4).

3. (Optional) Use audioRenderer->**GetRendererInfo**(AudioRendererInfo &) and audioRenderer->**GetStreamInfo**(AudioStreamInfo &) to retrieve the current renderer configuration values (also covered in the sketch after step 4).

4. To listen for audio interrupt and state change events, register renderer callbacks using audioRenderer->**SetRendererCallback**.
    ```
    class AudioRendererCallbackImpl : public AudioRendererCallback {
    public:
        void OnInterrupt(const InterruptEvent &interruptEvent) override
        {
            if (interruptEvent.forceType == INTERRUPT_FORCE) { // Forced actions already taken by the framework
                switch (interruptEvent.hintType) {
                    case INTERRUPT_HINT_PAUSE:
                        // Force paused. Pause writing.
                        isRenderPaused_ = true;
                        break;
                    case INTERRUPT_HINT_STOP:
                        // Force stopped. Stop writing.
                        isRenderStopped_ = true;
                        break;
                }
            }
            if (interruptEvent.forceType == INTERRUPT_SHARE) { // Actions not forced; apps can choose how to handle them
                switch (interruptEvent.hintType) {
                    case INTERRUPT_HINT_PAUSE:
                        // Pause writing, if required.
                        break;
                    case INTERRUPT_HINT_RESUME:
                        // After a forced pause, resume if needed when this hint is received.
                        audioRenderer->Start();
                        break;
                }
            }
        }

        void OnStateChange(const RendererState state, const StateChangeCmdType cmdType) override
        {
            switch (state) {
                case RENDERER_PREPARED:
                    // Renderer prepared
                    break;
                case RENDERER_RUNNING:
                    // Renderer in running state
                    break;
                case RENDERER_STOPPED:
                    // Renderer stopped
                    break;
                case RENDERER_RELEASED:
                    // Renderer released
                    break;
                case RENDERER_PAUSED:
                    // Renderer paused
                    break;
            }
        }
    };

    std::shared_ptr<AudioRendererCallback> audioRendererCB = std::make_shared<AudioRendererCallbackImpl>();
    audioRenderer->SetRendererCallback(audioRendererCB);
    ```

    Implement the **AudioRendererCallback** class, override the **OnInterrupt** function, and register the instance using the **SetRendererCallback** API. Once registered, the application receives interrupt events, which carry both the forced action already taken by the audio framework and the action hints to be handled by the application. Refer to **audio_renderer.h** and **audio_info.h** for more details.

    Similarly, renderer state change callbacks can be received by overriding the **OnStateChange** function in the **AudioRendererCallback** class. Refer to **audio_renderer.h** for the list of renderer states.
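    The following is a minimal sketch of the optional query APIs referenced in steps 2 and 3. The return types of the static **GetSupported*** APIs are assumed here to be `std::vector` of the corresponding enum values; check **audio_renderer.h** for the authoritative signatures.
    ```
    // Assumption: the static GetSupported* APIs return std::vector of the respective enums.
    std::vector<AudioSampleFormat> formats = AudioRenderer::GetSupportedFormats();
    std::vector<AudioSamplingRate> rates = AudioRenderer::GetSupportedSamplingRates();

    // Read back the configuration of an existing renderer instance (step 3).
    AudioRendererInfo rendererInfo;
    AudioStreamInfo streamInfo;
    audioRenderer->GetRendererInfo(rendererInfo); // Current content type, stream usage, renderer flags
    audioRenderer->GetStreamInfo(streamInfo);     // Current sampling rate, encoding, format, channels
    ```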
5. To get callbacks for a frame mark position and/or a frame period position, register the corresponding callbacks using the audioRenderer->**SetRendererPositionCallback** and/or audioRenderer->**SetRendererPeriodPositionCallback** functions respectively.
    ```
    class RendererPositionCallbackImpl : public RendererPositionCallback {
    public:
        void OnMarkReached(const int64_t &framePosition) override
        {
            // Frame mark reached; framePosition is the frame mark number.
        }
    };

    std::shared_ptr<RendererPositionCallback> framePositionCB = std::make_shared<RendererPositionCallbackImpl>();
    // markPosition is the frame mark number for which the callback is requested.
    audioRenderer->SetRendererPositionCallback(markPosition, framePositionCB);

    class RendererPeriodPositionCallbackImpl : public RendererPeriodPositionCallback {
    public:
        void OnPeriodReached(const int64_t &frameNumber) override
        {
            // Frame period reached; frameNumber is the frame period number.
        }
    };

    std::shared_ptr<RendererPeriodPositionCallback> periodPositionCB = std::make_shared<RendererPeriodPositionCallbackImpl>();
    // framePeriodNumber is the frame period number for which the callback is requested.
    audioRenderer->SetRendererPeriodPositionCallback(framePeriodNumber, periodPositionCB);
    ```
    To unregister the position callbacks, call the corresponding audioRenderer->**UnsetRendererPositionCallback** and/or audioRenderer->**UnsetRendererPeriodPositionCallback** APIs.

6. Call the audioRenderer->**Start**() function on the AudioRenderer instance to start the playback task.
7. Get the length of the buffer to be written, using the **GetBufferSize** API.
    ```
    audioRenderer->GetBufferSize(bufferLen);
    ```
8. Read the audio data to be played from the source (for example, an audio file) into a byte stream, and call the **Write** function repeatedly to write the render data.
    ```
    bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t retBytes = audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
        if (retBytes < 0) { // Write error
            break;
        }
        bytesWritten += retBytes;
    }
    ```
9. During an audio interrupt, the application can encounter write failures. An interrupt-unaware application can check the renderer state using the **GetStatus** API before writing further audio data, while an interrupt-aware application has more details available via **AudioRendererCallback**.
    ```
    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t retBytes = audioRenderer->Write(buffer.get() + bytesWritten, bytesToWrite - bytesWritten);
        if (retBytes < 0) { // Error occurred
            if (audioRenderer->GetStatus() == RENDERER_PAUSED) { // Query the state and take appropriate action
                isRenderPaused_ = true;
                int32_t seekPos = bytesWritten - bytesToWrite;
                fseek(wavFile, seekPos, SEEK_CUR); // Rewind past the unwritten portion
            }
            break;
        }
        bytesWritten += retBytes;
    }
    ```
10. Call audioRenderer->**Drain**() to drain the playback stream.

11. Call the audioRenderer->**Stop**() function to stop rendering.
12. After the playback task is complete, call the audioRenderer->**Release**() function on the AudioRenderer instance to release the stream resources.

13. Use audioRenderer->**SetVolume(float)** and audioRenderer->**GetVolume**() to set and get the track volume. The value ranges from 0.0 to 1.0, as shown in the sketch below.
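    A minimal sketch of the step 13 calls, where 0.0 is silent and 1.0 is full volume for the track:
    ```
    audioRenderer->SetVolume(0.5f);               // Set this track to half volume
    float curVolume = audioRenderer->GetVolume(); // Read the current track volume back
    ```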
The above covers the basic playback use case. Please refer to [**audio_renderer.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiorenderer/include/audio_renderer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h) for more useful APIs.

### Audio Recording<a name="audio-recording"></a>
You can use the APIs provided in this repository for your application to record voice using input devices, convert the voice into audio data, and manage recording tasks. The following steps describe how to use **AudioCapturer** to develop the audio recording function:

1. Use the **Create** API with the required capturer configuration to get an **AudioCapturer** instance.
    ```
    AudioCapturerOptions capturerOptions;
    capturerOptions.streamInfo.samplingRate = AudioSamplingRate::SAMPLE_RATE_48000;
    capturerOptions.streamInfo.encoding = AudioEncodingType::ENCODING_PCM;
    capturerOptions.streamInfo.format = AudioSampleFormat::SAMPLE_S16LE;
    capturerOptions.streamInfo.channels = AudioChannel::MONO;
    capturerOptions.capturerInfo.sourceType = SourceType::SOURCE_TYPE_MIC;
    capturerOptions.capturerInfo.capturerFlags = CAPTURER_FLAG;

    unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(capturerOptions);
    ```
2. (Optional) Use the static APIs **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), and **GetSupportedSamplingRates**() to query the values supported for these parameters, as in the renderer sketch above.

3. (Optional) Use audioCapturer->**GetCapturerInfo**(AudioCapturerInfo &) and audioCapturer->**GetStreamInfo**(AudioStreamInfo &) to retrieve the current capturer configuration values.

4. Capturer state change callbacks can be received by overriding the **OnStateChange** function in the **AudioCapturerCallback** class and registering the callback instance using the audioCapturer->**SetCapturerCallback** API.
    ```
    class AudioCapturerCallbackImpl : public AudioCapturerCallback {
    public:
        void OnStateChange(const CapturerState state) override
        {
            switch (state) {
                case CAPTURER_PREPARED:
                    // Capturer prepared
                    break;
                case CAPTURER_RUNNING:
                    // Capturer in running state
                    break;
                case CAPTURER_STOPPED:
                    // Capturer stopped
                    break;
                case CAPTURER_RELEASED:
                    // Capturer released
                    break;
            }
        }
    };

    std::shared_ptr<AudioCapturerCallback> audioCapturerCB = std::make_shared<AudioCapturerCallbackImpl>();
    audioCapturer->SetCapturerCallback(audioCapturerCB);
    ```

5. To get callbacks for a frame mark position and/or a frame period position, register the corresponding callbacks using the audioCapturer->**SetCapturerPositionCallback** and/or audioCapturer->**SetCapturerPeriodPositionCallback** functions respectively.
    ```
    class CapturerPositionCallbackImpl : public CapturerPositionCallback {
    public:
        void OnMarkReached(const int64_t &framePosition) override
        {
            // Frame mark reached; framePosition is the frame mark number.
        }
    };

    std::shared_ptr<CapturerPositionCallback> framePositionCB = std::make_shared<CapturerPositionCallbackImpl>();
    // markPosition is the frame mark number for which the callback is requested.
    audioCapturer->SetCapturerPositionCallback(markPosition, framePositionCB);

    class CapturerPeriodPositionCallbackImpl : public CapturerPeriodPositionCallback {
    public:
        void OnPeriodReached(const int64_t &frameNumber) override
        {
            // Frame period reached; frameNumber is the frame period number.
        }
    };

    std::shared_ptr<CapturerPeriodPositionCallback> periodPositionCB = std::make_shared<CapturerPeriodPositionCallbackImpl>();
    // framePeriodNumber is the frame period number for which the callback is requested.
    audioCapturer->SetCapturerPeriodPositionCallback(framePeriodNumber, periodPositionCB);
    ```
    To unregister the position callbacks, call the corresponding audioCapturer->**UnsetCapturerPositionCallback** and/or audioCapturer->**UnsetCapturerPeriodPositionCallback** APIs.

6. Call the audioCapturer->**Start**() function on the AudioCapturer instance to start the recording task.

7. Get the length of the buffer to be read, using the **GetBufferSize** API.
    ```
    audioCapturer->GetBufferSize(bufferLen);
    ```
8. Read the captured audio data and convert it to a byte stream. Call the **Read** function repeatedly to read data until you want to stop recording.
    ```
    // Set isBlockingRead = true for a blocking read, false for a non-blocking read.
    int32_t bytesRead = 0;
    while (numBuffersToCapture) {
        bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead);
        if (bytesRead < 0) {
            break;
        } else if (bytesRead > 0) {
            fwrite(buffer, 1, bytesRead, recFile); // This example writes the captured data into a file
            numBuffersToCapture--;
        }
    }
    ```
9. (Optional) Call audioCapturer->**Flush**() to flush the capture buffer of this stream.
10. Call the audioCapturer->**Stop**() function on the AudioCapturer instance to stop the recording.
11. After the recording task is complete, call the audioCapturer->**Release**() function on the AudioCapturer instance to release the stream resources.

The above covers the basic recording use case. Please refer to [**audio_capturer.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocapturer/include/audio_capturer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h) for more APIs.

### Audio Management<a name="audio-management"></a>
You can use the APIs provided in [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) to control volume and devices.
1. Use the **GetInstance** API to get an **AudioSystemManager** instance.
    ```
    AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance();
    ```
#### Volume Control
2. Use the **GetMaxVolume** and **GetMinVolume** APIs to query the maximum and minimum volume levels allowed for a stream, and set the volume within this range.
    ```
    AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC;
    int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType);
    int32_t minVol = audioSystemMgr->GetMinVolume(streamType);
    ```
3. Use the **SetVolume** and **GetVolume** APIs to set and get the volume level of the stream.
    ```
    int32_t result = audioSystemMgr->SetVolume(streamType, 10);
    int32_t vol = audioSystemMgr->GetVolume(streamType);
    ```
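    Because the valid range differs per stream type, a UI-level value such as a slider percentage should first be mapped into [minVol, maxVol] from step 2 before calling **SetVolume**. A minimal sketch; the percentage mapping is illustrative, not part of the API:
    ```
    int32_t percent = 75; // For example, from a volume slider
    int32_t mappedVol = minVol + ((maxVol - minVol) * percent) / 100; // Scale into the valid range
    int32_t ret = audioSystemMgr->SetVolume(streamType, mappedVol);
    ```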
4. Use the **SetMute** and **IsStreamMute** APIs to set and get the mute status of the stream.
    ```
    int32_t result = audioSystemMgr->SetMute(streamType, true);
    bool isMute = audioSystemMgr->IsStreamMute(streamType);
    ```
5. Use the **SetRingerMode** and **GetRingerMode** APIs to set and get the ringer mode. Refer to the **AudioRingerMode** enum in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h) for supported ringer modes.
    ```
    int32_t result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT);
    AudioRingerMode ringMode = audioSystemMgr->GetRingerMode();
    ```
6. Use the **SetMicrophoneMute** and **IsMicrophoneMute** APIs to mute/unmute the microphone and to check whether it is muted.
    ```
    int32_t result = audioSystemMgr->SetMicrophoneMute(true);
    bool isMicMute = audioSystemMgr->IsMicrophoneMute();
    ```
#### Device Control
7. Use the **GetDevices** API together with the **deviceType_** and **deviceRole_** members to get audio I/O device information. For the DeviceFlag, DeviceType, and DeviceRole enums, refer to [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).
    ```
    DeviceFlag deviceFlag = ALL_DEVICES_FLAG;
    vector<sptr<AudioDeviceDescriptor>> audioDeviceDescriptors = audioSystemMgr->GetDevices(deviceFlag);
    for (auto &audioDeviceDescriptor : audioDeviceDescriptors) {
        cout << audioDeviceDescriptor->deviceType_ << endl;
        cout << audioDeviceDescriptor->deviceRole_ << endl;
    }
    ```
8. Use the **SetDeviceActive** and **IsDeviceActive** APIs to activate/deactivate a device and to check whether the device is active.
    ```
    ActiveDeviceType deviceType = SPEAKER;
    int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true);
    bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType);
    ```
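    The descriptors returned by **GetDevices** in step 7 can also be filtered by role, for example to list only output devices. A small sketch, assuming the **DeviceRole** enum in **audio_info.h** defines an **OUTPUT_DEVICE** value (verify the exact name there):
    ```
    for (auto &desc : audioDeviceDescriptors) {
        if (desc->deviceRole_ == OUTPUT_DEVICE) { // Assumed enum value; see audio_info.h
            cout << "Output device of type " << desc->deviceType_ << endl;
        }
    }
    ```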
9. Use the **SetDeviceChangeCallback** API to register for device change events. Clients receive the callback when a device is connected or disconnected. Currently, the audio subsystem sends device change events for WIRED_HEADSET, USB_HEADSET, and BLUETOOTH_A2DP devices.
The **OnDeviceChange** function is called, and the client receives a **DeviceChangeAction** object containing the following parameters:\
*type*: a **DeviceChangeType** that specifies whether the device was connected or disconnected.\
*deviceDescriptors*: an array of **AudioDeviceDescriptor** objects that specifies the type of device and its role (input/output device).
    ```
    class DeviceChangeCallback : public AudioManagerDeviceChangeCallback {
    public:
        DeviceChangeCallback() = default;
        ~DeviceChangeCallback() = default;
        void OnDeviceChange(const DeviceChangeAction &deviceChangeAction) override
        {
            cout << deviceChangeAction.type << endl;
            for (auto &audioDeviceDescriptor : deviceChangeAction.deviceDescriptors) {
                switch (audioDeviceDescriptor->deviceType_) {
                    case DEVICE_TYPE_WIRED_HEADSET: {
                        if (deviceChangeAction.type == CONNECT) {
                            cout << "wired headset connected" << endl;
                        } else {
                            cout << "wired headset disconnected" << endl;
                        }
                        break;
                    }
                    case DEVICE_TYPE_USB_HEADSET: {
                        if (deviceChangeAction.type == CONNECT) {
                            cout << "usb headset connected" << endl;
                        } else {
                            cout << "usb headset disconnected" << endl;
                        }
                        break;
                    }
                    case DEVICE_TYPE_BLUETOOTH_A2DP: {
                        if (deviceChangeAction.type == CONNECT) {
                            cout << "Bluetooth device connected" << endl;
                        } else {
                            cout << "Bluetooth device disconnected" << endl;
                        }
                        break;
                    }
                    default: {
                        cout << "Unsupported device" << endl;
                        break;
                    }
                }
            }
        }
    };

    auto callback = std::make_shared<DeviceChangeCallback>();
    audioSystemMgr->SetDeviceChangeCallback(callback);
    ```

10. Other useful APIs such as **IsStreamActive**, **SetAudioParameter**, and **GetAudioParameter** are also provided; a short sketch appears at the end of this section. Please refer to [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) for more details.

11. Applications can register for system volume changes using **AudioManagerNapi::On**. When an application registers for the volume change event, it is notified with the following parameters whenever the volume changes:\
*volumeType*: the AudioVolumeType for which the volume is updated.\
*volume*: the current volume level.\
*updateUi*: whether the volume change needs to be shown in the UI. (If the volume is updated through the volume keys, updateUi is set to true; in other scenarios, it is set to false.)
    ```
    const audioManager = audio.getAudioManager();

    export default {
      onCreate() {
        audioManager.on('volumeChange', (volumeChange) => {
          console.info('volumeType = ' + volumeChange.volumeType);
          console.info('volume = ' + volumeChange.volume);
          console.info('updateUi = ' + volumeChange.updateUi);
        });
      }
    }
    ```

#### Audio Scene
12. Use the **SetAudioScene** and **GetAudioScene** APIs to change and check the audio strategy, respectively.
    ```
    int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL);
    AudioScene audioScene = audioSystemMgr->GetAudioScene();
    ```
Please refer to the **AudioScene** enum in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h) for supported audio scenes.
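As mentioned in step 10, the manager also exposes stream activity and parameter helpers. A minimal sketch; the string key/value signatures of **SetAudioParameter**/**GetAudioParameter** are assumptions here, so verify them in **audio_system_manager.h**:
```
bool isActive = audioSystemMgr->IsStreamActive(AudioVolumeType::STREAM_MUSIC);
audioSystemMgr->SetAudioParameter("sample_key", "sample_value"); // Key and value are illustrative
std::string value = audioSystemMgr->GetAudioParameter("sample_key");
```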
#### JavaScript Usage
JavaScript applications can use the APIs provided by the audio manager to control the volume and devices.\
Please refer to [**js-apis-audio.md**](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis/js-apis-audio.md#audiomanager) for the complete list of JavaScript APIs available for the audio manager.

### Ringtone Management<a name="ringtone-management"></a>
You can use the APIs provided in [**iringtone_sound_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audioringtone/include/iringtone_sound_manager.h) and [**iringtone_player.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audioringtone/include/iringtone_player.h) for ringtone playback functions.
1. Use the **CreateRingtoneManager** API to get an **IRingtoneSoundManager** instance.
    ```
    std::shared_ptr<IRingtoneSoundManager> ringtoneManagerClient = RingtoneFactory::CreateRingtoneManager();
    ```
2. Use the **SetSystemRingtoneUri** API to set the system ringtone URI.
    ```
    std::string uri = "/data/media/test.wav";
    RingtoneType ringtoneType = RINGTONE_TYPE_DEFAULT;
    ringtoneManagerClient->SetSystemRingtoneUri(context, uri, ringtoneType);
    ```
3. Use the **GetRingtonePlayer** API to get an **IRingtonePlayer** instance.
    ```
    std::unique_ptr<IRingtonePlayer> ringtonePlayer = ringtoneManagerClient->GetRingtonePlayer(context, ringtoneType);
    ```
4. Use the **Configure** API to configure the ringtone player.
    ```
    float volume = 1;
    bool loop = true;
    ringtonePlayer->Configure(volume, loop);
    ```
5. Use the **Start**, **Stop**, and **Release** APIs on the ringtone player instance to control the playback state.
    ```
    ringtonePlayer->Start();
    ringtonePlayer->Stop();
    ringtonePlayer->Release();
    ```
6. Use the **GetTitle** API to get the title of the current system ringtone.
7. Use **GetRingtoneState** to get the ringtone playback state (**RingtoneState**).
8. Use **GetAudioRendererInfo** to get the **AudioRendererInfo**, for checking the content type and stream usage.

## Supported Devices<a name="supported-devices"></a>
The following device types are currently supported by the audio subsystem:

1. **USB Type-C Headset**\
   A digital headset that includes its own DAC (digital-to-analog converter) and amplifier as part of the headset.
2. **Wired Headset**\
   An analog headset that does not contain a DAC. It can have a 3.5 mm jack or a Type-C jack without a DAC.
3. **Bluetooth Headset**\
   A Bluetooth A2DP (Advanced Audio Distribution Profile) headset used for streaming audio wirelessly.
4. **Internal Speaker and MIC**\
   The internal speaker and microphone are supported and are used as the default devices for playback and recording respectively.

## Repositories Involved<a name="repositories-involved"></a>

[multimedia\_audio\_framework](https://gitee.com/openharmony/multimedia_audio_framework)