# Using AudioRenderer for Audio Playback (ArkTS)

The AudioRenderer is used to play Pulse Code Modulation (PCM) audio data. Unlike the AVPlayer, the AudioRenderer lets you preprocess the audio data before it is written for rendering. Therefore, the AudioRenderer is more suitable if you have extensive audio development experience and want to implement more flexible playback features.

## Development Guidelines

The full rendering process involves creating an **AudioRenderer** instance, configuring audio rendering parameters, starting and stopping rendering, and releasing the instance. In this topic, you will learn how to use the AudioRenderer to render audio data. Before you start, read [AudioRenderer](../reference/apis/js-apis-audio.md#audiorenderer8) for the API reference.

The figure below shows the state changes of the AudioRenderer. After an **AudioRenderer** instance is created, different APIs can be called to switch the AudioRenderer to different states and trigger the required behavior. If an API is called when the AudioRenderer is not in the required state, the system may throw an exception or produce other undefined behavior. Therefore, you are advised to check the AudioRenderer state before triggering a state transition.

To prevent the UI thread from being blocked, most **AudioRenderer** calls are asynchronous. Each API provides both a callback and a promise form. The following examples use the callback form.

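If you prefer promises, the same calls can be awaited instead. The snippet below is a minimal sketch of the promise form of **createAudioRenderer()**; it assumes an **AudioRendererOptions** object like the one configured in step 1 of the procedure below.

```ts
import audio from '@ohos.multimedia.audio';

// Minimal sketch: create an AudioRenderer with the promise form.
// `options` is an AudioRendererOptions object as configured in step 1 below.
async function createRenderer(options: audio.AudioRendererOptions): Promise<audio.AudioRenderer> {
  let renderer = await audio.createAudioRenderer(options); // Resolves with the AudioRenderer instance.
  console.info('AudioRenderer created (promise form).');
  return renderer;
}
```
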
**Figure 1** AudioRenderer state transition

![AudioRenderer state transition](figures/audiorenderer-status-change.png)

During application development, you are advised to use [on('stateChange')](../reference/apis/js-apis-audio.md#onstatechange-8) to subscribe to state changes of the AudioRenderer, because some operations can be performed only when the AudioRenderer is in a given state. Performing an operation in the wrong state may throw an exception or produce other undefined behavior.

- **prepared**: The AudioRenderer enters this state by calling [createAudioRenderer()](../reference/apis/js-apis-audio.md#audiocreateaudiorenderer8).

- **running**: The AudioRenderer enters this state by calling [start()](../reference/apis/js-apis-audio.md#start8) when it is in the **prepared**, **paused**, or **stopped** state.

- **paused**: The AudioRenderer enters this state by calling [pause()](../reference/apis/js-apis-audio.md#pause8) when it is in the **running** state. When playback is paused, you can call [start()](../reference/apis/js-apis-audio.md#start8) to resume it.

- **stopped**: The AudioRenderer enters this state by calling [stop()](../reference/apis/js-apis-audio.md#stop8) when it is in the **paused** or **running** state.

- **released**: The AudioRenderer enters this state by calling [release()](../reference/apis/js-apis-audio.md#release8) when it is in the **prepared**, **paused**, or **stopped** state. In this state, the AudioRenderer releases all occupied hardware and software resources and will not transition to any other state.

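As an illustration of the state check recommended above, the following sketch guards a **start()** call on the current state. It is a minimal sketch that assumes `audioRenderer` is an existing **AudioRenderer** instance; the helper name is only for illustration.

```ts
import audio from '@ohos.multimedia.audio';
import { BusinessError } from '@ohos.base';

// Minimal sketch: request the transition to running only when the current state allows it.
function startIfAllowed(audioRenderer: audio.AudioRenderer): void {
  let startableStates = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
  if (startableStates.indexOf(audioRenderer.state.valueOf()) === -1) {
    console.info(`Renderer start skipped, current state: ${audioRenderer.state}`);
    return;
  }
  audioRenderer.start((err: BusinessError) => {
    if (err) {
      console.error(`Renderer start failed, code is ${err.code}, message is ${err.message}`);
    }
  });
}
```
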
### How to Develop

1. Set audio rendering parameters and create an **AudioRenderer** instance. For details about the parameters, see [AudioRendererOptions](../reference/apis/js-apis-audio.md#audiorendereroptions8).

    ```ts
    import audio from '@ohos.multimedia.audio';

    let audioStreamInfo: audio.AudioStreamInfo = {
      samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100,
      channels: audio.AudioChannel.CHANNEL_1,
      sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
      encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    };

    let audioRendererInfo: audio.AudioRendererInfo = {
      usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION,
      rendererFlags: 0
    };

    let audioRendererOptions: audio.AudioRendererOptions = {
      streamInfo: audioStreamInfo,
      rendererInfo: audioRendererInfo
    };

    audio.createAudioRenderer(audioRendererOptions, (err, data) => {
      if (err) {
        console.error(`Invoke createAudioRenderer failed, code is ${err.code}, message is ${err.message}`);
        return;
      } else {
        console.info('Invoke createAudioRenderer succeeded.');
        let audioRenderer = data;
      }
    });
    ```

2. Call **start()** to switch the AudioRenderer to the **running** state and start rendering.

    ```ts
    import { BusinessError } from '@ohos.base';

    audioRenderer.start((err: BusinessError) => {
      if (err) {
        console.error(`Renderer start failed, code is ${err.code}, message is ${err.message}`);
      } else {
        console.info('Renderer start success.');
      }
    });
    ```

3. Specify the path of the file to render. Open the file and call **write()** to continuously write audio data to the buffer for rendering and playback. To implement personalized playback, process the audio data before writing it (a minimal gain sketch is provided after this procedure).

    ```ts
    import fs from '@ohos.file.fs';

    let context = getContext(this);
    async function read() {
      const bufferSize: number = await audioRenderer.getBufferSize();
      let path = context.filesDir;

      const filePath = path + '/voice_call_data.wav';
      let file: fs.File = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
      let buf = new ArrayBuffer(bufferSize);
      let readsize: number = await fs.read(file.fd, buf);
      let writeSize: number = await audioRenderer.write(buf);
    }
    ```

4. Call **stop()** to stop rendering.

    ```ts
    import { BusinessError } from '@ohos.base';

    audioRenderer.stop((err: BusinessError) => {
      if (err) {
        console.error(`Renderer stop failed, code is ${err.code}, message is ${err.message}`);
      } else {
        console.info('Renderer stopped.');
      }
    });
    ```

5. Call **release()** to release the instance.

    ```ts
    import { BusinessError } from '@ohos.base';

    audioRenderer.release((err: BusinessError) => {
      if (err) {
        console.error(`Renderer release failed, code is ${err.code}, message is ${err.message}`);
      } else {
        console.info('Renderer released.');
      }
    });
    ```

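As mentioned in step 3, you can preprocess the PCM data before calling **write()**. The sketch below is one illustrative option, assuming the 16-bit little-endian sample format (SAMPLE_FORMAT_S16LE) used in this topic: it applies a simple volume gain to a buffer in place. The function name and gain value are placeholders, not part of the AudioRenderer API.

```ts
// Illustrative preprocessing sketch (not part of the AudioRenderer API):
// scale 16-bit little-endian PCM samples by a gain factor before writing them.
function applyGain(buf: ArrayBuffer, gain: number): void {
  let samples = new Int16Array(buf); // Interpret the buffer as signed 16-bit samples.
  for (let i = 0; i < samples.length; i++) {
    // Scale and clamp to the valid 16-bit range to avoid overflow distortion.
    let scaled = Math.round(samples[i] * gain);
    samples[i] = Math.max(-32768, Math.min(32767, scaled));
  }
}

// Example: halve the volume of a buffer before audioRenderer.write(buf).
// applyGain(buf, 0.5);
```
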
### Sample Code

Refer to the sample code below to render an audio file using AudioRenderer.

```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'AudioRendererDemo';

let context = getContext(this);
let renderModel: audio.AudioRenderer | undefined = undefined;
let audioStreamInfo: audio.AudioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate.
  channels: audio.AudioChannel.CHANNEL_2, // Channels.
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format.
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format.
}
let audioRendererInfo: audio.AudioRendererInfo = {
  usage: audio.StreamUsage.STREAM_USAGE_MUSIC, // Audio stream usage type.
  rendererFlags: 0 // AudioRenderer flag.
}
let audioRendererOptions: audio.AudioRendererOptions = {
  streamInfo: audioStreamInfo,
  rendererInfo: audioRendererInfo
}

// Create an AudioRenderer instance, and set the events to listen for.
async function init() {
  audio.createAudioRenderer(audioRendererOptions, (err, renderer) => { // Create an AudioRenderer instance.
    if (!err) {
      console.info(`${TAG}: creating AudioRenderer success`);
      renderModel = renderer;
      if (renderModel !== undefined) {
        (renderModel as audio.AudioRenderer).on('stateChange', (state: audio.AudioState) => { // Subscribe to state changes. The callback is invoked whenever the AudioRenderer switches state.
          if (state === audio.AudioState.STATE_RUNNING) {
            console.info('audio renderer state is: STATE_RUNNING');
          }
        });
        (renderModel as audio.AudioRenderer).on('markReach', 1000, (position: number) => { // Subscribe to the markReach event. The callback is triggered when the number of rendered frames reaches 1000.
          if (position == 1000) {
            console.info('ON Triggered successfully');
          }
        });
      }
    } else {
      console.error(`${TAG}: creating AudioRenderer failed, error: ${err.message}`);
    }
  });
}

// Start audio rendering.
async function start() {
  if (renderModel !== undefined) {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf((renderModel as audio.AudioRenderer).state.valueOf()) === -1) { // Rendering can be started only when the AudioRenderer is in the prepared, paused, or stopped state.
      console.error(TAG + ': start failed');
      return;
    }
    await (renderModel as audio.AudioRenderer).start(); // Start rendering.

    const bufferSize = await (renderModel as audio.AudioRenderer).getBufferSize();

    let path = context.filesDir;
    const filePath = path + '/test.wav'; // Use the sandbox path to obtain the file. The actual file path is /data/storage/el2/base/haps/entry/files/test.wav.

    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
    let stat = await fs.stat(filePath);
    let buf = new ArrayBuffer(bufferSize);
    let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
    class Options {
      offset: number = 0;
      length: number = 0
    }
    for (let i = 0; i < len; i++) {
      let options: Options = {
        offset: i * bufferSize,
        length: bufferSize
      };
      let readsize = await fs.read(file.fd, buf, options);

      // buf indicates the audio data to be written to the buffer. Before calling AudioRenderer.write(), you can preprocess the audio data for personalized playback. The AudioRenderer reads the audio data written to the buffer for rendering.

      let writeSize: number = await (renderModel as audio.AudioRenderer).write(buf);
      if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_RELEASED) { // Stop writing if the AudioRenderer has been released.
        fs.close(file);
        return;
      }
      if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_RUNNING) {
        if (i === len - 1) { // Stop rendering when the whole file has been read.
          fs.close(file);
          await (renderModel as audio.AudioRenderer).stop();
        }
      }
    }
  }
}

// Pause the rendering.
async function pause() {
  if (renderModel !== undefined) {
    // Rendering can be paused only when the AudioRenderer is in the running state.
    if ((renderModel as audio.AudioRenderer).state.valueOf() !== audio.AudioState.STATE_RUNNING) {
      console.info('Renderer is not running');
      return;
    }
    await (renderModel as audio.AudioRenderer).pause(); // Pause rendering.
    if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is paused.');
    } else {
      console.error('Pausing renderer failed.');
    }
  }
}

// Stop rendering.
async function stop() {
  if (renderModel !== undefined) {
    // Rendering can be stopped only when the AudioRenderer is in the running or paused state.
    if ((renderModel as audio.AudioRenderer).state.valueOf() !== audio.AudioState.STATE_RUNNING && (renderModel as audio.AudioRenderer).state.valueOf() !== audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is not running or paused.');
      return;
    }
    await (renderModel as audio.AudioRenderer).stop(); // Stop rendering.
    if ((renderModel as audio.AudioRenderer).state.valueOf() === audio.AudioState.STATE_STOPPED) {
      console.info('Renderer stopped.');
    } else {
      console.error('Stopping renderer failed.');
    }
  }
}

// Release the instance.
async function release() {
  if (renderModel !== undefined) {
    // The AudioRenderer can be released only when it is not in the released state.
    if (renderModel.state.valueOf() === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer already released');
      return;
    }
    await renderModel.release(); // Release the instance.
    if (renderModel.state.valueOf() === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer released');
    } else {
      console.error('Renderer release failed.');
    }
  }
}
```

When audio streams with the same or higher priority need to use the output device, the current audio playback will be interrupted. The application can respond to and handle the interruption event. For details about how to process concurrent audio playback, see [Audio Playback Concurrency Policies](audio-playback-concurrency.md).

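As a rough illustration of handling such an event (see the linked topic for the full policy), the sketch below subscribes to the **audioInterrupt** event on an existing **AudioRenderer** instance; the helper function name is only for illustration.

```ts
import audio from '@ohos.multimedia.audio';
import { BusinessError } from '@ohos.base';

// Rough sketch: react to interruption events on an existing AudioRenderer instance.
function subscribeToInterrupt(renderer: audio.AudioRenderer): void {
  renderer.on('audioInterrupt', (event: audio.InterruptEvent) => {
    if (event.forceType === audio.InterruptForceType.INTERRUPT_FORCE) {
      // The system has already enforced an action (for example, paused this stream); update the application state accordingly.
      console.info(`Playback interrupted by the system, hint: ${event.hintType}`);
    } else if (event.hintType === audio.InterruptHint.INTERRUPT_HINT_RESUME) {
      // The application is allowed to resume rendering on its own.
      renderer.start((err: BusinessError) => {
        if (err) {
          console.error(`Resume after interruption failed, code is ${err.code}, message is ${err.message}`);
        }
      });
    }
  });
}
```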