# Developing Audio Call

During an audio call, audio output (playing the peer voice) and audio input (recording the local voice) are carried out simultaneously. You can use the AudioRenderer to implement audio output and the AudioCapturer to implement audio input.

Before starting or stopping the audio call service, the application needs to check the [audio scene](audio-call-overview.md#audio-scene) and [ringer mode](audio-call-overview.md#ringer-mode) to adopt proper audio management and prompt policies.
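A minimal sketch of such a check is shown below. It assumes the API version 9 interfaces of the **@ohos.multimedia.audio** module (**getAudioScene()**, **AudioVolumeManager**, and **AudioVolumeGroupManager.getRingerMode()**); the function name **checkCallEnvironment** is illustrative.

```ts
import audio from '@ohos.multimedia.audio';

// A minimal sketch, assuming the API version 9 audio manager interfaces.
async function checkCallEnvironment(): Promise<void> {
  let audioManager = audio.getAudioManager();
  // Query the current audio scene before starting or stopping the call service.
  let scene = await audioManager.getAudioScene();
  console.info(`Current audio scene: ${scene}`);
  // Query the ringer mode to decide how to prompt the user for an incoming call.
  let volumeManager = audioManager.getVolumeManager();
  let groupManager = await volumeManager.getVolumeGroupManager(audio.DEFAULT_VOLUME_GROUP_ID);
  let ringerMode = await groupManager.getRingerMode();
  if (ringerMode === audio.AudioRingMode.RINGER_MODE_NORMAL) {
    console.info('Ringer mode is normal: an incoming-call ringtone may be played.');
  } else {
    console.info('Ringer mode is silent or vibrate: prompt without playing the ringtone.');
  }
}
```
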
The sample code below demonstrates the basic process of using the AudioRenderer and AudioCapturer to implement the audio call service, without the process of call data transmission. In actual development, the peer call data received over the network must be decoded before playback, and the local call data must be encoded and packed before being sent to the peer over the network; the sample code reads from and writes to an audio file instead, as outlined in the sketch below.
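To make that substitution concrete, the hypothetical interface below marks where a real transport would plug in; **CallDataChannel** and its methods are illustrative names, not part of any system API.

```ts
// Hypothetical transport interface; the names are illustrative only.
interface CallDataChannel {
  // Receives one encoded frame from the peer. After decoding it to PCM, the data
  // would be written to the AudioRenderer instead of being read from a file.
  receiveEncodedFrame(): Promise<ArrayBuffer>;
  // Sends one encoded, packed frame of the local voice, produced from the PCM
  // returned by AudioCapturer.read(), instead of writing it to a file.
  sendEncodedFrame(frame: ArrayBuffer): Promise<void>;
}
```
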
## Using AudioRenderer to Play the Peer Voice

This process is similar to the process of [using AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md). The key differences lie in the **audioRendererInfo** parameter and the audio data source. In the **audioRendererInfo** parameter used for audio calling, **content** must be set to **CONTENT_TYPE_SPEECH**, and **usage** must be set to **STREAM_USAGE_VOICE_COMMUNICATION**.

```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'VoiceCallDemoForAudioRenderer';

// The process is similar to the process of using AudioRenderer to develop audio playback. The key differences lie in the audioRendererInfo parameter and the audio data source.
export default class VoiceCallDemoForAudioRenderer {
  private renderModel = undefined;
  private audioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate.
    channels: audio.AudioChannel.CHANNEL_2, // Channels.
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format.
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format.
  }
  private audioRendererInfo = {
    // Parameters corresponding to the call scenario must be used.
    content: audio.ContentType.CONTENT_TYPE_SPEECH, // Audio content type: speech.
    usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, // Audio stream usage type: voice communication.
    rendererFlags: 0 // AudioRenderer flag. The default value is 0.
  }
  private audioRendererOptions = {
    streamInfo: this.audioStreamInfo,
    rendererInfo: this.audioRendererInfo
  }
  // Create an AudioRenderer instance, and set the events to listen for.
  init() {
    audio.createAudioRenderer(this.audioRendererOptions, (err, renderer) => { // Create an AudioRenderer instance.
      if (!err) {
        console.info(`${TAG}: creating AudioRenderer success`);
        this.renderModel = renderer;
        this.renderModel.on('stateChange', (state) => { // Set the events to listen for. A callback is invoked when the AudioRenderer is switched to the specified state.
          if (state === audio.AudioState.STATE_PREPARED) {
            console.info('audio renderer state is: STATE_PREPARED');
          }
          if (state === audio.AudioState.STATE_RUNNING) {
            console.info('audio renderer state is: STATE_RUNNING');
          }
        });
        this.renderModel.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of rendered frames reaches 1000.
          if (position === 1000) {
            console.info('ON Triggered successfully');
          }
        });
      } else {
        console.error(`${TAG}: creating AudioRenderer failed, error: ${err.message}`);
      }
    });
  }
  // Start audio rendering.
  async start() {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.renderModel.state) === -1) { // Rendering can be started only when the AudioRenderer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
      console.error(`${TAG}: start failed`);
      return;
    }
    await this.renderModel.start(); // Start rendering.
    const bufferSize = await this.renderModel.getBufferSize();
    // The process of reading audio file data is used as an example. In actual audio call development, audio data transmitted from the peer needs to be read.
    let context = getContext(this);
    let path = context.filesDir;
    const filePath = path + '/voice_call_data.wav'; // Sandbox path. The actual path is /data/storage/el2/base/haps/entry/files/voice_call_data.wav.
    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
    let stat = await fs.stat(filePath);
    let buf = new ArrayBuffer(bufferSize);
    let len = Math.ceil(stat.size / bufferSize); // Number of buffers needed to read the entire file.
    for (let i = 0; i < len; i++) {
      let options = {
        offset: i * bufferSize,
        length: bufferSize
      };
      let readSize = await fs.read(file.fd, buf, options);
      // buf indicates the audio data to be written to the buffer. Before calling AudioRenderer.write(), you can preprocess the audio data for personalized playback. The AudioRenderer reads the audio data written to the buffer for rendering.
      let writeSize = await new Promise((resolve, reject) => {
        this.renderModel.write(buf, (err, writeSize) => {
          if (err) {
            reject(err);
          } else {
            resolve(writeSize);
          }
        });
      });
      if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { // The rendering stops if the AudioRenderer is in the STATE_RELEASED state.
        fs.close(file);
        return; // The instance has been released; exit the loop.
      }
      if (this.renderModel.state === audio.AudioState.STATE_RUNNING) {
        if (i === len - 1) { // The rendering stops when the file finishes reading.
          fs.close(file);
          await this.renderModel.stop();
        }
      }
    }
  }
  // Pause the rendering.
  async pause() {
    // Rendering can be paused only when the AudioRenderer is in the STATE_RUNNING state.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING) {
      console.info('Renderer is not running');
      return;
    }
    await this.renderModel.pause(); // Pause rendering.
    if (this.renderModel.state === audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is paused.');
    } else {
      console.error('Pausing renderer failed.');
    }
  }
  // Stop rendering.
  async stop() {
    // Rendering can be stopped only when the AudioRenderer is in the STATE_RUNNING or STATE_PAUSED state.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING && this.renderModel.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is not running or paused.');
      return;
    }
    await this.renderModel.stop(); // Stop rendering.
    if (this.renderModel.state === audio.AudioState.STATE_STOPPED) {
      console.info('Renderer stopped.');
    } else {
      console.error('Stopping renderer failed.');
    }
  }
  // Release the instance.
  async release() {
    // The AudioRenderer can be released only when it is not in the STATE_RELEASED state.
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer already released');
      return;
    }
    await this.renderModel.release(); // Release the instance.
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer released');
    } else {
      console.error('Renderer release failed.');
    }
  }
}
```

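A driver for the class above might look like the sketch below. Note that **init()** creates the AudioRenderer asynchronously, so real code should call **start()** only after the creation callback has run; the function name and call timing are illustrative assumptions.

```ts
async function runPeerVoicePlayback() {
  const demo = new VoiceCallDemoForAudioRenderer();
  demo.init(); // Creates the AudioRenderer asynchronously and registers listeners.
  // ... in real code, wait until the creation callback has assigned renderModel ...
  await demo.start();   // Render the peer voice (here, the sample file).
  await demo.stop();    // Stop rendering when the call ends.
  await demo.release(); // Release the instance.
}
```
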
## Using AudioCapturer to Record the Local Voice

This process is similar to the process of [using AudioCapturer to develop audio recording](using-audiocapturer-for-recording.md). The key differences lie in the **audioCapturerInfo** parameter and the audio data stream direction. In the **audioCapturerInfo** parameter used for audio calling, **source** must be set to **SOURCE_TYPE_VOICE_COMMUNICATION**.

```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'VoiceCallDemoForAudioCapturer';

// The process is similar to the process of using AudioCapturer to develop audio recording. The key differences lie in the audioCapturerInfo parameter and the audio data stream direction.
export default class VoiceCallDemoForAudioCapturer {
  private audioCapturer = undefined;
  private audioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, // Sampling rate.
    channels: audio.AudioChannel.CHANNEL_1, // Channels.
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sampling format.
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding format.
  }
  private audioCapturerInfo = {
    // Parameters corresponding to the call scenario must be used.
    source: audio.SourceType.SOURCE_TYPE_VOICE_COMMUNICATION, // Audio source type: voice communication.
    capturerFlags: 0 // AudioCapturer flag. The default value is 0.
  }
  private audioCapturerOptions = {
    streamInfo: this.audioStreamInfo,
    capturerInfo: this.audioCapturerInfo
  }
  // Create an AudioCapturer instance, and set the events to listen for.
  init() {
    audio.createAudioCapturer(this.audioCapturerOptions, (err, capturer) => { // Create an AudioCapturer instance.
      if (err) {
        console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
        return;
      }
      console.info(`${TAG}: create AudioCapturer success`);
      this.audioCapturer = capturer;
      this.audioCapturer.on('markReach', 1000, (position) => { // Subscribe to the markReach event. A callback is triggered when the number of captured frames reaches 1000.
        if (position === 1000) {
          console.info('ON Triggered successfully');
        }
      });
      this.audioCapturer.on('periodReach', 2000, (position) => { // Subscribe to the periodReach event. A callback is triggered each time the number of captured frames reaches 2000.
        if (position === 2000) {
          console.info('ON Triggered successfully');
        }
      });
    });
  }
  // Start audio recording.
  async start() {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.audioCapturer.state) === -1) { // Recording can be started only when the AudioCapturer is in the STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED state.
      console.error(`${TAG}: start failed`);
      return;
    }
    await this.audioCapturer.start(); // Start recording.
    // The following describes how to write audio data to a file. In actual audio call development, the local audio data needs to be encoded and packed, and then sent to the peer over the network.
    let context = getContext(this);
    const path = context.filesDir + '/voice_call_data.wav'; // Path for storing the recorded audio file.
    let file = fs.openSync(path, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE); // Create the file if it does not exist.
    let fd = file.fd;
    let numBuffersToCapture = 150; // Write data 150 times.
    let count = 0;
    let bufferSize = await this.audioCapturer.getBufferSize(); // The buffer size does not change while capturing.
    while (numBuffersToCapture > 0) {
      let buffer = await this.audioCapturer.read(bufferSize, true);
      let options = {
        offset: count * bufferSize,
        length: bufferSize
      };
      if (buffer === undefined) {
        console.error(`${TAG}: read buffer failed`);
      } else {
        let number = fs.writeSync(fd, buffer, options);
        console.info(`${TAG}: wrote ${number} bytes`);
      }
      numBuffersToCapture--;
      count++;
    }
    fs.close(file); // Close the file when capturing is done.
  }
  // Stop recording.
  async stop() {
    // The AudioCapturer can be stopped only when it is in the STATE_RUNNING or STATE_PAUSED state.
    if (this.audioCapturer.state !== audio.AudioState.STATE_RUNNING && this.audioCapturer.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Capturer is not running or paused');
      return;
    }
    await this.audioCapturer.stop(); // Stop recording.
    if (this.audioCapturer.state === audio.AudioState.STATE_STOPPED) {
      console.info('Capturer stopped');
    } else {
      console.error('Capturer stop failed');
    }
  }
  // Release the instance.
  async release() {
    // The AudioCapturer can be released only when it is not in the STATE_RELEASED or STATE_NEW state.
    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED || this.audioCapturer.state === audio.AudioState.STATE_NEW) {
      console.info('Capturer already released');
      return;
    }
    await this.audioCapturer.release(); // Release the instance.
    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED) {
      console.info('Capturer released');
    } else {
      console.error('Capturer release failed');
    }
  }
}
```

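The capturer class can be driven the same way; as above, **init()** completes asynchronously, and the function name and call timing are illustrative assumptions.

```ts
async function runLocalVoiceCapture() {
  const demo = new VoiceCallDemoForAudioCapturer();
  demo.init(); // Creates the AudioCapturer asynchronously and registers listeners.
  // ... in real code, wait until the creation callback has assigned audioCapturer ...
  await demo.start();   // Capture the local voice (here, written to a file).
  await demo.stop();    // Stop capturing when the call ends.
  await demo.release(); // Release the instance.
}
```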