# Developing Audio Call Functionality

In an audio call scenario, audio output (playing the peer's voice) and audio input (recording the local voice) happen at the same time. An application can implement audio output through AudioRenderer and audio input through AudioCapturer; using AudioRenderer and AudioCapturer together implements the audio call functionality.

When an audio call starts and ends, the application can check the current [audio scene mode](audio-call-overview.md#音频场景模式) and [ringer mode](audio-call-overview.md#铃声模式) on its own, so that it can adopt suitable audio management and prompt strategies, as the sketch below illustrates.

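A minimal sketch of such a check (assumptions: the AudioManager, AudioVolumeManager, and AudioVolumeGroupManager interfaces of @ohos.multimedia.audio behave as in the API version this guide targets, and the helper name checkCallAudioStatus is illustrative):

```ts
import audio from '@ohos.multimedia.audio';

// Hypothetical helper (not part of the original sample): query the current
// audio scene and ringer mode so the app can pick a suitable prompt strategy.
async function checkCallAudioStatus() {
  let audioManager = audio.getAudioManager();
  let scene = await audioManager.getAudioScene(); // current audio scene mode
  if (scene !== audio.AudioScene.AUDIO_SCENE_DEFAULT) {
    console.info(`Audio scene is ${scene}; another audio activity may be in progress`);
  }
  let volumeManager = audioManager.getVolumeManager();
  let groupManager = await volumeManager.getVolumeGroupManager(audio.DEFAULT_VOLUME_GROUP_ID);
  let ringerMode = await groupManager.getRingerMode(); // current ringer mode
  if (ringerMode !== audio.AudioRingMode.RINGER_MODE_NORMAL) {
    console.info('Device is in silent or vibrate mode; consider an in-app visual prompt');
  }
}
```
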
The following code demonstrates the basic process of using AudioRenderer and AudioCapturer together to implement an audio call. It does not include the transmission of the call data: in real development, the peer's call data received over the network must be decoded and played, which is replaced here by reading data from an audio file; likewise, the locally recorded call data must be encoded, packetized, and sent to the peer over the network, which is replaced here by writing the data to an audio file.

## Using AudioRenderer to Play the Peer's Voice

This process is similar to [using AudioRenderer to develop audio playback](using-audiorenderer-for-playback.md). The key differences lie in the audioRendererInfo parameter and the source of the audio data. In audioRendererInfo, the content type must be set to speech (CONTENT_TYPE_SPEECH), and the stream usage must be set to voice communication (STREAM_USAGE_VOICE_COMMUNICATION).

```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'VoiceCallDemoForAudioRenderer';

// Similar to the regular audio playback process; the key differences are the
// audioRendererInfo parameter and the source of the audio data.
export default class VoiceCallDemoForAudioRenderer {
  private renderModel: audio.AudioRenderer | undefined = undefined;
  private audioStreamInfo: audio.AudioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // sampling rate
    channels: audio.AudioChannel.CHANNEL_2, // channels
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // sample format
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // encoding type
  }
  private audioRendererInfo: audio.AudioRendererInfo = {
    // Use the parameters appropriate for the call scenario.
    content: audio.ContentType.CONTENT_TYPE_SPEECH, // content type: speech
    usage: audio.StreamUsage.STREAM_USAGE_VOICE_COMMUNICATION, // stream usage: voice communication
    rendererFlags: 0 // renderer flags: the default 0 is fine
  }
  private audioRendererOptions: audio.AudioRendererOptions = {
    streamInfo: this.audioStreamInfo,
    rendererInfo: this.audioRendererInfo
  }
  // Initialize: create the instance and set up event listeners.
  init() {
    audio.createAudioRenderer(this.audioRendererOptions, (err, renderer) => { // create an AudioRenderer instance
      if (!err) {
        console.info(`${TAG}: creating AudioRenderer success`);
        this.renderModel = renderer;
        this.renderModel.on('stateChange', (state) => { // listen for transitions to specific states
          if (state === audio.AudioState.STATE_PREPARED) {
            console.info('audio renderer state is: STATE_PREPARED');
          }
          if (state === audio.AudioState.STATE_RUNNING) {
            console.info('audio renderer state is: STATE_RUNNING');
          }
        });
        this.renderModel.on('markReach', 1000, (position) => { // subscribe to markReach: fires once 1000 frames have been rendered
          if (position === 1000) {
            console.info('ON Triggered successfully');
          }
        });
      } else {
        console.error(`${TAG}: creating AudioRenderer failed, error: ${err.message}`);
      }
    });
  }
  // Start one rendering pass.
  async start() {
    if (this.renderModel === undefined) {
      console.error(`${TAG}: renderer is not initialized`);
      return;
    }
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.renderModel.state) === -1) { // rendering can start only from STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED
      console.error(`${TAG}: start failed`);
      return;
    }
    await this.renderModel.start(); // start rendering
    const bufferSize = await this.renderModel.getBufferSize();
    // Reading from an audio file here is only a stand-in; in real audio call
    // development, the data to render is the peer's audio received over the network.
    let context = getContext(this);
    let path = context.filesDir;

    const filePath = path + '/voice_call_data.wav'; // sandbox path; the physical path is /data/storage/el2/base/haps/entry/files/voice_call_data.wav
    let file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
    let stat = await fs.stat(filePath);
    let buf = new ArrayBuffer(bufferSize);
    let len = stat.size % bufferSize === 0 ? Math.floor(stat.size / bufferSize) : Math.floor(stat.size / bufferSize + 1);
    for (let i = 0; i < len; i++) {
      let options = {
        offset: i * bufferSize,
        length: bufferSize
      };
      let readsize = await fs.read(file.fd, buf, options);
      // buf holds the audio data to be written into the buffer. The data can be
      // preprocessed before calling AudioRenderer.write() to customize playback;
      // the renderer then reads the buffered data and renders it.
      let writeSize = await new Promise((resolve, reject) => {
        this.renderModel.write(buf, (err, writeSize) => {
          if (err) {
            reject(err);
          } else {
            resolve(writeSize);
          }
        });
      });
      if (this.renderModel.state === audio.AudioState.STATE_RELEASED) { // if the renderer has been released, stop rendering and bail out
        fs.close(file);
        await this.renderModel.stop();
        return;
      }
      if (this.renderModel.state === audio.AudioState.STATE_RUNNING) {
        if (i === len - 1) { // if the audio file has been fully read, stop rendering
          fs.close(file);
          await this.renderModel.stop();
        }
      }
    }
  }
  // Pause rendering.
  async pause() {
    if (this.renderModel === undefined) {
      console.error(`${TAG}: renderer is not initialized`);
      return;
    }
    // Rendering can be paused only from STATE_RUNNING.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING) {
      console.info('Renderer is not running');
      return;
    }
    await this.renderModel.pause(); // pause rendering
    if (this.renderModel.state === audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is paused.');
    } else {
      console.error('Pausing renderer failed.');
    }
  }
  // Stop rendering.
  async stop() {
    if (this.renderModel === undefined) {
      console.error(`${TAG}: renderer is not initialized`);
      return;
    }
    // Rendering can be stopped only from STATE_RUNNING or STATE_PAUSED.
    if (this.renderModel.state !== audio.AudioState.STATE_RUNNING && this.renderModel.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Renderer is not running or paused.');
      return;
    }
    await this.renderModel.stop(); // stop rendering
    if (this.renderModel.state === audio.AudioState.STATE_STOPPED) {
      console.info('Renderer stopped.');
    } else {
      console.error('Stopping renderer failed.');
    }
  }
  // Release the instance and its resources.
  async release() {
    if (this.renderModel === undefined) {
      console.error(`${TAG}: renderer is not initialized`);
      return;
    }
    // release() is valid only when the state is not already STATE_RELEASED.
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer already released');
      return;
    }
    await this.renderModel.release(); // release resources
    if (this.renderModel.state === audio.AudioState.STATE_RELEASED) {
      console.info('Renderer released');
    } else {
      console.error('Renderer release failed.');
    }
  }
}
```
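
The lifecycle methods above can then be driven by the application's call-state events. A minimal usage sketch (assumptions: the class is exported from a file named VoiceCallDemoForAudioRenderer, and the runPlaybackSide wrapper is illustrative; since init() creates the renderer asynchronously, trigger start() only after creation has completed, for example from a separate UI event):

```ts
import VoiceCallDemoForAudioRenderer from './VoiceCallDemoForAudioRenderer';

// Hypothetical driver code for the playback side of a call.
async function runPlaybackSide() {
  let playback = new VoiceCallDemoForAudioRenderer();
  playback.init();          // create the AudioRenderer and register listeners
  // ... once creation has completed and the call is connected:
  await playback.start();   // render the peer's audio data
  // ... when the call ends:
  await playback.stop();
  await playback.release(); // free the underlying resources
}
```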
## Using AudioCapturer to Record the Local Voice

This process is similar to [using AudioCapturer to develop audio recording](using-audiocapturer-for-recording.md). The key differences lie in the audioCapturerInfo parameter and the destination of the audio data. In audioCapturerInfo, the source type must be set to voice communication (SOURCE_TYPE_VOICE_COMMUNICATION).

```ts
import audio from '@ohos.multimedia.audio';
import fs from '@ohos.file.fs';

const TAG = 'VoiceCallDemoForAudioCapturer';

// Similar to the regular audio recording process; the key differences are the
// audioCapturerInfo parameter and the destination of the audio data.
export default class VoiceCallDemoForAudioCapturer {
  private audioCapturer: audio.AudioCapturer | undefined = undefined;
  private audioStreamInfo: audio.AudioStreamInfo = {
    samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, // sampling rate
    channels: audio.AudioChannel.CHANNEL_1, // channels
    sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // sample format
    encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // encoding type
  }
  private audioCapturerInfo: audio.AudioCapturerInfo = {
    // Use the parameters appropriate for the call scenario.
    source: audio.SourceType.SOURCE_TYPE_VOICE_COMMUNICATION, // source type: voice communication
    capturerFlags: 0 // capturer flags: the default 0 is fine
  }
  private audioCapturerOptions: audio.AudioCapturerOptions = {
    streamInfo: this.audioStreamInfo,
    capturerInfo: this.audioCapturerInfo
  }
  // Initialize: create the instance and set up event listeners.
  init() {
    audio.createAudioCapturer(this.audioCapturerOptions, (err, capturer) => { // create an AudioCapturer instance
      if (err) {
        console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
        return;
      }
      console.info(`${TAG}: create AudioCapturer success`);
      this.audioCapturer = capturer;
      this.audioCapturer.on('markReach', 1000, (position) => { // subscribe to markReach: fires once 1000 frames have been captured
        if (position === 1000) {
          console.info('ON Triggered successfully');
        }
      });
      this.audioCapturer.on('periodReach', 2000, (position) => { // subscribe to periodReach: fires every 2000 captured frames
        if (position === 2000) {
          console.info('ON Triggered successfully');
        }
      });
    });
  }
  // Start one capture pass.
  async start() {
    if (this.audioCapturer === undefined) {
      console.error(`${TAG}: capturer is not initialized`);
      return;
    }
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(this.audioCapturer.state) === -1) { // capturing can start only from STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED
      console.error(`${TAG}: start failed`);
      return;
    }
    await this.audioCapturer.start(); // start capturing
    // Writing the captured data to a file here is only a stand-in; in real audio call
    // development, the locally captured audio must be encoded, packetized, and sent
    // to the peer over the network.
    let context = getContext(this);
    const path = context.filesDir + '/voice_call_data.wav'; // path for storing the captured audio file
    let file = fs.openSync(path, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE); // create the file if it does not exist
    let fd = file.fd;
    let numBuffersToCapture = 150; // write 150 buffers in a loop
    let count = 0;
    while (numBuffersToCapture) {
      let bufferSize = await this.audioCapturer.getBufferSize();
      let buffer = await this.audioCapturer.read(bufferSize, true);
      let options = {
        offset: count * bufferSize,
        length: bufferSize
      };
      if (buffer === undefined) {
        console.error(`${TAG}: read buffer failed`);
      } else {
        let number = fs.writeSync(fd, buffer, options);
        console.info(`${TAG}: write data: ${number}`);
      }
      numBuffersToCapture--;
      count++;
    }
    fs.close(file); // close the file once capturing is done
  }
  // Stop capturing.
  async stop() {
    if (this.audioCapturer === undefined) {
      console.error(`${TAG}: capturer is not initialized`);
      return;
    }
    // Capturing can be stopped only from STATE_RUNNING or STATE_PAUSED.
    if (this.audioCapturer.state !== audio.AudioState.STATE_RUNNING && this.audioCapturer.state !== audio.AudioState.STATE_PAUSED) {
      console.info('Capturer is not running or paused');
      return;
    }
    await this.audioCapturer.stop(); // stop capturing
    if (this.audioCapturer.state === audio.AudioState.STATE_STOPPED) {
      console.info('Capturer stopped');
    } else {
      console.error('Capturer stop failed');
    }
  }
  // Release the instance and its resources.
  async release() {
    if (this.audioCapturer === undefined) {
      console.error(`${TAG}: capturer is not initialized`);
      return;
    }
    // release() is valid only when the state is neither STATE_RELEASED nor STATE_NEW.
    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED || this.audioCapturer.state === audio.AudioState.STATE_NEW) {
      console.info('Capturer already released');
      return;
    }
    await this.audioCapturer.release(); // release resources
    if (this.audioCapturer.state === audio.AudioState.STATE_RELEASED) {
      console.info('Capturer released');
    } else {
      console.error('Capturer release failed');
    }
  }
}
```
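
Finally, the two sides can be combined into a single call session. A minimal sketch (assumptions: both classes are exported from files named after themselves, and the runCall wrapper and event timing are illustrative):

```ts
import VoiceCallDemoForAudioRenderer from './VoiceCallDemoForAudioRenderer';
import VoiceCallDemoForAudioCapturer from './VoiceCallDemoForAudioCapturer';

// Hypothetical driver: run both directions of the call together. In a real
// application, start and stop would be triggered by call signaling events,
// and start() must wait until the asynchronous init() callbacks have fired.
async function runCall() {
  let playback = new VoiceCallDemoForAudioRenderer();
  let record = new VoiceCallDemoForAudioCapturer();
  playback.init();
  record.init();
  // ... once both instances are created and the call is connected:
  await record.start();   // capture the local voice (to be encoded and sent)
  await playback.start(); // render the peer's voice (received and decoded)
  // ... when the call ends:
  await record.stop();
  await playback.stop();
  await record.release();
  await playback.release();
}
```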