
Searched refs:speech (Results 1 – 25 of 62) sorted by relevance


/external/webrtc/webrtc/common_audio/vad/
vad_core_unittest.cc:59  int16_t speech[kMaxFrameLength]; in TEST_F() local
66 memset(speech, 0, sizeof(speech)); in TEST_F()
70 EXPECT_EQ(0, WebRtcVad_CalcVad8khz(self, speech, kFrameLengths[j])); in TEST_F()
73 EXPECT_EQ(0, WebRtcVad_CalcVad16khz(self, speech, kFrameLengths[j])); in TEST_F()
76 EXPECT_EQ(0, WebRtcVad_CalcVad32khz(self, speech, kFrameLengths[j])); in TEST_F()
79 EXPECT_EQ(0, WebRtcVad_CalcVad48khz(self, speech, kFrameLengths[j])); in TEST_F()
86 speech[i] = static_cast<int16_t>(i * i); in TEST_F()
90 EXPECT_EQ(1, WebRtcVad_CalcVad8khz(self, speech, kFrameLengths[j])); in TEST_F()
93 EXPECT_EQ(1, WebRtcVad_CalcVad16khz(self, speech, kFrameLengths[j])); in TEST_F()
96 EXPECT_EQ(1, WebRtcVad_CalcVad32khz(self, speech, kFrameLengths[j])); in TEST_F()
[all …]
vad_filterbank_unittest.cc:40  int16_t speech[kMaxFrameLength]; in TEST_F() local
42 speech[i] = static_cast<int16_t>(i * i); in TEST_F()
50 WebRtcVad_CalculateFeatures(self, speech, kFrameLengths[j], in TEST_F()
62 memset(speech, 0, sizeof(speech)); in TEST_F()
66 EXPECT_EQ(0, WebRtcVad_CalculateFeatures(self, speech, kFrameLengths[j], in TEST_F()
77 speech[i] = 1; in TEST_F()
82 EXPECT_EQ(0, WebRtcVad_CalculateFeatures(self, speech, kFrameLengths[j], in TEST_F()
vad_unittest.cc:67  int16_t speech[kMaxFrameLength]; in TEST_F() local
69 speech[i] = static_cast<int16_t>(i * i); in TEST_F()
76 WebRtcVad_Process(nullptr, kRates[0], speech, kFrameLengths[0])); in TEST_F()
82 EXPECT_EQ(-1, WebRtcVad_Process(handle, kRates[0], speech, kFrameLengths[0])); in TEST_F()
102 EXPECT_EQ(-1, WebRtcVad_Process(handle, 9999, speech, kFrameLengths[0])); in TEST_F()
114 speech, in TEST_F()
119 speech, in TEST_F()
/external/tensorflow/tensorflow/contrib/lite/models/testdata/g3doc/
README.md:3  Sample test data has been provided for speech related models in Tensorflow Lite
4 to help users working with speech models to verify and test their models.
6 For the hotword, speaker-id and automatic speech recognition sample models, the
7 architecture assumes that the models receive their input from a speech
8 pre-processing module. The speech pre-processing module receives the audio
12 applied to the power spectra). The text-to-speech model assumes that the inputs
27 The speech hotword model block diagram is shown in Figure below. It has an input
36 verification. It runs after the hotword triggers. The speech speaker-id model
43 ### Text-to-speech (TTS) Model
45 The text-to-speech model is the neural network model used to generate speech
[all …]
/external/webrtc/webrtc/modules/audio_coding/codecs/pcm16b/
pcm16b.c:15  size_t WebRtcPcm16b_Encode(const int16_t* speech, in WebRtcPcm16b_Encode() argument
20 uint16_t s = speech[i]; in WebRtcPcm16b_Encode()
29 int16_t* speech) { in WebRtcPcm16b_Decode() argument
32 speech[i] = encoded[2 * i] << 8 | encoded[2 * i + 1]; in WebRtcPcm16b_Decode()
pcm16b.h:41  size_t WebRtcPcm16b_Encode(const int16_t* speech,
62 int16_t* speech);
/external/sl4a/Common/src/com/googlecode/android_scripting/facade/
SpeechRecognitionFacade.java:20  import android.speech.RecognizerIntent;
52 new Intent(android.speech.RecognizerIntent.ACTION_RECOGNIZE_SPEECH); in recognizeSpeech()
68 if (data.hasExtra(android.speech.RecognizerIntent.EXTRA_RESULTS)) { in recognizeSpeech()
72 data.getStringArrayListExtra(android.speech.RecognizerIntent.EXTRA_RESULTS); in recognizeSpeech()
TextToSpeechFacade.java:20  import android.speech.tts.TextToSpeech;
21 import android.speech.tts.TextToSpeech.OnInitListener;
/external/sonic/doc/
index.md:21  Sonic is free software for speeding up or slowing down speech. While similar to
34 to improve their productivity with free software speech engines, like espeak.
48 In short, Sonic is better for speech, while WSOLA is better for music.
52 for speech (contrary to the inventor's estimate of WSOLA). Listen to [this
55 introduces unacceptable levels of distortion, making speech impossible to
58 However, there are decent free software algorithms for speeding up speech. They
59 are all in the TD-PSOLA family. For speech rates below 2X, sonic uses PICOLA,
131 double speed of speech. A pitch of 0.95 means to lower the pitch by about 5%,
134 speech is played. A 2.0 value will make you sound like a chipmunk talking very
153 You read the sped up speech samples from sonic like this:
/external/autotest/client/site_tests/desktopui_SpeechSynthesisSemiAuto/
desktopui_SpeechSynthesisSemiAuto.py:20  speech = dbus.Interface(proxy, "org.chromium.SpeechSynthesizerInterface")
21 res = speech.Speak("Welcome to Chromium O S")
/external/sonic/debian/
control:14  Description: Simple utility to speed up or slow down speech
24 Description: Simple library to speed up or slow down speech
27 down speech. It has only basic dependencies, and is meant to
/external/sonic/
README:1  Sonic is a simple algorithm for speeding up or slowing down speech. However,
3 speech rate. The Sonic library is a very simple ANSI C library that is designed
7 to improve their productivity with open source speech engines, like espeak.
/external/webrtc/resources/audio_coding/
READ.ME:3  testfile32kHz.pcm - mono speech file samples at 32 kHz
4 teststereo32kHz.pcm - stereo speech file samples at 32 kHz
/external/webrtc/webrtc/modules/audio_coding/codecs/cng/
audio_encoder_cng_unittest.cc:273  EXPECT_FALSE(encoded_info_.speech); in TEST_F()
294 EXPECT_TRUE(encoded_info_.speech); in TEST_F()
299 EXPECT_TRUE(encoded_info_.speech); in TEST_F()
304 EXPECT_TRUE(encoded_info_.speech); in TEST_F()
308 EXPECT_FALSE(encoded_info_.speech); in TEST_F()
webrtc_cng.h:106  int WebRtcCng_Encode(CNG_enc_inst* cng_inst, int16_t* speech,
webrtc_cng.c:228  int WebRtcCng_Encode(CNG_enc_inst* cng_inst, int16_t* speech, in WebRtcCng_Encode() argument
265 speechBuf[i] = speech[i]; in WebRtcCng_Encode()
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_Mfcc.pbtxt:42  summary: "Transforms a spectrogram into a form that\'s useful for speech recognition."
48 history in the speech recognition world, and https://en.wikipedia.org/wiki/Mel-frequency_cepstrum
/external/webrtc/webrtc/modules/audio_coding/codecs/red/
audio_encoder_copy_red.cc:81  RTC_DCHECK_EQ(info.speech, info.redundant[0].speech); in EncodeInternal()
/external/libgsm/
README:2  GSM 06.10 13 kbit/s RPE/LTP speech compression available
11 European GSM 06.10 provisional standard for full-rate speech
/external/sonic/samples/
README:1  These wav files show how Sonic performs at increasing speech rates. All sound
20 Sonic also performs well at increasing the speed of synthesized speech.
/external/tensorflow/tensorflow/examples/speech_commands/
README.md:3  This is a basic speech recognition example. For more information, see the
/external/robolectric-shadows/shadows/framework/src/main/java/org/robolectric/shadows/
ShadowTextToSpeech.java:4  import android.speech.tts.TextToSpeech;
/external/webrtc/webrtc/modules/audio_coding/codecs/
audio_encoder.h:31  bool speech = true; member
/external/robolectric-shadows/robolectric/src/test/java/org/robolectric/shadows/
ShadowTextToSpeechTest.java:7  import android.speech.tts.TextToSpeech;
/external/tensorflow/tensorflow/docs_src/mobile/
mobile_intro.md:37  speech-driven interface, and many of these require on-device processing. Most of
145 that deep learning can offer very natural-sounding speech.
157 on the cloud. A good example of this is hotword detection in speech. Since
159 lot of traffic to cloud-based speech recognition once one is recognized. Without
200 capture process, such as the motor drowning out speech or not being able to hear