/*
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.google.cloud.speech.v1;

import com.google.api.core.BetaApi;
import com.google.api.gax.core.BackgroundResource;
import com.google.api.gax.httpjson.longrunning.OperationsClient;
import com.google.api.gax.longrunning.OperationFuture;
import com.google.api.gax.rpc.BidiStreamingCallable;
import com.google.api.gax.rpc.OperationCallable;
import com.google.api.gax.rpc.UnaryCallable;
import com.google.cloud.speech.v1.stub.SpeechStub;
import com.google.cloud.speech.v1.stub.SpeechStubSettings;
import com.google.longrunning.Operation;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import javax.annotation.Generated;

// AUTO-GENERATED DOCUMENTATION AND CLASS.
/**
 * Service Description: Service that implements Google Cloud Speech API.
 *
 * <p>This class provides the ability to make remote calls to the backing service through method
 * calls that map to API methods. Sample code to get started:
 *
 * <pre>{@code
 * // This snippet has been automatically generated and should be regarded as a code template only.
 * // It will require modifications to work:
 * // - It may require correct/in-range values for request initialization.
 * // - It may require specifying regional endpoints when creating the service client as shown in
 * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 * try (SpeechClient speechClient = SpeechClient.create()) {
 *   RecognitionConfig config = RecognitionConfig.newBuilder().build();
 *   RecognitionAudio audio = RecognitionAudio.newBuilder().build();
 *   RecognizeResponse response = speechClient.recognize(config, audio);
 * }
 * }</pre>
 *
 * <p>Note: close() needs to be called on the SpeechClient object to clean up resources such as
 * threads. In the example above, try-with-resources is used, which automatically calls close().
 *
 * <p>The surface of this class includes several types of Java methods for each of the API's
 * methods:
 *
 * <ol>
 *   <li>A "flattened" method. With this type of method, the fields of the request type have been
 *       converted into function parameters. It may be the case that not all fields are available as
 *       parameters, and not every API method will have a flattened method entry point.
 *   <li>A "request object" method. This type of method only takes one parameter, a request object,
 *       which must be constructed before the call. Not every API method will have a request object
 *       method.
 *   <li>A "callable" method. This type of method takes no parameters and returns an immutable API
 *       callable object, which can be used to initiate calls to the service.
 * </ol>
 *
 * <p>See the individual methods for example code.
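 *
 * <p>For illustration only, the same synchronous recognition call can be written in each of the
 * three styles above; the empty builders are placeholders for real field values:
 *
 * <pre>{@code
 * try (SpeechClient speechClient = SpeechClient.create()) {
 *   RecognitionConfig config = RecognitionConfig.newBuilder().build();
 *   RecognitionAudio audio = RecognitionAudio.newBuilder().build();
 *   RecognizeRequest request =
 *       RecognizeRequest.newBuilder().setConfig(config).setAudio(audio).build();
 *
 *   RecognizeResponse r1 = speechClient.recognize(config, audio);          // flattened
 *   RecognizeResponse r2 = speechClient.recognize(request);                // request object
 *   RecognizeResponse r3 = speechClient.recognizeCallable().call(request); // callable
 * }
 * }</pre>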
 *
 * <p>Many parameters require resource names to be formatted in a particular way. To assist with
 * these names, this class includes a format method for each type of name, and additionally a parse
 * method to extract the individual identifiers contained within names that are returned.
 *
 * <p>This class can be customized by passing in a custom instance of SpeechSettings to create().
 * For example:
 *
 * <p>To customize credentials:
 *
 * <pre>{@code
 * // This snippet has been automatically generated and should be regarded as a code template only.
 * // It will require modifications to work:
 * // - It may require correct/in-range values for request initialization.
 * // - It may require specifying regional endpoints when creating the service client as shown in
 * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 * SpeechSettings speechSettings =
 *     SpeechSettings.newBuilder()
 *         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
 *         .build();
 * SpeechClient speechClient = SpeechClient.create(speechSettings);
 * }</pre>
 *
 * <p>To customize the endpoint:
 *
 * <pre>{@code
 * // This snippet has been automatically generated and should be regarded as a code template only.
 * // It will require modifications to work:
 * // - It may require correct/in-range values for request initialization.
 * // - It may require specifying regional endpoints when creating the service client as shown in
 * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 * SpeechSettings speechSettings = SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();
 * SpeechClient speechClient = SpeechClient.create(speechSettings);
 * }</pre>
 *
 * <p>To use REST (HTTP1.1/JSON) transport (instead of gRPC) for sending and receiving requests over
 * the wire:
 *
 * <pre>{@code
 * // This snippet has been automatically generated and should be regarded as a code template only.
 * // It will require modifications to work:
 * // - It may require correct/in-range values for request initialization.
 * // - It may require specifying regional endpoints when creating the service client as shown in
 * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 * SpeechSettings speechSettings = SpeechSettings.newHttpJsonBuilder().build();
 * SpeechClient speechClient = SpeechClient.create(speechSettings);
 * }</pre>
 *
 * <p>Please refer to the GitHub repository's samples for more quickstart code snippets.
 */
@Generated("by gapic-generator-java")
public class SpeechClient implements BackgroundResource {
  private final SpeechSettings settings;
  private final SpeechStub stub;
  private final OperationsClient httpJsonOperationsClient;
  private final com.google.longrunning.OperationsClient operationsClient;

  /** Constructs an instance of SpeechClient with default settings. */
  public static final SpeechClient create() throws IOException {
    return create(SpeechSettings.newBuilder().build());
  }

  /**
   * Constructs an instance of SpeechClient, using the given settings. The channels are created
   * based on the settings passed in, or defaults for any settings that are not set.
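   *
   * <p>For example, to supply custom credentials (a sketch only; {@code myCredentials} is a
   * placeholder for a credentials object you provide):
   *
   * <pre>{@code
   * SpeechSettings speechSettings =
   *     SpeechSettings.newBuilder()
   *         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
   *         .build();
   * SpeechClient speechClient = SpeechClient.create(speechSettings);
   * }</pre>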
   */
  public static final SpeechClient create(SpeechSettings settings) throws IOException {
    return new SpeechClient(settings);
  }

  /**
   * Constructs an instance of SpeechClient, using the given stub for making calls. This is for
   * advanced usage - prefer using create(SpeechSettings).
   */
  public static final SpeechClient create(SpeechStub stub) {
    return new SpeechClient(stub);
  }

  /**
   * Constructs an instance of SpeechClient, using the given settings. This is protected so that it
   * is easy to make a subclass, but otherwise, the static factory methods should be preferred.
   */
  protected SpeechClient(SpeechSettings settings) throws IOException {
    this.settings = settings;
    this.stub = ((SpeechStubSettings) settings.getStubSettings()).createStub();
    this.operationsClient =
        com.google.longrunning.OperationsClient.create(this.stub.getOperationsStub());
    this.httpJsonOperationsClient = OperationsClient.create(this.stub.getHttpJsonOperationsStub());
  }

  protected SpeechClient(SpeechStub stub) {
    this.settings = null;
    this.stub = stub;
    this.operationsClient =
        com.google.longrunning.OperationsClient.create(this.stub.getOperationsStub());
    this.httpJsonOperationsClient = OperationsClient.create(this.stub.getHttpJsonOperationsStub());
  }

  public final SpeechSettings getSettings() {
    return settings;
  }

  public SpeechStub getStub() {
    return stub;
  }

  /**
   * Returns the OperationsClient that can be used to query the status of a long-running operation
   * returned by another API method call.
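   *
   * <p>For example (an illustrative sketch; {@code operationName} is a placeholder for the name of
   * a previously started long-running operation):
   *
   * <pre>{@code
   * Operation operation = speechClient.getOperationsClient().getOperation(operationName);
   * if (operation.getDone()) {
   *   // Inspect operation.getResponse() or operation.getError().
   * }
   * }</pre>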
   */
  public final com.google.longrunning.OperationsClient getOperationsClient() {
    return operationsClient;
  }

  /**
   * Returns the OperationsClient that can be used to query the status of a long-running operation
   * returned by another API method call.
   */
  @BetaApi
  public final OperationsClient getHttpJsonOperationsClient() {
    return httpJsonOperationsClient;
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs synchronous speech recognition: receive results after all audio has been sent and
   * processed.
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   RecognitionConfig config = RecognitionConfig.newBuilder().build();
   *   RecognitionAudio audio = RecognitionAudio.newBuilder().build();
   *   RecognizeResponse response = speechClient.recognize(config, audio);
   * }
   * }</pre>
   *
   * @param config Required. Provides information to the recognizer that specifies how to process
   *     the request.
   * @param audio Required. The audio data to be recognized.
   * @throws com.google.api.gax.rpc.ApiException if the remote call fails
   */
  public final RecognizeResponse recognize(RecognitionConfig config, RecognitionAudio audio) {
    RecognizeRequest request =
        RecognizeRequest.newBuilder().setConfig(config).setAudio(audio).build();
    return recognize(request);
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs synchronous speech recognition: receive results after all audio has been sent and
   * processed.
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   RecognizeRequest request =
   *       RecognizeRequest.newBuilder()
   *           .setConfig(RecognitionConfig.newBuilder().build())
   *           .setAudio(RecognitionAudio.newBuilder().build())
   *           .build();
   *   RecognizeResponse response = speechClient.recognize(request);
   * }
   * }</pre>
   *
   * @param request The request object containing all of the parameters for the API call.
   * @throws com.google.api.gax.rpc.ApiException if the remote call fails
   */
  public final RecognizeResponse recognize(RecognizeRequest request) {
    return recognizeCallable().call(request);
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs synchronous speech recognition: receive results after all audio has been sent and
   * processed.
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   RecognizeRequest request =
   *       RecognizeRequest.newBuilder()
   *           .setConfig(RecognitionConfig.newBuilder().build())
   *           .setAudio(RecognitionAudio.newBuilder().build())
   *           .build();
   *   ApiFuture<RecognizeResponse> future = speechClient.recognizeCallable().futureCall(request);
   *   // Do something.
   *   RecognizeResponse response = future.get();
   * }
   * }</pre>
   */
  public final UnaryCallable<RecognizeRequest, RecognizeResponse> recognizeCallable() {
    return stub.recognizeCallable();
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs asynchronous speech recognition: receive results via the google.longrunning.Operations
   * interface. Returns either an `Operation.error` or an `Operation.response` which contains a
   * `LongRunningRecognizeResponse` message. For more information on asynchronous speech
   * recognition, see the [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
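   *
   * <p>The returned {@code OperationFuture} can also report progress metadata before the final
   * response is available (a sketch only; the metadata call may block until the service has
   * reported metadata):
   *
   * <pre>{@code
   * OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =
   *     speechClient.longRunningRecognizeAsync(config, audio);
   * LongRunningRecognizeMetadata metadata = future.getMetadata().get();
   * LongRunningRecognizeResponse response = future.get();
   * }</pre>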
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   RecognitionConfig config = RecognitionConfig.newBuilder().build();
   *   RecognitionAudio audio = RecognitionAudio.newBuilder().build();
   *   LongRunningRecognizeResponse response =
   *       speechClient.longRunningRecognizeAsync(config, audio).get();
   * }
   * }</pre>
   *
   * @param config Required. Provides information to the recognizer that specifies how to process
   *     the request.
   * @param audio Required. The audio data to be recognized.
   * @throws com.google.api.gax.rpc.ApiException if the remote call fails
   */
  public final OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata>
      longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio) {
    LongRunningRecognizeRequest request =
        LongRunningRecognizeRequest.newBuilder().setConfig(config).setAudio(audio).build();
    return longRunningRecognizeAsync(request);
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs asynchronous speech recognition: receive results via the google.longrunning.Operations
   * interface. Returns either an `Operation.error` or an `Operation.response` which contains a
   * `LongRunningRecognizeResponse` message. For more information on asynchronous speech
   * recognition, see the [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   LongRunningRecognizeRequest request =
   *       LongRunningRecognizeRequest.newBuilder()
   *           .setConfig(RecognitionConfig.newBuilder().build())
   *           .setAudio(RecognitionAudio.newBuilder().build())
   *           .setOutputConfig(TranscriptOutputConfig.newBuilder().build())
   *           .build();
   *   LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();
   * }
   * }</pre>
   *
   * @param request The request object containing all of the parameters for the API call.
   * @throws com.google.api.gax.rpc.ApiException if the remote call fails
   */
  public final OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata>
      longRunningRecognizeAsync(LongRunningRecognizeRequest request) {
    return longRunningRecognizeOperationCallable().futureCall(request);
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs asynchronous speech recognition: receive results via the google.longrunning.Operations
   * interface. Returns either an `Operation.error` or an `Operation.response` which contains a
   * `LongRunningRecognizeResponse` message. For more information on asynchronous speech
   * recognition, see the [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   LongRunningRecognizeRequest request =
   *       LongRunningRecognizeRequest.newBuilder()
   *           .setConfig(RecognitionConfig.newBuilder().build())
   *           .setAudio(RecognitionAudio.newBuilder().build())
   *           .setOutputConfig(TranscriptOutputConfig.newBuilder().build())
   *           .build();
   *   OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =
   *       speechClient.longRunningRecognizeOperationCallable().futureCall(request);
   *   // Do something.
   *   LongRunningRecognizeResponse response = future.get();
   * }
   * }</pre>
   */
  public final OperationCallable<
          LongRunningRecognizeRequest, LongRunningRecognizeResponse, LongRunningRecognizeMetadata>
      longRunningRecognizeOperationCallable() {
    return stub.longRunningRecognizeOperationCallable();
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs asynchronous speech recognition: receive results via the google.longrunning.Operations
   * interface. Returns either an `Operation.error` or an `Operation.response` which contains a
   * `LongRunningRecognizeResponse` message. For more information on asynchronous speech
   * recognition, see the [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   LongRunningRecognizeRequest request =
   *       LongRunningRecognizeRequest.newBuilder()
   *           .setConfig(RecognitionConfig.newBuilder().build())
   *           .setAudio(RecognitionAudio.newBuilder().build())
   *           .setOutputConfig(TranscriptOutputConfig.newBuilder().build())
   *           .build();
   *   ApiFuture<Operation> future = speechClient.longRunningRecognizeCallable().futureCall(request);
   *   // Do something.
   *   Operation response = future.get();
   * }
   * }</pre>
   */
  public final UnaryCallable<LongRunningRecognizeRequest, Operation>
      longRunningRecognizeCallable() {
    return stub.longRunningRecognizeCallable();
  }

  // AUTO-GENERATED DOCUMENTATION AND METHOD.
  /**
   * Performs bidirectional streaming speech recognition: receive results while sending audio. This
   * method is only available via the gRPC API (not REST).
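   *
   * <p>Note: the first {@code StreamingRecognizeRequest} sent on the stream typically carries only
   * a {@code StreamingRecognitionConfig}; later requests carry chunks of audio. A sketch of the
   * two request shapes ({@code audioChunk} is a placeholder {@code ByteString} you supply):
   *
   * <pre>{@code
   * StreamingRecognizeRequest configRequest =
   *     StreamingRecognizeRequest.newBuilder()
   *         .setStreamingConfig(StreamingRecognitionConfig.newBuilder().build())
   *         .build();
   * StreamingRecognizeRequest audioRequest =
   *     StreamingRecognizeRequest.newBuilder().setAudioContent(audioChunk).build();
   * }</pre>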
   *
   * <p>Sample code:
   *
   * <pre>{@code
   * // This snippet has been automatically generated and should be regarded as a code template only.
   * // It will require modifications to work:
   * // - It may require correct/in-range values for request initialization.
   * // - It may require specifying regional endpoints when creating the service client as shown in
   * // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
   * try (SpeechClient speechClient = SpeechClient.create()) {
   *   BidiStream<StreamingRecognizeRequest, StreamingRecognizeResponse> bidiStream =
   *       speechClient.streamingRecognizeCallable().call();
   *   StreamingRecognizeRequest request = StreamingRecognizeRequest.newBuilder().build();
   *   bidiStream.send(request);
   *   for (StreamingRecognizeResponse response : bidiStream) {
   *     // Do something when a response is received.
   *   }
   * }
   * }</pre>
   */
  public final BidiStreamingCallable<StreamingRecognizeRequest, StreamingRecognizeResponse>
      streamingRecognizeCallable() {
    return stub.streamingRecognizeCallable();
  }

  @Override
  public final void close() {
    stub.close();
  }

  @Override
  public void shutdown() {
    stub.shutdown();
  }

  @Override
  public boolean isShutdown() {
    return stub.isShutdown();
  }

  @Override
  public boolean isTerminated() {
    return stub.isTerminated();
  }

  @Override
  public void shutdownNow() {
    stub.shutdownNow();
  }

  @Override
  public boolean awaitTermination(long duration, TimeUnit unit) throws InterruptedException {
    return stub.awaitTermination(duration, unit);
  }
}