# Full Guide To Oboe

Oboe is a C++ library which makes it easy to build high-performance audio apps on Android. Apps communicate with Oboe by reading and writing data to streams.

## Audio streams

Oboe moves audio data between your app and the audio inputs and outputs on your Android device. Your app passes data in and out using a callback function or by reading from and writing to *audio streams*, represented by the class `AudioStream`. The read/write calls can be blocking or non-blocking.

A stream is defined by the following:

*   The *audio device* that is the source or sink for the data in the stream.
*   The *sharing mode* that determines whether a stream has exclusive access to an audio device that might otherwise be shared among multiple streams.
*   The *format* of the audio data in the stream.

### Audio device

Each stream is attached to a single audio device.

An audio device is a hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data. Don't confuse an *audio device*
(a built-in mic or Bluetooth headset) with the *Android device* (the phone or watch) that is running your app.

On API 23 and above you can use the `AudioManager` method [getDevices()](https://developer.android.com/reference/android/media/AudioManager.html#getDevices(int)) to discover the audio devices that are available on your Android device. The method returns information about the [type](https://developer.android.com/reference/android/media/AudioDeviceInfo.html) of each device.

Each audio device has a unique ID on the Android device. You can use the ID to bind an audio stream to a specific audio device. However, in most cases you can let Oboe choose the default primary device rather than specifying one yourself.

The audio device attached to a stream determines whether the stream is for input or output. A stream can only move data in one direction. When you define a stream you also set its direction. When you open a stream, Android checks to ensure that the audio device and stream direction agree.

### Sharing mode

A stream has a sharing mode:

*   `SharingMode::Exclusive` (available on API 26+) means the stream has exclusive access to an endpoint on its audio device; the endpoint cannot be used by any other audio stream. If the exclusive endpoint is already in use, it might not be possible for the stream to obtain access to it. Exclusive streams provide the lowest possible latency by bypassing the mixer stage, but they are also more likely to get disconnected. You should close exclusive streams as soon as you no longer need them, so that other apps can access that endpoint. Not all audio devices provide exclusive endpoints. System sounds and sounds from other apps can still be heard when an exclusive stream is in use because they use a different endpoint.

![Oboe exclusive sharing mode diagram](images/oboe-sharing-mode-exclusive.jpg)

*   `SharingMode::Shared` allows Oboe streams to share an endpoint. The operating system will mix all the shared streams assigned to the same endpoint on the audio device.

![Oboe shared sharing mode diagram](images/oboe-sharing-mode-shared.jpg)

You can explicitly request the sharing mode when you create a stream, although you are not guaranteed to receive that mode. By default, the sharing mode is `Shared`.

### Audio format

The data passed through a stream has the usual digital audio attributes, which you must specify when you define a stream. These are as follows:

*   Sample format
*   Samples per frame
*   Sample rate

Oboe permits these sample formats:

| AudioFormat | C data type | Notes |
| :---------- | :---------- | :---- |
| I16 | int16_t | common 16-bit samples, [Q0.15 format](https://source.android.com/devices/audio/data_formats#androidFormats) |
| Float | float | -1.0 to +1.0 |
| I24 | N/A | 24-bit samples packed into 3 bytes, [Q0.23 format](https://source.android.com/devices/audio/data_formats#androidFormats). Added in API 31 |
| I32 | int32_t | common 32-bit samples, [Q0.31 format](https://source.android.com/devices/audio/data_formats#androidFormats). Added in API 31 |
| IEC61937 | N/A | compressed audio wrapped in IEC61937 for HDMI or S/PDIF passthrough. Added in API 34 |

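A *frame* holds one sample for each channel, which is how these attributes combine into buffer sizes. The arithmetic can be sketched in plain C++ (these helper functions are ours, not part of Oboe):

```cpp
#include <cstdint>

// One frame holds one sample per channel:
// bytes per frame = channel count * bytes per sample.
int32_t bytesPerFrame(int32_t channelCount, int32_t bytesPerSample) {
    return channelCount * bytesPerSample;
}

// An interleaved buffer of numFrames frames holds this many samples.
int32_t samplesPerBuffer(int32_t numFrames, int32_t channelCount) {
    return numFrames * channelCount;
}
```

For example, a stereo I16 stream has 4-byte frames, and a 480-frame stereo buffer holds 960 samples.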
Oboe might perform sample conversion on its own. For example, if an app is writing `AudioFormat::Float` data but the HAL uses `AudioFormat::I16`, Oboe might convert the samples automatically. Conversion can happen in either direction. If your app processes audio input, it is wise to verify the input format and be prepared to convert data if necessary, as in this example:

    AudioFormat dataFormat = stream->getDataFormat();
    //... later
    if (dataFormat == AudioFormat::I16) {
        convertFloatToPcm16(...);
    }

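`convertFloatToPcm16` in the snippet above is not an Oboe API; it stands in for whatever conversion routine your app supplies. A minimal sketch of such a helper, scaling to Q0.15 and clamping out-of-range input:

```cpp
#include <cstddef>
#include <cstdint>

// Convert float samples in [-1.0, 1.0] to 16-bit PCM (Q0.15),
// clamping values that fall outside the representable range.
void convertFloatToPcm16(const float *input, int16_t *output, size_t numSamples) {
    for (size_t i = 0; i < numSamples; i++) {
        float scaled = input[i] * 32768.0f;
        if (scaled > 32767.0f) scaled = 32767.0f;
        if (scaled < -32768.0f) scaled = -32768.0f;
        output[i] = static_cast<int16_t>(scaled);
    }
}
```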
## Creating an audio stream

The Oboe library follows a [builder design pattern](https://en.wikipedia.org/wiki/Builder_pattern) and provides the class `AudioStreamBuilder`.

### Set the audio stream configuration using an AudioStreamBuilder

Use the builder functions that correspond to the stream parameters. These optional set functions are available:

    AudioStreamBuilder streamBuilder;

    streamBuilder.setDeviceId(deviceId);
    streamBuilder.setDirection(direction);
    streamBuilder.setSharingMode(shareMode);
    streamBuilder.setSampleRate(sampleRate);
    streamBuilder.setChannelCount(channelCount);
    streamBuilder.setFormat(format);
    streamBuilder.setPerformanceMode(perfMode);

Note that these methods do not report errors, such as an undefined constant or a value out of range. The values are checked when the stream is opened.

If you do not specify the deviceId, the default is the primary output device.
If you do not specify the stream direction, the default is an output stream.
For all parameters, you can explicitly set a value, or let the system
assign the optimal value by not specifying the parameter at all or setting
it to `kUnspecified`.

To be safe, check the configuration of the audio stream after you open it, as described below.

### Open the Stream

Declare a **shared pointer** for the stream. Make sure it is declared with the appropriate scope. The best place is as a member variable in a managing class or as a global. Avoid declaring it as a local variable because the stream may get deleted when the function returns.

    std::shared_ptr<oboe::AudioStream> mStream;

After you've configured the `AudioStreamBuilder`, call `openStream()` to open the stream:

    Result result = streamBuilder.openStream(mStream);
    if (result != Result::OK){
        __android_log_print(ANDROID_LOG_ERROR,
                            "AudioEngine",
                            "Error opening stream %s",
                            convertToText(result));
    }

### Verifying stream configuration and additional properties

You should verify the stream's configuration after opening it.

The following properties are guaranteed to be set. However, if these properties
are unspecified, a default value will still be set, and should be queried by the
appropriate accessor.

* framesPerDataCallback
* sampleRate
* channelCount
* format
* direction

The following properties may be changed by the underlying stream construction
*even if explicitly set* and therefore should always be queried by the appropriate
accessor. The property settings will depend on device capabilities.

* bufferCapacityInFrames
* sharingMode (exclusive provides lowest latency)
* performanceMode

The following properties are only set by the underlying stream. They cannot be
set by the application, but should be queried by the appropriate accessor.

* framesPerBurst

The following properties have unusual behavior:

* deviceId is respected when the underlying API is AAudio (API level >= 28), but not when it
is OpenSLES. It can be set regardless, but *will not* cause an error if an OpenSLES stream
is used. In that case the default device will be used, rather than whatever was specified.

* mAudioApi is only a property of the builder; however,
AudioStream::getAudioApi() can be used to query the underlying API which the
stream uses. The property set in the builder is not guaranteed, and in
general, the API should be chosen by Oboe to allow for the best performance and
stability. Since Oboe is designed to be as uniform across both
APIs as possible, this property should not generally be needed.

* mBufferSizeInFrames can only be set on an already open stream (as opposed to a
builder), since it depends on run-time behavior.
The actual size used may not be what was requested.
Oboe or the underlying API will limit the size between zero and the buffer capacity.
It may also be limited further to reduce glitching on particular devices.
This feature is not supported when using a callback with OpenSL ES.

The following properties are helpful on older devices for achieving optimal results.

* `setChannelConversionAllowed()` enables channel conversions. This is false by default.
* `setFormatConversionAllowed()` enables format conversions. This is false by default.
* `setSampleRateConversionQuality()` enables sample rate conversions.
  This defaults to SampleRateConversionQuality::Medium.

Many of the stream's properties may vary (whether or not you set
them) depending on the capabilities of the audio device and the Android device on
which it's running. If you need to know these values, you must query them using
the accessor after the stream has been opened. Additionally,
the underlying parameters a stream is granted are useful to know if
they have been left unspecified. As a matter of good defensive programming, you
should check the stream's configuration before using it.

There are functions to retrieve the stream setting that corresponds to each
builder setting:

| AudioStreamBuilder set methods | AudioStream get methods |
| :------------------------ | :----------------- |
| `setDataCallback()` | `getDataCallback()` |
| `setErrorCallback()` | `getErrorCallback()` |
| `setDirection()` | `getDirection()` |
| `setSharingMode()` | `getSharingMode()` |
| `setPerformanceMode()` | `getPerformanceMode()` |
| `setSampleRate()` | `getSampleRate()` |
| `setChannelCount()` | `getChannelCount()` |
| `setFormat()` | `getFormat()` |
| `setBufferCapacityInFrames()` | `getBufferCapacityInFrames()` |
| `setFramesPerDataCallback()` | `getFramesPerDataCallback()` |
|  --  | `getFramesPerBurst()` |
| `setDeviceId()` (not respected on OpenSLES) | `getDeviceId()` |
| `setAudioApi()` (mainly for debugging) | `getAudioApi()` |
| `setChannelConversionAllowed()` | `isChannelConversionAllowed()` |
| `setFormatConversionAllowed()` | `isFormatConversionAllowed()` |
| `setSampleRateConversionQuality()` | `getSampleRateConversionQuality()` |

### AAudio specific AudioStreamBuilder fields

Some AudioStreamBuilder fields only apply to AAudio.

The following AudioStreamBuilder fields were added in API 28 to
specify additional information about the AudioStream to the device. Currently,
they have little effect on the stream, but setting them helps applications
interact better with other services.

For more information see [Usage/ContentTypes](https://source.android.com/devices/audio/attributes).
The InputPreset may be used by the device to process the input stream (such as gain control). By default
it is set to VoiceRecognition, which is optimized for low latency.

* `setUsage(oboe::Usage usage)` - The purpose for creating the stream.
* `setContentType(oboe::ContentType contentType)` - The type of content carried
  by the stream.
* `setInputPreset(oboe::InputPreset inputPreset)` - The recording configuration
  for an audio input.
* `setSessionId(oboe::SessionId sessionId)` - Allocate a SessionID to connect to the
  Java AudioEffects API.

In API 29, `setAllowedCapturePolicy(oboe::AllowedCapturePolicy allowedCapturePolicy)` was added.
This specifies whether this stream's audio may or may not be captured by other apps or the system.

In API 30, `setPrivacySensitiveMode(oboe::PrivacySensitiveMode privacySensitiveMode)` was added.
Concurrent capture is not permitted for privacy sensitive input streams.

In API 31, the following APIs were added:
* `setPackageName(std::string packageName)` - Declare the name of the package creating the stream.
  The default, if you do not call this function, is a random package in the calling uid.
* `setAttributionTag(std::string attributionTag)` - Declare the attribution tag of the context creating the stream.
  Attribution can be used in complex apps to logically separate parts of the app.

In API 32, the following APIs were added:
* `setIsContentSpatialized(bool isContentSpatialized)` - Marks that the content is already spatialized
  to prevent double-processing.
* `setSpatializationBehavior(oboe::SpatializationBehavior spatializationBehavior)` - Marks what the default
  spatialization behavior should be.
* `setChannelMask(oboe::ChannelMask)` - Requests a specific channel mask. The number of channels may be
  different from setChannelCount(). If both this function and setChannelCount() are called, the last
  call is respected.

In API 34, the following APIs were added to streams to get properties of the hardware:
* `getHardwareChannelCount()`
* `getHardwareSampleRate()`
* `getHardwareFormat()`

## Using an audio stream

### State transitions

An Oboe stream is usually in one of five stable states (the error state, Disconnected, is described at the end of this section):

*   Open
*   Started
*   Paused
*   Flushed
*   Stopped

Data only flows through a stream when the stream is in the Started state. To
move a stream between states, use one of the functions that request a state
transition:

    Result result;
    result = stream->requestStart();
    result = stream->requestStop();
    result = stream->requestPause();
    result = stream->requestFlush();

Note that you can only request pause or flush on an output stream.

These functions are asynchronous, and the state change doesn't happen
immediately. When you request a state change, the stream moves to one of the
corresponding transient states:

*   Starting
*   Pausing
*   Flushing
*   Stopping
*   Closing

The state diagram below shows the stable states as rounded rectangles, and the transient states as dotted rectangles.
Though it's not shown, you can call `close()` from any state.

![Oboe Lifecycle](images/oboe-lifecycle.png)

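Each transient state normally resolves to the stable state of the same name (Starting to Started, and so on). That pairing can be written down directly; this is an illustrative sketch with an enum mirroring Oboe's `StreamState` values, not an Oboe API:

```cpp
// Mirrors the stable and transient states discussed above (illustration only).
enum class StreamState {
    Starting, Started, Pausing, Paused, Flushing, Flushed,
    Stopping, Stopped, Closing, Closed
};

// The stable state a transient state normally resolves to.
StreamState stableStateFor(StreamState state) {
    switch (state) {
        case StreamState::Starting: return StreamState::Started;
        case StreamState::Pausing:  return StreamState::Paused;
        case StreamState::Flushing: return StreamState::Flushed;
        case StreamState::Stopping: return StreamState::Stopped;
        case StreamState::Closing:  return StreamState::Closed;
        default:                    return state;  // already a stable state
    }
}
```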
Oboe doesn't provide callbacks to alert you to state changes. One special
function,
`AudioStream::waitForStateChange()`, can be used to wait for a state change.
Note that most apps will not need to call `waitForStateChange()` and can just
request state changes whenever they are needed.

The function does not detect a state change on its own, and does not wait for a
specific state. It waits until the current state
is *different* from `inputState`, which you specify.

For example, after requesting to pause, a stream should immediately enter
the transient state Pausing, and arrive sometime later at the Paused state, though there's no guarantee it will.
Since you can't wait for the Paused state, use `waitForStateChange()` to wait for *any state
other than Pausing*. Here's how that's done:

```
StreamState inputState = StreamState::Pausing;
StreamState nextState = StreamState::Uninitialized;
int64_t timeoutNanos = 100 * kNanosPerMillisecond;
result = stream->requestPause();
result = stream->waitForStateChange(inputState, &nextState, timeoutNanos);
```

If the stream's state is not Pausing (the `inputState`, which we assumed was the
current state at call time), the function returns immediately. Otherwise, it
blocks until the state is no longer Pausing or the timeout expires. When the
function returns, the parameter `nextState` shows the current state of the
stream.

You can use this same technique after calling request start, stop, or flush,
using the corresponding transient state as the inputState. Do not call
`waitForStateChange()` after calling `AudioStream::close()` since the underlying stream resources
will be deleted as soon as it closes. And do not call `close()`
while `waitForStateChange()` is running in another thread.

### Reading and writing to an audio stream

There are two ways to move data in or out of a stream:

1. Read from or write directly to the stream.
2. Specify a data callback object that will get called when the stream is ready.

The callback technique offers the lowest-latency performance because the callback code can run in a high-priority thread.
Also, attempting to open a low latency output stream without an audio callback (with the intent to use writes)
may result in a non low latency stream.

The read/write technique may be easier when you do not need low latency. Or, when doing both input and output, it is common to use a callback for output and then just do a non-blocking read from the input stream. Then you have both the input and output data available in one high priority thread.

After the stream is started you can read or write to it using the methods
`AudioStream::read(buffer, numFrames, timeoutNanos)`
and
`AudioStream::write(buffer, numFrames, timeoutNanos)`.

For a blocking read or write that transfers the specified number of frames, set timeoutNanos greater than zero. For a non-blocking call, set timeoutNanos to zero. In either case the result carries the actual number of frames transferred.

When you read input, you should verify the correct number of
frames was read. If not, the buffer might contain unknown data that could cause an
audio glitch. You can pad the buffer with zeros to create a
silent dropout:

    ResultWithValue<int32_t> result = mStream->read(audioData, numFrames, timeout);
    if (result != Result::OK) {
        // Error!
    }
    int32_t framesRead = result.value();
    if (framesRead < numFrames) {
        // pad the buffer with zeros
        memset(static_cast<sample_type*>(audioData) + framesRead * samplesPerFrame, 0,
               (numFrames - framesRead) * mStream->getBytesPerFrame());
    }

You can prime the stream's buffer before starting the stream by writing data or silence into it. This must be done in a non-blocking call with timeoutNanos set to zero.

The data in the buffer must match the data format returned by `mStream->getDataFormat()`.

### Closing an audio stream

When you are finished using a stream, close it:

    stream->close();

Do not close a stream while it is being written to or read from in another thread as this will cause your app to crash. After you close a stream you should not call any of its methods except for querying its properties.

### Disconnected audio stream

An audio stream can become disconnected at any time if one of these events happens:

*   The associated audio device is no longer connected (for example when headphones are unplugged).
*   An error occurs internally.
*   An audio device is no longer the primary audio device.

When a stream is disconnected, it has the state "Disconnected" and calls to `write()` or other functions will return `Result::ErrorDisconnected`. When a stream is disconnected, all you can do is close it.

If you need to be informed when an audio device is disconnected, write a class
which extends `AudioStreamErrorCallback` and then register your class using `builder.setErrorCallback(yourCallbackClass)`. It is recommended to pass a shared_ptr.
If you register a callback, then it will automatically close the stream in a separate thread if the stream is disconnected.

Note that error callbacks will only be called when a data callback has been specified
and the stream is started. If you are not using a data callback then the read(), write()
and requestStart() methods will return errors if the stream is disconnected.

Your error callback can implement the following methods (called in a separate thread):

* `onErrorBeforeClose(stream, error)` - called when the stream has been disconnected but not yet closed,
  so you can still reference the underlying stream (e.g. `getXRunCount()`).
You can also inform any other threads that may be calling the stream to stop doing so.
Do not delete the stream or modify its stream state in this callback.
* `onErrorAfterClose(stream, error)` - called when the stream has been stopped and closed by Oboe, so the stream cannot be used and calling getState() will return Closed.
During this callback, stream properties (those requested by the builder) can be queried, as well as frames written and read.
The stream can be deleted at the end of this method (as long as it is not referenced in other threads).
Methods that reference the underlying stream should not be called (e.g. `getTimestamp()`, `getXRunCount()`, `read()`, `write()`, etc.).
Opening a separate stream is also a valid use of this callback, especially if the error received is `Error::Disconnected`.
However, it is important to note that the new audio device may have vastly different properties than the stream that was disconnected.

See the SoundBoard sample for an example of setErrorCallback.

## Optimizing performance

You can optimize the performance of an audio application by using special high-priority threads.

### Using a high priority data callback

If your app reads or writes audio data from an ordinary thread, it may be preempted or experience timing jitter. This can cause audio glitches.
Using larger buffers might guard against such glitches, but a large buffer also introduces longer audio latency.
For applications that require low latency, an audio stream can use an asynchronous callback function to transfer data to and from your app.
The callback runs in a high-priority thread that has better performance.

Your code can access the callback mechanism by implementing the virtual class
`AudioStreamDataCallback`. The stream periodically executes `onAudioReady()` (the
callback function) to acquire the data for its next burst.

The total number of samples that you need to fill is numFrames * numChannels.

    class AudioEngine : public AudioStreamDataCallback {
    public:
        DataCallbackResult onAudioReady(
                AudioStream *oboeStream,
                void *audioData,
                int32_t numFrames) override {
            // Fill the output buffer with random white noise.
            const int numChannels = oboeStream->getChannelCount();
            // This code assumes the format is AudioFormat::Float.
            float *output = (float *)audioData;
            for (int frameIndex = 0; frameIndex < numFrames; frameIndex++) {
                for (int channelIndex = 0; channelIndex < numChannels; channelIndex++) {
                    float noise = (float)(drand48() - 0.5);
                    *output++ = noise;
                }
            }
            return DataCallbackResult::Continue;
        }

        bool start() {
            ...
            // register the callback
            streamBuilder.setDataCallback(this);
        }
    private:
        // application data goes here
    };

Note that the callback must be registered on the stream with `setDataCallback`. Any
application-specific data can be included within the class itself.

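The noise generator above keeps no state between callbacks. Most real callbacks do; for example, a sine-wave oscillator must carry its phase from one burst to the next so the waveform stays continuous. A plain C++ sketch of such a renderer (our own class, assuming float samples and interleaved channels):

```cpp
#include <cmath>
#include <cstdint>

// Renders a sine tone into interleaved float buffers, carrying phase
// across calls so consecutive bursts join without a discontinuity.
class SineRenderer {
public:
    SineRenderer(double frequency, double sampleRate)
            : mPhaseIncrement(kTwoPi * frequency / sampleRate) {}

    void render(float *output, int32_t numFrames, int32_t channelCount) {
        for (int32_t i = 0; i < numFrames; i++) {
            float sample = kAmplitude * static_cast<float>(std::sin(mPhase));
            for (int32_t ch = 0; ch < channelCount; ch++) {
                *output++ = sample;  // same sample on every channel
            }
            mPhase += mPhaseIncrement;
            if (mPhase >= kTwoPi) mPhase -= kTwoPi;  // keep phase bounded
        }
    }

private:
    static constexpr double kTwoPi = 6.283185307179586;
    static constexpr float kAmplitude = 0.2f;
    double mPhase = 0.0;
    double mPhaseIncrement;
};
```

Inside `onAudioReady()` you would call `render(static_cast<float *>(audioData), numFrames, oboeStream->getChannelCount())`.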
The callback function should not perform a read or write on the stream that invoked it. If the callback belongs to an input stream, your code should process the data that is supplied in the audioData buffer (specified as the second argument). If the callback belongs to an output stream, your code should place data into the buffer.

It is possible to process more than one stream in the callback. You can use one stream as the master, and pass pointers to other streams in the class's private data. Register a callback for the master stream. Then use non-blocking I/O on the other streams. Here is an example of a round-trip callback that passes an input stream to an output stream. The master calling stream is the output
stream. The input stream is included in the class.

The callback does a non-blocking read from the input stream, placing the data into the buffer of the output stream.

    class AudioEngine : public AudioStreamDataCallback {
    public:
        DataCallbackResult onAudioReady(
                AudioStream *oboeStream,
                void *audioData,
                int32_t numFrames) override {
            const int64_t timeoutNanos = 0; // for a non-blocking read
            auto result = recordingStream->read(audioData, numFrames, timeoutNanos);
            // result has type ResultWithValue<int32_t>, which for convenience is coerced
            // to a Result type when compared with another Result.
            if (result == Result::OK) {
                if (result.value() < numFrames) {
                    // replace the missing data with silence
                    memset(static_cast<sample_type*>(audioData) + result.value() * samplesPerFrame, 0,
                        (numFrames - result.value()) * oboeStream->getBytesPerFrame());
                }
                return DataCallbackResult::Continue;
            }
            return DataCallbackResult::Stop;
        }

        bool start() {
            ...
            streamBuilder.setDataCallback(this);
        }

        void setRecordingStream(AudioStream *stream) {
            recordingStream = stream;
        }

    private:
        AudioStream *recordingStream;
    };

Note that in this example it is assumed the input and output streams have the same number of channels, format and sample rate. The format of the streams can be mismatched, as long as the code handles the translations properly.

#### Data Callback - Do's and Don'ts
You should never perform an operation which could block inside `onAudioReady`. Examples of blocking operations include:

- allocating memory using, for example, malloc() or new
- file operations such as opening, closing, reading or writing
- network operations such as streaming
- using mutexes or other synchronization primitives
- sleeping
- stopping or closing the stream
- calling read() or write() on the stream which invoked it

The following methods are OK to call:

- AudioStream::get*()
- oboe::convertResultToText()

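Because mutexes are off the table, data is usually handed between the callback thread and the rest of the app through a lock-free structure. A minimal single-producer single-consumer FIFO sketch (our own, not part of Oboe; the capacity must be a power of two here):

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// Lock-free single-producer single-consumer FIFO of float samples.
// Safe for exactly one writer thread and one reader thread with no
// mutexes, so either end may run inside the audio callback.
class SpscFifo {
public:
    explicit SpscFifo(uint32_t capacityPowerOfTwo)
            : mBuffer(capacityPowerOfTwo), mMask(capacityPowerOfTwo - 1) {}

    bool push(float sample) {
        uint32_t write = mWrite.load(std::memory_order_relaxed);
        if (write - mRead.load(std::memory_order_acquire) == mBuffer.size()) {
            return false;  // full; the caller drops or retries, never blocks
        }
        mBuffer[write & mMask] = sample;
        mWrite.store(write + 1, std::memory_order_release);
        return true;
    }

    bool pop(float *sample) {
        uint32_t read = mRead.load(std::memory_order_relaxed);
        if (read == mWrite.load(std::memory_order_acquire)) {
            return false;  // empty
        }
        *sample = mBuffer[read & mMask];
        mRead.store(read + 1, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> mBuffer;
    uint32_t mMask;
    std::atomic<uint32_t> mWrite{0};
    std::atomic<uint32_t> mRead{0};
};
```

Production code would typically move whole frames per operation rather than single samples, but the ordering rules are the same.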
### Setting performance mode

Every AudioStream has a *performance mode* which has a large effect on your app's behavior. There are three modes:

* `PerformanceMode::None` is the default mode. It uses a basic stream that balances latency and power savings.
* `PerformanceMode::LowLatency` uses smaller buffers and an optimized data path for reduced latency.
* `PerformanceMode::PowerSaving` uses larger internal buffers and a data path that trades off latency for lower power.

You can select the performance mode by calling `setPerformanceMode()`,
and discover the current mode by calling `getPerformanceMode()`.

If low latency is more important than power savings in your application, use `PerformanceMode::LowLatency`.
This is useful for apps that are very interactive, such as games or keyboard synthesizers.

If saving power is more important than low latency in your application, use `PerformanceMode::PowerSaving`.
This is typical for apps that play back previously generated music, such as streaming audio or MIDI file players.

In the current version of Oboe, in order to achieve the lowest possible latency you must use the `PerformanceMode::LowLatency` performance mode along with a high-priority data callback. Follow this example:

```
// Create a callback object
MyOboeStreamCallback myCallback;

// Create a stream builder
AudioStreamBuilder builder;
builder.setDataCallback(&myCallback);
builder.setPerformanceMode(PerformanceMode::LowLatency);
```

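The latency a buffer adds is just its size in frames divided by the sample rate, and low-latency streams are typically tuned in whole bursts; a common starting point is double buffering, i.e. two bursts. A sketch of the arithmetic (the helper functions are ours):

```cpp
#include <cstdint>

// Buffers are tuned in whole bursts; two bursts (double buffering)
// is a common low-latency starting point.
int32_t bufferSizeForBursts(int32_t numBursts, int32_t framesPerBurst) {
    return numBursts * framesPerBurst;
}

// The latency contributed by the buffer, in milliseconds.
double bufferLatencyMs(int32_t bufferSizeInFrames, int32_t sampleRate) {
    return 1000.0 * bufferSizeInFrames / sampleRate;
}
```

For example, with 96-frame bursts at 48000 Hz, a two-burst buffer is 192 frames, adding 4 ms of latency.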
## Thread safety

The Oboe API is not completely [thread safe](https://en.wikipedia.org/wiki/Thread_safety).
You cannot call some of the Oboe functions concurrently from more than one thread at a time.
This is because Oboe avoids using mutexes, which can cause thread preemption and glitches.

To be safe, don't call `waitForStateChange()` or read or write to the same stream from two different threads. Similarly, don't close a stream in one thread while reading or writing to it in another thread.

Calls that return stream settings, like `AudioStream::getSampleRate()` and `AudioStream::getChannelCount()`, are thread safe.

These calls are also thread safe:

* `convertToText()`
* `AudioStream::get*()` except for `getTimestamp()` and `getState()`

**Note:** When a stream uses an error callback, it's safe to read/write from the callback thread while also closing the stream from the thread in which it is running.

## Code samples

Code samples are available in the [samples folder](../samples).

## Known Issues

The following methods are defined, but will return `Result::ErrorUnimplemented` for OpenSLES streams:

* `getFramesRead()`
* `getFramesWritten()`
* `getTimestamp()`

Additionally, `setDeviceId()` will not be respected by OpenSLES streams.
572