# Get started with microcontrollers

This document will help you get started using TensorFlow Lite for
Microcontrollers. It explains how to run the framework's example applications,
then walks through the code for a simple application that runs inference on a
microcontroller.

## Get a supported device

To follow this guide, you'll need a supported hardware device. The example
application we'll be using has been tested on the following devices:

*   [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
    (using Arduino IDE)
*   [SparkFun Edge](https://www.sparkfun.com/products/15170) (building directly
    from source)
*   [STM32F746 Discovery kit](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
    (using Mbed)
*   [Adafruit EdgeBadge](https://www.adafruit.com/product/4400) (using Arduino
    IDE)
*   [Adafruit TensorFlow Lite for Microcontrollers Kit](https://www.adafruit.com/product/4317)
    (using Arduino IDE)
*   [Adafruit Circuit Playground Bluefruit](https://learn.adafruit.com/tensorflow-lite-for-circuit-playground-bluefruit-quickstart?view=all)
    (using Arduino IDE)

Learn more about supported platforms in
[TensorFlow Lite for Microcontrollers](index.md).

## Explore the examples

TensorFlow Lite for Microcontrollers comes with several example applications
that demonstrate its use for various tasks. At the time of writing, the
following are available:

*   [Hello World](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) -
    Demonstrates the absolute basics of using TensorFlow Lite for
    Microcontrollers
*   [Micro speech](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/micro_speech) -
    Captures audio with a microphone in order to detect the words "yes" and "no"
*   [Person detection](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/person_detection) -
    Captures camera data with an image sensor in order to detect the presence or
    absence of a person
*   [Magic wand](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/magic_wand) -
    Captures accelerometer data in order to classify three different physical
    gestures

Each example application has a `README.md` file that explains how it can be
deployed to its supported platforms.

The rest of this guide walks through the
[Hello World](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world)
example application.

## The Hello World example

This example is designed to demonstrate the absolute basics of using TensorFlow
Lite for Microcontrollers. It includes the full end-to-end workflow of training
a model, converting it for use with TensorFlow Lite, and running inference on a
microcontroller.

In the example, a model is trained to replicate a sine function. It takes a
single number as its input, and outputs the number's
[sine](https://en.wikipedia.org/wiki/Sine). When deployed to a microcontroller,
its predictions are used to either blink LEDs or control an animation.

The example includes the following:

*   A Jupyter notebook that demonstrates how the model is trained and converted
*   A C++11 application that runs inference using the model, tested to work
    with Arduino, SparkFun Edge, STM32F746G Discovery kit, and macOS
*   A unit test that demonstrates the process of running inference

### Run the example

To run the example on your device, walk through the instructions in the
`README.md`:

<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world/README.md">Hello
World README.md</a>

## How to run inference

The following section walks through the *Hello World* example's
[`hello_world_test.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/hello_world_test.cc),
which demonstrates how to run inference using TensorFlow Lite for
Microcontrollers.

The test loads the model and then uses it to run inference several times.

### Include the library headers

To use the TensorFlow Lite for Microcontrollers library, we must include the
following header files:

```C++
#include "tensorflow/lite/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
```

-   [`all_ops_resolver.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/kernels/all_ops_resolver.h)
    provides the operations used by the interpreter to run the model.
-   [`micro_error_reporter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/micro_error_reporter.h)
    outputs debug information.
-   [`micro_interpreter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/micro_interpreter.h)
    contains code to load and run models.
-   [`schema_generated.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema_generated.h)
    contains the schema for the TensorFlow Lite
    [`FlatBuffer`](https://google.github.io/flatbuffers/) model file format.
-   [`version.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/version.h)
    provides versioning information for the TensorFlow Lite schema.

### Include the model

The TensorFlow Lite for Microcontrollers interpreter expects the model to be
provided as a C++ array. In the *Hello World* example, the model is defined in
`sine_model_data.h` and `sine_model_data.cc`. The header is included with the
following line:

```C++
#include "tensorflow/lite/micro/examples/hello_world/sine_model_data.h"
```

### Set up the unit test

The code we are walking through is a unit test that uses the TensorFlow Lite for
Microcontrollers unit test framework. To load the framework, we include the
following file:

```C++
#include "tensorflow/lite/micro/testing/micro_test.h"
```

The test is defined using the following macros:

```C++
TF_LITE_MICRO_TESTS_BEGIN

TF_LITE_MICRO_TEST(LoadModelAndPerformInference) {
```

The remainder of the code demonstrates how to load the model and run inference.

### Set up logging

To set up logging, a `tflite::ErrorReporter` pointer is created using a pointer
to a `tflite::MicroErrorReporter` instance:

```C++
tflite::MicroErrorReporter micro_error_reporter;
tflite::ErrorReporter* error_reporter = &micro_error_reporter;
```

This variable will be passed into the interpreter, which allows it to write
logs. Since microcontrollers often have a variety of mechanisms for logging, the
implementation of `tflite::MicroErrorReporter` is designed to be customized for
your particular device.
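
As a minimal sketch of that customization point: on most platforms the
formatted log text ultimately reaches a `DebugLog` function that each device
port defines for itself (see your platform's `debug_log.cc`). Here,
`uart_write_string` is a hypothetical device driver function, not part of
TensorFlow Lite, and the exact hook may differ between library versions:

```C++
// Platform-specific DebugLog implementation. The error reporter's
// formatted output ends up here, so forwarding the string to your
// board's serial port makes error_reporter->Report(...) visible.
// uart_write_string is a hypothetical device function.
extern "C" void DebugLog(const char* s) {
  uart_write_string(s);
}
```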

### Load a model

In the following code, the model is instantiated using data from a `char` array,
`g_sine_model_data`, which is declared in `sine_model_data.h`. We then check the
model to ensure its schema version is compatible with the version we are using:

```C++
const tflite::Model* model = ::tflite::GetModel(g_sine_model_data);
if (model->version() != TFLITE_SCHEMA_VERSION) {
  error_reporter->Report(
      "Model provided is schema version %d not equal "
      "to supported version %d.\n",
      model->version(), TFLITE_SCHEMA_VERSION);
}
```

### Instantiate operations resolver

An
[`AllOpsResolver`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/kernels/all_ops_resolver.h)
instance is declared. This will be used by the interpreter to access the
operations that are used by the model:

```C++
tflite::ops::micro::AllOpsResolver resolver;
```

The `AllOpsResolver` loads all of the operations available in TensorFlow Lite
for Microcontrollers, which uses a lot of memory. Since a given model will only
use a subset of these operations, it's recommended that real-world applications
load only the operations that are needed.

This is done using a different class, `MicroMutableOpResolver`. You can see how
to use it in the *Micro speech* example's
[`micro_speech_test.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/micro_speech_test.cc).
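
As a rough sketch of the idea, assuming the registration API used by the
examples at the time of writing (the exact calls may differ between library
versions), a resolver for a small fully connected model might look like:

```C++
// Register only the operations the model actually needs. The Hello
// World model is a small fully connected network, so one registration
// may suffice; your model's op list will differ.
static tflite::MicroMutableOpResolver micro_op_resolver;
micro_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_FULLY_CONNECTED,
    tflite::ops::micro::Register_FULLY_CONNECTED());
```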

### Allocate memory

We need to preallocate a certain amount of memory for input, output, and
intermediate arrays. This is provided as a `uint8_t` array of size
`tensor_arena_size`:

```C++
const int tensor_arena_size = 2 * 1024;
uint8_t tensor_arena[tensor_arena_size];
```

The size required will depend on the model you are using, and may need to be
determined by experimentation.

### Instantiate interpreter

We create a `tflite::MicroInterpreter` instance, passing in the variables
created earlier:

```C++
tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                     tensor_arena_size, error_reporter);
```

### Allocate tensors

We tell the interpreter to allocate memory from the `tensor_arena` for the
model's tensors:

```C++
interpreter.AllocateTensors();
```
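
`AllocateTensors()` returns a `TfLiteStatus`, and the call fails when the
arena is too small for the model. A defensive version of the call above might
check the result, for example:

```C++
// Check that tensor allocation succeeded; a too-small tensor_arena is
// a common cause of failure here.
TfLiteStatus allocate_status = interpreter.AllocateTensors();
if (allocate_status != kTfLiteOk) {
  error_reporter->Report("AllocateTensors() failed\n");
}
```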

### Validate input shape

The `MicroInterpreter` instance can provide us with a pointer to the model's
input tensor by calling `.input(0)`, where `0` represents the first (and only)
input tensor:

```C++
// Obtain a pointer to the model's input tensor
TfLiteTensor* input = interpreter.input(0);
```

We then inspect this tensor to confirm that its shape and type are what we are
expecting:

```C++
// Make sure the input has the properties we expect
TF_LITE_MICRO_EXPECT_NE(nullptr, input);
// The property "dims" tells us the tensor's shape. It has one element for
// each dimension. Our input is a 2D tensor containing 1 element, so "dims"
// should have size 2.
TF_LITE_MICRO_EXPECT_EQ(2, input->dims->size);
// The value of each element gives the length of the corresponding tensor.
// We should expect two single element tensors (one is contained within the
// other).
TF_LITE_MICRO_EXPECT_EQ(1, input->dims->data[0]);
TF_LITE_MICRO_EXPECT_EQ(1, input->dims->data[1]);
// The input is a 32 bit floating point value
TF_LITE_MICRO_EXPECT_EQ(kTfLiteFloat32, input->type);
```

The enum value `kTfLiteFloat32` is a reference to one of the TensorFlow Lite
data types, and is defined in
[`common.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h).

### Provide an input value

To provide an input to the model, we set the contents of the input tensor, as
follows:

```C++
input->data.f[0] = 0.;
```

In this case, we input a floating point value representing `0`.

### Run the model

To run the model, we can call `Invoke()` on our `tflite::MicroInterpreter`
instance:

```C++
TfLiteStatus invoke_status = interpreter.Invoke();
if (invoke_status != kTfLiteOk) {
  error_reporter->Report("Invoke failed\n");
}
```

We can check the return value, a `TfLiteStatus`, to determine if the run was
successful. The possible values of `TfLiteStatus`, defined in
[`common.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h),
are `kTfLiteOk` and `kTfLiteError`.

The following code asserts that the value is `kTfLiteOk`, meaning inference was
run successfully.

```C++
TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, invoke_status);
```

### Obtain the output

The model's output tensor can be obtained by calling `output(0)` on the
`tflite::MicroInterpreter`, where `0` represents the first (and only) output
tensor.

In the example, the model's output is a single floating point value contained
within a 2D tensor:

```C++
TfLiteTensor* output = interpreter.output(0);
TF_LITE_MICRO_EXPECT_EQ(2, output->dims->size);
TF_LITE_MICRO_EXPECT_EQ(1, output->dims->data[0]);
TF_LITE_MICRO_EXPECT_EQ(1, output->dims->data[1]);
TF_LITE_MICRO_EXPECT_EQ(kTfLiteFloat32, output->type);
```

We can read the value directly from the output tensor and assert that it is what
we expect:

```C++
// Obtain the output value from the tensor
float value = output->data.f[0];
// Check that the output value is within 0.05 of the expected value
TF_LITE_MICRO_EXPECT_NEAR(0., value, 0.05);
```

### Run inference again

The remainder of the code runs inference several more times. In each instance,
we assign a value to the input tensor, invoke the interpreter, and read the
result from the output tensor:

```C++
input->data.f[0] = 1.;
interpreter.Invoke();
value = output->data.f[0];
TF_LITE_MICRO_EXPECT_NEAR(0.841, value, 0.05);

input->data.f[0] = 3.;
interpreter.Invoke();
value = output->data.f[0];
TF_LITE_MICRO_EXPECT_NEAR(0.141, value, 0.05);

input->data.f[0] = 5.;
interpreter.Invoke();
value = output->data.f[0];
TF_LITE_MICRO_EXPECT_NEAR(-0.959, value, 0.05);
```

### Read the application code

Once you have walked through this unit test, you should be able to understand
the example's application code, located in
[`main_functions.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/main_functions.cc).
It follows a similar process, but generates an input value based on how many
inferences have been run, and calls a device-specific function that displays the
model's output to the user.

## Next steps

To understand how the library can be used with a variety of models and
applications, we recommend deploying the other examples and walking through
their code.

<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples">Example
applications on GitHub</a>

To learn how to use the library in your own project, read
[Understand the C++ library](library.md).

For information about training and converting models for deployment on
microcontrollers, read [Build and convert models](build_convert.md).