## Accuracy evaluation for ILSVRC 2012 (ImageNet Large Scale Visual Recognition Challenge) image classification task

This binary evaluates the accuracy of TFLite models trained for the
[ILSVRC 2012 image classification task](http://www.image-net.org/challenges/LSVRC/2012/).
The binary takes the path to validation images and labels as inputs, and
outputs the accuracy after running the TFLite model on the validation set.

To run the binary, download the ILSVRC 2012 devkit (see the
[instructions](#downloading-ilsvrc)) and run the
[`generate_validation_ground_truth` script](#ground-truth-label-generation) to
generate the ground truth labels.

## Parameters
The binary takes the following required parameters:

*   `model_file`: `string` \
    Path to the TFLite model file.

*   `ground_truth_images_path`: `string` \
    Path to the directory containing the ground truth images.

*   `ground_truth_labels`: `string` \
    Path to the ground truth labels file. This file should contain the same
    number of labels as the number of images in the ground truth directory. The
    labels are assumed to be in the same order as the sorted filenames of the
    images. See the
    [ground truth label generation](#ground-truth-label-generation) section for
    more information about how to generate labels for images.

*   `model_output_labels`: `string` \
    Path to the file containing the labels used to interpret the output of the
    model, e.g. for MobileNet models this is the path to
    `mobilenet_labels.txt`, where each label appears in the same order as the
    model's 1001-dimensional output tensor.

*   `output_file_path`: `string` \
    Path to the output file. The output is a CSV file with the top-k accuracies
    (top-10 by default) in each row. Each line of the output file is the
    cumulative accuracy after processing the images in sorted order: the first
    line is the accuracy after processing the first image, the second line is
    the accuracy after processing the first two images, and the last line is
    the accuracy after processing the entire validation set.

and the following optional parameters:

*   `blacklist_file_path`: `string` \
    Path to the blacklist file. This file contains the indices of images that
    are blacklisted for evaluation; 1762 images are blacklisted in the ILSVRC
    dataset. For details, please refer to the readme.txt of the ILSVRC 2014
    devkit.

*   `num_images`: `int` (default=0) \
    The number of images to process. If 0, all images in the directory are
    processed; otherwise only `num_images` images are processed.

*   `num_threads`: `int` (default=4) \
    The number of threads to use for evaluation. Note: this does not change the
    number of TFLite Interpreter threads, but shards the dataset to speed up
    evaluation.

*   `proto_output_file_path`: `string` \
    Optionally, the computed accuracies can be output to a file as a
    string-serialized instance of `tflite::evaluation::TopkAccuracyEvalMetrics`.

*   `num_ranks`: `int` (default=10) \
    The number of top-K accuracies to return. For example, if `num_ranks=5`,
    the top-1 to top-5 accuracy fractions are returned.

The following optional parameters can be used to modify the inference runtime:

*   `num_interpreter_threads`: `int` (default=1) \
    The number of threads used by the TFLite Interpreter for inference.

*   `delegate`: `string` \
    If provided, tries to use the specified delegate for accuracy evaluation.
    Valid values: "nnapi", "gpu".
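
For reference, an illustrative invocation that combines several of the optional
flags might look like the following; every `/path/to/...` value is a
placeholder, and the concrete Android and desktop commands are shown under
[Running the binary](#running-the-binary) below.

```
# Illustrative only: all /path/to/... values are placeholders.
imagenet_accuracy_eval \
  --model_file=/path/to/model.tflite \
  --ground_truth_images_path=/path/to/ilsvrc_images \
  --ground_truth_labels=/path/to/validation_labels.txt \
  --model_output_labels=/path/to/model_output_labels.txt \
  --output_file_path=/tmp/accuracy_output.txt \
  --blacklist_file_path=/path/to/blacklist.txt \
  --num_images=1000 \
  --num_threads=4 \
  --num_interpreter_threads=2 \
  --num_ranks=5
```
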
## Downloading ILSVRC
In order to use this tool to run evaluation on the full 50K ImageNet dataset,
download the dataset from http://image-net.org/request.

## Ground truth label generation
The ILSVRC 2012 devkit `validation_ground_truth.txt` contains IDs that
correspond to the synset of each image. The accuracy binary, however, expects
the ground truth labels to contain the actual category names instead of synset
IDs. A conversion script is provided to convert the validation ground truth to
category labels. The `validation_ground_truth.txt` can be converted with the
following steps:

```
ILSVRC_2012_DEVKIT_DIR=[set to path to ILSVRC 2012 devkit]
VALIDATION_LABELS=[set to path to output]

python generate_validation_labels.py \
  --ilsvrc_devkit_dir=${ILSVRC_2012_DEVKIT_DIR} \
  --validation_labels_output=${VALIDATION_LABELS}
```
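
Because the binary assumes that the i-th label corresponds to the i-th image in
sorted filename order, a quick sanity check before evaluation is to compare the
number of generated labels against the number of validation images. A minimal
sketch, assuming one label per line and `IMAGENET_IMAGES_DIR` pointing at the
validation images (as in the Android steps below):

```
# The two counts should match; a mismatch usually means the label file and the
# image directory are out of sync.
ls ${IMAGENET_IMAGES_DIR} | wc -l
wc -l < ${VALIDATION_LABELS}
```
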
## Running the binary

### On Android

(0) Refer to https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android for configuring NDK and SDK.

(1) Build using the following command:

```
bazel build -c opt \
  --config=android_arm \
  //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval
```
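
If your device is 64-bit ARM, the analogous `android_arm64` Bazel config can
typically be used instead (the config name here is an assumption based on
TensorFlow's standard Android build configs); the remaining steps are
unchanged:

```
bazel build -c opt \
  --config=android_arm64 \
  //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval
```
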
(2) Connect your phone. Push the binary to your phone with adb push
    (make the directory if required):

```
adb push bazel-bin/tensorflow/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval /data/local/tmp
```

(3) Make the binary executable.

```
adb shell chmod +x /data/local/tmp/imagenet_accuracy_eval
```

(4) Push the TFLite model that you need to test. For example:

```
adb push mobilenet_quant_v1_224.tflite /data/local/tmp
```

(5) Push the ImageNet images to the device. Make sure the device has sufficient storage available before pushing the dataset:

```
adb shell mkdir /data/local/tmp/ilsvrc_images && \
adb push ${IMAGENET_IMAGES_DIR} /data/local/tmp/ilsvrc_images
```
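
Before pushing the full 50K-image dataset, it can be worth confirming that the
device really has enough free space. A quick check (the exact output format
varies across Android versions):

```
adb shell df /data/local/tmp
```
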
(6) Push the generated validation ground truth labels to the device.

```
adb push ${VALIDATION_LABELS} /data/local/tmp/ilsvrc_validation_labels.txt
```

(7) Push the model labels text file to the device.

```
adb push ${MODEL_LABELS_TXT} /data/local/tmp/model_output_labels.txt
```

(8) Run the binary.

```
adb shell /data/local/tmp/imagenet_accuracy_eval \
  --model_file=/data/local/tmp/mobilenet_quant_v1_224.tflite \
  --ground_truth_images_path=/data/local/tmp/ilsvrc_images \
  --ground_truth_labels=/data/local/tmp/ilsvrc_validation_labels.txt \
  --model_output_labels=/data/local/tmp/model_output_labels.txt \
  --output_file_path=/data/local/tmp/accuracy_output.txt \
  --num_images=0 # Run on all images.
```
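
The results stay on the device, so once the run finishes the output file can be
pulled back to the host; as described for `output_file_path` above, its last
line holds the cumulative accuracy over the entire validation set:

```
adb pull /data/local/tmp/accuracy_output.txt .
tail -n 1 accuracy_output.txt
```
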
### On Desktop

(1) Build and run using the following command:

```
bazel run -c opt \
  --cxxopt='--std=c++11' \
  -- \
  //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval \
  --model_file=mobilenet_quant_v1_224.tflite \
  --ground_truth_images_path=${IMAGENET_IMAGES_DIR} \
  --ground_truth_labels=${VALIDATION_LABELS} \
  --model_output_labels=${MODEL_LABELS_TXT} \
  --output_file_path=/tmp/accuracy_output.txt \
  --num_images=0 # Run on all images.
```
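
Since each line of the output CSV is the cumulative accuracy up to that image,
the final top-k numbers for the whole run can be read from the last line:

```
tail -n 1 /tmp/accuracy_output.txt
```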