Demonstrations of biolatency, the Linux eBPF/bcc version.


biolatency traces block device I/O (disk I/O), and records the distribution
of I/O latency (time), printing this as a histogram when Ctrl-C is hit.
For example:

# ./biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs           : count     distribution
       0 -> 1        : 0        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 0        |                                      |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 1        |                                      |
     128 -> 255      : 12       |********                              |
     256 -> 511      : 15       |**********                            |
     512 -> 1023     : 43       |*******************************       |
    1024 -> 2047     : 52       |**************************************|
    2048 -> 4095     : 47       |**********************************    |
    4096 -> 8191     : 52       |**************************************|
    8192 -> 16383    : 36       |**************************            |
   16384 -> 32767    : 15       |**********                            |
   32768 -> 65535    : 2        |*                                     |
   65536 -> 131071   : 2        |*                                     |

The latency of the disk I/O is measured from when the I/O is issued to the
device until its completion. A -Q option can be used to also include the time
queued in the kernel.

This example output shows a large mode of latency from about 128 microseconds
to about 32767 microseconds (33 milliseconds). The bulk of the I/O was
between 1 and 8 ms, which is the expected block device latency for
rotational storage devices.

The highest latency seen while tracing was between 65 and 131 milliseconds:
the last row printed, for which there were 2 I/Os.

For efficiency, biolatency uses an in-kernel eBPF map to store timestamps
with requests, and another in-kernel map to store the histogram (the "count"
column), which is copied to user space only when output is printed. These
methods lower the performance overhead when tracing is performed.
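The power-of-2 bucketing visible in the output rows can be sketched in plain
Python (a simplified illustration of log2 histogram indexing; these helper
names are hypothetical, not part of biolatency's actual code):

```python
def bucket_index(usecs):
    """Map a latency in microseconds to its power-of-2 histogram slot.

    Slot 0 covers 0 -> 1, slot 1 covers 2 -> 3, slot 2 covers 4 -> 7, etc.
    """
    return max(usecs.bit_length() - 1, 0)

def bucket_range(slot):
    """Return the (low, high) bounds printed for a histogram slot."""
    if slot == 0:
        return (0, 1)
    return (1 << slot, (1 << (slot + 1)) - 1)

# A 1500 us I/O falls in the 1024 -> 2047 row:
assert bucket_range(bucket_index(1500)) == (1024, 2047)
```

Storing only one counter per power-of-2 slot is what keeps the in-kernel
map small and the per-I/O work cheap.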


In the following example, the -m option is used to print a histogram using
milliseconds as the units (which eliminates the first several rows), -T to
print timestamps with the output, and to print 1 second summaries 5 times:

# ./biolatency -mT 1 5
Tracing block device I/O... Hit Ctrl-C to end.

06:20:16
     msecs           : count     distribution
       0 -> 1        : 36       |**************************************|
       2 -> 3        : 1        |*                                     |
       4 -> 7        : 3        |***                                   |
       8 -> 15       : 17       |*****************                     |
      16 -> 31       : 33       |**********************************    |
      32 -> 63       : 7        |*******                               |
      64 -> 127      : 6        |******                                |

06:20:17
     msecs           : count     distribution
       0 -> 1        : 96       |************************************  |
       2 -> 3        : 25       |*********                             |
       4 -> 7        : 29       |***********                           |
       8 -> 15       : 62       |***********************               |
      16 -> 31       : 100      |**************************************|
      32 -> 63       : 62       |***********************               |
      64 -> 127      : 18       |******                                |

06:20:18
     msecs           : count     distribution
       0 -> 1        : 68       |*************************             |
       2 -> 3        : 76       |****************************          |
       4 -> 7        : 20       |*******                               |
       8 -> 15       : 48       |*****************                     |
      16 -> 31       : 103      |**************************************|
      32 -> 63       : 49       |******************                    |
      64 -> 127      : 17       |******                                |

06:20:19
     msecs           : count     distribution
       0 -> 1        : 522      |*************************************+|
       2 -> 3        : 225      |****************                      |
       4 -> 7        : 38       |**                                    |
       8 -> 15       : 8        |                                      |
      16 -> 31       : 1        |                                      |

06:20:20
     msecs           : count     distribution
       0 -> 1        : 436      |**************************************|
       2 -> 3        : 106      |*********                             |
       4 -> 7        : 34       |**                                    |
       8 -> 15       : 19       |*                                     |
      16 -> 31       : 1        |                                      |

This shows how the I/O latency distribution changes over time.



The -Q option begins measuring I/O latency from when the request was first
queued in the kernel, and includes queuing latency:

# ./biolatency -Q
Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs           : count     distribution
       0 -> 1        : 0        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 0        |                                      |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 0        |                                      |
     128 -> 255      : 3        |*                                     |
     256 -> 511      : 37       |**************                        |
     512 -> 1023     : 30       |***********                           |
    1024 -> 2047     : 18       |*******                               |
    2048 -> 4095     : 22       |********                              |
    4096 -> 8191     : 14       |*****                                 |
    8192 -> 16383    : 48       |*******************                   |
   16384 -> 32767    : 96       |**************************************|
   32768 -> 65535    : 31       |************                          |
   65536 -> 131071   : 26       |**********                            |
  131072 -> 262143   : 12       |****                                  |

This better reflects the latency suffered by the application (if it is
synchronous I/O), whereas the default mode without kernel queueing better
reflects the performance of the device.

Note that the storage device (and storage device controller) usually have
queues of their own, which are always included in the latency, with or
without -Q.
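The difference between the two modes can be sketched with hypothetical
timestamps (the function and variable names below are illustrative, not
biolatency's internals):

```python
def latencies_us(queued_ts, issued_ts, completed_ts):
    """Given microsecond timestamps for one request, return both measures.

    Default mode: issue -> completion (device latency).
    -Q mode:      queue insert -> completion (includes kernel queueing).
    """
    device_latency = completed_ts - issued_ts
    queued_latency = completed_ts - queued_ts
    return device_latency, queued_latency

# A request queued at t=0 us, issued at t=500 us, completed at t=2500 us
# shows 2000 us of device latency, but 2500 us once queueing is included:
device, total = latencies_us(0, 500, 2500)
```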


The -D option prints a histogram per disk. For example:

# ./biolatency -D
Tracing block device I/O... Hit Ctrl-C to end.
^C

Bucket disk = 'xvdb'
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 1        |                                        |
       256 -> 511        : 33       |**********************                  |
       512 -> 1023       : 36       |************************                |
      1024 -> 2047       : 58       |****************************************|
      2048 -> 4095       : 51       |***********************************     |
      4096 -> 8191       : 21       |**************                          |
      8192 -> 16383      : 2        |*                                       |

Bucket disk = 'xvdc'
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 1        |                                        |
       256 -> 511        : 38       |***********************                 |
       512 -> 1023       : 42       |*************************               |
      1024 -> 2047       : 66       |****************************************|
      2048 -> 4095       : 40       |************************                |
      4096 -> 8191       : 14       |********                                |

Bucket disk = 'xvda1'
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 0        |                                        |
       256 -> 511        : 18       |**********                              |
       512 -> 1023       : 67       |*************************************   |
      1024 -> 2047       : 35       |*******************                     |
      2048 -> 4095       : 71       |****************************************|
      4096 -> 8191       : 65       |************************************    |
      8192 -> 16383      : 65       |************************************    |
     16384 -> 32767      : 20       |***********                             |
     32768 -> 65535      : 7        |***                                     |

This output shows that xvda1 has much higher latency, usually between 0.5 ms
and 32 ms, whereas xvdc is usually between 0.2 ms and 4 ms.

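Per-disk histograms like these amount to keying one histogram per device
name. That aggregation can be sketched in plain Python (a simplified
user-space model, not the in-kernel BPF implementation; `aggregate` is a
hypothetical helper):

```python
from collections import defaultdict

def aggregate(events):
    """events: iterable of (disk_name, latency_us) pairs.

    Returns {disk: {slot: count}}, one power-of-2 histogram per disk.
    """
    hists = defaultdict(lambda: defaultdict(int))
    for disk, usecs in events:
        slot = max(usecs.bit_length() - 1, 0)  # log2 bucket index
        hists[disk][slot] += 1
    return hists

# 700 and 900 us both fall in the 512 -> 1023 row (slot 9);
# 5000 us falls in the 4096 -> 8191 row (slot 12):
h = aggregate([("xvdb", 700), ("xvdb", 900), ("xvda1", 5000)])
```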


USAGE message:

# ./biolatency -h
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]

Summarize block device I/O latency as a histogram

positional arguments:
  interval            output interval, in seconds
  count               number of outputs

optional arguments:
  -h, --help          show this help message and exit
  -T, --timestamp     include timestamp on output
  -Q, --queued        include OS queued time in I/O time
  -m, --milliseconds  millisecond histogram
  -D, --disks         print a histogram per disk device

examples:
    ./biolatency            # summarize block I/O latency as a histogram
    ./biolatency 1 10       # print 1 second summaries, 10 times
    ./biolatency -mT 1      # 1s summaries, milliseconds, and timestamps
    ./biolatency -Q         # include OS queued time in I/O time
    ./biolatency -D         # show each disk device separately