Demonstrations of biotop, the Linux eBPF/bcc version.


Short for block device I/O top, biotop summarizes which processes are
performing disk I/O. It's top for disks. Sample output:

# ./biotop
Tracing... Output every 1 secs. Hit Ctrl-C to end

08:04:11 loadavg: 1.48 0.87 0.45 1/287 14547

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
14501  cksum            R 202 1   xvda1      361   28832   3.39
6961   dd               R 202 1   xvda1     1628   13024   0.59
13855  dd               R 202 1   xvda1     1627   13016   0.59
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   3.00
1880   supervise        W 202 1   xvda1        2       8   6.71
1873   supervise        W 202 1   xvda1        2       8   2.51
1871   supervise        W 202 1   xvda1        2       8   1.57
1876   supervise        W 202 1   xvda1        2       8   1.22
1892   supervise        W 202 1   xvda1        2       8   0.62
1878   supervise        W 202 1   xvda1        2       8   0.78
1886   supervise        W 202 1   xvda1        2       8   1.30
1894   supervise        W 202 1   xvda1        2       8   3.46
1869   supervise        W 202 1   xvda1        2       8   0.73
1888   supervise        W 202 1   xvda1        2       8   1.48

By default the screen refreshes every 1 second, and shows the top 20 disk
consumers, sorted on total Kbytes. The first line printed is the header,
which has the time and then the contents of /proc/loadavg.
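
As an illustration of that header format, here is a minimal Python sketch
that assembles the same kind of line; the format is inferred from the output
above, not taken from biotop's source:

    from time import strftime

    # Print the current time followed by the single line of /proc/loadavg,
    # mimicking the "08:04:11 loadavg: ..." header seen above.
    with open("/proc/loadavg") as f:
        print("%s loadavg: %s" % (strftime("%H:%M:%S"), f.read().strip()))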

For the interval summarized by the output above, the "cksum" command performed
361 disk reads to the "xvda1" device, for a total of 28832 Kbytes, with an
average I/O time of 3.39 ms. Two "dd" processes were also reading from the
same disk, with a higher I/O rate and lower latency. While the average I/O
size is not printed, it can be determined by dividing the Kbytes column by
the I/O column.
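
For example, the cksum row above averages 28832 / 361, or about 80 Kbytes
per read, while the dd rows average 8 Kbytes per read. A small, hypothetical
Python helper for the same arithmetic:

    def avg_io_size_kbytes(kbytes, ios):
        """Average I/O size for one biotop row: Kbytes column / I/O column."""
        return float(kbytes) / ios

    print(avg_io_size_kbytes(28832, 361))   # cksum row: ~79.9 Kbytes
    print(avg_io_size_kbytes(13024, 1628))  # dd row: 8.0 Kbytes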

The columns through to Kbytes show the workload applied. The final column,
AVGms, shows resulting performance. Other bcc tools can be used to get more
details when needed: biolatency and biosnoop.

Many years ago I created the original "iotop", and later regretted not calling
it diskiotop or blockiotop, as "io" alone is ambiguous. This time it is biotop.


The -C option can be used to prevent the screen from clearing (my preference).
Here's using it with a 5 second interval:

# ./biotop -C 5
Tracing... Output every 5 secs. Hit Ctrl-C to end

08:09:44 loadavg: 0.42 0.44 0.39 2/282 22115

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1     5993   47976   0.33
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.67
1866   svscan           R 202 1   xvda1       33     132   1.24
1880   supervise        W 202 1   xvda1       10      40   0.56
1873   supervise        W 202 1   xvda1       10      40   0.79
1871   supervise        W 202 1   xvda1       10      40   0.78
1876   supervise        W 202 1   xvda1       10      40   0.68
1892   supervise        W 202 1   xvda1       10      40   0.71
1878   supervise        W 202 1   xvda1       10      40   0.65
1886   supervise        W 202 1   xvda1       10      40   0.78
1894   supervise        W 202 1   xvda1       10      40   0.80
1869   supervise        W 202 1   xvda1       10      40   0.91
1888   supervise        W 202 1   xvda1       10      40   0.63
22069  bash             R 202 1   xvda1        1      16  19.94
9251   kworker/u16:2    W 202 16  xvdb         2       8   0.13

08:09:49 loadavg: 0.47 0.44 0.39 1/282 22231

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    13450  107600   0.35
22199  cksum            R 202 1   xvda1      941   45548   4.63
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.93
24467  kworker/0:2      W 202 16  xvdb         1      64   0.28
1880   supervise        W 202 1   xvda1       10      40   0.81
1873   supervise        W 202 1   xvda1       10      40   0.81
1871   supervise        W 202 1   xvda1       10      40   1.03
1876   supervise        W 202 1   xvda1       10      40   0.76
1892   supervise        W 202 1   xvda1       10      40   0.74
1878   supervise        W 202 1   xvda1       10      40   0.94
1886   supervise        W 202 1   xvda1       10      40   0.76
1894   supervise        W 202 1   xvda1       10      40   0.69
1869   supervise        W 202 1   xvda1       10      40   0.72
1888   supervise        W 202 1   xvda1       10      40   1.70
22199  bash             R 202 1   xvda1        2      20   0.35
482    xfsaild/md0      W 202 16  xvdb         5      13   0.27
482    xfsaild/md0      W 202 32  xvdc         2       8   0.33
31331  pickup           R 202 1   xvda1        1       4   0.31

08:09:54 loadavg: 0.51 0.45 0.39 2/282 22346

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    14689  117512   0.32
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.33
1880   supervise        W 202 1   xvda1       10      40   0.65
1873   supervise        W 202 1   xvda1       10      40   1.08
1871   supervise        W 202 1   xvda1       10      40   0.66
1876   supervise        W 202 1   xvda1       10      40   0.79
1892   supervise        W 202 1   xvda1       10      40   0.67
1878   supervise        W 202 1   xvda1       10      40   0.66
1886   supervise        W 202 1   xvda1       10      40   1.02
1894   supervise        W 202 1   xvda1       10      40   0.88
1869   supervise        W 202 1   xvda1       10      40   0.89
1888   supervise        W 202 1   xvda1       10      40   1.25

08:09:59 loadavg: 0.55 0.46 0.40 2/282 22461

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    14442  115536   0.33
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   3.46
1880   supervise        W 202 1   xvda1       10      40   0.87
1873   supervise        W 202 1   xvda1       10      40   0.87
1871   supervise        W 202 1   xvda1       10      40   0.78
1876   supervise        W 202 1   xvda1       10      40   0.86
1892   supervise        W 202 1   xvda1       10      40   0.89
1878   supervise        W 202 1   xvda1       10      40   0.87
1886   supervise        W 202 1   xvda1       10      40   0.86
1894   supervise        W 202 1   xvda1       10      40   1.06
1869   supervise        W 202 1   xvda1       10      40   1.12
1888   supervise        W 202 1   xvda1       10      40   0.98

08:10:04 loadavg: 0.59 0.47 0.40 3/282 22576

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    14179  113432   0.34
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.39
1880   supervise        W 202 1   xvda1       10      40   0.81
1873   supervise        W 202 1   xvda1       10      40   1.02
1871   supervise        W 202 1   xvda1       10      40   1.15
1876   supervise        W 202 1   xvda1       10      40   1.10
1892   supervise        W 202 1   xvda1       10      40   0.77
1878   supervise        W 202 1   xvda1       10      40   0.72
1886   supervise        W 202 1   xvda1       10      40   0.81
1894   supervise        W 202 1   xvda1       10      40   0.86
1869   supervise        W 202 1   xvda1       10      40   0.83
1888   supervise        W 202 1   xvda1       10      40   0.79
24467  kworker/0:2      R 202 32  xvdc         3      12   0.26
1056   cron             R 202 1   xvda1        2       8   0.30
24467  kworker/0:2      R 202 16  xvdb         1       4   0.23

08:10:09 loadavg: 0.54 0.46 0.40 2/281 22668

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1      250    2000   0.34
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.40
1880   supervise        W 202 1   xvda1        8      32   0.93
1873   supervise        W 202 1   xvda1        8      32   0.76
1871   supervise        W 202 1   xvda1        8      32   0.60
1876   supervise        W 202 1   xvda1        8      32   0.61
1892   supervise        W 202 1   xvda1        8      32   0.68
1878   supervise        W 202 1   xvda1        8      32   0.90
1886   supervise        W 202 1   xvda1        8      32   0.57
1894   supervise        W 202 1   xvda1        8      32   0.97
1869   supervise        W 202 1   xvda1        8      32   0.69
1888   supervise        W 202 1   xvda1        8      32   0.67

This shows another "dd" command reading from xvda1. On this system, the
various "supervise" processes each do two disk writes per second, every
second (they are creating and updating "status" files).
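
That rate follows from dividing the I/O column by the interval, for example
(a hypothetical snippet, using the 5 second interval from above):

    ios, interval_secs = 10, 5          # one supervise row: 10 writes in 5s
    print(ios / float(interval_secs))   # 2.0 writes per second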


USAGE message:

# ./biotop.py -h
usage: biotop.py [-h] [-C] [-r MAXROWS] [interval] [count]

Block device (disk) I/O by process

positional arguments:
  interval              output interval, in seconds
  count                 number of outputs

optional arguments:
  -h, --help            show this help message and exit
  -C, --noclear         don't clear the screen
  -r MAXROWS, --maxrows MAXROWS
                        maximum rows to print, default 20

examples:
    ./biotop            # block device I/O top, 1 second refresh
    ./biotop -C         # don't clear the screen
    ./biotop 5          # 5 second summaries
    ./biotop 5 10       # 5 second summaries, 10 times only
