$Id$

= profiling =

== what information is interesting? ==
* pipeline throughput
  if we know the CPU load for a given data stream, we can extrapolate what the
  system can handle
  -> qos profiling
* load distribution
  which element causes which CPU load / memory usage


= qos profiling =
* what data is needed?
  * (streamtime,proportion) pairs from sinks
    draw a graph with gnuplot or similar
  * number of frames in total
  * number of audio/video frames dropped from each element that supports QoS
  * could be expressed as a percentage of total frames

* query data (e.g. via gst-launch)
  * add a -r, --report option to gst-launch
  * while playing, capture QoS events to record 'streamtime,proportion' pairs
    gst_pad_add_event_probe(video_sink->sink_pad,handler,data)
    (see the sketch after this list)
  * during playback we would like to know when an element drops frames
    what about elements sending a qos_action message?
  * after EOS, send qos-queries to each element in the pipeline
    * the qos-query would return:
      number of frames rendered
      number of frames dropped
    * print a nice table with the results
      * QoS stats first
    * write a gnuplot data file
      * list of 'streamtime,proportion,<drop>' tuples
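A rough sketch of such a probe (GStreamer 0.10 style API, as in the bullet
above); 'handler', install_qos_probe() and the qos.dat file are illustrative,
not existing gst-launch code:

  #include <stdio.h>
  #include <gst/gst.h>

  static FILE *qos_log = NULL;

  static gboolean
  handler (GstPad * pad, GstEvent * event, gpointer user_data)
  {
    if (GST_EVENT_TYPE (event) == GST_EVENT_QOS) {
      gdouble proportion;
      GstClockTimeDiff diff;
      GstClockTime timestamp;

      gst_event_parse_qos (event, &proportion, &diff, &timestamp);
      /* one 'streamtime proportion' pair per line, ready for gnuplot */
      fprintf (qos_log, "%" G_GUINT64_FORMAT " %f\n",
          (guint64) timestamp, proportion);
    }
    return TRUE;                  /* TRUE = do not drop the event */
  }

  /* call once the pipeline is built, with the sink pad of the video sink */
  static void
  install_qos_probe (GstPad * sink_pad)
  {
    qos_log = fopen ("qos.dat", "w");
    gst_pad_add_event_probe (sink_pad, G_CALLBACK (handler), NULL);
  }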


= core profiling =
* the scheduler keeps a list of usecs each element's process function was
  running
* process functions are: loop, chain, get; they are driven by gst_pad_push()
  and gst_pad_pull_range()
* the scheduler keeps a sum of all times
* each gst-element has a profile_percentage field

* when going to play
  * the scheduler sets the sum and all usecs in the list to 0
* when handling an element (see the sketch at the end of this section)
  * remember the element's old usecs as t_old
  * take time t1
  * call the element's process function
  * take time t2
  * t_new=t2-t1
  * sum+=(t_new-t_old)
  * profile_percentage=t_new/sum;
  * should the percentage be averaged?
    * profile_percentage=(profile_percentage+(t_new/sum))/2.0;

* the profile_percentage shows how much CPU time the element uses in relation
  to the whole pipeline
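A possible shape of that bookkeeping; the profile fields and the scheduler hook
are hypothetical (this is a design idea, not existing core API), and
process_func stands in for the element's loop/chain/get function:

  #include <glib.h>

  static gint64 profile_sum = 0;        /* sum of the latest usecs per element */

  typedef struct {
    gint64 profile_usecs;               /* usecs of the last invocation */
    gdouble profile_percentage;         /* share of the whole pipeline */
  } ElementProfile;

  static void
  handle_element (ElementProfile * ep, void (*process_func) (void))
  {
    gint64 t_old = ep->profile_usecs;
    gint64 t1, t2, t_new;

    t1 = g_get_monotonic_time ();       /* usecs */
    process_func ();
    t2 = g_get_monotonic_time ();

    t_new = t2 - t1;
    profile_sum += t_new - t_old;       /* replace this element's old share */
    ep->profile_usecs = t_new;
    ep->profile_percentage = (gdouble) t_new / (gdouble) profile_sum;
    /* averaged variant, as questioned above:
     * ep->profile_percentage = (ep->profile_percentage +
     *     (gdouble) t_new / (gdouble) profile_sum) / 2.0; */
  }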

= rusage + pad-probes =
* check the getrusage() based CPU usage detection in buzztard
  together with pad probes this could give us decent application-level profiles
* different elements
  * 1:1 elements are easy to handle
  * 0:1 elements need a start timer
  * 1:0 elements need an end timer
  * n:1, 1:m and n:m type elements are tricky
    adapter-based elements might have a fluctuating usage in addition

  // result data
  struct {
    GstClockTime beg_min, beg_max;
    GstClockTime end_min, end_max;
  } profile_data;

  // install probes (pseudocode, the iterators need proper GstIterator handling)
  gst_bin_iterate_elements(pipeline)
    gst_element_iterate_pads(element)
      if (gst_pad_get_direction(pad)==GST_PAD_SRC)
        gst_pad_add_buffer_probe(pad,end_timer,&profile_data)
      else
        gst_pad_add_buffer_probe(pad,beg_timer,&profile_data)

  // listen to bus state-change messages to
  // * reset counters on NULL_TO_READY
  // * print results on READY_TO_NULL
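The probe callbacks could look roughly like this (0.10 style buffer probes);
ProfileData and the min/max bookkeeping are assumptions, and the fields would
be reset to GST_CLOCK_TIME_NONE on the NULL_TO_READY transition noted above:

  #include <gst/gst.h>

  typedef struct {
    GstClockTime beg_min, beg_max;      /* first/last time a buffer went in */
    GstClockTime end_min, end_max;      /* first/last time a buffer came out */
  } ProfileData;

  static gboolean
  beg_timer (GstPad * pad, GstBuffer * buffer, gpointer user_data)
  {
    ProfileData *pd = user_data;
    GstClockTime now = gst_util_get_timestamp ();

    if (pd->beg_min == GST_CLOCK_TIME_NONE || now < pd->beg_min)
      pd->beg_min = now;
    if (pd->beg_max == GST_CLOCK_TIME_NONE || now > pd->beg_max)
      pd->beg_max = now;
    return TRUE;                        /* keep the buffer flowing */
  }

  static gboolean
  end_timer (GstPad * pad, GstBuffer * buffer, gpointer user_data)
  {
    ProfileData *pd = user_data;
    GstClockTime now = gst_util_get_timestamp ();

    if (pd->end_min == GST_CLOCK_TIME_NONE || now < pd->end_min)
      pd->end_min = now;
    if (pd->end_max == GST_CLOCK_TIME_NONE || now > pd->end_max)
      pd->end_max = now;
    return TRUE;
  }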

= PerformanceMonitor =
Write an LD_PRELOAD library that can gather data from GStreamer and log it to
files (see the interposition sketch below). The idea is to avoid having to add
performance-measurement API to GStreamer itself.
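A minimal interposition sketch, assuming the library wraps gst_pad_push() via
dlsym(RTLD_NEXT, ...); the stderr output is only a placeholder for the logging
service described below:

  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdio.h>
  #include <gst/gst.h>

  GstFlowReturn
  gst_pad_push (GstPad * pad, GstBuffer * buffer)
  {
    static GstFlowReturn (*real_push) (GstPad *, GstBuffer *) = NULL;
    guint size = GST_BUFFER_SIZE (buffer);
    GstClockTime beg, end;
    GstFlowReturn ret;

    if (!real_push)
      real_push = (GstFlowReturn (*) (GstPad *, GstBuffer *))
          dlsym (RTLD_NEXT, "gst_pad_push");

    beg = gst_util_get_timestamp ();
    ret = real_push (pad, buffer);      /* buffer is consumed here */
    end = gst_util_get_timestamp ();

    /* sensor: bitrate and latency per link */
    fprintf (stderr, "%s:%s %" G_GUINT64_FORMAT " ns %u bytes\n",
        GST_DEBUG_PAD_NAME (pad), (guint64) (end - beg), size);
    return ret;
  }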

== Services ==
The library provides some common services used by the sensor modules.
* logging
* timestamps

== Sensors ==
Sensors take measurements and deliver timestamped performance data.
* bitrates and latency via gst_pad_push/pull per link
* qos ratio via gst_event_new_qos(), gst_pad_send_event()
* cpu/mem via getrusage()
  * when to sample (gst_clock_get_time)?
  * we want it per thread (see the sketch below)
* queue fill levels
* number of
  * threads
  * open files
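A possible per-thread cpu sensor, assuming Linux where getrusage() accepts
RUSAGE_THREAD (needs _GNU_SOURCE); the helper name is made up:

  #define _GNU_SOURCE
  #include <sys/resource.h>

  /* cpu time (user + system) consumed by the calling thread, in usecs;
   * each streaming thread has to sample itself */
  static long long
  thread_cpu_usecs (void)
  {
    struct rusage ru;

    if (getrusage (RUSAGE_THREAD, &ru) != 0)
      return -1;
    return (long long) (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) * 1000000LL +
        ru.ru_utime.tv_usec + ru.ru_stime.tv_usec;
  }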

== Wanted Sensors ==
* dropped buffers

== Log Format ==
* we have global data and data per {link,element,thread}

<timestamp> [<sensor-data>] [<sensor-data>]

* sample
timestamp [qos-ratio] [cpu-load={sum,17284,17285}]
00126437  [0.5]       [0.7,0.2,0.5]
00126437  [0.8]       [0.9,0.2,0.7]
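A sketch of emitting one line in the sample format above; the sensor values
and the file handling are placeholders:

  #include <stdio.h>
  #include <gst/gst.h>

  static void
  log_sample (FILE * log, GstClockTime ts, gdouble qos_ratio,
      gdouble cpu_sum, gdouble cpu_t1, gdouble cpu_t2)
  {
    /* <timestamp> [<sensor-data>] [<sensor-data>] */
    fprintf (log, "%08" G_GUINT64_FORMAT " [%.1f] [%.1f,%.1f,%.1f]\n",
        (guint64) ts, qos_ratio, cpu_sum, cpu_t1, cpu_t2);
  }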

* questions
** should we have the log config in the header or in some separate config?
   - if we use a separate config, we just specify it when capturing and put a
     reference to it in the first log line
   - otherwise the analyzer ui has to parse it from the first line

== Running ==
LD_PRELOAD=libgstperfmon.so GST_PERFMON_DETAILS="qos-ratio,cpu-load=all" <application>
LD_PRELOAD=libgstperfmon.so GST_PERFMON_DETAILS="qos-ratio,cpu-load=sum" <application>
LD_PRELOAD=libgstperfmon.so GST_PERFMON_DETAILS="*" <application>

== Exploration ==
PyGTK UI, matplotlib

== Ideas ==
* can be used in the media test suite as a monitor