
Tracing in Perfetto is an asynchronous multiple-writer single-reader pipeline.
The design principles of the tracing dataflow are:

* Highly optimized for low-overhead writing. NOT optimized for low-latency
reading.
* Trace data is eventually committed into the central trace buffer by the end
of the trace or when explicit flush requests are issued via the IPC channel.
* Producers are untrusted and should not be able to see each other's trace data,
as that would leak sensitive information across processes.
In the general case, there are two types of buffers involved in a trace. When
pulling data from the Linux kernel's ftrace infrastructure, a third
stage of buffering (one kernel ring buffer per CPU) is involved:
The central buffers are defined by the user in the
`buffers` section of the [trace config](config.md). In the most simple cases,
one tracing session uses one buffer, regardless of the number of data sources
and producers.
Data lands in these buffers from all producers, whether OS probes like
`traced_probes` or apps instrumented with the
[Perfetto SDK](/docs/instrumentation/tracing-sdk.md).
At the end of the trace (or during, if in [streaming mode]) these buffers are
written into the output trace file.
These buffers can contain a mixture of trace packets coming from different data
sources and even different producer processes. What-goes-where is defined in the
[buffers mapping section](config.md#dynamic-buffer-mapping) of the trace config.
Because producers never read back the central buffers, this design prevents
cross-talking and information leaking across producer processes.
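
For illustration, here is a minimal sketch of such a mapping built with the
Perfetto SDK's `TraceConfig` API (the two-buffer split and the sizes are
invented for the example; the data source names are real):

```c++
#include "perfetto/tracing.h"

// Two central buffers; each data source targets one of them by index.
perfetto::TraceConfig cfg;
cfg.add_buffers()->set_size_kb(4096);   // Buffer 0.
cfg.add_buffers()->set_size_kb(32768);  // Buffer 1.

auto* ds1 = cfg.add_data_sources()->mutable_config();
ds1->set_name("linux.process_stats");
ds1->set_target_buffer(0);  // process_stats -> buffer 0.

auto* ds2 = cfg.add_data_sources()->mutable_config();
ds2->set_name("linux.ftrace");
ds2->set_target_buffer(1);  // ftrace -> buffer 1.
```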
1. Zero-copy on the writer path. This buffer allows direct serialization of the
   trace data from the writer threads into a memory region that the tracing
   service can read.
2. Decoupling of writes (producer side) from reads (service side). The service has
   the job of moving trace packets from the shared memory buffer (blue) into the
   central buffer (yellow) as soon as possible.
When the `linux.ftrace` data source is enabled, the kernel writes events into its own
per-CPU buffers. These are unavoidable because the kernel cannot write directly
into user-space buffers. The `traced_probes` process will periodically read
those kernel buffers, convert the events into trace packets and write them
through its shared memory buffer, following the same dataflow described above.
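
The size of the kernel ring buffers and how often `traced_probes` drains them
are tunable through [FtraceConfig][FtraceConfig]. A sketch of the relevant
knobs, assuming the SDK's generated `TraceConfig` API (values are arbitrary):

```c++
perfetto::TraceConfig cfg;
cfg.add_buffers()->set_size_kb(32768);

auto* ds = cfg.add_data_sources()->mutable_config();
ds->set_name("linux.ftrace");
auto* ftrace = ds->mutable_ftrace_config();
ftrace->set_buffer_size_kb(2048);  // Size of each per-CPU kernel ring buffer.
ftrace->set_drain_period_ms(250);  // How often traced_probes drains them.
```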
## Life of a trace packet
Here is a summary of the dataflow of trace packets across buffers.
A data source grabs a thread-local chunk of the shared
memory buffer and directly serializes proto-encoded tracing data onto it.
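
From the data source's perspective, with the Perfetto SDK this step looks
roughly as follows (a sketch; `MyDataSource` is a hypothetical data source
class):

```c++
// NewTracePacket() returns a protozero message that encodes its fields
// directly into the current thread's chunk of the shared memory buffer.
MyDataSource::Trace([](MyDataSource::TraceContext ctx) {
  auto packet = ctx.NewTracePacket();
  packet->set_timestamp(42);
  packet->set_for_testing()->set_str("payload");  // No intermediate copy.
});
```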
In the
case of tracing without `write_into_file` (when the trace file is written only
at the end of the trace), the buffer will hold as much data as it has been
sized to hold, and, in ring-buffer mode, will keep overwriting the oldest data.
The total length of the trace will be `(buffer size) / (aggregated write rate)`.
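
For example (illustrative numbers): a 32 MB central buffer filled at an
aggregated write rate of 2 MB/s wraps after 32 / 2 = 16 seconds, so a
ring-buffer trace will cover roughly the last 16 seconds of activity.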
The aggregated write rate depends very much on the
activity of the system. 1-2 MB/s is a typical figure on Android traces with
scheduler tracing enabled, but it can grow much larger with more verbose data
sources.
How quickly the service drains the shared memory buffers depends on
the kernel configuration and nice-ness level of the `traced` process.
In most cases a shared memory
buffer size of 128-512 KB is good enough.
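
When a producer needs more, the `producers` section of
[TraceConfig][TraceConfig] can hint a larger shared memory buffer for it.
A sketch (the producer name is made up):

```c++
perfetto::TraceConfig cfg;
auto* producer = cfg.add_producers();
producer->set_producer_name("com.example.myapp");  // Hypothetical producer.
producer->set_shm_size_kb(1024);  // Request a 1 MB shared memory buffer.
```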
WARNING: if a data source writes very large trace packets in a single batch,
the shared memory buffer must be large enough to hold them, or the writer will
stall or lose data, depending on its buffer-exhausted policy. Consider the
example of a data source that takes screenshots:
```c++
ScreenshotDataSource::Trace([](ScreenshotDataSource::TraceContext ctx) {
  // ... serializes one multi-MB screenshot into a single trace packet ...
});
```
An aggressive data source can
create bursts of 2 MB back-to-back without yielding; it is limited only by how
quickly the service can drain the shared memory buffer.
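
With the Perfetto SDK, a data source that prefers to block the writer rather
than lose data when the shared memory buffer is exhausted can opt into
stalling. A sketch (`MyDataSource` is hypothetical):

```c++
class MyDataSource : public perfetto::DataSource<MyDataSource> {
 public:
  // Stall the writing thread instead of dropping packets when no free
  // shared memory chunk is available (the default policy drops).
  static constexpr auto kBufferExhaustedPolicy =
      perfetto::BufferExhaustedPolicy::kStall;
};
```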
Data can be lost in the
kernel -> userspace path if the `traced_probes` process gets blocked for too
long.
At the trace proto level, losses in this path are recorded in the
[FtraceCpuStats][FtraceCpuStats] messages, emitted both at the
beginning and end of the trace. If the `overrun` field is non-zero, data has
been lost.
```
name                 idx                  severity             source value
-------------------- -------------------- -------------------- ------ ------
ftrace_cpu_overrun_e 0                    data_loss            trace  0
ftrace_cpu_overrun_e 1                    data_loss            trace  0
ftrace_cpu_overrun_e 2                    data_loss            trace  0
ftrace_cpu_overrun_e 3                    data_loss            trace  0
ftrace_cpu_overrun_e 4                    data_loss            trace  0
ftrace_cpu_overrun_e 5                    data_loss            trace  0
ftrace_cpu_overrun_e 6                    data_loss            trace  0
ftrace_cpu_overrun_e 7                    data_loss            trace  0
```
At the trace proto level, losses in this path are recorded:
```
name                 idx                  severity             source    value
-------------------- -------------------- -------------------- --------- -----
traced_buf_trace_wri 0                    data_loss            trace     0
```
These losses are recorded, at the trace proto level, in the
[BufferStats][BufferStats] messages.
These losses are recorded, at the trace proto level, in the
[BufferStats][BufferStats] messages.
```
name                 idx                  severity             source  value
-------------------- -------------------- -------------------- ------- -----
traced_buf_chunks_di 0                    info                 trace   0
traced_buf_chunks_ov 0                    data_loss            trace   0
```
Summary: the best way to detect and debug data losses is to use Trace Processor
and issue the query `select * from stats where severity = 'data_loss' and value != 0`.
A "writer sequence" is the sequence of trace packets emitted by a given
data source (TraceWriter) in a producer process.
* Trace packets written from a sequence are emitted in the trace file in the
same order they have been written.
* There is no ordering guarantee across different sequences: even when packets
from two sequences have been written in
global timestamp order, the service can still emit them in the trace file in
the opposite order.
* Trace packets are atomic. If a trace packet is emitted in the trace file, it
is guaranteed to contain all the fields that the data source wrote. If a
trace packet is large and spans across several shared memory buffer pages, the
service will save it in the trace file only if it can observe that all
fragments have been committed without gaps.
* If a trace packet is lost (e.g. because of wrapping in the ring buffer
or losses in the shared memory buffer), no further trace packet will be
emitted for that sequence until the gap is made visible to readers (the next
packet on the sequence carries the [TracePacket][TracePacket]
`previous_packet_dropped` flag).
## Incremental state in trace packets
In many cases trace packets are fully independent of each other and can be
processed and interpreted in isolation. In other cases, though, some packets
depend on earlier packets on the same sequence. This is conceptually similar
to inter-frame video encoding techniques, where some frames require the keyframe
to be present in order to be decoded.
Ftrace events carry only thread ids (tids); to attribute an event to a process
one also needs to know the
parent process (the thread-group). To solve this, when both the `linux.ftrace`
and the `linux.process_stats` data sources are enabled in a
Perfetto trace, the latter does capture process<>thread associations from
the /proc pseudo-filesystem, whenever a new thread-id is seen by ftrace.
A typical trace in this case looks as follows:
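
The packets themselves were elided from this excerpt; the sketch below
illustrates the idea (field names follow [TracePacket][TracePacket] /
[FtraceEventBundle][FtraceEventBundle]; the values are invented):

```
# From the linux.process_stats data source:
packet {
  process_tree {
    processes { pid: 610  ppid: 1  cmdline: "/system/bin/surfaceflinger" }
    threads   { tid: 1175  tgid: 610 }
  }
}
# From the linux.ftrace data source:
packet {
  ftrace_events {
    cpu: 2
    event { sched_switch { prev_pid: 0  next_pid: 1175 } }
  }
}
```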
These association packets are emitted only once per thread, not repeated throughout
the trace. In the absence of data losses this is fine to be able to reconstruct all
thread<>process associations at import time.
2. The [Track Event library](/docs/instrumentation/track-events) in the Perfetto
SDK makes extensive use of string interning: event names and categories are
emitted once and then referenced by interning ids in later packets.
Trace Processor has a built-in mechanism that detects loss of interning data and
reports it as an import error.
When using tracing in ring-buffer mode, these types of losses are very likely to
happen: the packets that carry the initial state are the oldest in the buffer
and get overwritten first.
The [IncrStateConfig][IncrStateConfig] setting in the trace config makes data sources
periodically drop the interning / process mapping tables and re-emit the
incremental state. This mitigates the
problem in the context of ring-buffer traces, as long as the
clearing period is significantly shorter than the expected retention time
of trace data in the central trace buffer.
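
A sketch of enabling this periodic re-emission, assuming the incremental-state
knob exposed by the SDK's generated `TraceConfig` API (the 5 s period is
arbitrary):

```c++
perfetto::TraceConfig cfg;
// Ask data sources to drop and re-emit interning / mapping tables every 5 s.
cfg.mutable_incremental_state_config()->set_clear_period_ms(5000);
```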
## Flushes and windowed trace importing
One last aspect of the tracing dataflow
is the non-synchronous nature of trace commits. As explained in the
[Life of a trace packet](#life-of-a-trace-packet) section above, trace data is
committed into the central buffer asynchronously. Packets from a low-activity
data source can sit in its shared memory
buffer for very long times and can end up being committed in the trace buffer
long after they were written.
This can happen, for instance, when a
particular CPU is idle most of the time or gets hot-unplugged (ftrace uses
per-cpu buffers). In this case a CPU might record little-or-no data for several
minutes while the other CPUs pump thousands of new trace events per second.
Events in the trace file are eventually
sorted by timestamp at import time. The trace in this case will contain very
out-of-order events, which makes that sorting harder.
* When recording long traces, Trace Processor can show import errors of the form
"XXX event out-of-order". This is because, in order to limit the memory usage
at import time, Trace Processor sorts events using a sliding window. If trace
packets are too out-of-order (trace file order vs timestamp order), the
window-based sorting fails and the out-of-order events are dropped.
To avoid this, set
[`flush_period_ms`][TraceConfig] in the trace config (10-30 seconds is usually
good enough) so that all data sources are periodically flushed into the central
buffer.
By default, a flush is issued only at the end of the trace.
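
A sketch of the corresponding `TraceConfig` fields (30 s is just an
order-of-magnitude suggestion):

```c++
perfetto::TraceConfig cfg;
cfg.set_write_into_file(true);   // Stream the trace to a file while recording.
cfg.set_flush_period_ms(30000);  // Flush all data sources every 30 s.
```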
Alternatively, if the trace has already been recorded,
pass the `--full-sort` option to `trace_processor_shell` when importing the
trace. Doing so will disable the windowed sorting at the cost of a higher
memory usage (the trace file will be fully buffered in memory before parsing).
[streaming mode]: /docs/concepts/config#long-traces
[TraceConfig]: /docs/reference/trace-config-proto.autogen#TraceConfig
[FtraceConfig]: /docs/reference/trace-config-proto.autogen#FtraceConfig
[IncrStateConfig]: /docs/reference/trace-config-proto.autogen#FtraceConfig.IncrementalStateConfig
[FtraceCpuStats]: /docs/reference/trace-packet-proto.autogen#FtraceCpuStats
[FtraceEventBundle]: /docs/reference/trace-packet-proto.autogen#FtraceEventBundle
[TracePacket]: /docs/reference/trace-packet-proto.autogen#TracePacket
[BufferStats]: /docs/reference/trace-packet-proto.autogen#TraceStats.BufferStats