# SPDX-License-Identifier: GPL-2.0-only
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_ARGS
	bool
	help
	  If this is set, then the arguments and the stack can be found in
	  the pt_regs passed into the function callback's regs parameter
	  by default, even without setting the REGS flag in the ftrace_ops.
	  This allows for use of regs_get_kernel_argument() and
	  kernel_stack_pointer().

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc option -pg with -mfentry

config HAVE_NOP_MCOUNT
	bool
	help
	  Arch supports the gcc options -pg with -mrecord-mcount and -nop-mcount

config HAVE_OBJTOOL_MCOUNT
	bool
	help
	  Arch supports objtool --mcount

config HAVE_C_RECORDMCOUNT
	bool
	help
	  The C version of recordmcount is available.

config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK
	select IRQ_WORK

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	select GLOB
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.

config PREEMPTIRQ_TRACEPOINTS
	bool
	depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
	select TRACING
	default y
	help
	  Create preempt/irq toggle tracepoints if needed, so that other parts
	  of the kernel can use them to generate traces or attach hooks to them.

# All tracer options should select GENERIC_TRACER. Options that are enabled
# by all tracers (the context switch and event tracers) select TRACING
# instead. This allows those options to appear when no other tracer is
# selected, while they stay hidden when something else selects them. We need
# the two options GENERIC_TRACER and TRACING to accomplish this hiding of the
# automatic options without circular dependencies.

config TRACING
	bool
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on STACKTRACE_SUPPORT
	default y

menuconfig FTRACE
	bool "Tracers"
	depends on TRACING_SUPPORT
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.
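
# Illustrative sketch only (not part of the build configuration): once the
# tracing infrastructure is built in, it is typically exercised through the
# tracefs interface, assumed here to be mounted at /sys/kernel/debug/tracing
# (it may also be mounted at /sys/kernel/tracing):
#
#   cat /sys/kernel/debug/tracing/available_tracers
#   cat /sys/kernel/debug/tracing/current_tracer
#
# Which tracers are listed there depends on the options selected below.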

if FTRACE

config BOOTTIME_TRACING
	bool "Boot-time Tracing support"
	depends on TRACING
	select BOOT_CONFIG
	help
	  Enable developers to set up the ftrace subsystem via a supplemental
	  kernel command line at boot time, for debugging (tracing) driver
	  initialization and the boot process.

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select GLOB
	select TASKS_RCU if PREEMPTION
	select TASKS_RUDE_RCU
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. This NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to measure the duration of functions and
	  to draw a call graph for each thread, with some information such
	  as the return value. This is done by saving the current return
	  address of the traced function on a stack of calls kept in the
	  current task structure.

config DYNAMIC_FTRACE
	bool "enable/disable function tracing dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to function tracing
	  dynamically (patch them out of the binary image and replace
	  them with a No-Op instruction) on boot up. At compile time,
	  a table is made of all the locations that ftrace can function
	  trace, and this table is linked into the kernel image. When this
	  is enabled, functions can be individually enabled, and the
	  functions not enabled will not affect the performance of the
	  system.

	  See the files in /sys/kernel/debug/tracing:

	    available_filter_functions
	    set_ftrace_filter
	    set_ftrace_notrace

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	def_bool y
	depends on DYNAMIC_FTRACE_WITH_REGS
	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS

config DYNAMIC_FTRACE_WITH_ARGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stat directory; this file shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.
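
# Illustrative example for the function profiler described above (a sketch,
# assuming tracefs is mounted at /sys/kernel/debug/tracing; the exact
# trace_stat file names may vary, e.g. they can be split per CPU):
#
#   echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
#   ... run the workload to be profiled ...
#   echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
#   cat /sys/kernel/debug/tracing/trace_stat/function*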

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, keeping track of the maximum stack depth and
	  saving the corresponding stack trace. If this is configured with
	  DYNAMIC_FTRACE then it will not have any overhead while the
	  stack tracer is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

config TRACE_PREEMPT_TOGGLE
	bool
	help
	  Enables hooks which will be called when preemption is first disabled,
	  and last enabled.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at run time
	  via:

	    echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on PREEMPTION
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	select TRACE_PREEMPT_TOGGLE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at run time
	  via:

	    echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	select TRACER_SNAPSHOT
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config HWLAT_TRACER
	bool "Tracer to detect hardware latencies (like SMIs)"
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  When enabled, this tracer creates one or more kernel threads,
	  depending on what the cpumask file is set to, with each thread
	  spinning in a loop looking for interruptions caused by
	  something other than the kernel. For example, if a
	  System Management Interrupt (SMI) takes a noticeable amount of
	  time, this tracer will detect it. This is useful for testing
	  whether a system is reliable for Real Time tasks.

	  Some files are created in the tracing directory when this
	  is enabled:

	    hwlat_detector/width - time in usecs to spin for
	    hwlat_detector/window - time in usecs between the start of each
				    iteration

	  A kernel thread is created that will spin with interrupts disabled
	  for "width" microseconds in every "window" cycle. It will not spin
	  for the remaining "window - width" microseconds, during which the
	  system can continue to operate.

	  The output will appear in the trace and trace_pipe files.

	  When the tracer is not running, it has no effect on the system,
	  but when it is running, it can cause the system to be
	  periodically non-responsive. Do not run this tracer on a
	  production system.

	  To enable this tracer, echo "hwlat" into the current_tracer
	  file. Every time a latency is greater than tracing_thresh, it will
	  be recorded into the ring buffer.

config OSNOISE_TRACER
	bool "OS Noise tracer"
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  In the context of high-performance computing (HPC), Operating
	  System Noise (osnoise) refers to the interference experienced by an
	  application due to activities inside the operating system. In the
	  context of Linux, NMIs, IRQs, SoftIRQs, and any other system thread
	  can cause noise to the system. Moreover, hardware-related jobs can
	  also cause noise, for example, via SMIs.

	  The osnoise tracer leverages the hwlat_detector by running a similar
	  loop with preemption, SoftIRQs and IRQs enabled, thus allowing all
	  the sources of osnoise during its execution. The osnoise tracer
	  takes note of the entry and exit point of any source of
	  interference, incrementing a per-cpu interference counter. It keeps
	  a counter for each source of interference. The counters for NMIs,
	  IRQs, SoftIRQs, and threads are incremented any time the tool
	  observes the entry events of these interferences. When noise
	  happens without any interference from the operating system level,
	  the hardware noise counter is incremented, pointing to a
	  hardware-related noise. In this way, osnoise can account for any
	  source of interference. At the end of the period, the osnoise
	  tracer prints the sum of all noise, the max single noise, the
	  percentage of CPU available for the thread, and the counters for
	  the noise sources.

	  In addition to the tracer, a set of tracepoints is added to
	  facilitate the identification of the osnoise source.

	  The output will appear in the trace and trace_pipe files.

	  To enable this tracer, echo "osnoise" into the current_tracer
	  file.

config TIMERLAT_TRACER
	bool "Timerlat tracer"
	select OSNOISE_TRACER
	select GENERIC_TRACER
	help
	  The timerlat tracer aims to help preemptive kernel developers
	  find sources of wakeup latency for real-time threads.

	  The tracer creates a per-cpu kernel thread with real-time priority.
	  The tracer thread sets a periodic timer to wake itself up, and goes
	  to sleep waiting for the timer to fire. At the wakeup, the thread
	  then computes a wakeup latency value as the difference between
	  the current time and the absolute time that the timer was set
	  to expire.

	  The tracer prints two lines at every activation. The first is the
	  timer latency observed at the hardirq context before the
	  activation of the thread.
	  The second is the timer latency observed by the thread, which is
	  the same level that cyclictest reports. The ACTIVATION ID field
	  serves to relate the irq execution to its respective thread
	  execution.

	  The tracer is built on top of the osnoise tracer, and the osnoise:
	  events can be used to trace the source of interference from NMIs,
	  IRQs and other threads. It also enables the capture of the
	  stacktrace at the IRQ context, which helps to identify the code
	  path that can cause thread delay.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.rst.
	  If you are not helping to develop drivers, say N.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"
	select TRACER_MAX_TRACE
	help
	  Allow tracing users to take a snapshot of the current buffer using
	  the ftrace interface, e.g.:

	    echo 1 > /sys/kernel/debug/tracing/snapshot
	    cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
	bool "Allow snapshot to swap per CPU"
	depends on TRACER_SNAPSHOT
	select RING_BUFFER_ALLOW_SWAP
	help
	  Allow doing a snapshot of a single CPU buffer instead of a
	  full swap (all buffers). If this is set, then the following is
	  allowed:

	    echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

	  After this, only the tracing buffer for CPU 2 is swapped with
	  the main tracing buffer, and the other CPU buffers remain the same.

	  When enabled, this adds a little more overhead to trace
	  recording, as it needs some extra checks to synchronize
	  recording with swaps. But this does not affect the performance
	  of the overall system. This is enabled by default when the preempt
	  or irq latency tracers are enabled, as those need to swap as well
	  and already add the overhead (plus a lot more).

config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling.
	  Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	    /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals" if !FORTIFY_SOURCE
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  executed in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	    /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a significant
	  overhead on the system. It should only be enabled when the
	  system is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	    git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config KPROBE_EVENTS
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.rst for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.
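
# Illustrative example for kprobe events (a sketch, assuming tracefs is
# mounted at /sys/kernel/debug/tracing; "myprobe" is an arbitrary name and
# do_sys_open is only an example of a probeable kernel symbol):
#
#   echo 'p:myprobe do_sys_open' >> /sys/kernel/debug/tracing/kprobe_events
#   echo 1 > /sys/kernel/debug/tracing/events/kprobes/myprobe/enable
#   cat /sys/kernel/debug/tracing/trace
#
# See Documentation/trace/kprobetrace.rst for the full syntax.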

config KPROBE_EVENTS_ON_NOTRACE
	bool "Do NOT protect notrace functions from kprobe events"
	depends on KPROBE_EVENTS
	depends on DYNAMIC_FTRACE
	default n
	help
	  This is only for developers who want to debug ftrace itself
	  using kprobe events.

	  If kprobes can use ftrace instead of breakpoints, ftrace-related
	  functions are protected from kprobe events to prevent infinite
	  recursion or any unexpected execution path which could lead to a
	  kernel crash.

	  This option disables such protection and allows you to put kprobe
	  events on ftrace functions for debugging ftrace by itself.
	  Note that this might let you shoot yourself in the foot.

	  If unsure, say N.

config UPROBE_EVENTS
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	depends on PERF_EVENTS
	select UPROBES
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	select TRACING
	default y
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.
	  This option is required if you plan to use the perf-probe
	  subcommand of perf tools on user space applications.

config BPF_EVENTS
	depends on BPF_SYSCALL
	depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
	bool
	default y
	help
	  This allows the user to attach BPF programs to kprobe, uprobe, and
	  tracepoint events.

config DYNAMIC_EVENTS
	def_bool n

config PROBE_EVENTS
	def_bool n

config BPF_KPROBE_OVERRIDE
	bool "Enable BPF programs to override a kprobed function"
	depends on BPF_EVENTS
	depends on FUNCTION_ERROR_INJECTION
	default n
	help
	  Allows BPF to override the execution of a probed function and
	  set a different return value. This is used for error injection.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	bool
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_CC
	def_bool y
	depends on $(cc-option,-mrecord-mcount)
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_OBJTOOL
	def_bool y
	depends on HAVE_OBJTOOL_MCOUNT
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on !FTRACE_MCOUNT_USE_CC
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_RECORDMCOUNT
	def_bool y
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on !FTRACE_MCOUNT_USE_CC
	depends on !FTRACE_MCOUNT_USE_OBJTOOL
	depends on FTRACE_MCOUNT_RECORD

config TRACING_MAP
	bool
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	help
	  tracing_map is a special-purpose lock-free map for tracing,
	  separated out as a stand-alone facility in order to allow it
	  to be shared between multiple tracers. It isn't meant to be
	  generally used outside of that context, and is normally
	  selected by tracers that use it.

config SYNTH_EVENTS
	bool "Synthetic trace events"
	select TRACING
	select DYNAMIC_EVENTS
	default n
	help
	  Synthetic events are user-defined trace events that can be
	  used to combine data from other trace events or in fact any
	  data source.
	  Synthetic events can be generated indirectly via the trace()
	  action of histogram triggers, or directly by way of an in-kernel
	  API.

	  See Documentation/trace/events.rst or
	  Documentation/trace/histogram.rst for details and examples.

	  If in doubt, say N.

config HIST_TRIGGERS
	bool "Histogram triggers"
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	select TRACING_MAP
	select TRACING
	select DYNAMIC_EVENTS
	select SYNTH_EVENTS
	default n
	help
	  Hist triggers allow one or more arbitrary trace event fields
	  to be aggregated into hash tables and dumped to stdout by
	  reading a debugfs/tracefs file. They're useful for
	  gathering quick and dirty (though precise) summaries of
	  event activity as an initial guide for further investigation
	  using more advanced tools.

	  Inter-event tracing of quantities such as latencies is also
	  supported using hist triggers under this option.

	  See Documentation/trace/histogram.rst.
	  If in doubt, say N.

config TRACE_EVENT_INJECT
	bool "Trace event injection"
	depends on TRACING
	help
	  Allow user-space to inject a specific trace event into the ring
	  buffer. This is mainly used for testing purposes.

	  If unsure, say N.

config TRACEPOINT_BENCHMARK
	bool "Add tracepoint that benchmarks tracepoints"
	help
	  This option creates the tracepoint "benchmark:benchmark_event".
	  When the tracepoint is enabled, it kicks off a kernel thread that
	  goes into an infinite loop (calling cond_resched() to let other tasks
	  run), and calls the tracepoint. Each iteration records the time
	  it took to write to the tracepoint, and on the next iteration that
	  data is passed to the tracepoint itself. That is, the tracepoint
	  reports the time it took to do the previous tracepoint call.
	  The string written to the tracepoint is a static string of 128 bytes
	  to keep the time the same. The initial string is simply a write of
	  "START". The second write records the cold-cache time of the first
	  write, which is not added to the rest of the calculations.

	  As it is a tight loop, it benchmarks as hot cache. That's fine because
	  we care most about hot paths that are probably in cache already.

	  An example of the output:

	    START
	    first=3672 [COLD CACHED]
	    last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
	    last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
	    last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
	    last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
	    last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
	    last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.
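
# Illustrative example for the HIST_TRIGGERS option above (a sketch, assuming
# tracefs is mounted at /sys/kernel/debug/tracing; the kmem:kmalloc event and
# its fields are only an example):
#
#   echo 'hist:key=call_site:val=bytes_req' > \
#         /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
#   cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
#
# See Documentation/trace/histogram.rst for the full trigger syntax.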

config TRACE_EVAL_MAP_FILE
	bool "Show eval mappings for trace events"
	depends on TRACING
	help
	  The "print fmt" of the trace events will show the enum/sizeof names
	  instead of their values. This can cause problems for user space tools
	  that use this string to parse the raw data, as user space does not
	  know how to convert the string to its value.

	  To fix this, there's a special macro in the kernel that can be used
	  to convert an enum/sizeof into its value. If this macro is used, then
	  the print fmt strings will be converted to their values.

	  If something does not get converted properly, this option can be
	  used to show what enums/sizeof the kernel tried to convert.

	  This option is for debugging the conversions. A file is created
	  in the tracing directory called "eval_map" that will show the
	  names matched with their values and what trace event system they
	  belong to.

	  Normally, the mapping of the strings to values will be freed after
	  boot up or module load. With this option, they will not be freed, as
	  they are needed for the "eval_map" file. Enabling this option will
	  increase the memory footprint of the running kernel.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION
	bool "Record functions that recurse in function tracing"
	depends on FUNCTION_TRACER
	help
	  All callbacks that attach to the function tracer have some sort
	  of protection against recursion. Even though the protection exists,
	  it adds overhead. This option will create a file in the tracefs
	  file system called "recursed_functions" that will list the functions
	  that triggered a recursion.

	  This will add more overhead to cases that have recursion.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION_SIZE
	int "Max number of recursed functions to record"
	default 128
	depends on FTRACE_RECORD_RECURSION
	help
	  This defines the limit on the number of functions that can be
	  listed in the "recursed_functions" file, which lists all
	  the functions that caused a recursion to happen.
	  This file can be reset, but the limit cannot be changed at
	  runtime.

config RING_BUFFER_RECORD_RECURSION
	bool "Record functions that recurse in the ring buffer"
	depends on FTRACE_RECORD_RECURSION
	# default y, because it is coupled with FTRACE_RECORD_RECURSION
	default y
	help
	  The ring buffer has its own internal recursion protection. Although
	  recursion does no harm because of that protection, it does cause
	  unwanted overhead. Enabling this option will record the places
	  where ring buffer recursion was detected in the ftrace
	  "recursed_functions" file.

	  This will add more overhead to cases that have recursion.

config GCOV_PROFILE_FTRACE
	bool "Enable GCOV profiling on ftrace subsystem"
	depends on GCOV_KERNEL
	help
	  Enable GCOV profiling on the ftrace subsystem for checking
	  which functions/lines are tested.

	  If unsure, say N.

	  Note that on a kernel compiled with this config, ftrace will
	  run significantly slower.

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup,
	  a series of tests is run to verify that the tracer is functioning
	  properly. It will do tests on all the configured tracers of ftrace.
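
# Illustrative example for FTRACE_RECORD_RECURSION above (a sketch, assuming
# tracefs is mounted at /sys/kernel/debug/tracing; resetting the file by
# writing nothing to it is an assumption based on the help text, which only
# states that the file can be reset):
#
#   cat /sys/kernel/debug/tracing/recursed_functions
#   echo > /sys/kernel/debug/tracing/recursed_functions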

config EVENT_TRACE_STARTUP_TEST
	bool "Run selftest on trace events"
	depends on FTRACE_STARTUP_TEST
	default y
	help
	  This option performs a test on all trace events in the system.
	  It basically just enables each event and runs some code that
	  will trigger events (not necessarily the event it enables).
	  This may take some time to run, as there are a lot of events.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on EVENT_TRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It simply enables each syscall event, runs various loads with the
	  event enabled, and then disables it again. This adds a bit more
	  time to kernel boot up, since it does this for every system call
	  defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events

config RING_BUFFER_STARTUP_TEST
	bool "Ring buffer startup self test"
	depends on RING_BUFFER
	help
	  Run a simple self test on the ring buffer on boot up. Late in the
	  kernel boot sequence, the test will start, kicking off
	  a thread per cpu. Each thread will write various size events
	  into the ring buffer. Another thread is created to send IPIs
	  to each of the threads, where the IPI handler will also write
	  to the ring buffer, to test/stress the nesting ability.
	  If any anomalies are discovered, a warning will be displayed
	  and all ring buffers will be disabled.

	  The test runs for 10 seconds. This will slow your boot time
	  by at least 10 more seconds.

	  At the end of the test, statistics and more checks are done.
	  It will output the stats of each per-cpu buffer: what
	  was written, the sizes, what was read, what was lost, and
	  other similar details.

	  If unsure, say N.

config RING_BUFFER_VALIDATE_TIME_DELTAS
	bool "Verify ring buffer time stamp deltas"
	depends on RING_BUFFER
	help
	  This will audit the time stamps on the ring buffer sub
	  buffers to make sure that all the time deltas for the
	  events on a sub buffer match the current time stamp.
	  This audit is performed for every event that is not
	  interrupted, or interrupting another event. A check
	  is also made when traversing sub buffers to make sure
	  that all the deltas on the previous sub buffer do not
	  add up to be greater than the current time stamp.

	  NOTE: This adds significant overhead to recording of events,
	  and should only be used to test the logic of the ring buffer.
	  Do not use it on production systems.

	  Only say Y if you understand what this does, and you
	  still want it enabled. Otherwise say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config PREEMPTIRQ_DELAY_TEST
	tristate "Test module to create a preempt / IRQ disable delay thread to test latency tracers"
	depends on m
	help
	  Select this option to build a test module that can help test latency
	  tracers by executing a preempt or irq disable section with a user
	  configurable delay. The module busy waits for the duration of the
	  critical section.

	  For example, the following invocation generates a burst of three
	  irq-disabled critical sections of 500 us each:

	    modprobe preemptirq_delay_test test_mode=irq delay=500 burst_size=3

	  In addition, if you want to run the test on the cpu that the
	  latency tracer is running on, specify cpu_affinity=cpu_num at the
	  end of the command.

	  If unsure, say N.

config SYNTH_EVENT_GEN_TEST
	tristate "Test module for in-kernel synthetic event generation"
	depends on SYNTH_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel synthetic event definition and
	  generation.

	  To test, insert the module, and then check the trace buffer
	  for the generated sample events.

	  If unsure, say N.

config KPROBE_EVENT_GEN_TEST
	tristate "Test module for in-kernel kprobe event generation"
	depends on KPROBE_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel kprobe event definition.

	  To test, insert the module, and then check the trace buffer
	  for the generated kprobe events.

	  If unsure, say N.

config HIST_TRIGGERS_DEBUG
	bool "Hist trigger debug support"
	depends on HIST_TRIGGERS
	help
	  Add a "hist_debug" file for each event, which when read will
	  dump out a bunch of internal details about the hist triggers
	  defined on that event.

	  The hist_debug file serves a couple of purposes:

	    - Helps developers verify that nothing is broken.

	    - Provides educational information to support the details
	      of the hist trigger internals as described by
	      Documentation/trace/histogram-design.rst.

	  The hist_debug output only covers the data structures
	  related to the histogram definitions themselves and doesn't
	  display the internals of map buckets or variable values of
	  running histograms.

	  If unsure, say N.

endif # FTRACE