
Lines Matching +full:max +full:- +full:memory +full:- +full:bandwidth

1 .. SPDX-License-Identifier: GPL-2.0
9 :Authors: - Fenghua Yu <fenghua.yu@intel.com>
10 - Tony Luck <tony.luck@intel.com>
11 - Vikas Shivappa <vikas.shivappa@intel.com>
25 MBM (Memory Bandwidth Monitoring) "cqm_mbm_total", "cqm_mbm_local"
26 MBA (Memory Bandwidth Allocation) "mba"
27 SMBA (Slow Memory Bandwidth Allocation) ""
28 BMEC (Bandwidth Monitoring Event Configuration) ""
38 # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl
48 bandwidth in MBps
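For example, to mount with the software controller enabled so that bandwidth
limits can later be specified in MBps (a sketch of the option shown in the
synopsis above)::

  # mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl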
54 pseudo-locking is a unique way of using cache control to "pin" or
56 "Cache Pseudo-Locking".
93 own settings for cache use which can override
125 Corresponding region is pseudo-locked. No
128 Memory bandwidth (MB) subdirectory contains the following files
132 The minimum memory bandwidth percentage which
136 The granularity in which the memory bandwidth
140 available bandwidth control steps are:
145 non-linear. This field is purely informational
151 request different memory bandwidth percentages:
153 "max":
156 "per-thread":
157 bandwidth percentages are directly applied to
178 If the system supports Bandwidth Monitoring Event
179 Configuration (BMEC), then the bandwidth events will
191 and mbm_local_bytes events, respectively, when the Bandwidth
195 changed, the bandwidth counters for all RMIDs of both events
205 6 Dirty Victims from the QOS domain to all types of memory
206 5 Reads to slow memory in the non-local NUMA domain
207 4 Reads to slow memory in the local NUMA domain
208 3 Non-temporal writes to non-local NUMA domain
209 2 Non-temporal writes to local NUMA domain
210 1 Reads to memory in the non-local NUMA domain
211 0 Reads to memory in the local NUMA domain
216 0x15 to count all the local memory events.
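A sketch of applying that value, assuming the event configuration is exposed
through the info/L3_MON/mbm_total_bytes_config file and that the system has a
single L3 monitoring domain with id 0::

  # echo "0=0x15" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config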
239 * To change the mbm_local_bytes to count all the slow memory reads on
252 counter can be considered for re-use.
265 mask f7 has non-consecutive 1-bits
315 When the resource group is in pseudo-locked mode this file will
317 pseudo-locked region.
328 Each resource has its own line and format - see below for details.
339 cache pseudo-locked region is created by first writing
340 "pseudo-locksetup" to the "mode" file before writing the cache
341 pseudo-locked region's schemata to the resource group's "schemata"
342 file. On successful pseudo-locked region creation the mode will
343 automatically change to "pseudo-locked".
359 -------------------------
364 1) If the task is a member of a non-default group, then the schemata
374 -------------------------
375 1) If a task is a member of a MON group, or non-default CTRL_MON group
396 are evicted and re-used while the occupancy in the new group rises as
397 the task accesses memory and loads into the cache are counted based on
411 max_threshold_occupancy - generic concepts
412 ------------------------------------------
418 limbo RMIDs which are not yet ready to be used, the user may see an -EBUSY
424 Schemata files - general concepts
425 ---------------------------------
431 ---------
443 ---------------------
450 0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
451 and 0xA are not. On a system with a 20-bit mask each bit represents 5%
455 Memory Bandwidth Allocation and Monitoring
458 For the memory bandwidth resource, the user by default controls the resource
459 by indicating the percentage of total memory bandwidth.
461 The minimum bandwidth percentage value for each cpu model is predefined
462 and can be looked up through "info/MB/min_bandwidth". The bandwidth
464 be looked up at "info/MB/bandwidth_gran". The available bandwidth
468 Bandwidth throttling is a core-specific mechanism on some Intel
469 SKUs. Using a high bandwidth and a low bandwidth setting on two threads
471 low bandwidth (see "thread_throttle_mode").
473 The fact that Memory Bandwidth Allocation (MBA) may be a core-specific
474 mechanism whereas Memory Bandwidth Monitoring (MBM) is done at
476 via the MBA and then monitor the bandwidth to see if the controls are
479 1. The user may *not* see an increase in actual bandwidth when percentage
482 This can occur when aggregate L2 external bandwidth is more than L3
483 external bandwidth. Consider an SKL SKU with 24 cores on a package,
484 each with 10GBps of L2 external bandwidth (hence aggregate L2 external bandwidth is
485 240GBps) and L3 external bandwidth is 100GBps. Now a workload with '20
486 threads, having 50% bandwidth, each consuming 5GBps' consumes the max L3
487 bandwidth of 100GBps although the percentage value specified is only 50%
488 << 100%. Hence increasing the bandwidth percentage will not yield any
489 more bandwidth. This is because although the L2 external bandwidth still
490 has capacity, the L3 external bandwidth is fully used. Also note that
493 2. The same bandwidth percentage may mean different actual bandwidth
496 For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4
497 threads, with 10% bandwidth' can consume up to 10GBps and 40GBps although
498 they have the same percentage bandwidth of 10%. This is simply because as
499 threads start using more cores in an rdtgroup, the actual bandwidth may
500 increase or vary although the user-specified bandwidth percentage is the same.
503 resctrl added support for specifying the bandwidth in MBps as well. The
505 Controller (mba_sc)" which reads the actual bandwidth using MBM counters
506 and adjusts the memory bandwidth percentages to ensure::
508 "actual bandwidth < user specified bandwidth".
510 By default, the schemata would take the bandwidth percentage values
516 ----------------------------------------------------------------
522 ------------------------------------------------------------------
530 ------------------------
542 Memory Bandwidth Allocation (default mode)
543 ------------------------------------------
545 Memory b/w domain is L3 cache.
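A minimal sketch of the default, percentage-based format, assuming two L3
cache domains (ids 0 and 1) and a hypothetical control group "p0"::

  # echo "MB:0=50;1=100" > /sys/fs/resctrl/p0/schemata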
550 Memory Bandwidth Allocation specified in MBps
551 ---------------------------------------------
553 Memory bandwidth domain is L3 cache.
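When mounted with the "mba_MBps" option the same file takes absolute values
instead; a sketch, again assuming two L3 domains and a hypothetical group
"p0"::

  # echo "MB:0=1024;1=2048" > /sys/fs/resctrl/p0/schemata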
558 Slow Memory Bandwidth Allocation (SMBA)
559 ---------------------------------------
560 AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
561 CXL.memory is the only supported "slow" memory device. With the
562 support of SMBA, the hardware enables bandwidth allocation on
563 the slow memory devices. If there are multiple such devices in
567 The presence of SMBA (with CXL.memory) is independent of slow memory
571 The bandwidth domain for slow memory is L3 cache. Its schemata file
578 ---------------------------------
593 --------------------------------------------------
594 Reading the schemata file will show the current bandwidth limit on all
597 configure the bandwidth limit.
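For instance (a sketch, assuming four L3 domains and the default resource
group; the values are in the hardware's native granularity)::

  # cat /sys/fs/resctrl/schemata
      MB:0=2048;1=2048;2=2048;3=2048
      L3:0=ffff;1=ffff;2=ffff;3=ffff
  # echo "MB:1=16" > /sys/fs/resctrl/schemata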
613 --------------------------------------------------------------------
632 Cache Pseudo-Locking
635 application can fill. Cache pseudo-locking builds on the fact that a
636 CPU can still read and write data pre-allocated outside its current
637 allocated area on a cache hit. With cache pseudo-locking, data can be
640 pseudo-locked memory is made accessible to user space where an
642 a region of memory with reduced average read latency.
644 The creation of a cache pseudo-locked region is triggered by a request
646 to be pseudo-locked. The cache pseudo-locked region is created as follows:
648 - Create a CAT allocation CLOSNEW with a CBM matching the schemata
649 from the user of the cache region that will contain the pseudo-locked
650 memory. This region must not overlap with any current CAT allocation/CLOS
652 while the pseudo-locked region exists.
653 - Create a contiguous region of memory of the same size as the cache
655 - Flush the cache, disable hardware prefetchers, disable preemption.
656 - Make CLOSNEW the active CLOS and touch the allocated memory to load
658 - Set the previous CLOS as active.
659 - At this point the closid CLOSNEW can be released - the cache
660 pseudo-locked region is protected as long as its CBM does not appear in
661 any CAT allocation. Even though the cache pseudo-locked region will from
663 any CLOS will be able to access the memory in the pseudo-locked region since
665 - The contiguous region of memory loaded into the cache is exposed to
666 user-space as a character device.
668 Cache pseudo-locking increases the probability that data will remain
672 “locked” data from cache. Power management C-states may shrink or
673 power off cache. Deeper C-states will automatically be restricted on
674 pseudo-locked region creation.
676 It is required that an application using a pseudo-locked region runs
678 with the cache on which the pseudo-locked region resides. A sanity check
679 within the code will not allow an application to map pseudo-locked memory
681 pseudo-locked region resides. The sanity check is only done during the
685 Pseudo-locking is accomplished in two stages:
688 of cache that should be dedicated to pseudo-locking. At this time an
689 equivalent portion of memory is allocated, loaded into allocated
691 2) During the second stage a user-space application maps (mmap()) the
692 pseudo-locked memory into its address space.
694 Cache Pseudo-Locking Interface
695 ------------------------------
696 A pseudo-locked region is created using the resctrl interface as follows:
699 2) Change the new resource group's mode to "pseudo-locksetup" by writing
700 "pseudo-locksetup" to the "mode" file.
701 3) Write the schemata of the pseudo-locked region to the "schemata" file. All
705 On successful pseudo-locked region creation the "mode" file will contain
706 "pseudo-locked" and a new character device with the same name as the resource
708 by user space in order to obtain access to the pseudo-locked memory region.
710 An example of cache pseudo-locked region creation and usage can be found below.
712 Cache Pseudo-Locking Debugging Interface
713 ----------------------------------------
714 The pseudo-locking debugging interface is enabled by default (if
717 There is no explicit way for the kernel to test if a provided memory
718 location is present in the cache. The pseudo-locking debugging interface uses
720 the pseudo-locked region:
722 1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
724 example below). In this test the pseudo-locked region is traversed at
732 When a pseudo-locked region is created a new debugfs directory is created for
734 write-only file, pseudo_lock_measure, is present in this directory. The
735 measurement of the pseudo-locked region depends on the number written to this
756 In this example a pseudo-locked region named "newlock" was created. Here is
790 In this example a pseudo-locked region named "newlock" was created on the L2
803 # _-----=> irqs-off
804 # / _----=> need-resched
805 # | / _---=> hardirq/softirq
806 # || / _--=> preempt-depth
808 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
810 pseudo_lock_mea-1672 [002] .... 3132.860500: pseudo_lock_l2: hits=4097 miss=0
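The hit/miss data above is gathered by triggering a measurement from user
space; a sketch, assuming that writing "2" to the region's debugfs
pseudo_lock_measure file selects the L2 hit/miss measurement and that tracefs
is available under /sys/kernel/debug/tracing::

  # echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
  # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # cat /sys/kernel/debug/tracing/trace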
819 for cache bit masks, minimum b/w of 10% with a memory bandwidth
823 # mount -t resctrl resctrl /sys/fs/resctrl
837 maximum memory b/w of 50% on socket 0 and 50% on socket 1.
838 Tasks in group "p1" may also use 50% memory b/w on both sockets.
839 Note that unlike cache masks, memory b/w cannot specify whether these
845 max b/w in MB rather than the percentage values.
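A sketch of such an MBps-based configuration, assuming the filesystem was
mounted with the "mba_MBps" option and reusing the "p0" and "p1" groups of
this example::

  # echo "MB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
  # echo "MB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata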
851 In the above example the tasks in "p1" and "p0" on socket 0 would use a max b/w
856 Again two sockets, but this time with a more realistic 20-bit mask.
859 processor 1 on socket 0 on a 2-socket, dual-core machine. To avoid noisy
860 neighbors, each of the two real-time tasks exclusively occupies one quarter
864 # mount -t resctrl resctrl /sys/fs/resctrl
868 50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
887 # taskset -cp 1 1234
894 # taskset -cp 2 5678
896 For the same 2-socket system with the memory b/w resource and CAT L3 the
900 For our first real-time task this would request 20% memory b/w on socket 0.
903 # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata
905 For our second real-time task this would request another 20% memory b/w
909 # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
913 A single socket system which has real-time tasks running on cores 4-7 and
914 a non-real-time workload assigned to cores 0-3. The real-time tasks share text
920 # mount -t resctrl resctrl /sys/fs/resctrl
924 50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
930 to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
937 Finally we move cores 4-7 over to the new group and make sure that the
939 also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
940 siblings and only the real-time threads are scheduled on the cores 4-7.
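That last step can be sketched as follows, assuming the new resource group is
named "p0" and that cores 4-7 correspond to the cpu mask f0::

  # echo f0 > p0/cpus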
953 system with two L2 cache instances that can be configured with an 8-bit
958 # mount -t resctrl resctrl /sys/fs/resctrl/
975 -sh: echo: write error: Invalid argument
1010 -sh: echo: write error: Invalid argument
1014 Example of Cache Pseudo-Locking
1016 Lock portion of L2 cache from cache id 1 using CBM 0x3. Pseudo-locked
1021 # mount -t resctrl resctrl /sys/fs/resctrl/
1024 Ensure that there are bits available that can be pseudo-locked. Since only
1025 unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
1034 Create a new resource group that will be associated with the pseudo-locked
1035 region, indicate that it will be used for a pseudo-locked region, and
1036 configure the requested pseudo-locked region capacity bitmask::
1039 # echo pseudo-locksetup > newlock/mode
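  # echo "L2:1=0x3" > newlock/schemata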
1042 On success the resource group's mode will change to pseudo-locked, the
1043 bit_usage will reflect the pseudo-locked region, and the character device
1044 exposing the pseudo-locked region will exist::
1047 pseudo-locked
1050 # ls -l /dev/pseudo_lock/newlock
1051 crw------- 1 root root 243, 0 Apr 3 05:01 /dev/pseudo_lock/newlock
1056 * Example code to access one page of pseudo-locked cache region
1069 * cores associated with the pseudo-locked region. Here the cpu
1106 /* Application interacts with pseudo-locked memory @mapping */
1120 ----------------------------
1128 1. Read the CBM masks from each directory or the per-resource "bit_usage"
1159 $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl
1163 $ cat create-dir.sh
1165 mask = function-of(output.txt)
1169 $ flock /sys/fs/resctrl/ ./create-dir.sh
1188 exit(-1);
1200 exit(-1);
1212 exit(-1);
1221 if (fd == -1) {
1223 exit(-1);
1237 ----------------------
1244 ------------------------------------------------------------------------
1248 # mount -t resctrl resctrl /sys/fs/resctrl
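Once a monitoring group exists its data can be read from its mon_data
directory; a sketch, assuming a group named "p1" and an L3 monitoring domain
with id 0::

  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/mbm_total_bytes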
1288 --------------------------------------------
1291 # mount -t resctrl resctrl /sys/fs/resctrl
1308 ---------------------------------------------------------------------
1319 # mount -t resctrl resctrl /sys/fs/resctrl
1343 -----------------------------------
1345 A single socket system which has real-time tasks running on cores 4-7
1350 # mount -t resctrl resctrl /sys/fs/resctrl
1354 Move the cpus 4-7 over to p1::
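
  # echo f0 > p1/cpus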
1366 Intel MBM Counters May Report System Memory Bandwidth Incorrectly
1367 -----------------------------------------------------------------
1371 Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
1374 metrics, may report incorrect system bandwidth for certain RMID values.
1376 Implication: Due to the errata, system memory bandwidth may not match
1446 …958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html
1448 2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
1449 …w.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf
1452 …are.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-