lockstat - report kernel lock and profiling statistics
lockstat [-ACEHI] [-e event_list] [-i rate] [-b | -t | -h | -s depth] [-n nrecords] [-l lock[,size]] [-d duration] [-f function[,size]] [-T] [-ckgwWRpP] [-D count] [-o filename] [-x opt[=val]] command [args]
The lockstat utility gathers and displays kernel locking and profiling statistics. lockstat allows you to specify which events to watch (for example, spin on adaptive mutex, block on read access to rwlock due to waiting writers, and so forth), how much data to gather for each event, and how to display the data. By default, lockstat monitors all lock contention events, gathers frequency and timing data about those events, and displays the data in decreasing frequency order, so that the most common events appear first.
lockstat gathers data until the specified command completes. For example, to gather statistics for a fixed-time interval, use sleep(1) as the command, as follows:
example# lockstat sleep 5
When the -I option is specified, lockstat establishes a per-processor high-level periodic interrupt source to gather profiling data. The interrupt handler simply generates a lockstat event whose caller is the interrupted PC (program counter). The profiling event is just like any other lockstat event, so all of the normal lockstat options are applicable.
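For example, to profile the whole system for ten seconds at a sampling rate other than the default (the rate shown is illustrative; see the -i option below):

example# lockstat -I -i 977 sleep 10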
lockstat relies on DTrace to modify the running kernel's text to intercept events of interest. This imposes a small but measurable overhead on all system activity, so access to lockstat is restricted to super-user by default. The system administrator can permit other users to use lockstat by granting them additional DTrace privileges. Refer to the Solaris Dynamic Tracing Guide for more information about DTrace security features.
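For example, an administrator might grant a user the dtrace_kernel privilege, which lockstat's kernel instrumentation requires (the user name here is illustrative; see usermod(1M) and privileges(5)):

example# usermod -K defaultpriv=basic,dtrace_kernel username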
The following options are supported:
If no event selection options are specified, the default is -C.
-A
     Watch all lock events. -A is equivalent to -CH.

-C
     Watch contention events.

-E
     Watch error events.

-e event_list
     Only watch the specified events. event_list is a comma-separated
     list of events or ranges of events, such as 1,4-7,35.

-H
     Watch hold events.

-I
     Watch profiling interrupt events.

-i rate
     Interrupt rate, per second, for -I. The default is 97 Hz, so that
     profiling does not run in lockstep with the clock interrupt (which
     runs at 100 Hz).

-x opt[=val]
     Enable or modify a DTrace runtime option or D compiler option.
     Boolean options are enabled by specifying their name; options with
     values are set by separating the option name and value with an
     equals sign (=). The available options are described in the Solaris
     Dynamic Tracing Guide.

-b
     Basic statistics: lock, caller, number of events.

-h
     Histogram: timing plus time-distribution histograms.

-s depth
     Stack trace: histogram plus stack traces up to depth frames deep.

-t
     Timing: basic statistics plus timing for all events. This is the
     default data-gathering mode.

-d duration
     Only watch events longer than duration.

-f func[,size]
     Only watch events generated by func, which can be specified as a
     symbolic name or hexadecimal address. size defaults to the ELF
     symbol size if available, or 1 if not.

-l lock[,size]
     Only watch lock, which can be specified as a symbolic name or
     hexadecimal address. size defaults to the ELF symbol size if
     available, or 1 if not.

-n nrecords
     Maximum number of data records that can be stored.

-T
     Trace (rather than sample) events. Off by default.

-c
     Coalesce lock data for lock arrays (for example, pse_mutex[]).

-D count
     Only display the top count events of each type.

-g
     Show total events generated by function. For example, if foo()
     calls bar() in a loop, the work done in bar() (and anything it
     calls) is counted as work generated by foo(), along with any work
     foo() does itself. The -g option works by counting the total number
     of stack frames in which each function appears, so the default
     data-gathering mode with -g is -s 50.

-k
     Coalesce PCs within functions.

-o filename
     Direct output to filename.

-P
     Sort data by (count * time) product.

-p
     Parsable output format.

-R
     Display rates (events per second) rather than counts.

-W
     Whichever: distinguish events only by caller, not by lock.

-w
     Wherever: distinguish events only by lock, not by caller.
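The event selection, data gathering, filtering, and reporting options can be combined. As an illustrative sketch (the lock name is hypothetical), the following watches contention events on a single lock and reports per-second rates for the top five records:

example# lockstat -C -R -D 5 -l svc_plock sleep 10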
The following headers appear over various columns of data.
Count or ops/s
     The number of times this event occurred, or, if -R was specified,
     the rate (events per second) at which it occurred.

indv
     The percentage of all events represented by this individual event.

genr
     The percentage of all events generated by this function (reported
     with the -g option).

cuml
     The cumulative percentage: a running total of the individual
     percentages.

rcnt
     The average reference count. This is always 1.00 for exclusive
     locks (mutexes, spin locks, and rwlocks held as writer), but can be
     greater than 1.00 for shared locks (rwlocks held as reader).

nsec
     The average duration of the events in nanoseconds, as appropriate
     for the event. For the profiling event, duration means interrupt
     latency.

Lock
     The address of the lock, displayed symbolically if possible.

CPU+PIL
     The CPU plus processor interrupt level (PIL) at which the event
     occurred. For example, if CPU 4 is interrupted while at PIL 6, this
     is reported as cpu[4]+6.

Caller
     The address of the caller, displayed symbolically if possible.
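For example, the following illustrative invocation reports rates rather than counts, so its first column is headed ops/s instead of Count:

example# lockstat -R -D 10 sleep 5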
Example 1 Measuring Kernel Lock Contention
example# lockstat sleep 5

Adaptive mutex spin: 2210 events in 5.055 seconds (437 events/sec)

Count indv cuml rcnt     nsec Lock                Caller
------------------------------------------------------------------------
  269  12%  12% 1.00     2160 service_queue       background+0xdc
  249  11%  23% 1.00       86 service_queue       qenable_locked+0x64
  228  10%  34% 1.00      131 service_queue       background+0x15c
   68   3%  37% 1.00       79 0x30000024070       untimeout+0x1c
   59   3%  40% 1.00      384 0x300066fa8e0       background+0xb0
   43   2%  41% 1.00       30 rqcred_lock         svc_getreq+0x3c
   42   2%  43% 1.00      341 0x30006834eb8       background+0xb0
   41   2%  45% 1.00      135 0x30000021058       untimeout+0x1c
   40   2%  47% 1.00       39 rqcred_lock         svc_getreq+0x260
   37   2%  49% 1.00     2372 0x300068e83d0       hmestart+0x1c4
   36   2%  50% 1.00       77 0x30000021058       timeout_common+0x4
   36   2%  52% 1.00      354 0x300066fa120       background+0xb0
   32   1%  53% 1.00       97 0x30000024070       timeout_common+0x4
   31   1%  55% 1.00     2923 0x300069883d0       hmestart+0x1c4
   29   1%  56% 1.00      366 0x300066fb290       background+0xb0
   28   1%  57% 1.00      117 0x3000001e040       untimeout+0x1c
   25   1%  59% 1.00       93 0x3000001e040       timeout_common+0x4
   22   1%  60% 1.00       25 0x30005161110       sync_stream_buf+0xdc
   21   1%  60% 1.00      291 0x30006834eb8       putq+0xa4
   19   1%  61% 1.00       43 0x3000515dcb0       mdf_alloc+0xc
   18   1%  62% 1.00      456 0x30006834eb8       qenable+0x8
   18   1%  63% 1.00       61 service_queue       queuerun+0x168
   17   1%  64% 1.00      268 0x30005418ee8       vmem_free+0x3c
[...]

R/W reader blocked by writer: 76 events in 5.055 seconds (15 events/sec)

Count indv cuml rcnt     nsec Lock                Caller
------------------------------------------------------------------------
   23  30%  30% 1.00 22590137 0x300098ba358       ufs_dirlook+0xd0
   17  22%  53% 1.00  5820995 0x3000ad815e8       find_bp+0x10
   13  17%  70% 1.00  2639918 0x300098ba360       ufs_iget+0x198
    4   5%  75% 1.00  3193015 0x300098ba360       ufs_getattr+0x54
    3   4%  79% 1.00  7953418 0x3000ad817c0       find_bp+0x10
    3   4%  83% 1.00   935211 0x3000ad815e8       find_read_lof+0x14
    2   3%  86% 1.00 16357310 0x300073a4720       find_bp+0x10
    2   3%  88% 1.00  2072433 0x300073a4720       find_read_lof+0x14
    2   3%  91% 1.00  1606153 0x300073a4370       find_bp+0x10
    1   1%  92% 1.00  2656909 0x300107e7400       ufs_iget+0x198
[...]
Example 2 Measuring Hold Times
example# lockstat -H -D 10 sleep 1

Adaptive mutex spin: 513 events

Count indv cuml rcnt     nsec Lock                Caller
-------------------------------------------------------------------------
  480   5%   5% 1.00     1136 0x300007718e8       putnext+0x40
  286   3%   9% 1.00      666 0x3000077b430       getf+0xd8
  271   3%  12% 1.00      537 0x3000077b430       msgio32+0x2fc
  270   3%  15% 1.00     3670 0x300007718e8       strgetmsg+0x3d4
  270   3%  18% 1.00     1016 0x300007c38b0       getq_noenab+0x200
  264   3%  20% 1.00     1649 0x300007718e8       strgetmsg+0xa70
  216   2%  23% 1.00     6251 tcp_mi_lock         tcp_snmp_get+0xfc
  206   2%  25% 1.00      602 thread_free_lock    clock+0x250
  138   2%  27% 1.00      485 0x300007c3998       putnext+0xb8
  138   2%  28% 1.00     3706 0x300007718e8       strrput+0x5b8
-------------------------------------------------------------------------
[...]
Example 3 Measuring Hold Times for Stack Traces Containing a Specific Function
example# lockstat -H -f tcp_rput_data -s 50 -D 10 sleep 1

Adaptive mutex spin: 11 events in 1.023 seconds (11 events/sec)

-------------------------------------------------------------------------
Count indv cuml rcnt     nsec Lock                Caller
    9  82%  82% 1.00     2540 0x30000031380       tcp_rput_data+0x2b90

      nsec ------ Time Distribution ------ count Stack
       256 |@@@@@@@@@@@@@@@@               5     tcp_rput_data+0x2b90
       512 |@@@@@@                         2     putnext+0x78
      1024 |@@@                            1     ip_rput+0xec4
      2048 |                               0     _c_putnext+0x148
      4096 |                               0     hmeread+0x31c
      8192 |                               0     hmeintr+0x36c
     16384 |@@@                            1     sbus_intr_wrapper+0x30
[...]

Count indv cuml rcnt     nsec Lock                Caller
    1   9%  91% 1.00     1036 0x30000055380       freemsg+0x44

      nsec ------ Time Distribution ------ count Stack
      1024 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1     freemsg+0x44
                                                 tcp_rput_data+0x2fd0
                                                 putnext+0x78
                                                 ip_rput+0xec4
                                                 _c_putnext+0x148
                                                 hmeread+0x31c
                                                 hmeintr+0x36c
                                                 sbus_intr_wrapper+0x30
-------------------------------------------------------------------------
[...]
Example 4 Basic Kernel Profiling
For basic profiling, we don't care whether the profiling interrupt sampled foo()+0x4c or foo()+0x78; we care only that it sampled somewhere in foo(), so we use -k. The CPU and PIL aren't relevant to basic profiling because we are measuring the system as a whole, not a particular CPU or interrupt level, so we use -W.
example# lockstat -kIW -D 20 ./polltest

Profiling interrupt: 82 events in 0.424 seconds (194 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL Caller
-----------------------------------------------------------------------
    8  10%  10% 1.00      698 cpu[1]          utl0
    6   7%  17% 1.00      299 cpu[0]          read
    5   6%  23% 1.00      124 cpu[1]          getf
    4   5%  28% 1.00      327 cpu[0]          fifo_read
    4   5%  33% 1.00      112 cpu[1]          poll
    4   5%  38% 1.00      212 cpu[1]          uiomove
    4   5%  43% 1.00      361 cpu[1]          mutex_tryenter
    3   4%  46% 1.00      682 cpu[0]          write
    3   4%  50% 1.00       89 cpu[0]          pcache_poll
    3   4%  54% 1.00      118 cpu[1]          set_active_fd
    3   4%  57% 1.00      105 cpu[0]          syscall_trap32
    3   4%  61% 1.00      640 cpu[1]          (usermode)
    2   2%  63% 1.00      127 cpu[1]          fifo_poll
    2   2%  66% 1.00      300 cpu[1]          fifo_write
    2   2%  68% 1.00      669 cpu[0]          releasef
    2   2%  71% 1.00      112 cpu[1]          bt_getlowbit
    2   2%  73% 1.00      247 cpu[1]          splx
    2   2%  76% 1.00      503 cpu[0]          mutex_enter
    2   2%  78% 1.00      467 cpu[0]+10       disp_lock_enter
    2   2%  80% 1.00      139 cpu[1]          default_copyin
-----------------------------------------------------------------------
[...]
Example 5 Generated-load Profiling
In the example above, 5% of the samples were in poll(). This tells us how much time was spent inside poll() itself, but tells us nothing about how much work was generated by poll(); that is, how much time we spent in functions called by poll(). To determine that, we use the -g option. The example below shows that although polltest spends only 5% of its time in poll() itself, poll()-induced work accounts for 34% of the load.
Note that the functions that generate the profiling interrupt (lockstat_intr(), cyclic_fire(), and so forth) appear in every stack trace, and therefore are considered to have generated 100% of the load. This illustrates an important point: the generated load percentages do not add up to 100% because they are not independent. If 72% of all stack traces contain both foo() and bar(), then both foo() and bar() are 72% load generators.
example# lockstat -kgIW -D 20 ./polltest

Profiling interrupt: 80 events in 0.412 seconds (194 events/sec)

Count genr cuml rcnt     nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------
   80 100% ---- 1.00      310 cpu[1]          lockstat_intr
   80 100% ---- 1.00      310 cpu[1]          cyclic_fire
   80 100% ---- 1.00      310 cpu[1]          cbe_level14
   80 100% ---- 1.00      310 cpu[1]          current_thread
   27  34% ---- 1.00      176 cpu[1]          poll
   20  25% ---- 1.00      221 cpu[0]          write
   19  24% ---- 1.00      249 cpu[1]          read
   17  21% ---- 1.00      232 cpu[0]          write32
   17  21% ---- 1.00      207 cpu[1]          pcache_poll
   14  18% ---- 1.00      319 cpu[0]          fifo_write
   13  16% ---- 1.00      214 cpu[1]          read32
   10  12% ---- 1.00      208 cpu[1]          fifo_read
   10  12% ---- 1.00      787 cpu[1]          utl0
    9  11% ---- 1.00      178 cpu[0]          pcacheset_resolve
    9  11% ---- 1.00      262 cpu[0]          uiomove
    7   9% ---- 1.00      506 cpu[1]          (usermode)
    5   6% ---- 1.00      195 cpu[1]          fifo_poll
    5   6% ---- 1.00      136 cpu[1]          syscall_trap32
    4   5% ---- 1.00      139 cpu[0]          releasef
    3   4% ---- 1.00      277 cpu[1]          polllock
-------------------------------------------------------------------------
[...]
Example 6 Gathering Lock Contention and Profiling Data for a Specific Module
In this example we use the -f option not to specify a single function, but rather to specify the entire text space of the sbus module. We gather both lock contention and profiling statistics so that contention can be correlated with overall load on the module.
example# modinfo | grep sbus
 24 102a8b6f   b8b4  59   1  sbus (SBus (sysio) nexus driver)
example# lockstat -kICE -f 0x102a8b6f,0xb8b4 sleep 10

Adaptive mutex spin: 39 events in 10.042 seconds (4 events/sec)

Count indv cuml rcnt     nsec Lock                Caller
-------------------------------------------------------------------------
   15  38%  38% 1.00      206 0x30005160528       sync_stream_buf
    7  18%  56% 1.00       14 0x30005160d18       sync_stream_buf
    6  15%  72% 1.00       27 0x300060c3118       sync_stream_buf
    5  13%  85% 1.00       24 0x300060c3510       sync_stream_buf
    2   5%  90% 1.00       29 0x300060c2d20       sync_stream_buf
    2   5%  95% 1.00       24 0x30005161cf8       sync_stream_buf
    1   3%  97% 1.00       21 0x30005161110       sync_stream_buf
    1   3% 100% 1.00       23 0x30005160130       sync_stream_buf
[...]

Adaptive mutex block: 9 events in 10.042 seconds (1 events/sec)

Count indv cuml rcnt     nsec Lock                Caller
-------------------------------------------------------------------------
    4  44%  44% 1.00   156539 0x30005160528       sync_stream_buf
    2  22%  67% 1.00   763516 0x30005160d18       sync_stream_buf
    1  11%  78% 1.00   462130 0x300060c3510       sync_stream_buf
    1  11%  89% 1.00   288749 0x30005161110       sync_stream_buf
    1  11% 100% 1.00  1015374 0x30005160130       sync_stream_buf
[...]

Profiling interrupt: 229 events in 10.042 seconds (23 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------
   89  39%  39% 1.00      426 cpu[0]+6        sync_stream_buf
   64  28%  67% 1.00      398 cpu[0]+6        sbus_intr_wrapper
   23  10%  77% 1.00      324 cpu[0]+6        iommu_dvma_kaddr_load
   21   9%  86% 1.00      512 cpu[0]+6        iommu_tlb_flush
   14   6%  92% 1.00      342 cpu[0]+6        iommu_dvma_unload
   13   6%  98% 1.00      306 cpu[1]          iommu_dvma_sync
    5   2% 100% 1.00      389 cpu[1]          iommu_dma_bindhdl
-------------------------------------------------------------------------
[...]
Example 7 Determining the Average PIL (processor interrupt level) for a CPU
example# lockstat -Iw -l cpu[3] ./testprog

Profiling interrupt: 14791 events in 152.463 seconds (97 events/sec)

Count indv cuml rcnt     nsec CPU+PIL   Hottest Caller
-----------------------------------------------------------------------
13641  92%  92% 1.00      253 cpu[3]    (usermode)
  579   4%  96% 1.00      325 cpu[3]+6  ip_ocsum+0xe8
  375   3%  99% 1.00      411 cpu[3]+10 splx
  154   1% 100% 1.00      527 cpu[3]+4  fas_intr_svc+0x80
   41   0% 100% 1.00      293 cpu[3]+13 send_mondo+0x18
    1   0% 100% 1.00      266 cpu[3]+12 zsa_rxint+0x400
-----------------------------------------------------------------------
[...]
Example 8 Determining which Subsystem is Causing the System to be Busy
example# lockstat -s 10 -I sleep 20

Profiling interrupt: 4863 events in 47.375 seconds (103 events/sec)

Count indv cuml rcnt     nsec CPU+PIL Caller
-----------------------------------------------------------------------
 1929  40%  40% 0.00     3215 cpu[0]  usec_delay+0x78

      nsec ------ Time Distribution ------ count Stack
      4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  1872  ata_wait+0x90
      8192 |                               27    acersb_get_intr_status+0x34
     16384 |                               29    ata_set_feature+0x124
     32768 |                               1     ata_disk_start+0x15c
                                                 ata_hba_start+0xbc
                                                 ghd_waitq_process_and_mutex_hold+0x70
                                                 ghd_waitq_process_and_mutex_exit+0x4
                                                 ghd_transport+0x12c
                                                 ata_disk_tran_start+0x108
-----------------------------------------------------------------------
[...]
See attributes(5) for descriptions of the following attributes:
ATTRIBUTE TYPE          ATTRIBUTE VALUE
Availability            SUNWdtrc
dtrace(1M), plockstat(1M), attributes(5), lockstat(7D), mutex(9F), rwlock(9F)
The profiling support provided by lockstat -I replaces the old (and undocumented) /usr/bin/kgmon and /dev/profile.
Tail-call elimination can affect call sites. For example, if foo()+0x50 calls bar() and the last thing bar() does is call mutex_exit(), the compiler can arrange for bar() to branch to mutex_exit() with a return address of foo()+0x58. Thus, the mutex_exit() in bar() will appear as though it occurred at foo()+0x58.
The PC in the stack frame in which an interrupt occurs can be bogus because, between function calls, the compiler is free to use the return address register for local storage.
When using the -I and -s options together, the interrupted PC will usually not appear anywhere in the stack since the interrupt handler is entered asynchronously, not by a function call from that PC.
The lockstat technology is provided on an as-is basis. The format and content of lockstat output reflect the current Solaris kernel implementation and are therefore subject to change in future releases.