Performance Monitoring in IBM AIX
filemon
Monitors the performance of the file system, and reports the I/O activity on behalf of logical files, virtual memory segments, logical volumes, and physical volumes.
The filemon command monitors a trace of file system and I/O system events, and reports on the file and I/O access performance during that period.
In its normal mode, the filemon command runs in the background while one or more application programs or system commands are being executed and monitored. The filemon command automatically starts and monitors a trace of the program's file system and I/O events in real time. By default, the trace is started immediately; optionally, tracing may be deferred until the user issues a trcon command. The user can issue trcoff and trcon commands while the filemon command is running to turn monitoring off and on as desired. When tracing is stopped by a trcstop command, the filemon command generates an I/O activity report and exits.
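For example, a minimal interactive session that brackets a workload with explicit trace control might look like the following sketch, using only the commands described above (fm.out is just an example output file name):
# filemon -o fm.out -O all
# trcoff
(perform any setup that should not be measured)
# trcon
(run the application or command to be measured)
# trcstop
The trcoff and trcon commands suspend and resume tracing around the setup steps, and trcstop ends the session, at which point the filemon command writes its report to fm.out.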
The filemon command can also process a trace file that has been previously recorded by the trace facility. The file and I/O activity report will be based on the events recorded in that file.
Detailed I/O analysis with the filemon command
The filemon command uses the trace facility to obtain a detailed picture of I/O activity during a time interval on the various layers of file system utilization, including the logical file system, virtual memory segments, LVM, and physical disk layers. The filemon command can be used to collect data on all layers, or layers can be specified with the -O layer option. The default is to collect data on the VM, LVM, and physical layers. Both summary and detailed reports are generated. Since it uses the trace facility, the filemon command can be run only by the root user or by a member of the system group.
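For example, to limit data collection to the logical file and logical volume layers, an invocation along the following lines can be used; the layer identifiers (lf, vm, lv, pv, all) and the comma-separated form should be verified against the filemon documentation for your AIX level, and fm.out is just an example output file name:
# filemon -o fm.out -O lf,lv
(run the workload to be measured)
# trcstop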
To determine whether the filemon command is installed and available, run the following command:
# lslpp -lI perfagent.tools
Tracing is started by the filemon command, optionally suspended with the trcoff subcommand and resumed with the trcon subcommand. As soon as tracing is terminated, the filemon command writes its report to stdout.
Note: Only data for those files opened after the filemon command was started will be collected, unless you specify the -u flag.
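For example, to also include activity on files that were already open when monitoring began (such as the log files of long-running daemons), the -u flag can be added; fm.out and the choice of layers are only examples:
# filemon -u -o fm.out -O all
(run the workload to be measured)
# trcstop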
The filemon command can read the I/O trace data from a specified file, instead of from the real-time trace process. In this case, the filemon report summarizes the I/O activity for the system and period represented by the trace file. This offline processing method is useful when it is necessary to postprocess a trace file from a remote machine, or to perform the trace data collection at one time and postprocess it at another time.
The trcrpt -r command must be executed on the trace log file and redirected to another file, as follows:
# gennames > gennames.out
# trcrpt -r trace.out > trace.rpt
At this point an adjusted trace log file is fed into the filemon command to report on I/O activity captured by a previously recorded trace session, as follows:
# filemon -i trace.rpt -n gennames.out | pg
In this example, the filemon command reads file system trace events from the input file trace.rpt. Because the trace data is already captured in a file, the filemon command does not put itself in the background to allow application programs to be run. After the entire file is read, an I/O activity report for the virtual memory, logical volume, and physical volume levels is displayed on standard output (which, in this example, is piped to the pg command).
If the trace command was run with the -C all flag, then run the trcrpt command also with the -C all flag (see Formatting a report from trace -C output).
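Putting the offline steps together, a collection and postprocessing sequence on a system with many CPUs might look like the following sketch; trace.out, trace.rpt, gennames.out, and fm.out are example file names, and the trace flags shown (-a to return control to the shell while tracing, -o to name the trace log file, and -C all to trace all CPUs) should be verified against your AIX level:
# trace -a -C all -o trace.out
(run the workload to be measured)
# trcstop
# gennames > gennames.out
# trcrpt -C all -r trace.out > trace.rpt
# filemon -i trace.rpt -n gennames.out -O all > fm.out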
The following sequence of commands gives an example of filemon command usage:
# filemon -o fm.out -O all; cp /smit.log /dev/null ; trcstop
The report produced by this sequence, in an otherwise-idle system, is as follows:
Thu Aug 19 11:30:49 1999
System: AIX texmex Node: 4 Machine: 000691854C00
0.369 secs in measured interval
Cpu utilization: 9.0%
Most Active Files
------------------------------------------------------------------------
#MBs #opns #rds #wrs file volume:inode
------------------------------------------------------------------------
0.1 1 14 0 smit.log /dev/hd4:858
0.0 1 0 13 null
0.0 2 4 0 ksh.cat /dev/hd2:16872
0.0 1 2 0 cmdtrace.cat /dev/hd2:16739
Most Active Segments
------------------------------------------------------------------------
#MBs #rpgs #wpgs segid segtype volume:inode
------------------------------------------------------------------------
0.1 13 0 5e93 ???
0.0 2 0 22ed ???
0.0 1 0 5c77 persistent
Most Active Logical Volumes
------------------------------------------------------------------------
util #rblk #wblk KB/s volume description
------------------------------------------------------------------------
0.06 112 0 151.9 /dev/hd4 /
0.04 16 0 21.7 /dev/hd2 /usr
Most Active Physical Volumes
------------------------------------------------------------------------
util #rblk #wblk KB/s volume description
------------------------------------------------------------------------
0.10 128 0 173.6 /dev/hdisk0 N/A
------------------------------------------------------------------------
Detailed File Stats
------------------------------------------------------------------------
FILE: /smit.log volume: /dev/hd4 (/) inode: 858
opens: 1
total bytes xfrd: 57344
reads: 14 (0 errs)
read sizes (bytes): avg 4096.0 min 4096 max 4096 sdev 0.0
read times (msec): avg 1.709 min 0.002 max 19.996 sdev 5.092
FILE: /dev/null
opens: 1
total bytes xfrd: 50600
writes: 13 (0 errs)
write sizes (bytes): avg 3892.3 min 1448 max 4096 sdev 705.6
write times (msec): avg 0.007 min 0.003 max 0.022 sdev 0.006
FILE: /usr/lib/nls/msg/en_US/ksh.cat volume: /dev/hd2 (/usr) inode: 16872
opens: 2
total bytes xfrd: 16384
reads: 4 (0 errs)
read sizes (bytes): avg 4096.0 min 4096 max 4096 sdev 0.0
read times (msec): avg 0.042 min 0.015 max 0.070 sdev 0.025
lseeks: 10
FILE: /usr/lib/nls/msg/en_US/cmdtrace.cat volume: /dev/hd2 (/usr) inode: 16739
opens: 1
total bytes xfrd: 8192
reads: 2 (0 errs)
read sizes (bytes): avg 4096.0 min 4096 max 4096 sdev 0.0
read times (msec): avg 0.062 min 0.049 max 0.075 sdev 0.013
lseeks: 8
------------------------------------------------------------------------
Detailed VM Segment Stats (4096 byte pages)
------------------------------------------------------------------------
SEGMENT: 5e93 segtype: ???
segment flags:
reads: 13 (0 errs)
read times (msec): avg 1.979 min 0.957 max 5.970 sdev 1.310
read sequences: 1
read seq. lengths: avg 13.0 min 13 max 13 sdev 0.0
SEGMENT: 22ed segtype: ???
segment flags: inode
reads: 2 (0 errs)
read times (msec): avg 8.102 min 7.786 max 8.418 sdev 0.316
read sequences: 2
read seq. lengths: avg 1.0 min 1 max 1 sdev 0.0
SEGMENT: 5c77 segtype: persistent
segment flags: pers defer
reads: 1 (0 errs)
read times (msec): avg 13.810 min 13.810 max 13.810 sdev 0.000
read sequences: 1
read seq. lengths: avg 1.0 min 1 max 1 sdev 0.0
------------------------------------------------------------------------
Detailed Logical Volume Stats (512 byte blocks)
------------------------------------------------------------------------
VOLUME: /dev/hd4 description: /
reads: 5 (0 errs)
read sizes (blks): avg 22.4 min 8 max 40 sdev 12.8
read times (msec): avg 4.847 min 0.938 max 13.792 sdev 4.819
read sequences: 3
read seq. lengths: avg 37.3 min 8 max 64 sdev 22.9
seeks: 3 (60.0%)
seek dist (blks): init 6344,
avg 40.0 min 8 max 72 sdev 32.0
time to next req(msec): avg 70.473 min 0.224 max 331.020 sdev 130.364
throughput: 151.9 KB/sec
utilization: 0.06
VOLUME: /dev/hd2 description: /usr
reads: 2 (0 errs)
read sizes (blks): avg 8.0 min 8 max 8 sdev 0.0
read times (msec): avg 8.078 min 7.769 max 8.387 sdev 0.309
read sequences: 2
read seq. lengths: avg 8.0 min 8 max 8 sdev 0.0
seeks: 2 (100.0%)
seek dist (blks): init 608672,
avg 16.0 min 16 max 16 sdev 0.0
time to next req(msec): avg 162.160 min 8.497 max 315.823 sdev 153.663
throughput: 21.7 KB/sec
utilization: 0.04
------------------------------------------------------------------------
Detailed Physical Volume Stats (512 byte blocks)
------------------------------------------------------------------------
VOLUME: /dev/hdisk0 description: N/A
reads: 7 (0 errs)
read sizes (blks): avg 18.3 min 8 max 40 sdev 12.6
read times (msec): avg 5.723 min 0.905 max 20.448 sdev 6.567
read sequences: 5
read seq. lengths: avg 25.6 min 8 max 64 sdev 22.9
seeks: 5 (71.4%)
seek dist (blks): init 4233888,
avg 171086.0 min 8 max 684248 sdev 296274.2
seek dist (%tot blks):init 48.03665,
avg 1.94110 min 0.00009 max 7.76331 sdev 3.36145
time to next req(msec): avg 50.340 min 0.226 max 315.865 sdev 108.483
throughput: 173.6 KB/sec
utilization: 0.10
Using the filemon command on systems with real workloads results in much larger reports and might require more trace buffer space. Space and CPU time consumption for the filemon command can degrade system performance to some extent. Use a nonproduction system to experiment with the filemon command before starting it in a production environment. Also, use offline processing, and on systems with many CPUs, use the -C all flag with the trace command.
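As a sketch of that offline approach on a busier system, the trace buffer and log file sizes can be raised at collection time; the -T (trace buffer size) and -L (log file size) flags of the trace command are commonly used for this, and the byte values below are placeholders only, to be tuned for the workload and verified against your AIX level:
# trace -a -C all -T 2000000 -L 20000000 -o trace.out
(run a representative sample of the workload)
# trcstop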
Note: Although the filemon command reports average, minimum, maximum, and standard deviation in its detailed-statistics sections, the results should not be used to develop confidence intervals or other formal statistical inferences. In general, the distribution of data points is neither random nor symmetrical.