AIX basic commands

AIX release

To display the AIX release:

server1{root}# oslevel
5.3.0.0

AIX Maintenance Levels:

server1{root}# oslevel -r
5300-09
server1{root}# lslpp -h bos.rte
  Fileset         Level     Action       Status       Date         Time
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.rte
                  5.3.7.0   COMMIT       COMPLETE     04/01/08     15:51:10
                  5.3.9.0   COMMIT       COMPLETE     07/28/09     08:50:00
 
Path: /etc/objrepos
  bos.rte
                  5.3.7.0   COMMIT       COMPLETE     04/01/08     15:51:10
server1{root}# oslevel -s
5300-09-03-0918
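
The `oslevel -s` string encodes four fields: base level, technology level (TL), service pack (SP) and build date (YYWW). A portable sketch splitting the sample value above (hard-coded here for illustration, since `oslevel` only exists on AIX):

```shell
# Split an `oslevel -s` style string into its fields (sample value hard-coded).
level="5300-09-03-0918"                  # base-TL-SP-builddate (YYWW)
base=$(echo "$level" | cut -d- -f1)      # 5300 -> AIX 5.3
tl=$(echo "$level"   | cut -d- -f2)      # technology level
sp=$(echo "$level"   | cut -d- -f3)      # service pack
echo "Base $base, TL $tl, SP $sp"
```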

Remark:
The minimum AIX levels for POWER6-based models are AIX 5.2 TL10 and AIX 5L V5.3 TL06.

General hardware information

server1{root}# prtconf
System Model: IBM,9117-MMA
Machine Serial Number: 069CF00
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 8
Processor Clock Speed: 4704 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 server1
Memory Size: 49152 MB
Good Memory Size: 49152 MB
Platform Firmware level: EM320_031
Firmware Version: IBM,EM320_031
Console Login: enable
Auto Restart: true
Full Core: false
.
.

Process scheduler

As on every Unix system, AIX has a process scheduler that manages priorities among processes. IBM does not recommend changing the priorities of Oracle processes. In some situations, however, you may want to submit a command with a higher priority, or change the priority of a long-running process whose priority is too low (which should not occur anyway). The commands are nice (to submit a new command) and renice (to change the priority of an existing process):

To list the default parameters of the AIX process scheduler (the same tool can change their values):

server2{root}# schedo -L
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT           TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
%usDelta                  100    100    100    0      100                      D
--------------------------------------------------------------------------------
affinity_lim              7      7      7      0      100    dispatches        D
--------------------------------------------------------------------------------
allowMCMmigrate           0      0      0      0      1      boolean           D
.
.
server2{root}# ps -elf | grep -e lsn -e PPID | grep -v grep
       F S      UID     PID    PPID   C PRI NI ADDR    SZ    WCHAN    STIME    TTY  TIME CMD
  240001 A   ora320  639374       1   0  60 20 3458c9400 22508        * 17:22:17 pts/12  0:00 /ora320/software/bin/tnslsnr listener_db1 -inherit
server2{root}# renice -n 5 -p 639374
server2{root}# ps -elf | grep -e lsn -e PPID | grep -v grep
       F S      UID     PID    PPID   C PRI NI ADDR    SZ    WCHAN    STIME    TTY  TIME CMD
  240001 A   ora320  639374       1   0  70 25 3458c9400 22508        * 17:22:17 pts/12  0:00 /ora320/software/bin/tnslsnr listener_db1 -inherit
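
The renice example above lowers the listener's priority after the fact; nice does the same at launch time. A minimal portable sketch (the command being launched is just a placeholder):

```shell
# Start a command with its nice value raised by 10 (i.e. lower priority).
# Root can pass a negative increment to raise priority instead.
nice -n 10 sh -c 'echo "running niced"'
```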

General AIX performance tools

The following AIX performance tools provide general information and metrics related to performance.

Resource              Commands
CPU monitoring        vmstat, iostat, topas, sar, time/timex
Memory monitoring     vmstat, topas, ps, lsps, ipcs
I/O subsystem         vmstat, topas, iostat, lvmstat, lsps, lsattr/lsdev, lspv/lsvg/lslv
Network               netstat, topas, atmstat, entstat, tokstat, fddistat, nfsstat, ifconfig
Processes & threads   ps, pstat, topas

In depth tools

The following AIX performance tools provide in-depth information and metrics related to performance.

Resource              Commands
CPU monitoring        netpmon
Memory monitoring     svmon, netpmon, filemon
I/O subsystem         filemon, fileplace
Network               netpmon, tcpdump
Processes & threads   svmon, truss, kdb, dbx, gprof, fuser, prof

Trace based tools

The event-based trace facility collects information about events that occur on the system, such as scheduling dispatches, interrupts, and I/O. Trace points inserted in the kernel code record the events to a trace buffer, and user-level tools are provided to view the trace events in time-sequenced fashion. The events can be analyzed to gain a better understanding of the dynamics of the system.

Resource              Commands
CPU monitoring        tprof, curt, splat, trace, trcrpt
Memory monitoring     trace, trcrpt
I/O subsystem         trace, trcrpt
Network               iptrace, ipreport, trace, trcrpt
Processes & threads   tprof, pprof, trace, trcrpt
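
All of these tools sit on the same start/stop/report cycle of the trace facility. A sketch of a raw trace session (AIX only; the file names and 30-second window are assumptions, and the commands are wrapped in a function so the sketch stays inert):

```shell
# Sketch of a raw trace session; the function is defined, not executed.
run_trace_sample() {
    trace -a -o /tmp/trace.raw                # start tracing asynchronously (-a)
    sleep 30                                  # capture a window of system activity
    trcstop                                   # stop the trace
    trcrpt -o /tmp/trace.txt /tmp/trace.raw   # format the raw trace into a readable report
}
```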

Generic monitoring

# topas
# nmon

Remark:
The nmon source code can be downloaded and even compiled on Linux.

Memory monitoring

Physical memory:

server1{root}# bootinfo -r
50331648
server1{root}# lsattr -El sys0 -a realmem
realmem 50331648 Amount of usable physical memory in Kbytes False
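
As the lsattr output says, realmem is reported in KB; a quick conversion of the value above to GB:

```shell
# realmem is in KB: 50331648 KB / 1024 / 1024 = 48 GB (matches the 49152 MB of prtconf).
realmem_kb=50331648
echo "$((realmem_kb / 1024 / 1024)) GB"
```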

To display virtual memory statistics:

vmstat [interval] [count]
 
System configuration: lcpu=16 mem=49151MB
 
kthr    memory              page              faults        cpu
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 0  3 3261477 316568   0   0   0   0    0   0  71 1334 709  0  0 88 12
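
In vmstat output, avm and fre are counted in 4 KB pages; a sketch converting the fre column above to MB:

```shell
# fre is in 4 KB pages: multiply by 4 and divide by 1024 to get MB.
fre=316568
echo "$((fre * 4 / 1024)) MB free"
```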

To display the VMM kernel settings:

# vmo -a
# vmo -L

I/O monitoring

# filemon

And then, to stop it:

# trcstop
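
filemon runs on top of the trace facility, which is why it is stopped with trcstop. A typical session looks like the sketch below (AIX only; the output file name, report levels and one-minute window are assumptions, and the commands are wrapped in a function so the sketch stays inert):

```shell
# Sketch of a filemon session; the function is defined, not executed.
run_filemon_sample() {
    filemon -o fmon.out -O lf,lv,pv   # trace logical files, logical and physical volumes
    sleep 60                          # let the workload run for one minute
    trcstop                           # stop tracing; the report is flushed to fmon.out
}
```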

Disk I/O statistics:

server1{root}# iostat
 
System configuration: lcpu=16 drives=40 paths=80 vdisks=0
 
tty:      tin         tout    avg-cpu: % user % sys % idle % iowait
          0.2         52.6               18.9   2.6   70.6      7.9
 
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           0.0       0.0       0.0          0         0
hdisk9           2.3     160.4       4.1   15206932  86802036
hdisk6           9.0     508.2      24.9   57384235  265816676
hdisk12          4.0     167.6       7.0   57450652  49140004
hdisk11         18.0     1775.5      54.9   98174024  1031033652
hdisk10          6.5     423.2      17.4   135609936  133547132
hdisk15          0.0       0.0       0.0        138         0
hdisk8          14.4     614.1      36.8   131828384  258755012
hdisk5           3.0     195.6       5.9    3531466  120870164
hdisk7           0.0       0.0       0.0          0         0
hdisk13          0.0       3.5       0.1     199399   2046760
hdisk14          0.0       0.0       0.0        138         0
hdisk16          0.0       0.0       0.0        138         0
hdisk17          0.1       0.9       0.2       4733    599116
hdisk24          7.4     615.9       8.1   183950101  207774476
hdisk18          0.0       0.0       0.0      30423         0
hdisk23          2.6     262.0       3.1   80770601  85879356
hdisk20          5.3     499.5       5.5   152948021  164713928
hdisk22          5.1     576.8       6.1   189567165  177268580
hdisk21          5.1     241.8       6.3   70954565  82825452
hdisk25          3.1     373.2       3.3   116488185  120847952
hdisk19          1.7      57.4       2.5   25768822  10753400
hdisk34          0.0       0.0       0.0          0         0
hdisk36          0.0       0.0       0.0          0         0
hdisk27          1.4       7.9       1.0    1393554   3653120
hdisk28          2.5      56.6       2.7    8204565  27762512
hdisk37          0.0       0.0       0.0          0         0
hdisk29          0.8      85.0       2.1    1566018  52491900
hdisk26          2.2     337.2       4.7   37079801  177369756
hdisk35          0.0       0.0       0.0          0         0
hdisk41          1.1      12.2       1.5    1421199   6355908
hdisk39          0.0       0.0       0.0          0         0
hdisk42          1.2      21.0       1.8    2840739  10500640
hdisk38          0.0       0.0       0.0          0         0
hdisk40          0.0       0.0       0.0          0         0
hdisk43          0.0       0.0       0.0          0         0
hdisk3           0.0       0.0       0.0          0         0
hdisk4           7.4     312.5      15.8   50085598  148669720
hdisk0          10.2     365.6      31.9   31805340  200744258
hdisk1          14.8     410.7      43.0   60467514  200744258
server1{root}# iostat -D hdisk11
 
System configuration: lcpu=16 drives=40 paths=80 vdisks=0
 
hdisk11        xfer:  %tm_act      bps      tps      bread      bwrtn
                        18.0      1.8M    54.9      158.1K       1.7M
               read:      rps  avgserv  minserv  maxserv   timeouts      fails
                        13.6      7.1      0.0      0.0           0          0
              write:      wps  avgserv  minserv  maxserv   timeouts      fails
                        41.3      2.1      0.0      0.0           0          0
              queue:  avgtime  mintime  maxtime  avgwqsz    avgsqsz     sqfull
                         5.2      0.0      0.0      0.0        0.0        54.9
--------------------------------------------------------------------------------
server1{root}# iostat -d hdisk11
 
System configuration: lcpu=16 drives=40 paths=80 vdisks=0
 
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk11         18.0     1775.5      54.9   98174024  1031033652
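
To spot the busiest disks quickly, you can sort the iostat listing on the Kbps column. A portable sketch over a few sample lines mimicking the output above (on AIX you would feed it the real `iostat -d` output instead of the here-document):

```shell
# Rank disks by the Kbps column (field 3), highest first; keep the busiest one.
busiest=$(sort -rnk3 <<'EOF' | head -1 | awk '{print $1}'
hdisk2    0.0     0.0    0.0          0          0
hdisk11  18.0  1775.5   54.9   98174024 1031033652
hdisk8   14.4   614.1   36.8  131828384  258755012
hdisk4    7.4   312.5   15.8   50085598  148669720
EOF
)
echo "busiest disk: $busiest"
```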

Network monitoring

server1{root}# ifconfig -a
en6: flags=1e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
        inet 10.75.32.79 netmask 0xffffffc0 broadcast 10.75.32.127
         tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
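
ifconfig shows the netmask in hexadecimal; a small portable sketch converting it to a CIDR prefix length (0xffffffc0 is the en6 value shown above):

```shell
# Count the set bits of a hex netmask to obtain the CIDR prefix length.
mask=0xffffffc0
v=$((mask)); bits=0
while [ "$v" -ne 0 ]; do
  bits=$((bits + (v & 1)))
  v=$((v >> 1))
done
echo "/$bits"
```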
 
server1{root}# lsdev -Cc if
en0 Defined    Standard Ethernet Network Interface
en1 Defined    Standard Ethernet Network Interface
en2 Defined    Standard Ethernet Network Interface
en3 Defined    Standard Ethernet Network Interface
en4 Defined    Standard Ethernet Network Interface
en5 Defined    Standard Ethernet Network Interface
en6 Available  Standard Ethernet Network Interface
et0 Defined    IEEE 802.3 Ethernet Network Interface
et1 Defined    IEEE 802.3 Ethernet Network Interface
et2 Defined    IEEE 802.3 Ethernet Network Interface
et3 Defined    IEEE 802.3 Ethernet Network Interface
et4 Defined    IEEE 802.3 Ethernet Network Interface
et5 Defined    IEEE 802.3 Ethernet Network Interface
et6 Defined    IEEE 802.3 Ethernet Network Interface
lo0 Available  Loopback Network Interface
server1{root}# entstat -d en6 |grep -i media
Media Speed Selected: Autonegotiate
Media Speed Running: 1000 Mbps / 1 Gbps, Full Duplex
Media Speed Selected: Autonegotiate
Media Speed Running: Autonegotiate

CPU monitoring

Number of cores:

server1{root}# lscfg | grep proc
+ proc0                                                                            Processor
+ proc2                                                                            Processor
+ proc4                                                                            Processor
+ proc6                                                                            Processor
+ proc8                                                                            Processor
+ proc10                                                                           Processor
+ proc12                                                                           Processor
+ proc14                                                                           Processor

If your processors are POWER5 or newer (check with prtconf), you may want to activate simultaneous multithreading (SMT). The command to display the current setting is smtctl; it may also tell you that your hardware is not capable of SMT:

server2{root}# smtctl
smtctl: SMT is not supported on this system.
server3{root}# smtctl
 
This system is SMT capable.
 
SMT is currently enabled.
 
SMT boot mode is not set.
SMT threads are bound to the same physical processor.
 
proc0 has 2 SMT threads.
Bind processor 0 is bound with proc0
Bind processor 1 is bound with proc0
 
 
proc2 has 2 SMT threads.
Bind processor 2 is bound with proc2
Bind processor 3 is bound with proc2
 
 
proc4 has 2 SMT threads.
Bind processor 4 is bound with proc4
Bind processor 5 is bound with proc4
 
 
proc6 has 2 SMT threads.
Bind processor 6 is bound with proc6
Bind processor 7 is bound with proc6
 
 
proc8 has 2 SMT threads.
Bind processor 8 is bound with proc8
Bind processor 9 is bound with proc8
 
 
proc10 has 2 SMT threads.
Bind processor 10 is bound with proc10
Bind processor 11 is bound with proc10
 
 
proc12 has 2 SMT threads.
Bind processor 12 is bound with proc12
Bind processor 13 is bound with proc12
 
 
proc14 has 2 SMT threads.
Bind processor 14 is bound with proc14
Bind processor 15 is bound with proc14
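
With SMT enabled, the number of logical CPUs is simply cores times threads per core, which matches the 16 logical processors seen above:

```shell
# 8 cores, each with 2 SMT threads, yield 16 logical CPUs.
cores=8
smt_threads=2
echo "$((cores * smt_threads)) logical CPUs"
```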

When you activate it, you see two more rows in the v$osstat table (NUM_VCPUS and NUM_LCPUS):

SQL> SELECT * FROM v$osstat;
 
STAT_NAME                                                             VALUE  OSSTAT_ID
---------------------------------------------------------------- ---------- ----------
NUM_CPUS                                                                 16          0
IDLE_TIME                                                        2007288852          1
BUSY_TIME                                                         660341439          2
USER_TIME                                                         535932596          3
SYS_TIME                                                          124408843          4
IOWAIT_TIME                                                        80054612          5
AVG_IDLE_TIME                                                     125403159          7
AVG_BUSY_TIME                                                      41218836          8
AVG_USER_TIME                                                      33443377          9
AVG_SYS_TIME                                                        7723345         10
AVG_IOWAIT_TIME                                                     4951463         11
OS_CPU_WAIT_TIME                                                  940118500         13
RSRC_MGR_CPU_WAIT_TIME                                                    0         14
LOAD                                                              .00390625         15
NUM_CPU_CORES                                                             8         16
NUM_VCPUS                                                                 8         18
NUM_LCPUS                                                                16         19
PHYSICAL_MEMORY_BYTES                                            6.2277E+10       1008

Remark:
To monitor SMT activity, use mpstat.

AIX offers the capability to bind a thread to a particular processor, with the obvious promise of better CPU level 1 and level 2 cache usage (do not do it for database writer and log writer processes). By default, threads run on the next available processor, staying on the same one when possible. The command is bindprocessor:

server2{root}# bindprocessor -q
The available processors are:  0 1 2 3 4 5
server2{root}# ps -ef | grep -e lsn -e PPID | grep -v grep
     UID     PID    PPID   C    STIME    TTY  TIME CMD
  ora320  639374       1   0 17:22:17      -  0:01 /ora320/software/bin/tnslsnr listener_db1 -inherit
server2{root}# bindprocessor 639374 0

You can also create resource pools (a Resource Manager kind of product) using the mkrset, lsrset, execrset and attachrset commands.

server2{root}# lsrset -a
sys/sys0
sys/node.01.00000
sys/node.02.00000
sys/node.03.00000
sys/node.03.00001
sys/node.04.00000
sys/node.04.00001
sys/node.05.00000
sys/node.05.00001
sys/node.05.00002
sys/mem.00000
sys/cpu.00000
sys/cpu.00001
sys/cpu.00002
sys/cpu.00003
sys/cpu.00004
sys/cpu.00005

It is difficult to predict what you can gain from this (you may even see a drawback), and most probably it is not worth the effort…

Another axis of investigation is LPARs and, more recently (POWER5 and above), dynamic LPARs. Note that Oracle is aware of dynamic CPU allocation starting with 10g…

File system defragmentation

A very hot topic (I have never tried any of the commands below):

server1{root}# defragfs -r /ora320/dump
Total allocation groups                                            : 10
Allocation groups skipped - entirely free                          : 8
Allocation groups that are candidates for defragmenting            : 2
Average number of free runs in candidate allocation groups         : 56
 
server1{root}# defragfs -s /ora320/dump
/dev/vgd21dlvol13 filesystem is 0 percent fragmented.
Total number of blocks                : 81866
Number of blocks that may be migrated : 0
 
server1{root}# defragfs -q /ora320/dump
Total allocation groups                                            : 10
Allocation groups skipped - entirely free                          : 8
Allocation groups that are candidates for defragmenting            : 2
Average number of free runs in candidate allocation groups         : 56
