ORION – SLOB implementation comparison

 

Preamble

This blog post is about ORION (ORacle IO Numbers) from Oracle Corporation and SLOB (Silly Little Oracle Benchmark) from Kevin Closson. An ORION versus SLOB comparison is a subject that has been in my basket for ages: I knew of both tools, but I had never taken a few days to understand how they work and what results I could obtain from them.

The post does not aim at benchmarking my infrastructure at all, as here I am using a virtual machine running under VirtualBox on my own desktop. The operating system is Oracle Linux Server release 7.1 with Grid Infrastructure and Oracle Enterprise Edition 12.1.0.2.0.

This blog post simply shows the usage of both tools and compares how they display their results. As I also use my desktop for work, it would be difficult to obtain consistent figures anyway. From what I have read on the Pythian web site, it looks difficult to obtain consistent results between the two tools…

To have four devices, I created four 4 GB disks, attached them to my virtual machine and integrated them into ASM using UDEV, as we have seen in the UDEV post.
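As a reminder, the UDEV integration boils down to a rules file handing the raw devices to the database owner. A minimal sketch, where the device names, owner and group are assumptions matching my VM (adjust to your own setup):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (sketch, names assumed)
KERNEL=="sdb", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde", OWNER="oracle", GROUP="dba", MODE="0660"
```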

ORION

ORION ships by default with any Oracle database home you install (in the bin directory). You do not even have to create a database or declare the devices in ASM: ORION is able to work on raw devices without any piece of Oracle on them. So really nothing to do on this part…

I started by creating an orion.lun file (the default file name) containing the list of my four devices:

[oracle@server3 orion]$ cat orion.lun
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde

As a first test I tried the oltp benchmark:

[oracle@server3 orion]$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/orion -run oltp
ORION: ORacle IO Numbers -- Version 12.1.0.2.0
orion_20150703_1642
Calibration will take approximately 25 minutes.
Using a large value for -cache_size may take longer.
 
 
Maximum Small IOPS=263 @ Small=80 and Large=0
Small Read Latency: avg=303432 us, min=7729 us, max=1917278 us, std dev=209005 us @ Small=80 and Large=0
 
Minimum Small Latency=33139 usecs @ Small=4 and Large=0
Small Read Latency: avg=33139 us, min=5746 us, max=395517 us, std dev=19557 us @ Small=4 and Large=0
Small Read / Write Latency Histogram @ Small=4 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             0                       0
     1024 - 2048        us:             0                       0
     2048 - 4096        us:             0                       0
     4096 - 8192        us:             23                      0
     8192 - 16384       us:             626                     0
    16384 - 32768       us:             3833                    0
    32768 - 65536       us:             2414                    0
    65536 - 131072      us:             293                     0
   131072 - 262144      us:             38                      0
   262144 - 524288      us:             3                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

The test reports a maximum of 263 small (8 KB) IOPS, i.e. 2.05 MBPS, with an average read latency of 303 ms.
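The MBPS figure is simply the IOPS figure multiplied by the 8 KB small IO size; a quick sanity check of the numbers above:

```shell
# 263 small IOs of 8 KB each per second, expressed in MB per second
awk 'BEGIN{printf "%.2f MBPS\n", 263 * 8 / 1024}'
# → 2.05 MBPS
```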

I then refined the test a bit, adding writes and specifying the number of disks:

[oracle@server3 orion]$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/orion -run oltp -num_disks 4 -write 20
ORION: ORacle IO Numbers -- Version 12.1.0.2.0
orion_20150706_1307
Calibration will take approximately 25 minutes.
Using a large value for -cache_size may take longer.
 
 
Maximum Small IOPS=217 @ Small=76 and Large=0
Small Read Latency: avg=411917 us, min=188 us, max=4162676 us, std dev=497365 us @ Small=76 and Large=0
Small Write Latency: avg=98543 us, min=90 us, max=3584324 us, std dev=435759 us @ Small=76 and Large=0
 
Minimum Small Latency=33385 usecs @ Small=4 and Large=0
Small Read Latency: avg=41432 us, min=219 us, max=2137641 us, std dev=85882 us @ Small=4 and Large=0
Small Write Latency: avg=2106 us, min=76 us, max=987482 us, std dev=30642 us @ Small=4 and Large=0
Small Read / Write Latency Histogram @ Small=4 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       28
      128 - 256         us:             3                       346
      256 - 512         us:             4                       941
      512 - 1024        us:             1                       84
     1024 - 2048        us:             0                       57
     2048 - 4096        us:             0                       1
     4096 - 8192        us:             42                      0
     8192 - 16384       us:             688                     0
    16384 - 32768       us:             2713                    2
    32768 - 65536       us:             1776                    0
    65536 - 131072      us:             342                     2
   131072 - 262144      us:             70                      2
   262144 - 524288      us:             40                      3
   524288 - 1048576     us:             12                      1
  1048576 - 2097152     us:             10                      0
  2097152 - 4194304     us:             1                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

The test reports a maximum of 217 small (8 KB) IOPS, i.e. 1.70 MBPS, with an average read latency of 412 ms and an average write latency of 99 ms.

With the normal benchmark, I discovered that the full list of options is not usable for all tests:

[oracle@server3 orion]$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/orion -run normal -num_disks 4 -write 20
ORION: ORacle IO Numbers -- Version 12.1.0.2.0
ERROR: When -run is simple or normal, -testname and -num_disks must be specified.  The only other parameters which may be specified are -cache_size, -storax, -hugenotneeded, iotype and -verbose.
[oracle@server3 orion]$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/orion -run normal -num_disks 4
ORION: ORacle IO Numbers -- Version 12.1.0.2.0
orion_20150708_1549
Calibration will take approximately 190 minutes.
Using a large value for -cache_size may take longer.
 
Maximum Large MBPS=42.57 @ Small=0 and Large=7
 
Maximum Small IOPS=212 @ Small=18 and Large=0
Small Read Latency: avg=84594 us, min=5454 us, max=772792 us, std dev=82400 us @ Small=18 and Large=0
 
Minimum Small Latency=11973 usecs @ Small=1 and Large=0
Small Read Latency: avg=11973 us, min=0 us, max=368670 us, std dev=11770 us @ Small=1 and Large=0
Small Read / Write Latency Histogram @ Small=1 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             1                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             0                       0
     1024 - 2048        us:             1                       0
     2048 - 4096        us:             135                     0
     4096 - 8192        us:             1137                    0
     8192 - 16384       us:             3278                    0
    16384 - 32768       us:             336                     0
    32768 - 65536       us:             89                      0
    65536 - 131072      us:             21                      0
   131072 - 262144      us:             7                       0
   262144 - 524288      us:             2                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

The test reports a maximum of 212 small (8 KB) IOPS, i.e. 1.66 MBPS, with an average read latency of 85 ms; the maximum throughput was achieved with large (1 MB) IOs at 42.57 MBPS. The huge drop in read latency compared with the previous runs can be explained by the other programs running concurrently on my desktop…

The dss test uses random large (1 MB) IOs to get the maximum throughput:

[oracle@server3 orion]$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/orion -run dss -num_disks 4 -write 20
ORION: ORacle IO Numbers -- Version 12.1.0.2.0
orion_20150710_1305
Calibration will take approximately 77 minutes.
Using a large value for -cache_size may take longer.
 
Maximum Large MBPS=42.86 @ Small=0 and Large=48

So 42.86 MBPS.
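Beyond the canned oltp, normal and dss workloads, ORION also offers an advanced run type where IO sizes and the read/write mix are specified explicitly. A sketch of an equivalent mixed run, with flag names taken from the ORION documentation but values that are my assumptions (I have not run this exact command):

```shell
# Sketch only: an "advanced" ORION run with 8 KB random small IOs,
# 1 MB large IOs and 20% writes (values are assumptions)
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
cmd="$ORACLE_HOME/bin/orion -run advanced -num_disks 4 \
  -size_small 8 -size_large 1024 -type rand -write 20 -matrix basic"
echo "$cmd"
```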

SLOB

Putting SLOB in place is a bit more complex, as you have to set up a database to make it work… The software installation itself is as simple as an unzip plus a small make in the wait_kit directory.
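For reference, the slob tablespace I pre-created could look like the sketch below; the datafile location and sizes are assumptions matching my ASM setup, and it must simply be large enough for SCALE times the number of users (roughly 11 GB in my case):

```sql
-- Sketch: pre-creating the SLOB tablespace in ASM (sizes are assumptions)
CREATE BIGFILE TABLESPACE slob
  DATAFILE '+DATA' SIZE 1G
  AUTOEXTEND ON NEXT 512M MAXSIZE 12G;
```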

Once the database is created, run setup.sh with the name of the tablespace you have pre-created (slob) and the number of users (I chose a big value to allow multiple tests with lower values):

[oracle@server3 SLOB]$ ./setup.sh slob 128
NOTIFY  : 2015.07.09-16:33:09 :
NOTIFY  : 2015.07.09-16:33:09 : Begin SLOB setup. Checking configuration.
NOTIFY  : 2015.07.09-16:33:09 :
NOTIFY  : 2015.07.09-16:33:09 : Load parameters from slob.conf:
NOTIFY  : 2015.07.09-16:33:09 : LOAD_PARALLEL_DEGREE == "4"
NOTIFY  : 2015.07.09-16:33:09 : SCALE == "10000"
NOTIFY  : 2015.07.09-16:33:09 : ADMIN_SQLNET_SERVICE == ""
NOTIFY  : 2015.07.09-16:33:09 : ADMIN_CONNECT_STRING == "/ as sysdba"
NOTIFY  : 2015.07.09-16:33:09 : NON_ADMIN_CONNECT_STRING == ""
NOTIFY  : 2015.07.09-16:33:09 :
NOTIFY  : 2015.07.09-16:33:09 : Testing connectivity to the instance to validate slob.conf settings.
NOTIFY  : 2015.07.09-16:33:09 : Testing Admin connect using "/ as sysdba"
NOTIFY  : 2015.07.09-16:33:09 : ./setup.sh: Successful test connection: "sqlplus -L / as sysdba"
NOTIFY  : 2015.07.09-16:33:09 :
NOTIFY  : 2015.07.09-16:33:10 : Dropping prior SLOB schemas.
NOTIFY  : 2015.07.09-16:33:34 : Deleted 33 SLOB schemas. Note, this includes the user0 schema.
NOTIFY  : 2015.07.09-16:33:35 : Previous SLOB schemas have been removed.
NOTIFY  : 2015.07.09-16:33:35 : Preparing to load 128 schema(s) into tablespace: slob
NOTIFY  : 2015.07.09-16:33:35 : Loading user1 schema.
NOTIFY  : 2015.07.09-16:33:48 : Finished loading user1 schema in 13 seconds.
NOTIFY  : 2015.07.09-16:33:48 : Beginning concurrent load phase.
NOTIFY  : 2015.07.09-16:33:50 : Waiting for background group 1. Loading up to user5.
NOTIFY  : 2015.07.09-16:35:17 : Finished background group 1.
NOTIFY  : 2015.07.09-16:35:19 : Waiting for background group 2. Loading up to user9.
NOTIFY  : 2015.07.09-16:36:40 : Finished background group 2.
NOTIFY  : 2015.07.09-16:36:42 : Waiting for background group 3. Loading up to user13.
NOTIFY  : 2015.07.09-16:38:07 : Finished background group 3.
NOTIFY  : 2015.07.09-16:38:08 : Waiting for background group 4. Loading up to user17.
NOTIFY  : 2015.07.09-16:39:20 : Finished background group 4.
NOTIFY  : 2015.07.09-16:39:22 : Waiting for background group 5. Loading up to user21.
NOTIFY  : 2015.07.09-16:40:49 : Finished background group 5.
NOTIFY  : 2015.07.09-16:40:50 : Waiting for background group 6. Loading up to user25.
NOTIFY  : 2015.07.09-16:42:11 : Finished background group 6.
NOTIFY  : 2015.07.09-16:42:13 : Waiting for background group 7. Loading up to user29.
NOTIFY  : 2015.07.09-16:43:37 : Finished background group 7.
NOTIFY  : 2015.07.09-16:43:39 : Waiting for background group 8. Loading up to user33.
NOTIFY  : 2015.07.09-16:44:57 : Finished background group 8.
NOTIFY  : 2015.07.09-16:44:59 : Waiting for background group 9. Loading up to user37.
NOTIFY  : 2015.07.09-16:46:33 : Finished background group 9.
NOTIFY  : 2015.07.09-16:46:34 : Waiting for background group 10. Loading up to user41.
NOTIFY  : 2015.07.09-16:48:12 : Finished background group 10.
NOTIFY  : 2015.07.09-16:48:14 : Waiting for background group 11. Loading up to user45.
NOTIFY  : 2015.07.09-16:49:50 : Finished background group 11.
NOTIFY  : 2015.07.09-16:49:52 : Waiting for background group 12. Loading up to user49.
NOTIFY  : 2015.07.09-16:51:22 : Finished background group 12.
NOTIFY  : 2015.07.09-16:51:24 : Waiting for background group 13. Loading up to user53.
NOTIFY  : 2015.07.09-16:53:05 : Finished background group 13.
NOTIFY  : 2015.07.09-16:53:06 : Waiting for background group 14. Loading up to user57.
NOTIFY  : 2015.07.09-16:54:40 : Finished background group 14.
NOTIFY  : 2015.07.09-16:54:42 : Waiting for background group 15. Loading up to user61.
NOTIFY  : 2015.07.09-16:56:36 : Finished background group 15.
NOTIFY  : 2015.07.09-16:56:37 : Waiting for background group 16. Loading up to user65.
NOTIFY  : 2015.07.09-16:58:42 : Finished background group 16.
NOTIFY  : 2015.07.09-16:58:44 : Waiting for background group 17. Loading up to user69.
NOTIFY  : 2015.07.09-17:00:39 : Finished background group 17.
NOTIFY  : 2015.07.09-17:00:41 : Waiting for background group 18. Loading up to user73.
NOTIFY  : 2015.07.09-17:03:17 : Finished background group 18.
NOTIFY  : 2015.07.09-17:03:19 : Waiting for background group 19. Loading up to user77.
NOTIFY  : 2015.07.09-17:04:56 : Finished background group 19.
NOTIFY  : 2015.07.09-17:04:58 : Waiting for background group 20. Loading up to user81.
NOTIFY  : 2015.07.09-17:06:44 : Finished background group 20.
NOTIFY  : 2015.07.09-17:06:45 : Waiting for background group 21. Loading up to user85.
NOTIFY  : 2015.07.09-17:08:37 : Finished background group 21.
NOTIFY  : 2015.07.09-17:08:39 : Waiting for background group 22. Loading up to user89.
NOTIFY  : 2015.07.09-17:10:33 : Finished background group 22.
NOTIFY  : 2015.07.09-17:10:35 : Waiting for background group 23. Loading up to user93.
NOTIFY  : 2015.07.09-17:12:31 : Finished background group 23.
NOTIFY  : 2015.07.09-17:12:32 : Waiting for background group 24. Loading up to user97.
NOTIFY  : 2015.07.09-17:14:12 : Finished background group 24.
NOTIFY  : 2015.07.09-17:14:13 : Waiting for background group 25. Loading up to user101.
NOTIFY  : 2015.07.09-17:16:01 : Finished background group 25.
NOTIFY  : 2015.07.09-17:16:02 : Waiting for background group 26. Loading up to user105.
NOTIFY  : 2015.07.09-17:17:39 : Finished background group 26.
NOTIFY  : 2015.07.09-17:17:40 : Waiting for background group 27. Loading up to user109.
NOTIFY  : 2015.07.09-17:19:16 : Finished background group 27.
NOTIFY  : 2015.07.09-17:19:18 : Waiting for background group 28. Loading up to user113.
NOTIFY  : 2015.07.09-17:20:58 : Finished background group 28.
NOTIFY  : 2015.07.09-17:20:59 : Waiting for background group 29. Loading up to user117.
NOTIFY  : 2015.07.09-17:23:00 : Finished background group 29.
NOTIFY  : 2015.07.09-17:23:03 : Waiting for background group 30. Loading up to user121.
NOTIFY  : 2015.07.09-17:24:35 : Finished background group 30.
NOTIFY  : 2015.07.09-17:24:37 : Waiting for background group 31. Loading up to user125.
NOTIFY  : 2015.07.09-17:26:25 : Finished background group 31.
NOTIFY  : 2015.07.09-17:27:43 : Completed concurrent data loading phase: 3234 seconds
NOTIFY  : 2015.07.09-17:27:43 : Creating SLOB procedure.
NOTIFY  : 2015.07.09-17:27:44 : SLOB procedure created.
NOTIFY  : 2015.07.09-17:27:45 : Row and block counts for SLOB table(s) reported in ./slob_data_load_summary.txt
NOTIFY  : 2015.07.09-17:27:45 : Please examine ./slob_data_load_summary.txt for any possbile errors.
NOTIFY  : 2015.07.09-17:27:45 :
NOTIFY  : 2015.07.09-17:27:45 : NOTE: No errors *detected* but if ./slob_data_load_summary.txt shows errors then
NOTIFY  : 2015.07.09-17:27:45 : examine /home/oracle/slob/SLOB/cr_tab_and_load.out.
 
NOTIFY  : 2015.07.09-17:27:45 : SLOB setup complete (3276 seconds).

This represents around 10-11 GB of data:

SQL> SET lines 200
SQL> col FILE_NAME FOR a50
SQL> SELECT file_name,tablespace_name,bytes/(1024*1024) AS "Size MB" FROM dba_data_files;
 
FILE_NAME                                          TABLESPACE_NAME                   SIZE MB
-------------------------------------------------- ------------------------------ ----------
+DATA/ORCL/DATAFILE/SYSTEM.261.884608369           SYSTEM                                700
+DATA/ORCL/DATAFILE/sysaux.262.884608397           SYSAUX                                550
+DATA/ORCL/DATAFILE/undotbs1.263.884608415         UNDOTBS1                              255
+DATA/ORCL/DATAFILE/users.265.884608455            USERS                                   5
+DATA/ORCL/DATAFILE/slob.267.884615501             SLOB                             10885.75

Each test run with runit.sh produces several files, which are described in the official SLOB documentation. The one you will use most is awr.txt (yes, an AWR report in text format), as a script to dig into it is provided: awr_info.sh in the misc directory.

The first test with 128 concurrent users gave:

[oracle@server3 SLOB]$ ./runit.sh 128
NOTIFY : 2015.07.09-17:38:41 :
NOTIFY : 2015.07.09-17:38:41 : Conducting SLOB pre-test checks.
NOTIFY:
UPDATE_PCT == 25
RUN_TIME == 300
WORK_LOOP == 0
SCALE == 10000
WORK_UNIT == 64
ADMIN_SQLNET_SERVICE == ""
SQLNET_SERVICE_MAX == "0"
admin_connect_string == "/ as sysdba"
non_admin_connect_string == ""
admin_conn == "sqlplus -L / as sysdba"
 
NOTIFY : 2015.07.09-17:38:41 : Verifying connectivity.
NOTIFY : 2015.07.09-17:38:41 : Testing SYSDBA connectivity to the instance to validate slob.conf settings.
NOTIFY : 2015.07.09-17:38:41 : Testing connectivity to instance (user1/user1)
NOTIFY : 2015.07.09-17:38:42 : Testing connectivity to instance (user128/user128)
NOTIFY : 2015.07.09-17:38:42 : Connectivity verified.
NOTIFY : 2015.07.09-17:38:42 : Performing redo log switch.
NOTIFY : 2015.07.09-17:38:45 : Redo log switch complete.
NOTIFY : 2015.07.09-17:38:45 : Setting up trigger mechanism.
NOTIFY : 2015.07.09-17:38:55 : Running iostat, vmstat and mpstat on current host--in background.
NOTIFY : 2015.07.09-17:38:55 : Connecting 128 sessions ...
NOTIFY : 2015.07.09-17:39:05 :
NOTIFY : 2015.07.09-17:39:05 : Pausing for 5 seconds before triggering the test.
NOTIFY : 2015.07.09-17:39:35 : Executing AWR "before snap" procedure. Connect string is "sqlplus -S -L / as sysdba"
NOTIFY : 2015.07.09-17:39:57 :
NOTIFY : 2015.07.09-17:39:57 : Triggering the test.
NOTIFY : 2015.07.09-17:40:05 : List of monitored sqlplus PIDs written to /tmp/23155_slob.pids.out
NOTIFY : 2015.07.09-17:40:29 : Waiting for 292 seconds before monitoring running processes (for exit).
NOTIFY : 2015.07.09-17:45:22 : Entering process monitoring loop.
NOTIFY : 2015.07.09-17:45:35 : There are 129 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:45:46 : There are 129 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:45:57 : There are 121 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:46:08 : There are 109 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:46:19 : There are 94 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:46:29 : There are 62 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:46:40 : There are 37 sqlplus processes remaining.
NOTIFY : 2015.07.09-17:46:46 : Run time in seconds was:  409
NOTIFY : 2015.07.09-17:46:46 : Executing AWR "after snap" procedure. Connect string is "sqlplus -S -L / as sysdba"
NOTIFY : 2015.07.09-17:47:20 : Generating AWR reports. HTML reports will be compressed. Connect string is "sqlplus -L / as sysdba"
NOTIFY : 2015.07.09-17:47:21 : Terminating background data collectors.
./runit.sh: line 589: 17395 Killed                  ( iostat -xm 3 > iostat.out 2>&1 )
./runit.sh: line 589: 17396 Killed                  ( vmstat 3 > vmstat.out 2>&1 )
./runit.sh: line 591: 17397 Killed                  ( mpstat -P ALL 3 > mpstat.out 2>&1 )
NOTIFY : 2015.07.09-17:48:43 : SLOB test is complete.

I found db file parallel read among the top wait events of the AWR report, so I applied Yury's trick of disabling prefetching through hidden parameters (they are set with scope=spfile, so an instance restart is required before they take effect):

ALTER SYSTEM SET "_db_block_prefetch_limit"=0 scope=spfile;
ALTER SYSTEM SET "_db_block_prefetch_quota"=0 scope=spfile;
ALTER SYSTEM SET "_db_file_noncontig_mblock_read_count"=0 scope=spfile;

Second run with 128 concurrent users:

[oracle@server3 SLOB]$ ./runit.sh 128
.
.
.
[oracle@server3 SLOB]$ ./misc/awr_info.sh awr.txt
FILE|SESSIONS|ELAPSED|DB CPU|DB Tm|EXECUTES|LIO|PREADS|READ_MBS|PWRITES|WRITE_MBS|REDO_MBS|DFSR_LAT|DPR_LAT|DFPR_LAT|DFPW_LAT|LFPW_LAT|TOP WAIT|
awr.txt||862|0.4|77.7|211|947|78|    0.7|25|     0|0|  178304|0|0|  130938|  198502|cursor: pin S wait on X             67903      40.6K     598.63   60.7 Concurre|

The cursor: pin S wait on X wait event accounted for more than 60% of DB time, which is clearly an issue, as here I expect an IO-related wait event on top. So I decided to decrease the number of users. I have added spaces in the awr_info.sh script output for better readability; note you can also use Excel and its “Text to Columns” feature:

[oracle@server3 SLOB]$ ./runit.sh 32
.
.
.
[oracle@server3 SLOB]$ ./misc/awr_info.sh awr.txt
FILE   |SESSIONS|ELAPSED|DB CPU|DB Tm|EXECUTES|LIO|PREADS|READ_MBS|PWRITES|WRITE_MBS|REDO_MBS|DFSR_LAT|DPR_LAT|DFPR_LAT|DFPW_LAT|LFPW_LAT|TOP WAIT|
awr.txt|        |314    |0.1   |31.3 |30      |313|132   |1.1     |54     |1        |0       |236596  |0      |0       |178926  |380556  |db file sequential read             41087     9720.5     236.58   98.8 User I/O|

So 132 IOPS (PREADS, i.e. physical reads) for 1.1 MBPS (READ_MBS), with a read latency of 237 ms (DFSR_LAT) and a write latency of 179 ms (DFPW_LAT).
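Instead of padding the fields by hand, the pipe-delimited awr_info.sh output can also be aligned with column(1). A small demonstration on a shortened sample line; the same pipe works on the real script output:

```shell
# Align pipe-delimited awr_info.sh output into readable columns
printf '%s\n' 'FILE|SESSIONS|ELAPSED|DB CPU' 'awr.txt|32|314|0.1' \
  | column -s'|' -t
```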

With 64 users:

[oracle@server3 SLOB]$ ./runit.sh 64
.
.
.
[oracle@server3 SLOB]$ ./misc/awr_info.sh awr.txt
FILE   |SESSIONS|ELAPSED|DB CPU|DB Tm|EXECUTES|LIO|PREADS|READ_MBS|PWRITES|WRITE_MBS|REDO_MBS|DFSR_LAT|DPR_LAT|DFPR_LAT|DFPW_LAT|LFPW_LAT|TOP WAIT|
awr.txt|        |331    |0.1   |61.9 |50      |395|132   |1.1     |48     |     0   |0       |457996  |0       |0      |196242  |602524  |db file sequential read             43579        20K     458.00   97.4 User I/O|

So 132 IOPS (PREADS, i.e. physical reads) for 1.1 MBPS (READ_MBS), with a read latency of 458 ms (DFSR_LAT) and a write latency of 196 ms (DFPW_LAT).
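The latency columns reported by awr_info.sh are in microseconds, so the millisecond figures quoted above are simply a division by 1000, for example for the DFSR_LAT value of this last run:

```shell
# DFSR_LAT of 457996 us expressed in milliseconds
awk 'BEGIN{printf "%.0f ms\n", 457996 / 1000}'
# → 458 ms
```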

ORION – SLOB conclusion

The results match more or less, but again that was not the aim of this blog post. Multiple posts on the Pythian web site have extensively compared both tools. From what I have read and understood, ORION seems a bit more optimistic than SLOB…

My personal feeling on both tools is that ORION is much faster to implement than SLOB: there is no database to set up, and once the database binaries are installed you can start measuring the performance of your infrastructure. Of course it is not a TPC-C (or any other TPC-x) benchmark, but it can help you check whether your new disk array, new multipathing configuration or whatever has been set up correctly and provides the expected benefit…
