IT World (https://blog.yannickjaquier.com) - RDBMS, Unix and many more...

Secure external password store (SEPS) implementation
https://blog.yannickjaquier.com/oracle/secure-external-password-store-implementation.html
Mon, 19 Feb 2018 09:31:00 +0000

Preamble

Warning: nothing recent in this blog post! But, as part of our continuous effort to secure our Oracle databases, and under the never-ending requests of our preferred SOX auditors, we have been looking for a solution to hide applicative account passwords from developers and end users.

You might argue that those applicative passwords should not be known by any unauthorized person, and you would be right, except that in real life there can be some divergence from this basic rule. So how would they learn those applicative account passwords? Simply by reading batch scripts or by listing running processes (the ps command), for example. Introduced in Oracle 10gR2, the Oracle secure external password store (SEPS) feature targets exactly this problem: it removes clear text passwords from batch scripts and lets people access a database with an account without knowing the password.

The foundation of this feature is the Oracle Wallet: we store accounts and their associated passwords inside it, encrypted (3DES) of course, and connections then become passwordless on the command line, whether interactive or in batch jobs.
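Conceptually the wallet behaves like an encrypted map keyed by connect string: the client, seeing a passwordless "/@alias" login with SQLNET.WALLET_OVERRIDE=true, looks up the alias and injects the stored credentials. The sketch below is only a mental model of that lookup, not Oracle's API:

```python
# Illustrative model only: SEPS is conceptually an encrypted map from
# connect string (TNS alias) to a single username/password pair.
# These names are NOT Oracle's API, just a sketch of the idea.

def wallet_lookup(wallet: dict, connect_string: str) -> tuple:
    """Return the (username, password) stored for a connect string,
    which is what the client does transparently when you connect
    with '/@alias' and SQLNET.WALLET_OVERRIDE=true."""
    if connect_string not in wallet:
        raise KeyError(f"no credential stored for {connect_string}")
    return wallet[connect_string]

# One credential per connect string, mirroring mkstore -createCredential.
wallet = {"pdb1_yjaquier": ("yjaquier", "secure_password")}

user, _password = wallet_lookup(wallet, "pdb1_yjaquier")
print(user)  # yjaquier
```

This also explains why, as we will see below, each credential is keyed by a connect string and holds exactly one username and password.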

Last but not least, this feature does not require the Advanced Security paid option. You can even use it for free on your Windows laptop client!

This blog post has been written using two virtual machines running Oracle 12cR2 (12.2.0.1.0) Enterprise Edition for the database and the client parts, both on Oracle Linux Server release 7.4. Below, server1.domain.com is my database server while the client part runs on server4.domain.com.

Wallet creation

Wallet management relies on three distinct tools:

  • Oracle Wallet Manager (OWM), only graphical tool in this list
  • orapki
  • mkstore

Very quickly you will realize that OWM has no menu to manage SEPS, and even if you can create an empty wallet you cannot save it while it is empty… So we will have to work with the command line tools.

The recommended tool is mkstore, but orapki has the interesting -auto_login_local option, which prevents the wallet from being used on another machine if copied there. I will use the OWM default directory, i.e. $ORACLE_HOME/owm/wallets/oracle.

Create the wallet with:

[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:
[oracle@server4 ~]$ ll $ORACLE_HOME/owm/wallets/oracle
total 8
-rw------- 1 oracle dba 194 Feb  6 12:46 cwallet.sso
-rw------- 1 oracle dba   0 Feb  6 12:46 cwallet.sso.lck
-rw------- 1 oracle dba 149 Feb  6 12:46 ewallet.p12
-rw------- 1 oracle dba   0 Feb  6 12:46 ewallet.p12.lck

Or with orapki and -auto_login_local option:

[oracle@server4 ~]$ orapki wallet create -wallet $ORACLE_HOME/owm/wallets/oracle -auto_login_local
Oracle PKI Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:
Operation is successfully completed.
[oracle@server4 ~]$ ll $ORACLE_HOME/owm/wallets/oracle
total 8
-rw------- 1 oracle dba 194 Feb  6 12:48 cwallet.sso
-rw------- 1 oracle dba   0 Feb  6 12:48 cwallet.sso.lck
-rw------- 1 oracle dba 149 Feb  6 12:48 ewallet.p12
-rw------- 1 oracle dba   0 Feb  6 12:48 ewallet.p12.lck

You also need to modify your client sqlnet.ora to specify where your wallet is with WALLET_LOCATION, and to tell the client to override the supplied credential with the one stored in the wallet with SQLNET.WALLET_OVERRIDE:

[oracle@server4 ~]$ cat $ORACLE_HOME/network/admin/sqlnet.ora
WALLET_LOCATION=
  (SOURCE=
      (METHOD=file)
      (METHOD_DATA=
         (DIRECTORY=/u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle)))

SQLNET.WALLET_OVERRIDE=true

Secure External Password Store credentials creation

To insert credentials into your wallet the only available tool is mkstore. When creating a credential you either supply the password on the command line, as in the first example below, or, better, you supply it interactively, as in the second example. As you may have guessed, -deleteCredential deletes a credential and -listCredential lists them:

[oracle@server4 ~]$ mkstore
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

mkstore [-wrl wrl] [-create] [-createSSO] [-createLSSO] [-createALO] [-delete] [-deleteSSO] [-list] [-createEntry alias secret] [-viewEntry alias]
[-modifyEntry alias secret] [-deleteEntry alias] [-createCredential connect_string username password] [-listCredential]
[-modifyCredential connect_string username password] [-deleteCredential connect_string]  [-createUserCredential map key   password]
[-modifyUserCredential map key username password]  [-deleteUserCredential map key] [-help] [-nologo]
[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -createCredential pdb1_yjaquier yjaquier secure_password
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -deleteCredential pdb1_yjaquier
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -createCredential pdb1_yjaquier yjaquier
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
1: pdb1_yjaquier yjaquier

The orapki equivalent is of limited interest, just like the -list option of mkstore: both display only the entry names, not the credentials themselves:

[oracle@server4 ~]$ orapki wallet display -wallet /u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle
Oracle PKI Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Requested Certificates:
User Certificates:
Oracle Secret Store entries:
oracle.security.client.connect_string1
oracle.security.client.password1
oracle.security.client.username1
Trusted Certificates:
[oracle@server4 ~]$ mkstore -wrl /u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle -list
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Oracle Secret Store entries:
oracle.security.client.connect_string1
oracle.security.client.connect_string2
oracle.security.client.password1
oracle.security.client.password2
oracle.security.client.username1
oracle.security.client.username2

Remark:
If you later want to modify or delete entries, mkstore also has the -modifyCredential and -deleteCredential options!

You also need to insert a TNS entry in tnsnames.ora with exactly the same name as the credential you have just created:

[oracle@server4 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
pdb1_yjaquier =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.domain.com)(PORT = 1531))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )
[oracle@server4 ~]$ tnsping pdb1_yjaquier

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 06-FEB-2018 15:21:23

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/12.2.0/client_1//network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = server1.domain.com)(PORT = 1531)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1)))
OK (10 msec)

Remark:
We see here that it is important to set up a good naming convention for credentials, or things might quickly become a mess. Here I have chosen service name_account name.
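Since the credential name is just a string, the convention can be enforced with a small helper when scripting wallet entries. The function below is purely hypothetical (not part of any Oracle tooling) and only sketches the service name_account name convention:

```python
def credential_alias(service_name: str, account: str) -> str:
    """Hypothetical helper: build the credential / tnsnames.ora alias
    following the <service name>_<account name> convention."""
    return f"{service_name}_{account}".lower()

print(credential_alias("pdb1", "yjaquier"))  # pdb1_yjaquier
```

Any script generating mkstore -createCredential calls and matching tnsnames.ora entries can then share the same helper, guaranteeing the two names stay identical.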

Even if you cannot create those entries with OWM, you can use it to display them (editing is not available either):

[Screenshots: the SEPS entries displayed in Oracle Wallet Manager (seps01)]

One “funny” thing is that you can still display passwords of the entries you have created in SEPS:

[oracle@server4 ~]$ mkstore -wrl /u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle -viewEntry oracle.security.client.username1
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
oracle.security.client.username1 = yjaquier
[oracle@server4 ~]$ mkstore -wrl /u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle -viewEntry oracle.security.client.password1
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
oracle.security.client.password1 = secure_password

Secure External Password Store testing

SQL*Plus

The simplest test is with the SQL*Plus binary that comes with my Linux client:

[oracle@server4 ~]$ sqlplus /@pdb1_yjaquier

SQL*Plus: Release 12.2.0.1.0 Production on Tue Feb 6 15:21:52 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Mon Feb 05 2018 15:02:21 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show user
USER is "YJAQUIER"

JDBC OCI driver

Using the JDBC OCI driver is the simplest option when planning to use SEPS, because you directly benefit from the Oracle client in which you have configured SEPS through the Oracle wallet. The source code I have written is:

import java.sql.ResultSet;
import java.sql.Connection;
import java.sql.SQLException;
import oracle.jdbc.OracleDriver;
import java.sql.DriverManager;

public class seps_oci {
  public static void main(String[] args) throws Exception {
    Connection connection1 = null;
    String query1 = "select user from dual";
    ResultSet resultset1 = null;

    try {
      connection1 = DriverManager.getConnection("jdbc:oracle:oci:/@pdb1_yjaquier");
    }
    catch (SQLException e) {
      System.out.println("Connection Failed! Check output console");
      e.printStackTrace();
      System.exit(1);
    }
    System.out.println("Connected to Oracle database...");
    
    if (connection1!=null) {
      try {
        resultset1 = connection1.createStatement().executeQuery(query1);
        while (resultset1.next()) {
          System.out.println("Connected user: "+resultset1.getString(1));
        }
      }
      catch (SQLException e) {
        System.out.println("Query has failed...");
      }
    }
    if (resultset1 != null)  // the query may have failed, leaving it null
      resultset1.close();
    connection1.close();
  }
}

To compile and execute it from the command line (I normally use Eclipse) do:

[oracle@server4 ~]$ javac -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar seps_oci.java
[oracle@server4 ~]$ java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:. seps_oci
Connected to Oracle database...
Connected user: YJAQUIER

JDBC Thin driver

Using the JDBC Thin driver is a little more complex because everything done by the installed Oracle client is not pre-configured as it is for the JDBC OCI driver. And here it is a little weird: you are supposed not to need a client (Thin driver), but you need one anyway for the libraries and the wallet configuration. Please note that the Instant Client is not enough to do the job.

The first property to set when trying to connect is the wallet location, with oracle.net.wallet_location. This is done by:

props.setProperty("oracle.net.wallet_location","(SOURCE=(METHOD=file)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle)))");

Then, to specify the connect string, you have two options. Either you insert into SEPS the complete TNS entry, with something like:

[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -createCredential "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server1.domain.com)
(PORT=1531))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1)))" yjaquier
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
[oracle@server4 ~]$ mkstore -wrl $ORACLE_HOME/owm/wallets/oracle -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
2: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server1.domain.com)(PORT=1531))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1))) yjaquier
1: pdb1_yjaquier yjaquier

Or you tell your Java program where the tnsnames.ora file is located with the oracle.net.tns_admin property:

System.setProperty("oracle.net.tns_admin","/u01/app/oracle/product/12.2.0/client_1/network/admin");

I have kept the two options, one of them commented out, in my Java code below:

import java.sql.ResultSet;
import java.util.Properties;
import java.sql.Connection;
import java.sql.SQLException;
import oracle.jdbc.OracleDriver;
import java.sql.DriverManager;

public class seps_thin {
  public static void main(String[] args) throws Exception {
    Connection connection1 = null;
    String query1 = "select user from dual";
    String connect_string = "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server1.domain.com)(PORT=1531))"+
                            "(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1)))";
    ResultSet resultset1 = null;
    Properties props = new Properties();

    try {
      props.setProperty("oracle.net.wallet_location","(SOURCE=(METHOD=file)(METHOD_DATA="+
                        "(DIRECTORY=/u01/app/oracle/product/12.2.0/client_1/owm/wallets/oracle)))");
      System.setProperty("oracle.net.tns_admin","/u01/app/oracle/product/12.2.0/client_1/network/admin");
      //connection1 = DriverManager.getConnection("jdbc:oracle:thin:/@" + connect_string, props);
      connection1 = DriverManager.getConnection("jdbc:oracle:thin:/@pdb1_yjaquier", props);
    }
    catch (SQLException e) {
      System.out.println("Connection Failed! Check output console");
      e.printStackTrace();
      System.exit(1);
    }
    System.out.println("Connected to Oracle database...");
    
    if (connection1!=null) {
      try {
        resultset1 = connection1.createStatement().executeQuery(query1);
        while (resultset1.next()) {
          System.out.println("Connected user: "+resultset1.getString(1));
        }
      }
      catch (SQLException e) {
        System.out.println("Query has failed...");
      }
    }
    if (resultset1 != null)  // the query may have failed, leaving it null
      resultset1.close();
    connection1.close();
  }
}

Compile and execute it the same way as for the JDBC OCI driver, except that you need to add oraclepki.jar from the $ORACLE_HOME/jlib directory (not the one from the $ORACLE_HOME/oc4j/jlib directory):

[oracle@server4 ~]$ javac -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:$ORACLE_HOME/jlib/oraclepki.jar seps_thin.java
[oracle@server4 ~]$ java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:$ORACLE_HOME/jlib/oraclepki.jar:. seps_thin
Connected to Oracle database...
Connected user: YJAQUIER

AWR mining for performance trend analysis
https://blog.yannickjaquier.com/oracle/awr-mining-performance-trends.html
Sat, 20 Jan 2018 07:04:10 +0000


Preamble

Following a performance issue we had on a BI environment, we extracted with Automatic Workload Repository (AWR) reports a few SQL statements that were running for tens of minutes, not to say multiple hours. Before the mess started we had a hardware storage issue (I/O switches) which triggered additional I/Os to recover the situation. In parallel, the applicative team, which got no feedback that the situation was being recovered, started to reorganize tables to try to reduce the High Water Mark and increase performance. Overall the effect was the opposite: when the storage issues were completely resolved we did not get back the exact performance we had before it all started.

The big question you must answer is: how was the system behaving before the issue, and did the execution time of those SQL statements change?

Ideally you would have a baseline from when performance was good and you would be able to compare, as we have already seen in this blog post. Another option is to mine the AWR tables to see how SQL statements have diverged over time. For this you obviously need historical AWR snapshots, so it is a must to keep at least 15 days of history (not to say one month) and change the too-low default of 7 days. Example with a 1-hour snapshot interval and 30 days of history:

SQL> exec dbms_workload_repository.modify_snapshot_settings (interval=>60, retention=>43200);

PL/SQL procedure successfully completed.
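Both the interval and retention parameters of MODIFY_SNAPSHOT_SETTINGS are expressed in minutes, so 30 days becomes 30 x 24 x 60 = 43200. A trivial, illustrative helper to double-check the conversion:

```python
def awr_minutes(days: int = 0, hours: int = 0, minutes: int = 0) -> int:
    """Convert a duration to the minutes unit expected by both the
    interval and retention parameters of MODIFY_SNAPSHOT_SETTINGS."""
    return (days * 24 + hours) * 60 + minutes

print(awr_minutes(hours=1))   # interval  => 60
print(awr_minutes(days=30))   # retention => 43200
```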

Checking which SQL statements diverge is in any case very interesting information, and can lead to nice discoveries in your batch job scheduling (jobs starting too early, too late,…) and/or jobs running in parallel, even when you are not stuck in a performance issue situation.

So far testing has been done on Oracle Enterprise Edition 11.2.0.4 with the Tuning and Diagnostic packs, running on RedHat Linux release 6.4 64 bits. I will complement this post for other releases and operating systems in the future…

Parallel downgrades

If you are using Cloud Control, one thing you have surely noticed (thanks to the red arrow) in the SQL Monitoring page is the decrease in allocated parallel processes when your server is overloaded or sessions are using parallelism excessively:

[Screenshot: SQL Monitoring page showing downgraded parallel processes (awr_mining01)]

This can also be seen at SQL level with something like:

SQL> set lines 150 pages 1000
SQL> SELECT
  sql_id,
  sql_exec_id,
  TO_CHAR(sql_exec_start,'dd-mon-yyyy hh24:mi:ss') AS sql_exec_start,
  ROUND(elapsed_time/1000000) AS "Elapsed(s)",
  px_servers_requested AS "Requested DOP",
  px_servers_allocated AS "Allocated DOP",
  ROUND(cpu_time/1000000) AS "CPU(s)",
  buffer_gets AS "Buffer Gets",
  ROUND(physical_read_bytes /(1024*1024)) AS "Phys reads(MB)",
  ROUND(physical_write_bytes/(1024*1024)) AS "Phys writes(MB)",
  ROUND(user_io_wait_time/1000000) AS "IO Wait(s)"
FROM v$sql_monitor
WHERE px_servers_requested<>px_servers_allocated
ORDER BY sql_exec_start,sql_id;

SQL_ID        SQL_EXEC_ID SQL_EXEC_START       Elapsed(s) Requested DOP Allocated DOP     CPU(s) Buffer Gets Phys reads(MB) Phys writes(MB) IO Wait(s)
------------- ----------- -------------------- ---------- ------------- ------------- ---------- ----------- -------------- --------------- ----------
262dzg4ab75nt    16777216 02-dec-2016 15:22:34        263            32             0         10      511923           3625             810        250
883v5mk5bqwq8    16777216 02-dec-2016 15:23:43         41            64            14          8      156832           1426               0         27
7rykdg0zdyjz5    16777216 02-dec-2016 15:24:16        159            32             0          9      560335            802             856        151
amzmxuns5dctz    16777216 02-dec-2016 15:24:59        114            32             0          5       36383            569             548        108
414x9b7p4z10x    16777280 02-dec-2016 15:26:30          0            16            14          0           5              0               0          0
414x9b7p4z10x    16777281 02-dec-2016 15:26:31          0            16            10          0           5              0               0          0
414x9b7p4z10x    16777282 02-dec-2016 15:26:31          0            16            14          0           5              0               0          0
3472f0m6nm343    16778004 02-dec-2016 15:26:33          0            32            14          0         381              0               0          0
414x9b7p4z10x    16777283 02-dec-2016 15:26:35          0            16            14          0           5              0               0          0
3472f0m6nm343    16778005 02-dec-2016 15:26:40          0            32            14          0         381              0               0          0
3472f0m6nm343    16778006 02-dec-2016 15:26:46          0            32            14          0         381              0               0          0

11 rows selected.

The problem is that V$SQL_MONITOR has no historical version, so if you come late onto the database you will not be able to get any past information from it…

What you can get is the overall parallelism situation of your database with:

SQL> set lines 150 pages 1000
SQL> SELECT name, value
FROM V$SYSSTAT
WHERE lower(name) LIKE '%parallel%'
ORDER BY 1;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
DBWR parallel query checkpoint buffers written                      1132544
DDL statements parallelized                                            1864
DFO trees parallelized                                               295265
DML statements parallelized                                              21
Parallel operations downgraded 1 to 25 pct                            55785
Parallel operations downgraded 25 to 50 pct                           12427
Parallel operations downgraded 50 to 75 pct                           78033
Parallel operations downgraded 75 to 99 pct                            8125
Parallel operations downgraded to serial                             150542
Parallel operations not downgraded                                   241815
queries parallelized                                                 208744

11 rows selected.
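Oracle computes those downgrade buckets internally, but the classification visible in the statistic names above can be sketched from the requested and allocated DOP. The exact boundary handling below is an assumption for illustration only:

```python
def downgrade_bucket(requested: int, allocated: int) -> str:
    """Classify a parallel execution by the percentage of requested
    parallel servers that were NOT granted, mirroring the
    'Parallel operations downgraded ...' statistics buckets.
    Boundary handling is an assumption, not Oracle's exact rule."""
    if allocated == 0:
        return "downgraded to serial"
    if allocated >= requested:
        return "not downgraded"
    lost_pct = 100.0 * (requested - allocated) / requested
    if lost_pct < 25:
        return "downgraded 1 to 25 pct"
    if lost_pct < 50:
        return "downgraded 25 to 50 pct"
    if lost_pct < 75:
        return "downgraded 50 to 75 pct"
    return "downgraded 75 to 99 pct"

# Examples taken from the V$SQL_MONITOR output above:
print(downgrade_bucket(32, 0))   # downgraded to serial
print(downgrade_bucket(64, 14))  # lost ~78% -> downgraded 75 to 99 pct
print(downgrade_bucket(16, 14))  # lost 12.5% -> downgraded 1 to 25 pct
```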

V$SYSSTAT itself keeps only cumulative values since instance startup, but its history is captured in DBA_HIST_SYSSTAT, so you can get the value recorded at each AWR snapshot:

SQL> set lines 150 pages 1000
SQL> SELECT
  to_char(hsn.begin_interval_time,'dd-mon-yyyy hh24:mi:ss') AS begin_interval_time,
  to_char(hsn.end_interval_time,'dd-mon-yyyy hh24:mi:ss') AS end_interval_time,
  hsy.stat_name,
  hsy.value
FROM dba_hist_sysstat hsy, dba_hist_snapshot hsn
WHERE hsy.snap_id = hsn.snap_id
AND hsy.instance_number = hsn.instance_number
AND lower(hsy.stat_name) like '%parallel%'
ORDER BY hsn.snap_id DESC;


BEGIN_INTERVAL_TIME  END_INTERVAL_TIME    STAT_NAME                                                             VALUE
-------------------- -------------------- ---------------------------------------------------------------- ----------
30-nov-2016 23:00:18 01-dec-2016 00:00:48 queries parallelized                                                 182440
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations downgraded 25 to 50 pct                           10602
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations downgraded 1 to 25 pct                            48914
30-nov-2016 23:00:18 01-dec-2016 00:00:48 DML statements parallelized                                              17
30-nov-2016 23:00:18 01-dec-2016 00:00:48 DDL statements parallelized                                            1376
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations downgraded 50 to 75 pct                           56565
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations downgraded 75 to 99 pct                            7782
30-nov-2016 23:00:18 01-dec-2016 00:00:48 DBWR parallel query checkpoint buffers written                       936382
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations not downgraded                                   225266
30-nov-2016 23:00:18 01-dec-2016 00:00:48 DFO trees parallelized                                               248642
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations downgraded to serial                             117995
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations downgraded 50 to 75 pct                           56500
30-nov-2016 22:00:52 30-nov-2016 23:00:18 DFO trees parallelized                                               248352
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations not downgraded                                   225118
30-nov-2016 22:00:52 30-nov-2016 23:00:18 DBWR parallel query checkpoint buffers written                       919901
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations downgraded 75 to 99 pct                            7780
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations downgraded 25 to 50 pct                           10542
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations downgraded 1 to 25 pct                            48899
30-nov-2016 22:00:52 30-nov-2016 23:00:18 DML statements parallelized                                              17
30-nov-2016 22:00:52 30-nov-2016 23:00:18 queries parallelized                                                 182170
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations downgraded to serial                             117847
30-nov-2016 22:00:52 30-nov-2016 23:00:18 DDL statements parallelized                                            1356
…

With the LAG analytic function you can even get the trend of one particular system statistic. I have chosen 'Parallel operations downgraded to serial', meaning all queries that have moved from parallel to serial execution. For some of them you might expect a big performance penalty:

SQL> set lines 150 pages 1000
SQL> SELECT
  to_char(hsn.begin_interval_time,'dd-mon-yyyy hh24:mi:ss') AS begin_interval_time,
  to_char(hsn.end_interval_time,'dd-mon-yyyy hh24:mi:ss') AS end_interval_time,
  hsy.stat_name,
  hsy.value - hsy.prev_value AS value
  FROM (SELECT snap_id,instance_number,stat_name,value,LAG(value,1,value) OVER (ORDER BY snap_id) AS prev_value
        FROM dba_hist_sysstat
        WHERE stat_name = 'Parallel operations downgraded to serial') hsy,
        dba_hist_snapshot hsn
WHERE hsy.snap_id = hsn.snap_id
AND hsy.instance_number = hsn.instance_number
AND hsy.value - hsy.prev_value<>0
ORDER BY hsn.snap_id DESC;

BEGIN_INTERVAL_TIME  END_INTERVAL_TIME    STAT_NAME                                                             VALUE
-------------------- -------------------- ---------------------------------------------------------------- ----------
30-nov-2016 23:00:18 01-dec-2016 00:00:48 Parallel operations downgraded to serial                                148
30-nov-2016 22:00:52 30-nov-2016 23:00:18 Parallel operations downgraded to serial                                264
30-nov-2016 21:00:15 30-nov-2016 22:00:52 Parallel operations downgraded to serial                                160
30-nov-2016 20:00:42 30-nov-2016 21:00:15 Parallel operations downgraded to serial                                 65
30-nov-2016 19:00:25 30-nov-2016 20:00:42 Parallel operations downgraded to serial                                 31
30-nov-2016 18:00:29 30-nov-2016 19:00:25 Parallel operations downgraded to serial                                  8
30-nov-2016 17:00:57 30-nov-2016 18:00:29 Parallel operations downgraded to serial                               1789
30-nov-2016 16:00:11 30-nov-2016 17:00:57 Parallel operations downgraded to serial                                588
30-nov-2016 15:00:37 30-nov-2016 16:00:11 Parallel operations downgraded to serial                               1214
30-nov-2016 14:01:07 30-nov-2016 15:00:37 Parallel operations downgraded to serial                                856
30-nov-2016 09:00:24 30-nov-2016 10:00:26 Parallel operations downgraded to serial                                603
30-nov-2016 07:00:03 30-nov-2016 08:00:22 Parallel operations downgraded to serial                                  6
30-nov-2016 06:00:42 30-nov-2016 07:00:03 Parallel operations downgraded to serial                                502
30-nov-2016 05:00:24 30-nov-2016 06:00:42 Parallel operations downgraded to serial                                  8
30-nov-2016 04:00:04 30-nov-2016 05:00:24 Parallel operations downgraded to serial                               3032
30-nov-2016 03:00:37 30-nov-2016 04:00:04 Parallel operations downgraded to serial                               1161
30-nov-2016 02:00:39 30-nov-2016 03:00:37 Parallel operations downgraded to serial                               1243
30-nov-2016 01:01:04 30-nov-2016 02:00:39 Parallel operations downgraded to serial                                492
30-nov-2016 00:01:05 30-nov-2016 01:01:04 Parallel operations downgraded to serial                                 51
29-nov-2016 23:00:00 30-nov-2016 00:01:05 Parallel operations downgraded to serial                                 94
29-nov-2016 22:00:09 29-nov-2016 23:00:00 Parallel operations downgraded to serial                               7150
29-nov-2016 21:00:07 29-nov-2016 22:00:09 Parallel operations downgraded to serial                                167
29-nov-2016 20:00:33 29-nov-2016 21:00:07 Parallel operations downgraded to serial                                124
29-nov-2016 19:00:41 29-nov-2016 20:00:33 Parallel operations downgraded to serial                                157
29-nov-2016 18:00:22 29-nov-2016 19:00:41 Parallel operations downgraded to serial                                820
29-nov-2016 17:00:59 29-nov-2016 18:00:22 Parallel operations downgraded to serial                                 10
29-nov-2016 16:00:29 29-nov-2016 17:00:59 Parallel operations downgraded to serial                                 46
29-nov-2016 15:00:59 29-nov-2016 16:00:29 Parallel operations downgraded to serial                                761
29-nov-2016 14:00:23 29-nov-2016 15:00:59 Parallel operations downgraded to serial                              10342
29-nov-2016 11:00:08 29-nov-2016 12:00:14 Parallel operations downgraded to serial                                  8
29-nov-2016 09:00:31 29-nov-2016 10:00:45 Parallel operations downgraded to serial                                163
29-nov-2016 08:00:33 29-nov-2016 09:00:31 Parallel operations downgraded to serial                                 47
29-nov-2016 07:00:07 29-nov-2016 08:00:33 Parallel operations downgraded to serial                              10920
29-nov-2016 06:00:17 29-nov-2016 07:00:07 Parallel operations downgraded to serial                               1409
…
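The LAG(value,1,value) construct simply turns the cumulative counter into per-interval deltas: each snapshot's value minus the previous one (defaulting to the value itself on the first row, hence a delta of 0). A plain-code sketch of the same computation, using the 22:00/23:00 cumulative figures above (the starting value is back-computed from the 264 delta):

```python
def snapshot_deltas(values):
    """Given cumulative counter values ordered by snapshot, return the
    per-interval deltas, emulating
    LAG(value, 1, value) OVER (ORDER BY snap_id)."""
    deltas = []
    prev = None
    for v in values:
        # LAG(..., 1, value) makes the first delta 0 (prev defaults to value)
        deltas.append(v - (prev if prev is not None else v))
        prev = v
    return deltas

# Cumulative 'Parallel operations downgraded to serial' over 3 snapshots:
print(snapshot_deltas([117583, 117847, 117995]))  # [0, 264, 148]
```

The 264 and 148 deltas match the last two rows of the query output above; the query then simply filters out the zero deltas.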

To be honest I was not expecting such high figures: for a few 1-hour intervals I got more than ten thousand queries downgraded to serial (!!). The problem is, I think, coming from the maximum number of parallel processes that we have set to contain an applicative issue:

SQL> show parameter parallel_max_servers

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers                 integer     60

Unstable execution time

The goal of this chapter is to identify queries that have a highly divergent execution time. The table to use here is DBA_HIST_SQLSTAT, which contains tons of very useful information. In this view never ever use the xx_TOTAL columns: if your statement has been aged out of the library cache, the cumulative value restarts from 0; use the xx_DELTA columns instead. Discussing with teammates, I decided to use the statistical function called standard deviation, available by default in Oracle as the STDDEV analytic function. In plain English, the standard deviation measures how far values typically are from their mean (formally, the square root of the average squared deviation). Below I have chosen to keep the sql_ids that have a maximum average elapsed time above 5 minutes and where the standard deviation is more than two times the minimum execution time, to keep only extreme values:
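As a quick sanity check of this filter outside the database, it can be prototyped in a few lines (the figures below are made up for illustration; Oracle's STDDEV, like Python's statistics.stdev, computes the sample standard deviation):

```python
import statistics

def unstable_sql(avg_elapsed, threshold_seconds=5 * 60):
    """Given {sql_id: [average elapsed seconds per plan]}, return the
    sql_ids matching the query's filter: sample standard deviation
    greater than twice the minimum, and a maximum above 5 minutes."""
    flagged = {}
    for sql_id, times in avg_elapsed.items():
        if len(times) < 2:                 # stddev needs >= 2 data points
            continue
        sd = statistics.stdev(times)       # sample stddev, like Oracle STDDEV
        if sd > 2 * min(times) and max(times) > threshold_seconds:
            flagged[sql_id] = round(sd, 2)
    return flagged

# A stable query vs. a wildly divergent one (made-up figures):
data = {"stable_sql": [100.0, 110.0, 105.0], "wild_sql": [10.0, 400.0, 3600.0]}
print(sorted(unstable_sql(data)))  # ['wild_sql']
```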

SQL> set lines 150 pages 1000
SQL> WITH sql_id_stdded AS (SELECT
  sql_id,
  SUM(total_exec) AS total_exec,
  ROUND(MIN(avg_elapsed_time),2) AS min_elapsed_time,
  ROUND(MAX(avg_elapsed_time),2) AS max_elapsed_time,
  ROUND(stddev_elapsed_time,2) AS stddev_elapsed_time
  FROM (SELECT
          sql_id,
          total_exec,
          avg_elapsed_time,
          STDDEV(avg_elapsed_time) OVER(PARTITION BY sql_id) AS stddev_elapsed_time
        FROM (SELECT
                hsq.sql_id,
                hsq.plan_hash_value,
                SUM(nvl(hsq.executions_delta,0)) AS total_exec,
                SUM(hsq.elapsed_time_delta)/DECODE(SUM(nvl(hsq.executions_delta,0)),0,1,SUM(hsq.executions_delta))/1000000 AS avg_elapsed_time
              FROM dba_hist_sqlstat hsq, dba_hist_snapshot hsn
              WHERE hsq.snap_id = hsn.snap_id
              AND hsq.instance_number = hsn.instance_number
              AND hsq.executions_delta > 0
              GROUP BY hsq.sql_id, hsq.plan_hash_value))
        GROUP BY sql_id, stddev_elapsed_time)
SELECT
  sql_id,
  total_exec,
  TO_CHAR(CAST(NUMTODSINTERVAL(min_elapsed_time,'second') AS interval day(2) to second(0))) AS min_elapsed_time,
  TO_CHAR(CAST(NUMTODSINTERVAL(max_elapsed_time,'second') AS interval day(2) to second(0))) AS max_elapsed_time,
  TO_CHAR(CAST(NUMTODSINTERVAL(stddev_elapsed_time,'second') AS interval day(2) to second(0))) AS stddev_elapsed_time
FROM sql_id_stdded
WHERE stddev_elapsed_time>2*min_elapsed_time
AND max_elapsed_time>5*60
AND total_exec>1
ORDER BY stddev_elapsed_time desc;

SQL_ID        TOTAL_EXEC MIN_ELAPSED_T MAX_ELAPSED_T STDDEV_ELAPSE
------------- ---------- ------------- ------------- -------------
6hds16zkc8cgm          9 +00 00:09:04  +00 05:29:46  +00 01:49:56
76g9pn3z3a35u          8 +00 00:07:23  +00 02:47:35  +00 01:13:09
6w05thcf4w6pm          2 +00 00:02:30  +00 01:38:45  +00 01:08:04
bmtkpjynyhs88          8 +00 00:02:19  +00 02:38:59  +00 01:08:00
7k9cjk1q686f2         10 +00 00:09:38  +00 02:20:49  +00 00:53:56
56xyy7uq071g6          4 +00 00:08:56  +00 01:30:50  +00 00:44:57
cvbq6vqk8dbf3         13 +00 00:01:36  +00 02:25:31  +00 00:44:04
g5khnky3q36m1         18 +00 00:07:37  +00 01:38:04  +00 00:37:19
cru4zku27tv0p          2 +00 00:00:55  +00 00:47:09  +00 00:32:42
2m2ww08p9btvc          2 +00 00:09:50  +00 00:48:35  +00 00:27:24
08kdqk1abqm0v          2 +00 00:07:34  +00 00:45:43  +00 00:26:59
12txp882z4ucy         16 +00 00:04:17  +00 00:57:28  +00 00:22:17
1gta6uu65u8nw         14 +00 00:06:16  +00 00:48:53  +00 00:20:15
gcdtt7t6rf0hg         16 +00 00:02:59  +00 00:51:18  +00 00:18:29
3kffqq3kwta74          3 +00 00:00:43  +00 00:32:37  +00 00:18:13
1nqaj68tn9xp2          2 +00 00:05:46  +00 00:28:15  +00 00:15:54
b8pyc5puhh9c7          8 +00 00:00:49  +00 00:29:44  +00 00:15:40
2svyb8a5n6qb5         10 +00 00:00:56  +00 00:45:44  +00 00:15:31
8zsu7t63hj1zp         14 +00 00:03:46  +00 00:33:57  +00 00:14:13
1tdc0da6km50h         67 +00 00:01:09  +00 00:30:28  +00 00:14:04
3uvcgnm27xf1c          3 +00 00:06:21  +00 00:33:03  +00 00:13:22
ch3xpjhf3y2bf         11 +00 00:03:51  +00 00:22:21  +00 00:13:05
fdcpq4j8kxd49         13 +00 00:04:31  +00 00:22:50  +00 00:12:57
csctx8ttu58d9         25 +00 00:00:30  +00 00:26:47  +00 00:12:15
dv1cvuw300ny4         11 +00 00:03:14  +00 00:28:38  +00 00:10:40
gsx2423tf2a7f          7 +00 00:04:35  +00 00:29:30  +00 00:10:33
62nt4sxdsy586         19 +00 00:02:29  +00 00:27:16  +00 00:10:29
1jb164khmsyzj         60 +00 00:00:40  +00 00:19:42  +00 00:09:15
6mgwwp95jaxzz         13 +00 00:03:56  +00 00:15:46  +00 00:08:22
8v38gy0u6a2mk          5 +00 00:01:02  +00 00:15:50  +00 00:08:20
9mt8p7z1wjjkn          9 +00 00:02:12  +00 00:17:34  +00 00:08:09
byd0g9xfpwcuj          2 +00 00:00:10  +00 00:11:15  +00 00:07:51
7pm2wxdvu3mc7         11 +00 00:02:55  +00 00:13:27  +00 00:07:26
4u50sp3h1szcp          5 +00 00:02:52  +00 00:11:38  +00 00:06:12
7430xmabdv8av          3 +00 00:02:25  +00 00:12:17  +00 00:05:01
cd3xmx3sk7vrv         21 +00 00:00:22  +00 00:05:06  +00 00:03:21

36 rows selected.

If we take the first sql_id, the query says its minimum execution time is 9 minutes and 4 seconds while its maximum is 5 hours, 29 minutes and 46 seconds. What an amazing difference!
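To make the filter concrete, the same logic can be replayed in plain Python (an illustration I am adding here, not part of the original toolkit) on the per-plan average elapsed times reported for this sql_id in the DBA_HIST_SQLSTAT breakdown further down. Python's statistics.stdev matches Oracle's STDDEV, the sample standard deviation:

```python
import statistics

# Per-plan average elapsed times (seconds) of sql_id 6hds16zkc8cgm,
# taken from the per-plan breakdown shown further down in this post.
avg_elapsed = [7671.48, 543.78, 29581.68, 8639.51, 15650.16,
               7905.76, 9315.98, 5672.93]

# Oracle's STDDEV is the sample standard deviation, as is statistics.stdev.
stddev = statistics.stdev(avg_elapsed)

# Same predicate as the WHERE clause: extreme spread and a slow worst case.
flagged = stddev > 2 * min(avg_elapsed) and max(avg_elapsed) > 5 * 60
print(round(stddev, 2), flagged)
```

The spread between the fastest plan (around 9 minutes) and the slowest (over 8 hours in this breakdown) pushes the standard deviation far beyond twice the minimum, so the statement is flagged.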

To focus on this sql_id you might use something like the query below. It is strongly suggested to execute it in a graphical query tool because the output is difficult to read on a pure command line. All timings are in seconds while sizes are in megabytes:

SQL> select
    hsq.sql_id,
    hsq.plan_hash_value,
    nvl(sum(hsq.executions_delta),0) as total_exec,
    round(sum(hsq.elapsed_time_delta)/1000000,2) as elapsed_time_total,
    round(sum(hsq.px_servers_execs_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),2) as avg_px_servers_execs,
    round(sum(hsq.elapsed_time_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/1000000,2) as avg_elapsed_time,
    round(sum(hsq.cpu_time_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/1000000,2) as avg_cpu_time,
    round(sum(hsq.iowait_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/1000000,2) as avg_iowait,
    round(sum(hsq.clwait_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/1000000,2) as avg_cluster_wait,
    round(sum(hsq.apwait_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/1000000,2) as avg_application_wait,
    round(sum(hsq.ccwait_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/1000000,2) as avg_concurrency_wait,
    round(sum(hsq.rows_processed_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),2) as avg_rows_processed,
    round(sum(hsq.buffer_gets_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),2) as avg_buffer_gets,
    round(sum(hsq.disk_reads_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),2) as avg_disk_reads,
    round(sum(hsq.direct_writes_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),2) as avg_direct_writes,
    round(sum(hsq.io_interconnect_bytes_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/(1024*1024),0) as avg_io_interconnect_mb,
    round(sum(hsq.physical_read_requests_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),0) as avg_phys_read_requests,
    round(sum(hsq.physical_read_bytes_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/(1024*1024),0) as avg_phys_read_mb,
    round(sum(hsq.physical_write_requests_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta)),0) as avg_phys_write_requests,
    round(sum(hsq.physical_write_bytes_delta)/decode(sum(hsq.executions_delta),0,null,sum(hsq.executions_delta))/(1024*1024),0) as avg_phys_write_mb
from dba_hist_sqlstat hsq
where hsq.sql_id='6hds16zkc8cgm'
group by hsq.sql_id, hsq.plan_hash_value;

SQL_ID        PLAN_HASH_VALUE TOTAL_EXEC ELAPSED_TIME_TOTAL AVG_PX_SERVERS_EXECS AVG_ELAPSED_TIME AVG_CPU_TIME AVG_IOWAIT AVG_CLUSTER_WAIT AVG_APPLICATION_WAIT AVG_CONCURRENCY_WAIT AVG_ROWS_PROCESSED AVG_BUFFER_GETS AVG_DISK_READS AVG_DIRECT_WRITES AVG_IO_INTERCONNECT_MB AVG_PHYS_READ_REQUESTS AVG_PHYS_READ_MB AVG_PHYS_WRITE_REQUESTS AVG_PHYS_WRITE_MB
------------- --------------- ---------- ------------------ -------------------- ---------------- ------------ ---------- ---------------- -------------------- -------------------- ------------------ --------------- -------------- ----------------- ---------------------- ---------------------- ---------------- ----------------------- -----------------
6hds16zkc8cgm       705417430          2           15342.95                   16          7671.48        42.59    7521.09                0                 3.14                77.09                959         7948659       739317.5                 0                      3                    187                3                       0                 0
6hds16zkc8cgm      2195747324          2            1087.57                   32           543.78         43.8     354.06                0                  1.6               140.49                961       6115419.5       739927.5                 0                      6                    299                6                       0                 0
6hds16zkc8cgm      4190635369          1           29581.68                   32         29581.68        44.39   29366.99                0                    0               131.88               1226         6573306         709640                 0                      7                    350                7                       0                 0
6hds16zkc8cgm      1445916266          1            8639.51                   12          8639.51        37.77    8538.27                0                 7.14                50.05                919         4620163         698482                 0                      7                    342                7                       0                 0
6hds16zkc8cgm      3246633284          1           15650.16                   12         15650.16        47.34   15533.76                0                  .42                51.99               1364         7318402         731331                 0                      1                     73                1                       0                 0
6hds16zkc8cgm       463109863          1            7905.76                   16          7905.76        41.31    7761.58                0                    0                72.88                890         4709496         756848                 0                      7                    330                7                       0                 0
6hds16zkc8cgm      2566346142          1            9315.98                   27          9315.98        53.88    9131.03                0                  5.2                93.75               1394        11788790         754218                 0                      5                    211                5                       0                 0
6hds16zkc8cgm      2387297713          1            5672.93                   12          5672.93        43.45    5565.33                0                27.04                35.22               1074         8563338         697194                 0                      0                     31                0                       0                 0

8 rows selected. 

The PLAN_HASH_VALUE column is the one to use to determine whether two SQL plans are identical, rather than comparing the plans line by line. Here above we see that the plan of this same SQL statement is almost never the same, which could explain the huge difference in response time. We also see huge differences in I/O wait time, from 354 seconds to 29,366 seconds…
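As a quick sanity check in plain Python (using the per-plan figures from the output above), counting distinct PLAN_HASH_VALUE entries and ranking them by average elapsed time immediately exposes the plan instability; identifying the best plan this way could be a first step toward pinning it, for example with a SQL plan baseline:

```python
# Per-plan average elapsed times (seconds) of sql_id 6hds16zkc8cgm,
# taken from the DBA_HIST_SQLSTAT output above: {plan_hash_value: seconds}.
rows = {
    705417430: 7671.48, 2195747324: 543.78, 4190635369: 29581.68,
    1445916266: 8639.51, 3246633284: 15650.16, 463109863: 7905.76,
    2566346142: 9315.98, 2387297713: 5672.93,
}

best = min(rows, key=rows.get)    # fastest plan hash value
worst = max(rows, key=rows.get)   # slowest plan hash value
print(len(rows), best, worst)     # 8 2195747324 4190635369
```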

To display all the different plans and start digging into them you can use (I’m not displaying mine as it would be too long):

select * from table(dbms_xplan.display_awr(sql_id=>'6hds16zkc8cgm',format=>'all allstats'));

You should correlate the above figures with what you can find in DBA_HIST_ACTIVE_SESS_HISTORY. Do not use the TIME_WAITED column in this view; instead count 10 seconds per row. 10 seconds because DBA_HIST_ACTIVE_SESS_HISTORY retains only one sample out of ten from V$ACTIVE_SESSION_HISTORY, which itself samples active sessions every second (that’s why you would simply use COUNT(*) when selecting from V$ACTIVE_SESSION_HISTORY):
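The counting rule can be sketched as follows; this is a minimal illustration of the arithmetic, where 183 samples correspond to the 1,830 seconds of db file parallel read visible in the output below:

```python
# ASH time estimation: DBA_HIST_ACTIVE_SESS_HISTORY keeps one sample every
# 10 seconds (one out of ten of the 1-second V$ACTIVE_SESSION_HISTORY
# samples), so each retained row stands for roughly 10 seconds of activity.

def estimated_seconds(sample_count: int, source: str = "dba_hist") -> int:
    """Estimate time waited (seconds) from a number of ASH sample rows."""
    interval = 10 if source == "dba_hist" else 1  # V$ samples every second
    return sample_count * interval

print(estimated_seconds(183))               # 1830 s estimated from AWR rows
print(estimated_seconds(183, source="v$"))  # 183 s if counted in V$ directly
```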

SQL> set lines 150 pages 1000
SQL> col wait_class for a15
SQL> col event for a35
SQL> SELECT
     sql_id, sql_plan_hash_value, actual_dop, TO_CHAR(sql_exec_start,'dd-mon-yyyy hh24:mi:ss') AS sql_exec_start,
     wait_class, event, COUNT(*)*10 AS "time_waited (s)"
     FROM
     (SELECT sql_id,sql_plan_hash_value,trunc(px_flags / 2097152) AS actual_dop,
      sql_exec_start,
      DECODE(NVL(wait_class,'ON CPU'),'ON CPU',DECODE(session_type,'BACKGROUND','BCPU','CPU'),wait_class) AS wait_class,
      nvl(event,'ON CPU') AS event
      FROM dba_hist_active_sess_history
      WHERE sql_id='6hds16zkc8cgm') a
     GROUP BY sql_id, sql_plan_hash_value, actual_dop, sql_exec_start, wait_class,event
     ORDER BY sql_exec_start, sql_plan_hash_value, wait_class,event;

SQL_ID        SQL_PLAN_HASH_VALUE ACTUAL_DOP SQL_EXEC_START       WAIT_CLASS      EVENT                               time_waited (s)
------------- ------------------- ---------- -------------------- --------------- ----------------------------------- ---------------
6hds16zkc8cgm          4190635369         16 01-dec-2016 15:38:50 CPU             ON CPU                                           10
6hds16zkc8cgm          4190635369          0 01-dec-2016 15:38:50 Other           reliable message                                 20
6hds16zkc8cgm          4190635369         16 01-dec-2016 15:38:50 User I/O        db file parallel read                          1830
6hds16zkc8cgm          4190635369         16 01-dec-2016 15:38:50 User I/O        db file sequential read                         160
6hds16zkc8cgm          4190635369         16 01-dec-2016 15:38:50 User I/O        direct path read                              27030
6hds16zkc8cgm          4190635369         16 01-dec-2016 15:38:50 User I/O        read by other session                           180
6hds16zkc8cgm          2566346142          0 02-dec-2016 16:40:27 Application     enq: KO - fast object checkpoint                 10
6hds16zkc8cgm          2566346142         14 02-dec-2016 16:40:27 CPU             ON CPU                                          300
6hds16zkc8cgm          2566346142          0 02-dec-2016 16:40:27 Configuration   log buffer space                                 10
6hds16zkc8cgm          2566346142         14 02-dec-2016 16:40:27 User I/O        db file sequential read                          10
6hds16zkc8cgm          2566346142         14 02-dec-2016 16:40:27 User I/O        direct path read                               8240
6hds16zkc8cgm          2566346142         14 02-dec-2016 16:40:27 User I/O        read by other session                           650
6hds16zkc8cgm          2195747324         16 16-nov-2016 16:17:50 CPU             ON CPU                                           20
6hds16zkc8cgm          2195747324         16 16-nov-2016 16:17:50 User I/O        db file sequential read                         300
6hds16zkc8cgm          1759484988            17-nov-2016 18:11:25 CPU             ON CPU                                           30
6hds16zkc8cgm          1759484988          0 17-nov-2016 18:11:25 Other           reliable message                                 10
6hds16zkc8cgm          1759484988            17-nov-2016 18:11:25 User I/O        db file parallel read                           140
6hds16zkc8cgm          1759484988            17-nov-2016 18:11:25 User I/O        db file scattered read                           10
6hds16zkc8cgm          1759484988            17-nov-2016 18:11:25 User I/O        db file sequential read                          70
6hds16zkc8cgm          2195747324            18-nov-2016 18:06:45 CPU             ON CPU                                           50
6hds16zkc8cgm          2195747324            18-nov-2016 18:06:45 User I/O        db file parallel read                            30
6hds16zkc8cgm          2195747324            18-nov-2016 18:06:45 User I/O        db file sequential read                          50
6hds16zkc8cgm          3246633284          0 19-nov-2016 16:19:49 Application     enq: KO - fast object checkpoint                 10
6hds16zkc8cgm          3246633284            19-nov-2016 16:19:49 CPU             ON CPU                                           20
6hds16zkc8cgm          3246633284            19-nov-2016 16:19:49 Scheduler       resmgr:cpu quantum                               10
6hds16zkc8cgm          3246633284            19-nov-2016 16:19:49 User I/O        db file parallel read                           770
6hds16zkc8cgm          3246633284            19-nov-2016 16:19:49 User I/O        db file sequential read                          30
6hds16zkc8cgm          2195747324            20-nov-2016 15:41:33 CPU             ON CPU                                           20
6hds16zkc8cgm          2195747324          0 20-nov-2016 15:41:33 Other           reliable message                                 10
6hds16zkc8cgm          2195747324            20-nov-2016 15:41:33 User I/O        db file parallel read                           160
6hds16zkc8cgm          2195747324            20-nov-2016 15:41:33 User I/O        db file sequential read                          20
6hds16zkc8cgm          2195747324            20-nov-2016 15:41:33 User I/O        read by other session                            10
6hds16zkc8cgm          2195747324            21-nov-2016 15:49:06 CPU             ON CPU                                           20
6hds16zkc8cgm          2195747324          0 21-nov-2016 15:49:06 Other           reliable message                                 10
6hds16zkc8cgm          2195747324            21-nov-2016 15:49:06 User I/O        db file parallel read                           140
6hds16zkc8cgm          2195747324            21-nov-2016 15:49:06 User I/O        db file sequential read                          40
6hds16zkc8cgm           463109863          8 22-nov-2016 16:54:09 CPU             ON CPU                                           20
6hds16zkc8cgm           463109863          0 22-nov-2016 16:54:09 Other           reliable message                                 10
6hds16zkc8cgm           463109863          8 22-nov-2016 16:54:09 User I/O        db file parallel read                           550
6hds16zkc8cgm           463109863          8 22-nov-2016 16:54:09 User I/O        db file sequential read                          30
6hds16zkc8cgm           463109863          8 22-nov-2016 16:54:09 User I/O        direct path read                               7120
6hds16zkc8cgm          2387297713          0 23-nov-2016 17:06:32 Application     enq: KO - fast object checkpoint                 20
6hds16zkc8cgm          2387297713          6 23-nov-2016 17:06:32 CPU             ON CPU                                           30
6hds16zkc8cgm          2387297713          0 23-nov-2016 17:06:32 Configuration   log buffer space                                 10
6hds16zkc8cgm          2387297713          6 23-nov-2016 17:06:32 User I/O        db file sequential read                          40
6hds16zkc8cgm          2387297713          6 23-nov-2016 17:06:32 User I/O        direct path read                               5480
6hds16zkc8cgm          2387297713          6 23-nov-2016 17:06:32 User I/O        read by other session                            60
6hds16zkc8cgm           705417430          6 24-nov-2016 16:48:13 CPU             ON CPU                                           60
6hds16zkc8cgm           705417430          6 24-nov-2016 16:48:13 Configuration   log buffer space                                 20
6hds16zkc8cgm           705417430          6 24-nov-2016 16:48:13 User I/O        db file sequential read                         400
6hds16zkc8cgm           705417430          6 24-nov-2016 16:48:13 User I/O        direct path read                               5780
6hds16zkc8cgm           705417430          6 24-nov-2016 16:48:13 User I/O        read by other session                           170
6hds16zkc8cgm          3246633284          6 25-nov-2016 15:39:23 CPU             ON CPU                                           70
6hds16zkc8cgm          3246633284          6 25-nov-2016 15:39:23 User I/O        db file parallel read                          1980
6hds16zkc8cgm          3246633284          6 25-nov-2016 15:39:23 User I/O        db file sequential read                          80
6hds16zkc8cgm          3246633284          6 25-nov-2016 15:39:23 User I/O        direct path read                              13430
6hds16zkc8cgm          3246633284          6 25-nov-2016 15:39:23 User I/O        read by other session                            10
6hds16zkc8cgm          2195747324         16 26-nov-2016 15:32:14 CPU             ON CPU                                           50
6hds16zkc8cgm          2195747324         16 26-nov-2016 15:32:14 User I/O        db file sequential read                         140
6hds16zkc8cgm          2195747324         16 26-nov-2016 15:32:14 User I/O        direct path read                                360
6hds16zkc8cgm          2195747324         16 26-nov-2016 15:32:14 User I/O        read by other session                            10
6hds16zkc8cgm          2195747324         16 27-nov-2016 16:27:27 CPU             ON CPU                                           40
6hds16zkc8cgm          2195747324         16 27-nov-2016 16:27:27 User I/O        direct path read                                280
6hds16zkc8cgm           705417430          0 28-nov-2016 16:42:39 Application     enq: KO - fast object checkpoint                 10
6hds16zkc8cgm           705417430         10 28-nov-2016 16:42:39 CPU             ON CPU                                           10
6hds16zkc8cgm           705417430         10 28-nov-2016 16:42:39 Concurrency     buffer busy waits                                90
6hds16zkc8cgm           705417430         10 28-nov-2016 16:42:39 Configuration   log buffer space                                 10
6hds16zkc8cgm           705417430          0 28-nov-2016 16:42:39 Other           reliable message                                 10
6hds16zkc8cgm           705417430         10 28-nov-2016 16:42:39 User I/O        direct path read                               8520
6hds16zkc8cgm          1445916266          0 29-nov-2016 16:33:40 Application     enq: KO - fast object checkpoint                 10
6hds16zkc8cgm          1445916266          6 29-nov-2016 16:33:40 CPU             ON CPU                                           30
6hds16zkc8cgm          1445916266          6 29-nov-2016 16:33:40 User I/O        db file parallel read                           880
6hds16zkc8cgm          1445916266          6 29-nov-2016 16:33:40 User I/O        db file sequential read                         120
6hds16zkc8cgm          1445916266          6 29-nov-2016 16:33:40 User I/O        direct path read                               7520
6hds16zkc8cgm          1445916266          6 29-nov-2016 16:33:40 User I/O        read by other session                            30
6hds16zkc8cgm           463109863          0 30-nov-2016 16:33:40 CPU             ON CPU                                           10
6hds16zkc8cgm           463109863         15 30-nov-2016 16:33:40 CPU             ON CPU                                           10
6hds16zkc8cgm           463109863         15 30-nov-2016 16:33:40 User I/O        db file sequential read                         150
6hds16zkc8cgm           463109863         15 30-nov-2016 16:33:40 User I/O        direct path read                                190
6hds16zkc8cgm           463109863          0                      CPU             ON CPU                                           20
6hds16zkc8cgm           463109863                                 CPU             ON CPU                                           10
6hds16zkc8cgm           463109863          0                      Concurrency     cursor: pin S wait on X                         430
6hds16zkc8cgm           463109863          0                      Concurrency     library cache lock                               10
6hds16zkc8cgm           705417430          0                      CPU             ON CPU                                           10
6hds16zkc8cgm           705417430                                 CPU             ON CPU                                           10
6hds16zkc8cgm           705417430          0                      Concurrency     cursor: pin S wait on X                          90
6hds16zkc8cgm           705417430          0                      Concurrency     library cache lock                               20
6hds16zkc8cgm          1445916266                                 CPU             ON CPU                                           10
6hds16zkc8cgm          2195747324                                 CPU             ON CPU                                           40
6hds16zkc8cgm          2195747324          0                      CPU             ON CPU                                           10
6hds16zkc8cgm          2195747324          0                      Concurrency     cursor: pin S wait on X                         310
6hds16zkc8cgm          2387297713          0                      CPU             ON CPU                                           10
6hds16zkc8cgm          2387297713          0                      Concurrency     cursor: pin S wait on X                         110
6hds16zkc8cgm          3246633284                                 CPU             ON CPU                                           10
6hds16zkc8cgm          3246633284          0                      CPU             ON CPU                                           10
6hds16zkc8cgm          3246633284          0                      Concurrency     cursor: pin S wait on X                         110
6hds16zkc8cgm          3246633284                                 User I/O        db file sequential read                          10
6hds16zkc8cgm          4190635369          0                      CPU             ON CPU                                           10
6hds16zkc8cgm          4190635369          0                      Concurrency     cursor: pin S wait on X                         310

99 rows selected.

We see that the number one wait event while executing our query is direct path read. This wait event became much more frequent in 11g, where Oracle decided that a serial full scan of a large table that does not fit in the buffer cache may bypass it and read directly into the PGA. Overall it is not really an issue in itself; the SQL behind it must be tuned to favor more optimal choices. See the Direct path chapter for more information.
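The decision can be sketched as a deliberately simplified, hypothetical predicate. The real heuristic is internal and undocumented; the 2% factor below echoes the commonly discussed default of the hidden _SMALL_TABLE_THRESHOLD parameter and is purely an assumption here:

```python
# Simplified sketch (NOT Oracle's exact, undocumented heuristic) of the
# 11g serial direct path read decision: full scans of segments that are
# large relative to the buffer cache bypass it and read into the PGA.

def uses_direct_path_read(segment_blocks: int, buffer_cache_blocks: int,
                          threshold_pct: float = 0.02) -> bool:
    """Hypothetical predicate: 'large' segments are read via direct path."""
    return segment_blocks > buffer_cache_blocks * threshold_pct

print(uses_direct_path_read(500_000, 1_000_000))  # True: big table, direct path
print(uses_direct_path_read(10_000, 1_000_000))   # False: cached, buffered read
```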

Direct path

On one of our databases we had the below Top Foreground wait event:

awr_mining02

Even if you will immediately want to focus on SQL tuning, you might first identify which SQL statements are mainly responsible for this:

SQL> set lines 150 pages 1000
SQL> col wait_class for a10
SQL> col event for a25
SQL> SELECT
     sql_id, sql_plan_hash_value, TO_CHAR(sql_exec_start,'dd-mon-yyyy hh24:mi:ss') AS sql_exec_start,
     wait_class, event, COUNT(*)*10 AS "time_waited (s)"
     FROM
     (SELECT sql_id,sql_plan_hash_value,sql_exec_start,
      DECODE(NVL(wait_class,'ON CPU'),'ON CPU',DECODE(session_type,'BACKGROUND','BCPU','CPU'),wait_class) AS wait_class,
      nvl(event,'ON CPU') AS event
      FROM dba_hist_active_sess_history
      WHERE event like '%direct%') a
     GROUP BY sql_id, sql_plan_hash_value, sql_exec_start, wait_class,event
     ORDER BY 6 desc;

SQL_ID        SQL_PLAN_HASH_VALUE SQL_EXEC_START       WAIT_CLASS EVENT                     time_waited (s)
------------- ------------------- -------------------- ---------- ------------------------- ---------------
amzmxuns5dctz          1006290636 21-nov-2016 15:30:51 User I/O   direct path read temp              117670
amzmxuns5dctz          2716462349 30-nov-2016 15:28:23 User I/O   direct path read temp              101540
amzmxuns5dctz          2716462349 03-dec-2016 15:40:04 User I/O   direct path read temp               96800
                                0                      User I/O   direct path read                    90720
amzmxuns5dctz          1006290636 25-nov-2016 06:39:34 User I/O   direct path write temp              50740
f92r4f37kn015          2776764876 30-nov-2016 03:22:25 User I/O   direct path write temp              48010
114w500wy5039          2301194886 04-dec-2016 16:34:14 User I/O   direct path write temp              45480
cvbq6vqk8dbf3           720130659 25-nov-2016 15:42:05 User I/O   direct path read                    36520
amzmxuns5dctz          2716462349 26-nov-2016 15:40:00 User I/O   direct path read temp               33610
f92r4f37kn015          2776764876 24-nov-2016 00:57:35 User I/O   direct path write temp              33300
b2cwuxca44yw8          1919045833 01-dec-2016 16:30:03 User I/O   direct path read                    31890
amzmxuns5dctz          2716462349 30-nov-2016 06:35:58 User I/O   direct path read temp               31830
amzmxuns5dctz          1006290636 24-nov-2016 15:10:44 User I/O   direct path read temp               28210
amzmxuns5dctz          2716462349 30-nov-2016 06:35:58 User I/O   direct path write temp              27350
6hds16zkc8cgm          4190635369 01-dec-2016 15:38:50 User I/O   direct path read                    27030
amzmxuns5dctz          2716462349 28-nov-2016 15:13:16 User I/O   direct path read temp               26850
6hds16zkc8cgm          3246633284 05-dec-2016 16:15:22 User I/O   direct path read                    23020
aqh9x3yay6khc          2336987607 21-nov-2016 20:54:01 User I/O   direct path read temp               21060
amzmxuns5dctz          2716462349 01-dec-2016 15:08:29 User I/O   direct path read temp               20510
9mt8p7z1wjjkn          2327812543 04-dec-2016 11:00:32 User I/O   direct path read                    20450
48091vbtjj4xa          1708748504 28-nov-2016 04:03:28 User I/O   direct path read temp               20090
amzmxuns5dctz          2716462349 03-dec-2016 06:36:36 User I/O   direct path write temp              17190
f92r4f37kn015          2776764876 27-nov-2016 00:41:30 User I/O   direct path read temp               17190
amzmxuns5dctz          2716462349 29-nov-2016 15:43:15 User I/O   direct path read temp               16860
1wzd30k3m1ghn          1433478644 07-dec-2016 06:29:49 User I/O   direct path read temp               15990
amzmxuns5dctz          2716462349 30-nov-2016 15:28:23 User I/O   direct path write temp              15830
amzmxuns5dctz          2716462349 07-dec-2016 06:35:28 User I/O   direct path write temp              14850
1wzd30k3m1ghn          1433478644 29-nov-2016 08:07:57 User I/O   direct path read temp               14850
08kdqk1abqm0v          2813508085 04-dec-2016 15:49:02 User I/O   direct path read                    14680
aqh9x3yay6khc          2336987607 22-nov-2016 21:07:34 User I/O   direct path read temp               14450
5tjqq1cggd0c2          3765923229 23-nov-2016 19:04:17 User I/O   direct path read                    14280
5tjqq1cggd0c2          1896303415 22-nov-2016 19:23:11 User I/O   direct path read                    14210
5tjqq1cggd0c2          3837189456 29-nov-2016 19:34:04 User I/O   direct path read                    14010
grb144xf2asf3           759327608 25-nov-2016 22:17:39 User I/O   direct path read                    13790
f92r4f37kn015          2776764876 05-dec-2016 00:11:15 User I/O   direct path read temp               13460
6hds16zkc8cgm          3246633284 25-nov-2016 15:39:23 User I/O   direct path read                    13430
amzmxuns5dctz          1006290636 21-nov-2016 07:14:12 User I/O   direct path read temp               12780
b4yrbsczwf9xw          3470921796 27-nov-2016 03:58:25 User I/O   direct path read                    12720
grb144xf2asf3          1075986364 05-dec-2016 22:48:32 User I/O   direct path read                    12310
g8n42y0duhggt           152545410 24-nov-2016 21:13:25 User I/O   direct path read                    12060
a2xhxhppc23y1           327407190 06-dec-2016 22:56:59 User I/O   direct path read                    11930
1wzd30k3m1ghn          1433478644 23-nov-2016 07:13:00 User I/O   direct path read temp               11820
0pmqzssfmmdcv          3444765716 04-dec-2016 01:57:54 User I/O   direct path read                    11480
1zzsqqwax38kk           660240898 27-nov-2016 06:58:14 User I/O   direct path read                    11200
amzmxuns5dctz          2716462349 07-dec-2016 06:35:28 User I/O   direct path read temp               11160
aqh9x3yay6khc          2336987607 29-nov-2016 20:01:40 User I/O   direct path read temp               11110
ckapap92tfy3n          2695238043 23-nov-2016 23:47:49 User I/O   direct path read                    11050
aqh9x3yay6khc          2336987607 03-dec-2016 22:33:33 User I/O   direct path read temp               10990
amzmxuns5dctz          2716462349 03-dec-2016 06:36:36 User I/O   direct path read temp               10950
1wzd30k3m1ghn          1433478644 26-nov-2016 06:48:59 User I/O   direct path read temp               10950
1wzd30k3m1ghn          1433478644 06-dec-2016 06:45:53 User I/O   direct path read temp               10870
1wzd30k3m1ghn          1433478644 02-dec-2016 06:41:28 User I/O   direct path read temp               10690
57svfqqryy7h0          3182238936 28-nov-2016 05:17:30 User I/O   direct path read                    10540
74447zmmmw0zk           316009422 29-nov-2016 02:41:26 User I/O   direct path read                    10470
f92r4f37kn015          2776764876 28-nov-2016 02:22:45 User I/O   direct path read temp               10310
0cy2upaz2mtp3           420311346 01-dec-2016 17:54:20 User I/O   direct path read                    10220
daxpwau39csac          2945512965 29-nov-2016 18:07:05 User I/O   direct path read                    10200
                                0                      User I/O   direct path write                   10060
.

The NULL sql_id corresponds to the checkpoint process.

Here above we see that sql_id amzmxuns5dctz is the one to focus on…

Checkpoint

After a reboot of our database we (luckily) noticed a huge increase in physical writes, as shown in the HP Performance Manager application:

awr_mining03

We investigated in all the directions we could: OS, database, SQL tuning, … We noticed we had mistakenly (a long time ago) set FAST_START_MTTR_TARGET to 300. To be honest I have never really understood the added value of this parameter, even if I understand its description. What is the point of tuning for a situation (recovery) that hopefully happens rarely, while impacting your performance all year long? I prefer to let checkpoints occur at redo log switches and set FAST_START_MTTR_TARGET to 0 (the default value).
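Resetting the parameter is a one-liner; a minimal sketch:

```sql
-- Check the current value...
SQL> show parameter fast_start_mttr_target

-- ...and go back to the default: 0 disables MTTR-driven incremental
-- checkpointing, letting checkpoints happen at redo log switch.
SQL> ALTER SYSTEM SET fast_start_mttr_target = 0 SCOPE = BOTH;
```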

That said, we reset the parameter and, guess what? Physical writes decreased a lot:

awr_mining04

Then I wanted to see, on the Oracle side, the decrease in the number of checkpoints as well as in the number of writes due to incremental checkpointing activity. DBA_HIST_SYSSTAT comes to the rescue. In the meantime a teammate had changed the AWR snapshot frequency, so I had to adjust my query a bit to get the sum per hour:

SELECT TO_CHAR(TRUNC(begin_interval_time,'HH'),'dd-mon-yyyy hh24:mi:ss') AS time,
       stat_name,
       SUM(value) AS value
FROM (
  SELECT hsn.begin_interval_time,
         hsy.stat_name,
         hsy.value - hsy.prev_value AS value
  FROM (SELECT snap_id, instance_number, stat_name, value,
               LAG(value,1,value) OVER (PARTITION BY instance_number, stat_name ORDER BY snap_id) AS prev_value
        FROM dba_hist_sysstat
        WHERE stat_name IN ('DBWR checkpoints','DBWR checkpoint buffers written')) hsy,
       dba_hist_snapshot hsn
  WHERE hsy.snap_id = hsn.snap_id
  AND hsy.instance_number = hsn.instance_number)
GROUP BY TRUNC(begin_interval_time,'HH'), stat_name
ORDER BY TRUNC(begin_interval_time,'HH'), stat_name;

The two statistic names I will use are the following (please refer to the official documentation or V$STATNAME for a complete list of available ones):

  • DBWR checkpoint buffers written: number of buffers that were written for checkpoints.
  • DBWR checkpoints: number of times DBWR was asked to scan the cache and write all blocks marked for a checkpoint or the end of recovery. This statistic is always larger than “background checkpoints completed”.

I initially exported the result set in Excel format and built the graphs directly in Excel, but finally decided to test the charting capability of SQL Developer. To access it use the Reports tab, or activate it via the View and Reports menu. Then create a new report in User Defined Reports using the Chart style and Area as Chart Type, plus any other options you like. A good trick is to connect to a database and check Use Live Data in the Property/Data option to see directly what your report looks like.

The queries to be displayed must be built to return the following three values: ‘x axis value’, ‘series name’ and ‘y axis value’.
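For illustration, a minimal query honouring this column contract (here simply plotting the raw cumulative values of the two statistics; the hourly delta query shown earlier returns the same three columns):

```sql
-- Column 1: x axis, column 2: series name, column 3: y axis
SELECT TO_CHAR(hsn.begin_interval_time,'dd-mon-yyyy hh24:mi') AS x_value,
       hsy.stat_name AS series_name,
       hsy.value AS y_value
FROM dba_hist_sysstat hsy, dba_hist_snapshot hsn
WHERE hsy.snap_id = hsn.snap_id
AND hsy.instance_number = hsn.instance_number
AND hsy.stat_name IN ('DBWR checkpoints','DBWR checkpoint buffers written')
ORDER BY hsn.begin_interval_time, hsy.stat_name;
```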

We see the number of checkpoints decreasing after the morning of December 7th (we made the change around 10:00 AM CET):

awr_mining05

But even more impressive is the number of buffers written:

awr_mining06

But we had not changed anything on the database before the application teams started to complain that all their queries were slow. The change to FAST_START_MTTR_TARGET solved the performance troubles but does not explain the root cause of the issue. By restoring a backup we were able to extract and load old AWR figures (see the chapter on this below) and finally perform AWR difference reports. I then obviously focused on the Top Segments Comparison by Physical Writes section of my AWR difference report (a good day before the issue versus a bad one) and saw this:

awr_mining07

I noticed many newcomers in the top physical writes list, while the first one showed a 500% increase in the number of physical writes…

The table to use in this situation is DBA_HIST_SEG_STAT, but you need the object ID to query it:

SQL> set lines 150 pages 1000
SQL> col object_name for a30
SQL> select object_id,owner,object_name,object_type
     from dba_objects
     where owner||'.'||object_name in ('HUB.CRM_INVOICE_PREV_PK','E2DWH.BACKLOG_NEW',
     'E2DWH.E2_X_DWH_2_PK','E2DWH.E2_X_DWH2_IDX_SO_ITEM__ID','E2DWH.E2_X_DWH2_IDX_LAST_UPD');

 OBJECT_ID OWNER                          OBJECT_NAME                    OBJECT_TYPE
---------- ------------------------------ ------------------------------ -------------------
  43994766 E2DWH                          BACKLOG_NEW                    TABLE
  44003115 HUB                            CRM_INVOICE_PREV_PK            INDEX
  44005025 E2DWH                          E2_X_DWH2_IDX_SO_ITEM__ID      INDEX
  44005026 E2DWH                          E2_X_DWH2_IDX_LAST_UPD         INDEX
  44005027 E2DWH                          E2_X_DWH_2_PK                  INDEX

Or, as Oracle suggests, use the DBA_HIST_SEG_STAT_OBJ table (even if it is really difficult to guess which key this table has):

SQL> set lines 150 pages 1000
SQL> col object_name for a50
SQL> select distinct obj#,owner||'.'||object_name||' ('||nvl2(subobject_name,object_type || ': ' || subobject_name,object_type)||')' as object_name
     from DBA_HIST_SEG_STAT_OBJ
     where owner||'.'||OBJECT_NAME in ('HUB.CRM_INVOICE_PREV_PK','E2DWH.BACKLOG_NEW',
     'E2DWH.E2_X_DWH_2_PK','E2DWH.E2_X_DWH2_IDX_SO_ITEM__ID','E2DWH.E2_X_DWH2_IDX_LAST_UPD');

      OBJ# OBJECT_NAME
---------- --------------------------------------------------
  43994766 E2DWH.BACKLOG_NEW (TABLE)
  44005027 E2DWH.E2_X_DWH_2_PK (INDEX)
  44005026 E2DWH.E2_X_DWH2_IDX_LAST_UPD (INDEX)
  44003115 HUB.CRM_INVOICE_PREV_PK (INDEX)
  44005025 E2DWH.E2_X_DWH2_IDX_SO_ITEM__ID (INDEX)

Then you can access the physical writes of this object with something like:

SQL> set lines 150 pages 1000
SQL> SELECT
     TO_CHAR(begin_interval_time,'dd-mon-yyyy hh24:mi:ss') AS begin_interval_time,
     TO_CHAR(end_interval_time,'dd-mon-yyyy hh24:mi:ss') AS end_interval_time,
     hss.physical_writes_delta,
     hss.physical_write_requests_delta
     FROM dba_hist_seg_stat hss, dba_hist_snapshot hsn
     WHERE hss.snap_id = hsn.snap_id
     AND hss.instance_number = hsn.instance_number
     AND hss.obj# = 44003115
     ORDER BY hss.snap_id;

BEGIN_INTERVAL_TIME  END_INTERVAL_TIME    PHYSICAL_WRITES_DELTA PHYSICAL_WRITE_REQUESTS_DELTA
-------------------- -------------------- --------------------- -----------------------------
04-nov-2016 14:00:31 04-nov-2016 15:00:48                742817                        658007
05-nov-2016 14:00:08 05-nov-2016 15:00:26                749335                        652146
05-nov-2016 15:00:26 05-nov-2016 16:00:38                     0                             0
06-nov-2016 14:00:41 06-nov-2016 15:01:00                815897                        718208
07-nov-2016 14:00:20 07-nov-2016 15:00:36                722403                        613129
08-nov-2016 14:00:37 08-nov-2016 15:00:56                608903                        546690
09-nov-2016 14:00:51 09-nov-2016 15:00:08                633603                        549895
10-nov-2016 14:00:39 10-nov-2016 15:00:59                722215                        656331
11-nov-2016 14:00:54 11-nov-2016 15:00:14                606513                        535992
11-nov-2016 15:00:14 11-nov-2016 16:00:28                     0                             0
11-nov-2016 16:00:28 11-nov-2016 17:00:46                     0                             0
11-nov-2016 17:00:46 11-nov-2016 18:02:20                     0                             0
11-nov-2016 18:02:20 11-nov-2016 19:00:48                     0                             0
11-nov-2016 19:00:48 11-nov-2016 20:00:15                     0                             0
18-nov-2016 14:00:40 18-nov-2016 15:00:19               4514625                       4298799
19-nov-2016 14:00:22 19-nov-2016 15:00:56               4811683                       4610613
20-nov-2016 14:00:28 20-nov-2016 15:00:10               2404822                       2267424
21-nov-2016 14:00:08 21-nov-2016 15:00:34               2300106                       2145016
22-nov-2016 14:00:06 22-nov-2016 15:00:39               3589330                       3428962
22-nov-2016 15:00:39 22-nov-2016 16:00:04               1422715                       1373614
23-nov-2016 14:00:39 23-nov-2016 15:00:06               4505033                       4318507
23-nov-2016 17:00:40 23-nov-2016 17:10:56                     0                             0
24-nov-2016 14:00:34 24-nov-2016 15:00:59               2027948                       1873524
25-nov-2016 14:00:07 25-nov-2016 14:55:52               4477194                       4288564
26-nov-2016 14:00:59 26-nov-2016 15:00:18               2350258                       2210634
27-nov-2016 14:00:37 27-nov-2016 15:00:14               4886175                       4681261
28-nov-2016 14:00:19 28-nov-2016 15:00:45               2512397                       2394625
29-nov-2016 14:00:23 29-nov-2016 15:00:59               3476692                       3323396
29-nov-2016 15:00:59 29-nov-2016 16:00:29               1686445                       1632668
30-nov-2016 14:01:07 30-nov-2016 15:00:37               4468593                       4279528
01-dec-2016 14:00:40 01-dec-2016 15:00:02               2549539                       2426793
02-dec-2016 14:00:18 02-dec-2016 15:00:01               4781775                       4589442

32 rows selected.

We clearly see the trend, with a 7-8 times increase in the number of writes.

If you check a segment that has no figures for part of the period (appearing or disappearing objects) then the query is a bit more complex to build. In other words, building a query that reports what appears in an AWR difference report is not so easy. Using standard deviation (STDDEV) I have tried to build a query that shows the segments whose physical writes varied the most, even when the segment appears or disappears:

SQL> set lines 150 pages 1000
SQL> col object_name for a50
SQL> SELECT
     TO_CHAR(hsn.begin_interval_time,'dd-mon-yyyy hh24:mi:ss') AS begin_interval_time,
     TO_CHAR(hsn.end_interval_time,'dd-mon-yyyy hh24:mi:ss') AS end_interval_time,
     --obj#,
     (SELECT distinct owner||'.'||object_name||' ('||nvl2(subobject_name,object_type || ': ' || subobject_name,object_type)||')'
      FROM dba_hist_seg_stat_obj
      WHERE obj#=hss.obj# AND dbid=hss.dbid AND dataobj#=hss.dataobj# AND ts#=hss.ts#) AS object_name,
     hss.physical_writes_delta,
     hss.stddev_physical_writes_delta
     FROM
     (SELECT
     snap_id,
     dbid,
     instance_number,
     obj#,
     dataobj#,
     ts#,
     physical_writes_delta,
     ROUND(STDDEV(physical_writes_delta) over (partition by obj#)) AS stddev_physical_writes_delta,
     COUNT(*) OVER (PARTITION BY obj#) AS nb
     FROM dba_hist_seg_stat
     GROUP BY snap_id,dbid,instance_number,obj#,dataobj#,ts#,physical_writes_delta) hss, dba_hist_snapshot hsn
     WHERE hss.snap_id = hsn.snap_id
     AND hss.instance_number = hsn.instance_number
     AND hss.nb >= 5
     ORDER BY stddev_physical_writes_delta desc,hss.snap_id;

BEGIN_INTERVAL_TIME  END_INTERVAL_TIME    OBJECT_NAME                                        PHYSICAL_WRITES_DELTA STDDEV_PHYSICAL_WRITES_DELTA
-------------------- -------------------- -------------------------------------------------- --------------------- ----------------------------
04-nov-2016 14:00:31 04-nov-2016 15:00:48 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   742817                      1763019
05-nov-2016 14:00:08 05-nov-2016 15:00:26 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   749335                      1763019
05-nov-2016 15:00:26 05-nov-2016 16:00:38 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
06-nov-2016 14:00:41 06-nov-2016 15:01:00 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   815897                      1763019
07-nov-2016 14:00:20 07-nov-2016 15:00:36 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   722403                      1763019
08-nov-2016 14:00:37 08-nov-2016 15:00:56 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   608903                      1763019
09-nov-2016 14:00:51 09-nov-2016 15:00:08 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   633603                      1763019
10-nov-2016 14:00:39 10-nov-2016 15:00:59 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   722215                      1763019
11-nov-2016 14:00:54 11-nov-2016 15:00:14 HUB.CRM_INVOICE_PREV_PK (INDEX)                                   606513                      1763019
11-nov-2016 15:00:14 11-nov-2016 16:00:28 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
11-nov-2016 16:00:28 11-nov-2016 17:00:46 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
11-nov-2016 17:00:46 11-nov-2016 18:02:20 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
11-nov-2016 18:02:20 11-nov-2016 19:00:48 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
11-nov-2016 19:00:48 11-nov-2016 20:00:15 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
18-nov-2016 14:00:40 18-nov-2016 15:00:19 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4514625                      1763019
19-nov-2016 14:00:22 19-nov-2016 15:00:56 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4811683                      1763019
20-nov-2016 14:00:28 20-nov-2016 15:00:10 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  2404822                      1763019
21-nov-2016 14:00:08 21-nov-2016 15:00:34 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  2300106                      1763019
22-nov-2016 14:00:06 22-nov-2016 15:00:39 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  3589330                      1763019
22-nov-2016 15:00:39 22-nov-2016 16:00:04 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  1422715                      1763019
23-nov-2016 14:00:39 23-nov-2016 15:00:06 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4505033                      1763019
23-nov-2016 17:00:40 23-nov-2016 17:10:56 HUB.CRM_INVOICE_PREV_PK (INDEX)                                        0                      1763019
24-nov-2016 14:00:34 24-nov-2016 15:00:59 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  2027948                      1763019
25-nov-2016 14:00:07 25-nov-2016 14:55:52 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4477194                      1763019
26-nov-2016 14:00:59 26-nov-2016 15:00:18 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  2350258                      1763019
27-nov-2016 14:00:37 27-nov-2016 15:00:14 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4886175                      1763019
28-nov-2016 14:00:19 28-nov-2016 15:00:45 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  2512397                      1763019
29-nov-2016 14:00:23 29-nov-2016 15:00:59 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  3476692                      1763019
29-nov-2016 15:00:59 29-nov-2016 16:00:29 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  1686445                      1763019
30-nov-2016 14:01:07 30-nov-2016 15:00:37 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4468593                      1763019
01-dec-2016 14:00:40 01-dec-2016 15:00:02 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  2549539                      1763019
02-dec-2016 14:00:18 02-dec-2016 15:00:01 HUB.CRM_INVOICE_PREV_PK (INDEX)                                  4781775                      1763019
04-nov-2016 00:00:55 04-nov-2016 01:00:44 HUB.CRM_INVOICE_PK (INDEX)                                        474743                      1141594
04-nov-2016 20:00:26 04-nov-2016 21:00:47 HUB.CRM_INVOICE_PK (INDEX)                                        765938                      1141594
05-nov-2016 21:00:38 05-nov-2016 22:00:04 HUB.CRM_INVOICE_PK (INDEX)                                        919735                      1141594
06-nov-2016 19:00:44 06-nov-2016 20:00:11 HUB.CRM_INVOICE_PK (INDEX)                                        107619                      1141594
06-nov-2016 20:00:11 06-nov-2016 21:00:38 HUB.CRM_INVOICE_PK (INDEX)                                        474607                      1141594
07-nov-2016 20:00:30 07-nov-2016 21:00:45 HUB.CRM_INVOICE_PK (INDEX)                                        725956                      1141594
08-nov-2016 20:00:48 08-nov-2016 21:00:02 HUB.CRM_INVOICE_PK (INDEX)                                        113880                      1141594
08-nov-2016 20:00:48 08-nov-2016 21:00:02 HUB.CRM_INVOICE_PK (INDEX)                                        339013                      1141594
.

In the above query I have limited the sample size to at least 5 elements because below this limit I consider the standard deviation values meaningless. We almost get what the AWR difference report shows, except for the appearing/disappearing segments…

The Service Request we opened with Oracle support reached the exact same conclusion…

AWR figures extract and load

If you come too late to check performance, and so lack the figures from when they were “good”, one solution is to restore a database backup and create a temporary database on a test server. You might not always be able to do it, but do it if you can afford it (at the same time it will validate your backup/restore policy)! Once the restore has been started, export the newest AWR figures from your production database using the awrextr.sql script located in $ORACLE_HOME/rdbms/admin; you just need to create a directory and a bit of free disk space.

Transfer the file to your test server where you have recovered the production backup (so containing the old AWR figures) and load the latest AWR figures into it with the awrload.sql script, also located in $ORACLE_HOME/rdbms/admin.
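In short, the extract and load sequence looks like this (the directory name and path are examples; both scripts prompt interactively for the DBID, snapshot range, directory and dump file name):

```sql
-- On the production database:
SQL> CREATE OR REPLACE DIRECTORY awr_dump AS '/tmp/awr_dump';
SQL> @?/rdbms/admin/awrextr.sql

-- On the test database, after transferring the generated .dmp file
-- to a directory declared the same way:
SQL> CREATE OR REPLACE DIRECTORY awr_dump AS '/tmp/awr_dump';
SQL> @?/rdbms/admin/awrload.sql
```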

At my first try I got the below error message:

ERROR at line 1:
ORA-20105: unable to move AWR data to SYS
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 2984
ORA-20107: not allowed to move AWR data for local dbid
ORA-06512: at line 3

So I changed the DBID of my restored database with the NID (DataBase NEWID) tool (of course the restore must have been done properly):

[oraxyz@server11 ~]$ nid target=sys DBNAME=sidxyz

DBNEWID: Release 11.2.0.4.0 - Production on Fri Dec 9 15:11:09 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Password:
Connected to database EDWHUB (DBID=3256370993)

Connected to server version 11.2.0

Control Files in database:
    /ora_edwhub1/ctrl/edwhub/control01.ctl
    /ora_edwhub1/ctrl/edwhub/control02.ctl


The following datafiles are offline immediate:
    /ora_edwhub1/data02/edwhub/logdmfact2_01.db (284)
    /ora_edwhub/data02/edwhub/logdmfact2_01.db (290)

NID-00122: Database should have no offline immediate datafiles


Change of database name failed during validation - database is intact.
DBNEWID - Completed with validation errors.

Once the recovery was properly done and the DBID changed, the import ran successfully and you can then generate AWR difference reports…

References

Oracle Disk Manager (ODM) successful implementation - https://blog.yannickjaquier.com/oracle/oracle-disk-manager-odm-implementation.html - Wed, 20 Dec 2017 15:08:37 +0000

Table of contents

Preamble

As far as I remember, one of our most important OLTP databases, storing a home-made application, has periodically suffered from performance problems. This database was upgraded to 12cR1, moved to RedHat Linux 6 two years back, and finally split from its applicative part (Tuxedo related). This gave us a lot of fresh air for a while, but as time passed performance degraded (why tune if the hardware is under-utilized?) and the recent addition of another application to the existing ones (tens of applications in reality) has moved us into a tense performance era…

Even if not explicitly mentioned in this blog, the performance of this database is one of the most challenging to maintain, and it is also the one (along with our BI database) where I have seen the trickiest problems to solve (and so it is the source of many articles).

Recently one of my teammates pushed to implement Oracle Disk Manager (ODM) from Veritas, which I had tested back in 2012 but never implemented in production; refer to my previous document or the official Veritas documentation on how to implement it.

Please also note that this ODM feature must be purchased and is included in Veritas InfoScale Storage and Veritas InfoScale Enterprise as displayed below:

odm12

To be fully transparent about our actions, during the agreed downtime we have:

  • Implemented Oracle Disk Manager (ODM).
  • Rebuilt blindly all application indexes.
  • Corrected a few logon storm issues. The worst case was one application making 80,000 connections per day (yes, almost one per second !!).

The change has been drastically beneficial, and this is what we will see below…

The database is using Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production and is running on Red Hat Enterprise Linux Server release 6.5 (Santiago). The database size is around 2 TB. We are lucky enough to have licenses for Diagnostic and Tuning packs.

The analysis below compares the week before the change versus the week after the change (working days only).

Analysis

I started by generating Operating System figures using HP OVPM; the first chart I always generate is the CPU one. In the chart below the break point is in the middle; this is a two-week period:

odm01

CPU usage has increased; had we not had positive users’ feedback about performance, we would be puzzled… In fact the CPU increase can be explained by the I/O subsystem performing better: the database server is able to do much more work and serve more users, hence the increase in CPU…

I have also generated disk related charts. Below are the HP OVPM definitions of the metrics:

GBL_DISK_PHYS_BYTE_RATE (Physical Disk Byte Rate):

The average number of KBs per second at which data was transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded. This is a measure of the physical data transfer rate. It is not directly related to the number of IOs, since IO requests can be of differing lengths. This is an indicator of how much data is being transferred to and from disk devices. Large spikes in this metric can indicate a disk bottleneck. On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw reads.

GBL_DISK_PHYS_IO_RATE (Physical Disk Rate):

The number of physical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded. On Unix systems, all types of physical disk IOs are counted, including file system IO, virtual memory IO and raw IO.

odm02
odm03

I am personally not able to see any change in trend related to IOPS…

Then I have generated an ADDM compare report. The function to use is DBMS_ADDM.COMPARE_DATABASES. You can select its result (for the base and compare periods) from dual and remove the extra bad lines from the generated spool file, which then becomes an HTML file. To display it I was obliged to download an old release of Firefox Portable (50.1.0) because the latest Chrome (62) and Firefox (57) releases I had on my desktop include additional security rules that forbid the display of the report.
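A minimal sketch of spooling the comparison to an HTML file (the snapshot IDs are hypothetical and the parameter names should be checked against the DBMS_ADDM documentation of your release):

```sql
SQL> set long 10000000 longchunksize 100000 linesize 32767 pagesize 0 trimspool on
SQL> spool addm_compare.html
SQL> -- Base period snapshots 1000-1010 versus compare period 1100-1110 (hypothetical)
SQL> SELECT DBMS_ADDM.COMPARE_DATABASES(
            base_begin_snap_id => 1000,
            base_end_snap_id   => 1010,
            comp_begin_snap_id => 1100,
            comp_end_snap_id   => 1110)
     FROM dual;
SQL> spool off
-- Then strip the SQL*Plus noise at the top and bottom of addm_compare.html
```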

So, with huge difficulty, I have been able to display it. The top part of the report is an average active sessions comparison:

odm04

The graphical comparison in itself says it all. The Commit and Concurrency wait classes have almost disappeared and the Application wait class has greatly reduced. We also noticed that the average active sessions figure has been divided by two, which we explain by the correction of the logon storm issues we had… We also confirm that the additional CPU consumption comes from Oracle sessions.

The bottom part (Resource tab) is an I/O comparison of the two periods; the other tabs did not show any useful information:

odm05

The Average Data File Single Block Read Latency (ms) has almost been divided by 10 with, obviously, no hardware change or upgrade.

I have also generated the classical AWR reports for each period (using awrrpt.sql script) as well as the much more interesting AWR compare period report (using awrddrpt.sql script).

The two standalone AWR reports do not in themselves bring more than the difference report. At least in the header of each I have the textual confirmation of the above ADDM compare-periods chart.

ADDM finding for first period (before ODM):

odm06

ADDM findings for second period (after ODM):

odm07

We obviously see a complete move from the “Commits and Rollbacks” wait class to the more classical “User I/O” wait class. I have been pushing development teams to reduce their commit frequency, but in real life this is really complex to tune, even more so when you are not able to pinpoint where to focus. Tuning the “User I/O” wait class is far easier, as you can concentrate on the SQL Statistics section of the report and tune the corresponding top SQL queries…

From the difference report, the top header clearly shows why our users are happier: the DB Time has been divided by two:

odm08

The difference load profile shows several interesting pieces of information:

odm09
  • The generated redo size is equivalent for the two periods, while we no longer wait on it after the ODM implementation.
  • We have served many more queries: CPU time and logical reads (blocks) increased. Again mainly linked to ODM.
  • We have written to and read less from disk: mainly due to the index rebuilds.

If we move on to the most interesting part, top timed events, we correlate the ADDM chart with text:

odm10

Various observations:

  • We have almost the same number of waits for log file sync (49 million vs 47 million) while the overall wait time has been divided by 12 (!!). Linked to the equivalent redo size figures for the two periods, this is the most interesting part of the ODM implementation.
  • Exact same comment for db file parallel read and direct path read: roughly the same number of waits but an overall wait time divided by 4-5.
  • Strangely, the other I/O related wait events do not show a similar improvement…
  • The cursor: pin S wait on X wait time has been divided by 50, yet we did not change the application. I have not seen big changes in SQL with a high version count and/or a high number of parse calls. So far it remains unexplained, or indirectly linked to the I/O improvement…

The last part I want to share is the tablespace I/O stats. This part is directly linked to the index rebuilds and clearly shows the added value of having a good procedure to identify indexes that would benefit from a rebuild:

odm11

We have not only improved IOPS but also combined this with a decrease in their number in practically all tablespaces of our applications…

Conclusion

Here we have seen that even if you use the world’s fastest all-flash array, or even if your database is fully stored on a Fusion-io card using an NVMe driver, if you still use a VxFS filesystem the bottleneck might be elsewhere. It is clearly worth giving Oracle Disk Manager (ODM) a try. Of course Automatic Storage Management (ASM) is still an option and should provide the same benefit, but who would use it in a non-RAC environment?

Also, this is obviously not a bullet-proof approach: if by design your application performs really poorly (lots of bad queries doing millions of logical reads) then ODM will be of no help…

As next steps it would be interesting to investigate (non-exhaustively):

  • Cached Oracle Disk Manager (CODM): from the Veritas documentation, ODM I/O normally bypasses the file system cache and directly reads from and writes to disk. Cached ODM enables some I/O to use caching and read-ahead, which can improve ODM I/O performance for certain workloads.
  • Create, or enhance an existing, algorithm to identify indexes that would benefit from a rebuild.

References

Oracle Label Security (OLS) 12c installation and configuration - https://blog.yannickjaquier.com/oracle/oracle-label-security-ols-12c-setup.html - Mon, 20 Nov 2017 15:16:16 +0000

Table of contents

Preamble

Oracle Label Security (OLS) is not a new feature, as it was released in Oracle 9iR1. This feature is based on the Virtual Private Database (VPD) technology that was itself released back in Oracle 8i. OLS is a paid option of the database Enterprise Edition:

Oracle Virtual Private Database (VPD) is provided at no additional cost with the Enterprise Edition of Oracle Database. Oracle Label Security is an add-on security option for the Oracle Database Enterprise Edition.

When you want to protect rows of the same table you might do it yourself with a security column, checking that the connected user has the right to see each row with something like:

0 < (select count(*) from security_table sec where sec.name = 'BO_username' and fact_table.security__code like sec.code)

The wildcard (all regions, all groups, ...) can be simulated with an underscore (_), which works perfectly with the LIKE SQL operator.

The drawback is added complexity in all your SQL, as the check has to be appended to every WHERE clause. It is also quite easy to bypass the security if you are able to edit the running SQL (forbidding SQL editing in BO is a must in this case).
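To make the approach concrete, here is a small self-contained sketch (all table and column names are illustrative, mirroring the snippet above):

```sql
-- Mapping of users to the security codes they are allowed to see;
-- '_' matches any single character with LIKE, simulating a wildcard.
CREATE TABLE security_table (
  name VARCHAR2(30),  -- application (e.g. BO) user name
  code VARCHAR2(1)    -- security code granted to the user
);

-- Every query on the protected table must then embed the check:
SELECT f.*
FROM fact_table f
WHERE 0 < (SELECT COUNT(*)
           FROM security_table sec
           WHERE sec.name = 'BO_username'
           AND f.security__code LIKE sec.code);
```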

We have seen VPD in a previous post; it is the generic term covering Fine-Grained Access Control (FGAC), application contexts and global application contexts. VPD policies are written in PL/SQL while, as Oracle claims, Oracle Label Security is an out-of-the-box solution for row-level security.

The high level picture of OLS model is this Oracle picture (copyright Oracle):

ols01

This can also be seen as a 3D security model (copyright Oracle):

ols02

Testing for this post has been done using Oracle Enterprise Edition 12cR1 (12.1.0.2) running on Oracle Linux 7.2 64 bits in a VirtualBox guest. The Cloud Control 13cR1 screenshots have been taken using the Oracle-provided VirtualBox guest.

Oracle Label Security installation

Check if OLS is active on your database:

SQL> select value from v$option where parameter = 'Oracle Label Security';

VALUE
----------------------------------------------------------------
FALSE

If not, you can activate it using the DataBase Configuration Assistant (DBCA):

ols03

Or from the command line with:

SQL> EXEC LBACSYS.OLS_ENFORCEMENT.ENABLE_OLS;
BEGIN LBACSYS.OLS_ENFORCEMENT.ENABLE_OLS; END;

*
ERROR at line 1:
ORA-12459: Oracle Label Security not configured
ORA-06512: at "LBACSYS.OLS_ENFORCEMENT", line 3
ORA-06512: at "LBACSYS.OLS_ENFORCEMENT", line 25
ORA-06512: at line 1


SQL> !oerr ORA 12459
12459, 00000, "Oracle Label Security not configured"
// *Cause:  An administrative operation was attempted without configuring
//          Oracle Label Security.
// *Action: Consult the Oracle Label Security documentation for information
//          on how to configure Oracle Label Security.

We need to configure OLS before enabling it (the database must be restarted):

SQL> select status from dba_ols_status where name = 'OLS_CONFIGURE_STATUS';

STATU
-----
FALSE

SQL> EXEC LBACSYS.CONFIGURE_OLS;

PL/SQL procedure successfully completed.

SQL> EXEC LBACSYS.OLS_ENFORCEMENT.ENABLE_OLS;

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  2929936 bytes
Variable Size             603982576 bytes
Database Buffers          226492416 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

Check the status:

SQL> select status from dba_ols_status where name = 'OLS_CONFIGURE_STATUS';

STATU
-----
TRUE

SQL> select value from v$option where parameter = 'Oracle Label Security';

VALUE
----------------------------------------------------------------
TRUE

Unlock the OLS database user (LBACSYS) and change its password:

SQL> alter user lbacsys identified by "secure_password" account unlock;

User altered.

Oracle Label Security test data model

The schema owning the data model is an OS-authenticated account; I grant it the LBAC_DBA role as this account will administer OLS directly:

SQL> create user app identified externally;

User created.

SQL> grant connect,resource to app;

Grant succeeded.

SQL> grant unlimited tablespace to app;

Grant succeeded.

SQL> grant lbac_dba to app;

Grant succeeded.

The data model is made of three tables, one fact table and two dimension tables: SALES, PRODUCT_GROUP and REGION. The creation script is, I hope, self-explanatory (no indexes except the ones backing primary keys, as this post is not performance related):

set pages 20
create table product_group(
  code varchar2(4) primary key,
  descr varchar2(20))
tablespace users;

insert into product_group values('0001','IoT');
insert into product_group values('0002','Mems');
insert into product_group values('0003','Smart Driving');

create table region(
  code varchar2(4) primary key,
  descr varchar2(30))
tablespace users;

set define '#'
insert into region values('0001','America');
insert into region values('0002','Asia Pacific');
insert into region values('0003','Japan & Korea');
insert into region values('0004','Greater China');
insert into region values('0005','Europe Middle East & Africa');

create table sales(
  product_group__code varchar2(4),
  region__code varchar2(4),
  val number,
	constraint product_group__code_fk foreign key (product_group__code) references product_group(code),
	constraint region__code_fk foreign key (region__code) references region(code))
tablespace users;

insert into sales values('0001','0005',1500);
insert into sales values('0002','0005',10000);
insert into sales values('0003','0005',500);
insert into sales values('0001','0001',5000);
insert into sales values('0002','0001',7500);
insert into sales values('0003','0001',400);
insert into sales values('0001','0003',4000);
insert into sales values('0002','0003',10400);
insert into sales values('0003','0003',400);
insert into sales values('0001','0004',3000);
insert into sales values('0002','0004',5000);
insert into sales values('0003','0004',200);
commit;

You can get sales by product group and region using the classical query below:

SQL> select app.product_group.descr, app.region.descr, sum(app.sales.val)
from app.product_group, app.region, app.sales
where app.sales.product_group__code=app.product_group.code
and app.sales.region__code=app.region.code
group by app.product_group.descr, app.region.descr
order by 1,2,3;

DESCR                DESCR                          SUM(SALES.VAL)
-------------------- ------------------------------ --------------
IoT                  America                                  5000
IoT                  Europe Middle East & Africa              1500
IoT                  Greater China                            3000
IoT                  Japan & Korea                            4000
Mems                 America                                  7500
Mems                 Europe Middle East & Africa             10000
Mems                 Greater China                            5000
Mems                 Japan & Korea                           10400
Smart Driving        America                                   400
Smart Driving        Europe Middle East & Africa               500
Smart Driving        Greater China                             200
Smart Driving        Japan & Korea                             400

12 rows selected.

I also create a classical password-authenticated account that would be used in your application:

SQL> create user app_read identified by "secure_password";

User created.

SQL> grant connect to app_read;

Grant succeeded.

SQL> grant select on app.sales to app_read;

Grant succeeded.

SQL> grant select on app.region to app_read;

Grant succeeded.

SQL> grant select on app.product_group to app_read;

Grant succeeded.

I also grant a few execute privileges on OLS packages to the APP account:

SQL> grant execute on sa_policy_admin to app;

Grant succeeded.

SQL> grant execute on to_lbac_data_label to app;

Grant succeeded.

Oracle Label Security setup

Policy creation

As the LBACSYS user I create the policy sales_ols_pol using all enforcement options. I choose to call the OLS column ols_col and to keep it visible:

SQL> exec sa_sysdba.create_policy(policy_name => 'sales_ols_pol', column_name => 'ols_col', default_options => 'all_control');

PL/SQL procedure successfully completed.

SQL> col policy_options for a50 word_wrapped
SQL> col column_name for a10
SQL> select * from dba_sa_policies;

POLICY_NAME                    COLUMN_NAM STATUS   POLICY_OPTIONS                                     POLIC
------------------------------ ---------- -------- -------------------------------------------------- -----
SALES_OLS_POL                  OLS_COL    ENABLED  READ_CONTROL, INSERT_CONTROL, UPDATE_CONTROL,      FALSE
                                                   DELETE_CONTROL, LABEL_DEFAULT, LABEL_UPDATE,
                                                   CHECK_CONTROL

Remark
The HIDE option hides the OLS column in protected tables (use 'all_control,hide').

To allow the APP account to manage its own Label Security policy I grant it the <policy_name>_dba role:

SQL> grant sales_ols_pol_dba to app;

Grant succeeded.

Levels creation

I define two levels, a public and a confidential one. Remember that the level_num value defines the sensitivity ranking:

SQL> exec sa_components.create_level(policy_name => 'sales_ols_pol', level_num => 10, short_name => 'P', long_name => 'PUBLIC');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_level(policy_name => 'sales_ols_pol', level_num => 20, short_name => 'C', long_name => 'CONFIDENTIAL');

PL/SQL procedure successfully completed.

SQL> col long_name for a20
SQL> select * from dba_sa_levels order by level_num;

POLICY_NAME                     LEVEL_NUM SHORT_NAME                     LONG_NAME
------------------------------ ---------- ------------------------------ --------------------
SALES_OLS_POL                          10 P                              PUBLIC
SALES_OLS_POL                          20 C                              CONFIDENTIAL

Compartments creation

I define three compartments matching my product groups. The comp_num parameter determines the order in which compartments are listed in labels. Names are not case sensitive and will be stored in uppercase:

SQL> exec sa_components.create_compartment(policy_name => 'sales_ols_pol',comp_num => '10', short_name => 'IOT', long_name => 'IOT');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_compartment(policy_name => 'sales_ols_pol',comp_num => '20', short_name => 'MEMS', long_name => 'MEMS');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_compartment(policy_name => 'sales_ols_pol',comp_num => '30', short_name => 'SD', long_name => 'SMART DRIVING');

PL/SQL procedure successfully completed.

SQL> select * from dba_sa_compartments order by comp_num;

POLICY_NAME                      COMP_NUM SHORT_NAME                     LONG_NAME
------------------------------ ---------- ------------------------------ --------------------
SALES_OLS_POL                          10 IOT                            IOT
SALES_OLS_POL                          20 MEMS                           MEMS
SALES_OLS_POL                          30 SD                             SMART DRIVING

Groups creation

I define three main groups and two sub-groups of one main group, so five in total. Names are not case sensitive and will be stored in uppercase. Special characters like & are not allowed, so I replace them with words:

SQL> exec sa_components.create_group(policy_name => 'sales_ols_pol', group_num => 1000, short_name => 'USA', long_name => 'AMERICA');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_group(policy_name => 'sales_ols_pol', group_num => 2000, short_name => 'AP', long_name => 'ASIA PACIFIC');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_group(policy_name => 'sales_ols_pol', group_num => 2010, short_name => 'JK', long_name => 'JAPAN AND KOREA', parent_name=> 'AP');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_group(policy_name => 'sales_ols_pol', group_num => 2020, short_name => 'GC', long_name => 'GREATER CHINA', parent_name=> 'AP');

PL/SQL procedure successfully completed.

SQL> exec sa_components.create_group(policy_name => 'sales_ols_pol', group_num => 3000, short_name => 'EMEA', long_name => 'EUROPE MIDDLE EAST AND AFRICA');

PL/SQL procedure successfully completed.
SQL> col long_name for a30
SQL> select group_num,short_name,long_name,parent_num,parent_name from dba_sa_groups order by group_num;

 GROUP_NUM SHORT_NAME                     LONG_NAME                      PARENT_NUM PARENT_NAME
---------- ------------------------------ ------------------------------ ---------- ------------------------------
      1000 USA                            AMERICA
      2000 AP                             ASIA PACIFIC
      2010 JK                             JAPAN AND KOREA                      2000 AP
      2020 GC                             GREATER CHINA                        2000 AP
      3000 EMEA                           EUROPE MIDDLE EAST AND AFRICA

You can also view the created hierarchy using:

SQL> col group_name for a40
SQL> select * from DBA_SA_GROUP_HIERARCHY;

POLICY_NAME                    HIERARCHY_LEVEL GROUP_NAME
------------------------------ --------------- ----------------------------------------
SALES_OLS_POL                                1   USA - AMERICA
SALES_OLS_POL                                1   AP - ASIA PACIFIC
SALES_OLS_POL                                2     JK - JAPAN AND KOREA
SALES_OLS_POL                                2     GC - GREATER CHINA
SALES_OLS_POL                                1   EMEA - EUROPE MIDDLE EAST AND AFRICA

Label function

This is the trickiest part. You need to write what Oracle calls a label function. This function computes a label for each row based on the values of the inserted columns. The returned label must use the short names of levels, compartments and groups. Below, the IoT product group, as upcoming business, is classified as confidential. The inelegant CASE statements could have been replaced by SELECT INTO. Refer to the Oracle documentation for another example:

create or replace function gen_sales_label(product_group__code varchar2, region__code varchar2)
return lbacsys.lbac_label
as
  i_label varchar2(80);
begin
  /************* determine level *************/
  if product_group__code='0001' then --IOT
    i_label := 'C:';
  else
    i_label := 'P:';
  end if;

  /************* determine compartment *************/
  case product_group__code
	  when '0001' then i_label := i_label || 'IOT:';
	  when '0002' then i_label := i_label || 'MEMS:';
	  when '0003' then i_label := i_label || 'SD:';
  end case;

  /************* determine groups *************/
	case region__code
	  when '0001' then i_label := i_label || 'USA';
	  when '0002' then i_label := i_label || 'AP';
	  when '0003' then i_label := i_label || 'JK';
	  when '0004' then i_label := i_label || 'GC';
	  when '0005' then i_label := i_label || 'EMEA';
  end case;

  return to_lbac_data_label('sales_ols_pol',i_label);
end;
/

As the APP user I apply the OLS policy to my SALES table using my label function:

SQL> exec sa_policy_admin.apply_table_policy(policy_name => 'sales_ols_pol', schema_name => 'app', table_name => 'sales', -
table_options => 'all_control', label_function => 'app.gen_sales_label(:new.product_group__code,:new.region__code)');

PL/SQL procedure successfully completed.

You can notice it adds a new column to your table (if not using the HIDE option):

SQL> desc sales
 Name                                                                                Null?    Type
 ----------------------------------------------------------------------------------- -------- --------------------------------------------------------
 PRODUCT_GROUP__CODE                                                                          VARCHAR2(4)
 REGION__CODE                                                                                 VARCHAR2(4)
 VAL                                                                                          NUMBER
 OLS_COL                                                                                      NUMBER(10)

To remove the policy you can use:

SQL> exec sa_policy_admin.remove_table_policy(policy_name => 'sales_ols_pol', schema_name => 'app', table_name => 'sales', drop_column=> true);

PL/SQL procedure successfully completed.

If you get the ORA-12446 error message:

exec sa_policy_admin.apply_table_policy(policy_name => 'sales_ols_pol', schema_name => 'app', table_name => 'sales', table_options  => 'all_control', -
> label_function => 'app.gen_sales_label(:new.product_group__code,:new.region__code)');
BEGIN sa_policy_admin.apply_table_policy(policy_name => 'sales_ols_pol', schema_name => 'app', table_name => 'sales', table_options  => 'all_control',
label_function => 'app.gen_sales_label(:new.product_group__code,:new.region__code)'); END;

*
ERROR at line 1:
ORA-12446: Insufficient authorization for administration of policy
sales_ols_pol
ORA-06512: at "LBACSYS.LBAC_POLICY_ADMIN", line 385
ORA-06512: at line 1

Then grant the sales_ols_pol_dba role to APP or execute apply_table_policy with the LBACSYS account.

If you get ORA-12433:

exec sa_policy_admin.apply_table_policy(policy_name => 'sales_ols_pol', schema_name => 'app', table_name => 'sales', table_options  => 'all_control', -
> label_function => 'app.gen_sales_label(:new.product_group__code,:new.region__code)');
BEGIN sa_policy_admin.apply_table_policy(policy_name => 'sales_ols_pol', schema_name => 'app', table_name => 'sales', table_options  => 'all_control',
label_function => 'app.gen_sales_label(:new.product_group__code,:new.region__code)'); END;

*
ERROR at line 1:
ORA-12433: create trigger failed, policy not applied
ORA-06512: at "LBACSYS.LBAC_POLICY_ADMIN", line 385
ORA-06512: at line 1

Then grant execute on app.gen_sales_label to LBACSYS. Clearly the error message is not at all self-explanatory!

You could have done all this using Cloud Control 13cR1:

ols04

Oracle Label Security testing

At that stage the APP and APP_READ users are no longer able to see any figures in the SALES table, because the OLS_COL column is empty and also because we have not yet defined any user security:

SQL> select * from sales;

no rows selected

Populate the OLS_COL column by simulating a full update of your table with something like:

SQL> update sales set product_group__code=product_group__code;

12 rows updated.

SQL> commit;

Commit complete.

SQL> select * from sales;

PROD REGI        VAL    OLS_COL
---- ---- ---------- ----------
0001 0005       1500 1000000061
0002 0005      10000 1000000062
0003 0005        500 1000000063
0001 0001       5000 1000000064
0002 0001       7500 1000000065
0003 0001        400 1000000066
0001 0003       4000 1000000067
0002 0003      10400 1000000068
0003 0003        400 1000000069
0001 0004       3000 1000000070
0002 0004       5000 1000000071
0003 0004        200 1000000072

12 rows selected.

As LBACSYS we allow APP to bypass OLS and allow APP_READ to take on the label and privileges of another user (by default the account still sees nothing):

SQL> exec sa_user_admin.set_user_privs('sales_ols_pol','app','full');

PL/SQL procedure successfully completed.

SQL> exec sa_user_admin.set_user_privs('sales_ols_pol','app_read','profile_access');

PL/SQL procedure successfully completed.

As in real life, applicative users will not each connect to the database with their own Oracle account. Either they are authenticated with an LDAP account or through application-level security. So what we define is a set of accounts not linked to any database user. Those accounts have obvious names to ease understanding of what I plan to test:

Account Privileges
sales_p_ww Worldwide access to public information
sales_c_ww Worldwide access to public and confidential information
sales_p_jk Japan & Korea access to public information
sales_p_ap Asia Pacific access to public information
sales_c_ap Asia Pacific access to public and confidential information

With Cloud Control 13cR1 the user list is:

ols05

To simulate those application accounts we will use the sa_session.set_access_profile procedure to make APP_READ behave like our applicative users, but I could also have used a context, as we have already seen, with something like SYS_CONTEXT('userenv','CLIENT_IDENTIFIER').
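As a sketch of that context-based alternative (the account name and the placement of the calls are hypothetical illustrations; only dbms_session.set_identifier and sa_session.set_access_profile are real procedures), the application could tag each pooled session with the end-user name and switch the OLS profile accordingly:

```sql
-- Hypothetical sketch: the application sets the client identifier
-- right after grabbing a connection from its pool...
begin
  dbms_session.set_identifier('sales_p_jk');
end;
/

-- ...and then switches the OLS session to the matching profile,
-- reading the name back from the userenv context:
begin
  sa_session.set_access_profile(
    policy_name => 'sales_ols_pol',
    user_name   => sys_context('userenv', 'CLIENT_IDENTIFIER'));
end;
/
```

The same two calls could also live in an AFTER LOGON trigger if the application cannot be changed.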

The first test is an account with public worldwide access; IoT information must not be displayed:

SQL> exec sa_user_admin.set_levels(policy_name => 'sales_ols_pol', user_name => 'sales_p_ww', max_level => 'P');

PL/SQL procedure successfully completed.

SQL> exec sa_user_admin.set_compartments(policy_name => 'sales_ols_pol', user_name => 'sales_p_ww', read_comps => 'MEMS,SD');

PL/SQL procedure successfully completed.

SQL> exec sa_user_admin.set_groups(policy_name => 'sales_ols_pol', user_name => 'sales_p_ww', read_groups => 'EMEA,AP,USA');

PL/SQL procedure successfully completed.

SQL> exec sa_session.set_access_profile(policy_name => 'sales_ols_pol', user_name => 'sales_p_ww');

PL/SQL procedure successfully completed.

SQL> select sa_session.sa_user_name(policy_name => 'sales_ols_pol') from dual;

SA_SESSION.SA_USER_NAME(POLICY_NAME=>'SALES_OLS_POL')
--------------------------------------------------------------------------------
SALES_P_WW

SQL> select sa_session.label(policy_name => 'sales_ols_pol') from dual;

SA_SESSION.LABEL(POLICY_NAME=>'SALES_OLS_POL')
--------------------------------------------------------------------------------
P:MEMS,SD:USA,AP,JK,GC,EMEA

SQL> select sa_session.comp_read(policy_name => 'sales_ols_pol') from dual;

SA_SESSION.COMP_READ(POLICY_NAME=>'SALES_OLS_POL')
--------------------------------------------------------------------------------
MEMS,SD

SQL> select app.product_group.descr, app.region.descr, sum(app.sales.val)
from app.product_group, app.region, app.sales
where app.sales.product_group__code=app.product_group.code
and app.sales.region__code=app.region.code
group by app.product_group.descr, app.region.descr
order by 1,2,3;

DESCR                DESCR                          SUM(APP.SALES.VAL)
-------------------- ------------------------------ ------------------
Mems                 America                                      7500
Mems                 Europe Middle East & Africa                 10000
Mems                 Greater China                                5000
Mems                 Japan & Korea                               10400
Smart Driving        America                                       400
Smart Driving        Europe Middle East & Africa                   500
Smart Driving        Greater China                                 200
Smart Driving        Japan & Korea                                 400

8 rows selected.

The second test is an account with worldwide access at all levels (the full SALES table, in other words):

SQL> exec sa_user_admin.set_levels(policy_name => 'sales_ols_pol', user_name => 'sales_c_ww', max_level => 'C');

PL/SQL procedure successfully completed.

SQL> exec sa_user_admin.set_compartments(policy_name => 'sales_ols_pol', user_name => 'sales_c_ww', read_comps => 'IOT,MEMS,SD');

PL/SQL procedure successfully completed.

SQL> exec sa_user_admin.set_groups(policy_name => 'sales_ols_pol', user_name => 'sales_c_ww', read_groups => 'EMEA,AP,USA');

PL/SQL procedure successfully completed.

SQL> exec sa_session.set_access_profile(policy_name => 'sales_ols_pol', user_name => 'sales_c_ww');

PL/SQL procedure successfully completed.

SQL> select app.product_group.descr, app.region.descr, sum(app.sales.val)
from app.product_group, app.region, app.sales
where app.sales.product_group__code=app.product_group.code
and app.sales.region__code=app.region.code
group by app.product_group.descr, app.region.descr
order by 1,2,3;

DESCR                DESCR                          SUM(APP.SALES.VAL)
-------------------- ------------------------------ ------------------
IoT                  America                                      5000
IoT                  Europe Middle East & Africa                  1500
IoT                  Greater China                                3000
IoT                  Japan & Korea                                4000
Mems                 America                                      7500
Mems                 Europe Middle East & Africa                 10000
Mems                 Greater China                                5000
Mems                 Japan & Korea                               10400
Smart Driving        America                                       400
Smart Driving        Europe Middle East & Africa                   500
Smart Driving        Greater China                                 200
Smart Driving        Japan & Korea                                 400

12 rows selected.

The third test is an account with Japan and Korea access and only public information. You can also define the user's label directly using sa_user_admin.set_user_labels:

SQL> exec sa_user_admin.set_user_labels(policy_name => 'sales_ols_pol', user_name => 'sales_p_jk', max_read_label => 'P:MEMS,SD:JK');

PL/SQL procedure successfully completed.

SQL> select app.product_group.descr, app.region.descr, sum(app.sales.val)
from app.product_group, app.region, app.sales
where app.sales.product_group__code=app.product_group.code
and app.sales.region__code=app.region.code
group by app.product_group.descr, app.region.descr
order by 1,2,3;

DESCR                DESCR                          SUM(APP.SALES.VAL)
-------------------- ------------------------------ ------------------
Mems                 Japan & Korea                               10400
Smart Driving        Japan & Korea                                 400

The fourth test is an account with Asia Pacific access and only public information. The aim here is to check that the two sub-groups are properly displayed:

SQL> exec sa_user_admin.set_user_labels(policy_name => 'sales_ols_pol', user_name => 'sales_p_ap', max_read_label => 'P:MEMS,SD:AP');

PL/SQL procedure successfully completed.

SQL> select app.product_group.descr, app.region.descr, sum(app.sales.val)
from app.product_group, app.region, app.sales
where app.sales.product_group__code=app.product_group.code
and app.sales.region__code=app.region.code
group by app.product_group.descr, app.region.descr
order by 1,2,3;

DESCR                DESCR                          SUM(APP.SALES.VAL)
-------------------- ------------------------------ ------------------
Mems                 Greater China                                5000
Mems                 Japan & Korea                               10400
Smart Driving        Greater China                                 200
Smart Driving        Japan & Korea                                 400

The fifth test is an account with Asia Pacific access and all levels. The aim is again to check that the two sub-groups are properly displayed:

SQL> exec sa_user_admin.set_user_labels(policy_name => 'sales_ols_pol', user_name => 'sales_c_ap', max_read_label => 'C:IOT,MEMS,SD:AP');

PL/SQL procedure successfully completed.

SQL> select /* Yannick */ app.product_group.descr, app.region.descr, sum(app.sales.val)
from app.product_group, app.region, app.sales
where app.sales.product_group__code=app.product_group.code
and app.sales.region__code=app.region.code
group by app.product_group.descr, app.region.descr
order by 1,2,3;

DESCR                DESCR                          SUM(APP.SALES.VAL)
-------------------- ------------------------------ ------------------
IoT                  Greater China                                3000
IoT                  Japan & Korea                                4000
Mems                 Greater China                                5000
Mems                 Japan & Korea                               10400
Smart Driving        Greater China                                 200
Smart Driving        Japan & Korea                                 400

6 rows selected.

I also wanted to see how Oracle handles those queries and whether any transformations are applied before execution. We can see that the SQL text remains unchanged and only a filter is applied: see Predicate Information for operation id 9 in the explain plan extract below:

SQL_ID  8qt810d4tkg1x, child number 0
-------------------------------------
select /* Yannick */ app.product_group.descr, app.region.descr, 
sum(app.sales.val) from app.product_group, app.region, app.sales where 
app.sales.product_group__code=app.product_group.code and 
app.sales.region__code=app.region.code group by 
app.product_group.descr, app.region.descr order by 1,2,3
 
Plan hash value: 4206122969
 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name          | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |               |      1 |        |       |    11 (100)|          |     12 |00:00:00.01 |      16 |      6 |       |       |          |
|   1 |  SORT ORDER BY                   |               |      1 |     11 |   583 |    11  (28)| 00:00:01 |     12 |00:00:00.01 |      16 |      6 |  2048 |  2048 |     1/0/0|
|   2 |   HASH GROUP BY                  |               |      1 |     11 |   583 |    11  (28)| 00:00:01 |     12 |00:00:00.01 |      16 |      6 |   930K|   930K|     1/0/0|
|*  3 |    FILTER                        |               |      1 |        |       |            |          |     12 |00:00:00.01 |      16 |      6 |       |       |          |
|*  4 |     HASH JOIN                    |               |      1 |     12 |   636 |     9  (12)| 00:00:01 |     12 |00:00:00.01 |      16 |      6 |  1393K|  1393K|     1/0/0|
|   5 |      MERGE JOIN                  |               |      1 |     12 |   396 |     6  (17)| 00:00:01 |     12 |00:00:00.01 |       9 |      4 |       |       |          |
|   6 |       TABLE ACCESS BY INDEX ROWID| PRODUCT_GROUP |      1 |      3 |    39 |     2   (0)| 00:00:01 |      3 |00:00:00.01 |       2 |      1 |       |       |          |
|   7 |        INDEX FULL SCAN           | SYS_C0010354  |      1 |      3 |       |     1   (0)| 00:00:01 |      3 |00:00:00.01 |       1 |      1 |       |       |          |
|*  8 |       SORT JOIN                  |               |      3 |     12 |   240 |     4  (25)| 00:00:01 |     12 |00:00:00.01 |       7 |      3 |  2048 |  2048 |     1/0/0|
|*  9 |        TABLE ACCESS FULL         | SALES         |      1 |     12 |   240 |     3   (0)| 00:00:01 |     12 |00:00:00.01 |       7 |      3 |       |       |          |
|  10 |      TABLE ACCESS FULL           | REGION        |      1 |      5 |   100 |     3   (0)| 00:00:01 |      5 |00:00:00.01 |       7 |      2 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
   1 - SEL$F5BB74E1
   6 - SEL$F5BB74E1 / PRODUCT_GROUP@SEL$1
   7 - SEL$F5BB74E1 / PRODUCT_GROUP@SEL$1
   9 - SEL$F5BB74E1 / SALES@SEL$2
  10 - SEL$F5BB74E1 / REGION@SEL$1
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - filter(TO_NUMBER(SYS_CONTEXT('LBAC$0_LAB','LBAC$MAXLABEL'))>=TO_NUMBER(SYS_CONTEXT('LBAC$0_LAB','LBAC$MINLABEL')))
   4 - access("REGION__CODE"="REGION"."CODE")
   8 - access("PRODUCT_GROUP__CODE"="PRODUCT_GROUP"."CODE")
       filter("PRODUCT_GROUP__CODE"="PRODUCT_GROUP"."CODE")
   9 - filter(("OLS_COL">=TO_NUMBER(SYS_CONTEXT('LBAC$0_LAB','LBAC$MINLABEL')) AND "OLS_COL"<=TO_NUMBER(SYS_CONTEXT('LBAC$0_LAB','LBAC$MAXLABEL')) AND 
              TO_NUMBER(SYS_CONTEXT('LBAC$LABELS',TO_CHAR("OLS_COL")))>=0))
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
   1 - (#keys=3) "PRODUCT_GROUP"."DESCR"[VARCHAR2,20], "REGION"."DESCR"[VARCHAR2,30], SUM("VAL")[22]
   2 - "PRODUCT_GROUP"."DESCR"[VARCHAR2,20], "REGION"."DESCR"[VARCHAR2,30], SUM("VAL")[22]
   3 - "PRODUCT_GROUP"."DESCR"[VARCHAR2,20], "REGION"."DESCR"[VARCHAR2,30], "VAL"[NUMBER,22], "REGION"."DESCR"[VARCHAR2,30]
   4 - (#keys=1) "PRODUCT_GROUP"."DESCR"[VARCHAR2,20], "REGION"."DESCR"[VARCHAR2,30], "VAL"[NUMBER,22], "REGION"."DESCR"[VARCHAR2,30]
   5 - "PRODUCT_GROUP"."DESCR"[VARCHAR2,20], "REGION__CODE"[VARCHAR2,4], "VAL"[NUMBER,22]
   6 - "PRODUCT_GROUP"."CODE"[VARCHAR2,4], "PRODUCT_GROUP"."DESCR"[VARCHAR2,20]
   7 - "PRODUCT_GROUP".ROWID[ROWID,10], "PRODUCT_GROUP"."CODE"[VARCHAR2,4]
   8 - (#keys=1) "PRODUCT_GROUP__CODE"[VARCHAR2,4], "REGION__CODE"[VARCHAR2,4], "VAL"[NUMBER,22]
   9 - "PRODUCT_GROUP__CODE"[VARCHAR2,4], "REGION__CODE"[VARCHAR2,4], "VAL"[NUMBER,22]
  10 - "REGION"."CODE"[VARCHAR2,4], "REGION"."DESCR"[VARCHAR2,30]
 
Note
-----
   - this is an adaptive plan

Oracle Label Security auditing

Oracle Label Security policies can be audited using the SA_AUDIT_ADMIN package. First ensure the audit_trail initialization parameter is not set to NONE:

SQL> show parameter audit_trail

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB
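
Had it been set to NONE, the parameter could be changed as sketched below. Note that audit_trail is a static parameter, so the change must go to the spfile and an instance restart is required before it takes effect:

```sql
SQL> alter system set audit_trail=db scope=spfile;

System altered.
```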

I activate all possible auditing for my policy, by access:

SQL> exec sa_audit_admin.audit(policy_name => 'sales_ols_pol', audit_type => 'BY ACCESS');

PL/SQL procedure successfully completed.

SQL> exec sa_audit_admin.audit(policy_name => 'sales_ols_pol', audit_option => 'PRIVILEGES', audit_type => 'BY ACCESS');

PL/SQL procedure successfully completed.

SQL> col user_name for a30
SQL> select * from dba_sa_audit_options;

POLICY_NAME                    USER_NAME                      APY REM SET PRV
------------------------------ ------------------------------ --- --- --- ---
SALES_OLS_POL                  ALL_USERS                      A/A A/A A/A A/A

I also activate policy label recording with:

SQL> exec sa_audit_admin.audit_label(policy_name => 'sales_ols_pol');

PL/SQL procedure successfully completed.

SQL> set serveroutput on
begin
  if sa_audit_admin.audit_label_enabled('sales_ols_pol')
    then dbms_output.put_line('OLS sales_ols_pol labels are being audited.');
  else
    dbms_output.put_line('OLS sales_ols_pol labels not being audited.');
  end if;
end;
/
OLS sales_ols_pol labels are being audited.

PL/SQL procedure successfully completed.

I also create the dedicated view to display audit records:

SQL> exec sa_audit_admin.create_view(policy_name => 'sales_ols_pol');

PL/SQL procedure successfully completed.

SQL> desc dba_sales_ols_pol_audit_trail
 Name                                                                                Null?    Type
 ----------------------------------------------------------------------------------- -------- --------------------------------------------------------
 USERNAME                                                                                     VARCHAR2(128)
 USERHOST                                                                                     VARCHAR2(128)
 TERMINAL                                                                                     VARCHAR2(255)
 TIMESTAMP                                                                                    DATE
 OWNER                                                                                        VARCHAR2(128)
 OBJ_NAME                                                                                     VARCHAR2(128)
 ACTION                                                                              NOT NULL NUMBER
 ACTION_NAME                                                                                  VARCHAR2(47)
 COMMENT_TEXT                                                                                 VARCHAR2(4000)
 SESSIONID                                                                           NOT NULL NUMBER
 ENTRYID                                                                             NOT NULL NUMBER
 STATEMENTID                                                                         NOT NULL NUMBER
 RETURNCODE                                                                          NOT NULL NUMBER
 EXTENDED_TIMESTAMP                                                                           TIMESTAMP(6) WITH TIME ZONE
 OLS_COL                                                                                      VARCHAR2(4000)

If I select from the SALES table with the APP account I get:

SQL> col comment_text for a40
SQL> col username for a10
SQL> select username,timestamp,action_name,comment_text from lbacsys.dba_sales_ols_pol_audit_trail;

USERNAME   TIMESTAMP ACTION_NAME                                     COMMENT_TEXT
---------- --------- ----------------------------------------------- ----------------------------------------
APP        04-NOV-16 PRIVILEGED ACTION                               SALES_OLS_POL: BYPASSALL PRIVILEGE SET

But if I select with APP_READ through the SA_SESSION.SET_ACCESS_PROFILE procedure, it does not generate any audit record, so I am a bit disappointed...
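
For completeness, the audit options activated above can be switched off with the matching NOAUDIT procedures. This is a sketch I did not replay here; the parameters are assumed to mirror the AUDIT and AUDIT_LABEL calls, so check the SA_AUDIT_ADMIN documentation before using it:

```sql
SQL> exec sa_audit_admin.noaudit(policy_name => 'sales_ols_pol');
SQL> exec sa_audit_admin.noaudit(policy_name => 'sales_ols_pol', audit_option => 'PRIVILEGES');
SQL> exec sa_audit_admin.noaudit_label(policy_name => 'sales_ols_pol');
```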

Auditing can also be managed with Cloud Control:

ols06

Oracle Label Security cleaning

Cleaning everything simply means dropping the policy with the LBACSYS account:

SQL> exec sa_sysdba.drop_policy(policy_name => 'sales_ols_pol', drop_column => true);

PL/SQL procedure successfully completed.

References

]]>
https://blog.yannickjaquier.com/oracle/oracle-label-security-ols-12c-setup.html/feed 2
Database Vault 12cR1 installation and configuration https://blog.yannickjaquier.com/oracle/database-vault-12cr1-installation.html https://blog.yannickjaquier.com/oracle/database-vault-12cr1-installation.html#respond Mon, 23 Oct 2017 06:45:10 +0000 http://blog.yannickjaquier.com/?p=3884

Table of contents

Preamble

I have already tested Oracle Database Vault in 11gR2, but as we are starting the study for a new project containing HR figures I wanted to re-test Database Vault 12cR1 (12.1.0.2) to see how things have improved. Worth re-mentioning that Database Vault is a paid option of Oracle Database Enterprise Edition.

I have two virtual machines to do the testing: one is running my 12cR1 (12.1.0.2) Enterprise Edition database under Oracle Linux Server release 7.2. The second virtual machine is a VirtualBox image that I have directly downloaded at:
http://www.oracle.com/technetwork/oem/enterprise-manager/downloads/oem-templates-2767917.html

What this VirtualBox image provides is a complete running Cloud Control 13cR1 (13.1.0.0.0) environment. Even if Cloud Control 13cR2 exists at the time of writing this blog post, it is not yet available for download as a running image.

I will also briefly test Privilege Analysis, a new feature introduced with Database Vault 12cR1. Privilege Analysis does not require you to enable Database Vault, but its licensing is part of the Database Vault option.

Database Vault 12cR1 installation

Unlike in 11gR2, Database Vault is already installed but not activated, as Oracle states:

Starting with Oracle Database 12c, Oracle Database Vault is installed by default but not enabled. Customers can enable it using DBCA or from the command line using SQL*Plus in a matter of minutes.

This can be confirmed with:

SQL> set lines 150
SQL> select parameter, value from v$option where parameter = 'Oracle Database Vault';

PARAMETER                                                        VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
Oracle Database Vault                                            FALSE

Following the official documentation, to activate it I executed (the 12cR1 default account names have moved from dbvxxx to dbv_xxx):

SQL> exec DVSYS.CONFIGURE_DV (dvowner_uname => 'dbv_owner', dvacctmgr_uname => 'dbv_acctmgr');
BEGIN DVSYS.CONFIGURE_DV (dvowner_uname => 'dbv_owner', dvacctmgr_uname => 'dbv_acctmgr'); END;

*
ERROR at line 1:
ORA-01918: user 'dbv_owner' does not exist
ORA-06512: at "DVSYS.DBMS_MACUTL", line 34
ORA-06512: at "DVSYS.DBMS_MACUTL", line 389
ORA-06512: at "DVSYS.CONFIGURE_DV", line 126
ORA-06512: at line 1

Fortunately MOS note How To Enable Database Vault in a 12c database ? (Doc ID 2112167.1) comes to the rescue: the accounts must be created first:

SQL> create user dbv_owner identified by "secure_password";

User created.

SQL> create user dbv_acctmgr identified by "secure_password";

User created.

Vault activation is then straightforward:

SQL> exec DVSYS.CONFIGURE_DV (dvowner_uname => 'dbv_owner', dvacctmgr_uname => 'dbv_acctmgr');

PL/SQL procedure successfully completed.

SQL> connect dbv_owner/"secure_password"
Connected.
SQL> exec dbms_macadm.enable_dv;

PL/SQL procedure successfully completed.

Restart the database and you can see it’s there:

SQL> set lines 150
SQL> select parameter, value from v$option where parameter = 'Oracle Database Vault';

PARAMETER                                                        VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
Oracle Database Vault                                            TRUE

To be able to use Cloud Control 13cR1 I have granted dv_admin role to my personal DBA account:

SQL> connect dbv_owner/"secure_password"
Connected.
SQL> grant dv_admin to yjaquier;

Grant succeeded.

Database Vault 12cR1 testing

The aim of my testing is to protect objects owned by an OS authenticated account (APP) from high privilege users, while keeping access for the accounts to which grants have been given (APP_READ). The data model itself is simple. I start with account creation using the dbv_acctmgr account (except the RESOURCE role, which must be granted with the SYS account):

SQL> connect dbv_acctmgr/"secure_password"
Connected.
SQL> create user app identified externally default tablespace users;
 
USER created.
 
SQL> alter user app quota unlimited on users;
 
USER altered.
 
SQL> grant connect to app;
 
GRANT succeeded.
 
SQL> create user app_read identified by secure_password;
 
USER created.
 
SQL> grant connect to app_read;
 
GRANT succeeded.

SQL> connect / as sysdba
Connected.
SQL> grant resource to app;
 
GRANT succeeded.

Remark
If you have activated Database Vault before creating the data model you start to see the added complexity of the product. The new account to manage accounts is dbv_acctmgr, except for a few privileges related to the default realms created when implementing Database Vault:

  • The RESOURCE role is protected in the Oracle System Privilege and Role Management Realm and can be granted only by SYS
  • The CONNECT role is protected in the Database Vault Account Management realm and can be granted only by DV_ACCTMGR

To see the Oracle defined realms, either use Cloud Control under the Security/Database Vault menu and check the “Show Oracle defined Realms” checkbox in the Administration tab, Realms sub-menu:

database_vault_12cr1_01

Or use the query below (there is no dedicated column flagging Oracle defined realms, but they are the ones with a NULL REALM_TYPE):

SQL> set pages 50
SQL> col name for a50
SQL> col description for a80 word_wrapped
SQL> select name, description from DVSYS.DV$REALM where realm_type is null;

NAME                                               DESCRIPTION
-------------------------------------------------- --------------------------------------------------------------------------------
Oracle Database Vault                              Defines the realm for the Oracle Database Vault schemas - DVSYS, DVF and LBACSYS
                                                   where Database Vault access control configuration and roles are contained.

Database Vault Account Management                  Defines the realm for administrators who create and manage database accounts and
                                                   profiles.

Oracle Enterprise Manager                          Defines the Enterprise Manager monitoring and management realm.
Oracle Default Schema Protection Realm             Defines the realm for the Oracle Default schemas.
Oracle System Privilege and Role Management Realm  Defines the realm to control granting of system privileges and database
                                                   administrator roles.

Oracle Default Component Protection Realm          Defines the realm to protect default components of the Oracle database.

6 rows selected.

I connect with APP account and create the employees table and grant select and update on it to APP_READ:

SQL> set lines 150
SQL> CREATE TABLE employees (
     id NUMBER,
     firstname VARCHAR2(50),
     lastname VARCHAR2(50),
     salary NUMBER);
 
TABLE created.
 
SQL> GRANT select, update ON employees TO app_read;
 
GRANT succeeded.
 
SQL> INSERT INTO employees VALUES(1,'Yannick','Jaquier',10000);
 
1 ROW created.
 
SQL> COMMIT;
 
COMMIT complete.
 
SQL> SELECT * FROM employees;
 
        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                 10000

To create the realm you can use Cloud Control: click on Create in the Administration tab, Realms sub-menu, under Security/Database Vault:

database_vault_12cr1_02

Enter a name and a description, its initial status and what you want to audit:

database_vault_12cr1_03

Choose the objects you’d like to protect and leave the rest at the default values:

database_vault_12cr1_04

In PL/SQL this gives the two commands below. The realm_type parameter is set to 0 to avoid creating a mandatory realm, which means that the objects' owner (APP) still has full access to its objects. I also activate the realm right after its creation (enabled parameter):

SQL> exec dbms_macadm.create_realm(realm_name => 'APP schema', description => 'Protect APP Schema ', enabled => 'Y', audit_options => 1, realm_type =>'0' );

PL/SQL procedure successfully completed.

SQL> exec dbms_macadm.add_object_to_realm(realm_name => 'APP schema', object_owner => 'APP', object_name => '%', object_type => '%' );

PL/SQL procedure successfully completed.

Even if I do not add any users to the realm, the accounts that have select privileges on the APP schema can still perform selects, while DBA-like accounts (including SYS and SYSTEM) cannot see the figures anymore:

SQL> set lines 150
SQL> show user
USER is "YJAQUIER"
SQL> select * from app.employees;
select * from app.employees
                  *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> connect app_read/"secure_password"
SQL> show user
USER is "APP_READ"
SQL> select * from app.employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                 10000

If you make the realm mandatory:

SQL> exec dbms_macadm.update_realm(realm_name => 'APP schema', description => 'Protect APP Schema ', enabled => 'Y', realm_type =>'1');

PL/SQL procedure successfully completed.

Then the APP_READ account can no longer select from the employees table:

SQL> show user
USER is "APP_READ"
SQL> select * from app.employees;
select * from app.employees
                  *
ERROR at line 1:
ORA-01031: insufficient privileges

So the simplest first setup is a non-mandatory realm, but even with this simple configuration you will see that the APP user can no longer create objects in its own schema:

SQL> show user
USER is "APP"
SQL> create table test01 (val number);
create table test01 (val number)
*
ERROR at line 1:
ORA-47401: Realm violation for CREATE TABLE on APP.TEST01

If you add the APP account as a participant to its own realm (this can also be done with Cloud Control):

SQL> exec dbms_macadm.add_auth_to_realm(realm_name => 'APP schema', grantee => 'APP', rule_set_name => '', auth_options => DBMS_MACUTL.G_REALM_AUTH_PARTICIPANT);

PL/SQL procedure successfully completed.

Then you can create objects, but you cannot grant privileges on them to other accounts:

SQL> create table test01(val number);

Table created.

SQL> insert into test01 values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> grant select on test01 to app_read;
grant select on test01 to app_read
                *
ERROR at line 1:
ORA-47401: Realm violation for GRANT on APP.TEST01

If you make APP an owner of its own realm (you need to remove the existing authorization before re-adding it with another role; this can be done with Cloud Control as well):

SQL> exec dbms_macadm.add_auth_to_realm(realm_name => 'APP schema', grantee => 'APP', rule_set_name => '', auth_options => DBMS_MACUTL.G_REALM_AUTH_OWNER);
BEGIN dbms_macadm.add_auth_to_realm(realm_name => 'APP schema', grantee => 'APP', rule_set_name => '', auth_options => DBMS_MACUTL.G_REALM_AUTH_OWNER); END;

*
ERROR at line 1:
ORA-47260: Realm Authorization to APP schema for Realm APP already defined
ORA-06512: at "DVSYS.DBMS_MACADM", line 1903
ORA-06512: at "DVSYS.DBMS_MACADM", line 1968
ORA-06512: at line 1


SQL> exec dbms_macadm.delete_auth_from_realm(realm_name => 'APP schema', grantee => 'APP');

PL/SQL procedure successfully completed.

SQL> exec dbms_macadm.add_auth_to_realm(realm_name => 'APP schema', grantee => 'APP', rule_set_name => '', auth_options => DBMS_MACUTL.G_REALM_AUTH_OWNER);

PL/SQL procedure successfully completed.

Then APP can grant privileges on its own objects to others (and the grantee can see the figures):

SQL> grant select on test01 to app_read;

Grant succeeded.

SQL> connect app_read/"secure_password"
SQL> select * from app.test01;

       VAL
----------
         1

To remove everything simply execute:

SQL> exec dbms_macadm.delete_realm(realm_name => 'APP schema');

PL/SQL procedure successfully completed.

Database Vault 12cR1 reporting

The two views (well, three) to check what has been changed in Vault, or who has tried to violate one of the Vault enforcements, are DVSYS.DV$CONFIGURATION_AUDIT and DVSYS.DV$ENFORCEMENT_AUDIT, both based on DVSYS.AUDIT_TRAIL$:

SQL> col username for a10
SQL> col action_name for a30
SQL> col action_object_name for a20
SQL> set pages 100
SQL> alter session set nls_date_format="dd-mon-yyyy hh24:mi:ss";

Session altered.
SQL> select username,timestamp,action_name,action_object_name from dvsys.dv$configuration_audit order by id# desc;

USERNAME   TIMESTAMP            ACTION_NAME                    ACTION_OBJECT_NAME
---------- -------------------- ------------------------------ --------------------
YJAQUIER   17-oct-2016 17:36:53 Add Realm Auth Audit           APP schema
YJAQUIER   17-oct-2016 17:36:50 Delete Realm Auth Audit        APP schema
YJAQUIER   17-oct-2016 17:33:47 Add Realm Auth Audit           APP schema
YJAQUIER   17-oct-2016 17:30:31 Add Realm Object Audit         APP schema
YJAQUIER   17-oct-2016 17:30:24 Realm Creation Audit           APP schema
YJAQUIER   17-oct-2016 17:17:31 Realm Deletion Audit           APP schema
YJAQUIER   14-oct-2016 17:55:59 Add Realm Auth Audit           APP schema
YJAQUIER   14-oct-2016 17:55:59 Delete Realm Auth Audit        APP schema
YJAQUIER   14-oct-2016 17:48:56 Add Realm Auth Audit           APP schema
YJAQUIER   14-oct-2016 17:48:56 Delete Realm Auth Audit        APP schema
YJAQUIER   14-oct-2016 16:51:04 Add Realm Auth Audit           APP schema
YJAQUIER   14-oct-2016 15:52:19 Add Realm Object Audit         APP schema
YJAQUIER   14-oct-2016 15:52:14 Realm Creation Audit           APP schema
YJAQUIER   14-oct-2016 15:48:54 Realm Deletion Audit           APP schema
YJAQUIER   14-oct-2016 13:07:04 Add Realm Object Audit         APP schema
YJAQUIER   14-oct-2016 13:06:59 Realm Creation Audit           APP schema
YJAQUIER   14-oct-2016 13:06:34 Realm Deletion Audit           APP schema
YJAQUIER   14-oct-2016 12:52:40 Realm Update Audit             APP schema
YJAQUIER   14-oct-2016 12:44:01 Delete Realm Auth Audit        APP schema
YJAQUIER   14-oct-2016 12:30:20 Realm Update Audit             APP schema
YJAQUIER   14-oct-2016 12:30:10 Realm Update Audit             APP schema
YJAQUIER   14-oct-2016 12:23:58 Add Realm Auth Audit           APP schema
YJAQUIER   14-oct-2016 12:14:33 Realm Update Audit             APP schema
YJAQUIER   14-oct-2016 10:50:13 Realm Update Audit             APP schema
YJAQUIER   14-oct-2016 10:15:03 Add Realm Object Audit         APP schema
YJAQUIER   14-oct-2016 10:14:55 Realm Creation Audit           APP schema
DBV_OWNER  13-oct-2016 16:40:49 Enable DV enforcement Audit

27 rows selected.

SQL> col action_command for a60
SQL> select username,timestamp,action_name,action_object_name,action_command from dvsys.dv$enforcement_audit order by id# desc;

USERNAME   TIMESTAMP            ACTION_NAME                    ACTION_OBJECT_NAME   ACTION_COMMAND
---------- -------------------- ------------------------------ -------------------- ------------------------------------------------------------
APP        17-oct-2016 17:34:31 Realm Violation Audit          APP schema           GRANT SELECT ON TEST01 TO APP_READ
APP        17-oct-2016 17:33:10 Realm Violation Audit          APP schema           CREATE TABLE TEST01 (VAL NUMBER)
APP        17-oct-2016 17:32:19 Realm Violation Audit          APP schema           DROP TABLE TEST1
YJAQUIER   17-oct-2016 17:31:45 Realm Violation Audit          APP schema           SELECT * FROM APP.EMPLOYEES
DBV_OWNER  17-oct-2016 15:44:36 Realm Violation Audit          Database Vault Accou GRANT CONNECT TO APP_READ
                                                               nt Management

DBV_OWNER  17-oct-2016 15:42:06 Realm Violation Audit          Oracle System Privil GRANT RESOURCE TO TEST
                                                               ege and Role Managem
                                                               ent Realm

DBV_ACCTMG 17-oct-2016 15:41:52 Realm Violation Audit          Oracle System Privil GRANT RESOURCE TO TEST
R                                                              ege and Role Managem
                                                               ent Realm

DBV_OWNER  17-oct-2016 15:41:45 Realm Violation Audit          Database Vault Accou GRANT CONNECT TO TEST
                                                               nt Management

DBV_OWNER  17-oct-2016 15:41:39 Realm Violation Audit          Oracle System Privil GRANT RESOURCE TO TEST
                                                               ege and Role Managem
                                                               ent Realm

DBV_OWNER  17-oct-2016 15:40:33 Realm Violation Audit          Database Vault Accou GRANT CONNECT,RESOURCE TO TEST
                                                               nt Management

SYS        17-oct-2016 15:40:19 Realm Violation Audit          Database Vault Accou GRANT CONNECT,RESOURCE TO TEST
                                                               nt Management

DBV_ACCTMG 17-oct-2016 15:40:02 Realm Violation Audit          Oracle System Privil GRANT CONNECT,RESOURCE TO TEST
R                                                              ege and Role Managem
                                                               ent Realm
.
.
.

But it cannot compete with the really nice Cloud Control Database Vault reporting. You can also click on each slice of the pie to get further details:

database_vault_12cr1_05

Database Vault 12cR1 errors

When playing with my own account being granted to and removed from my test realm, and while de-activating and re-creating that realm, I reached a strange situation: I was able to select from the APP.EMPLOYEES table with my own account while technically I should not have been able to. A simple shared pool flush solved the situation (and ended my half-day investigation):

SQL> show user
USER is "YJAQUIER"
SQL> select * from app.employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                 10000

SQL> alter system flush shared_pool;

System altered.

SQL> select * from app.employees;
select * from app.employees
                  *
ERROR at line 1:
ORA-01031: insufficient privileges

Oracle Database Vault 12cR1 Privilege Analysis

Database Vault 12cR1 introduces a new feature called Privilege Analysis. As the name suggests, it helps you analyze the used and unused privileges inside your database. A typical example is an applicative account that you can study to see which privileges (system or object) it is using and which ones it is NOT using. It may really help you revoke overly high privileges in a safer manner.

In Cloud Control, in the Security menu, choose Privilege Analysis:

database_vault_12cr1_06

To create a Privilege Analysis policy you either use Cloud Control or PL/SQL. I will create a policy to check which privileges the APP_READ account uses and dig into the ones it is not using. Remember we have granted update on APP.EMPLOYEES, a privilege I will not use in my testing:

SQL> exec DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(name => 'APP_READ policy', description => 'Check APP_READ privileges', type => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT, -
> condition => 'SYS_CONTEXT (''USERENV'',''CURRENT_SCHEMA'') = ''APP_READ''' );

PL/SQL procedure successfully completed.

SQL> col context for a60
SQL> select type,enabled,context from DBA_PRIV_CAPTURES where name='APP_READ policy';

TYPE             E CONTEXT
---------------- - ------------------------------------------------------------
CONTEXT          N SYS_CONTEXT ('USERENV','CURRENT_SCHEMA') = 'APP_READ'

Enable the policy with:

SQL> exec dbms_privilege_capture.enable_capture(name => 'APP_READ policy');

PL/SQL procedure successfully completed.

Connect and perform a few selects with the APP_READ account, but do not update the APP.EMPLOYEES table, to show that APP_READ does not need this privilege! Once done, disable the policy:

SQL> exec dbms_privilege_capture.disable_capture(name => 'APP_READ policy');

PL/SQL procedure successfully completed.

With Cloud Control it gives something like:

database_vault_12cr1_07

Generate the result report with:

SQL> exec dbms_privilege_capture.generate_result(name => 'APP_READ policy');

PL/SQL procedure successfully completed.

And fetch the figures with the queries below (refer to the official Database Vault documentation for the long list of available views); this is extracted and modified from the official documentation. List of used privileges:

SQL> col username format a10
SQL> col sys_priv format a16
SQL> col object_owner format a13
SQL> col object_name format a23
SQL> select username, nvl(sys_priv,obj_priv) as priv, object_owner, object_name from dba_used_privs where username='APP_READ';

USERNAME   PRIV                                     OBJECT_OWNER  OBJECT_NAME
---------- ---------------------------------------- ------------- -----------------------
APP_READ   SELECT                                   APP           EMPLOYEES
APP_READ   SELECT                                   APP           TEST01
APP_READ   SELECT                                   SYS           DUAL
APP_READ   CREATE SESSION
APP_READ   EXECUTE                                  SYS           DBMS_APPLICATION_INFO
APP_READ   SELECT                                   SYS           DUAL
APP_READ   SELECT                                   SYSTEM        PRODUCT_PRIVS

7 rows selected.

List of unused privileges; we clearly see that update on APP.EMPLOYEES has not been used and so could be revoked:

SQL> select username, nvl(sys_priv,obj_priv) as priv, object_owner, object_name from dba_unused_privs where username='APP_READ';

USERNAME   PRIV                                     OBJECT_OWNER  OBJECT_NAME
---------- ---------------------------------------- ------------- -----------------------
APP_READ   UPDATE                                   APP           EMPLOYEES
APP_READ   SET CONTAINER
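
Acting on this result, the unused object grant could then be revoked by its grantor, the APP account. A minimal sketch:

```sql
SQL> show user
USER is "APP"
SQL> revoke update on employees from app_read;

Revoke succeeded.
```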

The reports can be generated more easily from Cloud Control:

database_vault_12cr1_08

Clean up Privilege Analysis with:

SQL> exec dbms_privilege_capture.drop_capture(name => 'APP_READ policy');

PL/SQL procedure successfully completed.

References

  • Master Note For Oracle Database Vault (Doc ID 1195205.1)
  • How To Enable Database Vault in a 12c database ? (Doc ID 2112167.1)
]]>
https://blog.yannickjaquier.com/oracle/database-vault-12cr1-installation.html/feed 0
Data Redaction (DBMS_REDACT) with 12cR1 (12.1.0.2) https://blog.yannickjaquier.com/oracle/data-redaction-dbms_redact-12cr1.html https://blog.yannickjaquier.com/oracle/data-redaction-dbms_redact-12cr1.html#respond Tue, 26 Sep 2017 15:02:37 +0000 http://blog.yannickjaquier.com/?p=3857

Table of contents

Preamble

Advanced Security enterprise edition paid option is made of two products:

Data Redaction conditionally hides sensitive data on the fly, before it leaves the database. The picture available on the Oracle product page (copyright Oracle) says it all:

data_redaction01

The condition to hide figures is really open, as you write it in PL/SQL. The example I will take is an application with its own security model (LDAP or whatever) connecting to the database with a single applicative account. This is a real-life situation with many Java or web applications.

As I have already tested Virtual Private Database (VPD), the term used for the combination of fine grained access control (FGAC) with application contexts, I asked myself what the difference with Data Redaction is. Fortunately it is well explained in the official documentation, in the Oracle Data Redaction and Oracle Virtual Private Database chapter.

Whatever Oracle says, I have the feeling that Data Redaction, new in 12cR1 and backported to 11gR2 (11.2.0.4 only), is the new tool of choice to hide sensitive information. Unfortunately VPD and FGAC are free while Data Redaction is not…

Testing of this blog post has been done using Oracle database enterprise edition 12.1.0.2.0 – 64bit running on Oracle Linux Server release 7.2 in a virtual machine.

Data redaction implementation

I create my application schema owner, identified externally. I also grant execute on the Data Redaction package, DBMS_REDACT, and the capability to create a context (even in 12cR1 there is still no plain CREATE CONTEXT privilege, only CREATE ANY CONTEXT):

SQL> create user app identified externally
     default tablespace users;

User created.

SQL> alter user app quota unlimited on users;

User altered.

SQL> grant connect,resource to app;

Grant succeeded.

SQL> grant execute on dbms_redact to app;

Grant succeeded.

SQL> grant create any context to app;

Grant succeeded.

I create and provide grants to the password authenticated user that will be used in my application:

SQL> create user app_read identified by secure_password;

User created.

SQL> grant connect to app_read;

Grant succeeded.

As the app user I create my applicative table (a really basic one). I also grant select and update on it to the account that will be used in my application:

SQL> create table employees (
     id number,
     firstname varchar2(50),
     lastname varchar2(50),
     salary number);

Table created.

SQL> grant select,update on employees to app_read;

Grant succeeded.

SQL> insert into employees values(1,'Yannick','Jaquier',10000);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                 10000

As you might guess, the column we want to redact (hide) from part of the applicative users is SALARY!

To create the Data Redaction policy I will use an application context, which we have already seen when testing FGAC, so I will go a little bit faster on this part. With the app account I create a context and a package to change its value based on the client_identifier attribute of the userenv context; the necessary grants are also provided. The rule is that any supervisor (a supervisorxx value) can see the salary while the other accounts cannot:

SQL> create or replace context my_context1 using my_context1_pkg;

Context created.

SQL> CREATE OR REPLACE PACKAGE my_context1_pkg IS
     PROCEDURE set_my_context1;
     END;
     /

Package created.

SQL> create or replace package body my_context1_pkg as
  procedure set_my_context1 is
  begin
    if lower(sys_context('userenv', 'client_identifier')) like 'supervisor%'
    then
      dbms_session.set_context('my_context1','salary_yes_no','DISPLAY_SALARY');
    else
      dbms_session.set_context('my_context1','salary_yes_no','DO_NOT_DISPLAY_SALARY');
    end if;
  end set_my_context1;
end;
/

Package body created.

SQL> grant execute on my_context1_pkg TO app_read;

Grant succeeded.
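The decision logic of set_my_context1 can be pictured outside the database with a small simulation (a plain Python sketch of the rule, not Oracle code; the function name is mine):

```python
def salary_context(client_identifier):
    """Mimic my_context1_pkg.set_my_context1: only supervisor* identifiers may see salary."""
    if client_identifier is not None and client_identifier.lower().startswith("supervisor"):
        return "DISPLAY_SALARY"
    # any other identifier, including an unset (NULL) one, falls in the else branch
    return "DO_NOT_DISPLAY_SALARY"
```

Note that, like the PL/SQL version, an unset identifier ends up in the restrictive branch, which is the safe default.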

When adding a policy with DBMS_REDACT.ADD_POLICY, one of the most important parameters is expression: the real-time masking is performed only when the expression evaluates to TRUE. In this example I want to hide salary when the salary_yes_no attribute of my my_context1 context is set to DO_NOT_DISPLAY_SALARY or is not set at all (NULL value).

exec dbms_redact.add_policy(object_schema => 'app', object_name => 'employees', column_name => 'salary', -
policy_name => 'display_salary', function_type => dbms_redact.full, -
expression => 'sys_context(''my_context1'',''salary_yes_no'')=''DO_NOT_DISPLAY_SALARY'' or sys_context(''my_context1'',''salary_yes_no'') is null', -
policy_description => 'Hide salary column', -
column_description => 'Column with sensitive salary information');
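The expression above, in particular its handling of the unset (NULL) case, boils down to a simple boolean check, sketched here in Python (illustrative only, my own function name):

```python
def policy_expression(salary_yes_no):
    """TRUE (mask the column) when the context attribute forbids display or is unset."""
    return salary_yes_no == "DO_NOT_DISPLAY_SALARY" or salary_yes_no is None
```

The explicit "is null" branch matters: without it a session that never called the context package would see real salaries.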

The redaction policy is enabled by default (the enable parameter defaults to TRUE). The function_type parameter sets the redaction masking function. DBMS_REDACT.FULL simply displays the salary column as 0, but many other options are available, such as partial redaction that masks only part of a credit card number or a social security number. Please refer to the official documentation for a complete description.
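To picture the difference between full and partial masking, here is a plain Python sketch (not the DBMS_REDACT implementation; the "keep the last four digits" format is just one illustrative choice):

```python
def full_redaction(numeric_value):
    # FULL-style redaction: a number is always displayed as 0,
    # whatever is actually stored
    return 0

def partial_redaction(card_number, keep_last=4):
    # PARTIAL-style redaction: mask everything but the trailing digits
    return "*" * (len(card_number) - keep_last) + card_number[-keep_last:]
```

In both cases only the displayed value changes; the stored value is untouched.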

In case you want to perform multiple tests you can drop the policy with:

SQL> exec dbms_redact.drop_policy(object_schema => 'app', object_name => 'employees', policy_name => 'display_salary');

PL/SQL procedure successfully completed.

A few dictionary views let you see what has been implemented:

SQL> set lines 200
SQL> col object_owner for a10
SQL> col object_name for a10
SQL> col policy_description for a20
SQL> col policy_name for a15
SQL> select object_owner,object_name,policy_name, enable,policy_description from redaction_policies;

OBJECT_OWN OBJECT_NAM POLICY_NAME     ENABLE  POLICY_DESCRIPTION
---------- ---------- --------------- ------- --------------------
APP        EMPLOYEES  display_salary  YES     Hide salary column

SQL> col column_name for a10
SQL> select object_owner,object_name,column_name,function_type from redaction_columns;

OBJECT_OWN OBJECT_NAM COLUMN_NAM FUNCTION_TYPE
---------- ---------- ---------- ---------------------------
APP        EMPLOYEES  SALARY     FULL REDACTION

Data redaction testing

For testing I will connect with the app_read account and set the CLIENT_IDENTIFIER attribute of the USERENV context with the DBMS_SESSION.SET_IDENTIFIER procedure. The CLIENT_IDENTIFIER value simulates the applicative account that was used to authenticate inside your Java/web application (LDAP or whatever).

If you do not set the CLIENT_IDENTIFIER value, the salary is not displayed:

SQL> set lines 150
SQL> SELECT SYS_CONTEXT('my_context1','salary_yes_no') FROM dual;

SYS_CONTEXT('MY_CONTEXT1','SALARY_YES_NO')
------------------------------------------------------------------------------------------------------------------------------------------------------


SQL> select * from app.employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                     0

If you set the CLIENT_IDENTIFIER value to an applicative account that is not allowed to see salaries:

SQL> EXEC DBMS_SESSION.SET_IDENTIFIER('operator01');

PL/SQL procedure successfully completed.

SQL> exec app.my_context1_pkg.set_my_context1;

PL/SQL procedure successfully completed.

SQL> SELECT SYS_CONTEXT('my_context1','salary_yes_no') FROM dual;

SYS_CONTEXT('MY_CONTEXT1','SALARY_YES_NO')
------------------------------------------------------------------------------------------------------------------------------------------------------
DO_NOT_DISPLAY_SALARY

SQL> select * from app.employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                     0

If you set the CLIENT_IDENTIFIER value to an applicative account that has the privilege to see salaries:

SQL> EXEC DBMS_SESSION.SET_IDENTIFIER('supervisor01');

PL/SQL procedure successfully completed.

SQL> exec app.my_context1_pkg.set_my_context1;

PL/SQL procedure successfully completed.

SQL> SELECT SYS_CONTEXT('my_context1','salary_yes_no') FROM dual;

SYS_CONTEXT('MY_CONTEXT1','SALARY_YES_NO')
------------------------------------------------------------------------------------------------------------------------------------------------------
DISPLAY_SALARY

SQL> select * from app.employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                 10000

One funny thing you might notice is that even the APP user, owner of the object, is not able to see the value of the salary column. This can be solved by granting the system privilege below:

SQL> grant exempt redaction policy to app;

Grant succeeded.

Had you chosen DBMS_REDACT.RANDOM as the masking function, the salary would be different each time you perform a select on the employees table.

Even if you are not able to see the salary column, you can still update it:

SQL> SELECT SYS_CONTEXT('userenv','client_identifier') from dual;

SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')
------------------------------------------------------------------------------------------------------------------------------------------------------
operator01

SQL> SELECT SYS_CONTEXT('my_context1','salary_yes_no') FROM dual;

SYS_CONTEXT('MY_CONTEXT1','SALARY_YES_NO')
------------------------------------------------------------------------------------------------------------------------------------------------------
DO_NOT_DISPLAY_SALARY

SQL> update app.employees set salary=20000 where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from app.employees;

        ID FIRSTNAME                                          LASTNAME                                               SALARY
---------- -------------------------------------------------- -------------------------------------------------- ----------
         1 Yannick                                            Jaquier                                                     0

If you check as the schema owner app, you will see that the salary has indeed been set to 20,000…
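This behaviour can be summed up in a tiny model: redaction sits on the read path only, so DML passes through untouched (a hypothetical Python sketch with my own names, not Oracle internals):

```python
class RedactedTable:
    """Toy model of a table with a FULL redaction policy on salary."""

    def __init__(self):
        self.rows = {1: 10000}  # id -> salary, as actually stored

    def select_salary(self, emp_id, expression_true):
        # masking happens only at display time, when the policy expression is TRUE
        return 0 if expression_true else self.rows[emp_id]

    def update_salary(self, emp_id, salary):
        # the redaction policy does not block writes
        self.rows[emp_id] = salary

t = RedactedTable()
t.update_salary(1, 20000)        # allowed even though the session cannot see the value
print(t.select_salary(1, True))  # redacted reader sees 0
print(t.select_salary(1, False)) # exempt reader sees 20000
```

This is why redaction is a display-time control, not a substitute for proper update privileges or auditing.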

References
