How to identify table fragmentation and remove it ?


Preamble

After a first blog post on when to rebuild or shrink indexes I have naturally decided to write a post on table fragmentation and how to remove it. Maybe I should have started with this one, but when we implemented Oracle Disk Manager (ODM) I got feedback from the application teams that they had, multiple times, experienced a good performance improvement when rebuilding indexes.

It is equally important to defragment tables, as it reduces the number of physical reads needed to bring table data blocks into memory. It also decreases the number of blocks to handle in each query (logical reads), as you reduce what is called the High Water Mark (HWM). You increase row density per block, since more rows are packed into each block.
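To see the logical reads effect for yourself, a quick hedged check (the table name below is a placeholder) is to compare the "consistent gets" of a full scan before and after defragmentation in SQL*Plus:

set autotrace traceonly statistics
select count(*) from app_owner.my_table;
-- the "consistent gets" statistic reflects the logical reads: after
-- defragmentation the same full scan should report fewer of them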

I have tried to draw a few pictures (tolerance requested, I drew them myself) to visually explain the concept. You will see similar ones in many other blog posts, but these two pictures perfectly sum up what we want to achieve.

As a reminder, a tablespace is made of multiple datafiles; the logical storage of an object is called a segment, which is made of multiple extents, and each extent is made of multiple blocks. In the worst case you have deleted multiple rows from your table and the remaining rows are sparsely spread across a lot of blocks, each with a low percentage of completion:

[Figure: table fragmentation 01 – rows sparsely spread across many partially filled blocks]

The ideal target is to move to a situation where all rows have been condensed (defragmented) in a minimum number of blocks, each being almost full:

[Figure: table fragmentation 02 – rows condensed into a minimum number of almost full blocks]

Of course, if the deleted rows will soon be reinserted then there is no need to do anything, as Oracle will insert new rows into not-yet-full blocks before starting to allocate new ones.

This blog post has been written using Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production running on Red Hat Enterprise Linux Server release 6.5 (Santiago).

Legacy situation

The old school approach is to work with DBA_TABLES and estimate how much space the table is taking versus how much space it could optimally take. The current size is the number of used blocks multiplied by the block size, to get a size in bytes. The theoretical smaller size is the number of rows multiplied by the average row length. The gain you might get is a simple computation from these two values. Of course you must take into account the value of PCTFREE, the percentage of space reserved in blocks for future updates (mainly to avoid what is called row chaining, when a row is spread over more than one block).

Of course, if you want to estimate the size of a table that does not yet exist, this method does not apply !

The query could look like:

select
  a.blocks*b.block_size AS current_size,
  a.num_rows*a.avg_row_len AS theorical_size,
  (a.blocks*b.block_size)-(a.num_rows*a.avg_row_len) AS gain,
  (((a.blocks*b.block_size)-(a.num_rows*a.avg_row_len))*100/(a.blocks*b.block_size)) - a.pct_free AS percentage_gain
from dba_tables a, dba_tablespaces b
where a.tablespace_name=b.tablespace_name
and owner = upper('<owner>')
and table_name = upper('<table_name>');

If you want a better display it can even be put in a PL/SQL block like (inspect_table.sql):

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  vcurrent_size number;
  vtheorical_size number;
  vgain number;
  vpercentage_gain number;
  function format_size(value1 in number)
  return varchar2 as
  begin
    case
      when (value1>1024*1024*1024) then return ltrim(to_char(value1/(1024*1024*1024),'999,999.999') || 'GB');
      when (value1>1024*1024) then return ltrim(to_char(value1/(1024*1024),'999,999.999') || 'MB');
      when (value1>1024) then return ltrim(to_char(value1/(1024),'999,999.999') || 'KB');
      else return ltrim(to_char(value1,'999,999.999') || 'B');
    end case;
  end format_size;
begin
  select
    a.blocks*b.block_size,
    a.num_rows*a.avg_row_len,
    (a.blocks*b.block_size)-(a.num_rows*a.avg_row_len),
    (((a.blocks*b.block_size)-(a.num_rows*a.avg_row_len))*100/(a.blocks*b.block_size)) - a.pct_free
  into vcurrent_size, vtheorical_size, vgain, vpercentage_gain
  from dba_tables a, dba_tablespaces b
  where a.tablespace_name=b.tablespace_name
  and owner = upper('&1.')
  and table_name = upper('&2.');

  dbms_output.put_line('For table ' || upper('&1.') || '.' || upper('&2.'));
  dbms_output.put_line('Current table size: ' || format_size(vcurrent_size));
  dbms_output.put_line('Theoretical table size: ' || format_size(vtheorical_size));
  dbms_output.put_line('Potential saving: ' || format_size(vgain));
  dbms_output.put_line('Potential saving percentage: ' || round(vpercentage_gain, 2) || '%');
end;
/
set feedback on

For one of my test tables it gives:

SQL> @inspect_table <owner> <table_name>
For table owner.table_name
Current table size: 57.031MB
Theoretical table size: 179.000B
Potential saving: 57.031MB
Potential saving percentage: 90%

So my table is currently using around 57MB and I could ideally make it fit in 179 bytes, so one single block in the end (which is why the computation is not accurate here). But we do not take into account the extent management of tablespaces, so obviously the gain will not be that big !

Newest methods to estimate tables size

Unlike for indexes, here you cannot use EXPLAIN PLAN on a CREATE TABLE statement, mainly because Oracle cannot guess how many rows you will insert. Since Oracle 10gR1, same as for indexes, you can use the DBMS_SPACE.CREATE_TABLE_COST procedures (two versions exist). When calling them you specify the target number of rows for a future table, or the real number of rows for an existing one.

As just written, there are two versions of DBMS_SPACE.CREATE_TABLE_COST: one where you specify the average row length (so for an existing table) and one where you give the data type and length of every column (so it applies to soon-to-be-created tables). I have tried both on an existing table and they provided the same result. The second form is a bit more complex to handle, as you must build a variable of a special type (CREATE_TABLE_COST_COLUMNS) which describes all columns. Since this second form is precisely meant for a table that does not yet exist, below is first a minimal hedged sketch of such a call, where the tablespace name, the column list and the row count are all made-up values:
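set serveroutput on
declare
  columns1 sys.create_table_cost_columns;
  used_bytes number;
  alloc_bytes number;
begin
  -- hypothetical column list of the future table
  columns1:=sys.create_table_cost_columns(
    sys.create_table_cost_colinfo('NUMBER', 10),
    sys.create_table_cost_colinfo('VARCHAR2', 50),
    sys.create_table_cost_colinfo('DATE', 7));
  -- 1 million expected rows, PCTFREE 10, assumed target tablespace USERS
  dbms_space.create_table_cost('USERS', columns1, 1000000, 10, used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || used_bytes || 'B');
  dbms_output.put_line('Allocated: ' || alloc_bytes || 'B');
end;
/

And here is the small PL/SQL block I have written to run both versions against an existing table (create_table_cost.sql):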

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  vtablespace_name dba_tables.tablespace_name%type;
  vavg_row_len dba_tables.avg_row_len%type;
  vnum_rows dba_tables.num_rows%type;
  vpct_free dba_tables.pct_free%type;
  used_bytes number;
  alloc_bytes number;
  cursor cursor1 is
  select data_type, data_length
  from dba_tab_columns
  where owner = upper('&1.')
  and table_name = upper('&2.')
  order by column_id;
  columns1 sys.create_table_cost_columns:=sys.create_table_cost_columns();
  i number:=0;
  type collection1 is table of cursor1%rowtype index by pls_integer;
  item1 collection1;
  function format_size(value1 in number)
  return varchar2 as
  begin
    case
      when (value1>1024*1024*1024) then return ltrim(to_char(value1/(1024*1024*1024),'999,999.999') || 'GB');
      when (value1>1024*1024) then return ltrim(to_char(value1/(1024*1024),'999,999.999') || 'MB');
      when (value1>1024) then return ltrim(to_char(value1/(1024),'999,999.999') || 'KB');
      else return ltrim(to_char(value1,'999,999.999') || 'B');
    end case;
  end format_size;
begin
  select tablespace_name, avg_row_len, num_rows, pct_free 
  into vtablespace_name, vavg_row_len, vnum_rows, vpct_free 
  from dba_tables
  where owner = upper('&1.')
  and table_name = upper('&2.');
  
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('------------ DBMS_SPACE.CREATE_TABLE_COST version 1 ------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.create_table_cost(vtablespace_name, vavg_row_len, vnum_rows, vpct_free, used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || format_size(used_bytes));
  dbms_output.put_line('Allocated: ' || format_size(alloc_bytes));

  open cursor1;
  fetch cursor1 bulk collect into item1;
  for i in item1.first..item1.last loop
    columns1.extend;
    columns1(i):=sys.create_table_cost_colinfo(item1(i).data_type, item1(i).data_length);
  end loop;
  close cursor1;
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('------------ DBMS_SPACE.CREATE_TABLE_COST version 2 ------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.create_table_cost(vtablespace_name, columns1, vnum_rows, vpct_free, used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || format_size(used_bytes));
  dbms_output.put_line('Allocated: ' || format_size(alloc_bytes));
end;
/
set feedback on

With my test table it gives:

SQL> @create_table_cost <owner> <table_name>
----------------------------------------------------------------
------------ DBMS_SPACE.CREATE_TABLE_COST version 1 ------------
----------------------------------------------------------------
Used: 8.000KB
Allocated: 64.000KB
----------------------------------------------------------------
------------ DBMS_SPACE.CREATE_TABLE_COST version 2 ------------
----------------------------------------------------------------
Used: 8.000KB
Allocated: 64.000KB

So the procedure handles the minimum first extent size of 64KB implied by my 8KB block size with EXTENT MANAGEMENT LOCAL AUTOALLOCATE and SEGMENT SPACE MANAGEMENT AUTO. Despite what Oracle is claiming, I see no difference between the two versions of the procedure. If in doubt, those tablespace attributes can be verified with a query like this one (the tablespace name is a placeholder):
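select tablespace_name, block_size, extent_management, allocation_type, segment_space_management
from dba_tablespaces
where tablespace_name = 'USERS';

If we now check the space currently used by the table (DBA_EXTENTS):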

SQL> select bytes, blocks,count(*)
  2  from dba_extents
  3  where owner = upper('<owner>')
  4  and segment_name = upper('<table_name>')
  5  group by bytes, blocks
  6  order by blocks;

     BYTES     BLOCKS   COUNT(*)
---------- ---------- ----------
     65536          8         16
   1048576        128         57

2 rows selected.

Table fragmentation identification

Once we have the estimated size (whatever the method) of the table, we can compare it with its actual size and see how much we might gain. To compute the current size of an existing table we have multiple methods:

  • DBMS_SPACE.SPACE_USAGE procedure
  • DBA_SEGMENTS view
  • DBA_TABLES view

From my testing, DBMS_SPACE.SPACE_USAGE gives exactly the same result as the query we have seen on DBA_TABLES, with a bit more insight into block completion. So the small PL/SQL block I have written does not use DBA_TABLES (table_saving.sql):

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  unformatted_blocks number;
  unformatted_bytes number;
  fs1_blocks number;
  fs1_bytes number;
  fs2_blocks number;
  fs2_bytes number;
  fs3_blocks number;
  fs3_bytes number;
  fs4_blocks number;
  fs4_bytes number;
  full_blocks number;
  full_bytes number;
  dbms_space_bytes number;
  bytes_dba_segments number;
  vtablespace_name dba_tables.tablespace_name%type;
  vavg_row_len dba_tables.avg_row_len%type;
  vnum_rows dba_tables.num_rows%type;
  vpct_free dba_tables.pct_free%type;
  used_bytes number;
  alloc_bytes number;
  function format_size(value1 in number)
  return varchar2 as
  begin
    case
      when (value1>1024*1024*1024) then return ltrim(to_char(value1/(1024*1024*1024),'999,999.999') || 'GB');
      when (value1>1024*1024) then return ltrim(to_char(value1/(1024*1024),'999,999.999') || 'MB');
      when (value1>1024) then return ltrim(to_char(value1/(1024),'999,999.999') || 'KB');
      else return ltrim(to_char(value1,'999,999.999') || 'B');
    end case;
  end format_size;
begin
  select tablespace_name, avg_row_len, num_rows, pct_free 
  into vtablespace_name, vavg_row_len, vnum_rows, vpct_free 
  from dba_tables
  where owner = upper('&1.')
  and table_name = upper('&2.');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('Analyzing table &1..&2.');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('-------------------- DBMS_SPACE.SPACE_USAGE --------------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.space_usage(upper('&1.'), upper('&2.'), 'TABLE', unformatted_blocks, unformatted_bytes, fs1_blocks, fs1_bytes, fs2_blocks,
  fs2_bytes, fs3_blocks, fs3_bytes, fs4_blocks, fs4_bytes, full_blocks, full_bytes);
  dbms_output.put_line('Total number of blocks unformatted :' || unformatted_blocks);
  --dbms_output.put_line('Total number of bytes unformatted: ' || unformatted_bytes);
  dbms_output.put_line('Number of blocks having at least 0 to 25% free space: ' || fs1_blocks);
  --dbms_output.put_line('Number of bytes having at least 0 to 25% free space: ' || fs1_bytes);
  dbms_output.put_line('Number of blocks having at least 25 to 50% free space: ' || fs2_blocks);
  --dbms_output.put_line('Number of bytes having at least 25 to 50% free space: ' || fs2_bytes);
  dbms_output.put_line('Number of blocks having at least 50 to 75% free space: ' || fs3_blocks);
  --dbms_output.put_line('Number of bytes having at least 50 to 75% free space: ' || fs3_bytes);
  dbms_output.put_line('Number of blocks having at least 75 to 100% free space: ' || fs4_blocks);
  --dbms_output.put_line('Number of bytes having at least 75 to 100% free space: ' || fs4_bytes);
  dbms_output.put_line('The number of blocks full in the segment: ' || full_blocks);
  --dbms_output.put_line('Total number of bytes full in the segment: ' || format_size(full_bytes));
  dbms_space_bytes:=unformatted_bytes+fs1_bytes+fs2_bytes+fs3_bytes+fs4_bytes+full_bytes;
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('------------------------- DBA_SEGMENTS -------------------------');
  dbms_output.put_line('----------------------------------------------------------------');
  select bytes into bytes_dba_segments from dba_segments where owner=upper('&1.') and segment_name=upper('&2.');
  dbms_output.put_line('Size of the segment: ' || format_size(bytes_dba_segments));
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('----------------- DBMS_SPACE.CREATE_TABLE_COST -----------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.create_table_cost(vtablespace_name, vavg_row_len, vnum_rows, vpct_free, used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || format_size(used_bytes));
  dbms_output.put_line('Allocated: ' || format_size(alloc_bytes));
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('---------------------------- Results ---------------------------'); 
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('Potential percentage gain (DBMS_SPACE): ' || round(100 * (dbms_space_bytes - alloc_bytes) / dbms_space_bytes) || '%');
  dbms_output.put_line('Potential percentage gain (DBA_SEGMENTS): ' || round(100 * (bytes_dba_segments - alloc_bytes) / bytes_dba_segments) || '%');
end;
/
set feedback on

On my test table it gives:

SQL> @table_saving <owner> <table_name>
----------------------------------------------------------------
Analyzing table <owner>.<table_name>
----------------------------------------------------------------
-------------------- DBMS_SPACE.SPACE_USAGE --------------------
----------------------------------------------------------------
Total number of blocks unformatted :0
Number of blocks having at least 0 to 25% free space: 0
Number of blocks having at least 25 to 50% free space: 0
Number of blocks having at least 50 to 75% free space: 0
Number of blocks having at least 75 to 100% free space: 7300
The number of blocks full in the segment: 0
----------------------------------------------------------------
------------------------- DBA_SEGMENTS -------------------------
----------------------------------------------------------------
Size of the segment: 58.000MB
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_TABLE_COST -----------------
----------------------------------------------------------------
Used: 8.000KB
Allocated: 64.000KB
----------------------------------------------------------------
---------------------------- Results ---------------------------
----------------------------------------------------------------
Potential percentage gain (DBMS_SPACE): 100%
Potential percentage gain (DBA_SEGMENTS): 100%

With DBMS_SPACE.SPACE_USAGE alone you already know that the potential storage saving is huge, because my table is made of 7300 blocks that are all less than 25% full…

You can even create a function based on the above PL/SQL block; I have chosen to use DBMS_SPACE.SPACE_USAGE (table_saving_function.sql):

create or replace function table_saving_function(vtable_owner in varchar2, vtable_name in varchar2)
return number
authid current_user
as
  vtablespace_name dba_tables.tablespace_name%type;
  vavg_row_len dba_tables.avg_row_len%type;
  vnum_rows dba_tables.num_rows%type;
  vpct_free dba_tables.pct_free%type;
  unformatted_blocks number;
  unformatted_bytes number;
  fs1_blocks number;
  fs1_bytes number;
  fs2_blocks number;
  fs2_bytes number;
  fs3_blocks number;
  fs3_bytes number;
  fs4_blocks number;
  fs4_bytes number;
  full_blocks number;
  full_bytes number;
  dbms_space_bytes number;
  used_bytes number;
  alloc_bytes number;
begin
  select tablespace_name, avg_row_len, num_rows, pct_free 
  into vtablespace_name, vavg_row_len, vnum_rows, vpct_free 
  from dba_tables
  where owner = upper(vtable_owner)
  and table_name = upper(vtable_name);
  dbms_space.space_usage(upper(vtable_owner), upper(vtable_name), 'TABLE', unformatted_blocks, unformatted_bytes, fs1_blocks, fs1_bytes, fs2_blocks,
  fs2_bytes, fs3_blocks, fs3_bytes, fs4_blocks, fs4_bytes, full_blocks, full_bytes);
  dbms_space_bytes:=unformatted_bytes+fs1_bytes+fs2_bytes+fs3_bytes+fs4_bytes+full_bytes;
  if (vavg_row_len > 0 and vnum_rows > 0) then
    dbms_space.create_table_cost(vtablespace_name, vavg_row_len, vnum_rows, vpct_free, used_bytes, alloc_bytes);
    if (dbms_space_bytes <> 0) then
      return (100 * (dbms_space_bytes - alloc_bytes) / dbms_space_bytes);
    else
      return 0;
    end if;
  else
    return 0;
  end if;
end;
/

Then, with a query like the one below, you can find the best candidates to work on (this is by the way how I found the example of this blog post):

select a.owner,a.table_name,table_saving_function(a.owner,a.table_name) as percentage_gain
from dba_tables a
where a.owner='<owner>'
and a.status='VALID' --In valid state
and a.iot_type is null -- IOT tables not supported by dbms_space
--and external='no' starting from 12cr2
and not exists (select 'x' from dba_external_tables b where b.owner=a.owner and b.table_name=a.table_name)
and temporary='N' --Temporary segment not supported
and a.last_analyzed is not null --Recently analyzed
order by 3 desc;

Move, shrink or export/import ?

We have three options in our hands to defragment tables:

  1. ALTER TABLE MOVE (to another tablespace, or the same one) and a rebuild of the indexes (see the sketch just after this list). You obviously need extra space in the tablespace to use it. With the ONLINE keyword in Enterprise Edition there is no lock and DML remains possible.
  2. Export and import the table. Needless to say the downtime is big and is difficult to get on a production database. Not the option I would choose…
  3. The SHRINK command, available starting with Oracle 10gR1. Usable on segments in tablespaces with automatic segment space management, and when row movement has been activated.
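As a hedged illustration of option 1 (all object names below are placeholders), a plain move leaves the dependent indexes UNUSABLE because all rowids change, so they must be rebuilt afterwards:

alter table app_owner.my_table move;
-- rowids have changed: dependent indexes are now UNUSABLE
alter index app_owner.my_table_i1 rebuild;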

So the method to target is ALTER TABLE <owner>.<table_name> SHRINK SPACE [COMPACT] [CASCADE]. SHRINK SPACE COMPACT is equivalent to specifying ALTER [INDEX | TABLE] … COALESCE.

Same as for indexes, the COMPACT option is of limited interest:

If you specify COMPACT, then Oracle Database only defragments the segment space and compacts the table rows for subsequent release. The database does not readjust the high water mark and does not release the space immediately. You must issue another ALTER TABLE … SHRINK SPACE statement later to complete the operation. This clause is useful if you want to accomplish the shrink operation in two shorter steps rather than one longer step.
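A hedged sketch of that two-step approach (placeholder object names):

alter table app_owner.my_table enable row movement;
alter table app_owner.my_table shrink space compact; -- defragments the rows, HWM untouched
-- later, during a quieter period:
alter table app_owner.my_table shrink space;         -- adjusts the HWM and releases the space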

Let’s try with my test table:

SQL> alter table <owner>.<table_name> shrink space;

Error starting at line : 1 in command -
alter table <owner>.<table_name> shrink space
Error report -
ORA-10636: ROW MOVEMENT is not enabled

SQL> alter table <owner>.<table_name> enable row movement;

Table <owner>.<table_name> altered.

SQL> alter table <owner>.<table_name> shrink space;

Table <owner>.<table_name> altered.

SQL> exec dbms_stats.gather_table_stats('<owner>','<table_name>');

PL/SQL procedure successfully completed.

SQL> @table_saving <owner> <table_name>
----------------------------------------------------------------
Analyzing table <owner>.<table_name>
----------------------------------------------------------------
-------------------- DBMS_SPACE.SPACE_USAGE --------------------
----------------------------------------------------------------
Total number of blocks unformatted :0
Number of blocks having at least 0 to 25% free space: 0
Number of blocks having at least 25 to 50% free space: 0
Number of blocks having at least 50 to 75% free space: 0
Number of blocks having at least 75 to 100% free space: 1
The number of blocks full in the segment: 0
----------------------------------------------------------------
------------------------- DBA_SEGMENTS -------------------------
----------------------------------------------------------------
Size of the segment: 64.000KB
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_TABLE_COST -----------------
----------------------------------------------------------------
Used: 8.000KB
Allocated: 64.000KB
----------------------------------------------------------------
---------------------------- Results ---------------------------
----------------------------------------------------------------
Potential percentage gain (DBMS_SPACE): -700%
Potential percentage gain (DBA_SEGMENTS): 0%

SQL> @create_table_cost <owner> <table_name>
----------------------------------------------------------------
------------ DBMS_SPACE.CREATE_TABLE_COST version 1 ------------
----------------------------------------------------------------
Used: 8.000KB
Allocated: 64.000KB
----------------------------------------------------------------
------------ DBMS_SPACE.CREATE_TABLE_COST version 2 ------------
----------------------------------------------------------------
Used: 16.000KB
Allocated: 64.000KB

SQL> @inspect_table <owner> <table_name>
For table <owner>.<table_name>
Current table size: 8.000KB
Theoretical table size: 1.090KB
Potential saving: 6.910KB
Potential saving percentage: 76.38%

SQL> select bytes, blocks,count(*)
  2  from dba_extents
  3  where owner = upper('<owner>')
  4  and segment_name = upper('<table_name>')
  5  group by bytes, blocks
  6  order by blocks;

     BYTES     BLOCKS   COUNT(*)
---------- ---------- ----------
     65536          8          1

1 row selected.

As expected, the table now fits in one block, though one extent of 64KB has still been allocated to store it. The HWM has been reduced, so there is no real need to find anything better. Maybe my PL/SQL should be modified to avoid reporting a negative percentage gain and just report 0 when there is no gain…
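A possible tweak (same variables as in the script above) is to floor the reported gain at zero with GREATEST:

dbms_output.put_line('Potential percentage gain (DBMS_SPACE): ' ||
  greatest(0, round(100 * (dbms_space_bytes - alloc_bytes) / dbms_space_bytes)) || '%');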


How to non intrusively find index rebuild or shrink candidates ?


Preamble

How do you find index rebuild candidates ? I get this question around four times a year ! The always-working answer is: when you have deleted many rows from the source table. Obviously, if you delete many rows from the source table, the underlying index will have its leaf blocks getting empty and so will benefit from a rebuild. Well, to be honest, it benefits from a rebuild only if you do not insert those rows back into the source table, or if you insert new rows with a different key (with respect to the index). Okay, but how do we know how many empty leaf blocks have been created and how much space we would gain by rebuilding the index ?

The legacy method is based on SQL command:

analyze index ... validate structure;

This command has the bad habit of setting an exclusive lock on the base table and so forbids any DML. As this method is quite intrusive it has rarely been used on production databases… Despite this, plenty of references still suggest this method that, in my opinion, you must avoid !

Looking a bit into the subject, the newest and non intrusive methods are now based on the Oracle estimation of the index size versus the size it currently has. Some more advanced methods also display the index key distribution, which can give you an insight into the quality of the index and whether you should consider rebuilding it or not.

Legacy situation

Start by analyzing the index with VALIDATE STRUCTURE; again, this is an intrusive command that forbids any DML on the source table:

SQL> analyze index <owner>.<index_name> validate structure;

Index <owner>.<index_name> analyzed.

Then you have access to a table called INDEX_STATS. The interesting columns are HEIGHT for the index height (number of blocks required to go from the root block to a leaf block), LF_ROWS for the number of leaf rows (values in the index) and DEL_LF_ROWS for the number of deleted leaf rows in the index. The formula seen everywhere is to rebuild an index when its height is greater than 3 or the percentage of deleted leaf rows is greater than 20%. So here is the query:

SQL> set lines 200
SQL> col name for a30
SQL> select name, height, round(del_lf_rows*100/lf_rows,4) as percentage from index_stats;

NAME                               HEIGHT PERCENTAGE
------------------------------ ---------- ----------
<index_name>                            4      .0006

But again this is clearly a method to avoid nowadays…

Newest methods to estimate indexes size

Current methods all derive from a feature of the well-known EXPLAIN PLAN command applied to DDL. Explaining the DDL of a CREATE INDEX command will feed back the estimated size of the index. Let's apply it to my existing index, but you can also use it for an index you have not yet created. Get the DDL of your index using the DBMS_METADATA.GET_DDL function:

SQL> set long 1000
SQL> select dbms_metadata.get_ddl('INDEX', '<index_name>', '<owner>') as ddl from dual;
DDL
--------------------------------------------------------------------------------

  CREATE INDEX "<owner>"."<index_name>" ON "<owner>"."<table_name>"
  ("SO_SUB_ITEM__ID", "SO_PENDING_CAUSE__CODE")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOLOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "<tablespace_name>"

Then explain create index statement and display related explain plan:

SQL> set lines 150
SQL> explain plan for
  2  CREATE INDEX <owner>.<index_name>
  3  ON <owner>.<table_name> (<column1>, <column2>)
  4  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOLOGGING
  5  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  6  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  7  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  8  TABLESPACE <tablespace_name>;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1096024652

---------------------------------------------------------------------------------------------------
| Id  | Operation              | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT |              |    74M|  1419M|   156K  (1)| 00:00:07 |
|   1 |  INDEX BUILD NON UNIQUE| <index_name> |       |       |            |          |
|   2 |   SORT CREATE INDEX    |              |    74M|  1419M|            |          |
|   3 |    TABLE ACCESS FULL   | <table_name> |    74M|  1419M| 85097   (1)| 00:00:04 |
---------------------------------------------------------------------------------------------------

Note
-----
   - estimated index size: 2617M bytes

14 rows selected.

And what do we see at the end of the explain plan: estimated index size: 2617M bytes. Oracle is telling us the size the index would take on disk !

Since Oracle 10gR1 this has been wrapped in a procedure of DBMS_SPACE, so you get it all in one with the DBMS_SPACE.CREATE_INDEX_COST procedure.
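In its simplest hedged form the call looks like this (the DDL string and object names below are placeholders, and the underlying table must have up-to-date statistics):

set serveroutput on
declare
  used_bytes number;
  alloc_bytes number;
begin
  dbms_space.create_index_cost(
    'create index app_owner.my_table_i1 on app_owner.my_table(col1)',
    used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || used_bytes || 'B, Allocated: ' || alloc_bytes || 'B');
end;
/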

I have created the below script taking owner and index name as parameters (create_index_cost.sql):

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  used_bytes number;
  alloc_bytes number;
  function format_size(value1 in number)
  return varchar2 as
  begin
    case
      when (value1>1024*1024*1024) then return ltrim(to_char(value1/(1024*1024*1024),'999,999.999') || 'GB');
      when (value1>1024*1024) then return ltrim(to_char(value1/(1024*1024),'999,999.999') || 'MB');
      when (value1>1024) then return ltrim(to_char(value1/(1024),'999,999.999') || 'KB');
      else return ltrim(to_char(value1,'999,999.999') || 'B');
    end case;
  end format_size;
begin
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.create_index_cost(dbms_metadata.get_ddl('INDEX', upper('&2.'), upper('&1.')), used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || format_size(used_bytes));
  dbms_output.put_line('Allocated: ' || format_size(alloc_bytes));
end;
/
set feedback on

It gives:

SQL> @create_index_cost <owner> <index_name>
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------
----------------------------------------------------------------
Used: 1.386GB
Allocated: 2.438GB

From official documentation:

  • used_bytes: The number of bytes representing the actual index data
  • alloc_bytes: Size of the index when created in the tablespace

So, as we can see, not exactly a byte-to-byte equivalence: 2617MB for the EXPLAIN PLAN command and 2.438GB (2496MB) for the DBMS_SPACE.CREATE_INDEX_COST procedure. But the procedure is far simpler to use !

In My Oracle Support (MOS) note Script to investigate a b-tree index structure (Doc ID 989186.1) Oracle claims to use the undocumented SYS_OP_LBID function instead of the ANALYZE INDEX … VALIDATE STRUCTURE command. But looking deeper into their script, the SYS_OP_LBID usage is for something completely different, and in fact they do not use it to list indexes that might benefit from a rebuild. We will see the SYS_OP_LBID function in a later chapter of this blog post.

Taking only the size-estimate part of MOS note 989186.1 and modifying it to take only two parameters, index owner and index name, it could become something like (inspect_index.sql):

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  vtargetuse   CONSTANT POSITIVE := 90;  -- equates to pctfree 10  
  vleafestimate number;  
  vblocksize    number;
  voverhead     number := 192; -- leaf block "lost" space in index_stats 
  vtable_owner dba_indexes.table_owner%type;
  vtable_name dba_indexes.table_name%type;
  vleaf_blocks dba_indexes.leaf_blocks%type;
  function format_size(value1 in number)
  return varchar2 as
  begin
    case
      when (value1>1024*1024*1024) then return ltrim(to_char(value1/(1024*1024*1024),'999,999.999') || 'GB');
      when (value1>1024*1024) then return ltrim(to_char(value1/(1024*1024),'999,999.999') || 'MB');
      when (value1>1024) then return ltrim(to_char(value1/(1024),'999,999.999') || 'KB');
      else return ltrim(to_char(value1,'999,999.999') || 'B');
    end case;
  end format_size;
begin
  select table_owner, table_name, leaf_blocks
  into vtable_owner, vtable_name, vleaf_blocks
  from dba_indexes
  where owner = upper('&1.')
  and index_name = upper('&2.');

  select a.block_size
  into vblocksize
  from dba_tablespaces a, dba_indexes b
  where b.index_name = upper('&2.')
  and b.owner = upper('&1.')
  and a.tablespace_name = b.tablespace_name;

  select round(100 / vtargetuse * -- assumed packing efficiency
               (ind.num_rows * (tab.rowid_length + ind.uniq_ind + 4) + sum((tc.avg_col_len) * (tab.num_rows) ))  -- column data bytes  
               / (vblocksize - voverhead)) index_leaf_estimate  
  into vleafestimate  
  from (select  /*+ no_merge */ table_name, num_rows, decode(partitioned,'YES',10,6) rowid_length  
       from dba_tables
       where table_name  = vtable_name  
         and owner       = vtable_owner) tab,  
      (select  /*+ no_merge */ index_name, index_type, num_rows, decode(uniqueness,'UNIQUE',0,1) uniq_ind  
       from dba_indexes  
       where table_owner = vtable_owner  
       and table_name  = vtable_name  
       and owner = upper('&1.')  
       and index_name  = upper('&2.')) ind,  
      (select  /*+ no_merge */ column_name  
       from dba_ind_columns  
       where table_owner = vtable_owner  
       and table_name  = vtable_name 
       and index_owner = upper('&1.')   
       and index_name  = upper('&2.')) ic,  
      (select  /*+ no_merge */ column_name, avg_col_len  
       from dba_tab_cols  
       where owner = vtable_owner  
       and table_name  = vtable_name) tc  
  where tc.column_name = ic.column_name  
  group by ind.num_rows, ind.uniq_ind, tab.rowid_length; 

  dbms_output.put_line('For index ' || upper('&1.') || '.' || upper('&2.') || ', source table is ' || vtable_owner || '.' || vtable_name);
  dbms_output.put_line('Current leaf blocks: ' || vleaf_blocks);
  dbms_output.put_line('Current size: ' || format_size(vleaf_blocks * vblocksize));
  dbms_output.put_line('Estimated leaf blocks: ' || round(vleafestimate,2));
  dbms_output.put_line('Estimated size: ' || format_size(vleafestimate * vblocksize));
end;
/
set feedback on

On my test index it gives:

SQL> @inspect_index <owner> <index_name>
For index <owner>.<index_name>, source table is <owner>.<table_name>
Current leaf blocks: 375382
Current size: 2.864GB
Estimated leaf blocks: 335395
Estimated size: 2.559GB

This is a third estimation of the size the index would take on disk… But no further explanation of the formula is given by Oracle, so it is difficult to take it as is…

Index rebuild candidates list

Once we have the estimated size (whatever the method) of the index we can compare it with its actual size and see how much we might gain. To compute the current size of an existing index (of course) we have two methods:

  • DBMS_SPACE.SPACE_USAGE procedure
  • DBA_SEGMENTS view

Of course, using DBA_SEGMENTS you are not taking only the real blocks used under the High Water Mark (HWM), but as you can see below it does not make a huge difference for my test index. The script I have written takes index owner and index name as parameters (index_saving.sql):

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  unformatted_blocks number;
  unformatted_bytes number;
  fs1_blocks number;
  fs1_bytes number;
  fs2_blocks number;
  fs2_bytes number;
  fs3_blocks number;
  fs3_bytes number;
  fs4_blocks number;
  fs4_bytes number;
  full_blocks number;
  full_bytes number;
  dbms_space_bytes number;
  bytes_dba_segments number;
  used_bytes number;
  alloc_bytes number;
  function format_size(value1 in number)
  return varchar2 as
  begin
    case
      when (value1>1024*1024*1024) then return ltrim(to_char(value1/(1024*1024*1024),'999,999.999') || 'GB');
      when (value1>1024*1024) then return ltrim(to_char(value1/(1024*1024),'999,999.999') || 'MB');
      when (value1>1024) then return ltrim(to_char(value1/(1024),'999,999.999') || 'KB');
      else return ltrim(to_char(value1,'999,999.999') || 'B');
    end case;
  end format_size;
begin
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('Analyzing index &1..&2.');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('-------------------- DBMS_SPACE.SPACE_USAGE --------------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.space_usage(upper('&1.'), upper('&2.'), 'INDEX', unformatted_blocks, unformatted_bytes, fs1_blocks, fs1_bytes, fs2_blocks,
  fs2_bytes, fs3_blocks, fs3_bytes, fs4_blocks, fs4_bytes, full_blocks, full_bytes);
  dbms_output.put_line('Total number of blocks unformatted :' || unformatted_blocks);
  --dbms_output.put_line('Total number of bytes unformatted: ' || unformatted_bytes);
  dbms_output.put_line('Number of blocks having at least 0 to 25% free space: ' || fs1_blocks);
  --dbms_output.put_line('Number of bytes having at least 0 to 25% free space: ' || fs1_bytes);
  dbms_output.put_line('Number of blocks having at least 25 to 50% free space: ' || fs2_blocks);
  --dbms_output.put_line('Number of bytes having at least 25 to 50% free space: ' || fs2_bytes);
  dbms_output.put_line('Number of blocks having at least 50 to 75% free space: ' || fs3_blocks);
  --dbms_output.put_line('Number of bytes having at least 50 to 75% free space: ' || fs3_bytes);
  dbms_output.put_line('Number of blocks having at least 75 to 100% free space: ' || fs4_blocks);
  --dbms_output.put_line('Number of bytes having at least 75 to 100% free space: ' || fs4_bytes);
  dbms_output.put_line('The number of blocks full in the segment: ' || full_blocks);
  --dbms_output.put_line('Total number of bytes full in the segment: ' || format_size(full_bytes));
  dbms_space_bytes:=unformatted_bytes+fs1_bytes+fs2_bytes+fs3_bytes+fs4_bytes+full_bytes;
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('------------------------- DBA_SEGMENTS -------------------------');
  dbms_output.put_line('----------------------------------------------------------------');
  select bytes into bytes_dba_segments from dba_segments where owner=upper('&1.') and segment_name=upper('&2.');
  dbms_output.put_line('Size of the segment: ' || format_size(bytes_dba_segments));
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------');
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_space.create_index_cost(dbms_metadata.get_ddl('INDEX', upper('&2.'), upper('&1.')), used_bytes, alloc_bytes);
  dbms_output.put_line('Used: ' || format_size(used_bytes));
  dbms_output.put_line('Allocated: ' || format_size(alloc_bytes));
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('---------------------------- Results ---------------------------'); 
  dbms_output.put_line('----------------------------------------------------------------');
  dbms_output.put_line('Potential percentage gain (DBMS_SPACE): ' || round(100 * (dbms_space_bytes - alloc_bytes) / dbms_space_bytes) || '%');
  dbms_output.put_line('Potential percentage gain (DBA_SEGMENTS): ' || round(100 * (bytes_dba_segments - alloc_bytes) / bytes_dba_segments) || '%');
end;
/
set feedback on

It gives for me:

SQL> @index_saving <owner> <index_name>
----------------------------------------------------------------
Analyzing index <owner>.<index_name>
----------------------------------------------------------------
-------------------- DBMS_SPACE.SPACE_USAGE --------------------
----------------------------------------------------------------
Total number of blocks unformatted :1022
Number of blocks having at least 0 to 25% free space: 0
Number of blocks having at least 25 to 50% free space: 35
Number of blocks having at least 50 to 75% free space: 0
Number of blocks having at least 75 to 100% free space: 0
The number of blocks full in the segment: 365448
----------------------------------------------------------------
------------------------- DBA_SEGMENTS -------------------------
----------------------------------------------------------------
Size of the segment: 2.803GB
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------
----------------------------------------------------------------
Used: 1.386GB
Allocated: 2.438GB
----------------------------------------------------------------
---------------------------- Results ---------------------------
----------------------------------------------------------------
Potential percentage gain (DBMS_SPACE): 13%
Potential percentage gain (DBA_SEGMENTS): 13%           

Let's say we choose the DBMS_SPACE method. I have then tried to wrap this in a function, to be able to analyze multiple indexes of a schema at the same time. To handle the security problem I have granted the following to my DBA account:

SQL> grant execute on dbms_space to yjaquier;

Grant succeeded.

SQL> grant execute on dbms_metadata to yjaquier;

Grant succeeded.

SQL> grant analyze any to yjaquier;

Grant succeeded.

And for DBMS_METADATA, as they say in the official documentation:

If you want to write a PL/SQL program that fetches metadata for objects in a different schema (based on the invoker’s possession of SELECT_CATALOG_ROLE), you must make the program invokers-rights.

So I used AUTHID CURRENT_USER as the invoker's rights clause:

create or replace function index_saving_function(index_owner in varchar2, index_name varchar2)
return number
authid current_user
as
  unformatted_blocks number;
  unformatted_bytes number;
  fs1_blocks number;
  fs1_bytes number;
  fs2_blocks number;
  fs2_bytes number;
  fs3_blocks number;
  fs3_bytes number;
  fs4_blocks number;
  fs4_bytes number;
  full_blocks number;
  full_bytes number;
  dbms_space_bytes number;
  used_bytes number;
  alloc_bytes number;
begin
  dbms_space.space_usage(upper(index_owner), upper(index_name), 'INDEX', unformatted_blocks, unformatted_bytes, fs1_blocks, fs1_bytes, fs2_blocks,
  fs2_bytes, fs3_blocks, fs3_bytes, fs4_blocks, fs4_bytes, full_blocks, full_bytes);
  dbms_space_bytes:=unformatted_bytes+fs1_bytes+fs2_bytes+fs3_bytes+fs4_bytes+full_bytes;
  dbms_space.create_index_cost(dbms_metadata.get_ddl('INDEX', upper(index_name), upper(index_owner)), used_bytes, alloc_bytes);
  if (dbms_space_bytes <> 0) then
    return (100 * (dbms_space_bytes - alloc_bytes) / dbms_space_bytes);
  else
    return 0;
  end if;
end;
/

Finally a simple query like this gives a good first analysis of what could be potential candidates for shrink/rebuild:

select owner,index_name,index_saving_function(owner,index_name) as percentage_gain
from dba_indexes
where owner='<owner>'
and last_analyzed is not null
and partitioned='NO'
order by 3 desc;

To go further

The SYS_OP_LBID internal Oracle function, first shared (I think) by Jonathan Lewis and which you can find in plenty of blog posts as well as in MOS note Script to investigate a b-tree index structure (Doc ID 989186.1), returns the id of the leaf block where the index key of the source table row, identified by the rowid parameter, is stored. If you group the result by leaf block id you get the number of source table keys per leaf block.
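Stripped of all scripting, the core of the technique for a non-unique index looks like this (hedged sketch: 12345 stands for the index OBJECT_ID from DBA_OBJECTS and the other names are placeholders):

select sys_op_lbid(12345, 'L', t.rowid) as leaf_block_id,
       count(*) as keys_per_leaf
from app_owner.my_table t
where col1 is not null  -- keep only rows that actually have an index entry
group by sys_op_lbid(12345, 'L', t.rowid);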

In all the queries shared around, the next idea is to group and order by this number of keys per leaf block and see how many blocks you have to access to get them. The queries using an analytic function to sum, row by row, the blocks required to be read are the best for analysis (sys_op_lbid.sql):

set linesize 200 pages 1000
set serveroutput on size 999999
set verify off
set feedback off
declare
  vsql varchar2(1000);
  v_id number;
  vtable_owner dba_indexes.table_owner%type;
  vtable_name dba_indexes.table_name%type;
  col01 varchar2(50);
  col02 varchar2(50);
  col03 varchar2(50);
  col04 varchar2(50);
  col05 varchar2(50);
  col06 varchar2(50);
  col07 varchar2(50);
  col08 varchar2(50);
  col09 varchar2(50);
  col10 varchar2(50);
  TYPE IdxRec IS RECORD (keys_per_leaf number, blocks number, cumulative_blocks number);
  TYPE IdxTab IS TABLE OF IdxRec;
  l_data IdxTab;
begin
  select object_id
  into v_id
  from dba_objects
  where owner = upper('&1.')
  and object_name = upper('&2.');
  
  select table_owner, table_name
  into vtable_owner, vtable_name
  from dba_indexes
  where owner = upper('&1.')
  and index_name = upper('&2.');
  
  select
    nvl(max(decode(column_position, 1,column_name)),'null'),
    nvl(max(decode(column_position, 2,column_name)),'null'),
    nvl(max(decode(column_position, 3,column_name)),'null'),
    nvl(max(decode(column_position, 4,column_name)),'null'),
    nvl(max(decode(column_position, 5,column_name)),'null'),
    nvl(max(decode(column_position, 6,column_name)),'null'),
    nvl(max(decode(column_position, 7,column_name)),'null'),
    nvl(max(decode(column_position, 8,column_name)),'null'),
    nvl(max(decode(column_position, 9,column_name)),'null'),
    nvl(max(decode(column_position, 10,column_name)),'null')
  into col01, col02, col03, col04, col05, col06, col07, col08, col09, col10
  from dba_ind_columns
  where table_owner = vtable_owner
  and table_name  = vtable_name
  and index_name  = upper('&2.')
  order by column_position;
  
  vsql:='SELECT keys_per_leaf, blocks, SUM(blocks) OVER(ORDER BY keys_per_leaf) cumulative_blocks FROM (SELECT ' ||
        'keys_per_leaf,COUNT(*) blocks FROM (SELECT /*+ ' ||
        'cursor_sharing_exact ' ||
        'dynamic_sampling(0) ' ||
        'no_monitoring ' ||
        'no_expand ' ||
        'index_ffs(' || vtable_name || ',' || '&2.' || ') ' ||
        'noparallel_index(' || vtable_name || ',' || '&2.' || ') */ ' ||
        'sys_op_lbid(' || v_id || ',''L'',t1.rowid) AS block_id,' ||
        'COUNT(*) AS keys_per_leaf ' ||
        'FROM &1..' || vtable_name ||' t1 ' ||
        'WHERE ' || col01 || ' IS NOT NULL ' ||
        'OR ' || col02 || ' IS NOT NULL ' ||
        'OR ' || col03 || ' IS NOT NULL ' ||
        'OR ' || col04 || ' IS NOT NULL ' ||
        'OR ' || col05 || ' IS NOT NULL ' ||
        'OR ' || col06 || ' IS NOT NULL ' ||
        'OR ' || col07 || ' IS NOT NULL ' ||
        'OR ' || col08 || ' IS NOT NULL ' ||
        'OR ' || col09 || ' IS NOT NULL ' ||
        'OR ' || col10 || ' IS NOT NULL ' ||
        'GROUP BY sys_op_lbid('||v_id||',''L'',t1.rowid)) ' ||
        'GROUP BY keys_per_leaf) ' ||
    'ORDER BY keys_per_leaf';
  --dbms_output.put_line(vsql);
  execute immediate vsql bulk collect into l_data;

  dbms_output.put_line('KEYS_PER_LEAF     BLOCKS CUMULATIVE_BLOCKS');
  dbms_output.put_line('------------- ---------- -----------------');
   for i in l_data.first..l_data.last loop
     dbms_output.put_line(lpad(l_data(i).keys_per_leaf,13) || ' ' || lpad(l_data(i).blocks,10) || ' ' || lpad(l_data(i).cumulative_blocks,17));
   end loop;
end;
/
set feedback on

A nice trick then is to copy and paste the result into Excel and chart these figures. Doing this you will better see any sudden jump in the number of blocks required to read the key leaf blocks. In a well balanced index the progression should be as linear as possible:

[Figure: index_rebuild01 – cumulative blocks versus keys per leaf before rebuild]

I initially thought that any sudden jump in the number of blocks required to be read to get the keys was an indication of an index that would benefit from a rebuild. But I was wrong (see below why, after the index has been rebuilt) ! In this chart, what you have to try to identify is whether the number of cumulative blocks increases rapidly while the number of keys read moves slowly. My chart starts well, as the number of keys read increases while the number of blocks stays flat. But afterwards the number of blocks increases constantly while the number of keys read moves slowly. Said differently, the curve should be more condensed. That is where the issue is…

If we go back to the raw figures we see the jump here below:

KEYS_PER_LEAF     BLOCKS CUMULATIVE_BLOCKS       
------------- ---------- -----------------       
.
.
          117        196               202
          118        289               491
          119        347               838
          120        205              1043
          121        502              1545
          122        690              2235
          123        851              3086
          124       9629             12715
          125      11104             23819
          126       5773             29592
          127       1991             31583
          128       1148             32731
          129        956             33687
          130        982             34669
          131       1036             35705
          132       1946             37651
          133       4435             42086
          134       6254             48340
          135       2265             50605
          136         26             50631
          137         27             50658
          138         30             50688
          139         21             50709
          140         72             50781
          141         57             50838
          142         95             50933
          143        211             51144
          144        483             51627
          145        408             52035
.
.
          228        823            140172
          229        795            140967
          230       1111            142078
          231     215514            357592
          232       3212            360804
.
.

Rebuild or shrink ?

One of the drawbacks of rebuilding an index is that you need double the index space on disk; it is also a little longer than coalescing it… If you run an Enterprise Edition of the Oracle database then the ONLINE keyword keeps you safe from DML locking.

For an index or index-organized table, specifying ALTER [INDEX | TABLE] … SHRINK SPACE COMPACT is equivalent to specifying ALTER [INDEX | TABLE ] … COALESCE.
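Side by side, the commands being compared look like this (placeholder index name; SHRINK requires an automatic segment space management tablespace):

alter index app_owner.my_index_i1 rebuild online;  -- needs roughly double the space; ONLINE requires Enterprise Edition
alter index app_owner.my_index_i1 shrink space;    -- in place, releases space and adjusts the HWM
alter index app_owner.my_index_i1 coalesce;        -- same effect as shrink space compact, per the quote above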

A few people have tried to compare REBUILD and SHRINK and draw some conclusions (see references), but to be honest it looks difficult to give precise rules on what to do. If your index is not too fragmented, SHRINK should give good results; if not, then you have to go for REBUILD. It also depends on how much overhead you are willing to put on your database. I have tried both on my test index.

I first tried SHRINK SPACE COMPACT, with a very poor result, and found this in the official Oracle documentation:

If you specify COMPACT, then Oracle Database only defragments the segment space and compacts the table rows for subsequent release. The database does not readjust the high water mark and does not release the space immediately. You must issue another ALTER TABLE … SHRINK SPACE statement later to complete the operation. This clause is useful if you want to accomplish the shrink operation in two shorter steps rather than one longer step.

Even though they explain the COMPACT option only for tables, I have the feeling that it behaves almost the same for indexes (I have been able to perform multiple tests as my test database got refreshed from the live one, which remained unchanged):

SQL> alter index <owner>.<index_name> shrink space compact;

Index <owner>.<index_name> altered.

SQL> @index_saving <owner> <index_name>
----------------------------------------------------------------
Analyzing index <owner>.<index_name>
----------------------------------------------------------------
-------------------- DBMS_SPACE.SPACE_USAGE --------------------
----------------------------------------------------------------
Total number of blocks unformatted :1022
Number of blocks having at least 0 to 25% free space: 0
Number of blocks having at least 25 to 50% free space: 3
Number of blocks having at least 50 to 75% free space: 0
Number of blocks having at least 75 to 100% free space: 13865
The number of blocks full in the segment: 351615
----------------------------------------------------------------
------------------------- DBA_SEGMENTS -------------------------
----------------------------------------------------------------
Size of the segment: 2.803GB
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------
----------------------------------------------------------------
Used: 1.386GB
Allocated: 2.438GB
----------------------------------------------------------------
---------------------------- Results ---------------------------
----------------------------------------------------------------
Potential percentage gain (DBMS_SPACE): 9%
Potential percentage gain (DBA_SEGMENTS): 13%

SQL> @inspect_index <owner> <index_name>
For index <owner>.<index_name>, source table is <owner>.<table_name>
Current leaf blocks: 375382
Current size: 2.864GB
Estimated leaf blocks: 335395
Estimated size: 2.559GB

SQL> @sys_op_lbid <owner> <index_name>
KEYS_PER_LEAF     BLOCKS CUMULATIVE_BLOCKS
------------- ---------- -----------------
.
.
          119         10                21
          120          8                29
          121         18                47
          122         36                83
          123         34               117
          124        522               639
          125        537              1176
          126        253              1429
          127         90              1519
.
.
          230       1238             91459
          231     267907            359366
          232       1562            360928
.
.

This does not provide a very good result: the index remained almost unchanged ! Now without the COMPACT keyword (figures slightly different as the index evolves on the live database):

SQL> alter index <owner>.<index_name> shrink space;

Index <owner>.<index_name> altered.

SQL> @index_saving <owner> <index_name>
----------------------------------------------------------------
Analyzing index <owner>.<index_name>
----------------------------------------------------------------
-------------------- DBMS_SPACE.SPACE_USAGE --------------------
----------------------------------------------------------------
Total number of blocks unformatted :0
Number of blocks having at least 0 to 25% free space: 0
Number of blocks having at least 25 to 50% free space: 3
Number of blocks having at least 50 to 75% free space: 0
Number of blocks having at least 75 to 100% free space: 0
The number of blocks full in the segment: 368842
----------------------------------------------------------------
------------------------- DBA_SEGMENTS -------------------------
----------------------------------------------------------------
Size of the segment: 2.821GB
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------
----------------------------------------------------------------
Used: 1.526GB
Allocated: 2.688GB
----------------------------------------------------------------
---------------------------- Results ---------------------------
----------------------------------------------------------------
Potential percentage gain (DBMS_SPACE): 4%
Potential percentage gain (DBA_SEGMENTS): 5%

A bit better without the COMPACT keyword: we see that blocks have been defragmented but not released, and the index is still not in its optimal form. This could be satisfactory depending on the load you want to put on your database. Let's try rebuilding it, which is a bit more resource consuming:

SQL> alter index <owner>.<index_name> rebuild online;

Index <owner>.<index_name> altered.

SQL> @index_saving <owner> <index_name>
----------------------------------------------------------------
Analyzing index <owner>.<index_name>
----------------------------------------------------------------
-------------------- DBMS_SPACE.SPACE_USAGE --------------------
----------------------------------------------------------------
Total number of blocks unformatted :0
Number of blocks having at least 0 to 25% free space: 0
Number of blocks having at least 25 to 50% free space: 1
Number of blocks having at least 50 to 75% free space: 0
Number of blocks having at least 75 to 100% free space: 0
The number of blocks full in the segment: 325098
----------------------------------------------------------------
------------------------- DBA_SEGMENTS -------------------------
----------------------------------------------------------------
Size of the segment: 2.507GB
----------------------------------------------------------------
----------------- DBMS_SPACE.CREATE_INDEX_COST -----------------
----------------------------------------------------------------
Used: 1.386GB
Allocated: 2.438GB
----------------------------------------------------------------
---------------------------- Results ---------------------------
----------------------------------------------------------------
Potential percentage gain (DBMS_SPACE): 2%
Potential percentage gain (DBA_SEGMENTS): 3%

SQL> @inspect_index  
For index ., source table is .
Current leaf blocks: 323860 Current size: 2.471GB
Estimated leaf blocks: 330825 Estimated size: 2.524GB

SQL> @sys_op_lbid

KEYS_PER_LEAF     BLOCKS CUMULATIVE_BLOCKS
------------- ---------- -----------------
           56          1                 1
          217      10086             10087
          218        146             10233
          219        120             10353
          220         70             10423
          221         64             10487
          222         65             10552
          223         76             10628
          224      29037             39665
          225       1566             41231
          226       1268             42499
          227       1077             43576
          228        861             44437
          229        928             45365
          230       1245             46610
          231     277246            323856
          248          1            323857
          341          3            323860

Graphically it gives:

index_rebuild02
index_rebuild02

Much better ! I still have a big jump in the number of blocks required to be read when the number of keys increases, but I suppose it comes from a key that has a high frequency in my source table. This demonstrates that it is not abnormal to see such a big jump in queries using the SYS_OP_LBID internal function.

The process you could apply is to try to shrink your index by default and, if you are unhappy with the result, then afford a rebuild; a sketch of the full sequence is shown below. The good thing now is that checking the index does not lock anything, so you can launch it multiple times even on a production database…
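
For instance, a minimal sketch of that sequence (SCOTT.EMP_IDX1 is a placeholder name, not one of the anonymized objects used above):

SQL> alter index scott.emp_idx1 shrink space compact;

Index altered.

SQL> -- Re-run the checking script; if the estimated gain is still high, release the freed space
SQL> alter index scott.emp_idx1 shrink space;

Index altered.

SQL> -- Last resort, more resource consuming, but produces the most compact structure
SQL> alter index scott.emp_idx1 rebuild online;

Index altered.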

References

The post How to non intrusively find index rebuild or shrink candidates ? appeared first on IT World.

]]> https://blog.yannickjaquier.com/oracle/candidates-index-rebuild-shrink.html/feed 1 Simple Oracle Document Access (SODA) installation and usage https://blog.yannickjaquier.com/oracle/simple-oracle-document-access-soda.html https://blog.yannickjaquier.com/oracle/simple-oracle-document-access-soda.html#respond Thu, 01 Nov 2018 08:44:40 +0000 https://blog.yannickjaquier.com/?p=4355 Preamble Simple Oracle Document Access (SODA) is a document store like any other NoSQL document store database (MongoDB to name most used one) base on a feature we have seen called Oracle REST Data Services (ORDS). As my test environment is still there this is perfect timing to continue with SODA. SODA allows you to […]

The post Simple Oracle Document Access (SODA) installation and usage appeared first on IT World.

]]>

Table of contents

Preamble

Simple Oracle Document Access (SODA) is a document store like any other NoSQL document store database (MongoDB to name the most used one), based on a feature we have already seen called Oracle REST Data Services (ORDS). As my test environment is still there, this is perfect timing to continue with SODA.

SODA allows you to store, retrieve and manipulate documents without speaking SQL fluently.

Testing has been done on a virtual machine running Oracle Linux Server release 7.5 and Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit. ORDS version is:

[oracle@server1 ords]$ java -jar ords.war version
Oracle REST Data Services 18.1.1.95.1251

SODA setup

With DBA account grant SODA_APP role to your test account:

SQL> grant soda_app to hr;

Grant succeeded.

With HR account, following the official documentation:

SQL> exec ords.create_role('SODA Developer');

PL/SQL procedure successfully completed.

SQL> exec ords.create_privilege(p_name => 'oracle.soda.privilege.developer', p_role_name => 'SODA Developer');
BEGIN ords.create_privilege(p_name => 'oracle.soda.privilege.developer', p_role_name => 'SODA Developer'); END;

*
ERROR at line 1:
ORA-00001: unique constraint (ORDS_METADATA.PRIVILEGES_UNQ_NAME) violated
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES_INTERNAL", line 1062
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES_INTERNAL", line 1112
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES", line 466
ORA-06512: at "ORDS_METADATA.ORDS", line 357
ORA-06512: at "ORDS_METADATA.ORDS", line 377
ORA-06512: at line 1


SQL> exec ords.create_privilege_mapping('oracle.soda.privilege.developer', '/soda/*');
BEGIN ords.create_privilege_mapping('oracle.soda.privilege.developer', '/soda/*'); END;

*
ERROR at line 1:
ORA-00001: unique constraint (ORDS_METADATA.SEC_PRIV_MAP_PATTERN_UNQ) violated
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES_INTERNAL", line 1705
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES_INTERNAL", line 1683
ORA-06512: at "ORDS_METADATA.ORDS", line 538
ORA-06512: at "ORDS_METADATA.ORDS", line 553
ORA-06512: at line 1

Apparently with ORDS 18.1 all is already in place and ready to use (the role creation is re-runnable)…

If you want to get rid of any security you can simply execute:

SQL> exec ords.delete_privilege_mapping('oracle.soda.privilege.developer','/soda/*');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

From official documentation:

ORDS supports many different authentication mechanisms. JSON document store REST services are intended to be used in server-to-server interactions. Therefore, two-legged OAuth (the client-credentials flow) is the recommended authentication mechanism to use with the JSON document store REST services. However, other mechanisms such as HTTP basic authentication, are also supported.

So I used the lazy approach with:

[oracle@server1 ords]$ java -jar ords.war user soda_user "SODA Developer"
Enter a password for user soda_user:
Confirm password for user soda_user:
Apr 27, 2018 11:13:53 AM oracle.dbtools.standalone.ModifyUser execute
INFO: Created user: soda_user in file: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/credentials

You can now confirm SODA is up and running with (still using Insomnia as REST client):

soda01
soda01

SODA testing

Collection creation/deletion

Creating Collection01 with:

soda02
soda02

Can be confirmed with SQL:

SQL> desc "Collection01"
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL VARCHAR2(255)
 CREATED_ON                                NOT NULL TIMESTAMP(6)
 LAST_MODIFIED                             NOT NULL TIMESTAMP(6)
 VERSION                                   NOT NULL VARCHAR2(255)
 JSON_DOCUMENT                                      BLOB

Can also be confirmed with:

soda03
soda03

To delete the collection use the exact same url and DELETE method…

Use the https://server1.domain.com:8443/ords/pdb1/hr/soda/latest/ url to list all available collections.
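
Since every collection is backed by a table of the same name in the REST-enabled schema, you can also cross-check from SQL which collections exist (a minimal sketch, run with the HR account used here):

SQL> select table_name from user_tables order by table_name;

Collection01 then shows up in the list alongside the classical HR tables…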

Managing document in a collection

I will use an extract of the EMPLOYEES table of the HR schema, using the well known Steven King employee in the item01.json file:

{
  "employee_id": 100,
  "first_name": "Steven",
  "last_name": "King",
  "email": "SKING",
  "phone_number": [
    {
      "type": "Office",
      "number": "515.123.4567"
    }
  ],
  "hire_date": "2003-06-16T22:00:00Z",
  "job_id": "AD_PRES",
  "salary": 24000,
  "commission_pct": null,
  "manager_id": null,
  "department_id": 90
}

To insert my above document use POST method specifying the file as body and an header of “application/json”:

soda04
soda04

You can also check in SQL that it has been done:

SQL> set lines 200
SQL> col id for a40
SQL> col created_on for a30
SQL> select id, created_on from "Collection01";

ID                                       CREATED_ON
---------------------------------------- ------------------------------
CE580C73CB254596A8595B83BA1ED409         27-APR-18 10.41.21.840041 AM

If you want to insert multiple documents at the same time add “?action=insert” to the url…

Needless to say, the only type of document you can store is JSON. If you change the header to “application/pdf” and try to POST a file you get the error message below:

{
  "type": "http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.16",
  "status": 415,
  "title": "Unsupported content type application/pdf.",
  "o:errorCode": "REST-02002"
}

To use other types of files you could store a reference to those files, located in a directory or in another table in a BLOB column…
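
A minimal sketch of that reference approach (the FILE_STORE table and ATTACHMENT_ID key are hypothetical names, purely for illustration):

SQL> create table file_store (
  2  file_id   varchar2(255) primary key,
  3  file_body blob);

Table created.

The binary file goes in FILE_STORE.FILE_BODY and the JSON document only carries the key, for example "attachment_id": "INV-2018-001", that your application resolves against FILE_STORE.FILE_ID.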

To retrieve the inserted document I use https://server1.domain.com:8443/ords/pdb1/hr/soda/latest/Collection01/CE580C73CB254596A8595B83BA1ED409 url with GET method. You must specify the ID field after the collection name:

soda05
soda05

To delete a document use same exact url as above with DELETE method.

If you do not specify the ID field you retrieve all the documents of your collection… which might not be efficient depending on the number of documents in your collection…

You can filter what you want to display when fetching all documents. Examples:

  • https://server1.domain.com:8443/ords/pdb1/hr/soda/latest/Collection01?fields=id
  • https://server1.domain.com:8443/ords/pdb1/hr/soda/latest/Collection01?fields=all&limit=10

Refer to GET collection documentation for a complete list of possible parameters…

You can also put your query in a JSON file like the example below. Do not forget to append ?action=query at the end of your url, or it will create a document instead:

soda06
soda06

When looking at the table created to store the collection:

SQL> set long 2000000
SQL> set pagesize 1000
SQL> col ddl for a100
SQL> SELECT DBMS_METADATA.GET_ddl('TABLE','Collection01','HR') as ddl from dual;

DDL
----------------------------------------------------------------------------------------------------

  CREATE TABLE "HR"."Collection01"
   (    "ID" VARCHAR2(255) NOT NULL ENABLE,
        "CREATED_ON" TIMESTAMP (6) DEFAULT sys_extract_utc(SYSTIMESTAMP) NOT NULL ENABLE,
        "LAST_MODIFIED" TIMESTAMP (6) DEFAULT sys_extract_utc(SYSTIMESTAMP) NOT NULL ENABLE,
        "VERSION" VARCHAR2(255) NOT NULL ENABLE,
        "JSON_DOCUMENT" BLOB,
         CHECK ("JSON_DOCUMENT" is json format json strict) ENABLE,
         PRIMARY KEY ("ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
 LOB ("JSON_DOCUMENT") STORE AS SECUREFILE (
  TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192
  CACHE  NOCOMPRESS  KEEP_DUPLICATES
  STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))

We see that the JSON_DOCUMENT field has the useful constraint that checks it contains valid JSON. So we can even use the so-called Simple Dot-Notation Access to JSON Data on your collection:

SQL> col employee_id for a15
SQL> col first_name for a15
SQL> col last_name for a15
SQL> select c.json_document.employee_id, c.json_document.first_name, c.json_document.last_name from "Collection01" c;

EMPLOYEE_ID     FIRST_NAME      LAST_NAME
--------------- --------------- ---------------
100             Steven          King
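
As dot-notation returns VARCHAR2 values, you can also filter on them directly in a WHERE clause; a small sketch against the same collection:

SQL> select c.json_document.email from "Collection01" c where c.json_document.employee_id = '100';

EMAIL
---------------
SKING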

You cannot partially update a document: you have to replace its whole content with the PUT method.

References

The post Simple Oracle Document Access (SODA) installation and usage appeared first on IT World.

]]>
https://blog.yannickjaquier.com/oracle/simple-oracle-document-access-soda.html/feed 0
Oracle REST Data Services (ORDS) installation and usage https://blog.yannickjaquier.com/oracle/oracle-rest-data-services-ords.html https://blog.yannickjaquier.com/oracle/oracle-rest-data-services-ords.html#respond Mon, 08 Oct 2018 13:20:36 +0000 https://blog.yannickjaquier.com/?p=4334 Preamble In the database world technology trend you have surely already heard buzz words like Hadoop and NoSQL. Around those new non-relational databases there is a common open-standard file format massively used to read and write on those new databases call JavaScript Object Notation (JSON). If you have followed a bit few new web technologies […]

The post Oracle REST Data Services (ORDS) installation and usage appeared first on IT World.

]]>

Table of contents

Preamble

In the database world technology trend you have surely already heard buzz words like Hadoop and NoSQL. Around those new non-relational databases there is a common open-standard file format, massively used to read from and write to them, called JavaScript Object Notation (JSON).

If you have followed a few trainings on new web technologies (Vue.js, Angular, React, …), each time they use a back-end database to store information the exchanges are done through asynchronous requests using promises (Axios for the one I have used in a Vue.js project). The information is also always transferred using the JSON format. Those services exposed by the newest database flavors use a REpresentational State Transfer (REST) architecture and are called RESTful web services or RESTful APIs.

Oracle Corporation, with their legacy Oracle database (first released back in the 80's), has climbed on the bandwagon and created a product called Oracle REST Data Services (ORDS) to bridge the newest generation of developers and the (becoming old) past generation of DBAs (guess in which category I am ?). This product/tool, a simple jar file to run through Java, creates a RESTful service to expose Oracle database figures through http(s) requests…

To have figures to display I have decided, this time, to use the default HR sample schema that you can create using the script located at:

$ORACLE_HOME/demo/schema/human_resources/hr_main.sql

The idea is to display and interact with the employees table using a RESTful API. We should be able to display all employees, or a particular one by specifying an id in the url, as well as insert, delete and update employees.

Testing has been done using a VirtualBox virtual machine running Oracle Linux Server release 7.4 and an Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit.

ORDS installation

I could have used the JDK of the Oracle home but I rated the release a bit too old:

[oracle@server1 ~]$ $ORACLE_HOME/jdk/bin/java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)

As I’m rock’n’roll I have decided to install and use Java 10:

[oracle@server1 ~]$ java -version
java version "10" 2018-03-20
Java(TM) SE Runtime Environment 18.3 (build 10+46)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10+46, mixed mode)

There is also an $ORACLE_HOME/ords directory but I have not been able to make it work, so I decided to replace it with the latest release available at the time of writing this post:

[oracle@server1 ords]$ java -jar ords.war version
Oracle REST Data Services 18.1.1.95.1251

I have also chosen the multitenant installation even if I have only one pluggable database; the installation is called Installation Enabling Multiple Releases (Recommended). In the advanced installation process you must supply the container (CDB) information and it will be deployed in all pluggable databases including the seed one (and also in the root one for common objects and accounts).

Note:
I have discovered that the configuration directory is relative to the directory from which you started the installation, so simply entering config means $ORACLE_HOME/ords/config for me.

[oracle@server1 ords]$ cd $ORACLE_HOME/ords
[oracle@server1 ords]$ java -jar ords.war install advanced
This Oracle REST Data Services instance has not yet been configured.
Please complete the following prompts

Enter the location to store configuration data:config
Enter the name of the database server [localhost]:server1.domain.com
Enter the database listen port [1521]:1531
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name:orcl
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]:
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.

Enter the database password for SYS AS SYSDBA:
Confirm password:

Retrieving information...
Your database connection is to a CDB.  ORDS common user ORDS_PUBLIC_USER will be created in the CDB.  ORDS schema will be installed in the PDBs.
Root CDB$ROOT - create ORDS common user
PDB PDB$SEED - install ORDS 18.1.1.95.1251 (mode is READ ONLY, open for READ/WRITE)
PDB PDB1 - install ORDS 18.1.1.95.1251

Enter 1 if you want to install ORDS or 2 to skip this step [1]:
Enter the default tablespace for ORDS_METADATA [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA [TEMP]:
Enter the default tablespace for ORDS_PUBLIC_USER [SYSAUX]:
Enter the temporary tablespace for ORDS_PUBLIC_USER [TEMP]:
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:
Enter the PL/SQL Gateway database user name [APEX_PUBLIC_USER]:
Enter the database password for APEX_PUBLIC_USER:
Confirm password:
Enter 1 to specify passwords for Application Express RESTful Services database users (APEX_LISTENER, APEX_REST_PUBLIC_USER) or 2 to skip this step [1]:
Enter the database password for APEX_LISTENER:
Confirm password:
Enter the database password for APEX_REST_PUBLIC_USER:
Confirm password:
Apr 11, 2018 5:18:14 PM
INFO: Updated configurations: defaults, apex, apex_pu, apex_al, apex_rt


Installing Oracle REST Data Services version 18.1.1.95.1251 in CDB$ROOT
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_core_CDB_ROOT_2018-04-11_171814_00573.log
... Verified database prerequisites
... Created Oracle REST Data Services proxy user
Completed installation for Oracle REST Data Services version 18.1.1.95.1251. Elapsed time: 00:00:01.690

Installing Oracle REST Data Services version 18.1.1.95.1251 in PDB$SEED
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_core_PDB_SEED_2018-04-11_171818_00681.log
... Verified database prerequisites
... Created Oracle REST Data Services schema
... Created Oracle REST Data Services proxy user
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_datamodel_PDB_SEED_2018-04-11_171947_00378.log
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_apex_PDB_SEED_2018-04-11_171955_00184.log
Completed installation for Oracle REST Data Services version 18.1.1.95.1251. Elapsed time: 00:01:43.21

Installing Oracle REST Data Services version 18.1.1.95.1251 in PDB1
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_core_PDB1_2018-04-11_172036_00713.log
... Verified database prerequisites
... Created Oracle REST Data Services schema
... Created Oracle REST Data Services proxy user
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_datamodel_PDB1_2018-04-11_172204_00677.log
... Log file written to /u01/app/oracle/product/12.2.0/dbhome_1/ords/logs/ords_cdb_install_apex_PDB1_2018-04-11_172210_00596.log
Completed installation for Oracle REST Data Services version 18.1.1.95.1251. Elapsed time: 00:01:38.462

Completed CDB installation for Oracle REST Data Services version 18.1.1.95.1251.
Total elapsed time: 00:04:00.631

Enter 1 if you wish to start in standalone mode or 2 to exit [1]:2

You can have the configuration directory with:

[oracle@server1 ords]$ java -jar ords.war configdir
Apr 12, 2018 12:31:09 PM
INFO: The config.dir value is /u01/app/oracle/product/12.2.0/dbhome_1/ords/config

I have decided not to start it right after the installation, to understand the real start procedure; the first time you start it a set of questions is asked. As the Internet is now almost fully HTTPS this is the option I have chosen. In any case, if you plan to use REST Enabled SQL, HTTPS has to be chosen:

[oracle@server1 ords]$ java -jar ords.war standalone
Enter the APEX static resources location:
Enter 1 if using HTTP or 2 if using HTTPS [1]:2
Enter the HTTPS port [8443]:
Enter the SSL hostname:server1.domain.com
Enter 1 to use the self-signed certificate or 2 if you will provide the SSL certificate [1]:
2018-04-11 17:30:52.110:INFO::main: Logging initialized @26842ms to org.eclipse.jetty.util.log.StdErrLog
Apr 11, 2018 5:30:53 PM
INFO: HTTPS and HTTPS/2 listening on port: 8443
Apr 11, 2018 5:30:53 PM
INFO: Disabling document root because the specified folder does not exist: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/standalone/doc_root
2018-04-11 17:30:53.408:INFO:oejs.Server:main: jetty-9.4.z-SNAPSHOT, build timestamp: 2017-11-21T22:27:37+01:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
2018-04-11 17:30:53.464:INFO:oejs.session:main: DefaultSessionIdManager workerName=node0
2018-04-11 17:30:53.464:INFO:oejs.session:main: No SessionScavenger set, using defaults
2018-04-11 17:30:53.465:INFO:oejs.session:main: Scavenging every 660000ms
Apr 11, 2018 5:30:55 PM
WARNING: The pool named: |apex|| is invalid and will be ignored: The username or password for the connection pool named apex, are invalid, expired, or the account is locked
Apr 11, 2018 5:30:56 PM
INFO: Creating Pool:|apex|pu|
Apr 11, 2018 5:30:56 PM
INFO: Configuration properties for: |apex|pu|
cache.caching=false
cache.directory=/tmp/apex/cache
cache.duration=days
cache.expiration=7
cache.maxEntries=500
cache.monitorInterval=60
cache.procedureNameList=
cache.type=lru
db.hostname=server1.domain.com
db.password=******
db.port=1531
db.servicename=orcl
db.username=ORDS_PUBLIC_USER
debug.debugger=false
debug.printDebugToScreen=false
error.keepErrorMessages=true
error.maxEntries=50
jdbc.DriverType=thin
jdbc.InactivityTimeout=1800
jdbc.InitialLimit=3
jdbc.MaxConnectionReuseCount=1000
jdbc.MaxLimit=10
jdbc.MaxStatementsLimit=10
jdbc.MinLimit=1
jdbc.statementTimeout=900
log.logging=false
log.maxEntries=50
misc.compress=
misc.defaultPage=apex
security.disableDefaultExclusionList=false
security.maxEntries=2000
security.requestValidationFunction=wwv_flow_epg_include_modules.authorize
security.validationFunctionType=plsql

Apr 11, 2018 5:30:56 PM
WARNING: *** jdbc.MaxLimit in configuration |apex|pu| is using a value of 10, this setting may not be sized adequately for a production environment ***
Apr 11, 2018 5:30:56 PM
WARNING: *** jdbc.InitialLimit in configuration |apex|pu| is using a value of 3, this setting may not be sized adequately for a production environment ***
Apr 11, 2018 5:30:57 PM
WARNING: The pool named: |apex|al| is invalid and will be ignored: The username or password for the connection pool named apex_al, are invalid, expired, or the account is locked
Apr 11, 2018 5:30:58 PM
WARNING: The pool named: |apex|rt| is invalid and will be ignored: The username or password for the connection pool named apex_rt, are invalid, expired, or the account is locked
Apr 11, 2018 5:30:58 PM
INFO: Oracle REST Data Services initialized
Oracle REST Data Services version : 18.1.1.95.1251
Oracle REST Data Services server info: jetty/9.4.z-SNAPSHOT

2018-04-11 17:30:58.804:INFO:oejsh.ContextHandler:main: Started o.e.j.s.ServletContextHandler@eadd4fb{/ords,null,AVAILABLE,@Secured}
2018-04-11 17:30:58.821:INFO:oejus.SslContextFactory:main: x509=X509@7ce97ee5(selfsigned,h=[server1.domain.com],w=[]) for SslContextFactory@32c8e539[provider=null,keyStore=oracle.dbtools.standalone.InMemoryResource@73dce0e6,trustStore=oracle.dbtools.standalone.InMemoryResource@73dce0e6]
2018-04-11 17:30:58.888:INFO:oejs.AbstractConnector:main: Started Secured@a5bd950{SSL,[ssl, alpn, h2, http/1.1]}{0.0.0.0:8443}
2018-04-11 17:30:58.888:INFO:oejs.Server:main: Started @33622ms

Obviously you would need to use the nohup command because in interactive mode the process is stopped when you quit the terminal…

To make all PDBs addressable by Oracle REST Data Services (Pluggable Mapping) I have finally used the command below. It is a bit different from the Oracle official documentation as my DB_DOMAIN parameter is unset:

[oracle@server1 ords]$ java -jar ords.war set-property db.serviceNameSuffix ''
Apr 12, 2018 12:38:47 PM oracle.dbtools.rt.config.setup.SetProperty execute
INFO: Modified: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/defaults.xml, setting: db.serviceNameSuffix =

Then I have added my single pluggable database with:

[oracle@server1 ords]$ java -jar ords.war setup --database pdb1
Enter the name of the database server [server1.domain.com]:
Enter the database listen port [1531]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name [orcl]:pdb1
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]:
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:

Retrieving information.
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:2
Apr 12, 2018 3:13:42 PM
INFO: Updated configurations: pdb1_pu
Apr 12, 2018 3:13:42 PM oracle.dbtools.rt.config.setup.SchemaSetup install
INFO: Oracle REST Data Services schema version 18.1.1.95.1251 is installed.

It has created the configuration file below:

[oracle@server1 conf]$ cat $ORACLE_HOME/ords/config/ords/conf/pdb1_pu.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Saved on Thu Apr 12 15:13:42 CEST 2018</comment>
<entry key="db.password">@05695FBEB8J4C5B200HG56EB28FF0F27B61F9B56AFF75F8472</entry>
<entry key="db.servicename">pdb1</entry>
<entry key="db.username">ORDS_PUBLIC_USER</entry>
</properties>

I define the routing based on the request path prefix (nothing original, as the url will contain the pluggable database name):

[oracle@server1 ords]$ java -jar ords.war map-url --type base-path /pdb1 pdb1
Apr 12, 2018 3:22:25 PM
INFO: Creating new mapping from: [base-path,/pdb1] to map to: [pdb1,,]

ORDS setup

When trying with SYS account I have gotten a strange error:

SQL> exec ords.enable_schema(p_schema => 'hr', p_url_mapping_type => 'BASE_PATH', p_url_mapping_pattern => 'hr');
BEGIN ords.enable_schema(p_schema => 'hr', p_url_mapping_type => 'BASE_PATH', p_url_mapping_pattern => 'hr'); END;

*
ERROR at line 1:
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "ORDS_METADATA.ORDS", line 1
ORA-06512: at line 1

So finally executed it with my nominative DBA account:

SQL> exec ords.enable_schema(p_schema => 'hr', p_url_mapping_type => 'BASE_PATH', p_url_mapping_pattern => 'hr');

PL/SQL procedure successfully completed.

Instead of using the example of the official documentation:

exec ords.define_service(p_module_name => 'examples.routes', p_base_path => '/examples/routes/', p_pattern => 'greeting/:name', -
p_source => 'select ''Hello '' || :name || '' from '' || nvl(:whom,sys_context(''USERENV'',''CURRENT_USER'')) "greeting" from dual');

I have decided to try something much simpler for my first test but it failed with a strange error:

SQL> show user
USER is "HR"
SQL> exec ords.define_service(p_module_name => 'examples', p_base_path => 'examples/', p_method => 'GET', p_pattern => 'greeting/', -
     p_source => 'select sysdate from dual');
BEGIN ords.define_service(p_module_name => 'examples', p_base_path => 'examples/', p_method => 'GET', p_pattern => 'greeting/',  p_source => 'select sysdate from dual'); END;

*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "ORDS_METADATA.ORDS_INTERNAL", line 617
ORA-06512: at "ORDS_METADATA.ORDS_SECURITY", line 85
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES", line 117
ORA-06512: at "ORDS_METADATA.ORDS_SERVICES", line 52
ORA-06512: at "ORDS_METADATA.ORDS", line 694
ORA-06512: at line 1

Then the magic idea came to my mind: while I was grumbling about the fact that Oracle could have applied an UPPER call to the account name, I remembered that since 11g accounts are case sensitive, and this by default:

SQL> show parameter SEC_CASE_SENSITIVE_LOGON

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE

SQL> select parsing_schema, status, auto_rest_auth from ords_metadata.ords_schemas;

PARSING_SCHEMA                 STATUS                         AUTO_REST_AUTH
------------------------------ ------------------------------ ------------------------------
ORDS_METADATA                  DISABLED                       ENABLED
hr                             ENABLED                        ENABLED

So I did a bit of cleaning (there is no DISABLE_SCHEMA procedure, and calling ENABLE_SCHEMA with FALSE does not delete the line in ORDS_METADATA.ORDS_SCHEMAS):

SQL> EXECUTE ORDS.DROP_REST_FOR_SCHEMA('hr');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> exec ords.enable_schema(p_schema => 'HR', p_url_mapping_type => 'BASE_PATH', p_url_mapping_pattern => 'hr');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select parsing_schema, status, auto_rest_auth from user_ords_schemas;

PARSING_SCHEMA                 STATUS                         AUTO_REST_AUTH
------------------------------ ------------------------------ ------------------------------
HR                             ENABLED                        ENABLED

And finally the service definition went well (even if it looks superfluous, the COMMIT is strongly suggested); I have also decided to format the date display a bit:

SQL> show user
USER is "HR"
SQL> exec ords.define_service(p_module_name => 'examples', p_base_path => '/examples/', p_method => 'GET', p_pattern => '/greeting/', -
     p_source => 'select to_char(sysdate,''dd-mon-yyyy hh24:mi:ss'') as current_date from dual');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Then a simple GET request on the url should provide the current date. To be honest I had not expected to spend so much time on this !! The initial request with curl failed:

[oracle@server1 ~]$ curl https://server1.domain.com:8443/ords/pdb1/hr/examples/greeting/
curl: (60) Issuer certificate is invalid.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
[oracle@server1 ~]$ curl -k https://server1.domain.com:8443/ords/pdb1/hr/examples/greeting/
invalid_preface

So I decided to activate the debugging mode by adding the entries below to the $ORACLE_HOME/ords/config/ords/defaults.xml file:

<entry key="debug.debugger">true</entry>
<entry key="debug.printDebugToScreen">true</entry>

Either you edit the file or use:

[oracle@server1 ords]$ java -jar ords.war set-property debug.debugger true
Apr 13, 2018 11:50:21 AM oracle.dbtools.rt.config.setup.SetProperty execute
INFO: Modified: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/defaults.xml, setting: debug.debugger = true
[oracle@server1 ords]$ java -jar ords.war set-property debug.printDebugToScreen true
Apr 13, 2018 11:50:36 AM oracle.dbtools.rt.config.setup.SetProperty execute
INFO: Modified: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/defaults.xml, setting: debug.printDebugToScreen = true

But to be honest this brought nothing to help me… Do not forget to remove it afterwards as it is quite verbose… Then I noticed the HTTPS/2 in the ORDS startup output:

INFO: HTTPS and HTTPS/2 listening on port: 8443

And I wanted to use the --http2 option to tell curl to use HTTP version 2, but the release available in my OEL 7.4 (curl 7.29.0) at the time of writing this post is too old.

I have tried to download a binary on my Windows desktop and it worked, but then I remembered a Web training video where the presenter introduced Postman. Unfortunately I have not been able to make it work… So I finally downloaded Insomnia and yippee, got the expected result:

ords01
ords01

If I try with a parameter in the url to get information for only one employee:

SQL> show user
USER is "HR"
SQL> exec ords.define_service(p_module_name => 'employees', p_base_path => '/employees/', p_method => 'GET', p_pattern => '/:id', -
     p_source => 'select * from employees where employee_id=:id');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

I can now specify in the url the employee id I would like to display:

ords02
ords02

With dictionary views you can double check what has been done (no ALL_xx or DBA_xx views, account owner is ORDS_METADATA):

SQL> set lines 200
SQL> col pattern for a15
SQL> col name for a15
SQL> col uri_prefix for a15
SQL> col uri_template for a15
SQL> col source_type for a20
SQL> select id, parsing_schema, type, pattern, status, auto_rest_auth from user_ords_schemas;

        ID PARSING_SCHEMA                 TYPE       PATTERN         STATUS                         AUTO_REST_AUTH
---------- ------------------------------ ---------- --------------- ------------------------------ ------------------------------
     10062 HR                             BASE_PATH  hr              ENABLED                        ENABLED

SQL> select id, name, uri_prefix, items_per_page, status from user_ords_modules;

        ID NAME            URI_PREFIX      ITEMS_PER_PAGE STATUS
---------- --------------- --------------- -------------- ------------------------------
     10120 employees       /employees/                 25 PUBLISHED
     10067 examples        /examples/                  25 PUBLISHED

SQL> select id, module_id, uri_template from user_ords_templates;

        ID  MODULE_ID URI_TEMPLATE
---------- ---------- ---------------
     10133      10120 /:id
     10101      10067 /greeting/

SQL> select id, template_id, source_type, method, source from user_ords_handlers;

        ID TEMPLATE_ID SOURCE_TYPE          METHOD     SOURCE
---------- ----------- -------------------- ---------- --------------------------------------------------------------------------------
     10134       10133 json/collection      GET        select * from employees where employee_id=:id
     10102       10101 json/collection      GET        select to_char(sysdate,'dd-mon-yyyy hh24:mi:ss') as current_date from dual

Automatic Enabling of Schema Objects for REST Access (AutoREST)

So far we have seen how to fetch data from tables, but how do we modify them ? When looking in the official documentation for how to update, delete and insert rows I ended up in the AutoREST chapter. This is in fact linked to two procedures of the ORDS package that have remained unused so far: ENABLE_SCHEMA and ENABLE_OBJECT.

The shortest and most self-explanatory definition of AutoREST is:

AutoREST is a quick and easy way to expose database tables as REST resources.

By default AutoREST authentication is enabled, but when using ENABLE_SCHEMA you might want to deactivate it for easier testing (needless to say you must not do this in production):

SQL> exec ords.enable_schema(p_schema => 'HR', p_url_mapping_type => 'BASE_PATH', p_url_mapping_pattern => 'hr', p_auto_rest_auth => FALSE);

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select parsing_schema, type, pattern, status, auto_rest_auth from user_ords_schemas;

PARSING_SCHEMA                 TYPE       PATTERN         STATUS                         AUTO_REST_AUTH
------------------------------ ---------- --------------- ------------------------------ ------------------------------
HR                             BASE_PATH  hr              ENABLED                        DISABLED

This means we can display meta-data using a special url:

ords03
ords03

Objects, on the contrary, are not enabled by default:

SQL> set pages 1000
SQL> col parsing_object for a20
SQL> col object_alias for a20
SQL> select parsing_object, object_alias, type, status, auto_rest_auth from user_ords_objects;

PARSING_OBJECT       OBJECT_ALIAS         TYPE                           STATUS                         AUTO_REST_AUTH
-------------------- -------------------- ------------------------------ ------------------------------ ------------------------------
ADD_JOB_HISTORY      add_job_history      PROCEDURE                      DISABLED                       ENABLED
COUNTRIES            countries            TABLE                          DISABLED                       ENABLED
DEPARTMENTS          departments          TABLE                          DISABLED                       ENABLED
EMPLOYEES            employees            TABLE                          DISABLED                       ENABLED
EMP_DETAILS_VIEW     emp_details_view     VIEW                           DISABLED                       ENABLED
JOBS                 jobs                 TABLE                          DISABLED                       ENABLED
JOB_HISTORY          job_history          TABLE                          DISABLED                       ENABLED
LOCATIONS            locations            TABLE                          DISABLED                       ENABLED
REGIONS              regions              TABLE                          DISABLED                       ENABLED
SECURE_DML           secure_dml           PROCEDURE                      DISABLED                       ENABLED

10 rows selected.

Let's enable the EMPLOYEES table (I have chosen another alias so as not to mess with the already existing one created above). I also deactivate authentication (needless to say you must not do this in production):

SQL> exec ords.enable_object(p_enabled => TRUE, p_schema => 'HR', p_object => 'EMPLOYEES', -
     p_object_type => 'TABLE',  p_object_alias => 'emp', p_auto_rest_auth => FALSE);

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select parsing_object, object_alias, type, status, auto_rest_auth from user_ords_enabled_objects;

PARSING_OBJECT       OBJECT_ALIAS         TYPE                           STATUS                         AUTO_REST_AUTH
-------------------- -------------------- ------------------------------ ------------------------------ ------------------------------
EMPLOYEES            emp                  TABLE                          ENABLED                        DISABLED

You can get meta-data with:

ords04
ords04

And here is a list of multiple possible queries (not displaying a picture each time):

  • https://server1.domain.com:8443/ords/pdb1/hr/emp/ to display all employees
  • https://server1.domain.com:8443/ords/pdb1/hr/emp/100 to display employee id 100
  • https://server1.domain.com:8443/ords/pdb1/hr/emp/?limit=5 to display first five employees

To insert a row, I first get the current date in the REST format (ISO 8601) with:

SQL> select to_char(sysdate,'YYYY-MM-DD')||'T'||to_char(sysdate,'HH24:MI:SS')||'Z' from dual;

TO_CHAR(SYSDATE,'YYY
--------------------
2018-04-19T16:59:36Z
ords05
ords05

To delete the just inserted row:

ords06
ords06

PUT would be used to perform an upsert (insert or update). Have a look at the ORDS official documentation, it contains plenty of different cases…

REST-Enabled SQL Service

From official documentation:

The REST Enabled SQL service is a HTTPS web service that provides access to the Oracle Database SQL Engine. You can POST SQL statements to the service. The service then runs the SQL statements against Oracle database and returns the result to the client in a JSON format.

I have activated REST-Enabled SQL Service with:

[oracle@server1 ords]$ java -jar ords.war set-property restEnabledSql.active true
Apr 12, 2018 12:41:30 PM oracle.dbtools.rt.config.setup.SetProperty execute
INFO: Modified: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/defaults.xml, setting: restEnabledSql.active = true

The documentation provide below command with curl:

curl -i -X POST --user ORDSTEST:ordstest --data-binary "select sysdate from dual" -H "Content-Type: application/sql" -k https://localhost:8088/ords/ordstest/_/sql

With Insomnia it gives:

ords07
ords07
ords08
ords08
ords09
ords09

On the right part of the above screen shots we can see the correct result: 107 rows.

Configuring Secure Access to RESTful Services

We have configured our RESTful service with HTTPS access, but what if someone has been able to copy/paste the url ? Then he would be able to display all our employees' information, which is most probably not a good thing. How to restrict this ? Well, fortunately it has been implemented and you can secure RESTful API access with two methods:

  • First Party Cookie-Based Authentication
  • Third Party OAuth 2.0-Based Authentication

First Party Cookie-Based Authentication

Start by creating a role:

SQL> show user
USER is "HR"
SQL> exec ords.create_role(p_role_name => 'employees_role');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Create a privilege associated with the role:

SQL>
DECLARE
l_arr OWA.vc_arr;
BEGIN
l_arr(1) := 'employees_role';
ords.define_privilege(p_privilege_name => 'employees_priv', p_roles => l_arr, p_label => 'Employees data', -
                      p_description => 'Securing access to employees data');
commit;
END;
/

PL/SQL procedure successfully completed.

Protect the RESTful API with the newly created privilege with (procedure not documented in ORDS 18.1 at the time of writing this post):

SQL> exec ords.create_privilege_mapping(p_privilege_name => 'employees_priv', p_pattern => '/employees/*');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

With ORDS dictionary views it gives:

SQL> set lines 200
SQL> col name for a15
SQL> col label for a20
SQL> col description for a40
SQL> col pattern for a15
SQL> col privilege_name for a15
SQL> col role_name for a15
SQL> select id, name, schema_id from user_ords_roles where name='employees_role';

        ID NAME             SCHEMA_ID
---------- --------------- ----------
     10135 employees_role       10062

SQL> select id, label, name, description from user_ords_privileges where name='employees_priv';

        ID LABEL                NAME            DESCRIPTION
---------- -------------------- --------------- ----------------------------------------
     10136 Employees data       employees_priv  Securing access to employees data

SQL> select privilege_id, privilege_name, role_id, role_name from user_ords_privilege_roles where privilege_name='employees_priv';

PRIVILEGE_ID PRIVILEGE_NAME     ROLE_ID ROLE_NAME
------------ --------------- ---------- ---------------
       10136 employees_priv       10135 employees_role

SQL> select privilege_id, name, pattern from user_ords_privilege_mappings where name='employees_priv';

PRIVILEGE_ID NAME            PATTERN
------------ --------------- ---------------
       10136 employees_priv  /employees/*

Finally, as expected, the RESTful API is no longer accessible (HTTP error 401 Unauthorized):

ords10
ords10

Create a user to access the RESTful API again with:

[oracle@server1 ords]$ java -jar ords.war user hr_user employees_role
Enter a password for user hr_user:
Confirm password for user hr_user:
Apr 17, 2018 4:40:27 PM oracle.dbtools.standalone.ModifyUser execute
INFO: Created user: hr_user in file: /u01/app/oracle/product/12.2.0/dbhome_1/ords/config/ords/credentials

And then either you click on the link if you use a browser, or with Insomnia you can fill in the authentication tab as follows, and you can again access the RESTful API in a secure manner:

ords11
ords11

Third Party OAuth 2.0-Based Authentication

SQL> exec oauth.create_client(p_name => 'My Employees Application', p_grant_type => 'client_credentials', p_owner => 'Yannick', -
     p_description => 'A Vue.JS client to access Employees data', p_support_email => 'yannick@domain.com', p_privilege_names => 'employees_priv');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.
SQL> exec oauth.grant_client_role(p_client_name => 'My Employees Application', p_role_name  => 'employees_role');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

With my Insomnia tool the access is then a bit different, as I am not required to fetch the access token manually first: Insomnia is able to do it in one call. I have chosen OAuth 2 as the authentication method and I specify the token url, client id and client secret; the url I aim to fetch is still the same (the access token has been auto-filled by Insomnia):

ords12
ords12

In the Timeline tab of the response we can see that Insomnia has fetched the access token (Authorization: Bearer line) and used it to fetch my final url to get employee id 100 information:

ords13
ords13

With ORDS dictionary views it gives:

SQL> col name for a30
SQL> col client_name for a25
SQL> col response_type for a15
SQL> select name, description, response_type, client_id, client_secret from user_ords_clients;

NAME                           DESCRIPTION                              RESPONSE_TYPE   CLIENT_ID                        CLIENT_SECRET
------------------------------ ---------------------------------------- --------------- -------------------------------- --------------------------------
My Employees Application       A Vue.JS client to access Employees data TOKEN           6YoYpRwsaeH19coruhZAsw..         yM49CRbG8WkvAb5AUx07lA..

SQL> select * from user_ords_client_roles;

 CLIENT_ID CLIENT_NAME                  ROLE_ID ROLE_NAME
---------- ------------------------- ---------- ---------------
     10163 My Employees Application       10135 employees_role

SQL> select name, label, description, client_name FROM user_ords_client_privileges;

NAME                           LABEL                DESCRIPTION                              CLIENT_NAME
------------------------------ -------------------- ---------------------------------------- -------------------------
employees_priv                 Employees data       Securing access to employees data        My Employees Application

ORDS uninstall

To remove ORDS from your database use:

[oracle@server1 ords]$ java -jar ords.war uninstall advanced

References

The post Oracle REST Data Services (ORDS) installation and usage appeared first on IT World.

]]>
https://blog.yannickjaquier.com/oracle/oracle-rest-data-services-ords.html/feed 0
Application Continuity (AC) for Java – JDBC HA – part 6 https://blog.yannickjaquier.com/oracle/application-continuity-ac-jdbc-ha-part-6.html https://blog.yannickjaquier.com/oracle/application-continuity-ac-jdbc-ha-part-6.html#comments Thu, 13 Sep 2018 13:23:53 +0000 https://blog.yannickjaquier.com/?p=4321 Preamble Application Continuity (AC) feature introduced Oracle Database 12c Release 1 (12.1) masks database outages to application and end users. You must use Transaction Guard 12.2 for using this feature. Application Continuity is a feature of the Oracle JDBC Thin driver and is not supported by JDBC OCI driver. I am using the exact same […]

The post Application Continuity (AC) for Java – JDBC HA – part 6 appeared first on IT World.

]]>

Table of contents

Preamble

The Application Continuity (AC) feature, introduced in Oracle Database 12c Release 1 (12.1), masks database outages from applications and end users.

You must use Transaction Guard 12.2 for using this feature.
Application Continuity is a feature of the Oracle JDBC Thin driver and is not supported by JDBC OCI driver.

I am using the exact same test table as with Transaction Guard (TG). I start in this initial state:

SQL> select * from test01;

        ID        VAL
---------- ----------
         1          1
         2          1
         3          1
         4          1
         5          1
         6          1
         7          1
         8          1
         9          1
        10          1

10 rows selected.

You must configure your connection string using RETRY_COUNT, RETRY_DELAY, CONNECT_TIMEOUT, and TRANSPORT_CONNECT_TIMEOUT. As you can find in Oracle documentation:

Starting from Oracle Database 12c Release 2 (12.2.0.1), you must specify the value of the TRANSPORT_CONNECT_TIMEOUT parameter in milliseconds, instead of seconds.

The definition of each parameter:

  • RETRY_COUNT: To specify the number of times an ADDRESS list is traversed before the connection attempt is terminated.
  • RETRY_DELAY: To specify the delay in seconds between subsequent retries for a connection. This parameter works in conjunction with RETRY_COUNT parameter.
  • CONNECT_TIMEOUT: To specify the timeout duration in seconds for a client to establish an Oracle Net connection to an Oracle database.
  • TRANSPORT_CONNECT_TIMEOUT: To specify the transportation timeout duration in milliseconds for a client to establish an Oracle Net connection to an Oracle database.

I have defined mine as below; with these values a connection attempt can keep retrying for up to RETRY_COUNT x RETRY_DELAY = 20 x 3 = 60 seconds:

(DESCRIPTION=
  (RETRY_COUNT=20)
  (RETRY_DELAY=3)
  (CONNECT_TIMEOUT=60)
  (TRANSPORT_CONNECT_TIMEOUT=3000)
  (ADDRESS=
    (PROTOCOL=TCP)
    (HOST=rac-cluster-scan.domain.com)
    (PORT=1531))
    (CONNECT_DATA=(SERVICE_NAME=pdb1srv)))

I have used the exact same service as for Transaction Guard (TG):

[oracle@server2 ~]$ srvctl config service -db orcl -service pdb1srv
Service name: pdb1srv
Server pool: server_pool01
Cardinality: UNIFORM
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: true
Global: false
Commit Outcome: true
Failover type: TRANSACTION
Failover method:
TAF failover retries:
TAF failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb1
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Service is individually enabled on nodes:
Service is individually disabled on nodes:
CSS critical: no

Application Continuity (AC) testing

The idea is almost the same as for Transaction Guard (TG): I start a transaction and, before the commit, I kill the pmon of the instance where my Java program has started. And we expect to see… nothing, meaning that replay will re-submit the transaction and the Java program will exit successfully !
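
To know which instance hosts the session to kill, a small helper query can be used from a DBA session on the cluster (a sketch; the YJAQUIER account matches the one used in the Java code below):

SQL> select s.inst_id, i.instance_name, i.host_name
  2  from gv$session s join gv$instance i on i.inst_id = s.inst_id
  3  where s.username = 'YJAQUIER';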

Java testing code (you need to add ojdbc8.jar, ons.jar and ucp.jar to your project):

package ac01;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;
import oracle.jdbc.replay.ReplayStatistics;
import oracle.jdbc.replay.ReplayableConnection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class ac01 {
  private static void display_replay_statistics(ReplayStatistics rs) {
    System.out.println("FailedReplayCount="+rs.getFailedReplayCount());
    System.out.println("ReplayDisablingCount="+rs.getReplayDisablingCount());
    System.out.println("SuccessfulReplayCount="+rs.getSuccessfulReplayCount());
    System.out.println("TotalCalls="+rs.getTotalCalls());
    System.out.println("TotalCallsAffectedByOutages="+rs.getTotalCallsAffectedByOutages()); 
    System.out.println("TotalCallsAffectedByOutagesDuringReplay="+ rs.getTotalCallsAffectedByOutagesDuringReplay());
    System.out.println("TotalCallsTriggeringReplay="+rs.getTotalCallsTriggeringReplay());
    System.out.println("TotalCompletedRequests="+rs.getTotalCompletedRequests());
    System.out.println("TotalProtectedCalls="+rs.getTotalProtectedCalls());
    System.out.println("TotalReplayAttempts="+rs.getTotalReplayAttempts());
    System.out.println("TotalRequests="+rs.getTotalRequests());
  }

  public static void main(String[] args) throws Exception {
    PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
    Connection connection1 = null;
    Statement statement1 = null;
    ResultSet resultset1 = null;
    int i = 0;

    //To have date format in English, my Windows desktop being in French 🙂
    Locale.setDefault(new Locale("en"));
    pds.setConnectionFactoryClassName("oracle.jdbc.replay.OracleDataSourceImpl");
    pds.setUser("yjaquier");
    pds.setPassword("secure_password");
    // The RAC connection using SCAN name and HA service
    pds.setURL("jdbc:oracle:thin:@(DESCRIPTION=(RETRY_COUNT=20)(RETRY_DELAY=3)(CONNECT_TIMEOUT=60)(TRANSPORT_CONNECT_TIMEOUT=3000)" + 
        "(ADDRESS=(PROTOCOL=TCP)(HOST=rac-cluster-scan.domain.com)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=pdb1srv)))");
    pds.setConnectionPoolName("ACPool");

    pds.setMinPoolSize(10);
    pds.setMaxPoolSize(20);
    pds.setInitialPoolSize(10);

    System.out.println("Trying to obtain a new connection from pool ...");
    connection1 = pds.getConnection();
    statement1 = connection1.createStatement();
    resultset1 = statement1.executeQuery("select sys_context('USERENV','SERVER_HOST') from dual");
    while (resultset1.next()) {
      System.out.println("Working on server " + resultset1.getString(1));
    }
    // To start fresh and to avoid
    // ORA-41412: results changed during replay; failover cannot continue
    ((oracle.jdbc.replay.ReplayableConnection) connection1).endRequest(); // Explicit request end
    try {
      ((oracle.jdbc.replay.ReplayableConnection) connection1).beginRequest(); // Explicit request begin
      connection1.setAutoCommit(false);
      statement1 = connection1.createStatement();
      for (i = 1; i<=10; i++) {
        System.out.println("Update "+i+" at "+LocalDateTime.now().format(DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm:ss")));          
        // executeUpdate is the appropriate JDBC call for DML statements
        statement1.executeUpdate("update test01 set val=val+1 where id = " + i);
      }
      System.out.println("\nJust before the kill...");
      // Sleeping 30 seconds to let me kill the instance where the connection has been borrowed from the pool
      Thread.sleep(30000);
      connection1.commit();
      ((oracle.jdbc.replay.ReplayableConnection) connection1).endRequest(); // Explicit request end
    }
    catch (Exception e)
    {
      //Transaction is not recoverable ?
      System.out.println("Exception detected:");
      e.printStackTrace();
      display_replay_statistics(((oracle.jdbc.replay.ReplayableConnection) connection1).getReplayStatistics(ReplayableConnection.StatisticsReportType.FOR_CURRENT_CONNECTION));
      e = ((SQLException) e).getNextException();
      e.printStackTrace();
      resultset1.close();
      statement1.close();
      connection1.close();
      System.exit(1);
    }
    // Transaction has been recovered
    statement1 = connection1.createStatement();
    resultset1 = statement1.executeQuery("select sys_context('USERENV','SERVER_HOST') from dual");
    while (resultset1.next()) {
      System.out.println("Working on server " + resultset1.getString(1));
    }
    display_replay_statistics(((oracle.jdbc.replay.ReplayableConnection) connection1).getReplayStatistics(ReplayableConnection.StatisticsReportType.FOR_CURRENT_CONNECTION));
    // Release resources once, in the proper order: result set, statement, then connection
    resultset1.close();
    statement1.close();
    connection1.close();
    System.out.println("Normal exit");
  }
}

Output display:

ac01
ac01

So we see that the connection I have borrowed from the pool is on server2.domain.com. The kill of this instance is done just before the explicit commit, thanks to the 30 second delay I have added. The magic is that the catch block has NOT been executed and the update has been replayed on the surviving node i.e. server3.domain.com.

This is double confirmed by a simple database query:

SQL> select * from test01;

        ID        VAL
---------- ----------
         1          2
         2          2
         3          2
         4          2
         5          2
         6          2
         7          2
         8          2
         9          2
        10          2

10 rows selected.

Issues encountered

No more data to read from socket

The complete error message is the following:

java.sql.SQLRecoverableException: No more data to read from socket
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:119)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C7Ocommoncall.doOCOMMIT(T4C7Ocommoncall.java:72)
at oracle.jdbc.driver.T4CConnection.doCommit(T4CConnection.java:961)
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:1937)
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:1942)
at oracle.jdbc.proxy.oracle$1jdbc$1replay$1driver$1TxnReplayableConnection$2oracle$1jdbc$1internal$1OracleConnection$$$Proxy.commit(Unknown Source)
at ac01.ac01.main(ac01.java:100)

You will find plenty of Oracle notes about this issue on MOS. In my case the culprit was the connection1.setAutoCommit(false); call that was made outside of the begin/end request block, as sketched below.
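
A minimal sketch of the corrected ordering, reusing the variables of the program above: the autocommit change sits inside the request boundaries, so it belongs to the call history the driver records and can replay.

((oracle.jdbc.replay.ReplayableConnection) connection1).beginRequest();
connection1.setAutoCommit(false); // now recorded inside the request
statement1 = connection1.createStatement();
statement1.executeUpdate("update test01 set val=val+1 where id = 1");
connection1.commit();
((oracle.jdbc.replay.ReplayableConnection) connection1).endRequest();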

Closed Statement

The complete trace is:

java.sql.SQLException: Closed Statement
	at oracle.jdbc.driver.OracleClosedStatement.executeQuery(OracleClosedStatement.java:2431)
	at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:366)
	at oracle.jdbc.proxy.oracle$1jdbc$1replay$1driver$1TxnReplayableStatement$2oracle$1jdbc$1internal$1OracleStatement$$$Proxy.executeQuery(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at oracle.jdbc.replay.driver.TxnReplayableBase.replayOneCall(TxnReplayableBase.java:520)
	at oracle.jdbc.replay.driver.TxnFailoverManagerImpl.replayAllBeforeLastCall(TxnFailoverManagerImpl.java:1773)
	at oracle.jdbc.replay.driver.TxnFailoverManagerImpl.handleOutageInternal(TxnFailoverManagerImpl.java:1425)
	at oracle.jdbc.replay.driver.TxnFailoverManagerImpl.handleOutage(TxnFailoverManagerImpl.java:989)
	at oracle.jdbc.replay.driver.TxnReplayableBase.onErrorForAll(TxnReplayableBase.java:339)
	at oracle.jdbc.replay.driver.TxnReplayableConnection.onErrorForAll(TxnReplayableConnection.java:395)
	at oracle.jdbc.replay.driver.TxnReplayableBase.onErrorVoidForAll(TxnReplayableBase.java:262)
	at oracle.jdbc.replay.driver.TxnReplayableConnection.onErrorVoidForAll(TxnReplayableConnection.java:388)
	at oracle.jdbc.proxy.oracle$1jdbc$1replay$1driver$1TxnReplayableConnection$2oracle$1jdbc$1internal$1OracleConnection$$$Proxy.commit(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at oracle.ucp.jdbc.proxy.JDBCConnectionProxyFactory.invoke(JDBCConnectionProxyFactory.java:329)
	at oracle.ucp.jdbc.proxy.ConnectionProxyFactory.invoke(ConnectionProxyFactory.java:50)
	at com.sun.proxy.$Proxy17.commit(Unknown Source)
	at ac01.ac01.main(ac01.java:112)

I had it on top of the "No more data to read from socket" one. My feeling is that the Java block between beginRequest() and endRequest() must be fully re-runnable, so I was obliged to re-create the statement inside this block (even if one had been created just above):

statement1 = connection1.createStatement();
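
My reading of the failure, an assumption on my side rather than an official explanation: when a statement created before beginRequest() is reused inside the request, the replay re-runs its calls against a handle the driver already considers closed, hence the error. As an anti-pattern sketch:

statement1 = connection1.createStatement(); // created OUTSIDE the request
((oracle.jdbc.replay.ReplayableConnection) connection1).beginRequest();
// at replay time the call below targets a handle seen as closed:
resultset1 = statement1.executeQuery("select val from test01 where id = 1");
// failover here => java.sql.SQLException: Closed Statement during replay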

ORA-41412: results changed during replay; failover cannot continue

This one was the trickiest I had! To even see it I was obliged to update my Java code by adding to the catch part:

e = ((SQLException) e).getNextException();
e.printStackTrace();

The full trace is:

java.sql.SQLException: ORA-41412: results changed during replay; failover cannot continue
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:446)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1054)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:623)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
	at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:213)
	at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:37)
	at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:733)
	at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:904)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1082)
	at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1276)
	at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:366)
	at oracle.jdbc.proxy.oracle$1jdbc$1replay$1driver$1TxnReplayableStatement$2oracle$1jdbc$1internal$1OracleStatement$$$Proxy.executeQuery(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at oracle.jdbc.replay.driver.TxnReplayableBase.replayOneCall(TxnReplayableBase.java:520)
	at oracle.jdbc.replay.driver.TxnFailoverManagerImpl.replayAllBeforeLastCall(TxnFailoverManagerImpl.java:1773)
	at oracle.jdbc.replay.driver.TxnFailoverManagerImpl.handleOutageInternal(TxnFailoverManagerImpl.java:1425)
	at oracle.jdbc.replay.driver.TxnFailoverManagerImpl.handleOutage(TxnFailoverManagerImpl.java:989)
	at oracle.jdbc.replay.driver.TxnReplayableBase.onErrorForAll(TxnReplayableBase.java:339)
	at oracle.jdbc.replay.driver.TxnReplayableConnection.onErrorForAll(TxnReplayableConnection.java:395)
	at oracle.jdbc.replay.driver.TxnReplayableBase.onErrorVoidForAll(TxnReplayableBase.java:262)
	at oracle.jdbc.replay.driver.TxnReplayableConnection.onErrorVoidForAll(TxnReplayableConnection.java:388)
	at oracle.jdbc.proxy.oracle$1jdbc$1replay$1driver$1TxnReplayableConnection$2oracle$1jdbc$1internal$1OracleConnection$$$Proxy.commit(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at oracle.ucp.jdbc.proxy.JDBCConnectionProxyFactory.invoke(JDBCConnectionProxyFactory.java:329)
	at oracle.ucp.jdbc.proxy.ConnectionProxyFactory.invoke(ConnectionProxyFactory.java:50)
	at com.sun.proxy.$Proxy17.commit(Unknown Source)
	at ac01.ac01.main(ac01.java:127)
Caused by: Error : 41412, Position : 49, Sql = select sys_context('USERENV','SERVER_HOST') from dual, OriginalSql = 
select sys_context('USERENV','SERVER_HOST') from dual, Error Msg = ORA-41412: results changed during replay; failover cannot continue

	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:498)
	... 34 more

Of course the result of the query returning the server name I am connected to has changed: I killed the initial instance! Even the Oracle examples you can find on MOS display the working instance, so it is really strange to me that I hit this issue. To solve it I added the endRequest() call just before the block I expect to be replayable:

((oracle.jdbc.replay.ReplayableConnection) connection1).endRequest();
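
Putting the calls together, this is the ordering the program above ends up with: the implicit first request, which contains the instance-dependent SERVER_HOST query, is closed before the request that must survive a replay is opened.

// Close the implicit first request: it contains the SERVER_HOST query
// whose result changes once the instance is killed.
((oracle.jdbc.replay.ReplayableConnection) connection1).endRequest();
// Open the request that must be replayable.
((oracle.jdbc.replay.ReplayableConnection) connection1).beginRequest();
// ... updates and commit ...
((oracle.jdbc.replay.ReplayableConnection) connection1).endRequest();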

Diagnostics and Tracing

I have not been able to activate tracing of replay events, either on the console or in a log file. The documentation says you can use:

oracle.jdbc.internal.replay.level = FINER|FINEST

And use, for example, to log to the console:

oracle.jdbc.internal.replay.level = FINER
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = ALL
java.util.logging.ConsoleHandler.formatter = java.util.logging.XMLFormatter

But it does not display anything for me...
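
A guess rather than a confirmed fix: my understanding is that the production ojdbc8.jar is built with logging disabled, so the debug jar (ojdbc8_g.jar) would be needed, and the logging configuration must actually be picked up by the JVM. A sketch of how I would force it to load (the replay_logging.properties file name is mine):

import java.io.FileInputStream;
import java.util.logging.LogManager;

public class ReplayTraceSetup {
  public static void main(String[] args) throws Exception {
    // Assumptions: replay_logging.properties contains the settings quoted
    // above and ojdbc8_g.jar (debug jar) replaces ojdbc8.jar on the classpath.
    LogManager.getLogManager()
        .readConfiguration(new FileInputStream("replay_logging.properties"));
    // ... then run the Application Continuity test program ...
  }
}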

References

  • Application Continuity for Java
  • "java.sql.SQLRecoverableException: No more data to read from socket" Using Application Continuity with 12c JDBC Driver (Doc ID 2140406.1)
  • How To Test Application Continuity Using A Standalone Java Program (Doc ID 1602233.1)
  • Application Continuity Throws Exception No more data to read from socket For Commits After Failover (Doc ID 2197029.1)
  • Java Client Failover With Application Continuity Fails With java.sql.SQLRecoverableException: No more data to read from socket (Doc ID 2294143.1)
  • Configuring Oracle Universal Connection Pool (UCP) for Planned Outages (Doc ID 1593786.1)
  • oracle.jdbc.replay Interface ReplayStatistics
  • Package oracle.jdbc.replay
  • Application Continuity

Transaction Guard (TG) for Java – JDBC HA – part 5

Table of contents

Preamble

Transaction Guard (TG) is a database feature that lets applications ensure every transaction is executed at most once in case of a planned or unplanned outage. In the background each transaction is associated with a logical transaction ID that lets you determine, afterward, if the transaction has committed and has completed. The typical example given is a transaction that increases the salary of all company employees by 5%. Without Transaction Guard, if the application fails you cannot blindly rerun the transaction from scratch: aside from making employees very happy, running it twice would give them a 10.25% raise (1.05 x 1.05 = 1.1025). With Transaction Guard the application layer can know how the transaction went and display a warning, or gray out the submit button, to avoid a second dramatic rerun.

As we have seen in part 1, you must use the JDBC Thin driver to use this feature!

For testing I have created below test table:

SQL> create table test01 as select level as id, 1 as val from dual connect by level <= 10;

Table created.

SQL> desc test01;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 VAL                                                NUMBER

SQL> select * from test01;

        ID        VAL
---------- ----------
         1          1
         2          1
         3          1
         4          1
         5          1
         6          1
         7          1
         8          1
         9          1
        10          1

10 rows selected.

The idea is to update each row using the id column as a key and to pause between the updates so that I can kill the session and simulate a crash scenario. By default the Java connection has the autocommit property set to TRUE.

From Oracle Technology Network the requirements for TG are:

  • Use Oracle Database Release 12.1 or later.
  • Use an application service for all database work. Create the service using srvctl if using RAC or DBMS_SERVICE if not using RAC. You may also use GDSCTL.
  • Set the following properties on the service – COMMIT_OUTCOME = TRUE for Transaction Guard.
  • Grant execute permission on DBMS_APP_CONT package to the application user.
  • Increase DDL_LOCK_TIMEOUT if using Transaction Guard with DDL statements (for example, 10 seconds).

In part 1 of this series I chose only a few options for my service and so it did not satisfy the minimum requirements for TG. I had to drop and recreate the service using the mandatory option. See the issues encountered section for this.

The Oracle PL/SQL supplied package to know if a transaction has committed and has completed is DBMS_APP_CONT.GET_LTXID_OUTCOME:

SQL> desc DBMS_APP_CONT
PROCEDURE GET_LTXID_OUTCOME
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 CLIENT_LTXID                   RAW                     IN
 COMMITTED                      BOOLEAN                 OUT
 USER_CALL_COMPLETED            BOOLEAN                 OUT

This procedure has two Boolean OUT parameters, and until recently it was not possible to map a PL/SQL Boolean in Java. This is why all the blog posts (as well as the official 12cR2 documentation) I found at the time of writing wrap this procedure in a PL/SQL block that returns numbers instead of Booleans. This limitation is gone with 12cR2:

JDBC Support for Binding PL/SQL BOOLEAN type

Starting from Oracle Database 12c Release 2 (12.2.0.1), Oracle JDBC drivers support binding PL/SQL BOOLEAN type, which is a true BOOLEAN type. PLSQL_BOOLEAN binds BOOLEAN type for input or output parameters when executing a PL/SQL function or procedure. With this feature, now JDBC supports the ability to bind PLSQL_BOOLEAN type into any PL/SQL block from Java.
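
For reference, a sketch of the pre-12.2 workaround those posts rely on (variable and bind positions are mine): the BOOLEAN OUT parameters are converted to numbers inside an anonymous PL/SQL block.

// Pre-12.2 workaround sketch: wrap GET_LTXID_OUTCOME in an anonymous block
// and convert the BOOLEAN OUT parameters to NUMBER for JDBC.
String query =
  "declare" +
  "  l_committed boolean;" +
  "  l_completed boolean;" +
  "begin" +
  "  dbms_app_cont.get_ltxid_outcome(?, l_committed, l_completed);" +
  "  ? := case when l_committed then 1 else 0 end;" +
  "  ? := case when l_completed then 1 else 0 end;" +
  "end;";
CallableStatement callStmt1 = connection.prepareCall(query);
callStmt1.setObject(1, ltxid);
callStmt1.registerOutParameter(2, OracleTypes.NUMBER);
callStmt1.registerOutParameter(3, OracleTypes.NUMBER);
callStmt1.execute();
boolean committed = callStmt1.getInt(2) == 1;
boolean completed = callStmt1.getInt(3) == 1;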

Transaction Guard (TG) testing

Java testing code:

package tg01;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLRecoverableException;
import java.sql.Statement;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

import oracle.jdbc.LogicalTransactionId;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleTypes;
import oracle.jdbc.pool.OracleDataSource;

public class tg01 {
  private static Boolean[] dbms_app_cont(Connection connection, LogicalTransactionId ltxid) throws SQLException {
    Boolean[] result;
    String query = "begin dbms_app_cont.get_ltxid_outcome(?, ?, ?); end;";
    CallableStatement callStmt1 = null;

    result = new Boolean[2];
    callStmt1 = connection.prepareCall(query);
    callStmt1.setObject(1, ltxid);
    callStmt1.registerOutParameter(2, OracleTypes.PLSQL_BOOLEAN);
    callStmt1.registerOutParameter(3, OracleTypes.PLSQL_BOOLEAN);
    callStmt1.execute();

    result[0] = callStmt1.getBoolean(2);
    result[1] = callStmt1.getBoolean(3);
    return result;
  }

  public static void main(String[] args) throws Exception {
    Connection connection1=null, connection2 = null;
    Statement statement1 = null;
    ResultSet resultset1 = null;
    OracleDataSource ods1 = new OracleDataSource();
    int i = 1;
    LogicalTransactionId ltxid = null;
    Boolean[] ltxid_result;

    ltxid_result = new Boolean[2];

    try {
      Class.forName("oracle.jdbc.driver.OracleDriver");
    }
    catch (ClassNotFoundException e) {
      System.out.println("Where is your Oracle JDBC Thin driver ?");
      e.printStackTrace();
      System.exit(1);
    }

    System.out.println("Oracle JDBC Driver Registered!");

    try {
      ods1.setUser("yjaquier");
      ods1.setPassword("secure_password");
      ods1.setURL("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac-cluster-scan.domain.com)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=pdb1srv)))");
      connection1 = ods1.getConnection();
    }
    catch (SQLException e) {
      System.out.println("Connection Failed! Check output console");
      e.printStackTrace();
    }
    System.out.println("Connected to Oracle database...");
    statement1 = connection1.createStatement();
    resultset1 = statement1.executeQuery("select sys_context('USERENV','SERVER_HOST') from dual");
    while (resultset1.next()) {
      System.out.println("Working on server " + resultset1.getString(1));
    }

    try {
      // ltxid must be taken "at the right time".
      // If you take it after the last update you might get committed = false,
      // meaning the updates made before that point are not reflected in the outcome.
      ltxid = ((OracleConnection)connection1).getLogicalTransactionId();
      for (i=1; i<=10; i++) {
        System.out.println("\nUpdate "+i+" at "+LocalDateTime.now().format(DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm:ss")));          
        statement1.executeUpdate("update test01 set val=val + 1 where id = " + i);
        // Sleeping 10 seconds to let me kill the session
        Thread.sleep(10000);
      }
    }
    catch (SQLRecoverableException recoverableException) {
      resultset1.close();
      connection1.close();
      try {
        connection2 = ods1.getConnection();
      }
      catch (SQLException e) {
        System.out.println("Connection Failed! Check output console");
        e.printStackTrace();
      }
      ltxid_result = dbms_app_cont(connection2, ltxid);
      System.out.println("Committed: " +  ltxid_result[0] + ", user_call_completed: " + ltxid_result[1]);
      connection1 = connection2;
    }
  }
}

I start the execution of my Java program (only ojdbc8.jar must be added to your project) and after the 4th update I kill the Oracle session:

tg01: program output (screenshot)

My Java program tells us that the transaction has committed and that the user call has not completed.

If we check at database level:

SQL> select * from test01;

        ID        VAL
---------- ----------
         1          2
         2          2
         3          2
         4          1
         5          1
         6          1
         7          1
         8          1
         9          1
        10          1

10 rows selected.

As we can see in the above output, the DBMS_APP_CONT.GET_LTXID_OUTCOME procedure tells us that the commit has been done but that the user call has not completed. This is confirmed by the state of my test01 table. We clearly understand that the program cannot be re-submitted blindly and that a bit of work is mandatory to recover the situation...

As a side note, the transaction committed because the autocommit feature of the connection is TRUE by default; you can deactivate it with:

connection1.setAutoCommit(false);

And use an explicit commit at the end of the ten updates with:

connection1.commit();

If you kill the session after the ten updates and just before the commit (a slight modification of the above Java code), the Java program will display:

Committed: false, user_call_completed: false

In this situation you know the table is unchanged and that you can safely resubmit the operation, as sketched below...
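
A sketch of how the two flags could drive the application logic, reusing the dbms_app_cont() helper of the program above (the rerunUpdates() helper is hypothetical, only there to show the branching):

Boolean[] outcome = dbms_app_cont(connection2, ltxid);
if (!outcome[0]) {
  // committed = false: the database is unchanged, safe to resubmit
  rerunUpdates(connection2); // hypothetical helper re-running the ten updates
} else if (!outcome[1]) {
  // committed = true but the user call did not complete: the data is there,
  // but anything the call returned (counts, out binds) was lost
  System.out.println("Committed but call incomplete: check before redoing work");
} else {
  // committed and completed: nothing to redo
  System.out.println("Transaction fully applied");
}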

Issues encountered

In my initial testing I received an error message when calling DBMS_APP_CONT.GET_LTXID_OUTCOME procedure:

ORA-14903: Corrupt logical transaction detected.
ORA-06512: at "SYS.DBMS_APP_CONT", line 20
ORA-06512: at "SYS.DBMS_APP_CONT", line 71
ORA-06512: at line 1

This was simply coming, even if it is clearly written in the documentation, from my service not satisfying the minimum requirements. So I had to drop and recreate the pdb1srv service we created in part 1 of this series, using the mandatory commit_outcome parameter (the retention value I specify is in fact the default, 24 hours):

[oracle@server2 ~]$ srvctl stop service -d orcl -service pdb1srv
[oracle@server2 ~]$ srvctl remove service -d orcl -service pdb1srv
[oracle@server2 ~]$ srvctl add service -db orcl -pdb pdb1 -service pdb1srv -notification TRUE -serverpool server_pool01 \
                    -failovertype TRANSACTION -commit_outcome TRUE -retention 86400
[oracle@server2 ~]$ srvctl start service -db orcl -service pdb1srv
[oracle@server2 ~]$ crsctl stat res ora.orcl.pdb1srv.svc -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.orcl.pdb1srv.svc
      1        ONLINE  ONLINE       server2                  STABLE
      2        ONLINE  ONLINE       server3                  STABLE
--------------------------------------------------------------------------------

If later on you do not recall what you have configured for your service:

[oracle@server2 ~]$ srvctl config service -db orcl -service pdb1srv
Service name: pdb1srv
Server pool: server_pool01
Cardinality: UNIFORM
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: true
Global: false
Commit Outcome: true
Failover type: TRANSACTION
Failover method:
TAF failover retries:
TAF failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb1
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Service is individually enabled on nodes:
Service is individually disabled on nodes:
CSS critical: no
