If, by mistake or bad luck, you fill your archive log directory, your customers will experience it as an Interruption To Production (ITP). If you can connect to the database when the problem occurs, and if users' sessions have not yet been terminated, you can immediately identify the culprit with something like:
SQL> SET lines 150
SQL> col program FOR a30
SQL> col name FOR a30
SQL> SELECT s.sid, s.username, s.program, sn.name,
     DECODE(sn.class, 1, 'User', 2, 'Redo', 4, 'Enqueue', 8, 'Cache', 16, 'OS',
            32, 'Real Application Clusters', 64, 'SQL', 128, 'Debug', '????') AS "Type",
     ROUND(t.VALUE/(1024*1024)) AS "Redo Size MB"
     FROM V$SESSION s, V$SESSTAT t, V$STATNAME sn
     WHERE s.sid = t.sid
     AND t.statistic# = sn.statistic#
     AND sn.name = 'redo size'
     AND ROUND(t.VALUE/(1024*1024)) != 0
     ORDER BY SIGN(t.VALUE), t.VALUE DESC;
If you are not able to connect, or have been too busy purging old archived log files or enlarging the archive log directory (or fast recovery area), then you might want to investigate later on what caused this sudden increase in redo generation.
As the database was blocked, AWR reports have most probably not been generated (this requires the Diagnostics and Tuning Pack, Enterprise Edition paid options), but even if (by miracle) they are there, you will realize that the table you would need should be called DBA_HIST_SESSTAT. This table does not exist, maybe because of the overhead it would cause. The only available option is to use LogMiner: ingest all the generated archived log files and dig into their contents to understand which account and/or command(s) generated the most redo information.
Even if this post was written with 10g and 11g, it is still valid up to 12c…
Load the first archived log file of your series with:
SQL> EXEC DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/arch/s2/1_495664_415958725.dbf', options => DBMS_LOGMNR.NEW);

PL/SQL PROCEDURE successfully completed.
Then load all the other archived log files with:
SQL> EXEC DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/arch/s2/1_495665_415958725.dbf', options => DBMS_LOGMNR.ADDFILE);

PL/SQL PROCEDURE successfully completed.

. . .

SQL> EXEC DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/arch/s2/1_495684_415958725.dbf', options => DBMS_LOGMNR.ADDFILE);

PL/SQL PROCEDURE successfully completed.
You can also do it more concisely with a PL/SQL block:
SET serveroutput ON SIZE 999999
DECLARE
  i NUMBER;
BEGIN
  i:=495665;
  WHILE (i<=495684)
  LOOP
    DBMS_OUTPUT.PUT_LINE('Adding /oracle/arch/s2/1_'||TO_CHAR(i)||'_415958725.dbf');
    DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/arch/s2/1_'||TO_CHAR(i)||'_415958725.dbf', options => DBMS_LOGMNR.ADDFILE);
    i:=i+1;
  END LOOP;
END;
/
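The same generation logic can be sketched outside the database, for instance to prepare the list of ADD_LOGFILE calls up front; a minimal sketch (the helper name is mine, and it assumes the thread_sequence_resetlogsid.dbf naming convention of the files in this example):

```python
# Sketch: build the DBMS_LOGMNR.ADD_LOGFILE calls for a range of log
# sequence numbers, assuming the thread_sequence_resetlogsid.dbf naming
# convention of the archived log files used in this example.
def add_logfile_statements(path, thread, first_seq, last_seq, resetlogs_id):
    statements = []
    for seq in range(first_seq, last_seq + 1):
        # The first file is registered with NEW, the following ones with ADDFILE
        option = "NEW" if seq == first_seq else "ADDFILE"
        logfile = "{}/{}_{}_{}.dbf".format(path, thread, seq, resetlogs_id)
        statements.append(
            "EXEC DBMS_LOGMNR.ADD_LOGFILE(logfilename => '{}', "
            "options => DBMS_LOGMNR.{});".format(logfile, option)
        )
    return statements

for stmt in add_logfile_statements('/oracle/arch/s2', 1, 495664, 495684, 415958725):
    print(stmt)
```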
Then start LogMiner, so that you can query the V$LOGMNR_xxx views, with:
SQL> EXEC DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER);

PL/SQL PROCEDURE successfully completed.
The DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG option is quite interesting as it avoids generating a dictionary file and uses the dictionary of the database you are connected to, which is in most cases the one where you want to analyze the archived log files…
If for performance reasons you really do not want to do this then two other options are available:
- Generate the dictionary in the redo log, but for this you need supplemental logging activated, which is most probably not the case:
SQL> SELECT supplemental_log_data_min FROM v$database;

SUPPLEME
--------
NO
- Or generate a dictionary flat file. The bad news is that Oracle still uses the old utl_file_dir way of working that they themselves suggest to stop using:
SQL> EXEC dbms_logmnr_d.build('dictionary.ora', '/tmp', dbms_logmnr_d.store_in_flat_file);
BEGIN dbms_logmnr_d.build('dictionary.ora', '/tmp', dbms_logmnr_d.store_in_flat_file); END;

*
ERROR AT line 1:
ORA-01308: initialization parameter utl_file_dir IS NOT SET
ORA-06512: AT "SYS.DBMS_LOGMNR_INTERNAL", line 7384
ORA-06512: AT "SYS.DBMS_LOGMNR_INTERNAL", line 7398
ORA-06512: AT "SYS.DBMS_LOGMNR_INTERNAL", line 7546
ORA-06512: AT "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: AT line 1
So you need to restart your production database, and you can finally extract the dictionary file that you will transfer to your working server (below, utl_file_dir has been set to the /tmp directory):
SQL> EXEC dbms_logmnr_d.build('dictionary.ora', '/tmp', dbms_logmnr_d.store_in_flat_file);

PL/SQL PROCEDURE successfully completed.
If you work with a dictionary in a flat file, the command to start the analysis of archived log files is more something like the following (DBMS_LOGMNR.DDL_DICT_TRACKING was preventing the dictionary load):
SQL> EXEC DBMS_LOGMNR.START_LOGMNR(dictfilename => '', options => DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER + DBMS_LOGMNR.PRINT_PRETTY_SQL);

PL/SQL PROCEDURE successfully completed.
You should be able to confirm that the dictionary has been correctly loaded with the queries below, but it was not working very well in my case:
SQL> col filename FOR a40
SQL> col db_version FOR a10
SQL> SELECT db_name, db_character_set, db_version, filename, info, status FROM v$logmnr_dictionary;

DB_NAME   DB_CHARACTER_SET               DB_VERSION FILENAME                                 INFO                                 STATUS
--------- ------------------------------ ---------- ---------------------------------------- -------------------------------- ----------
LOTIDLIV  WE8ISO8859P15                  22.214.171.124.0 /log_recv/TE/bci/yannick/dictionary.ora                                            0

SQL> SELECT current_state, loaded, percent_done FROM v$logmnr_dictionary_load;

CURRENT_STATE                    LOADED  PERCENT_DONE
-------------------------------- ------- ------------
UNKNOWN                          NO                 0
Once the analysis has started, check that all archived log files have been successfully loaded:
SQL> SET lines 130
SQL> col FILENAME FOR a50
SQL> SET pages 100
SQL> SELECT filename, blocksize, filesize, info, status FROM v$logmnr_logs;

FILENAME                                            BLOCKSIZE   FILESIZE INFO                                 STATUS
-------------------------------------------------- ---------- ---------- -------------------------------- ----------
/oracle/arch/s2/1_495664_415958725.dbf                   1024  294758400                                           0
/oracle/arch/s2/1_495665_415958725.dbf                   1024  294311936                                           0
/oracle/arch/s2/1_495666_415958725.dbf                   1024  299108352                                           0
/oracle/arch/s2/1_495667_415958725.dbf                   1024  299392000                                           0
/oracle/arch/s2/1_495668_415958725.dbf                   1024  297311232                                           0
/oracle/arch/s2/1_495669_415958725.dbf                   1024  299233280                                           0
/oracle/arch/s2/1_495670_415958725.dbf                   1024  299392000                                           0
/oracle/arch/s2/1_495671_415958725.dbf                   1024  296854528                                           0
/oracle/arch/s2/1_495672_415958725.dbf                   1024  295421952                                           0
/oracle/arch/s2/1_495673_415958725.dbf                   1024  295309312                                           0
/oracle/arch/s2/1_495674_415958725.dbf                   1024  297478144                                           0
/oracle/arch/s2/1_495675_415958725.dbf                   1024  298231808                                           0
/oracle/arch/s2/1_495676_415958725.dbf                   1024  294882304                                           0
/oracle/arch/s2/1_495677_415958725.dbf                   1024  297648128                                           0
/oracle/arch/s2/1_495678_415958725.dbf                   1024  294349824                                           0
/oracle/arch/s2/1_495679_415958725.dbf                   1024  294217728                                           0
/oracle/arch/s2/1_495680_415958725.dbf                   1024  294249472                                           0
/oracle/arch/s2/1_495681_415958725.dbf                   1024  294775808                                           0
/oracle/arch/s2/1_495682_415958725.dbf                   1024  296793088                                           0
/oracle/arch/s2/1_495683_415958725.dbf                   1024  297419776                                           0
/oracle/arch/s2/1_495684_415958725.dbf                   1024  294231040                                           0

21 ROWS selected.
Where the STATUS column values mean:
- 0 – Will be read
- 1 – First to be read
- 2 – Not needed
- 4 – Missing log file
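A minimal sketch of this mapping, for checking the codes programmatically (the helper name is mine):

```python
# Sketch: the V$LOGMNR_LOGS STATUS codes listed above as a lookup table.
LOGMNR_LOG_STATUS = {
    0: 'Will be read',
    1: 'First to be read',
    2: 'Not needed',
    4: 'Missing log file',
}

def decode_log_status(status):
    # Unknown codes are reported as-is rather than raising an error
    return LOGMNR_LOG_STATUS.get(status, 'Unknown status {}'.format(status))
```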
If you expect to query V$LOGMNR_CONTENTS with your favorite graphical query tool (like SQL Developer), then you must copy it into a temporary table (the V$LOGMNR_CONTENTS view is available only in your session). The tuple (RS_ID, SSN) uniquely identifies a logical row change in this view (even if SSN is quite often equal to 0). So if the operation is not too long, issue something like:
CREATE TABLE yjaquier.logmnr_contents NOLOGGING TABLESPACE user_temp AS
SELECT rs_id, ssn, seg_owner, seg_name, username, rbablk, rbabyte
FROM v$logmnr_contents;
If you carefully look at the RS_ID column (log_id column is deprecated):
SQL> SELECT rs_id, ssn, log_id, rbablk, rbabyte FROM v$logmnr_contents;

RS_ID                                   SSN     LOG_ID     RBABLK    RBABYTE
-------------------------------- ---------- ---------- ---------- ----------
. . .
0x079167.0000000a.02a8                    0     495975         10        680
. . .
You can also initiate LogMiner inside SQL Developer, which avoids this temporary table creation. Please also note that if you end LogMiner or reset your session, the information is lost as well…
you can see that:
- 079167 in hexadecimal (495975 in decimal) is the archived log file id.
- 0000000a in hexadecimal (10 in decimal) is the block id of the current row.
- 02a8 in hexadecimal (680 in decimal) is the offset in the block of the current row.
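This decoding can be sketched as follows (the helper name is mine; 1024 is the redo block size of this platform, see the last chapter):

```python
# Sketch: split an RS_ID value (format 0xLLLLLL.BBBBBBBB.OOOO) into its
# archived log file id, block id and byte offset, all stored in hexadecimal,
# and compute the absolute position of the change inside the file.
def decode_rs_id(rs_id, block_size=1024):
    log_part, block_part, offset_part = rs_id.split('.')
    log_id = int(log_part, 16)     # int() accepts the leading 0x with base 16
    rbablk = int(block_part, 16)   # block id inside the archived log file
    rbabyte = int(offset_part, 16) # byte offset inside that block
    position = rbablk * block_size + rbabyte
    return log_id, rbablk, rbabyte, position

print(decode_rs_id('0x079167.0000000a.02a8'))
# → (495975, 10, 680, 10920)
```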
Before leaving the session do not forget to issue:
SQL> EXEC DBMS_LOGMNR.END_LOGMNR();

PL/SQL PROCEDURE successfully completed.
How to get redo size
What you calculate from the LogMiner view must obviously be correlated with the total size of the archived log files you have loaded; to know this, use:
SQL> SELECT SUM(filesize)/(1024*1024) AS "Archived size MB" FROM v$logmnr_logs;

Archived SIZE MB
----------------
      5936.97559
Oracle claims it is not possible to get the generated redo size per statement, as the information no longer appears in the V$LOGMNR_CONTENTS view… It does not appear explicitly in a column, but you can calculate it, unless I'm completely wrong…
Inside the V$LOGMNR_CONTENTS view, the tuple (RS_ID, SSN) uniquely identifies a logical row change, which translates into one (in most cases) or multiple rows inside the view.
For each logical row change you have the block number (RBABLK) inside the archived log file and the offset of the information in that block (RBABYTE). Following a comment on this post, and a deeper investigation during a new test after an issue we had, I have changed the query that gets this information. To calculate the redo size you need to know where the next (RS_ID, SSN) tuple starts. To get the next row's value while fetching a table, you either do a self-join or use analytic functions. I have moved my query from LAG (previous row(s)) to LEAD (next row(s)).
The block number multiplied by the archived log block size, plus the offset in the block, gives you the absolute position of the redo information inside the archived log file. If you subtract the absolute position of the current row from the absolute position of the next row, you get the redo size of the current row. I use ORDER BY rs_id, ssn to order the result set on which the analytic function will work, and PARTITION BY SUBSTR(rs_id, 1, 9) to stay inside the same archived log file. The only redo information you will miss is the last row of each archived log file, as LEAD will not find anything there (the default value is then the current value, so the computation equals 0), but this is a small error compared to the overall result…
If you sum all the lines of the below query, it should match almost exactly the size of all the archived log files you have loaded into LogMiner. I have also added a hint to parallelize the query; adjust the value to your underlying hardware:
SQL> SELECT /*+ parallel (16) */ NVL(seg_owner,'NULL') AS seg_owner, ROUND(SUM(redo_size)/(1024*1024)) AS "Size MB"
     FROM (SELECT seg_owner,
                  LEAD(rbablk*1024+rbabyte, 1, rbablk*1024+rbabyte)
                    OVER (PARTITION BY SUBSTR(rs_id,1,9) ORDER BY rs_id, ssn) - (rbablk*1024+rbabyte) AS redo_size
           FROM v$logmnr_contents
           ORDER BY rs_id, ssn)
     GROUP BY NVL(seg_owner,'NULL')
     ORDER BY 2 DESC;

SEG_OWNER                           SIZE MB
-------------------------------- ----------
SICOMDBA                               3395
NULL                                   2010
NCPM                                     97
SYS                                      84
BATCHSO                                  20
XNETDBA                                   9
PRMISINTRANET1                            9
PRMIS                                     9
INKDBA                                    1
PRISMAWEB                                 0
PRMISINTRANET2                            0
BATCHDBA                                  0
VLOGDBA                                   0
IND_PUROHIT                               0
BATCHSCD                                  0
OVO                                       0
DLH_RIF                                   0
BATCHPR                                   0
UNKNOWN                                   0
PUBLIC                                    0
SYSTEM                                    0
IND_GUPTA                                 0

22 ROWS selected.
In the above, 1024 is the redo log block size of this platform. Adjust it to your platform; see the next chapter.
The NULL segment owner is attached to start transaction, commit and internal operations, for which it is not possible to get additional information. Having supplemental logging activated would help, as Oracle says, but it would have been needed before you got the issue and needed to dig inside archived log files…
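The LEAD/PARTITION arithmetic of the query can be illustrated outside the database; below is a minimal sketch on a few hypothetical (rs_id, ssn, seg_owner, rbablk, rbabyte) rows (the values are made up for the example):

```python
from itertools import groupby

BLOCK_SIZE = 1024  # redo log block size, platform dependent (see next chapter)

# Hypothetical rows, already ordered by rs_id, ssn as in the SQL query
rows = [
    ('0x079167.00000002.0010', 0, 'SICOMDBA', 2, 16),
    ('0x079167.00000005.01f4', 0, 'SICOMDBA', 5, 500),
    ('0x079167.0000000a.02a8', 0, None, 10, 680),
    ('0x079168.00000003.0000', 0, 'NCPM', 3, 0),
    ('0x079168.00000008.0100', 0, 'NCPM', 8, 256),
]

redo_per_owner = {}
# PARTITION BY SUBSTR(rs_id, 1, 9): one group per archived log file
for _, group in groupby(rows, key=lambda r: r[0][:9]):
    group = list(group)
    for i, (rs_id, ssn, owner, rbablk, rbabyte) in enumerate(group):
        position = rbablk * BLOCK_SIZE + rbabyte
        if i + 1 < len(group):
            # LEAD: absolute position of the next change in the same file
            nxt = group[i + 1]
            next_position = nxt[3] * BLOCK_SIZE + nxt[4]
        else:
            # Last row of the file: LEAD defaults to the current position,
            # so the computed redo size is 0 (the small error discussed above)
            next_position = position
        key = owner if owner is not None else 'NULL'
        redo_per_owner[key] = redo_per_owner.get(key, 0) + (next_position - position)

print(redo_per_owner)
# → {'SICOMDBA': 8856, 'NULL': 0, 'NCPM': 5376}
```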
Redo log block size
The redo log file block size may differ depending on the platform running your Oracle database:
Log Block Size   Operating Systems
--------------   -----------------------------
512 bytes        Solaris, AIX, Windows, Linux
You can also check it directly with:
SQL> SELECT DISTINCT block_size FROM v$archived_log;

BLOCK_SIZE
----------
      1024
You can also get this value, in the session where you have instantiated LogMiner, with:
SQL> SELECT DISTINCT blocksize FROM v$logmnr_logs;

 BLOCKSIZE
----------
      1024
References

- Master Note for LogMiner (Doc ID 1264738.1)
- How to Setup LogMiner (Doc ID 111886.1)
- The LogMiner Utility (Doc ID 62508.1)
- LogMiner Utility Release 8i – 11g (Doc ID 291686.1)
- How To Determine The Cause Of Lots Of Redo Generation Using LogMiner (Doc ID 300395.1)
- LogMiner: Unknown Objects And Dictionary Mismatch In V$LOGMNR_CONTENTS (Doc ID 843032.1)
- Avoid Unexpected Outputs Returned By LogMiner (Doc ID 1934944.1)
- LogMiner V$LOGMNR_CONTENTS Column USERNAME Does Not Show Username And Is NULL (Doc ID 265484.1)