GoldenGate installation
Download the binary for your platform and database release from Oracle E-Delivery. For this test I used Oracle GoldenGate V11.1.1.0.0 for Oracle 11g on Linux x86-64.
All testing for this document was done between two Red Hat Enterprise Linux Server release 5.6 (Tikanga) servers (VMs, in fact).
My source and target databases are both 11.2.0.2.0 and both use ASM (an additional complexity, as we will see).
Then, on all servers, create a dedicated directory in which to install the product. I simply chose to create a 1 GB /goldengate filesystem.
[root@server1 /]# lvcreate -n lvol99 -L 1G vg00
Logical volume "lvol99" created
[root@server1 /]# mkfs -t ext3 -v /dev/vg00/lvol99
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Edit /etc/fstab and use:
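The fstab entry itself did not survive in this copy of the article; assuming the logical volume created above, it would look something like this (device path and options taken from my test setup, adjust to yours):

/dev/vg00/lvol99    /goldengate    ext3    defaults    1 2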
[root@server1 /]# mkdir /goldengate
[root@server1 /]# mount -a
[root@server1 /]# chown orarac:dba /goldengate
Unzip and untar the V22230-01.zip file you downloaded.
You may choose to install GoldenGate under a dedicated account, but I chose to install it with the same account that owns the database binaries. This avoids having to set environment variables; if that is not your case, pay attention to this part as well.
[orarac@server1 goldengate]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.0.0 Build 078
Linux, x64, 64bit (optimized), Oracle 11 on Jul 28 2010 13:13:42

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.

GGSCI (server1.domain.com) 1> create subdirs

Creating subdirectories under current directory /goldengate

Parameter files                /goldengate/dirprm: created
Report files                   /goldengate/dirrpt: created
Checkpoint files               /goldengate/dirchk: created
Process status files           /goldengate/dirpcs: created
SQL script files               /goldengate/dirsql: created
Database definitions files     /goldengate/dirdef: created
Extract data files             /goldengate/dirdat: created
Temporary files                /goldengate/dirtmp: created
Veridata files                 /goldengate/dirver: created
Veridata Lock files            /goldengate/dirver/lock: created
Veridata Out-Of-Sync files     /goldengate/dirver/oos: created
Veridata Out-Of-Sync XML files /goldengate/dirver/oosxml: created
Veridata Parameter files       /goldengate/dirver/params: created
Veridata Report files          /goldengate/dirver/report: created
Veridata Status files          /goldengate/dirver/status: created
Veridata Trace files           /goldengate/dirver/trace: created
Stdout files                   /goldengate/dirout: created

GGSCI (server1.domain.com) 2> exit
Check the character set of your database, so that you can set NLS_LANG accordingly:

SQL> col VALUE FOR a30
SQL> SET lines 130
SQL> SET pages 100
SQL> SELECT * FROM nls_database_parameters;

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               AL32UTF8
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              11.2.0.2.0

20 rows selected.
Then simply issue (or, better, add it to your .profile, .bash_profile, …):
[orarac@server1 goldengate]$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
Then on the source database we create a test table with:
SQL> CREATE TABLE test1 AS SELECT * FROM dba_tables;

Table created.
Remark:
You must create the structure of the table on the target database to be able to use the initial loading. Strangely, the initial load will load the data for you but will NOT create the table.
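For this test, a minimal way to pre-create the structure on the target (assuming the same CTAS source as on the primary; this exact statement is my suggestion, not taken from the original setup) could be:

-- On the target database: create the table structure only, with no rows
SQL> CREATE TABLE test1 AS SELECT * FROM dba_tables WHERE 1 = 0;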
At the Oracle database level you must activate supplemental logging on the source database with:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.

SQL> SELECT supplemental_log_data_min FROM v$database;

SUPPLEME
--------
YES
And your source database must be in archivelog mode! The target database can even be in noarchivelog mode.
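You can check the current mode of each database with this standard query (shown here as a quick sanity check, not part of the original walkthrough):

-- Must return ARCHIVELOG on the source; NOARCHIVELOG is acceptable on the target
SQL> SELECT log_mode FROM v$database;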
In the tnsnames.ora of your Oracle database home, create an alias to access your ASM instance, like the following:
+ASM.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = server1.domain.com)(Port = 1531))
    )
    (CONNECT_DATA =
      (SID = +ASM)
      (SERVER = DEDICATED)
    )
  )
Manager processes
As stated in the official documentation, a manager process must run on both the source and target servers (you start one manager process that can manage one or more extract or replicat processes). It helps to:
- Start Oracle GoldenGate processes
- Start dynamic processes
- Perform trail management
- Create event, error, and threshold reports
Create the manager parameter file (do the same on the source and target servers):
GGSCI (server1.domain.com) 1> edit param mgr
Then enter the following parameters (and any others you want):
port 7809
laginfoseconds 60
purgeoldextracts /goldengate/dirdat/aa*
autorestart er capture
autorestart er stage
autostart er capture
autostart er stage
Parameters:
- port: port of the manager process.
- laginfoseconds: how often you want lag information to be reported in the error log file (also laginfominutes, laginfohours). I have not been able to find any log file, but apparently this parameter is needed to have relevant information in the Lag column when issuing the info all command.
- purgeoldextracts: purge trail files when GoldenGate has finished processing them.
- autostart and autorestart are self-explanatory. Be careful when using them, as they will also restart the initial-loading extract/replicat if you use * for the group name…
Start the manager with:
GGSCI (server1.domain.com) 2> info mgr
Manager is DOWN!

GGSCI (server1.domain.com) 3> start mgr
Manager started.

GGSCI (server1.domain.com) 4> info mgr
Manager is running (IP port server1.domain.com.7809).

GGSCI (server1.domain.com) 5> view report mgr

***********************************************************************
                 Oracle GoldenGate Manager for Oracle
                     Version 11.1.1.0.0 Build 078
   Linux, x64, 64bit (optimized), Oracle 11 on Jul 28 2010 13:12:40

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2011-05-09 17:26:03
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Tue Jan 18 20:11:49 EST 2011, Release 2.6.32-100.26.2.el5
Node: server1.domain.com
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 4796

Parameters...

port 7809
laginfoseconds 60
purgeoldextracts /goldengate/dirdat/aa*

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

2011-05-09 17:26:03  INFO    OGG-00983  Manager started (port 7809).
Create the ggs account on both databases:
SQL> CREATE USER ggs IDENTIFIED BY ggs;

User created.
Then on the source (extract) database:
SQL> GRANT CREATE SESSION, ALTER SESSION, RESOURCE, CONNECT, SELECT ANY DICTIONARY TO ggs;

Grant succeeded.

SQL> GRANT FLASHBACK ANY TABLE, SELECT ANY TABLE TO ggs;

Grant succeeded.

SQL> GRANT EXECUTE ON dbms_flashback TO ggs;

Grant succeeded.
On the target (replicat) database:
SQL> GRANT CREATE SESSION, ALTER SESSION, RESOURCE, CONNECT, SELECT ANY DICTIONARY TO ggs;

Grant succeeded.

SQL> GRANT SELECT ANY TABLE, INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE TO ggs;

Grant succeeded.

SQL> GRANT CREATE TABLE, ALTER ANY TABLE, LOCK ANY TABLE TO ggs;

Grant succeeded.
Add supplemental logging to the tables you want to replicate (same as Streams):
[orarac@server1 goldengate]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.0.0 Build 078
HP/UX, IA64, 64bit (optimized), Oracle 10 on Jul 28 2010 14:36:33

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.

GGSCI (server1.domain.com) 1> dblogin, userid ggs, password ggs
Successfully logged into database.

GGSCI (server1.domain.com) 2> add trandata yjaquier.test1

2011-04-21 14:16:01  WARNING OGG-00869  No unique key is defined for table TEST1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.

2011-04-21 14:16:01  WARNING OGG-01387  Table YJAQUIER.TEST1 has no valid key columns, added unconditional supplemental log group for all table columns.

2011-04-21 14:16:01  WARNING OGG-00706  Failed to add supplemental log group on table YJAQUIER.TEST1 due to ORA-01031: insufficient privileges, SQL ALTER TABLE "YJAQUIER"."TEST1" ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS  /* GOLDENGATE_DDL_REPLICATION */.
Despite the GoldenGate documentation, one grant is missing (that's probably why Oracle grants DBA to this account):
SQL> GRANT ALTER ANY TABLE TO ggs;

Grant succeeded.
Relaunch the last command:
GGSCI (server1.domain.com) 3> add trandata yjaquier.test1

2011-04-21 14:17:29  WARNING OGG-00869  No unique key is defined for table TEST1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.

2011-04-21 14:17:29  WARNING OGG-01387  Table YJAQUIER.TEST1 has no valid key columns, added unconditional supplemental log group for all table columns.

Logging of supplemental redo data enabled for table YJAQUIER.TEST1.
Remark:
As you can see just above, it is strongly suggested to have a primary key on the replicated table; it speeds up the apply process (same as for a Data Guard logical standby).
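A sketch of adding such a key to our test table follows; the column list is purely illustrative (my assumption for this copy of dba_tables), you must pick columns that are really unique in your data:

-- Hypothetical key: owner + table_name identify a row in this copy of dba_tables
SQL> ALTER TABLE test1 ADD CONSTRAINT test1_pk PRIMARY KEY (owner, table_name);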
You could do the same with an ALTER TABLE statement, and you can verify at the Oracle level that it has been done:
SQL> SET lines 200
SQL> SELECT * FROM dba_log_groups WHERE owner='YJAQUIER' ORDER BY owner, table_name, log_group_type;

OWNER      LOG_GROUP_NAME   TABLE_NAME   LOG_GROUP_TYPE       ALWAYS       GENERATED
---------- ---------------- ------------ -------------------- ------------ --------------
YJAQUIER   SYS_C0011067     TEST1        ALL COLUMN LOGGING   ALWAYS       GENERATED NAME
Initial loading
Definition file
First we will create the table definition file. This readable file contains the format of the data that will be synchronized; please refer to the official documentation for the added value of working with a definition file:
GGSCI (server1.domain.com) 4> edit param defgen
Create the file below with vi:
defsfile /goldengate/dirdef/source.def, purge
userid ggs, password ggs
table yjaquier.test1;
Execute it with:
[orarac@server1 goldengate]$ defgen paramfile ./dirprm/defgen.prm

***********************************************************************
        Oracle GoldenGate Table Definition Generator for Oracle
                     Version 11.1.1.0.0 Build 078
   Linux, x64, 64bit (optimized), Oracle 11 on Jul 28 2010 13:15:49

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2011-05-09 17:52:41
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Tue Jan 18 20:11:49 EST 2011, Release 2.6.32-100.26.2.el5
Node: server1.domain.com
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 5387

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
defsfile /goldengate/dirdef/source.def, purge
userid ggs, password ***
table yjaquier.test1;
Retrieving definition for YJAQUIER.TEST1

2011-05-09 17:52:41  WARNING OGG-00869  No unique key is defined for table TEST1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.

Definitions generated for 1 tables in /goldengate/dirdef/source.def
Copy this definition file to the target server:
[orarac@server1 dirdef]$ pwd
/goldengate/dirdef
[orarac@server1 dirdef]$ scp source.def orarac@server2:/goldengate/dirdef
orarac@server2's password:
source.def
Replication process on source server
Create the replication (extract) process. Note that group names are limited to 8 characters (ERROR: Invalid group name (must be at most 8 characters).). This process will read the data on the source server and transfer it to the remote server:
GGSCI (server1.domain.com) 1> edit param srv1srv2

GGSCI (server1.domain.com) 2> shell cat /goldengate/dirprm/srv1srv2.prm
extract srv1srv2
userid ggs, password ggs
rmthost server2.domain.com, mgrport 7809
rmttask replicat, group srv1srv2
table yjaquier.test1;
Then create the command to launch it:
[orarac@server1 goldengate]$ pwd
/goldengate
[orarac@server1 goldengate]$ mkdir diroby
[orarac@server1 goldengate]$ vi ./diroby/srv1srv2.oby
[orarac@server1 goldengate]$ cat ./diroby/srv1srv2.oby
add extract srv1srv2, sourceistable
Remark:
sourceistable is the keyword to create an initial-loading extract rather than an online synchronization process.
Replication process on target server
Do the same on the target server. This process will read the data and insert it into the target tables:
GGSCI (server2.domain.com) 1> edit param srv1srv2

GGSCI (server2.domain.com) 2> shell cat "/goldengate/dirprm/srv1srv2.prm"
replicat srv1srv2
userid ggs, password ggs
sourcedefs /goldengate/dirdef/source.def
dboptions deferrefconst
map yjaquier.*, target yjaquier.*;
Remark:
dboptions deferrefconst simply delays integrity constraint validation until commit, particularly useful when working with Oracle (valid for Oracle > 9.2.0.7).
Create the start command script as well:
[orarac@server2 goldengate]$ pwd
/goldengate
[orarac@server2 goldengate]$ mkdir ./diroby
[orarac@server2 goldengate]$ vi ./diroby/srv1srv2.oby
[orarac@server2 goldengate]$ cat ./diroby/srv1srv2.oby
add replicat srv1srv2, specialrun
Remark:
specialrun is the counterpart of sourceistable and is used only when creating the initial-loading replicat.
First start the replication on the target server with:
GGSCI (server2.domain.com) 1> obey ./diroby/srv1srv2.oby

GGSCI (server2.domain.com) 2> add replicat srv1srv2, specialrun
REPLICAT added.

GGSCI (server2.domain.com) 3> start replicat srv1srv2

Sending START request to MANAGER ...
REPLICAT SRV1SRV2 starting

GGSCI (server2.domain.com) 4> info srv1srv2

REPLICAT   SRV1SRV2  Initialized   2011-05-09 18:03   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:52 ago)
Log Read Checkpoint  Not Available
Task                 SPECIALRUN
Then the same on the source server:
GGSCI (server1.domain.com) 1> obey ./diroby/srv1srv2.oby

GGSCI (server1.domain.com) 2> add extract srv1srv2, sourceistable
EXTRACT added.

GGSCI (server1.domain.com) 3> start extract srv1srv2

Sending START request to MANAGER ...
EXTRACT SRV1SRV2 starting

GGSCI (server1.domain.com) 4> info srv1srv2

EXTRACT    SRV1SRV2  Initialized   2011-05-09 18:05   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
First Record         Record 0
Task                 SOURCEISTABLE
Then, if you check on the target database, your target TEST1 table has been populated!!
Data synchronization
The process is the same as Oracle Streams (CSA):
- Capture
- Stage (or Pump)
- Apply
Remark:
We will use those names to create the extract and replicat processes.
Capture and stage processes
On the source server we create the parameter files of the capture and stage processes:
GGSCI (server1.domain.com) 1> edit param capture

GGSCI (server1.domain.com) 2> shell cat /goldengate/dirprm/capture.prm
extract capture
userid ggs, password ggs
tranlogoptions asmuser sys@+ASM, asmpassword password
exttrail /goldengate/dirdat/aa
table yjaquier.test1;

GGSCI (server1.domain.com) 3> edit param stage

GGSCI (server1.domain.com) 4> shell cat /goldengate/dirprm/stage.prm
extract stage
rmthost server2.domain.com, mgrport 7809
rmttrail /goldengate/dirdat/aa
passthru
table yjaquier.test1;
Remarks:
tranlogoptions must be used if your archived log files are stored in ASM. passthru speeds up the replication process when no data manipulation is needed on the objects.
Now we need to create and start the two processes whose parameters we have just edited. This will be done using an obey script:
GGSCI (server1.domain.com) 1> shell vi ./diroby/extract.oby

GGSCI (server1.domain.com) 2> shell cat ./diroby/extract.oby
kill extract capture
delete extract capture
add extract capture, tranlog, begin now
add exttrail /goldengate/dirdat/aa, megabytes 1, extract capture
start extract capture
kill extract stage
delete extract stage
add extract stage, exttrailsource /goldengate/dirdat/aa
add rmttrail /goldengate/dirdat/aa, megabytes 1, extract stage
start stage
Remark:
The script creates and starts capture and stage processes. We simply limit trail files to 1MB.
Apply process
On the target system we create the parameter file of the apply process:
GGSCI (server2.domain.com) 1> edit param apply

GGSCI (server2.domain.com) 2> shell cat /goldengate/dirprm/apply.prm
replicat apply
userid ggs, password ggs
discardfile /goldengate/dirout/apply.dsc, append
sourcedefs /goldengate/dirdef/source.def
insertmissingupdates
dboptions deferrefconst
map yjaquier.test1, target yjaquier.test1, colmap (usedefaults);
Remark:
insertmissingupdates, which is not the default, avoids triggering an error when the target record of an update does not exist. That seems like the expected behavior to me, so it is strange that it is not the default… The map keyword is a pretty complex one and allows you to perform any kind of transformation you can think of.
Then, as before, we create an oby script to define and start the apply process:
GGSCI (server2.domain.com) 1> shell vi ./diroby/apply.oby

GGSCI (server2.domain.com) 2> shell cat ./diroby/apply.oby
kill extract apply
delete replicat apply
add replicat apply, exttrail /goldengate/dirdat/aa, begin now, nodbcheckpoint
start replicat apply
Remark:
nodbcheckpoint does not log checkpoint information into a checkpoint table; you can specify one using checkpointtable:
SQL> SET lines 200
SQL> SELECT * FROM ggs.ggs_checkpoint;

GROUP_NA  GROUP_KEY      SEQNO        RBA AUDIT_TS                      CREATE_TS          LAST_UPDATE_TS     CURRENT_DIR
--------  ---------- --------- ---------- ----------------------------- ------------------ ------------------ -----------
APPLY     2253737335         5      28024 2011-05-11 14:12:59.000000    11-MAY-11          11-MAY-11          /goldengate
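As a sketch, a checkpoint table could be created and referenced like this (the table name is the one already used in this document; treat the exact command sequence as an illustration rather than a transcript from my test):

GGSCI (server2.domain.com) 1> dblogin, userid ggs, password ggs
GGSCI (server2.domain.com) 2> add checkpointtable ggs.ggs_checkpoint
GGSCI (server2.domain.com) 3> add replicat apply, exttrail /goldengate/dirdat/aa, begin now, checkpointtable ggs.ggs_checkpoint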
Then we start the capture and stage processes on the source server:
GGSCI (server1.domain.com) 1> obey /goldengate/diroby/extract.oby

GGSCI (server1.domain.com) 2> kill extract capture
ERROR: Invalid command.

GGSCI (server1.domain.com) 3> delete extract capture
ERROR: EXTRACT CAPTURE does not exist.

GGSCI (server1.domain.com) 4> add extract capture, tranlog, begin now
EXTRACT added.

GGSCI (server1.domain.com) 5> add exttrail /goldengate/dirdat/aa, megabytes 1, extract capture
EXTTRAIL added.

GGSCI (server1.domain.com) 6> start extract capture

Sending START request to MANAGER ...
EXTRACT CAPTURE starting

GGSCI (server1.domain.com) 7> kill extract stage
ERROR: Invalid command.

GGSCI (server1.domain.com) 8> delete extract stage
ERROR: EXTRACT STAGE does not exist.

GGSCI (server1.domain.com) 9> add extract stage, exttrailsource /goldengate/dirdat/aa
EXTRACT added.

GGSCI (server1.domain.com) 10> add rmttrail /goldengate/dirdat/aa, megabytes 1, extract stage
RMTTRAIL added.

GGSCI (server1.domain.com) 11> start stage

Sending START request to MANAGER ...
EXTRACT STAGE starting

GGSCI (server1.domain.com) 12> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     CAPTURE     00:00:00      00:00:08
EXTRACT     RUNNING     STAGE       00:00:00      00:00:06

GGSCI (server1.domain.com) 13> info *

EXTRACT    CAPTURE   Last Started 2011-05-10 17:14   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2011-05-10 18:00:27  Seqno 94, RBA 10998784

EXTRACT    STAGE     Last Started 2011-05-10 16:29   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint  File /goldengate/dirdat/aa000000
                     2011-05-10 16:57:23.000000  RBA 1547

GGSCI (server1.domain.com) 14> info capture, detail

EXTRACT    CAPTURE   Last Started 2011-05-17 14:30   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2011-05-17 15:00:21  Seqno 99, RBA 12639232

  Target Extract Trails:

  Remote Trail Name                            Seqno        RBA     Max MB

  /goldengate/dirdat/aa                            0          0          1

  Extract Source                               Begin             End

  +DATA/rac/onlinelog/group_3.262.744732371    2011-05-16 16:27  2011-05-17 15:00
  +DATA/rac/onlinelog/group_2.264.744732365    2011-05-11 17:43  2011-05-16 16:27
  +DATA/rac/onlinelog/group_1.266.744732363    2011-05-11 14:36  2011-05-11 17:43
  Not Available                                * Initialized *   2011-05-11 14:36

Current directory    /goldengate

Report file          /goldengate/dirrpt/CAPTURE.rpt
Parameter file       /goldengate/dirprm/capture.prm
Checkpoint file      /goldengate/dirchk/CAPTURE.cpe
Process file         /goldengate/dirpcs/CAPTURE.pce
Stdout file          /goldengate/dirout/CAPTURE.out
Error log            /goldengate/ggserr.log

GGSCI (server1.domain.com) 15> lag capture

Sending GETLAG request to EXTRACT CAPTURE ...
Last record lag: 1 seconds.
At EOF, no more records to process.
Then we start the apply process on target server:
GGSCI (server2.domain.com) 1> start /goldengate/diroby/apply.oby
ERROR: Invalid command.

GGSCI (server2.domain.com) 2> obey /goldengate/diroby/apply.oby

GGSCI (server2.domain.com) 3> kill extract apply
ERROR: Invalid command.

GGSCI (server2.domain.com) 4> delete replicat apply
ERROR: REPLICAT APPLY does not exist.

GGSCI (server2.domain.com) 5> add replicat apply, exttrail /goldengate/dirdat/aa, begin now, nodbcheckpoint
REPLICAT added.

GGSCI (server2.domain.com) 6> start replicat apply

Sending START request to MANAGER ...
REPLICAT APPLY starting

GGSCI (server2.domain.com) 7> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     APPLY       00:00:00      00:00:04

GGSCI (server2.domain.com) 8> info *

REPLICAT   APPLY     Last Started 2011-05-10 15:13   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:09 ago)
Log Read Checkpoint  File /goldengate/dirdat/aa000000
                     2011-05-10 16:57:18.925791  RBA 1591

GGSCI (server2.domain.com) 9> info apply, detail

REPLICAT   APPLY     Last Started 2011-05-17 14:31   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint  File /goldengate/dirdat/aa000008
                     2011-05-17 14:30:34.409181  RBA 1493

  Extract Source                  Begin             End

  /goldengate/dirdat/aa000008     2011-05-16 14:49  2011-05-17 14:30
  /goldengate/dirdat/aa000007     2011-05-11 14:36  2011-05-16 14:49
  /goldengate/dirdat/aa000006     2011-05-11 14:12  2011-05-11 14:36
  /goldengate/dirdat/aa000000     * Initialized *   2011-05-11 14:12

Current directory    /goldengate

Report file          /goldengate/dirrpt/APPLY.rpt
Parameter file       /goldengate/dirprm/apply.prm
Checkpoint file      /goldengate/dirchk/APPLY.cpr
Checkpoint table     GGS.GGS_CHECKPOINT
Process file         /goldengate/dirpcs/APPLY.pcr
Stdout file          /goldengate/dirout/APPLY.out
Error log            /goldengate/ggserr.log
Testing
SQL> SELECT table_name FROM test1 WHERE owner='YJAQUIER';

TABLE_NAME
------------------------------
T1
T3

SQL> INSERT INTO test1(owner, table_name) VALUES ('YJAQUIER','TEST1');

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT table_name FROM test1 WHERE owner='YJAQUIER';

TABLE_NAME
------------------------------
T1
T3
TEST1
And magically, on the target server:
SQL> SELECT table_name FROM test1 WHERE owner='YJAQUIER';

TABLE_NAME
------------------------------
T1
T3
TEST1
Error correction
Missing archived log file
The error is something like:
2011-05-10 15:13:23  ERROR   OGG-00446  Could not find archived log for sequence 92 thread 1 under default destinations SQL <SELECT name FROM v$archived_log WHERE sequence# = :ora_seq_no AND thread# = :ora_thread AND resetlogs_id = :ora_resetlog_id AND archived = 'YES' AND deleted = 'NO'>, error retrieving redo file name for sequence 92, archived = 1, use_alternate = 0. Not able to establish initial position for begin time 2011-05-10 14:38:39.
To solve it (accepting that you will then have to reconcile the data yourself) you can issue:
GGSCI (server1.domain.com) 1> alter extract capture, extseqno 93
EXTRACT altered.

GGSCI (server1.domain.com) 2> start capture

Sending START request to MANAGER ...
EXTRACT CAPTURE starting
Archived log in ASM
The error looks like:
2011-05-10 16:57:56  ERROR   OGG-00446  Could not find archived log for sequence 93 thread 1 under default destinations SQL <SELECT name FROM v$archived_log WHERE sequence# = :ora_seq_no AND thread# = :ora_thread AND resetlogs_id = :ora_resetlog_id AND archived = 'YES' AND deleted = 'NO'>, error retrieving redo file name for sequence 93, archived = 1, use_alternate = 0. Not able to establish initial position for sequence 93, rba 0.
While at the Oracle level you see:
SQL> SET lines 200
SQL> col name FOR a75
SQL> SELECT sequence#, name FROM v$archived_log;

 SEQUENCE# NAME
---------- ---------------------------------------------------------------------------
        93 +DATA/rac/archivelog/2011_05_10/thread_1_seq_93.270.750784485
Use the tranlogoptions asmuser sys@+ASM, asmpassword password option for the capture process.
Figures not present in target table
The error is something like:
2011-05-10 18:07:30  ERROR   OGG-01296  Error mapping from YJAQUIER.TEST1 to YJAQUIER.TEST1.

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************

Last record for the last committed transaction is the following:
___________________________________________________________________
Trail name :  /goldengate/dirdat/aa000000
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)
RecLength  :   470  (x01d6)   IO Time    : 2011-05-10 18:05:56.765745
IOType     :     3  (x03)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         94       AuditPos   : 11454992
Continued  :     N  (x00)     RecCount   :     1  (x01)

2011-05-10 18:05:56.765745 Delete               Len   470 RBA 1591
Name: YJAQUIER.TEST1
___________________________________________________________________

Reading /goldengate/dirdat/aa000000, current RBA 1591, 0 records

Report at 2011-05-10 18:07:30 (activity since 2011-05-10 18:07:30)

From Table YJAQUIER.TEST1 to YJAQUIER.TEST1:
       #                   inserts:         0
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         1
You can confirm the error by looking at the discard file you configured in the apply process:
[orarac@server2 dirout]$ pwd
/goldengate/dirout
[orarac@server2 dirout]$ head -20 apply.dsc
Oracle GoldenGate Delivery for Oracle process started, group APPLY discard file opened: 2011-05-10 18:07:22
Current time: 2011-05-10 18:07:30
Discarded record from action ABEND on error 1403

Aborting transaction on /goldengate/dirdat/aa beginning at seqno 0 rba 1591
                         error at seqno 0 rba 1591
Problem replicating YJAQUIER.TEST1 to YJAQUIER.TEST1
Record not found
Mapping problem with delete record (target format)...
*
OWNER = YJAQUIER
TABLE_NAME = TEST1
TABLESPACE_NAME = NULL
CLUSTER_NAME = NULL
IOT_NAME = NULL
STATUS = NULL
PCT_FREE = NULL
PCT_USED = NULL
One option to solve it could be to add handlecollisions to your apply parameter file (it may become your default choice if you don't want your apply process to go down each time users do *bad* updates):
GGSCI (server2.domain.com) 41> edit param apply

GGSCI (server2.domain.com) 45> shell cat ./dirprm/apply.prm
replicat apply
userid ggs, password ggs
discardfile /goldengate/dirout/apply.dsc, purge
sourcedefs /goldengate/dirdef/source.def
insertmissingupdates
dboptions deferrefconst
handlecollisions
map yjaquier.test1, target yjaquier.test1, colmap (usedefaults);
References
- Master Note – Oracle GoldenGate [ID 1298817.1]
- Oracle GoldenGate
- Oracle GoldenGate Tutorials now available …..
surender says:
Hi Yannick,
did you find a situation, where data missing on the target server, as we have recently implemented golden gate, but we have got this issue couple of times, i was late on your site.
Please help me out if you have anything in your mind regarding this.
Jatti
Yannick Jaquier says:
Hi Jatti,
We finally did not deploy GoldenGate under complex environment so I have poor return on real life example. We are going to have a massive deployment early next year I might come back with few ideas at that time. But I hope you will have solved your issue before that… Also in meanwhile GoldenGate has evolved so maybe now everything is more under control…
Thanks, Yannick.
Gusi says:
ok, we opened metalink ticket and they said use macros. thanks, gusi
Yannick Jaquier says:
Ok, nice to see that you solved, so definitively I have not well understood what you tried to achieve. Maybe you can share the final extract for other readers…
Gusi says:
Thanks,
Yes, TABLE keyword is fine. We have 70 tables to capture and apply change in golden gate from a schema having 1300 odd tables.
My question is how to use tokens keyword once for 70 tables. Currently, I can make it work by using TOKENS keyword for each table separately making extract file very big.
Is there a way I can use TOKENS keyword once. I can use TABLEEXCLUSION keyword and table hq.* but that is maintenance nightmare. Will have to add TABLEEXLUSION for every new table.
Thanks, gusi
Gusi says:
Hi,
Good article.
How can we use multiple tables for the same schema (not all tables of the schema but subset of tables) in the extract file
I was able to find that I can use this in the extract file:
TABLE hq.acct;
TABLE hq.sales;
My question is how to use tokens after this since we are populating hostname etc. Do I need to use Tokens command for each table defined in the capture parameter file?
thanks, gusi
Yannick Jaquier says:
Hi,
Thank you !
Honestly not sure to get your point, the TABLE keyword is quite complete and allow usage of wildcard. Example from Reference Guide:
TABLE fin.acc*;
Sajid Ali says:
How can we validate if GG is configured (Validation purpose)
Yannick Jaquier says:
By, for example, creating an heartbeat table and check that table is timely and well updated on target system.
mazen says:
HI Yannick,
thanks for your article it is very useful, i have a question please can we copy the trial file generated from source GoldenGate db to target GoldenGate db manually, so goldengate can apply them as it is moved automatically? …
i’m talking in case the network connections goes down and we need GoldenGate to keep working …
please advise
Thank You
Yannick Jaquier says:
Hi Mazen,
Thanks for stopping by !
I would say it should work but I have not tested it and have scratched my test environment.
If you refer to OGG Extract TCP/IP Performance troubleshooting (Doc ID 1428353.1) MOS note then manually copying trail files is what Oracle suggest to troubleshoot network performance issues.
Thanks, Yannick.
Chandra Cheedella says:
Dear Yannick,
Thank you so much for writing this wonderful article. Its impressive with all the needed details in it and I feel very comfortable now with GG setup & general operations. Job well done.
With regards.
Yannick Jaquier says:
Dear Chandra,
Thanks to you for stopping by !
Regards, Yannick.
Basil says:
Hi,
This is the issue have, we have what I will call the database “A” ( it’s an Oracle DB ), that we developers perform our (CRUD) operation. In addition we have the Oracle Goldengate product that our DBA’s setup. With this product they configured it to perform the “Change Data Capture” on database “A” and store the changes in database “B”. This was all done but configuration in Goldengate. The issue now has come up that there is no process to purge database “B”, as the volume is just growing with “Change Data Capture” operations done by the CRUD operations in database “A”. Our so called DBA is now saying that we developers have to write the code to purge database “B”, which contains about 650 tables. (sorry it’s not a well-designed database), but I am trying to push back and say to upper management that this operation should be configured in Goldengate. So my question is, Dose Goldengate have a configuration to purge the database “B”.
Thanks for your help.
Yannick Jaquier says:
Hi,
Not sure I catch your point. If data are not purged in database B but are in database A it means you have a particular configuration using something similar to IGNOREDELETES. So on database B you must implement an “applicative” purging of data based on business rules (after 6 months, …) and for me it has nothing to do with GoldenGate…
Thanks, Yannick.