Oracle GoldenGate FAQ
Q. What type of Topology does Goldengate support?
GoldenGate supports the following topologies.
1. Unidirectional
2. Bidirectional
3. Peer-to-peer
4. Broadcast
5. Consolidation
6. Cascading
Q. What are the main components of the Goldengate replication?
The replication configuration consists of the following processes.
1. Manager
2. Extract
3. Pump
4. Replicat
Q. What transaction types does Goldengate support for
Replication?
Goldengate supports both DML and DDL Replication from the source to target.
Q. What are the supplemental logging pre-requisites?
The following supplemental logging is required.
1. Database supplemental logging
2. Object level logging
Q. Why is Supplemental logging required for Replication?
When a transaction is committed on the source database, only new data is written to the redo log. However, for GoldenGate to apply these transactions on the destination database, the before-image key values are required to identify the affected rows. This data is also placed in the trail file and used to identify the rows on the destination; using the key values, the transactions are executed against them.
Q. List the minimum parameters that can be used to create the
extract process?
The following are the minimum required parameters which must be defined in the extract parameter file.
1. EXTRACT NAME
2. USERID
3. EXTTRAIL
4. TABLE
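For illustration, a minimal Extract parameter file built from just these parameters might look as follows (the group name ext1, the login, the schema and the trail prefix are all hypothetical):
EXTRACT ext1
USERID ggs, PASSWORD oracle
EXTTRAIL ./dirdat/et
TABLE scott.*;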
Q. I want to configure multiple extracts to write to the same
exttrail file? Is this possible?
Only one Extract process can write to one exttrail at a time. So you can’t configure multiple extracts to write to the same exttrail.
Q. What type of Encryption is supported in Goldengate?
Oracle Goldengate provides 3 types of Encryption.
1. Data Encryption using Blowfish.
2. Password Encryption.
3. Network Encryption.
Q. What are the different password encryption options available
with OGG?
You can encrypt a password in OGG using
1. Blowfish algorithm and
2. Advanced Encryption Standard (AES) algorithm
Q. What are the different encryption levels in AES?
You can encrypt the password/data using AES with three different key lengths:
a) 128 bit
b) 192 bit and
c) 256 bit
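For example, a password could be encrypted with a 256-bit AES key as follows (the key name superkey is hypothetical and must exist in the ENCKEYS lookup file):
GGSCI> ENCRYPT PASSWORD oracle AES256 ENCRYPTKEY superkey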
Q. What are some of the key features of GoldenGate 12c?
The following are some of the more interesting features of Oracle GoldenGate 12c:
1. Support for Multitenant Database
2. Coordinated Replicat
3. Integrated Replicat Mode
4. Use of Credential store
5. Use of Wallet and master key
6. Trigger-less DDL replication
7. Automatic adjustment of threads on RAC node failure/restart
8. Support for RAC PDML distributed transactions
9. RMAN support for mined archive logs
Q. What are the installation options available in OGG 12c?
You can install Oracle GoldenGate 12c in two ways:
1) Interactive Installation with OUI – Graphical interface
2) Silent Installation with OUI – Command Interface
Q. What is a Credential Store in OGG 12c?
The OGG Credential Store manages encrypted passwords and USERIDs that are used to interact with the local database and associates them with an alias. Instead of specifying the actual USERID and password in a command or a parameter file, you can use the alias. The Credential Store is implemented as an auto-login wallet within the Oracle Credential Store Framework (CSF).
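Once an alias exists in the credential store, it can be used in place of the USERID and password; a sketch, assuming the alias extorcl created in the example further below:
GGSCI> DBLOGIN USERIDALIAS extorcl
-- and in an Extract or Replicat parameter file:
USERIDALIAS extorcl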
Q. How to configure Credential Store in OGG 12c?
Steps to configure Oracle Credential Store are as follows:
1) By default, the Credential Store is located under the “dircrd” directory.
If you want to specify a different location, you can specify the “CREDENTIALSTORELOCATION” parameter in the GLOBALS file.
Example: CREDENTIALSTORELOCATION /u01/app/oracle/OGG_PASSWD
2) Go to the OGG home and connect to GGSCI.
cd $OGG_HOME
./ggsci
GGSCI>
Q. What command is used to create the credential store?
ADD CREDENTIALSTORE
Q. How do you add credentials to the credential store?
ALTER CREDENTIALSTORE ADD USER userid,
[PASSWORD password]
[ALIAS alias]
[DOMAIN domain]
Example: GGSCI> ALTER CREDENTIALSTORE ADD USER GGS@orcl, PASSWORD oracle ALIAS extorcl DOMAIN OracleGoldenGate
Q. How do you retrieve information from the Oracle Credential
Store?
GGSCI> INFO CREDENTIALSTORE
OR
GGSCI> INFO CREDENTIALSTORE DOMAIN OracleGoldenGate
Q. What are the different data encryption methods available in
OGG 12c?
In OGG 12c you can encrypt data with the following 2 methods:
1) Encrypt Data with Master Key and Wallet
2) Encrypt Data with ENCKEYS
Q. How do you enable Oracle GoldenGate for Oracle database
11.2.0.4?
The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 database. This is required for all modes of Extract and Replicat.
To enable Oracle GoldenGate, set the following database initialization parameter. All instances in Oracle RAC must have the same setting.
ENABLE_GOLDENGATE_REPLICATION=true
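A sketch of setting this parameter, assuming the instance uses an spfile:
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;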
Q. How does Replicat work in Coordinated Mode?
In Coordinated Mode, Replicat operates as follows:
1. Reads the Oracle GoldenGate trail.
2. Performs data filtering, mapping, and conversion.
3. Constructs SQL statements that represent source database DML or DDL transactions (in committed order).
4. Applies the SQL to the target through the SQL interface that is supported for the given target database, such as ODBC or the native database interface.
Q. What is the difference between Classic and Coordinated
Replicat?
The difference between classic mode and coordinated mode is that Replicat is multi-threaded in coordinated mode. Within a single Replicat instance, multiple threads read the trail independently and apply transactions in parallel. Each thread handles all of the filtering, mapping, conversion, SQL construction, and error handling for its assigned workload. A coordinator thread coordinates the transactions across threads to account for dependencies among the threads.
Q. How do you create a COORDINATED REPLICATE in OGG 12c?
You can create the COORDINATED REPLICATE with the following OGG Command:
ADD REPLICAT rfin, COORDINATED MAXTHREADS 50, EXTTRAIL dirdat/et
Q. If I have created a Replicat process in OGG 12c and forgot to
specify DISCARDFILE parameter. What will happen?
Starting with OGG 12c, if you don’t specify a DISCARDFILE parameter, the OGG process generates a discard file with default values whenever the process is started with the START command through GGSCI.
Q. Is it possible to start OGG EXTRACT at a specific CSN?
Yes, starting with OGG 12c you can now start Extract at a specific CSN in the transaction log or trail.
Example:
START EXTRACT fin ATCSN 12345
START EXTRACT finance AFTERCSN 67890
Q. List a few parameters which may help improve the replicat
performance?
The parameters below can be used to improve Replicat performance:
1. BATCHSQL
2. GROUPTRANSOPS
3. INSERTAPPEND
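As an illustration, these parameters could be combined in a Replicat parameter file as below; the GROUPTRANSOPS value shown is hypothetical and should be tuned per workload:
BATCHSQL
GROUPTRANSOPS 2000
INSERTAPPEND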
Q. What are the areas to monitor in Goldengate Replication?
The lag and checkpoint latency of the Extract, pump and Replicat processes are normally monitored.
Q. What is the PASSTHRU mode used for?
In pass-through mode, the Extract process does not look up the table definitions, either from the database or from a data definitions file. This increases the throughput of the data pump, as the object definition look-up is bypassed.
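A minimal data pump parameter file using PASSTHRU might look like the sketch below (the group, host and trail names are hypothetical); note that in pass-through mode no database login is required:
EXTRACT pmp1
PASSTHRU
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE scott.*;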
Q. What are the most common reasons of an Extract process
slowing down?
Some of the possible reasons are:
1. Long running batch transactions on a table.
2. Insufficient memory on the Extract side. Uncommitted, long running transactions can cause writing of a transaction to a temporary area (dirtmp) on disk. Once the transaction is committed it is read from the temporary location on the file system and converted to trail files.
3. Slow or overburdened Network.
Q. What are the most common reasons of the Replicat process
slowing down?
Some of the possible reasons are:
1. Large amount of transactions on a particular table.
2. Blocking sessions on the destination database where non-Goldengate transactions are also taking place on the same table as the replicat processing.
3. If using DBFS, writing & reading of trail files may be slow if SGA parameters are not tuned.
4. For slow Replicats, latency may be due to missing indexes on the target.
5. Replicat having to process updates and deletes of rows in very large tables.
Q. My extract was running fine for a long time. All of a sudden
it went down. I started the extract processes after 1 hour. What will happen to
my committed transactions that occurred in the database during last 1 hour?
OGG checkpoints provide fault tolerance and ensure that each transaction marked as committed is captured, and captured only once. Even if the Extract went down abnormally, when you start the process again it reads the checkpoint file to provide read consistency and transaction recovery.
Q. I have configured Oracle GoldenGate integrated capture
process using the default values. As the data load increases I see that extract
starts lagging behind by an hour (or more) and database performance degrades.
How will you resolve this performance issue?
When operating in integrated capture mode, you must make sure that you have assigned sufficient memory to STREAMS_POOL_SIZE. An undersized STREAMS_POOL_SIZE, or limiting the Streams pool to a specific amount of memory, can cause trouble.
The best practice is to allocate STREAMS_POOL_SIZE at the instance level and allocate MAX_SGA_SIZE at the GG process level, as below:
SQL> alter system set STREAMS_POOL_SIZE=3G;
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2048, PARALLELISM 4)
Q. Why would you segregate the tables in a replication
configuration? How would you do it?
In OGG you can configure replication at the schema level or at the table level, using the TABLE parameter of Extract and the MAP parameter of Replicat.
For replicating the entire database you can list all the schemas in the database in the Extract/Replicat parameter files.
Depending on the amount of redo generation, you can split the tables in a schema across multiple Extracts and Replicats to improve the performance of data replication. Alternatively, you can group a set of tables in the configuration by application functionality.
You may also need to move tables which have long running transactions into a separate Extract process to eliminate lag on the other tables.
Let’s say that you have a schema named SCOTT and it has 100 tables.
Out of these hundred tables, 50 tables are heavily utilized by application.
To improve the overall replication performance you create 3 extract and 3 replicats as follows:
Ext_1/Rep_1 –> 25 tables
Ext_2/Rep_2 –> 25 tables
Ext_3/Rep_3 –> 50 tables
Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each which are heavily utilized or generate more redo.
Ext_3/Rep_3 contains all the other 50 tables which are least used.
Q. How can we report on long running transactions?
The WARNLONGTRANS parameter can be specified with a threshold time that a transaction can be open before Extract writes a warning message to the ggs error log.
Example: WARNLONGTRANS 1h, CHECKINTERVAL 10m
Q. What command can be used to view the checkpoint information
for the extract process?
Use the following command to view the Extract checkpoint information.
GGSCI> info extract <group_name>, showch
GGSCI> info extract ext_fin, showch
Q. How is the RESTARTCOLLISION parameter different from
HANDLECOLLISIONS?
The RESTARTCOLLISION parameter is used to skip ONE transaction only in a situation when the GoldenGate process crashed and performed an operation (INSERT, UPDATE & DELETE) in the database but could not checkpoint the process information to the checkpoint file/table. On recovery it will skip the transaction and AUTOMATICALLY continue to the next operation in the trail file.
When using HANDLECOLLISIONS, GoldenGate will continue to overwrite and process transactions until the parameter is removed from the parameter file and the processes are restarted.
Q. How do you view the data which has been extracted from the
redo logs?
The logdump utility is used to open the trail files and look at the actual records that have been extracted from the redo or the archive log files.
Q. What does the RMAN-08147 warning signify when your
environment has a GoldenGate Capture Processes configured?
This occurs when the V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required by the GoldenGate Capture process and RMAN is trying to delete the archived logs. The RMAN-08147 error is raised when RMAN tries to delete these files.
When the database is open it uses the DBA_CAPTURE values to determine the log files required for mining. However, if the database is in the mount state, the V$ARCHIVED_LOG.NEXT_CHANGE# value is used.
See MetaLink note: 1581365.1
Q. How would you look at a trail file using logdump, if the
trail file is Encrypted?
You must use the DECRYPT option before viewing the data in the trail.
Q. List a few useful Logdump commands to view and search data stored in OGG trail files.
Below are a few logdump commands used on a daily basis for displaying or analyzing data stored in a trail file.
$ ./logdump – to connect to the logdump prompt
logdump> open /u01/app/oracle/dirdat/et000001 – to open a trail file in logdump
logdump> fileheader on – to view the trail file header
logdump> ghdr on – to view the record header with data
logdump> detail on – to view column information
logdump> detail data – to display HEX and ASCII data values to the column list
logdump> reclen 200 – to control how much record data is displayed
logdump> pos 0 – To go to the first record
logdump> next (or simply n) – to move from one record to another in sequence
logdump> count – counting records in a trail
Q. Why should I upgrade my GoldenGate Extract processes to
Integrated Extract?
Oracle is able to provide faster integration of new database features by moving the GoldenGate extraction processes into the database. Because of this, GoldenGate Integrated Extract has a number of features, such as support for compressed objects, which are not available in the traditional Extract. Going forward, preference should be given to creating new Extracts as Integrated Extracts and to upgrading existing traditional Extracts.
Q. What is the minimum Database version which supports
Integrated Delivery?
Oracle 11.2.0.4 is the minimum required database version which supports both Integrated Extract and Integrated Replicat.
Q. What databases support GoldenGate Integrated Delivery?
Oracle Integrated Delivery is only available for Oracle Databases.
Q. With Integrated Delivery, where can we look for the
performance stats?
With 12c, performance statistics are collected in the AWR repository, and the data is available via the normal AWR reports.
Q. What are the steps required to add a new table to an existing
replication setup?
The steps to be executed would be the following:
1. Include the new table in the Extract and pump processes.
2. Obtain the starting database SCN and copy the source table data to the target database.
3. Start the Replicat on the target at the source database SCN point.
Q. What does the GoldenGate CSN equate to, in the Oracle Database?
It is the equivalent of the Oracle database SCN transaction number.
Q. How do you create a CSV file from the Extracted data?
You will have to use the CSV Flat File Adapter to create CSV files. The source is the Extract trail files, and the adapter settings control how the CSV files are generated.
Q. What is the purpose of the DEFGEN utility?
When the source and the target schema objects are not the same (different DDL), the Replicat process needs to know the source definitions of the objects. The output from the DEFGEN utility is used in conjunction with the trail data to determine which column value in the trail belongs to which column.
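A sketch of a DEFGEN run, assuming hypothetical file names and login. First create a parameter file such as dirprm/defgen.prm:
DEFSFILE ./dirdef/scott.def
USERID ggs, PASSWORD oracle
TABLE scott.emp;
Then run the utility from the OGG home and reference the output file on the target Replicat with SOURCEDEFS:
$ ./defgen paramfile dirprm/defgen.prm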
Q. We want to setup one-way data replication for my online
transaction processing application. However there are compressed tables in the
environment. Please suggest how I can achieve it.
You must use OGG 11.2 and configure the GoldenGate Integrated Capture process to extract data from compressed tables.
Note: OGG versions prior to 11.2 do not support extracting data from compressed tables.
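A sketch of setting up an Integrated Capture Extract (the group name and login are hypothetical); the Extract must first be registered with the database:
GGSCI> DBLOGIN USERID ggs, PASSWORD oracle
GGSCI> REGISTER EXTRACT ext1 DATABASE
GGSCI> ADD EXTRACT ext1, INTEGRATED TRANLOG, BEGIN NOW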
Q. We want to setup one-way data replication using Integrated
Extract for my Oracle database running 10.2.0.4. Please suggest the best
architecture to achieve it.
The Oracle GoldenGate Integrated Capture process supports Oracle databases 10.2 and higher. But if you are running Oracle database 10.2 and want to use the Oracle GoldenGate Integrated Capture process, then you must configure the downstream topology.
Q. I am migrating my Oracle database from non-exadata to exadata
but my source OGG software and target OGG software versions are different (Say
Source is running OGG 11.1 and target is running OGG 11.2). How can I configure
my OGG process to work in such configuration?
It is recommended that all instances of Oracle GoldenGate be the same version to take advantage of the new functionality, but this is not possible all the time and is not required. In this scenario, OGG provides a parameter called ‘FORMAT RELEASE’ which allows customers to use different versions of Oracle GoldenGate Extract, trail files, and Replicat together.
Example: RmtTrail /u01/app/oracle/dirdat, Format Release 11.1
Note: The input and output trails of a data pump must have the same trail file version.
Q. What are the different OGG Initial load methods available?
OGG has two functions: one is online data replication, and the second is initial loading.
If you are replicating data between two homogeneous databases then the best method is to use a database-specific method (Exp/Imp, RMAN, transportable tablespaces, physical standby and so on). Database-specific methods are usually faster than the other methods.
If you are replicating data between two heterogeneous databases, or your replication involves complex transformations, then a database-specific method can’t be used. In those cases you can always use Oracle GoldenGate to perform the initial load.
Within Oracle GoldenGate you have 4 different ways to perform an initial load:
1. Direct Load – Faster, but doesn’t support LOB data types (12c includes support for LOBs).
2. Direct Bulk Load – Uses the SQL*Loader API for Oracle and SSIS for MS SQL Server.
3. File to Replicat – Fast, but the rmtfile limit is 2 GB. If the table can’t fit in one rmtfile you can use MAXFILES, but the Replicat needs to be registered in the target OGG home to read the rmtfiles from the source.
4. File to database utility – Depending on the target database, use SQL*Loader for Oracle, SSIS for MS SQL Server, and so on.
Oracle GoldenGate initial loading reads data directly from the source database tables without locking them. So you don’t need downtime, but it will use database resources and can cause performance issues. Take extra precaution to perform the initial load during non-peak time so that you don’t run into resource contention.
Q. I have a table called ‘TEST’ on source and target with same
name, structure and data type but in a different column order. How can you
setup replication for this table?
OGG by default assumes that the source and target tables are identical. A table is said to be identical if and only if the table structure, data types and column order are the same on both the source and the target.
If the tables are not identical you must use the parameter ‘SOURCEDEFS’ pointing to the source table definition and ‘COLMAP’ parameter to map the columns from source to target.
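A sketch of such a Replicat configuration, assuming a definitions file generated by DEFGEN at a hypothetical path; with COLMAP (USEDEFAULTS), columns are matched by name, so a different column order is handled automatically:
SOURCEDEFS ./dirdef/test.def
MAP scott.test, TARGET scott.test, COLMAP (USEDEFAULTS);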
Q. What is the best practice to delete the extract files in OGG?
Use the Manager process to purge the trail files after they have been consumed by the Extract/Replicat processes:
PURGEOLDEXTRACTS /u01/app/oracle/dirdat/et*, USECHECKPOINTS, MINKEEPHOURS 2
Q. What parameters can be used to configure Oracle GoldenGate to
extract data from Archived Redo log files only?
Use the TRANLOGOPTIONS ARCHIVEDLOGONLY option in the parameter file.
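Example (in the Extract parameter file):
TRANLOGOPTIONS ARCHIVEDLOGONLY
If the archived logs are kept in a non-default location, the ALTARCHIVELOGDEST option of TRANLOGOPTIONS can be used to point Extract at that path.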
Q. What are the basic resources required to configure Oracle
GoldenGate high availability solution with Oracle Clusterware?
There are 3 basic resources required:
1. Virtual IP
2. Shared storage
3. Action script
Q. I have a one-way replication setup. The system administration
team wants to apply an OS patch to both the OGG source host and the target
servers. Provide the sequence of steps that you will carry before and after
applying this patch.
Procedure:
1. Check to make sure that the Extract has processed all the records in the data source (Online Redo/archive logs)
GGSCI> send extract <group_name>, logend
(The above command should print YES)
2. Verify that the Extract, pump and Replicat have zero lag.
GGSCI> send extract <extract_name>, getlag
GGSCI> send extract <pump_name>, getlag
GGSCI> send replicat <replicat_name>, getlag
(The above commands should print “At EOF, no more records to process.”)
3. Stop all application and database activity.
4. Make sure that the primary extract is reading the end of the redolog and that there is no LAG at all for the processes.
5. Now proceed with stopping the processes:
Source:
1. Stop the primary extract
2. Stop the pump extract
3. Stop the manager process
4. Make sure all the processes are down.
Target:
1. Stop replicat process
2. Stop mgr
3. Make sure that all the processes are down.
4. Proceed with the maintenance
5. After the maintenance, proceed with starting up the processes:
Source:
1. Start the manager process
2. Start the primary extract
3. Start the pump extract
(Or simply all the extract processes as GGSCI> start extract *)
4. Make sure all that the processes are up.
Target:
1. Start the manager process
2. Start the replicat process.
3. Make sure that all the processes are up.
Q. What databases does Oracle GoldenGate support for replication?
- Oracle Database
- TimesTen
- MySQL
- IBM DB2
- Microsoft SQL Server
- Informix
- Teradata
- Sybase
- Enscribe
- SQL/MX
Q. List important considerations for bi-directional replication?
The customer should consider the following points in an active-active replication environment.
- Primary Key: Helps to identify conflicts and resolve them.
- Sequences: Are not supported. The workaround is to use odd/even, range, or concatenated sequences.
- Triggers: These should be disabled or suppressed to avoid uniqueness issues.
- Data Looping: This can be easily avoided using OGG itself.
- LAG: This should be minimized. If a customer says that there will not be any LAG due to network or huge load, then we don’t need to deploy CDR. But this is not always the case, as there will be some LAG, and it can cause conflicts.
- CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of DML that can be used to detect and resolve conflicts (a configuration sketch follows this list).
- Packaged Applications: These are not supported, as they may contain data types which are not supported by OGG, or may not allow the application modifications needed to work with OGG.
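As a sketch of the built-in CDR mentioned above, a Replicat MAP could resolve update conflicts by keeping the row with the newer timestamp; the table and the last_updated resolution column are hypothetical:
MAP scott.emp, TARGET scott.emp,
COMPARECOLS (ON UPDATE ALL),
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_updated)));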
Q. Are OGG binaries supported on ASM Cluster File System (ACFS)?
Yes, you can install and configure OGG on ACFS.
Q. Are OGG binaries supported on the Database File System (DBFS)? What files can be stored in DBFS?
No, OGG binaries are not supported on DBFS. You can however store parameter files, data files (trail files), and checkpoint files on DBFS.
Q. What is the default location of the GLOBALS file?
The GLOBALS file is located under the Oracle GoldenGate installation directory (OGG HOME).
Q. Where can filtering of data for a column be configured?
Filtering of the columns of a table can be set at the Extract, Pump or Replicat level.
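For illustration, both row and column filtering can be expressed in the Extract parameter file; the table and column names below are hypothetical, and the two TABLE statements are independent examples:
-- Row filter:
TABLE scott.emp, WHERE (deptno = 10);
-- Column selection:
TABLE scott.emp, COLS (empno, ename, deptno);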
Q. Is it a requirement to configure a PUMP extract process in OGG replication?
A PUMP extract is optional, but it is highly recommended, to safeguard against network failures. Normally it is configured when you are setting up OGG replication across the network.
Q. What are the differences between the Classic and Integrated Capture?
Classic Capture:
- The Classic Capture mode is the traditional Extract process that accesses the database redo logs (optionally archive logs) to capture the DML changes occurring on the objects specified in the parameter files.
- At the OS level, the GoldenGate user must be a part of the same database group which owns the database redo logs.
- This capture mode is available for other RDBMSs as well.
- There are some data types that are not supported in Classic Capture mode.
- Classic Capture can’t read data from compressed tables/tablespaces.
Integrated Capture (IC):
- In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs).
- IC mode does not require any special setup for databases using ASM, transparent data encryption, or Oracle RAC.
- This feature is only available for Oracle databases in version 11.2.0.3 or higher.
- It also supports various object types which were previously not supported by Classic Capture.
- This capture mode supports extracting data from source databases using compression.
- Integrated Capture can be configured in an online or downstream mode.
Q. What are macros?
A macro is an easier way to build your parameter file. Once a macro is written it can be called from different parameter files. Common parameters like username/password and other parameters can be included in these macros. A macro can either be another parameter file or a library.
Q. Where can macros be invoked?
The macros can be called from the following parameter files.
- Manager
- Extract
- Replicat
- GLOBALS
Q. How is a macro defined?
A macro statement consists of the following.
- Name of the macro
- Parameter list
- Macro body
Sample:
MACRO #macro_name
PARAMS (#param1, #param2, …)
BEGIN
< macro_body >
END;
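A concrete sketch: a macro holding connection parameters (hypothetical credentials), defined once and then invoked in a parameter file:
MACRO #dblogin
BEGIN
USERID ggs, PASSWORD oracle
END;
-- later in the same (or an included) parameter file:
#dblogin ()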
Q. Is there a way to check the syntax of the commands in the parameter file without actually running the GoldenGate process?
Yes, you can place the CHECKPARAMS parameter in the parameter file and start the process. The process checks the syntax, writes the results to its report file, and then stops without processing data.
Q. How can you increase the maximum size of the read operation into the buffer that holds the results of the reads from the transaction log?
If you are using Classic Extract, you may use the TRANLOGOPTIONS ASMBUFSIZE parameter to control the read size for ASM databases.
Q. What information can you expect when there is data in the discard file?
When data is discarded, the discard file can contain:
1. Discard row details
2. Database errors
3. Trail file number
Q. What command can be used to switch writing the trail data to a new trail file?
You can use the following command to write the trail data to a new trail file.
SEND EXTRACT ext_name, ROLLOVER
Q. How can you determine if the parameters for a process were recently changed?
Whenever a process is started, the parameters in the .prm file for the process are written to the process REPORT. You can look at the older process reports to view the parameters which were used to start up the process. By comparing the older and the current reports you can identify changes in the parameters.
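For example, the current report for a hypothetical group ext1 can be viewed with:
GGSCI> VIEW REPORT ext1
Older report generations are kept in the dirrpt directory with a sequence number appended to the group name.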
Q. What is Manager?
The Manager process must be running on both the source and target systems before the Extract or Replicat processes can be started, and it performs a number of functions including monitoring and starting other GoldenGate processes, managing the trail files, and reporting.
Q. What is Extract?
The Extract process runs on the source system and is the data capture mechanism of GoldenGate. It can be configured both for initial loading of the source data and to synchronize the changed data on the source with the target. It can also be configured to propagate DDL changes on those databases where DDL change support is available.
Q. What is Replicat?
The Replicat process runs on the target system, reads transactional data changes as well as DDL changes, and replicates them to the target database. Like the Extract process, the Replicat process can also be configured for initial load as well as change synchronization.
Q. What is Collector?
The Collector is a background process which runs on the target system and is started automatically by the Manager (dynamic Collector), or it can be configured to start manually (static Collector). It receives extracted data changes that are sent via TCP/IP and writes them to the trail files, from where they are processed by the Replicat process.
Q. What are Trails?
Trails are series of files that GoldenGate temporarily stores on disk; these files are written to and read from by the Extract and Replicat processes, as the case may be. Depending on the configuration chosen, trail files can exist on the source as well as on the target system. A trail on the local system is known as an Extract Trail, and one on the target system as a Remote Trail.
Q. What are Data Pumps?
Data Pumps are secondary extract mechanisms which exist in the source configuration. This is an optional component: if a Data Pump is not used then Extract sends data via TCP/IP to the remote trail on the target. When a Data Pump is configured, the primary Extract process writes to the local trail, this trail is read by the Data Pump, and the data is sent over the network to remote trails on the target system.
In the absence of a Data Pump, the data that the Extract process extracts resides in memory alone, and there is no storage of this data anywhere on the source system. In case of network or target failures, the primary Extract process can abort or abend. A Data Pump can also be useful in cases where we are doing complex filtering and transformation of data, as well as when we are consolidating data from many sources to a central target.
Q. What is a Data Source?
When processing transactional data changes, the Extract process can obtain data directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL etc.) or from a GoldenGate Vendor Access Module (VAM), where the database vendor (for example Teradata) provides the required components that are used by Extract to extract the data changes.
Q. What are Groups?
To differentiate between the different Extract and Replicat groups which can potentially co-exist on a system, we can define processing groups. For instance, if we want to replicate different sets of data in parallel, we can create two Replicat groups.
A processing group consists of a process (either an Extract or a Replicat process), a corresponding parameter file, a checkpoint file or checkpoint table (for Replicat), and other files which could be associated with the process.
Q. What is the data pump process in GoldenGate?
The Data Pump (not to be confused with the Oracle Export/Import Data Pump) is an optional secondary Extract group that is created on the source system. When a Data Pump is not used, the Extract process writes to a remote trail that is located on the target system using TCP/IP. When a Data Pump is configured, the Extract process writes to a local trail, and from here the Data Pump reads the trail and writes the data over the network to the remote trail located on the target system.
The advantage of this is that it protects against a network failure: in the absence of a Data Pump, the Extract process writes data into memory before it is sent over the network, and any failure in the network could then cause the Extract process to abort (abend). Also, if we are doing any complex data transformation or filtering, it can be performed by the Data Pump. A Data Pump is also useful when we are consolidating data from several sources into one central target, where the data pump on each individual source system can write to one common trail file on the target.
Q. What is the command line utility in GoldenGate (what is GGSCI)?
GGSCI, the (Oracle) GoldenGate Software Command Interpreter, is the essential command line interface for GoldenGate.
Q. What happens if RVWR cannot write to disk?
It depends on the context where the write error occurs:
- If there is a Guaranteed Restore Point, the database crashes to ensure the restore point guarantee is not voided.
- If there isn’t a Guaranteed Restore Point and it’s a primary database, Flashback Mode will be automatically turned off for the database, which will continue to operate normally.
- If there isn’t a Guaranteed Restore Point and it’s a standby database, the database will hang until the cause of the write failure is fixed.
Q. How do you list restore points in RMAN?
In RMAN you can use the LIST RESTORE POINT [ALL|restore_point_name] command. If you use a recovery catalog you can use the view RC_RESTORE_POINT in the recovery catalog repository, or query V$RESTORE_POINT in the target database.
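For example:
RMAN> LIST RESTORE POINT ALL;
SQL> SELECT name, scn, time, guarantee_flashback_database FROM v$restore_point;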
Can you see the progress of a FLASHBACK
DATABASE operation?
Yes, you can. During a FLASHBACK DATABASE
operation, you can query V$SESSION_LONGOPS from another session to see the
progress of the flashback.
The FLASHBACK DATABASE operation has two
distinct phases: the actual flashback and the media recovery that happens
afterwards to bring the database to a consistent state.
While the actual flashback is running, you’ll
see the following message in V$SESSION_LONGOPS, on Oracle 11gR2:
Flashback Database: Flashback Data Applied : 238
out of 282 Megabytes done
During the media recovery, the following
messages will be seen:
Media Recovery: Redo Applied: 263 out of 0
Megabytes done
Media Recovery: Average Apply Rate: 1164 out of
0 KB/sec done
Media Recovery: Last Applied Redo: 626540 out of
0 SCN+Time done
Media Recovery: Elapsed Time: 232 out of 0
Seconds done
Media Recovery: Active Time: 116 out of 0 Seconds
done
Media Recovery: Active Apply Rate: 1859 out of 0
KB/sec done
Media Recovery: Maximum Apply Rate: 1859 out of
0 KB/sec done
Media Recovery: Log Files: 15 out of 0 Files
done
Media Recovery: Apply Time per Log: 7 out of 0
Seconds done
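A query sketch for watching this progress from another session (the columns are standard V$SESSION_LONGOPS columns):
SQL> SELECT opname, sofar, totalwork, units FROM v$session_longops
     WHERE opname LIKE 'Flashback%' OR opname LIKE 'Media Recovery%';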
How should I set the database to improve
Flashback performance?
Oracle’s recommendations are:
Use a fast file system for your flash recovery area, preferably one that
avoids operating system file caching, such as ASM.
Configure enough disk spindles for the file
system that will hold the flash recovery area. For large production databases,
multiple disk spindles may be needed to support the required disk throughput
for the database to write the flashback logs effectively.
If the storage system used to hold the flash
recovery area does not have non-volatile RAM, try to configure the file system
on top of striped storage volumes, with a relatively small stripe size such as
128K. This will allow each write to the flashback logs to be spread across
multiple spindles, improving performance.
For large, production databases, set the
init.ora parameter LOG_BUFFER to be at least 8MB. This makes sure the database
allocates maximum memory (typically 16MB) for writing flashback database logs.
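A sketch of that last recommendation (LOG_BUFFER is a static parameter, so the change requires an SPFILE update and an instance restart; 8388608 bytes = 8MB):
SQL> ALTER SYSTEM SET LOG_BUFFER=8388608 SCOPE=SPFILE;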
1. Question1.
What Type Of Topology Does GoldenGate Support?
Answer :
GoldenGate supports the following topologies:
o Unidirectional
o Bidirectional
o Peer-to-peer
o Broadcast
o Consolidation
o Cascading
2. Question2.
What Are The Main Components Of The Goldengate Replication?
Answer :
The replication configuration consists of the following
processes:
o Manager
o Extract
o Pump
o Replicat
3. Question3.
What Transaction Types Does Goldengate Support For Replication?
Answer :
Goldengate supports both DML and DDL Replication from the source
to target.
4. Question4.
What Are The Supplemental Logging Pre-requisites?
Answer :
The following supplemental logging is required.
o Database supplemental logging
o Object level logging
5. Question5.
Why Is Supplemental Logging Required For Replication?
Answer :
When a transaction is committed on the source database, only the changed data
is written to the redo log. For GoldenGate to apply these transactions on the
destination database, the before-image key values are required to identify the
affected rows. Supplemental logging ensures this additional data is written to
the redo log (and hence to the trail file), so that the correct rows can be
identified and updated on the destination.
6. Question6. List The Minimum Parameters That Can Be Used To
Create The Extract Process?
Answer :
The following are the minimum required
parameters which must be defined in the extract parameter file.
o EXTRACT NAME
o USERID
o EXTTRAIL
o TABLE
7. Question7. I Want To Configure Multiple Extracts To Write To
The Same Exttrail File? Is This Possible?
Answer :
Only one Extract process can write to
one exttrail at a time. So you can’t configure multiple extracts to write to
the same exttrail.
8. Question8. What Type Of Encryption Is Supported In
Goldengate?
Answer :
Oracle Goldengate provides 3 types of Encryption.
o Data Encryption using Blowfish.
o Password Encryption.
o Network Encryption.
9. Question9. What Are The Different Password Encryption Options
Available With Ogg?
Answer :
You can encrypt a password in OGG using
o Blowfish algorithm and
o Advanced Encryption Standard (AES) algorithm
10.
Question10.
What Are The Different Encryption Levels In Aes?
Answer :
You can encrypt the password/data using AES with three different key lengths:
o 128 bit
o 192 bit
o 256 bit
11.
Question11.
What Are Some Of The Key Features Of Goldengate 12c?
Answer :
The following are some of the more
interesting features of Oracle GoldenGate 12c:
o Support for
Multitenant Database
o Coordinated Replicat
o Integrated Replicat
Mode
o Use of Credential
store
o Use of Wallet and
master key
o Trigger-less DDL replication
o Automatically adjusts threads when a RAC node fails or starts
o Supports RAC PDML
Distributed transaction
o RMAN Support for mined
archive logs
12.
Question12.
What Are The Installation Options Available In Ogg 12c?
Answer :
You can install Oracle GoldenGate 12c in 2 ways:
o Interactive
Installation with OUI – Graphical interface
o Silent Installation
with OUI – Command Interface
13.
Question13.
What Is A Credential Store In Ogg 12c?
Answer :
The OGG Credential Store manages encrypted passwords and USERIDs that are used
to interact with the local database, and associates them with an alias.
Instead of specifying the actual USERID and password in a command or a
parameter file, you can use the alias. The Credential Store is implemented as
an auto-login wallet within the Oracle Credential Store Framework (CSF).
14.
Question14.
How To Configure Credential Store In Ogg 12c?
Answer :
Steps to configure Oracle Credential
Store are as follows:
o By default the Credential Store is located under the “dircrd” directory.
If you want to specify a different location you can specify the
“CREDENTIALSTORELOCATION” parameter in the GLOBALS file.
Example: CREDENTIALSTORELOCATION /u01/app/oracle/OGG_PASSWD
o Go to the OGG home and connect to GGSCI.
cd $OGG_HOME
./ggsci
GGSCI>
15.
Question15.
What Command Is Used To Create The Credential Store?
Answer :
ADD CREDENTIALSTORE.
16.
Question16.
How Do You Add Credentials To The Credential Store?
Answer :
ALTER CREDENTIALSTORE ADD USER userid,
[PASSWORD password]
[ALIAS alias]
[DOMAIN domain]
Example: GGSCI> ALTER CREDENTIALSTORE ADD USER GGS@orcl, PASSWORD oracle ALIAS extorcl DOMAIN OracleGoldenGate
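Once the alias exists, it can replace the explicit USERID/PASSWORD both in GGSCI and in parameter files; a sketch reusing the alias from the example above:
GGSCI> DBLOGIN USERIDALIAS extorcl
-- in an Extract or Replicat parameter file
USERIDALIAS extorcl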
17.
Question17.
How Do You Retrieve Information From The Oracle Credential Store?
Answer :
GGSCI> INFO CREDENTIALSTORE
OR
GGSCI> INFO CREDENTIALSTORE DOMAIN OracleGoldenGate
18.
Question18.
What Are The Different Data Encryption Methods Available In Ogg 12c?
Answer :
In OGG 12c you can encrypt data with the
following 2 methods:
o Encrypt Data with
Master Key and Wallet
o Encrypt Data with
ENCKEYS
19.
Question19.
How Do You Enable Oracle Goldengate For Oracle Database 11.2.0.4?
Answer :
The database services required to
support Oracle GoldenGate capture and apply must be enabled explicitly for an
Oracle 11.2.0.4 database. This is required for all modes of Extract and
Replicat.
To enable Oracle GoldenGate, set the
following database initialization parameter. All instances in Oracle RAC must
have the same setting.
ENABLE_GOLDENGATE_REPLICATION=true
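A minimal sketch of setting it:
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;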
20.
Question20.
How Does The Replicat Works In A Coordinated Mode?
Answer :
In a Coordinated Mode Replicat operates
as follows:
o Reads the Oracle
GoldenGate trail.
o Performs data
filtering, mapping, and conversion.
o Constructs SQL
statements that represent source database DML or DDL transactions (in committed
order).
o Applies the SQL to the
target through the SQL interface that is supported for the given target
database, such as ODBC or the native database interface.
21.
Question21.
What Is The Difference Between Classic And Coordinated Replicat?
Answer :
The difference between classic mode and
coordinated mode is that Replicat is multi-threaded in coordinated mode. Within
a single Replicat instance, multiple threads read the trail independently and
apply transactions in parallel. Each thread handles all of the filtering,
mapping, conversion, SQL construction, and error handling for its assigned
workload. A coordinator thread coordinates the transactions across threads to
account for dependencies among the threads.
22.
Question22.
How Do You Create A Coordinated Replicate In Ogg 12c?
Answer :
You can create the COORDINATED REPLICATE
with the following OGG Command:
ADD REPLICAT rfin, COORDINATED
MAXTHREADS 50, EXTTRAIL dirdat/et
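Thread assignment is then controlled in the Replicat parameter file; a hedged sketch (the schema, table, and column names are illustrative):
MAP sales.orders, TARGET sales.orders, THREADRANGE (1-10, order_id);
MAP sales.order_audit, TARGET sales.order_audit, THREAD (11);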
23.
Question23.
I Have Created A Replicat Process In Ogg 12c And Forgot To Specify The
Discardfile Parameter. What Will Happen?
Answer :
Starting with OGG 12c, if you don’t specify a DISCARDFILE, the OGG process
generates a discard file with default values whenever the process is started
with the START command through GGSCI.
24.
Question24.
Is It Possible To Start Ogg Extract At A Specific Csn?
Answer :
Yes. Starting with OGG 12c you can start Extract at a specific CSN in the
transaction log or trail.
Example:
o START EXTRACT fin
ATCSN 12345
o START EXTRACT finance
AFTERCSN 67890
25.
Question25.
List A Few Parameters Which May Help Improve The Replicat Performance?
Answer :
The parameters below can be used to
improve the replicat performance:
o BATCHSQL
o GROUPTRANSOPS
o INSERTAPPEND
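A hedged sketch of how these might appear in a Replicat parameter file (all values are illustrative and workload-dependent):
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
GROUPTRANSOPS 2000
INSERTAPPEND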
26.
Question26.
What Are The Areas To Monitor In Goldengate Replication?
Answer :
The lag and checkpoint latency of the
Extract, pump and Replicat processes are normally monitored.
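Typical GGSCI checks for these areas, as a sketch (process names are illustrative):
GGSCI> INFO ALL
GGSCI> LAG EXTRACT ext_fin
GGSCI> SEND REPLICAT rep_fin, GETLAG
GGSCI> STATS REPLICAT rep_fin, LATEST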
27.
Question27.
What Is The Passthru Mode Used For?
Answer :
In pass-through mode, the Extract
process does not look up the table definitions, either from the database or
from a data definitions file. This increases the throughput of the data pump,
as the object definition look-up is bypassed.
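A minimal data pump parameter file sketch using PASSTHRU (the host, port, trail, and object names are illustrative):
EXTRACT pmp_fin
PASSTHRU
RMTHOST tgtserver, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE scott.*;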
28.
Question28.
What Are The Most Common Reasons Of An Extract Process Slowing Down?
Answer :
Some of the possible reasons are:
o Long running batch
transactions on a table.
o Insufficient memory on
the Extract side. Uncommitted, long running transactions can cause writing of a
transaction to a temporary area (dirtmp) on disk. Once the transaction is
committed it is read from the temporary location on the file system and
converted to trail files.
o Slow or overburdened
Network.
29.
Question29.
What Are The Most Common Reasons Of The Replicate Process Slowing Down?
Answer :
Some of the possible reasons are:
o A large number of transactions on a particular table.
o Blocking sessions on
the destination database where non-Goldengate transactions are also taking
place on the same table as the replicat processing.
o If using DBFS, writing
& reading of trail files may be slow if SGA parameters are not tuned.
o For slow Replicats, latency may be due to missing indexes on the target.
o Replicat having to process updates/deletes of rows in very large tables.
30.
Question30.
My Extract Was Running Fine For A Long Time. All Of A Sudden It Went Down. I
Started The Extract Processes After 1 Hour. What Will Happen To My Committed Transactions
That Occurred In The Database During Last 1 Hour?
Answer :
OGG checkpoints provide fault tolerance and make sure that each committed
transaction is captured, and captured only once. Even if the Extract went down
abnormally, when you start the process again it reads the checkpoint file to
provide read consistency and transaction recovery.
31.
Question31.
I Have Configured Oracle Goldengate Integrated Capture Process Using The
Default Values. As The Data Load Increases I See That Extract Starts Lagging
Behind By An Hour (or More) And Database Performance Degrades. How You Will
Resolve This Performance Issue?
Answer :
When operating in integrated capture
mode, you must make sure that you have assigned sufficient memory to
STREAMS_POOL_SIZE. An undersized STREAMS_POOL_SIZE, or limiting the streams
pool to a specific amount of memory, can cause trouble.
The best practice is to allocate STREAMS_POOL_SIZE at the instance level and
allocate the maximum SGA at the GoldenGate process level, as below:
SQL> alter system set STREAMS_POOL_SIZE=3G;
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2048, PARALLELISM 4)
32.
Question32.
Why Would You Segregate The Tables In A Replication Configuration? How Would
You Do It?
Answer :
In OGG you can configure replication of data at the schema level or at the
table level using the TABLE parameter of Extract and the MAP parameter of
Replicat.
For replicating the entire database you can list all the schemas in the database in the extract/replicat parameter file.
Depending on the amount of redo generation you can split the tables in a schema across multiple extracts and replicats to improve the performance of data replication. Alternatively, you can also group a set of tables in the configuration by application functionality.
You may also need to move tables which have long running transactions into a separate extract process to eliminate lag on the other tables.
Let’s say that you have a schema named SCOTT and it has 100 tables.
Out of these hundred tables, 50 tables are heavily utilized by the application.
To improve the overall replication
performance you create 3 extract and 3 replicats as follows:
Ext_1/Rep_1 –> 25 tables
Ext_2/Rep_2 –> 25 tables
Ext_3/Rep_3 –> 50 tables
Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each which are heavily utilized or generate more redo.
Ext_3/Rep_3 contains the other 50 tables which are least used.
33.
Question33.
How Can We Report On Long Running Transactions?
Answer :
The WARNLONGTRANS parameter can be
specified with a threshold time that a transaction can be open before Extract
writes a warning message to the ggs error log.
Example: WARNLONGTRANS 1h, CHECKINTERVAL
10m
34.
Question34.
What Command Can Be Used To View The Checkpoint Information For The Extract
Process?
Answer :
Use the following command to view the
Extract checkpoint information.
GGSCI> info extract <group_name>, showch
Example: GGSCI> info extract ext_fin, showch
35.
Question35.
How Is The Restartcollision Parameter Different From Handlecollisions?
Answer :
The RESTARTCOLLISIONS parameter is used to skip ONE transaction only, in a
situation where the GoldenGate process crashed after performing an operation
(INSERT, UPDATE or DELETE) in the database but could not checkpoint the
process information to the checkpoint file/table. On recovery it will skip
the transaction and AUTOMATICALLY continue to the next operation in the trail
file.
When using HANDLECOLLISIONS, GoldenGate will continue to overwrite and process
transactions until the parameter is removed from the parameter files and the
processes are restarted.
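HANDLECOLLISIONS is typically enabled per mapping during initial-load catch-up; a hedged sketch (the object names are illustrative):
MAP scott.emp, TARGET scott.emp, HANDLECOLLISIONS;
It should be removed (or replaced with NOHANDLECOLLISIONS) once the source and target are in sync.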
36.
Question36.
How Do You View The Data Which Has Been Extracted From The Redo Logs?
Answer :
The logdump utility is used to open the
trail files and look at the actual records that have been extracted from the
redo or the archive log files.
37.
Question37.
What Does The Rman-08147 Warning Signify When Your Environment Has A Goldengate
Capture Processes Configured?
Answer :
This occurs when the
V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required by the GoldenGate
Capture process and RMAN is trying to delete the archived logs. The RMAN-08147
error is raised when RMAN tries to delete these files.
When the database is open it uses the
DBA_CAPTURE values to determine the log files required for mining. However if
the database is in the mount state the V$ARCHIVED_LOG. NEXT_CHANGE# value is
used.
38.
Question38.
How Would You Look At A Trail File Using Logdump, If The Trail File Is
Encrypted?
Answer :
You must use the DECRYPT option (logdump> decrypt on) before viewing
encrypted data in the trail file.
List few useful Logdump commands to view and search data stored in OGG trail files.
Below are few logdump commands used on a daily basis for displaying or analyzing data stored in a trail file.
$ ./logdump – to connect to the logdump prompt
logdump> open /u01/app/oracle/dirdat/et000001 – to open a trail file in logdump
logdump> fileheader on – to view the trail file header
logdump> ghdr on – to view the record header with data
logdump> detail on – to view column information
logdump> detail data – to display HEX and ASCII data values to the column list
logdump> reclen 200 – to control how much record data is displayed
logdump> pos 0 – To go to the first record
logdump> next (or simply n) – to move from one record to another in sequence
logdump> count – counting records in a trail
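A hedged sketch of narrowing logdump output to a single table (the table name is illustrative):
logdump> ghdr on
logdump> detail data
logdump> filter include filename SCOTT.EMP
logdump> n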
39.
Question39.
Why Should I Upgrade My Goldengate Extract Processes To Integrated Extract?
Answer :
Oracle is able to provide faster
integration of the new database features by moving the GoldenGate Extraction
processes into the database. Due to this, the GoldenGate Integrated Extract has
a number of features like Compression which are not supported in the
traditional Extract. You can read more about how to upgrade to Integrated
Extract and more about Integrated Delivery. Going forward, preference should be
given to creating new Extracts as Integrated Extracts and to upgrading existing
traditional Extracts.
40.
Question40.
What Is The Minimum Database Version Which Supports Integrated Delivery?
Answer :
Oracle 11.2.0.4 is the minimum required database version which supports both
Integrated Extract and Integrated Replicat.
41.
Question41.
What Databases Supports Goldengate Integrated Delivery?
Answer :
Oracle Integrated Delivery is only
available for Oracle Databases.
42.
Question42.
With Integrated Delivery, Where Can We Look For The Performance Stats?
Answer :
With 12c, performance statistics are collected in the AWR repository and the
data is available via the normal AWR reports.
43.
Question43.
What Are The Steps Required To Add A New Table To An Existing Replication
Setup?
Answer :
The steps to be executed would be the
following:
o Include the new table
to the Extract & pump process.
o Obtain starting
database SCN and Copy the source table data to the target database
o Start Replicat on
target at the source SCN database point.
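A hedged sketch of the flow (the names and SCN value are illustrative):
SQL> SELECT current_scn FROM v$database;
$ expdp scott/tiger directory=dp_dir dumpfile=new_tab.dmp tables=scott.new_tab flashback_scn=1234567
GGSCI> START REPLICAT rep_fin, AFTERCSN 1234567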
44.
Question44.
What Does The Goldengate Csn Equate To, In The Oracle Database?
Answer :
It is the equivalent of the Oracle database SCN (System Change Number).
45.
Question45.
How Do You Create A Csv File From The Extracted Data?
Answer :
You will have to use the CSV Flat File
Adaptor to create CSV files. The source would be the extract trail files which
use the configuration of the adaptor settings to generate CSV files.
46.
Question46.
What Is The Purpose Of The Defgen Utility?
Answer :
When the source and the target schema objects are not the same (different
DDLs), the Replicat process needs to know the source definition of the
objects. The output from the DEFGEN utility is used in conjunction with the
trail data to determine which column value in the trail belongs to which
column.
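A minimal defgen parameter file sketch (the paths and names are illustrative):
DEFSFILE ./dirdef/source.def
USERID ggs_admin, PASSWORD ggs
TABLE scott.test;
$ ./defgen paramfile ./dirprm/defgen.prm
The generated definitions file is then copied to the target and referenced by the Replicat (for example via SOURCEDEFS).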
47.
Question47.
What Are The Basic Resources Required To Configure Oracle Goldengate High
Availability Solution With Oracle Clusterware?
Answer :
There are 3 basic resources required:
o Virtual IP
o Shared storage
o Action script
48.
Question48.
I Have A One-way Replication Setup. The System Administration Team Wants To
Apply An Os Patch To Both The Ogg Source Host And The Target Servers. Provide
The Sequence Of Steps That You Will Carry Before And After Applying This Patch?
Answer :
Procedure:
o Check to make sure
that the Extract has processed all the records in the data source (Online
Redo/archive logs)
GGSCI> send extract <group_name>, logend
(The above command should print YES)
o Verify that the extract,
pump and replicat have zero lag.
GGSCI> send extract <group_name>, getlag
GGSCI> send replicat <group_name>, getlag
(The above commands should print “At EOF,
no more records to process.”)
o Stop all application
and database activity.
o Make sure that the
primary extract is reading the end of the redolog and that there is no LAG at
all for the processes.
o Now proceed with
stopping the processes:
Source:
o Stop the primary
extract
o Stop the pump extract
o Stop the manager
process
o Make sure all the
processes are down.
Target:
o Stop replicat process
o Stop mgr
o Make sure that all the
processes are down.
o Proceed with the
maintenance
o After the maintenance,
proceed with starting up the processes:
Source:
o Start the manager
process
o Start the primary
extract
o Start the pump extract
(Or simply all the extract processes as
GGSCI> start extract *)
o Make sure that all the
processes are up.
Target:
o Start the manager
process
o Start the replicat
process.
o Make sure that all the
processes are up.
49.
Question49.
What Parameters Can Be Used To Configure Oracle Goldengate To Extract Data From
Archived Redo Log Files Only?
Answer :
Use the TRANLOGOPTIONS ARCHIVEDLOGONLY
option in the parameter file.
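A hedged sketch, including pointing Extract at a non-default archive log location (the path is illustrative):
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST /u02/arch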
50.
Question50.
What Is The Best Practice To Delete The Extract Files In Ogg?
Answer :
Use the Manager process to delete the trail files after they are consumed by
the extract/replicat processes, e.g. in the Manager parameter file:
PURGEOLDEXTRACTS /u01/app/oracle/dirdat/et*, USECHECKPOINTS, MINKEEPHOURS 2
51.
Question51.
I Have A Table Called ‘test’ On Source And Target With Same Name, Structure And
Data Type But In A Different Column Order. How Can You Setup Replication For
This Table?
Answer :
OGG by default assumes that the source and target tables are identical. A
table is said to be identical if and only if the table structure, data types
and column order are the same on both the source and the target.
If the tables are not identical you must
use the parameter ‘SOURCEDEFS’ pointing to the source table definition and
‘COLMAP’ parameter to map the columns from source to target.
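A hedged Replicat sketch for this scenario (the paths and names are illustrative):
SOURCEDEFS ./dirdef/source.def
MAP scott.test, TARGET scott.test, COLMAP (USEDEFAULTS);
Since only the column order differs, USEDEFAULTS maps the columns by name; explicit assignments such as COLMAP (USEDEFAULTS, tgt_col = src_col) are only needed when the names themselves differ.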
52.
Question52.
What Are The Different Ogg Initial Load Methods Available?
Answer :
OGG has 2 functionalities: online data replication, and initial loading.
If you are replicating data between 2
homogeneous databases then the best method is to use database specific method
(Exp/Imp, RMAN, Transportable tablespaces, Physical Standby and so on).
Database specific methods are usually faster than the other methods.
If you are replicating data between 2 heterogeneous databases, or your
replication involves complex transformations, then the database specific
methods can’t be used. In those cases you can always use Oracle GoldenGate to
perform the initial load.
Within Oracle GoldenGate you have 4
different ways to perform initial load.
o Direct Load – faster, but doesn’t support LOB data types (12c includes LOB
support)
o Direct Bulk Load – uses the SQL*Loader API for Oracle and SSIS for MS SQL
Server
o File to Replicat – fast, but the rmtfile limit is 2 GB. If the table can’t
fit in 1 rmtfile you can use MAXFILES, but the Replicat needs to be registered
on the target OGG home to read the rmtfiles from the source.
o File to Database utility – depending on the target database, uses SQL*Loader
for Oracle, SSIS for MS SQL Server, and so on.
Oracle GoldenGate initial loading reads
data directly from the source database tables without locking them. So you
don’t need downtime but it will use database resources and can cause
performance issues. Take extra precaution to perform the initial load during
the non-peak time so that you don’t run into resource contention.
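A hedged sketch of a direct-load initial-load Extract (all names are illustrative):
GGSCI> ADD EXTRACT einit, SOURCEISTABLE
-- einit.prm
EXTRACT einit
USERID ggs_admin, PASSWORD ggs
RMTHOST tgtserver, MGRPORT 7809
RMTTASK REPLICAT, GROUP rinit
TABLE scott.*;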
53.
Question53.
I Am Migrating My Oracle Database From Non-exadata To Exadata But My Source Ogg
Software And Target Ogg Software Versions Are Different (say Source Is Running
Ogg 11.1 And Target Is Running Ogg 11.2). How Can I Configure My Ogg Process To
Work In Such Configuration?
Answer :
It is recommended that all instances of
Oracle GoldenGate be the same version to take advantage of the new
functionality, but this is not possible all the time and is not required. In
this scenario, OGG provides a parameter called ‘FORMAT RELEASE’ which allows
customers to use different versions of Oracle GoldenGate Extract, trail files,
and Replicat together.
Example: RmtTrail
/u01/app/oracle/dirdat, Format Release 11.1
Note: The input and output trails of a
data pump must have the same trail file version.
54.
Question54.
We Want To Setup One-way Data Replication For My Online Transaction Processing
Application. However There Are Compressed Tables In The Environment. Please
Suggest How I Can Achieve It?
Answer :
You must use OGG 11.2 and configure
GoldenGate Integrated Capture process to extract data from compressed tables.
Note: OGG versions prior to 11.2 do not support extracting data from
compressed tables.
Goldengate Interview Questions:
What databases does Oracle GoldenGate support for replication?
· Oracle Database
· TimesTen
· MySQL
· IBM DB2
· Microsoft SQL Server
· Informix
· Teradata
· Sybase
· Enscribe
· SQL/MX
Why is Supplemental logging required for Replication?
When a transaction is committed on the source database, only
new data is written to the redo log. However, for Oracle to apply these
transactions on the destination database, the before-image key values are
required to identify the affected rows. This data is also placed in the trail
file and used to identify the rows on the destination; using the key values,
the transactions are executed against them.
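A hedged sketch of enabling both levels of supplemental logging (the schema and table names are illustrative):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
GGSCI> DBLOGIN USERID ggs_admin, PASSWORD ggs
GGSCI> ADD TRANDATA scott.emp
ADD TRANDATA enables object-level logging of the table’s key columns.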
List important considerations for bi-directional replication?
The customer should consider the following points in an
active-active replication environment.
· Primary Key: Helps to identify conflicts and resolve them.
· Sequences: Are not supported. The workaround is to use odd/even, range, or concatenated sequences (see the sketch after this list).
· Triggers: These should be disabled or suppressed to avoid uniqueness issues.
· Data Looping: This can be easily avoided using OGG itself.
· LAG: This should be minimized. If a customer says that there will not be any LAG due to network or huge load, then we don’t need to deploy CDRs. But this is not always the case, as there would be some LAG, and it can cause conflicts.
· CDR (Conflict Detection & Resolution): OGG has built-in CDRs for all kinds of DMLs that can be used to detect and resolve conflicts.
· Packaged Applications: These are not supported, as they may contain data types which are not supported by OGG, or the application may not allow the modifications needed to work with OGG.
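The odd/even sequence workaround mentioned above, as a hedged SQL sketch (the sequence name is illustrative):
-- site A
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 2;
-- site B
CREATE SEQUENCE order_seq START WITH 2 INCREMENT BY 2;
Each site then generates non-overlapping key values, so inserts never collide.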
Are OGG binaries supported on ASM Cluster File System (ACFS)?
Yes, you can install and configure OGG on ACFS.
Are OGG binaries supported on the Database File System (DBFS)?
What files can be stored in DBFS?
No, OGG binaries are not supported on DBFS. You can
however store parameter files, data files (trail files), and checkpoint files
on DBFS.
What is the default location of the GLOBALS file?
The GLOBALS file is located under the Oracle GoldenGate installation
directory (OGG HOME).
Where can filtering of data for a column be configured?
Filtering of the columns of a table can be set at the
Extract, Pump or Replicat level.
Is it a requirement to configure a PUMP extract process in OGG
replication?
A PUMP extract is optional, but it is highly recommended to use one to
safeguard against network failures. Normally it is configured when you are
setting up OGG replication across the network.
What are the differences between the Classic and integrated
Capture?
Classic Capture:
· The Classic Capture mode is the traditional Extract process that accesses the database redo logs (optionally archive logs) to capture the DML changes occurring on the objects specified in the parameter files.
· At the OS level, the GoldenGate user must be a part of the same database group which owns the database redo logs.
· This capture mode is available for other RDBMS as well.
· There are some data types that are not supported in Classic Capture mode.
· Classic Capture can’t read data from compressed tables/tablespaces.
Integrated Capture (IC):
· In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs).
· IC mode does not require any special setup for databases using ASM, transparent data encryption, or Oracle RAC.
· This feature is only available for Oracle databases in version 11.2.0.3 or higher.
· It also supports various object types which were previously not supported by Classic Capture.
· This capture mode supports extracting data from source databases using compression.
· Integrated Capture can be configured in an online or downstream mode.
List the minimum parameters that can be used to create the
extract process?
The following are the minimium required parameters which
must be defined in the extract parameter file.
·
EXTRACT NAME
·
USERID
·
EXTTRAIL
·
TABLE
What are macros?
Macro is an easier way to build your parameter file. Once
a macro is written it can be called from different parameter files. Common
parameters like username/password and other parameters can be included in these
macros. A macro can either be another parameter file or a library.
Where can macros be invoked?
The macros can be called from the following parameter
files.
·
Manager
·
Extract
·
Replicat
·
Gobals
How is a macro defined?
A macro statement consists of the following.
·
Name of the Macro
·
Parameter list
·
Macro body
Sample:
MACRO #macro_name
PARAMS (#param1, #param2, …)
BEGIN
< macro_body >
END;
MACRO #macro_name
PARAMS (#param1, #param2, …)
BEGIN
< macro_body >
END;
I want to configure multiple extracts to write to the same
exttrail file? Is this possible?
Only one Extract process can write to one exttrail at a
time. So you can’t configure multiple extracts to write to the same exttrail.
What type of Encryption is supported in Oracle Goldengate?
Oracle Goldengate provides 3 types of Encryption.
·
Data Encryption using Blow fish.
·
Password Encryption.
·
Network Encryption.
What are the different password encryption options available with OGG?
You can encrypt a password in OGG using:
1. the Blowfish algorithm, and
2. the Advanced Encryption Standard (AES) algorithm
What are the different encryption levels in AES?
You can encrypt the password/data using AES with three different key lengths:
a) 128 bit
b) 192 bit
c) 256 bit
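For example, a password can be encrypted from GGSCI at a chosen AES key length (the password below is a placeholder); the generated string is then pasted into the parameter file in place of the clear-text password:
GGSCI> ENCRYPT PASSWORD oggpass AES192 ENCRYPTKEY DEFAULT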
Is there a way to check the syntax of the commands in the parameter file without actually running the GoldenGate process?
Yes, you can place the SHOWSYNTAX parameter in the parameter file and try starting the process. If there is any error, you will see it.
How can you increase the maximum size of the read operation into the buffer that holds the results of the reads from the transaction log?
If you are using Classic Extract, you may use the TRANLOGOPTIONS ASMBUFSIZE parameter to control the read size for ASM databases.
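As an illustrative sketch only (the buffer size below is a placeholder; check the reference guide for your version's valid range and default), the option goes in the Extract parameter file:
TRANLOGOPTIONS ASMBUFSIZE 28000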
What information can you expect when there is data in the discard file?
When data is discarded, the discard file can contain:
1. Discard row details
2. Database Errors
3. Trail file number
What command can be used to switch writing the trail data to a
new trail file?
You can use the following command to write the trail data
to a new trail file.
SEND EXTRACT ext_name, ROLLOVER
How can you determine if the parameters for a process were recently changed?
Whenever a process is started, the parameters in the .prm file for the process are written to the process REPORT. You can look at the older process reports to view the parameters which were used to start up the process. By comparing the older and the current reports you can identify the changes in the parameters.
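For example, assuming a group named ext1, the current report can be viewed from GGSCI; older report generations are aged with numeric suffixes in the dirrpt directory:
GGSCI> VIEW REPORT ext1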
What is checkpoint table?
ANS:
GoldenGate maintains its own checkpoints, which record a known position in the trail file from where the Replicat process will start processing after any kind of error or shutdown. This ensures data integrity, and a record of these checkpoints is maintained either in files stored on disk or in a table in the database, which is the preferred option.
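A sketch of creating the checkpoint table from GGSCI (the credentials and the schema.table name are placeholders); adding a matching CHECKPOINTTABLE entry to GLOBALS makes it the default for ADD REPLICAT:
GGSCI> DBLOGIN USERID ogguser, PASSWORD oggpass
GGSCI> ADD CHECKPOINTTABLE ogg.ggs_checkpoint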
=================
OTHER :
Oracle DBA Goldengate interview questions:
1. What are the important processes in GG?
2. What is the difference between CSN and SCN?
3. What is Extract, Replicat?
4. What are the GG types or topologies?
5. How to get logs from source & target?
6. What is the difference between the Classic & Integrated Extract process?
7. What are the prerequisites for GG?
8. What are the types of replication in GG?
9. What areas require monitoring in GG?
10. What is the difference between a local trail / remote trail?
11. What GG utilities are used?
12. How to define the size for an extract file?
13. How to change the extract file GG is writing to?
14. How to purge extract files automatically?
15. How to check how many extract files were created per day?
16. What to check in the replication steps on primary & target?
17. How to start/stop Extract/Replicat for source & target?
18. How to sync the target with the source if we missed the trail file?
19. What is the format of the trail file?
20. What is the default location for the trail file & can we change the trail file location?
21. What is the data pump in GG?
22. Is there any limit on the number of Extract / Replicat groups?
23. What are the GG views?
24. What is the GLOBALS file in GG & its location?
25. What are the directories in GG?
26. What is a credential store?
27. What are the types of encryption supported by GG?
28. What is OBEY and its usage?
29. What is the discard file & its usage? Can we change it from the default location?
30. How to switch writing to a new trail file?
31. What are the proper steps to shut down a GG database?
32. How to automatically remove old trail files?
33. How to enable autostart for Replicat/Extract?
34. What are the parameter files used by GG?
35. What are the limitations of bidirectional GG?
36. What is the use of REPERROR in GG?
37. How to check the latency between the source & GG?
38. What utility is used to open a trail file in GG?
39. How to check whether a particular trail file is still needed?
40. What are the various GG operational types?
41. What is the Reverse utility?
42. In a unidirectional GG setup, what are the steps to add a table for replication?
43. Can we change the GG Oracle DB without setting it up at the operating system level?
44. What are the types of Extract checkpoint positions?
45. How to change the trail file size in GG?
46. How to check that all records were processed in Replicat? (INFO REPLICAT <group> should return this.)
47. How to calculate & allocate swap space for a GG server?
48. How to check the last 5 recent Extract checkpoints?
49. How to control the checkpoint in Extract & Replicat?
50. How to confirm which file the Extract is working on in GG?
51. What are the recovery types in the Extract process?
52. How to set up the lag processing time in GG?
53. How to use the Manager process to check Extract/Replicat lag?
54. How to view the GG error log & process reports?
55. How to check how many records were processed in replication/extraction?
56. What is the discard file? How to check its default location?
57. How to check how many updates/deletes happened in GG in a particular time period?
58. What is archive-log-only mode in GG?
59. What is the command to see the recovery status in GG?
60. What is the use of the SOURCEISTABLE & SPECIALRUN keywords in GG?
==================================================================
How to Resync a Single Table with Minimum Impact
to Other Tables' Replication? (Doc ID 966211.1)
============================================
1. When DDL replication is active, you will not be able to perform filtering, mapping, or transformation of data.
2. Using GG, we can replicate DML and DDL operations.
3. Can we move similar or dissimilar data across different Oracle versions? YES.
4. GG supports filtering, mapping, and transformation of data.
5. Trail files can reside on NAS / SAN.
6. How do you estimate trail space?
To estimate required trail space:
1. Estimate the longest time that the network could be unavailable. Plan to store enough data to withstand the longest possible outage, because otherwise you will need to resynchronize the source and target data if the outage outlasts disk capacity.
2. Estimate how much transaction log volume your business applications generate in one hour.
3. Use the following formula to calculate the required disk space.
[log volume in one hour] x [number of hours downtime] x 0.4 = trail disk space
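As a worked example with assumed numbers: if the applications generate 5 GB of log volume per hour and you plan for a 10-hour outage, the estimate is 5 GB x 10 x 0.4 = 20 GB of trail disk space.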
============================================
FAQ: Most Common Goldengate
Errors and Issues (Doc ID 1354649.1)
Oracle GoldenGate Logdump Complete Reference FAQ (Doc ID 1446672.1)
============================================
1. How to resync a single table with minimum impact to other tables' replication
1. Stop your Replicat on the target side; most likely it has already abended, so go to step 2.
GGSCI> STOP REPLICAT <name>
2. Comment out the MAP statement of that table in your Replicat parameter file.
3. Restart your Replicat, so all other tables are still being replicated.
4. Use the LAG command to check the lag of the Replicat, and wait till the Replicat catches up.
GGSCI> LAG REPLICAT <name>
How to tell when it has caught up? The LAG result shows "At EOF".
5. Stop both the target Replicat and the source Extract.
6. In the source Extract parameter file, add:
FETCHOPTIONS FETCHPKUPDATECOLS
N.B. FETCHOPTIONS FETCHPKUPDATECOLS is not valid for NSK Guardian extracts. Omit it there.
7. Restart the Extract.
8. Reload the table from the source to the target using your initial-load method, after deleting the existing rows in the target table.
9. Wait till the initial load is finished, put the table back in your Replicat parameter file, and add the HANDLECOLLISIONS parameter.
10. Restart your Replicat.
11. Wait till the Replicat catches up, then stop the source Extract and the target Replicat.
12. Comment out HANDLECOLLISIONS in the Replicat parameter file, and comment out "FETCHOPTIONS FETCHPKUPDATECOLS" in the Extract parameter file.
13. Restart the Extract and Replicat.
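As a sketch of the Replicat parameter edits in steps 2 and 9 (the table names are placeholders; OGG treats lines starting with -- as comments):
Step 2, exclude the table:
-- MAP scott.emp, TARGET scott.emp;
Step 9, restore the table with collision handling:
HANDLECOLLISIONS
MAP scott.emp, TARGET scott.emp;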
2. What Causes the GGS ERROR 160 Bad Column Index(xxxx) Error in Replicat
1) Column mismatch between source and target
The GGS ERROR 160 Bad Column Index(xxxx) error in Replicat is caused by a source column index "xxxx" greater than the number of columns in the Source Table Definition File input to the Replicat process, or, if the ASSUMETARGETDEFS parameter is used and the source table and target table do not have the same structure, by the source table having more columns than the target table.
Example
GGS ERROR 160 Bad column index(129) specified for table {table name}, max columns = 127
Explanation
The source table trail record has an index and data for column number 129, but only 127 columns are defined in the Source Table Definition File, or, when the ASSUMETARGETDEFS parameter is used in the Replicat parameter file, the target table contains 127 columns.
This is generally caused by changes in the source table or target table (i.e. columns have been added or deleted) while a new Source Definition File has not been created, so the definitions no longer match the trail records that Replicat is trying to process.
To resolve this error, run DEFGEN on the source system for the table causing the Replicat abend, and copy that definition file to the target system. Add this SOURCEDEFS file to the Replicat parameter file and restart the Replicat process.
Note: This applies to all Open Systems platforms except z/OS (IBM mainframe).
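A minimal DEFGEN sketch with placeholder names: create a parameter file, run the utility from the OGG home, copy the output file to the target, and point the Replicat at it:
-- dirprm/defgen.prm
DEFSFILE ./dirdef/source.def
USERID ogguser, PASSWORD oggpass
TABLE scott.emp;
$ defgen paramfile dirprm/defgen.prm
In the Replicat parameter file on the target, replace ASSUMETARGETDEFS with:
SOURCEDEFS ./dirdef/source.def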
2) Encryption and DECRYPTTRAIL
Check if the trail source is encrypted, and if it is, then add the DECRYPTTRAIL parameter to the Replicat to eliminate the error.
If the Extract does not use encryption and the Replicat has DECRYPTTRAIL enabled, the Replicat will read an unencrypted number and translate it to a column index that does not conform to the proper index number. In that case remove the DECRYPTTRAIL parameter from the Replicat.
Note: the decrypt/encrypt parameters must be set before the RMTTRAIL parameter for correct encryption/decryption behavior.
3. How to Recover from Extract ERROR 180 Encountered Commit SCN <n.xxx> That Is Not Greater Than the Highest SCN Already Processed
This error occurs in an Oracle RAC environment after a transaction is written to the idle node but does not yet appear in the redo log when the current transaction is processed. This transaction will have a higher SCN than the previous transaction.
For a discussion on how to avoid this
issue, please view Document 957112.1.
Solution Overview:
Verify that the Extract Pump and Replicat
have completely processed the records in trail then alter Extract Pump and
Replicat to process new trail created by the ETROLLOVER from the main Extract.
Solution Details:
1. Do an ETROLLOVER on Extract, and take
note of the new sequence number of the trail file.
ALTER EXTRACT [name], ETROLLOVER
2. Start extract
START EXTRACT [name]
3. Send PUMP, LOGEND, to see if it's at the
end of the previous trail.
SEND EXTRACT [pump_name], LOGEND
4. Once it is at the end of the trail file,
You must stop the pump, and do an ETROLLOVER for it too. Take note of the new
trail file sequence number that is created from this step.
STOP EXTRACT [pump_name]
ALTER EXTRACT [pump_name], ETROLLOVER
5. Alter the pump to SEQNO to the new trail
file created from step #1.
ALTER EXTRACT [pump_name], EXTSEQNO #####
EXTRBA 0
6. Restart pump
START EXTRACT [pump_name]
7. Send Replicat, LOGEND to make sure it
has processed all the remaining data, and stop Replicat.
SEND REPLICAT [name], LOGEND
STOP REPLICAT [name]
8. If replicat is not at end of trail,
generate a report and forcestop replicat
SEND REPLICAT [name], REPORT
STOP REPLICAT [name]!
9. Add the following parameters to replicat
parameter file to allow replicat to process each trail record as a single
transaction, or set them to 1 if you have any of these parameters:
GROUPTRANSOPS 1
MAXTRANSOPS 1
10. Restart replicat
START REPLICAT [name]
11. Once replicat has completely processed
the trail, stop the replicat
STOP REPLICAT [name]
12. Edit the replicat parameter file:
- Add parameter HANDLECOLLISIONS to Replicat parameter file
- Remove or comment out GROUPTRANSOPS and MAXTRANSOPS or revert them
back to their original values.
Note: There are pre-conditions for using HANDLECOLLISIONS. There must be either:
a. no PK updates, or
b. Extract has "FETCHOPTIONS FETCHPKUPDATECOLS"
Also, all the tables should have a primary key or a unique index to avoid data integrity issues when using HANDLECOLLISIONS.
13. ALTER REPLICAT, SEQNO to the new trail
file created in step #4.
ALTER REPLICAT [name], EXTSEQNO ######
EXTRBA 0
14. Start Replicat
START REPLICAT [name]
15. Once Replicat has processed the out-of-order SCN operations, disable HANDLECOLLISIONS. You could also look for the CSN and wait for Replicat to checkpoint past it.
SEND REPLICAT [name], NOHANDLECOLLISIONS
16. Edit the replicat parameter file and comment out the HANDLECOLLISIONS parameter. You do not need to stop/restart the replicat. This will ensure that on any subsequent replicat restarts the parameter is disabled.
Note:
If the out-of-order transactions happen to the SAME record on the SAME table, then the Replicat will probably abend in step 7 or step 10. If that is the case, look at the discard file to find that particular record (the primary key value). Then add HANDLECOLLISIONS and continue with the steps below. Later, once the Replicat has caught up, that particular record needs to be manually synced.
However, the chance of an out-of-order transaction happening to the SAME record on the SAME table is rare. As an example, this would mean that the user application inserted record A from node 1, then immediately updated record A on node 2. Out-of-order transactions usually happen to different tables or to different records on the same table.
In other words, if you do see collisions being handled in this recovery procedure, it means you have to re-sync ONLY those particular records that have collisions.
4. GoldenGate Extract Abends with "Unable to lock file" Error For Trail File
Cause:
The trails cannot be exclusively locked for writes by the server/collector process running on the target. As of v10.4, Server/Collector locks the trail file to prevent multiple processes from writing to the same trail file, so new Server/Collector processes are unable to lock the trail files.
Network outages that last longer than the
time the TCP/IP stack is configured to
retransmit unacknowledged packets may result in "orphan" TCP/IP
connections on the RMTHOST system. Since the local system has closed the
connections and the "RST" packets were lost due to the network
outage, no packets (data or "control") will ever be sent for these
connections.
Since the RST packets were not delivered to the RMTHOST, the TCP/IP stack will not present an error to the Server/Collector process. The Server/Collector process will continue to wait, passively, forever, for new data that will never arrive, because the Extract process on the other system is no longer running.
A second cause for this symptom is that the
remote server was rebooted and the Network-Attached Storage (NAS) device where
the target trails reside did not detect and was not notified of the reboot, so
the locks acquired prior to the reboot are still considered to be in force.
Solution:
1) Investigate why a server/collector process is still running when a new server/collector process is started to access the same trail. You can kill the orphan server/collector process to resolve the immediate issue.
2) You can override the lock by using the RMTHOST UNLOCKEDTRAILS option. Use this option with CAUTION as it can cause trail corruption. You must investigate why the trails are locked by another server, or kill those server/collector processes.
NOTE that if an extract pump is stopped normally, the server/collector process stops immediately. By default, current versions (11.1/11.2 onwards) have a default timeout of 5 minutes. Please refer to the reference guide for your version's default. One can override this value using the RMTHOST TIMEOUT option. Example: setting the timeout to 40 seconds.
RMTHOST 192.168.10.1, MGRPORT 7809, PARAMS
TIMEOUT 40
This tells the Server/Collector to
terminate if it doesn't receive any checkpoint information for more than 40
seconds. DO NOT set too low a value, TCPIP communication performance varies
throughout the day.
Other notes:
NAS related issue:
In the case where the NAS was unaware that the system had been rebooted, the best long-term solution is to contact the NAS vendor, who might be able to provide a utility program that can be run early in the system startup process to notify the NAS that it should release all locks owned by this system. The following procedure might offer a short-term work-around:
1. Stop all REPLICAT processes that read the trail file.
2. Stop the target MGR process.
3. Copy trail file xx000000 to xx000000.bk.
4. Delete trail file xx000000.
5. mv xx000000.bk to xx000000.
6. Repeat steps 3-5 for each trail file that can't be locked.
7. From the shell, kill the server (collector) process that was writing to the trail, i.e. check at the OS level for orphan processes, e.g. on unix-style OS's: ps -ef | grep server
If
any such orphan servers exist, e.g.:
oracle 27165 1 0
11:20 ? 00:00:00 ./server -p 7840 -k -l
/opt/oracle/gg/ggserr.log
Then: kill 27165 (or, kill -9 27165) (for this particular case)
8. Start MGR.
9. Start the REPLICAT processes.
10. Re-start the extract that abended and gave this error message.
Note that this may not work, depending on
the NAS and the way it keeps track of advisory file locks acquired using fcntl(
F_GETLK ).
Cluster failover:
When a system fails over to another node, the GoldenGate processes should be stopped, typically by using the ggsci > stop * and > stop mgr commands; however, processes such as server/collectors may remain running. Stop the extract pumps manually or kill the processes. You should check that no processes are running from the GoldenGate directory before switching GoldenGate to run on another node.
5. When Replicat is Running but Not Applying Records
The causes are:
1) The wrong trail is being read
2) The wrong trail file is being read
3) The table name format is incorrectly specified
4) The wrong table name is specified
Each of these cases will show a status of RUNNING. A review of the report will show that no records have been replicated.
Case 1: The wrong trail is being read
Do an INFO DETAIL of the extract, or look at the RMTTRAIL statement in the extract params.
Do an INFO DETAIL of the replicat, or review the replicat report.
The trail file being read by the replicat must have the same name, machine and location as the trail the extract is writing.
Common errors are specifying the wrong target machine or the wrong subdirectory for the trail.
Erroneous extract param files and ADD EXTTRAIL or ADD REPLICAT statements are often the cause.
This is a setup error and needs to be reconfigured.
Case 2: The wrong trail file is being read
Do an info detail of the replicat or review
the replicat report
Examine the trail files (ls -l) on the
target.
Does the trail file being read exist?
Is the replicat at the end of a trail file
not advancing to the next available file?
The solution is to do an > alter rep
<repname>, extseqno <NNNN>, extrba 0 to cause the replicat to
advance to the next file and restart the replicat
This is commonly caused by doing an
etrollover on the source. This generally requires doing an alter on all
downstream processes. This is an administrative error.
Case 3:
The table name format is incorrectly specified
The schema.table name in the replicat param
file must match the schema.table name in the trail.
Do a logdump on a trail file and do a count
command with detail turned on
Or use logdump to look directly at a row to
be applied with ghdr turned on
The schema.table in the replicat must match that in the trail file. On NSK, this includes matching any system name occurrence or absence.
Additionally, changes may be made on the
source side that may change the table specification
This is commonly a setup error that is easily
corrected. Adjust the replicat specification, repoint the replicat to the
beginning of data and rerun.
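For example, assuming a trail file ./dirdat/lt000000, the schema.table recorded in the trail can be inspected in Logdump like this:
Logdump> OPEN ./dirdat/lt000000
Logdump> GHDR ON
Logdump> DETAIL ON
Logdump> COUNT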
Case 4: Wrong table name specified
This could be a case where the source table name of interest is not the one that was intended.
Check the parameter file and correct the table name. Once that is done, alter the Replicat back to the earlier sequence number or to a particular RBA and restart it to apply the transactions.
Recap of Oracle GoldenGate
12c Webcast with Q&A
Simply amazing! That’s
how I would summarize last week’s
webcast for Oracle GoldenGate 12c. It was a very interactive event with hundreds of live attendees and hundreds of great questions. In the presentation part my
colleagues, Doug Reid and Joe deBuzna, went over the new features of Oracle GoldenGate 12c. They explained Oracle GoldenGate 12c key new features including:
· Integrated Delivery for Oracle Database,
· Coordinated Delivery for non-Oracle databases,
· Support for Oracle Database 12c multitenant architecture,
· Enhanced high availability via integration
with Oracle Data Guard Fast-Start Failover,
· Expanded heterogeneity, i.e. support for new databases and operating systems,
· Improved security,
· Low-downtime database migration solutions for Oracle E-Business Suite,
· Integration with Oracle Coherence.
We also had a nice
long and live Q&A section. In the previous Oracle GoldenGate webcasts, we
could not respond to all audience questions in a 10-15 minute timeframe at the
end of the presentation. This
time we kept the presentation part short and left more than 30 minutes for
Q&A. To our surprise, we could not answer even half of the questions we
received.
If you missed this
great webcast discussing the new features of Oracle GoldenGate 12c, and
more than 30 minutes of Q&A with GoldenGate Product Management, you can
still watch it on demand via the
link below.
On this blog post I would like to provide
brief answers from our PM team for some of the questions that we were not
able to answer during the live webcast.
1) Does Oracle GoldenGate replicate DDL statements or DML for Oracle Database?
Oracle GoldenGate replicates DML and DDL operations for Oracle Database and Teradata.
2) Where do we get more info on how to set up integration with Data Guard Fast-Start Failover (FSFO)?
Please see the following blog posts or documents on My Oracle Support:
Best Practice - Oracle GoldenGate and Oracle Data Guard - Switchover/Fail-over Operations for GoldenGate [My Oracle Support Article ID 1322547.1]
Best Practice - Oracle GoldenGate 11gR2 integrated extract and Oracle Data Guard - Switchover/Fail-over Operations [My Oracle Support Article ID 1436913.1]
3) Does GoldenGate support SQL Server 2012 extraction? In the past only apply was supported.
Yes, starting with the new 12c release, GoldenGate captures from SQL Server 2012 in addition to delivery capabilities.
4) Which RDBMS does GoldenGate 12c support?
GoldenGate supports all major RDBMS. For a full list of supported platforms please see the Oracle GoldenGate certification matrix.
5) Could you please provide some more details on Integrated Delivery with dynamic parallel threads at the target side?
Please check out the white papers in the Oracle GoldenGate 12c resource kit for more details on the new features and how Oracle GoldenGate 12c works with Oracle Database.
6) What is the best way to sync partial data (based on some selection criterion) from a table between databases?
Please refer to the article: How To Resync A Single Table With Minimum Impact To Other Tables' Replication? [Article ID 966211.1]
7) How can GoldenGate be better than a database trigger to push data into custom tables?
Triggers can cause high CPU overhead, in some cases almost double compared to reading from redo or transaction logs. In addition, they are intrusive to the application and cause management overhead as the application changes. Oracle GoldenGate's log-based change data capture is not only low-impact in terms of CPU utilization, but also non-intrusive to the application, with low maintenance requirements.
8) Are there any customers in the manufacturing industry using GoldenGate, and for which application?
We have many references in manufacturing. In fact, SolarWorld USA was our guest speaker in the executive video webcast last November. You can watch the interview here. RIM Blackberry uses Oracle GoldenGate for multi-master replication between its global manufacturing systems. Here is another manufacturing customer story from AkzoNobel.
9) Does GoldenGate 12c support compressed objects for replication? Also, does it support BLOB/CLOB columns?
Yes, GoldenGate 12c and GoldenGate 11gR2 both support compressed objects. GoldenGate has supported BLOB/CLOB columns since version 10.
10) Is Oracle Database 11.2.0.4 mandatory to use GoldenGate 12c Integrated Delivery? Not earlier versions?
Yes. To use GoldenGate 12c's Integrated Delivery, Oracle Database 11.2.0.4 or above is required for the target environment.
11) We have had an Oracle Streams implementation for more than 5 years. We would like to migrate to GoldenGate; however, older versions of GoldenGate did not support filtering individual transactions. Is it supported in GoldenGate 12c?
Yes, it is supported in GoldenGate 12c.
Oracle
GoldenGate Interview Questions
1) What are processes/components in GoldenGate?
Ans:
Manager, Extract, Replicat, Data Pump
2) What is Data Pump process in GoldenGate ?
The Data Pump (not to be confused with the Oracle Export Import Data Pump) is an optional secondary Extract group that is created on the source system. When Data Pump is not used, the Extract process writes to a remote trail that is located on the target system using TCP/IP. When Data Pump is configured, the Extract process writes to a local trail and from here Data Pump will read the trail and write the data over the network to the remote trail located on the target system.
The advantages of this can be seen as it protects against a network failure as in the absence of a storage device on the local system, the Extract process writes data into memory before the same is sent over the network. Any failures in the network could then cause the Extract process to abort (abend). Also if we are doing any complex data transformation or filtering, the same can be performed by the Data Pump. It will also be useful when we are consolidating data from several sources into one central target where data pump on each individual source system can write to one common trail file on the target.
3) What is the command line utility in GoldenGate (or) what is ggsci?
ANS: Golden Gate Command Line Interface essential commands – GGSCI
GGSCI -- (Oracle) GoldenGate Software Command Interpreter
4) What is the default port for GoldenGate Manager process?
ANS:
7809
5) What are the important files in GoldenGate?
GLOBALS, ggserr.log, dirprm, etc ...
6) What is checkpoint table?
ANS:
GoldenGate maintains its own checkpoints, which record a known position in the trail file from where the Replicat process will start processing after any kind of error or shutdown. This ensures data integrity, and a record of these checkpoints is maintained either in files stored on disk or in a table in the database, which is the preferred option.
7) How can you see GoldenGate errors?
ANS:
ggsci> VIEW GGSEVT
ggserr.log file
1. What type of Topology does Oracle Goldengate support?
- Unidirectional
- Bidirectional
- Peer-to-peer
- Broadcast
- Consolidation
- Cascading
2. What are the main components of the Oracle Goldengate replication?
The replication configuration consists of the following processes.
- Manager
- Extract
- Pump
- Replicat
3. What databases does Oracle GoldenGate support for replication?
- Oracle Database
- TimesTen
- MySQL
- IBM DB2
- Microsoft SQL Server
- Informix
- Teradata
- Sybase
- Enscribe
- SQL/MX
4. What transaction types does Oracle Goldengate support for Replication?
Goldengate supports both DML and DDL Replication from the source to target.
5. What are the supplemental logging pre-requisites?
The following supplemental logging is required.
- Database supplemental logging
- Object level logging
6. Why is Supplemental logging required for Replication?
When a transaction is committed on the source database, only new data is written to the redo log. However, for Oracle to apply these transactions on the destination database, the before-image key values are required to identify the affected rows. This data is also placed in the trail file and used to identify the rows on the destination; using the key values, the transactions are executed against them.
7. List important considerations for bi-directional replication?
The customer should consider the following points in an active-active replication environment.
- Primary Key: Helps to identify conflicts and resolve them.
- Sequences: Are not supported. The workaround is to use odd/even, range, or concatenated sequences.
- Triggers: These should be disabled or suppressed to avoid uniqueness issues.
- Data Looping: This can easily be avoided using OGG itself.
- LAG: This should be minimized. If a customer says that there will not be any lag due to network or heavy load, then we don't need to deploy CDR. But this is not always the case, as there would be some lag, and this can cause conflicts.
- CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of DML that can be used to detect and resolve conflicts.
- Packaged Applications: These are not supported, as they may contain data types which are not supported by OGG, or they might not allow the application modifications needed to work with OGG.
8. Are OGG binaries supported on ASM Cluster File System (ACFS)?
Yes, you can install and configure OGG on ACFS.
GoldenGate Director Monitoring Migration FAQ (Doc ID 1455454.1)
Q1 - What is the java-based thick client that helps to drag and drop the GG processes rather than typing them as unix commands?
A1 - I believe you may be referring to the Director client.
Q2 - Where will the configuration parameters be stored in case we use the above-mentioned thick client (IDE) to model the GG processes?
A2 - The Director client diagrams are stored in the Director repository database.
Q3 - How can we promote these configurations across the environments, say from Development to Staging etc.?
A3 - If you want to move Director server from one environment to another and preserve your configuration, you can follow the steps in KM Doc Id 1192548.1.
Q4 - Is there a way to apply patches in one shot to all the GG instances running across all nodes?
A4 - No. OGG installs need to be migrated manually. They could be scripted, but OGG does not provide any tool to accomplish this.