FAQ on Oracle GoldenGate for UNIX & Windows

1. What is the purpose of running the Reverse utility?

To restore a test database to its original state before the test run. Because the
Reverse utility only backs out changes, a test database can be restored in a
matter of minutes, much more efficiently than a complete database restore, which
can take hours.
To reverse errors caused by corrupt data or accidental deletions. For example, if
an UPDATE or DELETE command is issued without a WHERE clause, the Reverse utility
reverses the operation.
2. What steps are needed to use the Reverse utility?

Run Extract to extract the before data.
Run the Reverse utility to perform the reversal of the transactions.
Run Replicat to apply the restored data to the target database.
3. How does the Reverse utility reverse the forward operations?
Reversing the ordering of database operations in an extract file, a series of extract
files, or a trail so that they can be processed in reverse order, guaranteeing that
records with the same key are properly applied.
Changing delete operations to inserts.
Changing inserts to deletes.
Changing update before images to update after images.
Reversing the begin and end transaction indicators.
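The inversion rules above can be modeled in a few lines of Python. This is an illustrative sketch of the logic only, not Oracle GoldenGate code; the record layout (dicts with "op", "key", "image", "before") is invented for the example.

```python
# Illustrative model of the Reverse utility's inversion rules:
# process records in reverse order, swap inserts and deletes,
# and swap the before/after images of updates.

def reverse_ops(ops):
    """Invert a list of change records so they undo the original work."""
    inverted = []
    for rec in reversed(ops):                 # reverse the operation order
        if rec["op"] == "insert":             # an insert is undone by a delete
            inverted.append({"op": "delete", "key": rec["key"],
                             "image": rec["image"]})
        elif rec["op"] == "delete":           # a delete is undone by an insert
            inverted.append({"op": "insert", "key": rec["key"],
                             "image": rec["image"]})
        else:                                 # an update is undone by applying
            inverted.append({"op": "update", "key": rec["key"],
                             "image": rec["before"],   # ...the before image
                             "before": rec["image"]})
    return inverted
```

For example, a stream that inserted row 1 and then deleted row 2 inverts to an insert of row 2 followed by a delete of row 1, which Replicat can then apply in that order.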
4. What data types are not supported by the Reverse utility in Oracle?
CLOB
BLOB
NCLOB
LONG
LONG RAW
XMLType
UDT
Nested Tables
VARRAY
5. What are the restrictions on the Reverse utility?

Commit timestamps are not changed during the reverse procedure, which causes
the time sequencing in the trail to be backwards. Because of this, you cannot
position Replicat based on a timestamp.

Oracle GoldenGate does not store the before images of certain data types (listed
in question 4), so those types are not supported by the Reverse utility. A before
image is required to reverse update and delete operations.
6. Explain the parameters used in the Extract parameter file for the Reverse utility.
END {time | RUNTIME}
time causes Extract to terminate when it reaches a record in
the data source whose timestamp exceeds the one that is
specified with this parameter. The valid format is as follows,
based on a 24-hour clock:
yyyy-mm-dd[ hh:mi[:ss[.cccccc]]]
RUNTIME causes Extract to terminate when it reaches a record
in the data source whose timestamp exceeds the current date
and clock time. All unprocessed records with timestamps up
to this point in time are processed. One advantage of using
RUNTIME is that you do not have to alter the parameter file to
change dates and times from run to run. Instead, you can
control the process start time within your batch programming.
NOCOMPRESSDELETES
Causes Extract to send all column data to the output, instead
of only the primary key. Enables deletes to be converted back
to inserts.
GETUPDATEBEFORES
Directs Oracle GoldenGate to extract before images so that
updates can be rolled back.
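Taken together, a minimal Extract parameter file for a Reverse-utility run might look like the following sketch. The group name, credential alias, trail name, and table are hypothetical, and your Oracle GoldenGate version may require additional parameters; check the Reverse utility documentation before using it.

```
EXTRACT ext1
USERIDALIAS ogg_alias
END RUNTIME
NOCOMPRESSDELETES
GETUPDATEBEFORES
EXTTRAIL ./dirdat/et
TABLE hr.accounts;
```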

7. Explain the parameters used in the Replicat parameter file for the Reverse utility.
END {time | RUNTIME}
time causes Replicat to terminate when it reaches a record
in the data source whose timestamp exceeds the one that
is specified with this parameter. The valid format is as
follows, based on a 24-hour clock:
yyyy-mm-dd[ hh:mi[:ss[.cccccc]]]
RUNTIME causes Replicat to terminate when it reaches a
record in the data source whose timestamp exceeds the
current date and clock time. All unprocessed records with
timestamps up to this point in time are processed. One
advantage of using RUNTIME is that you do not have to alter
the parameter file to change dates and times from run to
run. Instead, you can control the process start time within
your batch programming.
{SOURCEDEFS full_pathname} |
ASSUMETARGETDEFS
Use SOURCEDEFS if the source and target tables
have different definitions. Specify the source-definitions
file generated by DEFGEN.
Use ASSUMETARGETDEFS if the source and target
tables have the same definitions.
8. How do you run the Reverse utility by using a fully qualified path name?
/GoldenGate_directory/reverse input_file, output_file
Where:
input_file is the input file specified with EXTTRAIL or RMTTRAIL in the Extract
parameter file.
output_file is the output file specified with EXTTRAIL in the ADD REPLICAT
command.
For example, on Windows: c:\ggs\reverse c:\ggs\dirdat\et, c:\ggs\dirdat\rt
9. How do you undo the changes made by the Reverse utility?
If the reverse processing produces unexpected or undesired results, you can reapply
the original changes to the database. To do so, edit the Replicat parameter file and
specify the input file in place of the output file, then run Replicat again.

10. Does GoldenGate support DDL replication?

True

If Oracle GoldenGate supports DDL replication for your database type, you can
use it to replicate the DDL without stopping replication processes. To use this
method, the source and target table structures must be identical.

11. How do you use Oracle GoldenGate to replicate patch DDL?

If the application patch or upgrade adds new objects that you want to include in
data replication, make certain that you include them in the DDL parameter
statement and add the new objects to your TABLE and MAP statements.
If the application patch or upgrade installs triggers or cascade constraints, disable
those objects on the target to prevent collisions between DML that they execute on
the target and the same DDL that is replicated from the source trigger or cascaded
operation.

12. How do you apply a patch manually on the source and target?

1. Stop access to the source database.
2. Allow Extract to finish capturing the transaction data that remains in the
transaction log. To determine when Extract is finished, issue the following
command in GGSCI until it returns At EOF.
SEND EXTRACT group GETLAG
3. Stop Extract.
STOP EXTRACT group
4. Start applying the patch on the source.
5. Wait until the data pump (if used) and Replicat are finished processing the data in
their respective trails. To determine when they are finished, use the following
commands until they return At EOF.
SEND EXTRACT group GETLAG
SEND REPLICAT group GETLAG
6. Stop the data pump and Replicat.
STOP EXTRACT group
STOP REPLICAT group
7. Apply the patch on the target.
8. If the patches changed table definitions, run DEFGEN for the source tables to
generate updated source definitions, and then replace the old definitions with the
new ones in the existing source definitions file on the target system.
9. Start the Oracle GoldenGate processes whenever you are ready to begin
capturing user activity again

13. How do you initialize the transaction logs?

1. Stop the application from accessing the database. This stops more transaction
data from being logged.
2. Run GGSCI and issue the SEND EXTRACT command with the LOGEND option for the
primary Extract group. This command queries Extract to determine whether or not
Extract is finished processing the records that remain in the transaction log.
SEND EXTRACT group LOGEND
3. Continue issuing the command until it returns a YES status, indicating that there are
no more records to process.
4. On the target system, run GGSCI and issue the SEND REPLICAT command with the
STATUS option. This command queries Replicat to determine whether or not it is
finished processing the data that remains in the trail.
SEND REPLICAT group STATUS
5. Continue issuing the command until it shows 0 records in the current transaction,
for example:
Sending STATUS request to REPLICAT REPSTAB...
Current status:
Seqno 0, Rba 9035
0 records in current transaction.
6. Stop the primary Extract group, the data pump (if used), and the Replicat group.
STOP EXTRACT group
STOP EXTRACT pump_group
STOP REPLICAT group
7. Delete the Extract, data pump, and Replicat groups.
DELETE EXTRACT group
DELETE EXTRACT pump_group
DELETE REPLICAT group
8. Using standard operating system commands, delete the trail files.
9. Stop the database.
10. Initialize and restart the database.
11. Recreate the primary Extract group.
ADD EXTRACT group TRANLOG, BEGIN NOW
12. Recreate the local trail (if used).
ADD EXTTRAIL trail, EXTRACT group
13. Recreate the data pump (if used).
ADD EXTRACT pump_group, EXTTRAILSOURCE trail
14. Recreate the remote trail.
ADD RMTTRAIL trail, EXTRACT pump_group
15. Recreate the Replicat group.
ADD REPLICAT group, EXTTRAIL trail
16. Start Extract, the data pump (if used), and Replicat.
START EXTRACT group
START EXTRACT pump_group
START REPLICAT group

14. What is the process for shutting down the system if OGG is running?

When shutting down a system for maintenance and other procedures that affect
Oracle GoldenGate, follow these steps to make certain that Extract has processed all
of the transaction log records.

1. Stop all application and database activity that generates transactions that are
processed by Oracle GoldenGate.
2. Run GGSCI.
3. In GGSCI, issue the SEND EXTRACT command with the LOGEND option. This command
queries the Extract process to determine whether or not it is finished processing
the records in the data source.
SEND EXTRACT group LOGEND
4. Continue issuing the command until it returns a YES status. At that point, all
transaction log data has been processed, and you can safely shut down Oracle
GoldenGate and the system.

15. What needs to be done in the GoldenGate processes when changing database metadata?

This procedure is required to prevent Replicat errors when changing the following
metadata of the source database:
Database character set
National character set
Locale
Timezone
Object name case-sensitivity

1. Stop transaction activity on the source database. Do not make the metadata
change to the database yet.
2. In GGSCI on the source system, issue the SEND EXTRACT command with the LOGEND
option until it shows there is no more redo data to capture.
SEND EXTRACT group LOGEND
3. Stop Extract.
STOP EXTRACT group
4. On each target system, issue the SEND REPLICAT command with the STATUS option
until it shows a status of "At EOF" to indicate that it finished processing all of the
data in the trail. This must be done on all target systems until all Replicat
processes return "At EOF."
SEND REPLICAT group STATUS
5. Stop the data pumps and Replicat.
STOP EXTRACT group
STOP REPLICAT group

6. Change the database metadata.
7. In GGSCI on the source system, issue the ALTER EXTRACT command with the
ETROLLOVER option for the primary Extract to roll over the local trail to the start of a
new file.
ALTER EXTRACT group, ETROLLOVER
8. Issue the ALTER EXTRACT command with the ETROLLOVER option for the data pumps to
roll over the remote trail to the start of a new file.
ALTER EXTRACT pump, ETROLLOVER
9. Start Extract.
START EXTRACT group
10. In GGSCI, reposition the data pumps and Replicat processes to start at the new
trail sequence number.
ALTER EXTRACT pump, EXTSEQNO seqno, EXTRBA RBA
ALTER REPLICAT group, EXTSEQNO seqno, EXTRBA RBA
11. Start the data pumps.
START EXTRACT pump
12. Start the Replicat processes.
START REPLICAT group

For Oracle and Teradata databases, you can enable the DDL support feature
of Oracle GoldenGate to automatically capture and apply the DDL that adds
new tables, instead of using this procedure.

16. What is the process to add a table to the Oracle GoldenGate configuration?

1. Stop user access to the new tables.
2. (If new tables do not satisfy a wildcard) If you are adding numerous tables that do
not satisfy a wildcard, make a copy of the Extract and Replicat parameter files,
and then add the new tables with TABLE and MAP statements. If you do not want to
work with a copy, then edit the original parameter files after you are prompted to
stop each process.
3. (If new tables satisfy wildcards) In the Extract and Replicat parameter files, make
certain the WILDCARDRESOLVE parameter is not being used, unless it is set to the
default of DYNAMIC.
4. (If new tables do not satisfy a wildcard) If the new tables do not satisfy a wildcard
definition, stop Extract.
STOP EXTRACT group
5. Add the new tables to the source and target databases.
6. If required for the source database, issue the ADD TRANDATA command in GGSCI for
the new tables. Before using ADD TRANDATA, issue the DBLOGIN command.
7. Depending on whether the source and target definitions are identical or different,
use either ASSUMETARGETDEFS or SOURCEDEFS in the Replicat parameter file. If
SOURCEDEFS is needed, you can do either of the following:
Run DEFGEN, then copy the new definitions to the source definitions file on
the target.
If the new tables match a definitions template, specify the template with the
DEF option of the MAP parameter. (DEFGEN not needed.)
8. To register the new source definitions or new MAP statements, stop and then start
Replicat.
STOP REPLICAT group
START REPLICAT group
9. Start Extract, if applicable.
START EXTRACT group
10. Permit user access to the new tables.

17. If you are changing an attribute of a source table that is in the Oracle
GoldenGate configuration, such as adding or changing columns or partitions, or
changing supplemental logging details (Oracle), what is the process of syncing?

1. On the source and target systems, create a table, to be known as the marker
table, that can be used for the purpose of generating a marker that denotes a
stopping point in the transaction log. Just create two simple columns: one as a
primary key and the other as a regular column. For example:
CREATE TABLE marker
(
id int NOT NULL,
column varchar(25) NOT NULL,
PRIMARY KEY (id)
);
2. Insert a row into the marker table on both the source and target systems.
INSERT INTO marker VALUES (1, 1);
COMMIT;
3. On the source system, run GGSCI.
4. Open the Extract parameter file for editing.
5. Add the marker table to the Extract parameter file in a TABLE statement.
TABLE marker;
6. Save and close the parameter file.
7. Add the marker table to the TABLE statement of the data pump, if one is being used.
8. Stop the Extract and data pump processes, and then restart them immediately to
prevent capture lag.
STOP EXTRACT group
START EXTRACT group
STOP EXTRACT pump_group
START EXTRACT pump_group
9. On the target system, run GGSCI.
10. Open the Replicat parameter file for editing
11. Add the marker table to the Replicat parameter file in a MAP statement, and use the
EVENTACTIONS parameter as shown to stop Replicat and ignore operations on the
marker table.
MAP marker, TARGET marker, EVENTACTIONS (STOP, IGNORE);
12. Save and close the parameter file.
13. Stop, and then immediately restart, the Replicat process.
STOP REPLICAT group
START REPLICAT group
14. When you are ready to change the table attributes for both source and target
tables, stop all user activity on them.
15. On the source system, perform an UPDATE operation to the marker table as the only
operation in the transaction.
UPDATE marker
SET column=2
WHERE id=1;
COMMIT;
16. On the target system, issue the following command until it shows that Replicat is
stopped as a result of the EVENTACTIONS rule.
STATUS REPLICAT group
17. Perform the DDL on the source and target tables, but do not yet allow user activity.
18. Start Replicat.
START REPLICAT group
19. Allow user activity on the source and target tables.

18. How do you change the number of Oracle RAC threads when using classic capture?

1. On the source and target systems, run GGSCI.
2. Stop Extract and Replicat.
STOP EXTRACT group
STOP REPLICAT group
3. On the source system, issue the following command to delete the primary Extract
group and the data pump.
DELETE EXTRACT group
DELETE EXTRACT pump_group
4. On the target system, issue the following command to delete the Replicat groups.
DELETE REPLICAT group
5. Using standard operating system commands, remove the local and remote trail
files.
6. Add the primary Extract group again with the same name as before, specifying the
new number of RAC threads.
ADD EXTRACT group TRANLOG, THREADS n, BEGIN NOW
7. Add the local trail again with the same name as before.
ADD EXTTRAIL trail, EXTRACT group
8. Add the data pump Extract again, with the same name as before.
ADD EXTRACT group, EXTTRAILSOURCE trail, BEGIN NOW
9. Add the remote trail again with the same name as before.
ADD RMTTRAIL trail, EXTRACT group
10. Add the Replicat group with the same name as before. Leave off any BEGIN options
so that processing begins at the start of the trail.
ADD REPLICAT group, EXTTRAIL trail
11. Start all processes, using wildcards as appropriate. If the re-created processes are
the only ones in the source and target Oracle GoldenGate instances, you can use
START ER * instead of the following commands.
START EXTRACT group
START REPLICAT group

19. Explain Extract's checkpoint positions.

INFO EXTRACT group, SHOWCH

The Input Checkpoint field shows where Extract began processing when it was
started.
The Recovery Checkpoint field shows the location of the oldest uncommitted
transaction.
The Next Checkpoint field shows the position in the redo log that Extract is reading.
The Output Checkpoint field shows the position where Extract is writing.

To change the trail file size:

(Remote trail)
ALTER RMTTRAIL trail, EXTRACT group, MEGABYTES n
(Local trail)
ALTER EXTTRAIL trail, EXTRACT group, MEGABYTES n

20. How do you determine the oldest open transaction?

GGSCI> SEND EXTRACT group, SHOWTRANS
GGSCI> SEND EXTRACT group, { SKIPTRANS ID [THREAD n] [FORCE] |
FORCETRANS ID [THREAD n] [FORCE] }

21. What is the process of switching Extract from classic mode to integrated mode?

1. Back up the current Oracle GoldenGate working directories.
2. While the Oracle GoldenGate processes continue to run in their current
configuration, so that they keep up with current change activity, copy the Extract
parameter file to a new name.
3. Grant the appropriate privileges to the Extract user and perform the required
configuration steps to support your business applications in integrated capture
mode. See Installing and Configuring Oracle GoldenGate for Oracle Database for
information about configuring and running Extract in integrated mode.
4. Log into the mining database with one of the following commands, depending on
where the mining database is located.
DBLOGIN USERIDALIAS alias
MININGDBLOGIN USERIDALIAS alias
Where: alias specifies the alias of a user in the credential store who has the
privileges granted through the Oracle dbms_goldengate_auth.grant_admin_privilege
procedure.
5. Register the Extract group with the mining database. Among other things, this
creates the logmining server.
REGISTER EXTRACT group DATABASE
6. Issue the following command to determine whether the upgrade command can be
issued. Transactions that started before the registration command must be written
to the trail before you can proceed with the upgrade. You may have to issue this
command more than once until it returns a message stating that Extract can be
upgraded.
INFO EXTRACT group UPGRADE
7. Stop the Extract group.
STOP EXTRACT group
8. Switch the Extract group to integrated mode. See Oracle RAC options for this
command in Reference for Oracle GoldenGate for Windows and UNIX, if
applicable.
ALTER EXTRACT group UPGRADE INTEGRATED TRANLOG
9. Replace the old parameter file with the new one, keeping the same name.
10. Start the Extract group.
START EXTRACT group

22. How do you determine whether Extract can be downgraded to classic mode?

Issue the following command. Transactions that started before the downgrade
command is issued must be written to the trail before you can proceed. You may
have to issue this command more than once until it returns a message stating
that Extract can be downgraded.

INFO EXTRACT group DOWNGRADE

23. How do you switch the Extract group to classic mode?

ALTER EXTRACT group DOWNGRADE INTEGRATED TRANLOG

24. How do you reprocess Replicat from the low watermark with HANDLECOLLISIONS?

In this procedure, you reposition all of the threads to the low watermark position. This
is the earliest checkpoint position performed among all of the threads. To state it
another way, the low watermark position is the last record processed by the slowest
thread before the unclean stop.

After repositioning, issue the following command to synchronize the Replicat
threads to the maximum checkpoint position. This command automatically starts
Replicat and executes the threads until they reach the maximum checkpoint
position.

SYNCHRONIZE REPLICAT group

25. What is the LOGBSN?

Extract passes a log begin sequence number, or LOGBSN, to the trail files. The BSN
is the native database sequence number that identifies the oldest uncommitted
transaction that is held in Extract memory. For example, the BSN in an Oracle
installation would be the Oracle system change number (SCN). Each trail file contains
the lowest LOGBSN value for all of the transactions in that trail file. Once you know the
LOGBSN value, you can reposition Extract at the correct read position to ensure that the
appropriate transactions are regenerated to the trail and propagated to Replicat.

The bounded recovery checkpoint is not taken into account when calculating the
LOGBSN. The failure that affected the Extract checkpoint file may also involve a loss of
the persisted bounded recovery data files and bounded recovery checkpoint
information.

On the target, obtain the LOGBSN value by issuing the INFO REPLICAT command with
the DETAIL option.
INFO REPLICAT group, DETAIL
The BSN is included in the output as a line similar to the following:
Current Log BSN value: 1151679

26. There is a limit on how far back Extract can go in the transaction stream,
when in integrated mode. If the required SCN is no longer available, the
ALTER EXTRACT command fails.

27. If Replicat is operating in integrated mode (Oracle targets only), you do not
need the FILTERDUPTRANSACTIONS option. Integrated Replicat handles duplicate
transactions transparently.

28. To improve throughput, you may be able to use multiple processes. Keep related
DML together in the same process stream to ensure data integrity.

29. The number of concurrent Extract and Replicat process groups that can run on a
system depends on how much system memory is available. Each Extract and Replicat
process needs approximately 25-55 MB of memory, or more depending on the size of
the transactions and the number of concurrent transactions.

30. The Oracle GoldenGate GGSCI command interface fully supports up to 5,000
concurrent Extract and Replicat groups per instance of Oracle GoldenGate Manager.

31. Oracle GoldenGate recommends
keeping the number of Extract and Replicat groups (combined) at the default level of
300 or below in order to manage your environment effectively. The number of groups
is controlled by the MAXGROUPS parameter.

32. When creating the groups, keep tables that have relational constraints to
each other in the same group.

33. How do you determine the required swap space for OGG?

1. Start up one Extract or Replicat.
2. Run GGSCI.
3. View the report file and find the line PROCESS VM AVAIL FROM OS (min).
4. Multiply that value by the number of Extract and Replicat processes that will be
running. The result is the maximum amount of swap space that could be required.
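The arithmetic in this procedure is simple enough to sketch. The numbers below are hypothetical; use the figure from your own report file and your own process count.

```python
# Swap-space estimate per the procedure above: multiply the
# "PROCESS VM AVAIL FROM OS (min)" figure from one report file
# by the number of Extract/Replicat processes you plan to run.

def required_swap_gb(vm_avail_gb, process_count):
    """Maximum swap space (GB) that could be required."""
    return vm_avail_gb * process_count

print(required_swap_gb(64, 4))  # e.g. 64 GB per process, 4 processes
```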


34. You can get fetch statistics from the STATS EXTRACT command if you include
the STATOPTIONS REPORTFETCH parameter in the Extract parameter file.

35. Even with BATCHSQL enabled, Replicat may take longer to process tables that
have large or long-running transactions, heavy volume, a very large number of
columns that change, and LOB data.

36. All Replicat groups can use the same checkpoint table.

37. How do you split large tables into row ranges across process groups?

You can use the @RANGE function to divide the rows of any table across two or more
Oracle GoldenGate processes. It might be more efficient to use the primary Extract
or a data pump to calculate the ranges than to use Replicat.
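As a sketch, three Replicat groups could each take one third of a hypothetical table's rows by filtering on its key column. The schema, table, and column names here are assumptions for illustration.

```
-- parameter file for the first of three Replicat groups
MAP src.big_table, TARGET tgt.big_table, FILTER (@RANGE (1, 3, id));
-- the second and third groups use @RANGE (2, 3, id) and @RANGE (3, 3, id)
```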


38. Keep in mind that Extract will normally be faster than Replicat because of the
kind of tasks that each one performs.

39. How do you detect a network bottleneck that is affecting the throughput of Oracle GoldenGate?


1. Issue the following command to view the ten most recent Extract checkpoints. If
you are using a data-pump Extract on the source system, issue the command for
the primary Extract and also for the data pump.
INFO EXTRACT group, SHOWCH 10
2. Look for the Write Checkpoint statistic. This is the place where Extract is writing to
the trail.
Write Checkpoint #1
GGS Log Trail
Current Checkpoint (current write position):
Sequence #: 2
RBA: 2142224
Timestamp: 2011-01-09 14:16:50.567638
Extract Trail: ./dirdat/eh
3. For both the primary Extract and data pump:
Determine whether there are more than one or two checkpoints. There can be
up to ten.
Find the Write Checkpoint n heading that has the highest increment number
(for example, Write Checkpoint #8) and make a note of the Sequence, RBA, and
Timestamp values. This is the most recent checkpoint.
4. Refer to the information that you noted, and make the following validation:
Is the primary Extract generating a series of checkpoints, or just the initial
checkpoint?
If a data pump is in use, is it generating a series of checkpoints, or just one?
5. Issue INFO EXTRACT for the primary and data pump Extract processes again.

Has the most recent write checkpoint increased? Look at the most recent
Sequence, RBA, and Timestamp values to see if their values were incremented
forward since the previous INFO EXTRACT command.
6. Issue the following command to view the status of the Replicat process.
SEND REPLICAT group, STATUS
The status indicates whether Replicat is delaying (waiting for data to process),
processing data, or at the end of the trail (EOF).


40. There is a network bottleneck if the status of Replicat is either in delay mode or at the
end of the trail file and either of the following is true:

You are only using a primary Extract and its write checkpoint is not increasing or is
increasing too slowly. Because this Extract process is responsible for sending data
across the network, it will eventually run out of memory to contain the backlog of
extracted data and abend.
You are using a data pump, and its write checkpoint is not increasing, but the write
checkpoint of the primary Extract is increasing. In this case, the primary Extract
can write to its local trail, but the data pump cannot write to the remote trail. The
data pump will abend when it runs out of memory to contain the backlog of
extracted data. The primary Extract will run until it reaches the last file in the trail
sequence and will abend because it cannot make a checkpoint.


41. Even when there is a network outage, Replicat will process in a normal manner
until it applies all of the remaining data from the trail to the target. Eventually,
it will report that it reached the end of the trail file.

42. How do you increase the TCP/IP packet size?

Use the TCPBUFSIZE option of the RMTHOST parameter to control the size of the TCP
socket buffer that Extract maintains. By increasing the size of the buffer, you can
send larger packets to the target system.

43. How do you determine the optimum buffer size?

1. Determine the average round-trip time (RTT) between the source and target
systems, for example with the ping command.
2. Multiply the RTT by the network bandwidth. For example, if the average RTT is
0.08 seconds and the bandwidth is 100 megabits per second, then the optimum
buffer size is:
0.08 seconds * 100 megabits per second = 8 megabits
3. Divide the result by 8 to determine the number of bytes (8 bits to a byte). For
example:
8 megabits / 8 = 1 megabyte
The required unit for TCPBUFSIZE is bytes, so you would set it to a value of
1000000.
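The calculation above is the classic bandwidth-delay product, which can be sketched as:

```python
# TCPBUFSIZE bytes = RTT (seconds) * bandwidth (bits/second) / 8 bits per byte

def optimum_tcpbufsize(rtt_seconds, bandwidth_bits_per_sec):
    """Optimum TCP socket buffer size in bytes."""
    return int(rtt_seconds * bandwidth_bits_per_sec / 8)

print(optimum_tcpbufsize(0.08, 100_000_000))  # the worked example above
```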

44. How do you improve I/O performance?

RAID 0+1 (striping and mirroring) is a better choice than RAID 5, which uses
checksums that slow down I/O and are not necessary for these types of files.

45. How do you improve I/O performance within the Oracle GoldenGate configuration?

Use the CHECKPOINTSECS parameter to control how often Extract and Replicat make
their routine checkpoints. CHECKPOINTSECS is not valid for an integrated Replicat
on an Oracle database system.

46. Use the GROUPTRANSOPS parameter to control the number of SQL operations that are
contained in a Replicat transaction when operating in its normal mode. Increasing
the number of operations in a Replicat transaction improves the performance of
Oracle GoldenGate by reducing the number of transactions executed by Replicat,
and by reducing I/O activity to the checkpoint file and the checkpoint table, if used.

47. Replicat issues a checkpoint whenever it applies a transaction to the target, in
addition to its scheduled checkpoints.

48. GROUPTRANSOPS is not valid for an integrated Replicat on an Oracle
database system, unless the inbound server parameter parallelism is
set to 1.

49. Use the EOFDELAY or EOFDELAYCSECS parameter to control how often Extract, a data
pump, or Replicat checks for new data after it has reached the end of the current
data in its data source. Increasing the values of these parameters improves
performance, but it also increases the amount of data that must be reprocessed if
the process fails.


50. The CACHEMGR parameter controls the amount of virtual memory and temporary
disk space that is available for caching uncommitted transaction data that is
being processed by Oracle GoldenGate.


When a process starts, the cache manager checks the availability of resources for
virtual memory, as shown in the following example:

CACHEMGR virtual memory values (may have been adjusted)
CACHESIZE: 32G
CACHEPAGEOUTSIZE (normal): 8M
PROCESS VM AVAIL FROM OS (min): 63.97G
CACHESIZEMAX (strict force to disk): 48G

If the current resources are not sufficient, a message like the following may be
returned:

2013-11-11 14:16:22 WARNING OGG-01842 CACHESIZE PER DYNAMIC DETERMINATION (32G) LESS THAN RECOMMENDED: 64G (64bit system)
vm found: 63.97G
Check swap space. Recommended swap/extract: 128G (64bit system).

If the system exhibits excessive paging and the performance of critical processes is
affected, you can reduce the CACHESIZE option of the CACHEMGR parameter. You can
also control the maximum amount of disk space that can be allocated to the swap
directory with the CACHEDIRECTORY option.



Avoid using the primary Extract to filter and convert data. Keep it dedicated to data
capture. It will perform better and is less vulnerable to any process failures that
result from those activities. The objective is to make certain the primary Extract
process is running and keeping pace with the transaction volume.
Use Replicat or a data-pump to perform filtering and conversion.

Tuning Coordination Performance Against Barrier Transactions
In a coordinated Replicat configuration, barrier transactions such as updates to the
primary key cause an increased number of commits to the database, and they
interrupt the benefit of the GROUPTRANSOPS feature of Replicat. When there is a high
number of barrier transactions in the overall workload of the coordinated Replicat,
using a high number of threads can actually degrade Replicat performance.


(Oracle RAC) In a new Replicat configuration, you can increase the PCTFREE
attribute of the Replicat checkpoint table. However, this must be done before
Replicat is started for the first time. The recommended value of PCTFREE is 90.

Use the BATCHSQL parameter to increase the performance of Replicat. BATCHSQL causes
Replicat to organize similar SQL statements into arrays and apply them at an
accelerated rate. In its normal mode, Replicat applies one SQL statement at a time.

The gathering of SQL statements into batches improves efficiency but also consumes
memory. To maintain optimum performance, use the following BATCHSQL options:
BATCHESPERQUEUE
BYTESPERQUEUE
OPSPERBATCH
OPSPERQUEUE

BATCHTRANSOPS
controls the maximum number of batch operations that can be grouped into a
transaction before requiring a commit. The default for non-integrated Replicat is 1000.
The default for integrated Replicat is 50.
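As a sketch, these options could be set together in the Replicat parameter file as shown below. The values are illustrative assumptions, not recommendations, and the exact punctuation accepted may vary by version, so verify against the reference guide.

```
-- Replicat parameter file fragment enabling BATCHSQL with explicit limits
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 1200, BATCHTRANSOPS 500
```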

To determine the
number of wait dependencies, view the TOTAL_WAIT_DEPS column of the
V$GG_APPLY_COORDINATOR database view in the Oracle database.

If a target table does not have a primary key, a unique key, or a unique index, Replicat
uses all of the columns to build its WHERE clause. This is, essentially, a full table scan.
To make row selection more efficient, use a KEYCOLS clause in the TABLE and MAP
statements to identify one or more columns as unique.
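For example, if a column (or combination of columns) is known to be unique, it can be declared in the MAP statement. The schema, table, and column names here are hypothetical:

```
MAP src.orders, TARGET tgt.orders, KEYCOLS (order_id);
```

A matching KEYCOLS clause can be added to the corresponding TABLE statement in the Extract parameter file for the same table.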

If the target database cannot handle large transactions from the source database, you
can split them into a series of smaller ones by using the Replicat parameter
MAXTRANSOPS

MAXTRANSOPS is not valid for an integrated Replicat on an Oracle database
system.
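As a sketch, in a non-integrated Replicat parameter file (the limit shown is illustrative):

```
-- Split any source transaction larger than 10,000 operations into smaller ones
MAXTRANSOPS 10000
```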

The Replicat process maintains cursors for cached SQL statements and for SQLEXEC
operations. Without enough cursors, Replicat must age more statements

By default,
Replicat maintains as many cursors as allowed by the MAXSQLSTATEMENTS parameter
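For example, the cursor limit can be set explicitly in the Replicat parameter file; the value below is illustrative, so check the documented valid range for your release:

```
MAXSQLSTATEMENTS 200
```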

How to Improve Update Speed

Excessive block fragmentation causes Replicat to apply SQL statements at a slower
than normal speed. Reorganize heavily fragmented tables, and then stop and start
Replicat to register the new object ID.


Use the TRANSACTIONTIMEOUT parameter to prevent an uncommitted Replicat target
transaction from holding locks on the target database and consuming its resources
unnecessarily.

TRANSACTIONTIMEOUT limits the amount of time that Replicat can hold a target transaction
open if it has not received the end-of-transaction record for the last source transaction
in that grouped target transaction.


By default, Replicat groups multiple source transactions into one
target transaction to improve performance, but it will not commit a partial source
transaction and will wait indefinitely for that last record. The Replicat parameter
GROUPTRANSOPS controls the minimum size, in operations, of a grouped target transaction.
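A hedged sketch combining the two parameters in a Replicat parameter file (the values are illustrative):

```
-- Group at least 2,000 source operations into one target transaction
GROUPTRANSOPS 2000
-- Commit the open target transaction if it stays open longer than 60 seconds
TRANSACTIONTIMEOUT 60 S
```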

Monitoring Oracle GoldenGate Processing

LAG {EXTRACT | REPLICAT} group
Latency between the last record processed and its timestamp in the data source.

SEND {EXTRACT | REPLICAT} group
Depending on the process and the selected options, returns information about the
memory pool, lag, TCP statistics, long-running transactions, process status,
recovery progress, and more.

INFO PARAM
Queries for and displays static parameter information.

GETPARAMINFO
Displays currently running parameter values.

To confirm that Extract is recovering properly, use the SEND EXTRACT command with the
STATUS option
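From GGSCI, assuming a hypothetical Extract group named exta:

```
SEND EXTRACT exta, STATUS
```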

For Extract, lag is the difference, in seconds, between the time that a record was
processed by Extract (based on the system clock) and the timestamp of that record in
the data source
For Replicat, lag is the difference, in seconds, between the time that the last record
was processed by Replicat (based on the system clock) and the timestamp of the
record in the trail

The INFO command also returns a lag statistic, but this statistic is taken from
the last record that was checkpointed, not the current record that is being
processed, so it is less accurate than the lag reported by the LAG command.

Use the LAGREPORTMINUTES or LAGREPORTHOURS parameter to specify the interval at
which Manager checks for Extract and Replicat lag.

Use the LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS parameter to
specify a lag threshold that is considered critical, and to force a warning message to
the error log when the threshold is reached.

Use the LAGINFOSECONDS, LAGINFOMINUTES, or LAGINFOHOURS parameter to specify a lag
threshold; if lag exceeds the specified value, Oracle GoldenGate reports the lag to
the error log. If the lag exceeds the value specified with the LAGCRITICAL
parameter, Manager reports the lag as critical; otherwise, it reports it as an
informational message.
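These are Manager parameters; a sketch of a Manager parameter file using them (all intervals shown are illustrative):

```
-- Check Extract and Replicat lag every 5 minutes
LAGREPORTMINUTES 5
-- Report lag over 30 seconds as an informational message
LAGINFOSECONDS 30
-- Report lag over 2 minutes as critical
LAGCRITICALMINUTES 2
```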

You can use the default automatic heartbeat table functionality to monitor end-to-end
replication lag. Automatic heartbeats are sent from each source database into the
replication streams by updating the records in a heartbeat seed table and a heartbeat
table, and by constructing a heartbeat history table. Each replication process in the
replication path processes these heartbeat records and updates the information in
them. The heartbeat records are then inserted or updated in the heartbeat table at the
target databases.

Using the information in the heartbeat table and the heartbeat history table, the current
and historical lags in each of the replication paths can be computed.

In a unidirectional configuration, the target database will populate only the incoming
columns in the heartbeat table

Ensure that Self-Describing Trail Files functionality is enabled

Enable the heartbeat functionality with the ENABLE_HEARTBEAT_TABLE parameter. This is
the default.

Add a heartbeat table to each of your databases with the ADD HEARTBEATTABLE
command. Add the heartbeat table to all source and target instances and then restart
existing Oracle GoldenGate processes to enable heartbeat functionality. Depending on your specific
database system, you may or may not be required to create or enable a job to
populate heartbeat table data.
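From GGSCI, after connecting to the database, the command itself is simple; the credential alias ggadmin below is hypothetical:

```
DBLOGIN USERIDALIAS ggadmin
ADD HEARTBEATTABLE
```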

For Oracle Databases, you must ensure that the Oracle DBMS_SCHEDULER is
operating correctly as the heartbeat update relies on it. You can query the
DBMS_SCHEDULER by issuing:
SELECT start_date, last_start_date, next_run_date
FROM dba_scheduler_jobs
WHERE job_name = 'GG_UPDATE_HEARTBEATS';
Then look for valid entries for NEXT_RUN_DATE, which is the next time the scheduler will
run. If this is a timestamp in the past, then no job will run and you must correct it.
A common reason for the scheduler not working is the parameter
job_queue_processes being set too low (typically zero). Increase the number of
job queue processes configured in the database with the ALTER SYSTEM SET
JOB_QUEUE_PROCESSES = ##; command, where ## is the number of job queue processes.

Run an Extract, which, on receiving a logical change record (LCR), checks the value
in the OUTGOING_EXTRACT column.
If the Extract name matches this value, the OUTGOING_EXTRACT_TS column is
updated and the record is entered in the trail.
If the Extract name does not match then the LCR is discarded.
If the OUTGOING_EXTRACT value is NULL, it is populated along with
OUTGOING_EXTRACT_TS and the record is entered in the trail.


Use the REPORTCOUNT parameter to report a count of transaction records that Extract or
Replicat processed since startup.

Use the WARNRATE parameter to set a threshold for the number of SQL errors that can
be tolerated on any target table before being reported to the process report and to the
error log.
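A sketch of both reporting parameters in a Replicat parameter file (the values are illustrative):

```
-- Report a record count and processing rate every 10 minutes
REPORTCOUNT EVERY 10 MINUTES, RATE
-- Warn once 100 SQL errors have occurred on any one target table
WARNRATE 100
```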

The maximum size of a discard file is 50 megabytes.

If a process is started from the command line of the operating system, it does not
generate a discard file by default.

To account for time differences between source and target systems, use the
TCPSOURCETIMER parameter in the Extract parameter file. This parameter adjusts the
timestamps of replicated records for reporting purposes, making it easier to interpret
synchronization lag.
