Tips and wisdom from 15 years of DBA experience

Wednesday, February 3, 2010

Streams vs. Golden Gate

In late 2009, Oracle purchased Golden Gate, whose technology is now the stated platform that Oracle will build upon for all new replication features. Streams will still be supported, but the new goodies and major focus will be on Golden Gate. My initial experiences with Golden Gate have been very positive, and for those of you who have ever used Streams, you will be excited to learn that there is a single command-line interface to do *everything*, and transformations are far, far simpler. Here is the breakdown:

Setup

Streams:
Set up database links, hope that your global names are set properly, and use dbms_streams_adm.maintain_schemas() - or, if you are really a masochist, build your capture and apply queues yourself and set up propagation, then spend a couple of hours digging through V$ views to find out why things aren't working.

Golden Gate:
Untar/unzip the software on both source and target to a non-NFS-mounted volume. I had problems using GG when the connection to the NFS server was slow, and GG does not support this configuration. Who installs binaries on NFS-mounted volumes anyway? Duh.

type ./ggsci
> info all -- (gives you everything you need to know about what is running)

I found the easiest configuration was to walk through the steps in the install guide, then follow the first example in the admin guide to create a reporting system. It's so simple and straightforward.
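
In case it helps, here is roughly what that basic one-way setup looks like from ggsci. This is just a sketch - the process names ext1/rep1, the gguser login, and the ./dirdat/rt trail are placeholders I made up, not anything lifted from the guides:

On the source:
> dblogin userid gguser, password ggpass
> add extract ext1, tranlog, begin now
> add rmttrail ./dirdat/rt, extract ext1
> start extract ext1

On the target:
> add replicat rep1, exttrail ./dirdat/rt, nodbcheckpoint
> start replicat rep1

Each process also needs a parameter file before you start it (edit params ext1, and so on) - more on what goes in those below.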

Transformations

Streams:
Create DML handlers or apply handlers, and have fun debugging them while hunting for errors in v$applyblahblah, v$captureblahblah...

Golden Gate:
Add a line like this to your source parameter file to pull only rows where col1 is greater than 1000:
TABLE schema1.table1, FILTER(col1 > 1000);
Done.
Other transformations are equally simple. Exclude a column?
TABLE schema1.table1, COLSEXCEPT (col2);

...send data from one schema to another?
MAP schema1.*, TARGET schema2.*;

...map a column from one table to another?
MAP schema1.tab1, TARGET schema1.tab2, COLMAP (USEDEFAULTS, tab2_col = tab1_col);
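
To put those one-liners in context, here is roughly what the two parameter files end up looking like. Again, just a sketch - the host name, login, trail path, and process names are placeholders:

Extract parameters (edit params ext1 on the source):

EXTRACT ext1
USERID gguser, PASSWORD ggpass
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE schema1.table1, FILTER (col1 > 1000);

Replicat parameters (edit params rep1 on the target):

REPLICAT rep1
USERID gguser, PASSWORD ggpass
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rep1.dsc, PURGE
MAP schema1.tab1, TARGET schema1.tab2, COLMAP (USEDEFAULTS, tab2_col = tab1_col);

Swap in whichever TABLE/MAP clauses you need (the schema-to-schema MAP, COLSEXCEPT, etc.) - the FILTER and COLMAP lines here are the same ones from the examples above.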

My experience with GG proved that GG is faster than Streams, even with Streams running real-time downstream capture... oh, that's another thing - ever struggled setting up redo transport services for real-time downstream capture in Streams? Yeah... none of that with GG.

GG does not use any Oracle-specific mechanism to transport the data. There is a manager process on each server involved in the replication that listens on a user-defined port (default 7809). The extract process reads from the online redo log (or the archive logs if necessary) and ships the changes to the target as what are called trail files. Those trail files are then read by the replicat process and applied on the target. Overall, GG looks very promising.
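
For reference, the manager side of that is just a tiny parameter file on each host (edit params mgr). Another sketch - PORT 7809 is the default mentioned above, and the purge rule is optional housekeeping:

PORT 7809
-- optional: clean up trail files that have already been processed
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3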

For more information on Golden Gate, see the following:
http://cglendenningoracle.blogspot.com/2010/11/transform-data-in-oracle-golden-gate.html

3 comments:

Franklin said...

How does Golden Gate transport the trail files from the source to the destination? If the network transport medium is a 2-4 Mbps line, how does Golden Gate assure delivery?

Anonymous said...

Does GG need archived redo log files? Or does it operate solely on online redo logs? We carried out a pilot using Streams for an online datafeed to our data warehouse. The production DBAs knocked it on the head, because Streams needs the archiver. Apparently this is too much of a performance hit on OPS. When I questioned this, I was dismissed with "do I think the OPS system is there purely to support the data warehouse?", only in words that were not so polite.

How does the performance of GG measure up to Streams? If the capture, propagation, or apply processes are broken, can GG restart automatically at the failure point?

What about licensing for GG?

Thanks for any help with these questions.

Anonymous said...

licensing is per source and per target
