Friday, April 2, 2010

Schema replication with Oracle Streams

I have tested table replication with Oracle Streams before, and this time I set up schema replication with an OLTP tester schema. The source database is 10.2.0.4 and the destination is 11.1.0.7, both on the same host running Windows 2008 Server. I faced a couple of problems, listed at the end of this post. The purpose of the post is to set up a schema replication using Streams, then start the OLTP test and monitor the Streams processes. Piece of cake!

Let's start the schema replication with MAINTAIN_SCHEMAS, which is the easiest way; Streams does the rest of it (creating the queues, capture, propagation and apply processes, etc.). But before running the Streams packages, some preconfiguration of the databases has to be done:

- Enabling Archivelog
- Preconfiguring Oracle Streams
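
Roughly, the preconfiguration looks like the sketch below. The streams administrator user U_STREAMS_ADM and the database link ORCL11G are the names used later in this post; the password, default tablespace and the streams_pool_size value are just placeholders for this test.

-- enable archivelog mode on the source (as SYSDBA)
shutdown immediate
startup mount
alter database archivelog;
alter database open;

-- reserve some memory for the streams pool (value is arbitrary here)
alter system set streams_pool_size = 200m scope = both;

-- create and authorize a streams administrator user
create user U_STREAMS_ADM identified by streams_adm default tablespace users;
grant dba to U_STREAMS_ADM;
exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'U_STREAMS_ADM');

-- a database link from the source to the destination, created while connected as the streams administrator
create database link ORCL11G connect to U_STREAMS_ADM identified by streams_adm using 'ORCL11G';

The same kind of streams administrator also has to exist on the destination side.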

After all that, we should be ready for the Streams replication:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'U_EPEKER',
    source_directory_object      => 'DATA_PUMP_DIR',
    destination_directory_object => 'DATA_PUMP_DIR',
    source_database              => 'ORCL',
    destination_database         => 'ORCL11G',
    perform_actions              => TRUE,
    capture_name                 => 'CAPTURE_SCHEMA_U_EPEKER',
    --capture_queue_table       => NULL,
    capture_queue_name           => 'QUEUE_CAPT_SCH_U_EPEKER',
    capture_queue_user           => 'U_STREAMS_ADM',
    propagation_name             => 'PROP_SCH_U_EPEKER',
    apply_name                   => 'APPLY_SCH_U_EPEKER',
    --apply_queue_table          => NULL,
    apply_queue_name             => 'QUEUE_APP_SCH_U_EPEKER',
    apply_queue_user             => 'U_STREAMS_ADM',
    bi_directional               => FALSE,
    include_ddl                  => TRUE,
    instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK);
END;
/
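
MAINTAIN_SCHEMAS runs for a while, since with INSTANTIATION_SCHEMA_NETWORK it instantiates the schema over the network with Data Pump. While it runs, the generated script can be watched through the recoverable-script views; a small sketch (run as the streams administrator on the source):

select script_id,
       creation_time,
       status
from dba_recoverable_script;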

In case of any error, the following DBA views can be queried to see which generated script failed. In a hopeless situation you can also recover or purge the operation, remove the Streams configuration entirely, and re-run MAINTAIN_SCHEMAS after the misconfiguration is fixed.

-- which script blocks were generated and which of them raised errors
select * from DBA_RECOVERABLE_SCRIPT_BLOCKS;
select * from DBA_RECOVERABLE_SCRIPT_ERRORS;
-- purge the failed operation by its script id, then drop the whole streams configuration
execute DBMS_STREAMS_ADM.RECOVER_OPERATION('3A53A54E70764958BA0AFA68DAC0C7F0','PURGE');
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
-- and on the remote server
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION@ORCL11G;


Let's check whether the Streams processes were all set up correctly by the MAINTAIN_SCHEMAS procedure.

select capture_name,
queue_name,
rule_set_name,
source_database,
status
from dba_capture;
/*
CAPTURE_NAME QUEUE_NAME RULE_SET_NAME SOURCE_DATABASE STATUS
------------ ---------- ------------- --------------- ------
CAPTURE_SCHEMA_U_EPEKER QUEUE_CAPT_SCH_U_EPEKER RULESET$_12 ORCL ENABLED
*/


select propagation_name,
source_queue_name,
destination_queue_name,
destination_dblink,
rule_set_name,
status
from dba_propagation;

/*
PROPAGATION_NAME SOURCE_QUEUE_NAME DESTINATION_QUEUE_NAME DESTINATION_DBLINK RULE_SET_NAME
--------------- ---------------- --------------------- ----------------- -------------
PROP_SCH_U_EPEKER QUEUE_CAPT_SCH_U_EPEKER QUEUE_APP_SCH_U_EPEKER ORCL11G RULESET$_9
*/


select sid,
serial#,
capture_name,
state
from v$streams_capture;

/*
SID SERIAL# CAPTURE_NAME STATE
--- ------ ------------ -----
128 8 CAPTURE_SCHEMA_U_EPEKER CAPTURING CHANGES
*/


Everything seems operational on the capture side. I need to check whether the apply process was also created and is running smoothly.

select apply_name,
queue_name,
apply_captured,
rule_set_name,
status
from dba_apply@ORCL11G;

/*
APPLY_NAME QUEUE_NAME APPLY_CAPTURED RULE_SET_NAME STATUS
--------- ---------- -------------- ------------- ------
APPLY_SCH_U_EPEKER QUEUE_APP_SCH_U_EPEKER YES RULESET$_6 ENABLED
*/


select sid,
serial#,
state,
apply#,
apply_name,
total_applied,
total_errors
from v$streams_apply_coordinator@ORCL11G;

/*
SID SERIAL# STATE APPLY# APPLY_NAME
--- ------ ----- ----- ----------
116 19550 IDLE 1 APPLY_SCH_U_EPEKER
*/


I created this schema before; it includes the OLTP simulator procedures. I am planning to share more about this schema on this blog. Anyway, here is the script to create the schema: OLTP Simulator Script

Now I will use the P_CREATE_REPOS procedure to populate the schema, and I expect Streams to replicate the transactions. With the parameters below, this procedure creates 5 groups with 10 users in each group, and then one page for each user.

exec U_EPEKER.P_CREATE_REPOS(5,10,1);

After executing this procedure there should be 5 groups, 10 users in each group, and 50 pages in total (one page per user).

select 'source ' || count(*) as user_count from u_epeker.t_user
union all
select 'destination ' || count(*) as user_count from u_epeker.t_user@ORCL11G;

/*
USER_COUNT
source 50
destination 50
*/


select 'source ' || count(*) as group_count from u_epeker.t_group
union all
select 'destination ' || count(*) as group_count from u_epeker.t_group@ORCL11G;

/*
GROUP_COUNT
source 5
destination 5
*/


select 'source ' || count(*) as page_count from u_epeker.t_pages
union all
select 'destination ' || count(*) as page_count from u_epeker.t_pages@ORCL11G;

/*
PAGE_COUNT
source 50
destination 50
*/


Everything seems to be going fine. Now I will simulate OLTP activity in the source database and monitor the Streams processes. For this I will use the P_CREATE_JOBS procedure of the OLTP simulator schema, which implicitly calls the P_SIMULATE_REQUEST procedure sequentially.

-- with U_EPEKER user
exec U_EPEKER.P_CREATE_JOBS(20,1,'SIMULATE_JOBS',1);


With these parameters the procedure should create 20 jobs, each running at a 1-second interval, which means we are filling up T_VISIT_STATS_DETAILED. And yes, it works; our replication should also work.

select count(*) from U_EPEKER.T_VISIT_STATS_DETAILED
union all
select count(*) from U_EPEKER.T_VISIT_STATS_DETAILED@ORCL11G;

/*
COUNT(*)
-------
385
376

on the second run

COUNT(*)
-------
385
385
*/


So the destination is a few seconds behind (the time needed to ship the logs, I think). What about the capture and the apply processes? I can see the TOTAL_MESSAGES_CAPTURED and TOTAL_APPLIED values getting higher.

select state, capture_name, total_messages_captured from v$streams_capture;
/*
STATE,CAPTURE_NAME,TOTAL_MESSAGES_CAPTURED
CAPTURING CHANGES,CAPTURE_SCHEMA_U_EPEKER,112817
*/


select state, apply_name, total_applied from v$streams_apply_coordinator@ORCL11G;
/*
STATE,APPLY_NAME,TOTAL_APPLIED
IDLE,APPLY_SCH_U_EPEKER,753
*/
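
If TOTAL_ERRORS ever starts climbing, the failing transactions end up in the apply error queue on the destination and can be inspected there; a quick sketch over the same database link:

select apply_name,
       local_transaction_id,
       error_number,
       error_message
from dba_apply_error@ORCL11G;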


Bonus: excluding a table from the schema replication. It works!

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'u_epeker.t_visit_stats_detailed',
    streams_type    => 'capture',
    streams_name    => 'capture_schema_u_epeker',
    queue_name      => 'queue_capt_sch_u_epeker',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'orcl',
    inclusion_rule  => false -- specifies the negative rule set
  );
END;
/
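
To verify that the new rules really landed in the negative rule set of the capture process, something like the following can be queried (a sketch relying on the DBA_STREAMS_RULES view):

select rule_name,
       rule_set_type,
       streams_type,
       schema_name,
       object_name
from dba_streams_rules
where streams_name = 'CAPTURE_SCHEMA_U_EPEKER'
and rule_set_type = 'NEGATIVE';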

exec DBMS_RULE_ADM.DROP_RULE('T_VISIT_STATS_DETAILED50',true);
exec DBMS_RULE_ADM.DROP_RULE('T_VISIT_STATS_DETAILED51',true);


I can now drop the simulation jobs and the Streams configuration.

-- with u_epeker
exec U_EPEKER.P_DROP_JOBS;
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
-- and on the remote server
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION@ORCL11G;


I faced some problems while creating this schema replication configuration with Streams.

The weirdest problem was that the logs could not be shipped from the source to the destination. When I examined the alert.log file of the source database I saw lots of ORA-01031 insufficient privileges errors.
After a little search on OTN I solved the problem by recreating the password file from scratch with orapwd. Strange problem, and I still don't know the underlying reason, but anyway it is solved.
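
For reference, a quick sanity check of the password file configuration looks like the sketch below. I am assuming here that the ORA-01031 came from SYS failing to authenticate through the password file; the queries themselves are standard SQL*Plus.

-- SYS must be listed here with SYSDBA for remote password-file authentication to work
select * from v$pwfile_users;
-- and the instance must be allowed to use the password file
show parameter remote_login_passwordfile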

I also got an error during instantiation. As this is a schema replication, don't forget to create the same tablespaces on the destination, or else the user/schema creation fails.
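
A minimal sketch of what I mean, run on the destination before the instantiation (the tablespace name, datafile path and size below are made up for illustration):

create tablespace TS_U_EPEKER
  datafile 'D:\ORACLE\ORADATA\ORCL11G\ts_u_epeker01.dbf' size 100m
  autoextend on;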

And last, an ORA-00600 problem: ORA-00600: internal error code, arguments: [kwqbmcrcpts101], [], [], [], [], [], [], []. I noticed this problem after I had dropped all the Streams configuration from both databases; two days later I realized the host had run out of disk space because of lots of dump files under bdump. After a search on OTN, I think I hit a 10.2.0.3 bug that occurs in Streams environments. As this is a test environment I bounced the database, and since I had already deleted the Streams configuration, the problem was solved. The subject on Oracle Forums
