Wednesday, October 28, 2009

DRCP vs Oracle Shared Server

While I was browsing the Oracle Database 11g documents on OTN, I found something interesting that is not listed in the Oracle 11g New Features documentation: Database Resident Connection Pooling (DRCP). I was surprised, because this feature is very handy and usable in a lot of environments, especially if you serve your data to more than one application running on more than one application server, with an enormous number of connections.

Before Oracle Database 11g, we all used the Oracle Shared Server architecture to reduce the number of server-side user processes and their overhead on server resources. With 11g a similar architecture is available, called Database Resident Connection Pooling (DRCP).
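As a quick sketch of how DRCP is enabled (the host, port and service name below are illustrative; DBMS_CONNECTION_POOL manages the single default pool SYS_DEFAULT_CONNECTION_POOL, and the parameter values shown are only examples):

--start the default connection pool (as SYSDBA on the server)
exec DBMS_CONNECTION_POOL.START_POOL();

--optionally tune the pool
exec DBMS_CONNECTION_POOL.CONFIGURE_POOL(pool_name => 'SYS_DEFAULT_CONNECTION_POOL', minsize => 4, maxsize => 40);

--clients then request a pooled server in the connect descriptor, e.g. in tnsnames.ora:
--ORCL_POOLED = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521))
--              (CONNECT_DATA=(SERVICE_NAME=orcl)(SERVER=POOLED)))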

Quoted from the related document linked at the end of this post:


"In one of our test scenarios on a 2GB system, DRCP was able to support 10 times the number of connections compared to shared servers, and 20 times the number of connections compared to dedicated servers."


The question is: is DRCP a replacement for Oracle Shared Server?

Related:
Database Resident Connection Pooling (DRCP) Oracle Database 11g Technical White Paper

Wednesday, October 14, 2009

Some Notes on Oracle Streams Table Replication

In this test, replication is set up from an Oracle Database 11.1.0.7 instance to an 11.1.0.6 database running on a virtual machine. This is not a performance test but only a functional test of Streams' basic abilities. The test is done on a table-replication basis; maybe I can publish a schema-replication-based test in another article.

Some preparation is needed first, and I assume both databases are in archivelog mode. The documentation actually says archiving redo logs on the source database alone is enough, which is reasonable for capturing the changes without missing any. But my databases are both in archivelog mode, so this requirement is covered.
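A quick way to confirm archivelog mode on each database (a standard check, not specific to Streams):

--run on both databases
select log_mode from v$database;

--or, in SQL*Plus as SYSDBA:
archive log list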


--check if there are unsupported column types.
select * from DBA_STREAMS_UNSUPPORTED;

--create a tablespace on the source database for test purposes
create tablespace TS_STREAMS_SRC DATAFILE 'D:\oracledb11g\oradata\orcl\TS_STREAMS_SRC_01.dbf'
SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

--create a tablespace on the destination database for test purposes
create tablespace TS_STREAMS_DEST DATAFILE 'D:\oracledb11g\oradata\orcl\TS_STREAMS_DEST_01.dbf'
SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

--creating the streams admin user on both of the databases
--(on the destination database, use TS_STREAMS_DEST as the default tablespace)
create user U_STREAMS_ADM identified by password
DEFAULT TABLESPACE TS_STREAMS_SRC
QUOTA UNLIMITED ON TS_STREAMS_SRC;

--streams user should have dba privilege
grant connect,resource,dba to U_STREAMS_ADM;
exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('U_STREAMS_ADM',TRUE);

--create database links on the databases to access both from SOURCE to DESTINATION and from DESTINATION to SOURCE

--here ORCL_SOURCE is the tns alias for the SOURCE database and ORCL_DESTINATION is the tns alias for the DESTINATION database


create database link ORCL.SOURCE.LOCAL connect to U_STREAMS_ADM identified by password using 'ORCL_SOURCE';

create database link ORCL.DESTINATION.LOCAL connect to U_STREAMS_ADM identified by password using 'ORCL_DESTINATION';

--a second destination should be defined on the source database to send the redo logs to the destination database
alter system set LOG_ARCHIVE_DEST_2='SERVICE=ORCL ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL';

-- creating a table replication with MAINTAIN_TABLES
-- with this configuration CAPTURE and PROPAGATION processes are on the source database
-- APPLY process is on the DESTINATION database

DECLARE
tables DBMS_UTILITY.UNCL_ARRAY;
BEGIN
tables(1) := 'scott.dept';
DBMS_STREAMS_ADM.MAINTAIN_TABLES(
table_names => tables,
source_directory_object => NULL,
destination_directory_object => NULL,
source_database => 'ORCL.SOURCE.LOCAL',
destination_database => 'ORCL.DESTINATION.LOCAL',
capture_name => 'CAPTURE_TABLE_GRP1',
capture_queue_name => 'QUEUE_CAPT_TABLE_GRP1',
capture_queue_user => 'U_STREAMS_ADM',
propagation_name => 'PROPAGATE_TABLE_GRP1',
apply_name => 'APPLY_TABLE_GRP1',
apply_queue_name => 'QUEUE_APPLY_TABLE_GRP1',
apply_queue_user => 'U_STREAMS_ADM',
perform_actions => TRUE,
script_directory_object => NULL,
script_name => NULL,
bi_directional => FALSE,
include_ddl => TRUE,
instantiation => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK);
END;
/


The procedure worked smoothly, while I was half expecting it to end with an exception. If there is a problem with the instantiation of the replication, the following set of commands helps you clear the incomplete configuration, and then you can run MAINTAIN_TABLES again.


SELECT * FROM DBA_RECOVERABLE_SCRIPT_ERRORS;
execute DBMS_STREAMS_ADM.RECOVER_OPERATION('A6B753822B9C4C30A5CD87B9571ACF03','PURGE');
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

--The insert tests just worked fine as follows.
insert into SCOTT.DEPT values (60,'IT-DEVELOPMENT','IST');
insert into SCOTT.DEPT values (70,'IT-STORAGE','IST');
insert into SCOTT.DEPT values (80,'IT-BACKUP','IST');
insert into SCOTT.DEPT values (90,'IT-WINDOWS','IST');
insert into SCOTT.DEPT values (95,'IT-UNIX','IST');
insert into SCOTT.DEPT values (96,'IT-SECURITY','IST');
commit;

--Find out the configuration and status of the processes
select * from dba_capture;
select * from dba_propagation;
select * from dba_apply;
select * from v$streams_capture;


Let's see if the configuration is flexible enough to easily add a new table rule to the existing set. I followed the documentation with the following commands, in order.


--on source database
exec DBMS_CAPTURE_ADM.STOP_CAPTURE('CAPTURE_TABLE_GRP1');
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('PROPAGATE_TABLE_GRP1');

--on the destination database
exec DBMS_APPLY_ADM.STOP_APPLY('APPLY_TABLE_GRP1');

--add new rule to the CAPTURE process
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
TABLE_NAME => 'SCOTT.EMP',
STREAMS_TYPE => 'CAPTURE',
STREAMS_NAME => 'CAPTURE_TABLE_GRP1',
SOURCE_DATABASE => 'ORCL.SOURCE.LOCAL',
QUEUE_NAME => 'QUEUE_CAPT_TABLE_GRP1',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
--DDL_RULE_NAME => NULL,
--DML_RULE_NAME => NULL,
INCLUSION_RULE => TRUE,
AND_CONDITION => NULL);
END;
/

--add new rule to the PROPAGATION process
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
TABLE_NAME => 'SCOTT.EMP',
STREAMS_NAME => NULL, --if a propagation already exists on source_queue_name, the existing propagation is used
SOURCE_QUEUE_NAME => 'U_STREAMS_ADM.QUEUE_CAPT_TABLE_GRP1',
DESTINATION_QUEUE_NAME => 'U_STREAMS_ADM.QUEUE_APPLY_TABLE_GRP1@ORCL.DESTINATION.LOCAL',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'ORCL.SOURCE.LOCAL',
INCLUSION_RULE => TRUE,
AND_CONDITION => NULL,
QUEUE_TO_QUEUE => TRUE --TRUE=queue_to_queue FALSE=queue_to_dblink
);
END;
/

--now the last thing is to add a new rule to the APPLY process
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
TABLE_NAME => 'SCOTT.EMP',
STREAMS_TYPE => 'APPLY',
STREAMS_NAME => 'APPLY_TABLE_GRP1',
SOURCE_DATABASE => 'ORCL.SOURCE.LOCAL',
QUEUE_NAME => 'QUEUE_APPLY_TABLE_GRP1',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
--DDL_RULE_NAME => NULL,
--DML_RULE_NAME => NULL,
INCLUSION_RULE => TRUE,
AND_CONDITION => NULL);
END;
/


We can now check whether the new rules were added, and inspect their conditions, with the following views. We will not see any new process configuration, since I added the rules to the existing APPLY, PROPAGATION and CAPTURE processes.

DBA_RULES
DBA_RULE_SETS
DBA_STREAMS_RULES
DBA_RULE_SET_RULES
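For example, a query along these lines lists the rules attached to the capture process (column names as found in DBA_STREAMS_RULES):

select streams_name, streams_type, rule_type, rule_name, rule_condition
from DBA_STREAMS_RULES
where streams_name = 'CAPTURE_TABLE_GRP1';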

If you did something wrong while adding the rules, you can easily delete them by running the following procedures under DBMS_RULE_ADM. Be sure about what you are deleting by first selecting from DBA_STREAMS_RULES; you do not want to delete a rule associated with a working process.


--To delete a rule you should first remove it from its rule set, then drop it
DBMS_RULE_ADM.REMOVE_RULE
DBMS_RULE_ADM.DROP_RULE
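A hypothetical example of the two calls in sequence (the rule and rule set names below are placeholders, not taken from this configuration; look up the real names in DBA_STREAMS_RULES first):

BEGIN
--detach the rule from its rule set first
DBMS_RULE_ADM.REMOVE_RULE(
rule_name => 'U_STREAMS_ADM.EMP123',
rule_set_name => 'U_STREAMS_ADM.RULESET$_1');
--then drop the rule itself
DBMS_RULE_ADM.DROP_RULE(rule_name => 'U_STREAMS_ADM.EMP123');
END;
/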


I added the rules, and now I should instantiate the table(s) manually. The MAINTAIN_TABLES procedure performs this instantiation automatically for you, but if you add the rules explicitly, you need to instantiate the tables yourself.


--set the instantiation SCN of the SOURCE table/tables
DECLARE
iscn NUMBER; --Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'scott.emp',
source_database_name => 'ORCL.SOURCE.LOCAL',
instantiation_scn => iscn);
END;
/

--Create the table on the DESTINATION database.
create table scott.emp as select * from scott.emp@ORCL.SOURCE.LOCAL;

--Do not forget SUPPLEMENTAL LOGGING for the replicated table on the source database, so the changes can be captured and applied.
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

--After checking all the configuration again, I can now start the processes I stopped.
-- on source database.

exec DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_TABLE_GRP1');
exec DBMS_PROPAGATION_ADM.START_PROPAGATION('PROPAGATE_TABLE_GRP1');

--On the destination database.
exec DBMS_APPLY_ADM.START_APPLY('APPLY_TABLE_GRP1');


Everything seems to have worked fine, as the Oracle documentation stated. Adding rules to working Streams processes is easy, except that you have to instantiate the tables manually, via create table .. as .., export/import or expdp/impdp, when you add the rules yourself. Now I want to go further and alter a replication rule of an existing apply process configuration so that only a subset of the changes is captured.

Rules can be seen by selecting from DBA_RULES, DBA_RULE_SETS, DBA_RULE_SET_RULES or DBA_STREAMS_RULES, which is the one I prefer. Be careful about which rule you are going to alter: there are DML and DDL rules for every process, and the rule to be altered should belong to the correct CAPTURE process (since there can be more than one capture process).

In my case the rule I am going to alter is BONUS85, as follows. Stopping the processes before any change to the configuration is a good idea; without stopping and restarting the processes, the new configuration is not applied to the existing Streams processes.


--On source database
exec DBMS_CAPTURE_ADM.STOP_CAPTURE('CAPTURE_TABLE_GRP1');
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('PROPAGATE_TABLE_GRP1');

--On the destination database
exec DBMS_APPLY_ADM.STOP_APPLY('APPLY_TABLE_GRP1');

--Altering a rule capturing a subset of the data in the table.
BEGIN
dbms_rule_adm.alter_rule('BONUS85',
':dml.get_object_owner()=''SCOTT'' AND :dml.get_object_name()=''BONUS''
AND :dml.is_null_tag()=''Y'' AND :dml.get_source_database_name() = ''ORCL.SOURCE.LOCAL''
AND (:dml.get_value(''NEW'',''ENAME'').AccessVarchar2()=''ERGEM'')');
END;
/

--On source database
exec DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_TABLE_GRP1');
exec DBMS_PROPAGATION_ADM.START_PROPAGATION('PROPAGATE_TABLE_GRP1');

--On the destination database
exec DBMS_APPLY_ADM.START_APPLY('APPLY_TABLE_GRP1');

--Related views for streams monitoring
select * from v$streams_apply_coordinator
select * from v$streams_apply_reader
select * from v$streams_apply_server
select * from v$streams_capture
select * from v$streams_pool_advice
select * from v$streams_transaction
select * from v$streams_monitor


In conclusion, Oracle Streams is easy to configure and use. It seems flexible: the configuration can be changed easily, and existing rules can be altered to capture a subset of the changes. I did not test the performance, but the functional tests look OK.

Of course there are a lot of options you can use: downstream capture processes, synchronous captures, parallel capture/apply processes and so on. The Oracle documentation has sufficient information about Streams.

Related:
Oracle Streams Replication Administrator's Guide
Oracle Streams Concepts and Administration
PSOUG Streams Demo