I installed the Google Analytics site script on my blog two weeks ago and it gave me more than I imagined. Here are some reports supplied by the dashboard today.
At first sight these are the expected results: visits, pageviews, the pages/visit ratio and so on. On the Visitors tab of the dashboard you can obtain lots of information about the users' locations and technical specs.
You can also investigate which search engines hit the site, and with which keywords. Another nice feature is that you can see which contents (in this case, which posts) are read the most, and through which keywords :)
As can be seen from the results, Oracle Grid Custom Reports is the most visited post and the homepage follows it very closely. It seems the Oracle Enterprise Manager Grid Control posts are the subject readers are most interested in. Streams comes next, together with the SQL Server Logical Consistency Based IO Error post, which is a real pain for admins.
Unfortunately there is no way (or I couldn't find one) to share the reports anonymously as with GoogleNotes or GoogleDocs. I hope this will be a feature in a future version.
Some links:
http://www.google.com/analytics
An interesting post from Google on the Google Analytics blog
Tuesday, April 27, 2010
Google Analytics works better than I expected
Labels:
Google Apps
Friday, April 2, 2010
Schema replication with Oracle Streams
I tested table replication with Oracle Streams before, and this time I used schema replication with an OLTP tester schema. The source database is 10.2.0.4 on Windows and the destination is an 11.1.0.7 database on the same host, running Windows Server 2008. I faced a couple of problems, listed at the end of this post. The purpose of this post is to set up schema replication using Streams, then start the OLTP test and monitor the Streams processes. Piece of cake!
Let's start the schema replication with MAINTAIN_SCHEMAS, which is the easiest way; Streams does the rest (creating the queues, capture, propagation and apply processes, etc.). But before running the Streams packages, some preconfiguration of the database has to be done:
- Enabling Archivelog
- Preconfiguring Oracle Streams
After all that, we should be ready for Streams replication.
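The two prerequisites above can be sketched roughly as follows. This is a hedged outline, not the exact setup used here: the password, tablespace and link name are illustrative (the user name matches the U_STREAMS_ADM queue user that appears below), so adjust everything to your environment.

```sql
-- 1) Put the source database in archivelog mode (capture mines the redo logs)
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

-- 2) Create a Streams administrator and grant it the Streams privileges
create user u_streams_adm identified by streams_pwd
  default tablespace users quota unlimited on users;
grant dba to u_streams_adm;
exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('U_STREAMS_ADM');

-- 3) As the Streams administrator on the source, create a database link
--    to the destination (MAINTAIN_SCHEMAS uses it for the instantiation)
create database link ORCL11G
  connect to u_streams_adm identified by streams_pwd
  using 'ORCL11G';
```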
BEGIN
DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
schema_names => 'U_EPEKER',
source_directory_object => 'DATA_PUMP_DIR',
destination_directory_object => 'DATA_PUMP_DIR',
source_database => 'ORCL',
destination_database => 'ORCL11G',
perform_actions => TRUE,
capture_name => 'CAPTURE_SCHEMA_U_EPEKER',
--capture_queue_table => NULL,
capture_queue_name => 'QUEUE_CAPT_SCH_U_EPEKER',
capture_queue_user => 'U_STREAMS_ADM',
propagation_name => 'PROP_SCH_U_EPEKER',
apply_name => 'APPLY_SCH_U_EPEKER',
--apply_queue_table => NULL,
apply_queue_name => 'QUEUE_APP_SCH_U_EPEKER',
apply_queue_user => 'U_STREAMS_ADM',
bi_directional => FALSE,
include_ddl => TRUE,
instantiation => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK);
END;
In case of any error, the following DBA views can be queried to see which generated script failed; in a hopeless situation you can recover the operation, or remove the whole Streams configuration and re-run MAINTAIN_SCHEMAS after fixing the misconfiguration.
select * from DBA_RECOVERABLE_SCRIPT_BLOCKS;
select * from DBA_RECOVERABLE_SCRIPT_ERRORS;
execute DBMS_STREAMS_ADM.RECOVER_OPERATION('3A53A54E70764958BA0AFA68DAC0C7F0','PURGE');
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
-- and on the remote server
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION@ORCL11G;
Checking whether the Streams processes are all set up right after the MAINTAIN_SCHEMAS procedure.
select capture_name,
queue_name,
rule_set_name,
source_database,
status
from dba_capture;
/*
CAPTURE_NAME QUEUE_NAME RULE_SET_NAME SOURCE_DATABASE STATUS
------------ ---------- ------------- --------------- ------
CAPTURE_SCHEMA_U_EPEKER QUEUE_CAPT_SCH_U_EPEKER RULESET$_12 ORCL ENABLED
*/
select propagation_name,
source_queue_name,
destination_queue_name,
destination_dblink,
rule_set_name,
status
from dba_propagation;
/*
PROPAGATION_NAME SOURCE_QUEUE_NAME DESTINATION_QUEUE_NAME DESTINATION_DBLINK RULE_SET_NAME
--------------- ---------------- --------------------- ----------------- -------------
PROP_SCH_U_EPEKER QUEUE_CAPT_SCH_U_EPEKER QUEUE_APP_SCH_U_EPEKER ORCL11G RULESET$_9
*/
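If the propagation ever stalls, the queue scheduler records the failure. A quick check might look like the query below; this is a hedged sketch based on the 10.2 DBA_QUEUE_SCHEDULES view, so verify the column names on your release.

```sql
-- Failure count and the last error the propagation scheduler hit, if any
select qname,
       destination,
       failures,
       last_error_msg
  from dba_queue_schedules
 where qname = 'QUEUE_CAPT_SCH_U_EPEKER';
```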
select sid,
serial#,
capture_name,
state
from v$streams_capture;
/*
SID SERIAL# CAPTURE_NAME STATE
--- ------ ------------ -----
128 8 CAPTURE_SCHEMA_U_EPEKER CAPTURING CHANGES
*/
Everything seems operational on the capture side. I need to check whether the apply process was also created and runs smoothly.
select apply_name,
queue_name,
apply_captured,
rule_set_name,
status
from dba_apply@ORCL11G;
/*
APPLY_NAME QUEUE_NAME APPLY_CAPTURED RULE_SET_NAME STATUS
--------- ---------- -------------- ------------- ------
APPLY_SCH_U_EPEKER QUEUE_APP_SCH_U_EPEKER YES RULESET$_6 ENABLED
*/
select sid,
serial#,
state,
apply#,
apply_name,
total_applied,
total_errors
from v$streams_apply_coordinator@ORCL11G;
/*
SID SERIAL# STATE APPLY# APPLY_NAME
--- ------ ----- ----- ----------
116 19550 IDLE 1 APPLY_SCH_U_EPEKER
*/
I created this schema before; it includes the OLTP simulator procedures. I am planning to share this schema on this blog later. Anyway, here is the script to create the schema: OLTP Simulator Script
Now I will use the P_CREATE_REPOS procedure to populate the schema, and I expect Streams to replicate the transactions. This procedure will create 5 groups with 10 users in each group, and after that it will create 1 page for each user.
exec U_EPEKER.P_CREATE_REPOS(5,10,1);
After executing this procedure there should be 5 groups, 10 users in each group, and 50 pages in total (1 page per user).
select 'source ' || count(*) as user_count from u_epeker.t_user
union all
select 'destination ' || count(*) as user_count from u_epeker.t_user@ORCL11G;
/*
USER_COUNT
source 50
destination 50
*/
select 'source ' || count(*) as group_count from u_epeker.t_group
union all
select 'destination ' || count(*) as group_count from u_epeker.t_group@ORCL11G;
/*
GROUP_COUNT
source 5
destination 5
*/
select 'source ' || count(*) as page_count from u_epeker.t_pages
union all
select 'destination ' || count(*) as page_count from u_epeker.t_pages@ORCL11G;
/*
PAGE_COUNT
source 50
destination 50
*/
Everything seems to be going fine. Now I will simulate OLTP activity in the source database and monitor the Streams processes. For this I will use the P_CREATE_JOBS procedure of the OLTP simulator schema, which implicitly calls the P_SIMULATE_REQUEST procedure sequentially.
-- with U_EPEKER user
exec U_EPEKER.P_CREATE_JOBS(20,1,'SIMULATE_JOBS',1);
With these parameters the procedure should create 20 jobs, each with a 1-second interval, which means we are filling up T_VISIT_STATS_DETAILED. And yes, it works; our replication should be working too.
select count(*) from U_EPEKER.T_VISIT_STATS_DETAILED
union all
select count(*) from U_EPEKER.T_VISIT_STATS_DETAILED@ORCL11G;
/*
COUNT(*)
-------
385
376
on the second run
COUNT(*)
-------
385
385
*/
So the destination comes a few seconds behind (the time needed to ship the logs, I think). What about the capture and apply processes? I can see the TOTAL_MESSAGES_CAPTURED and TOTAL_APPLIED values getting higher.
select state, capture_name, total_messages_captured from v$streams_capture;
/*
STATE,CAPTURE_NAME,TOTAL_MESSAGES_CAPTURED
CAPTURING CHANGES,CAPTURE_SCHEMA_U_EPEKER,112817
*/
select state, apply_name, total_applied from v$streams_apply_coordinator@ORCL11G;
/*
STATE,APPLY_NAME,TOTAL_APPLIED
IDLE,APPLY_SCH_U_EPEKER,753
*/
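To put a number on that few-seconds lag, V$STREAMS_CAPTURE also exposes the redo-creation time of the message currently being processed. A rough latency estimate, assuming the 10.2 column name below, is:

```sql
-- Seconds between a change being written to the redo log and the
-- capture process picking it up
select capture_name,
       (sysdate - capture_message_create_time) * 86400 as capture_latency_seconds
  from v$streams_capture;
```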
Bonus: excluding a table from the schema replication. It works!
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES
(
table_name => 'u_epeker.t_visit_stats_detailed',
streams_type => 'capture',
streams_name => 'capture_schema_u_epeker',
queue_name => 'queue_capt_sch_u_epeker',
include_dml => true,
include_ddl => true,
source_database => 'orcl',
inclusion_rule => false --specifies the negative rule set
);
END;
exec DBMS_RULE_ADM.DROP_RULE('T_VISIT_STATS_DETAILED50',true);
exec DBMS_RULE_ADM.DROP_RULE('T_VISIT_STATS_DETAILED51',true);
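The rule names above (T_VISIT_STATS_DETAILED50/51) are generated by Oracle, so they will differ on every system. Before dropping them they can be looked up, for example with a query like this (a hedged sketch against the DBA_STREAMS_RULES view; check the exact column names on your release):

```sql
-- List the rules attached to the capture process, including those in the
-- negative rule set created by ADD_TABLE_RULES
select rule_name,
       rule_set_type,   -- POSITIVE or NEGATIVE
       streams_type,
       schema_name,
       object_name
  from dba_streams_rules
 where streams_name = 'CAPTURE_SCHEMA_U_EPEKER';
```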
I can drop the simulation jobs and the Streams configuration for now.
-- with u_epeker
exec U_EPEKER.P_DROP_JOBS;
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
-- and on the remote server
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION@ORCL11G;
I faced some problems while creating this schema replication configuration with Streams.
The weirdest problem was that the logs could not be shipped from the source to the destination. When I examined the alert.log file of the source database I saw lots of ORA-01031 Insufficient Privileges errors.
After a little search on OTN I solved the problem by recreating the password file from scratch with orapwd. A strange problem, and I still don't know the underlying reason, but anyway it is solved.
I also got an error during instantiation. As this is schema replication, don't forget to create the same tablespace on the destination, or else the user/schema creation fails.
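To avoid that instantiation error, the destination needs a tablespace with the same name as the one the schema uses on the source. A minimal sketch; the tablespace name and datafile path here are made up for illustration:

```sql
-- On ORCL11G, before running MAINTAIN_SCHEMAS
create tablespace TS_EPEKER
  datafile 'C:\ORADATA\ORCL11G\TS_EPEKER01.DBF' size 100M
  autoextend on next 10M maxsize unlimited;
```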
And last, an ORA-00600 problem: ORA-00600: internal error code, arguments: [kwqbmcrcpts101], [], [], [], [], [], [], []. I noticed this problem after I had dropped the whole Streams configuration from both databases; two days later I realized the host had run out of disk space, and I found lots of dump files under bdump. After a search on OTN, I think I hit a 10.2.0.3 bug that occurs in Streams environments. As this is a test environment I bounced the database, and since I had already deleted the Streams configuration the problem was gone. The subject on Oracle Forums
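To catch a dump-file flood like that earlier, it helps to know where the trace files land. The directory can be read from the instance itself; the parameter below is the 10g one (it was superseded by diagnostic_dest in 11g):

```sql
-- 10g: background processes write their trace/dump files here
select value as bdump_dir
  from v$parameter
 where name = 'background_dump_dest';
```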
Labels:
Streams
Thursday, April 1, 2010
"PAUSED" state for streams capture process
My test replication somehow got broken and does not apply changes to the target database (seems like a new challenge for me!). The first thing I checked was the Streams capture process, and I saw the weird state below.
select sid,
serial#,
capture_name,
state
from v$streams_capture;
/*
SID SERIAL# CAPTURE_NAME STATE
-- ------- ------------ -----
121 1524 CAPTURE_SCHEMA_U_EPEKER PAUSED FOR FLOW CONTROL
*/
If the capture process is paused, maybe I need to check what is going on with the apply process. But it seems nothing has reached the apply process (TOTAL_APPLIED is 0), so I think the problem is on the capture side.
select sid,
serial#,
state,
apply#,
apply_name,
total_applied,
total_errors
from v$streams_apply_coordinator@ORCL11G;
/*
SID SERIAL# STATE APPLY# APPLY_NAME TOTAL_APPLIED TOTAL_ERRORS
--- ------ ---- ------ ----------- ------------- ------------
110 4208 IDLE 1 APPLY_SCH_U_EPEKER 0 0
*/
I still couldn't make anything of the message "PAUSED FOR FLOW CONTROL". A little search on Google led me, as always, to the Oracle documentation. It seems I need to check the streams_pool_size parameter.
select queue_name, sender_name, num_msgs, unbrowsed_msgs, memory_usage, publisher_state from V$BUFFERED_PUBLISHERS;
/*
QUEUE_NAME SENDER_NAME NUM_MSGS UNBROWSED_MSGS MEMORY_USAGE PUBLISHER_STATE
--------- ----------- -------- -------------- ----------- ---------------
QUEUE_CAPT_SCH_U_EPEKER CAPTURE_SCHEMA_U_EPEKER 15057 15057 87 IN FLOW CONTROL: TOO MANY UNBROWSED MESSAGES
*/
If MEMORY_USAGE stands for MB, then 87 could be a problem. The following investigation shows that ASMM gives only 56 MB to the streams pool, so there may be a lack of memory. I decided to increase streams_pool_size to 200 MB.
select name, value from v$parameter where name = 'streams_pool_size';
/*
NAME VALUE
---- -----
streams_pool_size 0
*/
select name, value/1024/1024 as mb from v$parameter where name = 'sga_target';
/*
NAME MB
---- --
sga_target 512
*/
select pool, round(sum(bytes)/1024/1024) as MB from v$sgastat group by pool;
/*
POOL MB
---- --
140
java pool 12
streams pool 56
shared pool 300
large pool 4
*/
alter system set sga_target=1G scope=spfile;
alter system set streams_pool_size=200M scope=spfile;
shutdown immediate;
startup;
As soon as the database opened, the capture process was up again, and the publisher's state is now "PUBLISHING MESSAGES".
select sid,
serial#,
capture_name,
state
from v$streams_capture;
/*
SID SERIAL# CAPTURE_NAME STATE
--- ------ ------------ -----
151 1 CAPTURE_SCHEMA_U_EPEKER CAPTURING CHANGES
*/
select queue_name, sender_name, num_msgs, unbrowsed_msgs, memory_usage, publisher_state from V$BUFFERED_PUBLISHERS;
/*
QUEUE_NAME SENDER_NAME NUM_MSGS UNBROWSED_MSGS MEMORY_USAGE PUBLISHER_STATE
---------- ----------- -------- -------------- ------------ ---------------
QUEUE_CAPT_SCH_U_EPEKER CAPTURE_SCHEMA_U_EPEKER 44 0 5 PUBLISHING MESSAGES
*/
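A side note: streams_pool_size is a dynamically modifiable parameter, so when the SGA has enough headroom the pool can be grown without a bounce; here the simultaneous sga_target increase forced the restart anyway. A hedged sketch:

```sql
-- May avoid the restart if the SGA has room for the larger pool;
-- a nonzero value also acts as a floor for ASMM's automatic tuning
alter system set streams_pool_size = 200M scope = both;
```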
Labels:
Streams
Subscribe to:
Posts (Atom)