Thursday, December 30, 2010

Error in checking condition of instance on node

After rebooting both RAC nodes, srvctl started complaining about the condition of the instance on the second node of my cluster.


[oracle@EPRHEL6 admin]$ srvctl status database -d orcl
Instance ORCL1 is running on node eprhel5
PRKO-2015 : Error in checking condition of instance on node: eprhel6

[oracle@EPRHEL6 admin]$ sqlplus system/password@ORCL2

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Dec 27 00:03:11 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name: 



srvctl also complained when I tried to start the instance on the second node, so I decided to start the instance manually with SQL*Plus.

[oracle@EPRHEL6 admin]$ sqlplus "/ as sysdba"


SQL*Plus: Release 10.2.0.1.0 - Production on Mon Dec 27 00:03:24 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> startup;
Oracle instance started.

Total System Global Area 599785472 bytes
Fixed Size     2022600 bytes
Variable Size   188744504 bytes
Database Buffers  402653184 bytes
Redo Buffers     6365184 bytes
Database mounted.
Database opened.
SQL> alter system register;

System altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

[oracle@EPRHEL6 admin]$ sqlplus system/password@ORCL2

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Dec 27 00:04:18 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options


It seems there is no problem with the instance itself; SQL*Plus connects to the ORCL2 instance just fine. The problem must be in the way srvctl communicates with the instance.

[oracle@EPRHEL6 admin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....L1.inst application    ONLINE    ONLINE    eprhel5     
ora....L2.inst application    ONLINE    UNKNOWN   eprhel6     
ora.ORCL.db    application    ONLINE    ONLINE    eprhel5     
ora....SM1.asm application    ONLINE    ONLINE    eprhel5     
ora....L5.lsnr application    ONLINE    ONLINE    eprhel5     
ora....el5.gsd application    ONLINE    ONLINE    eprhel5     
ora....el5.ons application    ONLINE    ONLINE    eprhel5     
ora....el5.vip application    ONLINE    ONLINE    eprhel5     
ora....SM2.asm application    ONLINE    ONLINE    eprhel6     
ora....L5.lsnr application    OFFLINE   OFFLINE               
ora....L6.lsnr application    ONLINE    ONLINE    eprhel6     
ora....el6.gsd application    ONLINE    ONLINE    eprhel6     
ora....el6.ons application    ONLINE    ONLINE    eprhel6     
ora....el6.vip application    ONLINE    ONLINE    eprhel6     

[oracle@EPRHEL6 admin]$ srvctl start listener -n EPRHEL6

[oracle@EPRHEL6 admin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....L1.inst application    ONLINE    ONLINE    eprhel5     
ora....L2.inst application    ONLINE    UNKNOWN   eprhel6     
ora.ORCL.db    application    ONLINE    ONLINE    eprhel5     
ora....SM1.asm application    ONLINE    ONLINE    eprhel5     
ora....L5.lsnr application    ONLINE    ONLINE    eprhel5     
ora....el5.gsd application    ONLINE    ONLINE    eprhel5     
ora....el5.ons application    ONLINE    ONLINE    eprhel5     
ora....el5.vip application    ONLINE    ONLINE    eprhel5     
ora....SM2.asm application    ONLINE    ONLINE    eprhel6     
ora....L5.lsnr application    OFFLINE   OFFLINE               
ora....L6.lsnr application    ONLINE    ONLINE    eprhel6     
ora....el6.gsd application    ONLINE    ONLINE    eprhel6     
ora....el6.ons application    ONLINE    ONLINE    eprhel6     
ora....el6.vip application    ONLINE    ONLINE    eprhel6  

[oracle@EPRHEL6 admin]$ sqlplus system/password@ORCL1

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Dec 27 00:04:35 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> show parameter listener;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string
remote_listener                      string      LISTENERS_ORCL
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options


I think there is a problem with the listener configuration or the listener resource itself, but everything seems fine except that OFFLINE resource. After a bit of googling, I found a solution pointing at the listener configuration. I decided to recreate the listeners with netca: first delete the listener named LISTENER from both the ASM and DB homes using netca, then recreate it using only the DB home. Maybe this resolves the problem.
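For reference, the REMOTE_LISTENER value LISTENERS_ORCL shown above is resolved through tnsnames.ora in the database home. A typical entry for a two-node cluster like this would look roughly as follows (the VIP hostnames are assumptions based on the node names):

```
LISTENERS_ORCL =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = eprhel5-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = eprhel6-vip)(PORT = 1521))
  )
```

If this alias is missing or wrong in one of the homes, instances cannot cross-register with the remote listeners.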

My action plan: first stop all ASM and database instances, then manually remove that very confusing OFFLINE listener resource, remove all listener configuration from the cluster with netca, and recreate it using the DB home. Here we go.
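Summarized as commands, the plan looks like this (the netca part is interactive, so it appears only as comments):

```
# stop the database and both ASM instances
srvctl stop database -d orcl
srvctl stop asm -n EPRHEL5
srvctl stop asm -n EPRHEL6

# remove the stale OFFLINE listener resource from the OCR
crs_unregister ora.eprhel6.LISTENER_EPRHEL5.lsnr

# with netca (GUI): delete LISTENER from the ASM and DB homes,
# then recreate it from the DB home only; finally start ASM and
# the database back up with srvctl
```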

[oracle@EPRHEL6 db]$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 27-DEC-2010 00:22:16

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_EPRHEL6
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                27-DEC-2010 00:02:31
Uptime                    0 days 0 hr. 19 min. 44 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/product/asm/network/admin/listener.ora
Listener Log File         /oracle/product/asm/network/log/listener_eprhel6.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.28.4.226)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.28.4.246)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status BLOCKED, has 1 handler(s) for this service...
Service "+ASM_XPT" has 1 instance(s).
Instance "+ASM2", status BLOCKED, has 1 handler(s) for this service...
Service "ORCL" has 2 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Instance "ORCL2", status READY, has 2 handler(s) for this service...
Service "ORCLXDB" has 2 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Instance "ORCL2", status READY, has 1 handler(s) for this service...
Service "ORCL_XPT" has 2 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Instance "ORCL2", status READY, has 2 handler(s) for this service...
The command completed successfully

[oracle@EPRHEL6 db]$ srvctl stop database -d orcl
[oracle@EPRHEL6 db]$ srvctl stop asm -n EPRHEL5
[oracle@EPRHEL6 db]$ srvctl stop asm -n EPRHEL6
[oracle@EPRHEL6 db]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....L1.inst application    OFFLINE   OFFLINE               
ora....L2.inst application    OFFLINE   OFFLINE               
ora.ORCL.db    application    OFFLINE   OFFLINE               
ora....SM1.asm application    OFFLINE   OFFLINE               
ora....el5.gsd application    ONLINE    ONLINE    eprhel5     
ora....el5.ons application    ONLINE    ONLINE    eprhel5     
ora....el5.vip application    ONLINE    ONLINE    eprhel5     
ora....SM2.asm application    OFFLINE   OFFLINE               
ora....L5.lsnr application    OFFLINE   OFFLINE               
ora....el6.gsd application    ONLINE    ONLINE    eprhel6     
ora....el6.ons application    ONLINE    ONLINE    eprhel6     
ora....el6.vip application    ONLINE    ONLINE    eprhel6     

[oracle@EPRHEL6 db]$ crs_getperm ora.eprhel6.LISTENER_EPRHEL5.lsnr
Name: ora.eprhel6.LISTENER_EPRHEL5.lsnr
owner:oracle:rwx,pgrp:dba:rwx,other::r--,
[oracle@EPRHEL6 db]$ crs_unregister ora.eprhel6.LISTENER_EPRHEL5.lsnr
[oracle@EPRHEL6 db]$ crs_profile -delete ora.eprhel6.LISTENER_EPRHEL5.lsnr
CRS-0170: The resource 'ora.eprhel6.LISTENER_EPRHEL5.lsnr' doesn't exist.

The crs_unregister call had apparently already removed the profile as well, hence the CRS-0170. After deleting and recreating the listeners with netca and starting ASM and the database back up, everything looks fine:
[oracle@EPRHEL6 db]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....L1.inst application    ONLINE    ONLINE    eprhel5     
ora....L2.inst application    ONLINE    ONLINE    eprhel6     
ora.ORCL.db    application    ONLINE    ONLINE    eprhel5     
ora....SM1.asm application    ONLINE    ONLINE    eprhel5     
ora....L5.lsnr application    ONLINE    ONLINE    eprhel5     
ora....el5.gsd application    ONLINE    ONLINE    eprhel5     
ora....el5.ons application    ONLINE    ONLINE    eprhel5     
ora....el5.vip application    ONLINE    ONLINE    eprhel5     
ora....SM2.asm application    ONLINE    ONLINE    eprhel6     
ora....L6.lsnr application    ONLINE    ONLINE    eprhel6     
ora....el6.gsd application    ONLINE    ONLINE    eprhel6     
ora....el6.ons application    ONLINE    ONLINE    eprhel6     
ora....el6.vip application    ONLINE    ONLINE    eprhel6     
[oracle@EPRHEL6 db]$ srvctl status database -d orcl
Instance ORCL1 is running on node eprhel5
Instance ORCL2 is running on node eprhel6
[oracle@EPRHEL6 db]$ 


It seems the problem is solved.

Tuesday, December 28, 2010

Relocating CRS Resource

I have installed a one-node RAC 10gR2 on RHEL 5.5 for test purposes (my 10gR2 RAC on RHEL 5.5 VMware installation notes). After adding the second node to the cluster successfully, I realized that the new node's VIP resource was running on the first node. I had seen this problem before on a Solaris system but never had the time to write about it.


[root@EPRHEL6]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[root@EPRHEL6]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....L1.inst application    ONLINE    ONLINE    eprhel5     
ora.ORCL.db    application    ONLINE    ONLINE    eprhel5     
ora....SM1.asm application    ONLINE    ONLINE    eprhel5     
ora....L5.lsnr application    ONLINE    ONLINE    eprhel5     
ora....el5.gsd application    ONLINE    ONLINE    eprhel5     
ora....el5.ons application    ONLINE    ONLINE    eprhel5     
ora....el5.vip application    ONLINE    ONLINE    eprhel5     
ora....el6.gsd application    ONLINE    ONLINE    eprhel6     
ora....el6.ons application    ONLINE    ONLINE    eprhel6     
ora....el6.vip application    ONLINE    ONLINE    eprhel5     

[root@EPRHEL6]# ping eprhel6-vip
PING eprhel6-vip (172.28.4.226) 56(84) bytes of data.
64 bytes from eprhel6-vip (172.28.4.226): icmp_seq=1 ttl=64 time=2.28 ms
64 bytes from eprhel6-vip (172.28.4.226): icmp_seq=2 ttl=64 time=1.03 ms
64 bytes from eprhel6-vip (172.28.4.226): icmp_seq=3 ttl=64 time=0.131 ms

[root@EPRHEL6]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:0C:29:DE:D8:FD  
inet addr:172.28.4.246  Bcast:172.28.4.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fede:d8fd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:36798 errors:0 dropped:0 overruns:0 frame:0
TX packets:13478 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:21057458 (20.0 MiB)  TX bytes:10660215 (10.1 MiB)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:DE:D8:07  
BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

[root@EPRHEL5]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:B7:92:45  
inet addr:172.28.4.245  Bcast:172.28.4.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:feb7:9245/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:9074707 errors:0 dropped:0 overruns:0 frame:0
TX packets:1212938 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:1173926429 (1.0 GiB)  TX bytes:1041963477 (993.6 MiB)

[root@EPRHEL5]# ifconfig eth0:1
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:B7:92:45  
inet addr:172.28.4.225  Bcast:172.28.4.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

[root@EPRHEL5]# ifconfig eth0:2
eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:B7:92:45  
inet addr:172.28.4.226  Bcast:172.28.4.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1


I suppose this is due to the network settings of the newly added node: somehow CRS could not assign the VIP address to its NIC and brought the resource up on the first node instead. crs_relocate may fix this.


[root@EPRHEL5]# crs_relocate ora.eprhel6.vip
Attempting to stop `ora.eprhel6.vip` on member `eprhel5`
Stop of `ora.eprhel6.vip` on member `eprhel5` succeeded.
Attempting to start `ora.eprhel6.vip` on member `eprhel6`
Start of `ora.eprhel6.vip` on member `eprhel6` succeeded.
[root@EPRHEL5]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....L1.inst application    ONLINE    ONLINE    eprhel5     
ora.ORCL.db    application    ONLINE    ONLINE    eprhel5     
ora....SM1.asm application    ONLINE    ONLINE    eprhel5     
ora....L5.lsnr application    ONLINE    ONLINE    eprhel5     
ora....el5.gsd application    ONLINE    ONLINE    eprhel5     
ora....el5.ons application    ONLINE    ONLINE    eprhel5     
ora....el5.vip application    ONLINE    ONLINE    eprhel5     
ora....el6.gsd application    ONLINE    ONLINE    eprhel6     
ora....el6.ons application    ONLINE    ONLINE    eprhel6     
ora....el6.vip application    ONLINE    ONLINE    eprhel6  

Now ifconfig on my new node shows the VIP address information.

[root@EPRHEL6 network-scripts]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:0C:29:DE:D8:FD  
inet addr:172.28.4.246  Bcast:172.28.4.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fede:d8fd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:36798 errors:0 dropped:0 overruns:0 frame:0
TX packets:13478 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:21057458 (20.0 MiB)  TX bytes:10660215 (10.1 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:DE:D8:FD  
inet addr:172.28.4.226  Bcast:172.28.4.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1




Saturday, November 13, 2010

Basic Team Coding Extension for Oracle

Our development group is complaining about overwriting each other's code by recompiling a PL/SQL program unit that another developer is actively working on. TOAD has built-in Team Coding functionality, but there are developers in the department who do not use TOAD.

I developed a tiny Team Coding feature that works inside the database using a DDL trigger. With this BEFORE CREATE trigger, developers are not able to alter any program unit without first checking it out. You can find a small example below.

Until you declare the schema in which Team Coding should be active, everything works as usual.



connect SYSTEM/***@ORCL11G

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--FUNCTION FNC_TEST compiled



After declaring the schema in which you want Team Coding enabled:



exec PCK_TCODE_CONTROL.set_param('SCHEMA_NAME','SYSTEM','INSERT');
--anonymous block completed

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--ORA-20501: You are not allowed to compile the code! 
--           First PCK_TCODE_CONTROL.CHECKOUT the unit.

exec PCK_TCODE_CONTROL.checkout('FNC_TEST','SYSTEM');
--anonymous block completed

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;
--FUNCTION FNC_TEST compiled

connect u_epeker/***@ORCL11G

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--ORA-20500: Unit Locked by another user! 
--           Check PCK_TCODE_CONTROL.CHECK_AVAILIBILITY function

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
/*
"PCK_TCODE_CONTROL.CHECK_AVAIL('FNC_TEST','SYSTEM')"
"TCODE: SYSTEM.FNC_TEST is checked out by SYSTEM on 13.11.2010 19:26:30"
*/

connect SYSTEM/***@ORCL11G

exec PCK_TCODE_CONTROL.checkin('FNC_TEST','SYSTEM');
--anonymous block completed

connect U_EPEKER/***@ORCL11G

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
--null

exec PCK_TCODE_CONTROL.checkout('FNC_TEST','SYSTEM');
--anonymous block completed

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
/*
"PCK_TCODE_CONTROL.CHECK_AVAIL('FNC_TEST','SYSTEM')"
"TCODE: SYSTEM.FNC_TEST is checked out by U_EPEKER on 13.11.2010 19:33:53"
*/

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--FUNCTION FNC_TEST compiled



As you can see, developers can no longer overwrite each other's code while this functionality is in place. You can add it to your database by running the following script. This small script does not keep version history of the program units; anyone can build version-history functionality on top of this small script and its repository, or maybe I will handle version history in a future release.
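For readers curious about the mechanism, here is a minimal sketch of how such a BEFORE CREATE trigger could be written. The TCODE_CHECKOUTS table and its columns are illustrative assumptions, not the actual PCK_TCODE_CONTROL implementation in the linked script, and the real version also checks whether Team Coding is enabled for the schema at all:

```sql
-- Sketch only: enforce checkout before recompiling PL/SQL units.
create or replace trigger trg_tcode_before_create
before create on database
declare
  v_locked_by varchar2(30);
begin
  if ora_dict_obj_type in ('FUNCTION','PROCEDURE','PACKAGE',
                           'PACKAGE BODY','TRIGGER')
  then
    -- who, if anyone, has this unit checked out? (hypothetical table)
    select max(checked_out_by)
      into v_locked_by
      from tcode_checkouts
     where owner = ora_dict_obj_owner
       and unit_name = ora_dict_obj_name;

    if v_locked_by is null then
      raise_application_error(-20501,
        'You are not allowed to compile the code! First checkout the unit.');
    elsif v_locked_by <> ora_login_user then
      raise_application_error(-20500,
        'Unit locked by another user!');
    end if;
  end if;
end;
/
```

The ORA_DICT_OBJ_* and ORA_LOGIN_USER event attribute functions identify the unit being compiled and the compiling user inside the trigger.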

You can find the script here


Thursday, October 28, 2010

Oracle SQL Fundamentals Course Notes

I was in an Oracle SQL course for the last 4 days. The course was run like a hands-on lab, so I'd like to share my course notes on Oracle SQL fundamentals.

Links:
Course Notes
Course Details

Monday, May 31, 2010

Duplicate Database with RMAN is as easy as it should be

Duplicating a database with RMAN saves you from the ordinary routine of hot backup, restore, recover, and open for the auxiliary database.



C:\Users\dtepeker> set ORACLE_SID=ORCL2
C:\Users\dtepeker> sqlplus "/ as sysdba"

SQL> shutdown abort;
ORACLE instance shut down.
SQL>
SQL>
SQL> create pfile='C:\oracle\product\initORCL2.ora' from spfile;

File created.

SQL>

C:\Users\dtepeker> more C:\oracle\product\initORCL2.ora
orcl2.__db_cache_size=369098752
orcl2.__java_pool_size=4194304
orcl2.__large_pool_size=4194304
orcl2.__shared_pool_size=142606336
orcl2.__streams_pool_size=8388608
*.audit_file_dest='C:\oracle\product\admin\ORCL2\adump'
*.background_dump_dest='C:\oracle\product\admin\ORCL2\bdump'
*.compatible='10.2.0.3.0'
*.control_files='C:\oracle\product\oradata\ORCL2\control01.ctl'
*.core_dump_dest='C:\oracle\product\admin\ORCL2\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='ORCL2'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCL2XDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=268435456
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=536870912
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\admin\ORCL2\udump'

*.db_file_name_convert=(C:\oracle\product\oradata\ORCL,C:\oracle\product\oradata\ORCL2)
*.log_file_name_convert=(C:\oracle\product\oradata\ORCL,C:\oracle\product\oradata\ORCL2)

SQL> create spfile from pfile='C:\oracle\product\initORCL2.ora';

File created.

SQL>
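One step the transcript does not show: before RMAN can connect with `auxiliary /`, the ORCL2 instance has to be started in NOMOUNT using the spfile created above (and on Windows the ORCL2 service must already exist, e.g. created with oradim):

```sql
-- with ORACLE_SID=ORCL2, connect "/ as sysdba" and start the
-- auxiliary instance so RMAN can attach to it
startup nomount;
```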

C:\Users\dtepeker> more C:\oracle\product\db10g\network\admin\tnsnames.ora
# tnsnames.ora Network Configuration File: C:\oracle\product\db10g\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

### duplicate/destination db
ORCL2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = dtl1optim1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ORCL2)
)
)
### target/source db
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dtl1optim1)(PORT = 1521))
(CONNECT_DATA =
(SID = ORCL)
)
)

EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
(CONNECT_DATA =
(PRESENTATION = RO)
(SID = PLSExtProc)
)
)

C:\oracle\product\oradata\ORCL2>del *.dbf

C:\oracle\product\oradata\ORCL2>dir
Volume in drive C has no label.
Volume Serial Number is 3A6A-99E9

Directory of C:\oracle\product\oradata\ORCL2

05/31/2010 12:06 PM .
05/31/2010 12:06 PM ..
05/31/2010 11:58 AM 7,389,184 CONTROL01.CTL
05/27/2010 04:58 PM 1,317 CONTROLFILE.CTL
05/27/2010 04:52 PM 854 initORCL2.ora
04/20/2010 03:27 PM ORCL2
05/31/2010 11:54 AM 52,429,312 REDO01.LOG
05/31/2010 11:54 AM 52,429,312 REDO02.LOG
05/31/2010 11:54 AM 52,429,312 REDO03.LOG
6 File(s) 164,679,291 bytes
3 Dir(s) 92,947,480,576 bytes free

C:\oracle\product\oradata\ORCL2>del *.log

C:\oracle\product\oradata\ORCL2>del *.ctl

C:\oracle\product\oradata\ORCL2>dir
Volume in drive C has no label.
Volume Serial Number is 3A6A-99E9

Directory of C:\oracle\product\oradata\ORCL2

05/31/2010 12:06 PM .
05/31/2010 12:06 PM ..
05/27/2010 04:52 PM 854 initORCL2.ora
04/20/2010 03:27 PM ORCL2
1 File(s) 854 bytes
3 Dir(s) 93,112,172,544 bytes free

C:\oracle\product\oradata\ORCL2>




C:\oracle\product\oradata\ORCL2>rman target sys/password@ORCL auxiliary /

Recovery Manager: Release 10.2.0.4.0 - Production on Mon May 31 12:07:32 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ORCL (DBID=1235781100)
connected to auxiliary database: ORCL2 (not mounted)

RMAN> duplicate target database to ORCL2;

Starting Duplicate Db at 31-MAY-10
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=155 devtype=DISK

contents of Memory Script:
{
set until scn 23097370;
set newname for datafile 1 to
"C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSTEM01.DBF";
set newname for datafile 2 to
"C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF";
set newname for datafile 3 to
"C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF";
set newname for datafile 4 to
"C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF";
set newname for datafile 5 to
"C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF";
restore
check readonly
clone database
;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 31-MAY-10
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: restoring datafile 00001
input datafile copy recid=20 stamp=720116353 filename=C:\ORACLE\PRODUCT\FRA\ORCL\ORCL\DATAFILE\O1_MF_SYSTEM_5ZWW6PW1_.DBF
destination for restore of datafile 00001: C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSTEM01.DBF
channel ORA_AUX_DISK_1: copied datafile copy of datafile 00001
output filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSTEM01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002
input datafile copy recid=21 stamp=720116353 filename=C:\ORACLE\PRODUCT\FRA\ORCL\ORCL\DATAFILE\O1_MF_UNDOTBS1_5ZWW4TN9_.DBF
destination for restore of datafile 00002: C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF
channel ORA_AUX_DISK_1: copied datafile copy of datafile 00002
output filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003
input datafile copy recid=22 stamp=720116353 filename=C:\ORACLE\PRODUCT\FRA\ORCL\ORCL\DATAFILE\O1_MF_SYSAUX_5ZWW5XQX_.DBF
destination for restore of datafile 00003: C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF
channel ORA_AUX_DISK_1: copied datafile copy of datafile 00003
output filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00004
input datafile copy recid=19 stamp=720116353 filename=C:\ORACLE\PRODUCT\FRA\ORCL\ORCL\DATAFILE\O1_MF_USERS_5ZWW7937_.DBF

destination for restore of datafile 00004: C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF
channel ORA_AUX_DISK_1: copied datafile copy of datafile 00004
output filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF
channel ORA_AUX_DISK_1: restoring datafile 00005
input datafile copy recid=18 stamp=720116353 filename=C:\ORACLE\PRODUCT\FRA\ORCL\ORCL\DATAFILE\O1_MF_TS_STREA_5ZWW763V_.DBF
destination for restore of datafile 00005: C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF
channel ORA_AUX_DISK_1: copied datafile copy of datafile 00005
output filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF
Finished restore at 31-MAY-10
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "ORCL2" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( 'C:\ORACLE\PRODUCT\ORADATA\ORCL2\REDO01.LOG' ) SIZE 50 M REUSE,
GROUP 2 ( 'C:\ORACLE\PRODUCT\ORADATA\ORCL2\REDO02.LOG' ) SIZE 50 M REUSE,
GROUP 3 ( 'C:\ORACLE\PRODUCT\ORADATA\ORCL2\REDO03.LOG' ) SIZE 50 M REUSE
DATAFILE
'C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSTEM01.DBF'
CHARACTER SET WE8ISO8859P9


contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

released channel: ORA_AUX_DISK_1
datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=720446949 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=720446950 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF
datafile 4 switched to datafile copy
input datafile copy recid=3 stamp=720446950 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy recid=4 stamp=720446950 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF

contents of Memory Script:
{
set until scn 23097370;
recover
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 31-MAY-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=155 devtype=DISK

starting media recovery

archive log thread 1 sequence 416 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00416_0708000366.001
archive log thread 1 sequence 417 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00417_0708000366.001
archive log thread 1 sequence 418 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00418_0708000366.001
archive log thread 1 sequence 419 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00419_0708000366.001
archive log thread 1 sequence 420 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00420_0708000366.001
archive log thread 1 sequence 421 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00421_0708000366.001
archive log thread 1 sequence 422 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00422_0708000366.001
archive log thread 1 sequence 423 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00423_0708000366.001
archive log thread 1 sequence 424 is already on disk as file C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00424_0708000366.001
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00416_0708000366.001 thread=1 sequence=416
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00417_0708000366.001 thread=1 sequence=417
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00418_0708000366.001 thread=1 sequence=418
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00419_0708000366.001 thread=1 sequence=419
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00420_0708000366.001 thread=1 sequence=420
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00421_0708000366.001 thread=1 sequence=421
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00422_0708000366.001 thread=1 sequence=422
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00423_0708000366.001 thread=1 sequence=423
archive log filename=C:\ORACLE\PRODUCT\ORADATA\ORCL\ARCHIVE\ARC00424_0708000366.001 thread=1 sequence=424
media recovery complete, elapsed time: 00:00:29
Finished recover at 31-MAY-10

contents of Memory Script:
{
shutdown clone;
startup clone nomount ;
}
executing Memory Script

database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 536870912 bytes

Fixed Size 1297724 bytes
Variable Size 159384260 bytes
Database Buffers 369098752 bytes
Redo Buffers 7090176 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "ORCL2" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( 'C:\ORACLE\PRODUCT\ORADATA\ORCL2\REDO01.LOG' ) SIZE 50 M REUSE,
GROUP 2 ( 'C:\ORACLE\PRODUCT\ORADATA\ORCL2\REDO02.LOG' ) SIZE 50 M REUSE,
GROUP 3 ( 'C:\ORACLE\PRODUCT\ORADATA\ORCL2\REDO03.LOG' ) SIZE 50 M REUSE
DATAFILE
'C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSTEM01.DBF'
CHARACTER SET WE8ISO8859P9


contents of Memory Script:
{
set newname for tempfile 1 to
"C:\ORACLE\PRODUCT\ORADATA\ORCL2\TEMP01.DBF";
switch clone tempfile all;
catalog clone datafilecopy "C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF";
catalog clone datafilecopy "C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF";
catalog clone datafilecopy "C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF";
catalog clone datafilecopy "C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF";
switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed temporary file 1 to C:\ORACLE\PRODUCT\ORADATA\ORCL2\TEMP01.DBF in control file

cataloged datafile copy
datafile copy filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF recid=1 stamp=720446995

cataloged datafile copy
datafile copy filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF recid=2 stamp=720446995

cataloged datafile copy
datafile copy filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF recid=3 stamp=720446995

cataloged datafile copy
datafile copy filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF recid=4 stamp=720446996

datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=720446995 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\UNDOTBS01.DBF
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=720446995 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\SYSAUX01.DBF
datafile 4 switched to datafile copy
input datafile copy recid=3 stamp=720446995 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy recid=4 stamp=720446996 filename=C:\ORACLE\PRODUCT\ORADATA\ORCL2\TS_STREAMS_01.DBF

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 31-MAY-10

RMAN>


Related:
Oracle Database Backup and Recovery Advanced Users Guide
Oracle-Base.com



Sunday, May 2, 2010

Oracle Enterprise Manager gets bigger with Sun Opscenter

I was on the Sun Oracle event here in Istanbul, Turkey on 28.04.2010. The main course was about engaging Oracle with Sun.

The most interesting part for me was the future of Oracle Enterprise Manager. There were a lot of clue for it is going to get bigger, not only for database and application server management but also in virtual machine (both on Solaris and Enterprise linux) and operating system management (Solaris, Redhat Linux and Suse Linux).

Oracle has already begun to integrate Sun Ops Center into Oracle Enterprise Manager. The main idea is single-handed management of a datacenter with Oracle Enterprise Manager.


Here are some links on this topic.

http://www.oracle.com/oms/enterprisemanager11g/webcast-067871.html
http://www.oracle.com/us/products/enterprise-manager/index.html

Tuesday, April 27, 2010

Google Analytics works better than I expected

I installed the Google Analytics site script on my blog two weeks ago, and I received more than I imagined. Here are some of the reports supplied by the dashboard today.



At first sight these are the expected results: visits, page views, pages/visit ratio and so on. On the visitors tab of the dashboard you can obtain lots of information about the users' locations and technical specs.





You can also investigate which search engines hit the site, and with which keywords. Another nice feature is that you can see which contents (in this case, which posts) are hit most by the readers, and with which keywords :)







As can be seen from the results, Oracle Grid Custom Reports is the most visited post, and the homepage follows it very closely. It seems the Oracle Enterprise Manager Grid Control posts are the subjects readers are most interested in. Streams follows, together with the SQL Server Logical Consistency Based IO Error post, which is a real pain for admins.

Unfortunately there is no way (or I couldn't find one) to share the reports anonymously like Google Notes or Google Docs. I hope this will be a new feature in a future version.

Some links:
http://www.google.com/analytics
An interesting post from Google on the Google Analytics blog

Friday, April 2, 2010

Schema replication with Oracle Streams

I tested table replication with Oracle Streams before, and now I have used schema replication with an OLTP tester schema. The source database is 10.2.0.4 on Windows, and the destination is an 11.1.0.7 database on the same host running Windows Server 2008. I faced a couple of problems, listed at the end of this post. The purpose of this post is to set up a schema replication using Streams, then start the OLTP test and monitor the Streams processes. Piece of cake!

Let's start the schema replication with MAINTAIN_SCHEMAS, which is the easiest way; Streams does the rest (creating the queues, capture, propagation and apply processes, etc.). But before running the Streams packages, some preconfiguration of the database has to be done.

- Enabling Archivelog
- Preconfiguring Oracle Streams
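For reference, the preconfiguration can be sketched roughly as follows. This is a minimal sketch, not the exact commands I ran; the Streams administrator name U_STREAMS_ADM and the ORCL11G link match this setup, but the password and tablespace are placeholders.

```sql
-- On the source, as SYS: make sure archivelog mode is enabled
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- On both databases: create a Streams administrator
CREATE USER u_streams_adm IDENTIFIED BY password
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT DBA TO u_streams_adm;
BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'U_STREAMS_ADM');
END;
/

-- On the source, as the Streams administrator:
-- a database link pointing at the destination
CREATE DATABASE LINK ORCL11G CONNECT TO u_streams_adm
  IDENTIFIED BY password USING 'ORCL11G';
```

MAINTAIN_SCHEMAS also needs the directory objects (DATA_PUMP_DIR here) to exist on both sides for the Data Pump instantiation.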

And after all that, we should be ready for the Streams replication.
BEGIN
DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
schema_names => 'U_EPEKER',
source_directory_object => 'DATA_PUMP_DIR',
destination_directory_object => 'DATA_PUMP_DIR',
source_database => 'ORCL',
destination_database => 'ORCL11G',
perform_actions => TRUE,
capture_name => 'CAPTURE_SCHEMA_U_EPEKER',
--capture_queue_table => NULL,
capture_queue_name => 'QUEUE_CAPT_SCH_U_EPEKER',
capture_queue_user => 'U_STREAMS_ADM',
propagation_name => 'PROP_SCH_U_EPEKER',
apply_name => 'APPLY_SCH_U_EPEKER',
--apply_queue_table => NULL,
apply_queue_name => 'QUEUE_APP_SCH_U_EPEKER',
apply_queue_user => 'U_STREAMS_ADM',
bi_directional => FALSE,
include_ddl => TRUE,
instantiation => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK);
END;
/

In case of any error, the following DBA views can be queried to see which script failed; in a hopeless situation you can recover the operation, or remove the Streams configuration and re-run MAINTAIN_SCHEMAS after fixing the misconfiguration.

select * from DBA_RECOVERABLE_SCRIPT_BLOCKS;
select * from DBA_RECOVERABLE_SCRIPT_ERRORS;
execute DBMS_STREAMS_ADM.RECOVER_OPERATION('3A53A54E70764958BA0AFA68DAC0C7F0','PURGE');
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
-- and on the remote server
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION@ORCL11G;


Check whether the Streams processes were all set up correctly by the MAINTAIN_SCHEMAS procedure.

select capture_name,
queue_name,
rule_set_name,
source_database,
status
from dba_capture;
/*
CAPTURE_NAME QUEUE_NAME RULE_SET_NAME SOURCE_DATABASE STATUS
------------ ---------- ------------- --------------- ------
CAPTURE_SCHEMA_U_EPEKER QUEUE_CAPT_SCH_U_EPEKER RULESET$_12 ORCL ENABLED
*/


select propagation_name,
source_queue_name,
destination_queue_name,
destination_dblink,
rule_set_name,
status
from dba_propagation;

/*
PROPAGATION_NAME SOURCE_QUEUE_NAME DESTINATION_QUEUE_NAME DESTINATION_DBLINK RULE_SET_NAME
--------------- ---------------- --------------------- ----------------- -------------
PROP_SCH_U_EPEKER QUEUE_CAPT_SCH_U_EPEKER QUEUE_APP_SCH_U_EPEKER ORCL11G RULESET$_9
*/


select sid,
serial#,
capture_name,
state
from v$streams_capture;

/*
SID SERIAL# CAPTURE_NAME STATE
--- ------ ------------ -----
128 8 CAPTURE_SCHEMA_U_EPEKER CAPTURING CHANGES
*/


Everything seems operational on the capture side. I need to check whether the apply process was also created and runs smoothly.

select apply_name,
queue_name,
apply_captured,
rule_set_name,
status
from dba_apply@ORCL11G;

/*
APPLY_NAME QUEUE_NAME APPLY_CAPTURED RULE_SET_NAME STATUS
--------- ---------- -------------- ------------- ------
APPLY_SCH_U_EPEKER QUEUE_APP_SCH_U_EPEKER YES RULESET$_6 ENABLED
*/


select sid,
serial#,
state,
apply#,
apply_name,
total_applied,
total_errors
from v$streams_apply_coordinator@ORCL11G;

/*
SID SERIAL# STATE APPLY# APPLY_NAME
--- ------ ----- ----- ----------
116 19550 IDLE 1 APPLY_SCH_U_EPEKER
*/


I created this schema before; it includes the OLTP simulator procedures. I am planning to share this schema on this blog later. Anyway, here is the script to create the schema: OLTP Simulator Script

Now I will use the P_CREATE_REPOS procedure to populate the schema, and I hope Streams is going to replicate the transactions. This procedure will create 5 groups with 10 users in each group, and then 1 page for each user.

exec U_EPEKER.P_CREATE_REPOS(5,10,1);

After executing this procedure there should be 5 groups, 10 users in each group, and 50 pages in total (1 page per user).

select 'source ' || count(*) as user_count from u_epeker.t_user
union all
select 'destination ' || count(*) as user_count from u_epeker.t_user@ORCL11G;

/*
USER_COUNT
source 50
destination 50
*/


select 'source ' || count(*) as group_count from u_epeker.t_group
union all
select 'destination ' || count(*) as group_count from u_epeker.t_group@ORCL11G;

/*
GROUP_COUNT
source 5
destination 5
*/


select 'source ' || count(*) as page_count from u_epeker.t_pages
union all
select 'destination ' || count(*) as page_count from u_epeker.t_pages@ORCL11G;

/*
PAGE_COUNT
source 50
destination 50
*/


It seems everything is going fine. Now I will simulate OLTP activity in the source database and monitor the Streams processes. For this I will use the P_CREATE_JOBS procedure of the OLTP simulator schema, which implicitly calls the P_SIMULATE_REQUEST procedure sequentially.

-- with U_EPEKER user
exec U_EPEKER.P_CREATE_JOBS(20,1,'SIMULATE_JOBS',1);


With these parameters the procedure should create 20 jobs, all with 1-second intervals, which means we are filling up T_VISIT_STATS_DETAILED. And yes, it works; our replication should also work.

select count(*) from U_EPEKER.T_VISIT_STATS_DETAILED
union all
select count(*) from U_EPEKER.T_VISIT_STATS_DETAILED@ORCL11G;

/*
COUNT(*)
-------
385
376

on the second run

COUNT(*)
-------
385
385
*/


So the destination comes a few seconds behind (the time needed to ship the logs, I think). What about the capture and apply processes? I can see the TOTAL_MESSAGES_CAPTURED and TOTAL_APPLIED values getting higher.

select state, capture_name, total_messages_captured from v$streams_capture;
/*
STATE,CAPTURE_NAME,TOTAL_MESSAGES_CAPTURED
CAPTURING CHANGES,CAPTURE_SCHEMA_U_EPEKER,112817
*/


select state, apply_name, total_applied from v$streams_apply_coordinator@ORCL11G;
/*
STATE,APPLY_NAME,TOTAL_APPLIED
IDLE,APPLY_SCH_U_EPEKER,753
*/


Bonus: Excluding a table from the schema replication. It works!

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES
(
table_name => 'u_epeker.t_visit_stats_detailed',
streams_type => 'capture',
streams_name => 'capture_schema_u_epeker',
queue_name => 'queue_capt_sch_u_epeker',
include_dml => true,
include_ddl => true,
source_database => 'orcl',
inclusion_rule => false --specifies the negative rule set
);
END;
/

exec DBMS_RULE_ADM.DROP_RULE('T_VISIT_STATS_DETAILED50',true);
exec DBMS_RULE_ADM.DROP_RULE('T_VISIT_STATS_DETAILED51',true);


I can drop the simulation jobs and streams configuration for now.

-- with u_epeker
exec U_EPEKER.P_DROP_JOBS;
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
-- and on the remote server
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION@ORCL11G;


I faced some problems while creating this schema replication configuration with Streams.

The weirdest problem was that logs could not be shipped from the source to the destination. When I examined the alert.log file of the source database, I saw lots of ORA-01031: insufficient privileges errors.
After a little search on OTN I solved the problem by recreating the password file from scratch with orapwd. A strange problem, and I still don't know the underlying reason; anyway, it is solved.
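For reference, recreating the password file looks roughly like this (the path, SID and password below are placeholders, not the actual values from this environment):

```shell
# Recreate the password file for the source instance.
# On Windows it lives under %ORACLE_HOME%\database as PWD<SID>.ora;
# on Linux it would be $ORACLE_HOME/dbs/orapw<SID>.
orapwd file=%ORACLE_HOME%\database\PWDorcl.ora password=secret entries=5
```

Afterwards it is worth checking that remote_login_passwordfile is set to EXCLUSIVE, since remote SYSDBA connections (which Streams log shipping relies on) need the password file.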

During instantiation I got an error. As this is a schema replication, don't forget to create the same tablespaces on the destination, or else the user/schema creation fails.

And last, an ORA-00600 problem: ORA-00600: internal error code, arguments: [kwqbmcrcpts101], [], [], [], [], [], [], []. I noticed this problem after I had dropped the whole Streams configuration from both databases; two days later I realized the host had run out of disk space because of lots of dump files under bdump. After a search on OTN, I think I hit a 10.2.0.3 bug that occurs in Streams environments. As this is a test environment, I bounced the database; the problem is solved, as I had already deleted the Streams configuration. the subject on oracle forums

Thursday, April 1, 2010

"PAUSED" state for streams capture process

My test replication somehow got broken and does not apply changes to the target database anymore (seems like a new challenge for me!). The first thing I checked was the Streams capture process, and I saw the weird state below.

select sid,
serial#,
capture_name,
state
from v$streams_capture;

/*
SID SERIAL# CAPTURE_NAME STATE
-- ------- ------------ -----
121 1524 CAPTURE_SCHEMA_U_EPEKER PAUSED FOR FLOW CONTROL
*/


Now, if the capture process is paused, maybe I need to check what is going on with the apply process. But it seems nothing is being received on the apply side, so the problem should be on the capture side.

select sid,
serial#,
state,
apply#,
apply_name,
total_applied,
total_errors
from v$streams_apply_coordinator@ORCL11G;

/*
SID SERIAL# STATE APPLY# APPLY_NAME TOTAL_APPLIED TOTAL_ERRORS
--- ------ ---- ------ ----------- ------------- ------------
110 4208 IDLE 1 APPLY_SCH_U_EPEKER 0 0
*/


I still couldn't make anything of the message "PAUSED FOR FLOW CONTROL". A little search on Google led me, as always, to the Oracle documentation. It seems I need to check the streams_pool_size parameter.

select queue_name, sender_name, num_msgs, unbrowsed_msgs, memory_usage, publisher_state from V$BUFFERED_PUBLISHERS;

/*
QUEUE_NAME SENDER_NAME NUM_MSGS UNBROWSED_MSGS MEMORY_USAGE PUBLISHER_STATE
--------- ----------- -------- -------------- ----------- ---------------
QUEUE_CAPT_SCH_U_EPEKER CAPTURE_SCHEMA_U_EPEKER 15057 15057 87 IN FLOW CONTROL: TOO MANY UNBROWSED MESSAGES
*/


If memory_usage stands for MB, which shows 87, then there could be a problem. The following investigation shows that ASMM gives only 56 MB to the streams pool, so there seems to be a lack of memory. I decided to increase streams_pool_size to 200 MB.

select name, value from v$parameter where name = 'streams_pool_size';

/*
NAME VALUE
---- -----
streams_pool_size 0
*/

select name, value/1024/1024 as mb from v$parameter where name = 'sga_target';

/*
NAME MB
---- --
sga_target 512
*/


select pool, round(sum(bytes)/1024/1024) as MB from v$sgastat group by pool;
/*
POOL MB
---- --
140
java pool 12
streams pool 56
shared pool 300
large pool 4
*/


alter system set sga_target=1G scope=spfile;
alter system set streams_pool_size=200M scope=spfile;
shutdown immediate;
startup;


As soon as the database opened, the capture process was up again, and the publisher state is now "PUBLISHING MESSAGES".

select sid,
serial#,
capture_name,
state
from v$streams_capture;
/*
SID SERIAL# CAPTURE_NAME STATE
--- ------ ------------ -----
151 1 CAPTURE_SCHEMA_U_EPEKER CAPTURING CHANGES
*/


select queue_name, sender_name, num_msgs, unbrowsed_msgs, memory_usage, publisher_state from V$BUFFERED_PUBLISHERS;
/*
QUEUE_NAME SENDER_NAME NUM_MSGS UNBROWSED_MSGS MEMORY_USAGE PUBLISHER_STATE
---------- ----------- -------- -------------- ------------ ---------------
QUEUE_CAPT_SCH_U_EPEKER CAPTURE_SCHEMA_U_EPEKER 44 0 5 PUBLISHING MESSAGES
*/


Friday, March 26, 2010

Google Docs instead of Google Notes

I used to take notes in Google Notebook, which is not supported anymore and has some constraints (especially on note size).

Google Notebook - Shared Notes

From now on I decided to use Google Docs. Here are my shared documents.

Oracle Documents
UNIX Documents

There are more scripts and documents than I have shared, but I have to reformat them before sharing. I hope to do this in my spare time.

Monday, March 22, 2010

SA200-S10 - Solaris Admin I Course Notes

Last week I completed the 5-day Sun Solaris 10 Admin I (SA200-S10) training.

I have administered RHEL4 and RHEL5 operating systems for years, but Solaris goes far beyond them. Especially ZFS and the filesystem snapshot utilities are pretty effective, as are the service management features.

Here are my course notes

And the course details

It is good that the materials are open and downloadable for free.

Tuesday, February 23, 2010

testing SQL Server 2005 partitioning option

This post is a version of my previous "SQL Server Partitioning Option" presentation, which I wrote for a friend of mine. The presentation covers what partitioning is, why to use table partitions, how to use partitions, and their advantages. All chapters are based on one test case, which consists of partitioning a table and examining the IO performance before and after the partitioning operation.

What is partitioning
Partitioning is basically the grouping of data, by its meaning or value, into different physical structures, which also brings retrieval times down to acceptable values.

While this option can serve archiving needs, it is usually used for faster reporting and query performance.

Why to use partitioning
The area where partitioning is used most is improving full table scan performance. One of the most obvious causes of unavoidable full table scans is accessing more than a certain amount of the table's data. (There are other methods to avoid full table scans or improve their performance, but they are out of the scope of this article, as we are concerned with the partitioning option of Microsoft SQL Server.)

As well as performance considerations, partitioning also helps with archiving large tables, especially log tables that are stored on a daily/monthly or even yearly basis. There are lots of setups that hold the older data of a table (say, older than 6 months) on cheaper SATA disks and on compressed partitions.

How to use partitioning
I used Windows Server 2003 Standard Edition with SQL Server 2005 Enterprise Edition (by the way, SQL Server Standard Edition does not support partitioning). The recommended hardware requirements were 1 GB of RAM, 1 CPU and a minimum of 10 GB of disk space, which is how I configured my virtual machine.

First we need a comparison SQL query; I used the standard built-in SQL Server AdventureWorks database.


select 
cat.name as category_name, 
subcat.name as subcategory_name, 
prd.* 
from 
AdventureWorks.Production.Product prd LEFT JOIN 
AdventureWorks.Production.ProductSubCategory subcat ON
(prd.ProductSubcategoryID=subcat.ProductSubcategoryID) LEFT JOIN 
AdventureWorks.Production.ProductCategory cat ON
cat.ProductCategoryID=subcat.ProductCategoryID
where
cat.name is not null and
subcat.name is not null
order by 
cat.Name,subcat.name


Now I decided to build a structure with four different filegroups, corresponding to the categories, so that one query can hit four of these datafiles simultaneously. To achieve this, I first create four filegroups and add one file to each of them, as follows (this corresponds to creating tablespaces in an Oracle database and adding one datafile to each tablespace).

In the real world I would want to spread the datafiles over four individual drives to increase the disk throughput, but I don't have that kind of test environment here.


ALTER DATABASE AdventureWorks ADD FILEGROUP [FG1]
ALTER DATABASE AdventureWorks ADD FILEGROUP [FG2]
ALTER DATABASE AdventureWorks ADD FILEGROUP [FG3]
ALTER DATABASE AdventureWorks ADD FILEGROUP [FG4]

ALTER DATABASE AdventureWorks
ADD FILE 
(NAME = N'FG1_F1',
FILENAME = N'C:\FG1_F1.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB)
TO FILEGROUP [FG1]

ALTER DATABASE AdventureWorks
ADD FILE 
(NAME = N'FG2_F1',
FILENAME = N'C:\FG2_F1.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB)
TO FILEGROUP [FG2]

ALTER DATABASE AdventureWorks
ADD FILE 
(NAME = N'FG3_F1',
FILENAME = N'C:\FG3_F1.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB)
TO FILEGROUP [FG3]

ALTER DATABASE AdventureWorks
ADD FILE 
(NAME = N'FG4_F1',
FILENAME = N'C:\FG4_F1.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB)
TO FILEGROUP [FG4]


I decided to define the partitioning strategy over the "category" column, which has evenly distributed data (you can check by aggregating the data with GROUP BY and COUNT).

The first thing we need is a partition function, which forms the basis of the partition scheme. Then we create the partition scheme based on the partition function and the desired filegroups. Finally, I create the table on the partition scheme, which means it uses the partition function and all of the filegroups. Note that four boundary values produce five partitions, which is why the scheme below maps five filegroup slots (FG4 is used twice).


-- create the partition function
CREATE PARTITION FUNCTION [f_part_category] (char(20))
AS RANGE LEFT FOR VALUES ('Bikes', 'Components', 'Clothing', 'Accessories');

-- check the partition function
select * from sys.partition_functions

name                function_id   type_desc       
------------------  ------------- --------------- 
f_part_category     65539         RANGE           

(1 row(s) affected)

-- create the partition scheme
CREATE PARTITION SCHEME s_part_category
AS PARTITION f_part_category
TO ([FG1], [FG2], [FG3], [FG4], [FG4]);

-- check the partition scheme
select * from sys.partition_schemes

name                 data_space_id      type  type_desc                  
-------------------- -----------------  ----  -----------------
s_part_category      65601              PS    PARTITION_SCHEME           

(1 row(s) affected)

CREATE TABLE [dbo].[t_product_part](
[category_name] [char](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[subcategory_name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ProductID] [int] NOT NULL,
[Name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[ProductNumber] [nvarchar](25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[MakeFlag] [bit] NOT NULL,
[FinishedGoodsFlag] [bit] NOT NULL,
[Color] [nvarchar](15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[SafetyStockLevel] [smallint] NOT NULL,
[ReorderPoint] [smallint] NOT NULL,
[StandardCost] [money] NOT NULL,
[ListPrice] [money] NOT NULL,
[Size] [nvarchar](5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[SizeUnitMeasureCode] [nchar](3) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[WeightUnitMeasureCode] [nchar](3) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[Weight] [decimal](8, 2) NULL,
[DaysToManufacture] [int] NOT NULL,
[ProductLine] [nchar](2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[Class] [nchar](2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[Style] [nchar](2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ProductSubcategoryID] [int] NULL,
[ProductModelID] [int] NULL,
[SellStartDate] [datetime] NOT NULL,
[SellEndDate] [datetime] NULL,
[DiscontinuedDate] [datetime] NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] [datetime] NOT NULL
) ON s_part_category (category_name)


Now it is time to test the performance. As the table is not heavily loaded, we can check the execution plans of the query before and after partitioning.


insert into t_product_part select * from master.dbo.v_products2

select * from t_product_part
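Before comparing the plans, it may help to confirm that the rows really landed in separate partitions and to measure the IO. A sketch, assuming the table and function names above (the WHERE value is just one of the category boundary values):

```sql
-- Row distribution per partition (partitions 1..5, per the four boundary values)
SELECT $PARTITION.f_part_category(category_name) AS partition_number,
       COUNT(*) AS row_count
FROM dbo.t_product_part
GROUP BY $PARTITION.f_part_category(category_name)
ORDER BY partition_number;

-- Logical/physical reads for a single-category query,
-- which should touch only one partition
SET STATISTICS IO ON;
SELECT * FROM dbo.t_product_part WHERE category_name = 'Bikes';
SET STATISTICS IO OFF;
```

The $PARTITION function returns the partition number the partition function would assign to a given value, so the first query shows how evenly the categories are spread.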

BEFORE PARTITIONING


AFTER PARTITIONING


Conclusion
Effective IO usage by reading more than one datafile simultaneously.
A remarkable increase in IO performance: about 2.3 times faster IO with the sample data (and with only one drive).