Wednesday, October 19, 2011

Oracle Restart hands on

Starting with Oracle Database 11g, a new product (or functionality) called Oracle Restart comes as part of the Grid Infrastructure installation. It seems Oracle decided to use RAC-like commands such as crsctl, crs_stat, and srvctl for managing the processes of single-instance databases as well. This standardization seems handy to me, as I have already gotten used to managing RAC databases day by day.

After upgrading one of the development databases in our data center from 10.2.0.4 to 11.2.0.2, along with its ASM instance, I decided to spend some of my time playing with this new functionality.

As in RAC installations, the status of the services can be investigated with the crs_stat -t command. It is understandable that there are no vip, ons, or gsd resources here, as this is not a RAC database.


[oracle@rhel6]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora...._ASM.dg ora....up.type ONLINE    OFFLINE               
ora....ER.lsnr ora....er.type OFFLINE   OFFLINE               
ora.asm        ora.asm.type   OFFLINE   OFFLINE               
ora.cssd       ora.cssd.type  ONLINE    ONLINE    rhel6   
ora.diskmon    ora....on.type ONLINE    ONLINE    rhel6 

[oracle@rhel6]:/oracle > crs_stat
NAME=ora.DG_DB_ASM.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=OFFLINE

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.asm
TYPE=ora.asm.type
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.cssd
TYPE=ora.cssd.type
TARGET=ONLINE
STATE=ONLINE on rhel6

NAME=ora.diskmon
TYPE=ora.diskmon.type
TARGET=ONLINE
STATE=ONLINE on rhel6 


The CRS processes (it is "HAS" for single instance) are again controlled by crsctl, just as in RAC installations. You can use the check, start, and stop options to manage the processes as usual. A small note: the CRS processes of a RAC installation are not installed for single-instance installations. Instead, there are the HAS processes, where HAS stands for "High Availability Services" and covers the cssd and diskmon processes.


[oracle@rhel6]:/oracle > crsctl check has
CRS-4638: Oracle High Availability Services is online
[oracle@rhel6]:/oracle > crsctl check css
CRS-4529: Cluster Synchronization Services is online
[oracle@rhel6]:/oracle > crsctl check resource ora.cssd

[oracle@rhel6]:/oracle > crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6'
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel6'
CRS-2677: Stop of 'ora.cssd' on 'rhel6' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'rhel6'
CRS-2677: Stop of 'ora.diskmon' on 'rhel6' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[oracle@rhel6]:/oracle > crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.



After my upgrade, Oracle Restart was not able to manage the upgraded database. Using srvctl, I added the database resource to the repository so that I could manage the database with the srvctl command-line tool. One of the nicest options is "-a": supplying the dependent disk groups of the database makes Oracle Restart start ASM and mount the related disk groups before starting up the database.


[oracle@rhel6]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora...._ASM.dg ora....up.type ONLINE    ONLINE    rhel6   
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rhel6   
ora.asm        ora.asm.type   ONLINE    ONLINE    rhel6   
ora.cssd       ora.cssd.type  ONLINE    ONLINE    rhel6   
ora.diskmon    ora....on.type ONLINE    ONLINE    rhel6   

[oracle@rhel6]:/oracle > srvctl add database -h          

Adds a database configuration to be managed by Oracle Restart.

Usage: srvctl add database -d db_unique_name -o oracle_home
  [-m domain_name] 
  [-p spfile] 
  [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] 
  [-s start_options] 
  [-t stop_options] 
  [-n db_name] 
  [-y {AUTOMATIC | MANUAL}] 
  [-a "diskgroup_list"]
-d db_unique_name      Unique name for the database
-o oracle_home         ORACLE_HOME path
-m domain              Domain for database. Must be set if database has DB_DOMAIN set.
-p spfile              Server parameter file path
-r role                Role of the database (primary, physical_standby, logical_standby, snapshot_standby)
-s start_options       Startup options for the database. Examples of startup options are open, mount, or nomount.
-t stop_options        Stop options for the database. Examples of shutdown options are normal, transactional, immediate, or abort.
-n db_name             Database name (DB_NAME), if different from the unique name given by the -d option
-y dbpolicy            Management policy for the database (AUTOMATIC or MANUAL)
-a "diskgroup_list"    Comma separated list of disk groups
-h                     Print usage

[oracle@rhel6]:/oracle > srvctl add database -d ORCLT -o /oracle/orahome1
[oracle@rhel6]:/oracle >
[oracle@rhel6]:/oracle >

[oracle@rhel6]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora...._ASM.dg ora....up.type ONLINE    ONLINE    rhel6   
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rhel6   
ora.asm        ora.asm.type   ONLINE    ONLINE    rhel6   
ora.ORCLT.db   ora....se.type OFFLINE   OFFLINE               
ora.cssd       ora.cssd.type  ONLINE    ONLINE    rhel6   
ora.diskmon    ora....on.type ONLINE    ONLINE    rhel6   

[oracle@rhel6]:/oracle > srvctl start database -d ORCLT
[oracle@rhel6]:/oracle > ps -ef | grep smon
oracle  9109530        1   0 15:46:08      -  0:00 ora_smon_ORCLT
oracle 11075806        1   0 15:43:50      -  0:00 asm_smon_+ASM
[oracle@rhel6]:/oracle >

[oracle@rhel6]:/oracle >                                                                                                     
[oracle@rhel6]:/oracle >

[oracle@rhel6]:/oracle > srvctl status database -d ORCLT
Database is running.

[oracle@rhel6]:/oracle >            
                    
[oracle@rhel6]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora...._ASM.dg ora....up.type ONLINE    ONLINE    rhel6   
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rhel6   
ora.asm        ora.asm.type   ONLINE    ONLINE    rhel6   
ora.ORCLT.db   ora....se.type ONLINE    ONLINE    rhel6   
ora.cssd       ora.cssd.type  ONLINE    ONLINE    rhel6   
ora.diskmon    ora....on.type ONLINE    ONLINE    rhel6   

[oracle@rhel6]:/oracle >

[oracle@rhel6]:/oracle > srvctl stop database -d ORCLT

[oracle@rhel6]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora...._ASM.dg ora....up.type ONLINE    ONLINE    rhel6   
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rhel6   
ora.asm        ora.asm.type   OFFLINE   ONLINE    rhel6   
ora.ORCLT.db   ora....se.type OFFLINE   OFFLINE               
ora.cssd       ora.cssd.type  ONLINE    ONLINE    rhel6   
ora.diskmon    ora....on.type ONLINE    ONLINE    rhel6
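The "-a" dependency option mentioned above was not used in the session here. A hypothetical re-registration that ties the database to its disk group could look like this (the resource names and ORACLE_HOME path are taken from the listings above; the exact flag behavior may differ between 11.2 patch levels):

```
[oracle@rhel6]:/oracle > srvctl remove database -d ORCLT
[oracle@rhel6]:/oracle > srvctl add database -d ORCLT -o /oracle/orahome1 -a "DG_DB_ASM"
[oracle@rhel6]:/oracle > srvctl config database -d ORCLT -a
```

With the disk group registered as a dependency, "srvctl start database -d ORCLT" should bring ASM online and mount DG_DB_ASM first if they are not already running.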


The second handy feature is the "enable" and "disable" commands of srvctl, which configure whether the related object is restarted automatically on host restart or on process failure.


[oracle@rhel6]:/oracle > srvctl enable -h 

The SRVCTL enable command enables the named object so that it can run under 
  Oracle Restart for automatic startup, failover, or restart.

Usage: srvctl enable database -d db_unique_name
Usage: srvctl enable service -d db_unique_name -s "service_name_list"
Usage: srvctl enable asm
Usage: srvctl enable listener [-l lsnr_name]
Usage: srvctl enable diskgroup -g dg_name
Usage: srvctl enable ons [-v]
Usage: srvctl enable eons [-v]
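
As a small sketch, assuming the ORCLT resource registered earlier, enabling and disabling would look like this:

```
[oracle@rhel6]:/oracle > srvctl enable database -d ORCLT    # restart ORCLT automatically after host restart or failure
[oracle@rhel6]:/oracle > srvctl disable database -d ORCLT   # leave ORCLT down, e.g. during a maintenance window
```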



Shutting down everything nicely with Oracle Restart.


[oracle@rhel6]:/oracle > srvctl stop database -d ORCLT
[oracle@rhel6]:/oracle > srvctl stop diskgroup -g DG_DB_ASM
[oracle@rhel6]:/oracle > srvctl stop asm
[oracle@rhel6]:/oracle > srvctl stop listener

[oracle@rhel6]:/oracle > crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora...._ASM.dg ora....up.type OFFLINE   OFFLINE               
ora....ER.lsnr ora....er.type OFFLINE   OFFLINE               
ora.asm        ora.asm.type   OFFLINE   OFFLINE               
ora.ORCLT.db   ora....se.type OFFLINE   OFFLINE               
ora.cssd       ora.cssd.type  ONLINE    ONLINE    rhel6   
ora.diskmon    ora....on.type ONLINE    ONLINE    rhel6  




resources:
http://download.oracle.com/docs/cd/E14072_01/server.112/e10595/restart001.htm
$ srvctl -h


Monday, September 12, 2011

Random Data Generation and DBMS_RANDOM

Generating random data is a subject of its own. Here I can only write about the need for random data in a database environment and how a database developer or administrator can generate random data to fulfill his/her requirements. I have personally used random data generation for three main purposes so far.

DATA MASKING

In almost every company I have worked for, data masking is a must-have procedure for test data generation, especially after restoring production data to a prelive or day-1 database. In the simplest case, just truncating the sensitive data or updating the columns to NULL may be enough for security purposes. But in some cases the application logic depends on the data being present and must keep working. For these there are two main roadmaps: hashing the real data to de-identify it, or generating random data to represent the real sensitive data. The generated data may also preserve the formatting of known structures such as the credit card numbers, postal codes, or telephone numbers of the customers.

PASSWORD GENERATION

After applying the new password policy to our database, passwords forgotten by end users are no longer recovered that easily, because of the password complexity checks (enforced by a password verify function) and the password_reuse_max parameter of the profile. This means the portal used by level-1 support is no longer enough: the procedure behind its button simply changes the password to a default value which has never changed since the procedure was introduced, and this now raises "ORA-28007: the password cannot be reused".

TEST DATA GENERATION

In order to test any functionality or new feature by myself, generating data for newly created tables is a necessity. Formatted VARCHAR2 and NUMBER columns are very handy for testing how some functionalities behave on a large amount of structured table data.

Dictionary views can be used for this purpose, but if they are not enough for the individual test conditions, then generating random data in the desired format is crucial.
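For instance, a quick test table can be seeded from a dictionary view like this (a minimal sketch; ALL_OBJECTS is just a convenient row source and the table name is made up):

```
-- create a 1000-row test table straight from the data dictionary
create table t_test as
  select object_id, object_name, created
  from all_objects
  where rownum <= 1000;
```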

DBMS_RANDOM PACKAGE

For all the purposes mentioned, the Oracle database has a built-in package, DBMS_RANDOM, to generate random data. There are several functions in this package which can be used to obtain the same results with some manipulation of the output values. The main functions to produce random numbers are:

DBMS_RANDOM.VALUE(), whose output is of the NUMBER datatype, returns a random number greater than or equal to 0 and less than 1, with 38 digits of precision.


select dbms_random.value() as result from dual;

RESULT
------
0.33363563290178533954768590716355821427


DBMS_RANDOM.VALUE(low NUMBER, high NUMBER) returns a random value greater than or equal to the lower limit and less than the upper one. The following SQL returns a random number between 1 and 20.


select dbms_random.value(1,20) as result from dual;

RESULT
------
14.25291834803147861791906299688905822678


The DBMS_RANDOM.STRING() function can also be used to generate a random character or string. The function has two parameters: one for the type of characters to be used, and a second for the length of the string. The following function definition is copied from the DBMS_RANDOM package spec of an Oracle 11gR2 database. It explains itself very well and nothing needs to be added.


FUNCTION string (opt char, len NUMBER)

/* 
"opt" specifies that the returned string may contain:

'u','U': upper case alpha characters only
'l','L': lower case alpha characters only
'a','A': alpha characters only (mixed case)
'x','X': any alpha-numeric characters
'p','P': any printable characters
*/
RETURN VARCHAR2;
-- string of characters


So, if you need to produce a random string 8 characters long consisting only of lower-case alphabetic characters, it is easy using the DBMS_RANDOM.STRING() function:


select dbms_random.string('l',8) as result from dual;

RESULT
------
yhxctomv


DBMS_RANDOM BY EXAMPLES

All the explanations of the examples can be found in the document that I shared. You can also find the link to the same document at the bottom of this post.


--------------------------
--masking the card numbers
--Ex.1
--------------------------
select 
lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
lpad(round(dbms_random.value*power(10,4)),4,0)  as card_number
from dual 
connect by level <=5; 
/*
CARD_NUMBER
-----------
2877-6639-0728-5456
6026-6002-2218-9038
7679-8441-0899-2826
8294-6783-6110-7988
1836-0407-9206-3333
*/

--------------------------
--masking the card numbers
--Ex.2
--------------------------
select 
ltrim(to_char(dbms_random.value(1,9999),'0000')) || '-' ||
ltrim(to_char(dbms_random.value(1,9999),'0000')) || '-' ||
ltrim(to_char(dbms_random.value(1,9999),'0000')) || '-' ||
ltrim(to_char(dbms_random.value(1,9999),'0000'))  as card_number
from dual 
connect by level <=5; 
/*
CARD_NUMBER
-----------
1558-9846-7194-5325
5109-3233-0641-9209
3081-5946-9840-6615
4400-9638-6333-9113
2928-9883-1771-0465
*/

--------------------------
--masking the card numbers
--Ex.3
--------------------------
select 
ltrim(replace(to_char(round(dbms_random.value*power(10,16)),'0000,0000,0000,0000'),',','-')) as card_number
from dual
connect by level <=5;
/*
CARD_NUMBER
-----------
0157-8125-6418-6025
3829-9039-1357-9048
2876-1086-5371-8152
2775-1748-2591-2523
2058-2404-1101-5320
*/

--------------------------
--masking the card numbers
--Ex.4
--------------------------
select 
substr(abs(dbms_random.random),1,4) || '-' ||
substr(abs(dbms_random.random),1,4) || '-' ||
substr(abs(dbms_random.random),1,4) || '-' ||
substr(abs(dbms_random.random),1,4)  as card_number
from dual 
connect by level <=5; 
/*
CARD_NUMBER
-----------
8639-7576-1359-3965
1317-1525-2526-1796
1043-5881-1000-7113
2106-3239-8662-3769
1461-7473-5870-6829
*/

--------------------------
--masking the phone number
--Ex.5
--------------------------
select 
  '+' || 
  round(DBMS_RANDOM.VALUE(1,99)) || '-' ||
  round(DBMS_RANDOM.VALUE(10,99)) || '-' ||
  round(DBMS_RANDOM.VALUE(1000000,9999999)) as phone_number
from dual
connect by level <= 5;
/*
PHONE_NUMBER
------------
+8-44-9146987
*/

----------------------
--masking the postcode
--Ex.6
----------------------
select 
  round(dbms_random.value(1000,9999)) || '-' || 
  dbms_random.string('U',2) as postcode 
from dual;
/*
POSTCODE
--------
4997-QP
*/



What if you have a password verify function which enforces that passwords are at least 8 characters long and contain alphanumeric characters, and this complexity rule is combined with a profile that has a password lifetime of two months and a password_reuse_max of four? If there is a predefined automated case (explained in detail in the paper mentioned before) which needs random password generation, the following example can be used:

-----------------
--random password
--Ex.7
-----------------
select  
  DBMS_RANDOM.STRING('A',1) || 
  round(DBMS_RANDOM.VALUE()*10) || 
  DBMS_RANDOM.STRING('X',6) as password 
from dual;
/*
PASSWORD
--------
w7N3C1YG
*/


TEST DATA GENERATION

I generally use two different methods while generating test data to fill test tables. One of them uses the DBMS_RANDOM package, and the other fills the columns with bulk data that has the same output, as you can see in the following examples.

In the first part of the following example, the code tries to simulate a username by using lower-case strings with random lengths between five and fifteen. The first name starts with an upper-case letter thanks to the INITCAP() function, and the last name is fully upper case thanks to the UPPER() function. The second part is not that clever; it just creates the same data over and over, which can substitute for a customer name or a username.

----------------------
--test data generation
--Ex.8
----------------------
select 
  initcap(dbms_random.string('L',round(dbms_random.value(5,15)))) || ' ' || 
  upper(dbms_random.string('L',round(dbms_random.value(5,15)))) as name
from dual
connect by level <= 5;
/*
NAME
----
Abzvsidgbcfa AGUGIR
Wvuogptkxwhdwa IPOOXTVBLLCNPV
Yiwcgh SGPFKJYCDISO
Radshiyidcrst ZNKNSEYUZXVWY
Daxeqzugq LKJILZJEYULVI
*/

select
initcap(lpad('x',round(dbms_random.value(5,15)),'x')) || ' ' || 
upper(lpad('y',round(dbms_random.value(5,15)),'y')) as name
from dual
connect by level <= 5;
/*
NAME
----
Xxxxxxxxxxxx YYYYYYYYYYY
Xxxxxxxxxxxxxxx YYYYYYYYYYYYY
Xxxxxxxxxxxxxx YYYYYYYYY
Xxxxxxxx YYYYYYYYYYY
Xxxxxx YYYYYYYY
*/


select 
  initcap(lpad('x',9,'x')) || ' ' ||
  upper(lpad('y',9,'y')) as name
from dual
connect by level <= 5;
/*
NAME
----
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
*/
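
The queries above can be combined with CONNECT BY and an INSERT ... SELECT to fill a real test table in one statement (a sketch; the table and column names are made up):

```
create table t_people (name varchar2(40));

insert into t_people
  select initcap(dbms_random.string('L', round(dbms_random.value(5,15)))) || ' ' ||
         upper(dbms_random.string('L', round(dbms_random.value(5,15))))
  from dual
  connect by level <= 10000;
commit;
```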


GENERATING MEANINGFUL DATA

Up to now, the generated data was completely dummy. For example, while masking the credit card number it was not important whether the result was a valid card number, or whether the names were real names. All the generated data was dummy random data which does not make any sense; it is just strings or numbers formed from random characters or digits.

But by using the following example, random data can be generated which makes sense (or at least a little more sense than with the previous methods).

In this small example there is a lookup table (which can be extended as far as the individual case needs), and the phone number generator picks a random country code from the lookup table while generating the phone number. This way, every random phone number generated will have a valid country code. Of course, this example can be extended to cover area codes as well.

--------------------------
--generating sensible data
--Ex.9
--------------------------
create table t_country_codes 
  (key number(2), 
   code number(2), 
   country varchar2(20));
   
insert into t_country_codes values (1,1,'United States');
insert into t_country_codes values (2,31,'Netherlands');
insert into t_country_codes values (3,44,'United Kingdom');
insert into t_country_codes values (4,49,'Germany');
insert into t_country_codes values (5,90,'Turkey');
commit;

select 
  '+' || code || '-' || 
  round(dbms_random.value(10,99)) || '-' || 
  round(DBMS_RANDOM.VALUE(100000,999999)) as phone_number
from 
  t_country_codes 
where 
  key=(select round(DBMS_RANDOM.VALUE(1,5)) from dual);
/*  
PHONE_NUMBER
------------
+31-54-732777
*/  


This post is taken from a paper I wrote for internal use and enhanced for my blog. You can download the paper from the following link if you have Google Docs access: Generating Random Data in Oracle 11g Database

Monday, August 8, 2011

Howto Recover Archive Gap in Streams Configuration

When I realized that my archived logs were not shipping to the destination database (both databases are version 10.2.0.4), which uses a downstream capture process for Streams replication, it was already too late: I had missed around 20 archived logs. I fixed the problem, which originated from different service definitions in the log_archive_dest_2 system parameter and the TNS alias. But what about the missing archived logs?

--SOURCE DB
SQL> select name, value from v$parameter where name = 'log_archive_dest_2';

NAME               VALUE
----               -----
log_archive_dest_2 SERVICE=ODSD ASYNC NOREGISTER
                   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)   
                   DB_UNIQUE_NAME=ODSD


After correcting the entry in the tnsnames.ora file, log shipping resumed from where it had paused. I tested it by simply archiving the current redo log.

--SOURCE DB
SQL> alter system archive log current;
System altered.

SH> ls
-rw-r-----    1 oracle   dba      43451392 Aug 05 09:00 1_21759_657122256.dbf
-rw-r-----    1 oracle   dba       8708608 Aug 05 09:57 1_21760_657122256.dbf

--DESTINATION DB
SH> ls
-rw-r-----    1 oracle   dba      15930368 Aug  4 16:38 1_21727_657122256.dbf
-rw-r-----    1 oracle   dba       8708608 Aug  5 09:58 1_21760_657122256.dbf


Now the question is how to recover the gap between the last archived log and the one created approximately 12 hours earlier. First I should determine exactly which archived logs have to be carried from the source to the target. I will check the capture process and see which SCN it is waiting for. As I am using a downstream capture process, I should check the capture process in the target database.

--DESTINATION DB
SQL> select capture_name, state from v$streams_capture;

CAPTURE_NAME       STATE
------------       -----
CAPTURE_TABLE_GRP1 WAITING FOR DICTIONARY REDO: SCN 7523421102323


I found the SCN number being waited for on the target. Now I should find which archived log on the source database contains this SCN.

--SOURCE DB
SQL> SELECT 
  name, dest_id, sequence#, first_change#, next_change#, completion_time 
FROM V$ARCHIVED_LOG where 7523421102323 between first_change# and next_change#

NAME                               DEST_ID SEQUENCE# FIRST_CHANGE# COMPLETION_TIME
----                               ------- -------- ------------- ---------------
/oracle/.../1_21728_657122256.dbf  1       21728    7523421102321 04/08/11 16:38:52
ODSD,                              2       21728    7523421102321 04/08/11 16:38:54
/oracle/.../1_21729_657122256.dbf  1       21729    7523421102323 04/08/11 16:39:10


It seems that the archived logs after sequence# 21728 were not shipped to the destination database. What I will do is copy these archived logs manually from the source host to the destination using OS commands.
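A hedged sketch of that copy step, using the destination archive path seen later in this post (the source archive directory and the destination host name are placeholders):

```
# copy the missing archived log sequences (adjust the glob to the exact gap)
scp /oracle/<source_archive_dir>/1_217[2-5][0-9]_657122256.dbf \
    oracle@<destination_host>:/oracle/admin/ODSD/archive/
```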

After copying the archived logs, I have to register them in the Streams configuration. The beginning of the register command is similar to the one we already know. And a small oops!...

--DESTINATION DB
SQL> alter database register logical logfile '/oracle/admin/ODSD/archive/1_21759_657122256.dbf';

ORA-16225: Missing LogMiner session name for Streams

SQL> select name, source_database from DBA_LOGMNR_SESSION;

ID NAME               SOURCE_DATABASE
-- ----               ---------------  
2  CAPTURE_TABLE_GRP1 CORED.CEB.LOCAL

SQL> alter database register logical logfile '/oracle/admin/ODSD/archive/1_21759_657122256.dbf' for 'CAPTURE_TABLE_GRP1';

Database altered.

SQL> select logmnr_session_id, name from DBA_LOGMNR_LOG; --or dba_registered_archived_log

LOGMNR_SESSION_ID NAME
----------------- ----
2                 /oracle/admin/ODSD/archive/1_21759_657122256.dbf
2                 /oracle/admin/ODSD/archive/1_21760_657122256.dbf
2                 /oracle/admin/ODSD/archive/1_21761_657122256.dbf


Let's check the capture process again. And the second oops!..

--DESTINATION DB
SQL> select capture_name, state from v$streams_capture;

no rows selected.

SQL> select capture_name, status, captured_scn, applied_scn from dba_capture;

CAPTURE_NAME  STATUS CAPTURED_SCN APPLIED_SCN
------------  ------ ------------ -----------
CAPTURE_TABLE_GRP1 ABORTED 7523421102323 7523421102323


It seems the capture process aborted while we were registering the archived logs. Maybe we need to restart it.


SQL> exec DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_TABLE_GRP1');

SQL> select capture_name, state from v$streams_capture;

CAPTURE_NAME       STATE
------------       -----
CAPTURE_TABLE_GRP1 WAITING FOR DICTIONARY REDO: SCN 7523573032011


It seems the problem is solved and the capture process moves on with the next SCN and the following archived logs.

Tuesday, July 19, 2011

Hasty DBA's Most Probable Errors while testing TDE

Here I tested Oracle Transparent Data Encryption in a 10gR2 (10.2.0.4) database installed on AIX 6.1, to see how it works and whether there are any undocumented difficulties in using the technology. As I was too hasty and careless, I ran into most of the cheap errors.

In Summary;

ORA-28368: cannot auto-create wallet

Solution: Create the wallet directory (in this case the default wallet directory, which is $ORACLE_BASE/admin/$ORACLE_SID/wallet).

ORA-28336: cannot encrypt SYS owned objects

Solution: Do not test under SYS schema :)

ORA-39173: Encrypted data has been stored unencrypted in dump file set.

Solution: Use the encryption_password parameter to keep the encrypted data safe in the export dump. Otherwise it will be stored as plain text.

ORA-31693: Table data object "ERGEMP"."T_ENCRYPTED" failed to load/unload and is being skipped due to error:
ORA-28336: cannot encrypt SYS owned objects

Solution: Use your own user for exporting instead of SYS, as you cannot encrypt SYS objects even in the export dumpfile.

Error: ORA-39087: directory name DATA_PUMP_DIR is invalid

Solution: Don't forget to grant read and write on the directory to the user you are exporting data with :)


The rest is as follows.


[oracle@defbora01]:/oracle/admin/CORET > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jul 14 12:18:08 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system set encryption key identified by "pass1234";
alter system set encryption key identified by "pass1234"
*
ERROR at line 1:
ORA-28368: cannot auto-create wallet


SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@defbora01]:/oracle/admin/CORET > mkdir wallet
[oracle@defbora01]:/oracle/admin/CORET > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jul 14 12:20:03 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system set encryption key identified by "pass1234";

System altered.

SQL> 


/*** ***/

SQL> create table t_encrypted(col1 varchar2(50) encrypt);
create table t_encrypted(col1 varchar2(50) encrypt)
*
ERROR at line 1:
ORA-28336: cannot encrypt SYS owned objects
SQL> create user ergemp identified by "pass1234" default tablespace users;

User created.

SQL> 
SQL> 
SQL> grant connect, resource to ergemp;

Grant succeeded.

SQL> create table ergemp.t_encrypted(col1 varchar2(50) encrypt);

Table created.

SQL> insert into ergemp.t_encrypted values ('this text should be encrypted');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from dba_encrypted_columns;

OWNER           TABLE_NAME                     COLUMN_NAME               ENCRYPTION_ALG                SAL
--------------- ------------------------------ ------------------------- ----------------------------- ---
ERGEMP          T_ENCRYPTED                    COL1                      AES 192 bits key              YES

SQL> select * from ERGEMP.T_ENCRYPTED;

COL1
--------------------------------------------------
this text should be encrypted

SQL> 



I will test the Data Pump export. First I will not use the "encryption_password" parameter and will search the dumpfile for the string of the column value.

Then I will run the same Data Pump export with the "encryption_password" parameter, to be sure the column value is not readable in the dumpfile with the "strings" utility.


[oracle@defbora01]:/oracle/orahome1/rdbms> expdp '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED'


Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 11:53:18

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables=ERGEMP.T_ENCRYPTED 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ERGEMP"."T_ENCRYPTED"                      4.953 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/oracle/orahome1/rdbms/log/expdp.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 11:56:53

[oracle@defbora01]:/oracle/orahome1/rdbms/log > strings expdp.dmp | grep "this text should be encrypted"
this text should be encrypted

[oracle@defbora01]:/oracle/orahome1/rdbms/log >



Without using "encryption_password", the data in the export files is not safe, because it is held as plain text, as can be seen in the previous example.


[oracle@defbora01]:/oracle/orahome1/rdbms> expdp '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 13:53:56

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables=ERGEMP.T_ENCRYPTED encryption_password=******** 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "ERGEMP"."T_ENCRYPTED" failed to load/unload and is being skipped due to error:
ORA-28336: cannot encrypt SYS owned objects
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/oracle/orahome1/rdbms/log/expdp.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 13:55:55

[oracle@defbora01]:/oracle/orahome1/rdbms> expdp 'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 13:59:17

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid

SQL> grant read,write on directory DATA_PUMP_DIR to ERGEMP;

Grant succeeded.

[oracle@defbora01]:/oracle/orahome1/rdbms> expdp 'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 14:08:59

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ERGEMP"."SYS_EXPORT_TABLE_01":  'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables=ERGEMP.T_ENCRYPTED encryption_password=******** 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ERGEMP"."T_ENCRYPTED"                          5 KB       1 rows
Master table "ERGEMP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ERGEMP.SYS_EXPORT_TABLE_01 is:
/oracle/orahome1/rdbms/log/expdp.dmp
Job "ERGEMP"."SYS_EXPORT_TABLE_01" successfully completed at 14:10:19

[oracle@defbora01]:/oracle/orahome1/rdbms/log > strings expdp.dmp | grep "this text should be encrypted"

[oracle@defbora01]:/oracle/orahome1/rdbms/log >



By using "encryption_password" while exporting the table, the column data can no longer be recovered by digging through the plain-text strings in the export file.
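As a side note (not shown in the example above), the same password has to be supplied again at import time; without it, Data Pump cannot decrypt the protected column data and the import of the encrypted table fails. A sketch of the corresponding import, reusing the directory and file names from the example:


[oracle@defbora01]:/oracle/orahome1/rdbms> impdp 'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=impdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"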

resources:
Oracle Advanced Security Admin Guide

Wednesday, July 13, 2011

How to set up manual log shipping in SQL Server

Whenever I hit an error during the setup of SQL Server log shipping, I used to be helpless. From now on, though, I have a procedure to set up a manual log shipping schedule from a primary SQL Server to a secondary one. It is worth mentioning that I used the following structure on SQL Server 2000, where it worked like a charm.

In summary, the main steps are as follows:

1- on the primary server: create a job to back up the transaction logs regularly
2- on the primary server: share the transaction log backup directory
3- backup and restore the database to be log shipped from the primary to the secondary server
4- on the secondary server: create the following structure (tables and stored procedures)
5- on the secondary server: create the copy and load jobs via the created structure
6- on the secondary server: monitor the process
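Step 1 is not covered by the scripts that follow. A minimal sketch of such a primary-side job step, assuming a database named DB01 and the shared directory \\PRMDB01\trnlogs used later in this post, could look like this:

```sql
-- Hypothetical job step for step 1 on the primary server (SQL Server 2000 syntax).
-- Builds a timestamped file name and backs up the transaction log to the share.
DECLARE @file NVARCHAR(256)
SELECT  @file = N'\\PRMDB01\trnlogs\DB01\DB01_tlog_'
              + CONVERT(NVARCHAR(8), GETDATE(), 112)                    -- YYYYMMDD
              + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), ':', '') -- HHMMSS
              + N'.trn'
BACKUP LOG DB01 TO DISK = @file WITH INIT
```

Schedule this command as a SQL Server Agent job with a frequency matching the copy/load job frequencies chosen below.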

There are 3 tables and 3 stored procedures to create (the monitoring procedure is given at the end of the post):


USE msdb
CREATE TABLE backup_movement_plans
(
  plan_id         UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED,
  plan_name       sysname          NULL,
  source_dir      NVARCHAR(256)    NOT NULL,
  destination_dir NVARCHAR(256)    NOT NULL,
  database_subdir BIT              NOT NULL DEFAULT (1)
)

USE msdb
CREATE TABLE backup_movement_plan_databases
(
  plan_id              UNIQUEIDENTIFIER NOT NULL FOREIGN KEY REFERENCES backup_movement_plans(plan_id),
  source_database      sysname          NOT NULL,
  destination_database sysname          NOT NULL,
  source_server        sysname          NOT NULL DEFAULT (@@servername),
  load_delay           INT              NOT NULL DEFAULT(0),  -- In minutes
  load_all             BIT              NOT NULL DEFAULT(1),
  retention_period     INT              NOT NULL DEFAULT(48), -- In hours
  last_file_copied     NVARCHAR(256)    NULL,
  date_last_copied     DATETIME         NULL,
  last_file_loaded     NVARCHAR(256)    NULL,
  date_last_loaded     DATETIME         NULL
)

USE msdb
CREATE TABLE backup_movement_plan_history
(
  sequence_id          INT              NOT NULL IDENTITY UNIQUE CLUSTERED,
  plan_id              UNIQUEIDENTIFIER NOT NULL DEFAULT ('00000000-0000-0000-0000-000000000000'),
  plan_name            sysname          NOT NULL DEFAULT ('All ad-hoc plans'),
  destination_server   sysname          NOT NULL DEFAULT (@@servername),
  source_server        sysname          NOT NULL DEFAULT (@@servername),
  source_database      sysname          NOT NULL,
  destination_database sysname          NOT NULL,
  activity             BIT              NOT NULL DEFAULT (0),
  succeeded            BIT              NOT NULL DEFAULT (1),
  num_files            INT              NOT NULL DEFAULT (0),
  last_file            NVARCHAR(256)    NULL,
  end_time             DATETIME         NOT NULL DEFAULT (GETDATE()),
  duration             INT              NULL     DEFAULT (0),
  error_number         INT              NOT NULL DEFAULT (0),
  message              NVARCHAR(512)    NULL
)  

USE msdb
CREATE PROCEDURE sp_create_backup_movement_plan
    @name          sysname,
    @source_dir    VARCHAR(256),
    @dest_dir      VARCHAR(256),
    @sub_dir       BIT = 1, -- Each database has its own sub-directory
    @load_job_freq INT = 5, -- In Minutes
    @copy_job_freq INT = 5  -- In Minutes
AS
BEGIN

BEGIN TRANSACTION
  SET       NOCOUNT             ON
  SET       QUOTED_IDENTIFIER   OFF
  SET       ANSI_NULLS          ON 

  DECLARE   @PlanID        uniqueidentifier
  DECLARE   @CopyJobName   sysname
  DECLARE   @LoadJobName   sysname
  DECLARE   @CopyCommand   VARCHAR(500)
  DECLARE   @LoadCommand   VARCHAR(500)
  DECLARE   @ReturnCode    INT

  -- Create a GUID for the plan
  SELECT @PlanID = NEWID()
    
  -- Check if a plan with the same name exists
  IF (EXISTS (SELECT * 
              FROM   msdb.dbo.backup_movement_plans
              WHERE  plan_name = @name ))
  BEGIN
    RAISERROR('A backup movement plan with the same name already exists. Specify a different name.', 16, 1)
    GOTO QuitWithRollback
  END

  -- Insert plan in the table
  INSERT msdb.dbo.backup_movement_plans 
         (plan_id, plan_name, source_dir, destination_dir, database_subdir)
  VALUES
         (@PlanID, @name, @source_dir, @dest_dir, @sub_dir)

  SELECT @CopyJobName = N'Copy Job For ' + @name
  SELECT @LoadJobName = N'Load Job For ' + @name
  SELECT @CopyCommand = N'EXECUTE master.dbo.xp_sqlmaint ''-CopyPlanName "' + @name + '" '' '
  SELECT @LoadCommand = N'EXECUTE master.dbo.xp_sqlmaint ''-LoadPlanName "' + @name + '" '' '
  
  -- Create the load job
  EXECUTE @ReturnCode = msdb.dbo.sp_add_job @job_name = @LoadJobName

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

  EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_name = @LoadJobName, 
     @step_id=1,
     @step_name = N'step1', 
     @command = @LoadCommand, 
     @subsystem = N'TSQL', 
     @on_success_step_id = 0, 
     @on_success_action = 1, 
     @on_fail_step_id = 0, 
     @on_fail_action = 2, @flags = 4

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

  EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule @job_name = @LoadJobName, 
     @freq_subday_interval = @load_job_freq, 
     @name = N'sch1', 
     @enabled = 1, 
     @freq_type = 4, 
     @active_start_date = 19980402, 
     @active_start_time = 0, 
     @freq_interval = 1, 
     @freq_subday_type = 4, 
     @freq_relative_interval = 0, 
     @freq_recurrence_factor = 0, 
     @active_end_date = 99991231, 
     @active_end_time = 235959

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

  EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_name = @LoadJobName, @server_name = N'(local)' 

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

  -- Create the Copy Job
  EXECUTE @ReturnCode = msdb.dbo.sp_add_job @job_name = @CopyJobName

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

  EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_name = @CopyJobName, 
     @step_id = 1, 
     @step_name = N'step1', 
     @command = @CopyCommand, 
     @subsystem = N'TSQL', 
     @on_success_step_id = 0, 
     @on_success_action = 1, 
     @on_fail_step_id = 0, 
     @on_fail_action = 2, 
     @flags = 4

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

  EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule @job_name = @CopyJobName, 
     @freq_subday_interval = @copy_job_freq, 
     @name = N'sch1', 
     @enabled = 1, 
     @freq_type = 4, 
     @active_start_date = 19980402, 
     @active_start_time = 0, 
     @freq_interval = 1, 
     @freq_subday_type = 4, 
     @freq_relative_interval = 0, 
     @freq_recurrence_factor = 0, 
     @active_end_date = 99991231, 
     @active_end_time = 235959

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

  EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_name = @CopyJobName, @server_name = N'(local)' 

  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback 

COMMIT TRANSACTION          
GOTO   EndSave              
QuitWithRollback:
 IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION 
EndSave: 
END 

USE msdb
CREATE PROCEDURE sp_add_db_to_backup_movement_plan
    @plan_id       uniqueidentifier   = NULL,
    @plan_name     sysname            = NULL,
    @source_db     sysname,
    @dest_db       sysname,
    @load_delay    INT                = 0,            -- In Minutes
    @load_all      BIT                = 1,
    @source_server sysname            = @@servername,
    @retention_period INT             = 48            -- In Hours. 0 implies do not delete files 
AS
BEGIN
    SET       NOCOUNT             ON
    SET       QUOTED_IDENTIFIER   OFF
    SET       ANSI_NULLS          ON 
    DECLARE   @PlanID             uniqueidentifier

    IF ((@plan_id IS NULL) AND (@plan_name IS NULL))
    BEGIN
      RAISERROR('You must supply the plan name or the plan id.', 16, 1)
      RETURN(1)
    END

    IF (@plan_id IS NULL)
    BEGIN        
        IF (NOT EXISTS (SELECT * 
                        FROM   msdb.dbo.backup_movement_plans
                        WHERE  plan_name = @plan_name ) )
        BEGIN
          RAISERROR('Backup movement plan with this name was not found.', 16, 1)
          RETURN(1)
        END

        IF (SELECT COUNT(*) 
            FROM   msdb.dbo.backup_movement_plans
            WHERE  plan_name = @plan_name) > 1
        BEGIN
          RAISERROR('There are more than one backup movement plans with this name.', 16, 1)
          RETURN(1)
        END

        SELECT @PlanID = plan_id 
        FROM   msdb.dbo.backup_movement_plans
        WHERE  plan_name = @plan_name
    END
    ELSE
    BEGIN
        SELECT @PlanID = @plan_id 
        IF (NOT EXISTS (SELECT * 
                        FROM   msdb.dbo.backup_movement_plans
                        WHERE  plan_id = @plan_id ) )
        BEGIN
          RAISERROR('Backup movement plan with this id was not found.', 16, 1)
          RETURN(1)
        END

        IF (SELECT COUNT(*) 
            FROM   msdb.dbo.backup_movement_plans
            WHERE  plan_id = @plan_id) > 1
        BEGIN
          RAISERROR('There are more than one backup movement plans with this id.', 16, 1)
          RETURN(1)
        END
    END

    IF (EXISTS ( SELECT *
                 FROM msdb.dbo.backup_movement_plan_databases
                 WHERE plan_id = @PlanID AND source_database = @source_db AND destination_database = @dest_db ))
    BEGIN
      RAISERROR('These databases are already included in this plan', 16, 1)
      RETURN(1)
    END
        
    INSERT msdb.dbo.backup_movement_plan_databases
          (plan_id, source_database, destination_database, load_delay, load_all, source_server, retention_period)
    VALUES
          (@PlanID, @source_db, @dest_db, @load_delay, @load_all, @source_server, @retention_period)
END


After creating this infrastructure, you can create the transaction log copy and load jobs by running the script below. Once the script has run, log shipping starts via the created jobs, and the process can be monitored with the stored procedure given afterwards.


exec msdb..sp_create_backup_movement_plan 
  @name = "DB01_logshipping",
  @source_dir = "\\PRMDB01\trnlogs", 
  @dest_dir = "D:\logshipping\",
  @sub_dir = 1,  -- each database has its own sub-directory
  @load_job_freq =30,
  @copy_job_freq = 30

exec msdb..sp_add_db_to_backup_movement_plan 
  @plan_name = "DB01_logshipping",
  @source_db = "DB01", 
  @dest_db = "DB01",
  @load_delay = 10, 
  @load_all = 1,
  @source_server = 'PRMYDB01', 
  @retention_period = 30  -- in hours


Here is the stored procedure to monitor the log shipping process.


USE msdb
CREATE PROCEDURE dbo.sp_log_ship_status
 @p_svr         varchar( 30 ) = NULL,
 @p_db          varchar( 30 )= NULL
AS
Begin

set nocount on
DECLARE @dest_db char(30),
 @history_id int,
 @time_delta int

CREATE TABLE #table ( destination_db  CHAR(30),
   time_delta INT)

DECLARE log_ship_cursor CURSOR
 FOR SELECT destination_database 
 from backup_movement_plan_databases

OPEN log_ship_cursor

FETCH NEXT FROM log_ship_cursor into @dest_db

WHILE @@FETCH_STATUS = 0
BEGIN
  set nocount on

  select  @history_id = (select max(restore_history_id) 
                           from restorehistory 
                           where destination_database_name = @dest_db)

  select  @time_delta = (select datediff(mi, (select backup_start_date 
                           from backupset 
                           where backup_set_id = (select backup_set_id 
                                                   from restorehistory 
                                                   where restore_history_id = @history_id)), getdate()))

  INSERT INTO #table VALUES( @dest_db, @time_delta)
  FETCH NEXT from log_ship_cursor into @dest_db 
end

close log_ship_cursor
DEALLOCATE log_ship_cursor

SELECT "Primary Srv" = CONVERT(char(30),source_server),
 "Primary DB" = CONVERT(char(30),source_database),
 "Secondary DB" = CONVERT(char(30),destination_database),
 "Delta" = time_delta,
 "Load All" = CASE WHEN (load_all = 0) THEN "No" ELSE "Yes" end,
 "Load Delay" = load_delay,
 "Save Period" = retention_period,
 "Last File Copied" = CONVERT(char(75),last_file_copied),
 "Copy Logged Time" = date_last_copied,
 "Last File Loaded" = CONVERT(char(75),last_file_loaded),
 "Load Logged Time" = date_last_loaded
FROM  msdb..backup_movement_plan_databases,
 #table
WHERE (@p_svr is NULL or source_server like @p_svr)
AND (@p_db is NULL or source_database like @p_db)
AND destination_database = destination_db

drop table #table
END

-------------------------
-- it seems it is working
-------------------------
exec msdb.dbo.sp_log_ship_status
/*
Primary Srv Primary DB      Secondary DB      Delta    Load All Load Delay  Save Period Last File Copied                                 Copy Logged Time        Last File Loaded                               Load Logged Time
----------- --------------- ----------------- -------- -------- ----------- ----------- ------------------------------------------------ ----------------------- -------------------------------------------    -----------------------
crmsbdb01   siebeldb        siebeldb          34       Yes      1           60          \\crmsbdb01\t-logs\siebeldb\siebeldb_tlog_201... 2011-07-13 15:10:01.610 E:\t-logs\siebeldb\siebeldb_tlog_2011071314... 2011-07-13 15:00:12.563
crmsbdb01   IVR_INTEGRATE   IVR_INTEGRATE     34       Yes      1           60          \\crmsbdb01\t-logs\IVR_INTEGRATE\IVR_INTEGRAT... 2011-07-13 15:10:03.313 E:\t-logs\IVR_INTEGRATE\IVR_INTEGRATE_tlog_... 2011-07-13 14:50:07.627
*/

Tuesday, June 28, 2011

Export problem and the invalid XDB library

This was supposed to be a regular export to the filesystem, running against an AIX 5.1 server where the Oracle Database version is 9.2.0.7 and the Oracle client version is 10.2.0.3. But a weird error popped up, and searching for it turned up interesting results. Here is the story of the export.


Export: Release 10.2.0.3.0 - Production on Wed Oct 3 11:52:37 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user PARITEM 
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user PARITEM 
About to export PARITEM's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
EXP-00056: ORACLE error 600 encountered
ORA-00600: internal error code, arguments: [unable to load XDB library], [], [], [], [], [], [], []
EXP-00000: Export terminated unsuccessfully


It seems there was a problem with the XDB library. When I selected from dba_registry to see the components and their statuses, I realized the XML Database component was INVALID. After some investigation and a Metalink search, I came across a document explaining how to deinstall and reinstall the XML Database (please see the end of the post for the related resources).


--INVALID XML Database component

SQL> col comp_name format a30
SQL> set pagesize 100
SQL> select comp_name, status, version from dba_registry;

COMP_NAME                      STATUS      VERSION
------------------------------ ----------- ------------------------------
Oracle9i Catalog Views         VALID       9.2.0.7.0
Oracle9i Packages and Types    VALID       9.2.0.7.0
Oracle Workspace Manager       VALID       9.2.0.1.0
JServer JAVA Virtual Machine   VALID       9.2.0.7.0
Oracle XDK for Java            VALID       9.2.0.9.0
Oracle9i Java Packages         VALID       9.2.0.7.0
Oracle Text                    VALID       9.2.0.7.0
Oracle XML Database            INVALID     9.2.0.7.0
Spatial                        VALID       9.2.0.7.0
Oracle Ultra Search            VALID       9.2.0.7.0
Oracle Data Mining             VALID       9.2.0.7.0
OLAP Analytic Workspace        UPGRADED    9.2.0.7.0
Oracle OLAP API                UPGRADED    9.2.0.7.0
OLAP Catalog                   VALID       9.2.0.7.0

14 rows selected.

-- Dropping xml database
SQL> @?/rdbms/admin/catnoqm.sql;
SQL> drop trigger sys.xdb_installation_trigger;
SQL> drop trigger sys.dropped_xdb_instll_trigger;
SQL> drop table dropped_xdb_instll_tab;

-- Recreating the xml database
SQL> startup migrate;
SQL> @?/rdbms/admin/catproc.sql;
SQL> @?/rdbms/admin/catqm.sql;
SQL> @?/rdbms/admin/dbmsxsch.sql;
SQL> @?/rdbms/admin/catxdbj.sql; -- only in 9i
SQL> @?/rdbms/admin/xdbpatch;

SQL> select comp_name, status, version from dba_registry;

COMP_NAME                      STATUS      VERSION
------------------------------ ----------- ------------------------------
Oracle9i Catalog Views         VALID       9.2.0.7.0
Oracle9i Packages and Types    VALID       9.2.0.7.0
Oracle Workspace Manager       VALID       9.2.0.1.0
JServer JAVA Virtual Machine   VALID       9.2.0.7.0
Oracle XDK for Java            VALID       9.2.0.9.0
Oracle9i Java Packages         VALID       9.2.0.7.0
Oracle Text                    VALID       9.2.0.7.0
Oracle XML Database            VALID       9.2.0.7.0
Spatial                        VALID       9.2.0.7.0
Oracle Ultra Search            VALID       9.2.0.7.0
Oracle Data Mining             VALID       9.2.0.7.0
OLAP Analytic Workspace        UPGRADED    9.2.0.7.0
Oracle OLAP API                UPGRADED    9.2.0.7.0
OLAP Catalog                   VALID       9.2.0.7.0

14 rows selected.

Re-run the export now!


resources:
http://www.oratransplant.nl/2005/11/22/unable-to-export-char-semantic-102-database/
Note:339938.1 - Full Export From 10.2.0.1 Aborts With EXP-56 ORA-932 (Inconsistent Datatypes) EXP-0
Note:243554.1 - How to Deinstall and Reinstall XML Database (XDB)

Thursday, June 16, 2011

Patching a 10.2.0.4 Database on AIX with OPatch


[oracle@]:/oracle/asmhome1/OPatch > opatch version
Invoking OPatch 10.2.0.5.1

OPatch Version: 10.2.0.5.1

OPatch succeeded.

[oracle@]:/oracle/asmhome1/OPatch > cd
[oracle@]:/oracle > . .profile

[oracle@]:/oracle > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 09:48:52 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system checkpoint;

System altered.

SQL> shutdown abort;
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

[oracle@]:/oracle > 
[oracle@]:/oracle > . .profile_asm
[YOU HAVE NEW MAIL]
[oracle@]:/oracle > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 09:54:58 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown;
ASM diskgroups dismounted
ASM instance shutdown
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@]:/oracle > lsnrctl stop

LSNRCTL for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production on 01-JUN-2011 09:49:09

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=defbora01)(PORT=1521)))
The command completed successfully

[oracle@]:/oracle/orahome1/OPatch > su - 
root's Password: 
[root@]:/home/root> /usr/sbin/slibclean
[root@]:/home/root>

[oracle@]:/oracle > cd $ORACLE_HOME/OPatch
[oracle@]:/oracle/asmhome1/OPatch > ls
11725015                              jlib                                  opatch.pl
README.txt                            ocm                                   opatchprereqs
crs                                   opatch                                p11725015_10204_AIX5L-cpuapr2011.zip
docs                                  opatch.bat                            p6880880_102000_AIX64-5L-opatch.zip
emdpatch.pl                           opatch.ini
[oracle@]:/oracle/asmhome1/OPatch > cd 11725015 
[oracle@]:/oracle/asmhome1/OPatch/11725015 > opatch napply --skip_subset --skip_duplicate

... 
(output truncated)
...

--------------------------------------------------------------------------------
**********************************************************************
**                       ATTENTION                                  **
**                                                                  **
** Please note that this Patch Installation is                      **
** not complete until all the Post Installation instructions        **
** noted in the Readme accompanying this patch, have been           **
** successfully completed.                                          **
**                                                                  **
**********************************************************************

--------------------------------------------------------------------------------


The local system has been patched and can be restarted.

UtilSession: N-Apply done.

OPatch succeeded.

[oracle@]:/oracle/asmhome1/OPatch/11725015 > 
[oracle@]:/oracle/asmhome1/OPatch/11725015 > opatch lsinv

... 
(output truncated)
...

[oracle@]:/oracle > . .profile
[oracle@]:/oracle/orahome1/OPatch > cd 11725015 
[oracle@]:/oracle/orahome1/OPatch/11725015 > opatch napply --skip_subset --skip_duplicate

...
(output truncated)
...

[oracle@]:/oracle/orahome1/OPatch/11725015 > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 10:20:14 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 3221225472 bytes
Fixed Size                  2087480 bytes
Variable Size             637535688 bytes
Database Buffers         2566914048 bytes
Redo Buffers               14688256 bytes
Database mounted.
Database opened.
SQL> @?/rdbms/admin/catbundle.sql cpu apply
SQL> @?/rdbms/admin/utlrp.sql

[oracle@]:/oracle/orahome1/cpu/view_recompile > ls -l
total 24
-rwxr-xr-x    1 oracle   dba            2095 Jul 10 2008  recompile_precheck_jan2008cpu.sql
-rwxr-xr-x    1 oracle   dba            5143 Jul 10 2008  view_recompile_jan2008cpu.sql
[oracle@]:/oracle/orahome1/cpu/view_recompile > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 10:34:16 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @recompile_precheck_jan2008cpu.sql;

Running precheck.sql...

Number of views to be recompiled :2226
-----------------------------------------------------------------------

Number of objects to be recompiled :4347
Please follow the README.txt instructions for running viewrecomp.sql

PL/SQL procedure successfully completed.

SQL> @view_recompile_jan2008cpu.sql;

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


1 row created.


Commit complete.

No. of Invalid Objects is :1848
Please refer to README.html to for instructions on validating these objects

PL/SQL procedure successfully completed.

Logfile for the current viewrecomp.sql session is : vcomp_CORET_01Jun2011_11_47_53.log
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area 3221225472 bytes
Fixed Size                  2087480 bytes
Variable Size             637535688 bytes
Database Buffers         2566914048 bytes
Redo Buffers               14688256 bytes
Database mounted.
Database opened.
SQL> 

SQL> select bundle_series, action, version from dba_registry_history;

BUNDLE_SERIES          ACTION  VERSION
---------------------- ---------------- --------
CPU                    APPLY CPU 10.2.0.4


SQL> SELECT * FROM registry$history where ID = '6452863';

SQL> select bundle_series, action, version from registry$history where ID = '6452863';

BUNDLE_SERIES                  ACTION     VERSION
------------------------------ ---------- ------------------------------
CPU