Sunday, December 14, 2008

why auditing

In general, security seems to be one of the least practiced routines of the DBA, but in my opinion it should be among the most important. Auditing is not only about supplying audit records for firm-wide audit documentation; it is also about knowing what is going on in our databases, so we can be sure everything runs as expected. Auditing is not about blaming someone for a wrong action; it is a routine for clearing up wrong events early.
I started auditing by setting the AUDIT_TRAIL parameter to "db,extended" to track the SQL statements along with the audited actions. Restarting the instance is a must after altering the system, as this parameter cannot be changed online:

SQL> alter system set audit_trail=db,extended scope=spfile;
SQL> shutdown immediate;
SQL> startup;

After enabling auditing, I decided which actions should be audited. Here I was not very selective; I simply enabled auditing of all the DML actions for the individual users.

AUDIT 
  INSERT ANY TABLE, 
  DELETE ANY TABLE, 
  UPDATE ANY TABLE 
BY "username" ;

AUDIT 
  INSERT TABLE, 
  DELETE TABLE, 
  UPDATE TABLE 
BY "username" ;

AUDIT 
  DROP ANY TABLE, 
  DROP ANY INDEX, 
  DROP ANY VIEW, 
  DROP ANY SYNONYM, 
  DROP ANY PROCEDURE, 
  DROP ANY SEQUENCE, 
  DROP ANY TRIGGER, 
  DROP ANY TYPE 
BY "username" ;

AUDIT 
  ALTER ANY TABLE, 
  ALTER ANY INDEX, 
  ALTER ANY PROCEDURE, 
  ALTER ANY SEQUENCE, 
  ALTER ANY TRIGGER, 
  ALTER ANY TYPE 
BY "username" ;

AUDIT 
  CREATE ANY TABLE, 
  CREATE ANY INDEX, 
  CREATE ANY VIEW, 
  CREATE ANY SYNONYM, 
  CREATE ANY PROCEDURE, 
  CREATE ANY SEQUENCE, 
  CREATE ANY TRIGGER, 
  CREATE ANY TYPE 
BY "username";

Move the sys.aud$ table to a data tablespace: we don't want to oversize the SYSTEM tablespace. As far as I read in the Oracle documentation, moving the AUD$ table to another tablespace is not supported; instead, writing some control and maintenance scripts is suggested.

select * from sys.AUD$;
alter table sys.AUD$ move tablespace PQ_DATA;
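
For the control and maintenance scripts the documentation suggests, a minimal sketch could look like this (the archive table epeker.aud_archive and the 90-day retention are made-up examples; the archive table would be created beforehand with the same structure as aud$):

```sql
-- e.g. create table epeker.aud_archive as select * from sys.aud$ where 1=0;

-- archive audit rows older than 90 days, then purge them from sys.aud$
insert into epeker.aud_archive
  select * from sys.aud$
   where ntimestamp# < sysdate - 90;

delete from sys.aud$
 where ntimestamp# < sysdate - 90;

commit;
```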

List the audited privileges and users, to check what is being audited in my database:
select * from dba_priv_audit_opts;
After two weeks of auditing, I had captured more than three commonly repeated mistakes, especially the use of wrong tablespaces and flawed delete routines.
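
With audit_trail set to db,extended, the captured statements themselves can be reviewed; a simple sketch of such a review query:

```sql
-- sql_text is populated because of the "extended" setting
select username, action_name, obj_name, sql_text, timestamp
  from dba_audit_trail
 where timestamp > sysdate - 1
 order by timestamp desc;
```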


Saturday, November 15, 2008

creating awr reports manually

When working with third parties on the same database, some confusion can occur. The operational DBAs are responsible for database performance and other standard operational duties like backup, security, architecture and so on. The vendor DBAs are responsible for achieving the SLA contract requirements and the promised performance values. Both sides should be cautious not to overlap the other side's duties. Here I tried to answer one of our vendor DBAs' requests by supplying hourly AWR reports directly into their directories in the UNIX environment. I think they can follow database activity better from now on.


### creating the awr report ###
ORACLE_HOME=/oracle/product/10.2.0; export ORACLE_HOME
ORACLE_SID=montc1; export ORACLE_SID
PATH=$PATH:$ORACLE_HOME/bin; export PATH

$ORACLE_HOME/bin/sqlplus "/ as sysdba" << EOF
declare
i_begin_snap_id number;
i_end_snap_id number;
i_db_id number;
i_inst_num number;
begin

--select the db_id and the instance_number of the instance,
--these values are going to be used to create the awr_report
select DBID, INSTANCE_NUMBER into i_db_id, i_inst_num from dba_hist_database_instance where rownum=1;

--find the snap_id of the current hour; max() guards against multiple snapshots in the same hour
select max(SNAP_ID) into i_end_snap_id from dba_hist_snapshot where trunc(end_interval_time,'HH24') = trunc(sysdate,'HH24');

--set the begin snap_id to the previous snapshot, as we are generating hourly awr reports
i_begin_snap_id := i_end_snap_id - 1;

--generate and write the report to a table
insert into avea_sysmon.temp_awr_output select output from table (SYS.DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(i_db_id, i_inst_num, i_begin_snap_id, i_end_snap_id));
commit;

end;
/
EOF

### creating report name ###
# note: parsing `date` output with awk depends on the default date format;
# date +%d_%b_%Y_%H:%M:%S would be a less fragile alternative
report_name=/moneta_home/avea_data/awr/awr_report_`date | awk '{print $3"_"$2"_"$6"_"$4}'`.html
export report_name

### spooling previously created report to the file specified ###
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" << EOF
set linesize 1500;
set echo off;
set heading off;
set termout on;

spool $report_name;
select s_output from avea_sysmon.temp_awr_output;
truncate table avea_sysmon.temp_awr_output;
spool off;
EOF

### delete the older reports ###
ctl=`ls -1 /moneta_home/avea_data/awr | grep -c '\.html$'`
if [ $ctl -gt 110 ]
then
# ls -1t lists newest first, so tail -10 gives the 10 oldest reports
ls -1t /moneta_home/avea_data/awr | grep '\.html$' | tail -10 | awk '{print "/moneta_home/avea_data/awr/"$0}' | xargs rm
#else
#echo "dont delete anything"
fi
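
An age-based cleanup is another option besides the count-based one above; here is a self-contained sketch (the directory /tmp/awr_demo and the file names are made up for the demonstration, and `touch -t` fakes an old report):

```shell
AWR_DIR=/tmp/awr_demo
mkdir -p "$AWR_DIR"

# demo files: one "old" report and one fresh one
touch -t 200801010000 "$AWR_DIR/awr_report_old.html"
touch "$AWR_DIR/awr_report_new.html"

# remove html reports whose modification time is older than 5 days
find "$AWR_DIR" -name '*.html' -mtime +5 -exec rm -f {} \;

ls "$AWR_DIR"
```

This keeps reports by age rather than by count, so a burst of extra reports cannot push recent ones out.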


After testing the script, I used crontab to schedule it as a job.


$ crontab -l
00,10,20,30,40,50 * * * * /oracle/epeker/check_arch.sh > /oracle/epeker/check_arch.log 2>&1
20 * * * * /oracle/awr_reports/get_awr.sh > /oracle/awr_reports/get_awr.log 2>&1


The AWR reports for the specified times are now available in the application directories.


$ ls -lrt /moneta_home/avea_data/awr
total 1053328
-rw-r--r-- 1 oracle dba 5229793 Nov 6 14:20 awr_report_6_Nov_2008_14:20:45.html
-rw-r--r-- 1 oracle dba 4911565 Nov 6 15:20 awr_report_6_Nov_2008_15:20:08.html
-rw-r--r-- 1 oracle dba 4865032 Nov 6 16:20 awr_report_6_Nov_2008_16:20:12.html
-rw-r--r-- 1 oracle dba 6112427 Nov 6 17:20 awr_report_6_Nov_2008_17:20:06.html
-rw-r--r-- 1 oracle dba 5057170 Nov 6 18:20 awr_report_6_Nov_2008_18:20:03.html
-rw-r--r-- 1 oracle dba 5031651 Nov 6 19:20 awr_report_6_Nov_2008_19:20:02.html
-rw-r--r-- 1 oracle dba 4798984 Nov 6 20:20 awr_report_6_Nov_2008_20:20:03.html
-rw-r--r-- 1 oracle dba 5016641 Nov 6 21:20 awr_report_6_Nov_2008_21:20:02.html
-rw-r--r-- 1 oracle dba 4895053 Nov 6 22:20 awr_report_6_Nov_2008_22:20:05.html
-rw-r--r-- 1 oracle dba 5228292 Nov 6 23:20 awr_report_6_Nov_2008_23:20:09.html
-rw-r--r-- 1 oracle dba 5018142 Nov 7 00:20 awr_report_7_Nov_2008_00:20:03.html
-rw-r--r-- 1 oracle dba 5072180 Nov 7 01:20 awr_report_7_Nov_2008_01:20:10.html




Monday, November 3, 2008

Time to start securing the listeners

I decided to apply basic security measures to the listener processes of all my databases. As always, I prefer the Oracle documentation as a reliable reference; this time I also found a detailed white paper on the Integrigy web site, which gets a lot of hits from external links.

Both resources call attention to the two most important issues: setting a password and enabling logging for the listener. Other measures that can be taken into account are securing the lsnrctl and tnslsnr executables in the $ORACLE_HOME/bin directory, setting the listener's ADMIN_RESTRICTIONS parameter, changing the listener's default port, and so on, as mentioned in the referenced documents. But here I decided to start with the most common tasks: setting a password for the listener and logging the listener activity.
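
Of those additional measures, ADMIN_RESTRICTIONS is a one-line change; a hypothetical listener.ora fragment (the listener name LISTENER is an example) would be:

```
# refuse runtime SET commands from lsnrctl; changes must then be made
# by editing listener.ora and reloading the listener
ADMIN_RESTRICTIONS_LISTENER = ON
```

Note that with this parameter ON, the SET LOG_DIRECTORY / SET LOG_FILE commands shown below would be refused, so it should be enabled after the rest of the configuration is done.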

Setting a listener password:

-bash-3.00$ lsnrctl

LSNRCTL for Linux: Version 10.2.0.3.0 - Production on 03-NOV-2008 14:06:01

Copyright (c) 1991, 2006, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> CHANGE_PASSWORD
Old password: <if not set then just hit enter>
New password:
Reenter new password:
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
Password changed for LISTENER
The command completed successfully
LSNRCTL> SAVE_CONFIG
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
Saved LISTENER configuration parameters.
Listener Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.ora
Old Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.bak
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.3.0 - Production
Start Date 03-NOV-2008 11:12:23
Uptime 0 days 2 hr. 54 min. 43 sec
Trace Level off
Security ON: Password or Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.ora
Listener Log File /oracle/product/10.2.0/OFADB/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=foa-op)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))
Services Summary...
Service "OFADB" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "OFADBXDB" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "OFADB_XPT" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> exit


Logging the listener:

-bash-3.00$ lsnrctl

LSNRCTL for Linux: Version 10.2.0.3.0 - Production on 03-NOV-2008 11:18:43

Copyright (c) 1991, 2006, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> SET LOG_DIRECTORY /oracle/product/10.2.0/OFADB/network/admin/lsnr_log
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
LISTENER parameter "log_directory" set to /oracle/product/10.2.0/OFADB/network/admin/lsnr_log
The command completed successfully
LSNRCTL> set log_file lsnr.log
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
LISTENER parameter "log_file" set to lsnr.log
The command completed successfully
LSNRCTL> set log_status on
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
LISTENER parameter "log_status" set to ON
The command completed successfully
LSNRCTL> SAVE_CONFIG
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
Saved LISTENER configuration parameters.
Listener Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.ora
Old Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.bak
The command completed successfully
LSNRCTL> exit
-bash-3.00$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.3.0 - Production on 03-NOV-2008 11:20:08

Copyright (c) 1991, 2006, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=foa-op)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.3.0 - Production
Start Date 03-NOV-2008 11:12:23
Uptime 0 days 0 hr. 7 min. 44 sec
Trace Level off
Security ON: Password or Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.ora
Listener Log File /oracle/product/10.2.0/OFADB/network/admin/lsnr_log/lsnr.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=foa-op)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))
Services Summary...
Service "OFADB" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "OFADBXDB" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "OFADB_XPT" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully


But while monitoring the listener's log file, I found a warning message right before every server process created by the listener. The warning is something like:

WARNING: Subscription for node down event still pending


There are some comments on forums about this warning message, but the general suggestion is to set the

SUBSCRIBE_FOR_NODE_DOWN_EVENT_<listener_name>


parameter to OFF in the listener.ora file. By disabling this option we prevent the listener from automatically registering itself with a notification service, which fails because no such service is installed. Setting this parameter to OFF worked in my environment and the warning messages are gone. Below you can find my final listener.ora file.


-bash-3.00$ cat listener.ora

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/product/10.2.0/OFADB)
(PROGRAM = extproc)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = xxx-xx)(PORT = xxxx))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)

#----ADDED BY TNSLSNR 03-NOV-2008 11:08:38---
PASSWORDS_LISTENER = xxxxxxxxxxxxxxxx
LOGGING_LISTENER = ON
#--------------------------------------------

#----ADDED BY TNSLSNR 03-NOV-2008 11:19:57---
LOG_FILE_LISTENER = lsnr.log
LOG_DIRECTORY_LISTENER = /oracle/product/10.2.0/OFADB/network/admin/lsnr_log
#--------------------------------------------

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF


Another handy consequence of setting a password for the listener is the remote administration option. The following demonstration is from my client computer, administering the server's listener remotely. In a nutshell, if you know the IP and port that the listener runs on, and also know the password, then you can administer your listener remotely.


Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\>lsnrctl status 10.4.38.32:1521

LSNRCTL for 32-bit Windows: Version 10.2.0.1.0 - Production on 03-NOV-2008 14:36:02

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=10.4.38.32))(ADDRESS=(PRO
TOCOL=TCP)(HOST=10.4.38.32)(PORT=1521)))
TNS-01169: The listener has not recognized the password
TNS-01189: The listener could not authenticate the user

C:\>lsnrctl

LSNRCTL for 32-bit Windows: Version 10.2.0.1.0 - Production on 03-NOV-2008 14:36:05

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set password
Password:
The command completed successfully
LSNRCTL> status 10.4.38.32:1521
Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=10.4.38.32))(ADDRESS=(PRO
TOCOL=TCP)(HOST=10.4.38.32)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.3.0 - Production
Start Date 03-NOV-2008 11:12:23
Uptime 0 days 3 hr. 24 min. 0 sec
Trace Level off
Security ON: Password or Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/product/10.2.0/OFADB/network/admin/listener.ora
Listener Log File /oracle/product/10.2.0/OFADB/network/admin/lsnr_log/lsnr.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=foa-op)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))
Services Summary...
Service "OFADB" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "OFADBXDB" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "OFADB_XPT" has 1 instance(s).
Instance "OFADB", status READY, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL>

Tuesday, October 7, 2008

Querying Oracle with C# and odp.net

Oracle recommends ODP.NET (Oracle Data Provider for .NET) for opening and managing connections to an Oracle database from the .NET development environment. ODP.NET has its own connection and command objects, as well as data adapter and data reader structures optimized for the Oracle database. Here you can find a simple way of connecting and running a command on an Oracle Database 10.2.0.3.

First of all, ODAC should be downloaded from Oracle's ODP.NET download page. After installing the software according to the installation instructions, you can follow the snapshots below to add a reference to the .dll file in the Microsoft Visual C# IDE. After that we are able to use the classes in that dll.

(screenshots: adding a reference to Oracle.DataAccess.dll in Visual Studio)
After referencing the .dll file successfully, you can easily use the classes.


/*
* Defining variables
*/

// Connection Object
private Oracle.DataAccess.Client.OracleConnection ocnn;

// Command object
private Oracle.DataAccess.Client.OracleCommand ocmd;

// Structures to manipulate data
private Oracle.DataAccess.Client.OracleDataAdapter oda;
private Oracle.DataAccess.Client.OracleDataReader odr;
private DataSet ods;


// ------------------------------------------------------------
// After all the definitions are made a sample try-catch block
// that writes the sysdate to a label should be as follows
// ------------------------------------------------------------

try
{
// Creating the connection object
ocnn = new Oracle.DataAccess.Client.OracleConnection();

// Defining the connection string to connect to the specified database
// User Id and Password are obvious; the Data Source value is the TNS alias of the database.
// Different values can be used here; the next example will cover another one.
ocnn.ConnectionString = "User id=epeker;Password=********;Data Source=GRID_10.4.46.70";

// Opening the connection and information is written as a label text
ocnn.Open();
lblerr.Text = "connected";

// Now we need a command object that we want to execute on the database
ocmd = new Oracle.DataAccess.Client.OracleCommand();

// we define the previously created connection as the connection property of the command
ocmd.Connection = ocnn;

// commandText will be run on the database
ocmd.CommandText = "select sysdate from dual";

// with ExecuteReader() method of the command object we run the command and return the result to a reader object.
odr = ocmd.ExecuteReader();

// one or more rows could be fetched from the database
// so we read all the rows from the reader
while (odr.Read())
{
lblerr.Text = lblerr.Text + '-' + odr[0].ToString();
}

}
catch (Exception ex)
{
// if an exception is raised we catch it here and write it to our info label
lblerr.Text = ex.Message;
}
finally
{
// finally, we clean up the database connection
if (ocnn != null)
{
if (ocnn.State != ConnectionState.Closed)
{
ocnn.Close();
ocnn = null;
}
}
}


// ------------------------------------------------------------------------------------------------
// This block uses a standard Net8 connection string instead of a TNS alias and also demonstrates
// bind variable usage
// ------------------------------------------------------------------------------------------------

try
{
ocnn = new Oracle.DataAccess.Client.OracleConnection();
ocnn.ConnectionString = "User id=epeker;Password=********;Data Source=(DESCRIPTION= " +
" (ADDRESS= " +
" (PROTOCOL=TCP)" +
" (HOST=10.4.46.70)" +
" (PORT=1521))" +
" (CONNECT_DATA=(SERVICE_NAME=GRID)))";
ocnn.Open();
lblerr.Text = "connected";

ods = new DataSet();
oda = new Oracle.DataAccess.Client.OracleDataAdapter();

ocmd = new Oracle.DataAccess.Client.OracleCommand();
string str_sql = "select * from v$datafile where file# = :f_number ";

ocmd.CommandType = CommandType.Text;
ocmd.CommandText = str_sql;
ocmd.Connection = ocnn;

oda.SelectCommand = ocmd;

// this parameter can also be set at the command level
oda.SelectCommand.Parameters.Add("f_number",
Oracle.DataAccess.Client.OracleDbType.Int32,15).Value=2;

// another difference is dataset object
// in the previous example we used a datareader, which fetches the data row by row
// Fill() method of the dataadapter object fetches all the recordset to the client side at one time
oda.Fill(ods);

// i will not cover the details of the dataadapter and dataset here
lblerr.Text = ods.Tables[0].Rows[0]["name"].ToString();
}
catch (Exception ex)
{
lblerr.Text = ex.Message;
}
finally
{
if (ocnn != null)
{
if (ocnn.State != ConnectionState.Closed)
{
ocnn.Close();
ocnn = null;
}
}
}

// ----------------------------------------------------------------------------
// last example demonstrates how to call an Oracle stored procedure with odp.net
// ----------------------------------------------------------------------------

try
{
ocnn = new Oracle.DataAccess.Client.OracleConnection();
ocnn.ConnectionString = "User id=epeker;Password=********;Data Source=GRID_10.4.46.70;";
ocnn.Open();
lblerr.Text = "connected";

oda = new Oracle.DataAccess.Client.OracleDataAdapter();
ods = new DataSet();

ocmd = new Oracle.DataAccess.Client.OracleCommand();
ocmd.Connection = ocnn;
ocmd.CommandText = "epeker.p_return_cursor";

// this time command type is a StoredProcedure
ocmd.CommandType = CommandType.StoredProcedure;

// this time parameters have directions
ocmd.Parameters.Add("c_return",
Oracle.DataAccess.Client.OracleDbType.RefCursor).Direction = ParameterDirection.Output;
ocmd.Parameters.Add("i_var",
Oracle.DataAccess.Client.OracleDbType.Int32).Direction = ParameterDirection.Input;
ocmd.Parameters["i_var"].Value = 2;

oda.SelectCommand = ocmd;

oda.Fill(ods);

// gw_result is a GridView object of the .NET framework
// it helps to view the recordset as a table
gw_result.DataSource = ods.Tables[0];
gw_result.Refresh();

}
catch (Exception ex)
{
lblerr.Text = ex.Message;
}
finally
{
if (ocnn != null)
{
if (ocnn.State != ConnectionState.Closed)
{
ocnn.Close();
ocnn = null;
}
}
}


/*
---------------------------------------------
--stored procedure used with the last example
---------------------------------------------

-- package needed for the ref_type declaration
CREATE OR REPLACE package EPEKER.pck_ref_type as
type ref_type is ref cursor;
end pck_ref_type;

-- procedure itself
CREATE OR REPLACE procedure EPEKER.p_return_cursor
(c_return out epeker.pck_ref_type.ref_type, i_var in number)
is
begin

open c_return for
select * from v$datafile where file#=i_var;

exception
when others then
rollback;
-- re-raise so the caller is still notified of the error
raise;

end p_return_cursor;
*/



Some other articles for .NET
odp.net homepage


Tuesday, September 2, 2008

sql loader performance experiences

Lots of reporting databases have data loading scripts that use Oracle SQL*Loader to load massive amounts of data from text files. The loaded data is used for reporting purposes during the daytime. A commonly reported problem is long-running loader scripts, and solving this kind of problem generally follows similar approaches.

1- Dropping unnecessary indexes improves the performance of batch loader operations, since every insert must also update the indexes of the table. Fewer indexes mean less work for DML statements. In my databases I examine the v$sql, v$sqlarea, v$sqltext and dba_hist_sqltext views to analyse the daily routine of executed SQL statements and find out how much the statements use the indexes. This gives brief information about index usage and helps me decide whether I can drop any.

2- Using the nologging option for the tables generates less redo. If the database is in archivelog mode, less redo means less work for the archiver, and log-writer-related wait events can also be improved. This option causes the database not to log the operation, so the database admin should be cautious about backup and restore operations, because a recovery may not bring back the data of nologging tables. I prefer full and incremental backups with RMAN for this kind of database.

3- By setting the direct=true option of SQL*Loader, direct-path load can be enabled, which writes data directly into the datafiles, bypassing most of the normal database overhead.

4- Sometimes table data is deleted periodically. This delete operation fragments the segments of the table and of its indexes. Moving the table and rebuilding the indexes defragments the segments. If the table is fragmented, I prefer dropping the indexes first, moving the table, and then recreating the indexes. One of my early postings covers this topic.

There are some other parameters, such as rows (the number of rows per commit) and unrecoverable (which, used with direct=true, bypasses redo writes), for further enhancement of loader performance. But I frequently use the parameters mentioned above, and they have worked more than well enough for most cases.
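
As a sketch, a control file combining these options might look like the following (the table, column and file names are made-up examples, not from a real load):

```
-- hypothetical control file (load_sales.ctl) for a direct-path load
OPTIONS (DIRECT=TRUE, ROWS=50000)
UNRECOVERABLE
LOAD DATA
INFILE 'sales.txt'
APPEND
INTO TABLE sales
FIELDS TERMINATED BY ','
(sale_id, sale_date DATE "YYYY-MM-DD", amount)

-- invoked as: sqlldr userid=scott/tiger control=load_sales.ctl log=load_sales.log
```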


Tuesday, August 12, 2008

srvctl problem after 10.2.0.4 patch

The new migration project of the database and system group in our company is from Oracle Database 9.2.0.5 RAC on Linux AS3 32-bit to Oracle Database 10.2.0.4 RAC on Linux ES4 Update 6 64-bit. After installing the clusterware and the 10.2.0.1 database successfully, I decided to upgrade the database and the clusterware to the latest patchset level, 10.2.0.4. After installing the patchset I realised that srvctl could not start the databases with the command srvctl start database -d PQ, although I could start the instances individually with sqlplus. The weird thing is that srvctl worked with the "status" and "stop" parameters. When I examined the CRS logs I found the following error lines.

#cat /oracle/product/crs/log/pqvrtsrv1/crsd/crsd.log
2008-07-16 17:52:37.840: [ CRSAPP][1535326560]0StartResource error for ora.PQ.db error code = 1
2008-07-16 17:52:37.892: [ CRSRES][1535326560]0Start of `ora.PQ.db` on member `pqvrtsrv1` failed.

This error message alone was not enough to start taking action on the problem, since it carries too little information to make a conclusion. I decided to look at the database's alert.log to see whether I could find any clue. But the last lines written to the alert.log were the database shutdown messages. That itself is a clue: it means srvctl never reached the database to start it, so I could start by examining the srvctl settings.

$ srvctl config database -d PQ -a
pqvrtsrv1 PQ1 /oracle/product/db10g
pqvrtsrv2 PQ2 /oracle/product/db10g
DB_NAME: PQ
ORACLE_HOME: /oracle/product/db10g
SPFILE: /pqdata1/PQ/PQ/spfilePQ.ora
DOMAIN: null
DB_ROLE: null
START_OPTIONS: null
POLICY: AUTOMATIC
ENABLE FLAG: DB ENABLED

Now I can see something. The SPFILE configuration is wrong for this database. I clearly remember changing the spfile location to /pqdata1/PQ/spfilePQ.ora, but somehow, after the patchset was installed, the setting got mixed up. I changed the SPFILE configuration to the correct value, and the problem was solved.

#srvctl modify database -d PQ -p /pqdata1/PQ/spfilePQ.ora
#srvctl start instance -d PQ -i PQ2
#srvctl status database -d PQ
Instance PQ1 is running on node pqvrtsrv1
Instance PQ2 is running on node pqvrtsrv2


Resources:

Server Control Utility Reference


Tuesday, August 5, 2008

Changing sysman password in OMS

I had to change the sysman password today. There are nice articles on the web, but in summary, if you want to change the sysman password of an Oracle Enterprise Manager Grid Control server:

1- stop all opmn processes
2- stop the oms
3- stop the agent
4- change the password of the sysman user
5- update the $AGENT_HOME/sysman/emd/targets.xml file
6- update the $OMS_HOME/sysman/config/emoms.properties file
7- start the agent
8- start the oms
9- start all opmn processes (actually they are started by the OMS)
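
The steps above can be sketched as a command sequence like this (a rough outline, not a tested script; $OMS_HOME and $AGENT_HOME are the usual environment variables for the OMS and agent homes, and the new password is a placeholder):

```
# stop the stack
$OMS_HOME/opmn/bin/opmnctl stopall
$OMS_HOME/bin/emctl stop oms
$AGENT_HOME/bin/emctl stop agent

# change the password in the repository database
SQL> alter user sysman identified by <new_password>;

# update $AGENT_HOME/sysman/emd/targets.xml and
# $OMS_HOME/sysman/config/emoms.properties as in steps 5 and 6, then:
$AGENT_HOME/bin/emctl start agent
$OMS_HOME/bin/emctl start oms
```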

Here are the handy articles i used;

http://oraclespin.wordpress.com/2008/07/18/how-to-change-sysman-password-in-oem-oms/
http://thetendjee.wordpress.com/2006/07/19/changing-sysman-and-dbsnmp-passwords-it-aint-that-easy/
http://dbataj.blogspot.com/2007/07/change-sysman-password.html


Tuesday, July 29, 2008

Problem with CRS startup after disk migration

Our unix and storage group has a project to migrate from the old storage subsystem to a brand new one. The action plan was simple:

1- shutdown the database with
srvctl stop database -d 

2- shutdown the crs with
crsctl stop crs

3- move the data from old disks to new ones
4- rename the new disks with the original names
5- open the crs with
crsctl start crs

6- startup the database with
srvctl start database -d 


But right after step 4, CRS was unable to start and I was called at night. When I tried to start CRS with crsctl start crs, I got errors indicating that CRS was unable to reach the OCR disk. I checked the OCR with ocrcheck and traced the OCR log file in $CRS_HOME/log//client/.

[root@raquality00 client]$ pwd
/oracle/product/10.2.0/crs/log/raquality00/client
[root@raquality00 client]# cat ocrcheck_13416.log
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
2008-06-18 02:13:39.487: [OCRCHECK][2538462912]ocrcheck starts...
2008-06-18 02:13:39.488: [ OCROSD][2538462912]utstoragetype: /oradata1/orcfile.ora is on FS type 1952539503. Not supported.
2008-06-18 02:13:39.488: [ OCROSD][2538462912]utopen:6'': OCR location /oradata1/orcfile.ora configured is not valid storage type. Return code [37].
2008-06-18 02:13:39.488: [ OCRRAW][2538462912]proprinit: Could not open raw device
2008-06-18 02:13:39.488: [ default][2538462912]a_init:7!: Backend init unsuccessful : [37]
2008-06-18 02:13:39.488: [OCRCHECK][2538462912]Failed to initialize OCR context: [PROC-37: Oracle Cluster Registry does not support the storage type configured]
2008-06-18 02:13:39.488: [OCRCHECK][2538462912]Failed to initialize ocrchek2
2008-06-18 02:13:39.488: [OCRCHECK][2538462912]Exiting [status=failed]...

The log file says there is a problem with orcfile.ora. But I was suspicious whether that filename was right. To be sure of the filename, I examined the install logs in the $CRS_HOME/install/ directory.

[oracle@raquality00 install]$ pwd
/oracle/product/10.2.0/crs/install
[oracle@raquality00 install]$ cat paramfile.crs
ORA_CRS_HOME=/oracle/product/10.2.0/crs
CRS_ORACLE_OWNER=oracle
CRS_DBA_GROUP=oinstall
CRS_VNDR_CLUSTER=false
CRS_OCR_LOCATIONS=/oradata1/orcfile.ora
CRS_CLUSTER_NAME=raqcrs
CRS_HOST_NAME_LIST=raquality00,1,raquality01,2
CRS_NODE_NAME_LIST=raquality00,1,raquality01,2
CRS_PRIVATE_NAME_LIST=raquality00-priv,1,raquality01-priv,2
CRS_LANGUAGE_ID='AMERICAN_AMERICA.WE8ISO8859P1'
CRS_VOTING_DISKS=/oradata1/votingfile.ora
CRS_NODELIST=raquality00,raquality01
CRS_NODEVIPS='raquality00/raquality00-vip/255.255.255.0/eth0,raquality01/raquality01-vip/255.255.255.0/eth0'

Let's see whether the location of the OCR file was changed by someone after the installation, by inspecting /etc/oracle/ocr.loc.

[oracle@raquality00 oracle]$ pwd
/etc/oracle
[oracle@raquality00 oracle]$ cat ocr.loc
ocrconfig_loc=/oradata1/orcfile.ora
local_only=FALSE

The OCR location and the filename seem correct. Now we know it hasn't been changed since the installation. So there may be corruption in the OCR file, and checking the backups of the OCR is handy; maybe I will need to restore from a backup. The command I ran was ocrconfig -showbackup. The output shows the case going bad.

[root@raquality00 client]# cat ocrconfig_9347.log
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
2008-06-18 02:04:20.350: [ OCRCONF][2538462912]ocrconfig starts...
2008-06-18 02:04:20.353: [ OCROSD][2538462912]utstoragetype: /oradata1/orcfile.ora is on FS type 1952539503. Not supported.
2008-06-18 02:04:20.353: [ OCROSD][2538462912]utopen:6'': OCR location /oradata1/orcfile.ora configured is not valid storage type. Return code [37].
2008-06-18 02:04:20.353: [ OCRRAW][2538462912]proprcow: problem reading the bootblock
2008-06-18 02:04:20.353: [ OCRCONF][2538462912]Failure in overwriting OCR configuration on disk
2008-06-18 02:04:20.353: [ OCRCONF][2538462912]Exiting [status=failed]...

I was now getting doubtful: maybe there was a problem with the filesystems. I decided to check the filesystem types in fstab, but they seem OK.

[root@raquality00 raqcrs]# cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
LABEL=/1 / ext3 defaults 1 1
LABEL=/boot1 /boot ext3 defaults 1 2
#LABEL=/oradata /oradata1 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=/oradata1_new /oradata1 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata2 /oradata2 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=/oradata2_new /oradata2 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata3 /oradata3 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata4 /oradata4 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata5 /oradata5 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata6 /oradata6 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata7 /oradata7 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata8 /oradata8 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata9 /oradata9 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=/oradata9_new /oradata9 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=/oradata10_new /oradata10 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata11 /oradata11 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=/oradata11_new /oradata11 ocfs2 _netdev,datavolume,nointr 0 0
#LABEL=/oradata12 /oradata12 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=/oradata12_new /oradata12 ocfs2 _netdev,datavolume,nointr 0 0
#/dev/mapper/vgra-lvra01 /ra01 ext3 defaults 1 2
/dev/vgra00/lvra01 /ra01_new ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
LABEL=SWAP-sda2 swap swap defaults 0 0
/dev/sdb1 swap swap defaults 0 0
#/dev/vgora/lvora /oracle ext3 defaults 1 2
/dev/vgra00/lvoracle /oracle ext3 defaults 1 2
/dev/hda /media/cdrom auto pamconsole,exec,noauto,managed 0 0

When I rechecked the error log I realized there are further errors after the invalid storage type line: OCR complains about reading the bootblock and about overwriting the file. Maybe a process is still holding the file.

[oracle@raquality00 install]$ ps -ef | grep crs
root 10239 7791 0 Jun18 ? 00:37:59 /oracle/product/10.2.0/crs/bin/crsd.bin reboot

Now it is clearer: maybe our admins, after stopping CRS with
crsctl stop crs
forgot to disable the CRS service by executing
/etc/init.d/init.crs stop
and the data migration was done while the CRS service was still running. I recommended a reboot, though stopping the service would probably also have been enough. After the reboot CRS started successfully and the databases are up and running.

[oracle@raquality00 install]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[oracle@raquality00 install]$ srvctl status database -d RADB
Instance RADB1 is running on node raquality00
Instance RADB2 is running on node raquality01

Saturday, July 5, 2008

Alter the table to change the structure

What happens if the structure of one of the source files changes after all the data loading procedures and SQL*Loader scripts were finished following long, busy working days? Oracle has options to change the table structure in place.

Altering a column

create table EPEKER.ep_test (col1 varchar(10));

insert into EPEKER.ep_test values ('some text here');
--ORA-12899: value too large for column "EPEKER"."EP_TEST"."COL1" (actual: 14, maximum: 10)

alter table EPEKER.ep_test modify (col1 varchar2(20));
insert into EPEKER.ep_test values ('some text here');
--1 row inserted
commit;

select * from EPEKER.ep_test;


Adding/Dropping a column

alter table EPEKER.ep_test add (col2 date);
alter table EPEKER.ep_test add (col2 date DEFAULT sysdate);

alter table EPEKER.ep_test drop (col2);


Renaming Columns

alter table EPEKER.ep_test rename column col1 to col3;


Changing Datatypes

Changing the datatype of a column is not that easy, though. As the following example shows, the column to be modified must be empty before its datatype can be changed.

insert into EPEKER.ep_test (col3) values ('123');
commit;
alter table EPEKER.ep_test modify (col3 number);
--ORA-01439: column to be modified must be empty to change datatype

update EPEKER.ep_test set col3=NULL;
commit;
alter table EPEKER.ep_test modify (col3 number);

desc EPEKER.ep_test
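When the column holds data that must be kept, emptying it is not an option. A common workaround, sketched below (not from the original post, and assuming the existing values really are numeric), is to stage the data in a temporary column:

```sql
-- sketch: convert col3 from varchar2 to number without losing data
alter table EPEKER.ep_test add (col3_tmp number);
update EPEKER.ep_test set col3_tmp = to_number(col3);
commit;
update EPEKER.ep_test set col3 = NULL;
commit;
alter table EPEKER.ep_test modify (col3 number);
update EPEKER.ep_test set col3 = col3_tmp;
commit;
alter table EPEKER.ep_test drop (col3_tmp);
```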


Compressed Tables

You can modify columns of and add columns to compressed tables, but dropping a column is still not supported (as of 10.2). To drop a column you must first MOVE NOCOMPRESS the table, drop the column, and then MOVE COMPRESS it again.

alter table EPEKER.ep_test move compress;
alter table EPEKER.ep_test add (col3 varchar2(10));
alter table EPEKER.ep_test modify (col3 varchar2(20));

alter table EPEKER.ep_test drop (col3);
--ORA-39726: unsupported add/drop column operation on compressed tables
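The MOVE NOCOMPRESS / MOVE COMPRESS workaround described above would look roughly like this (a sketch; note that MOVE leaves any indexes on the table UNUSABLE, so they must be rebuilt afterwards):

```sql
alter table EPEKER.ep_test move nocompress;
alter table EPEKER.ep_test drop (col3);
alter table EPEKER.ep_test move compress;

-- any index on the table is now UNUSABLE and needs a rebuild, e.g.:
-- alter index EPEKER.<index_name> rebuild;
```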

Wednesday, June 18, 2008

Oracle EM Grid Control - Custom Reports

Creating Custom Grid Report for database availability

Oracle Enterprise Manager Grid Control ships with built-in reports that help database administrators generate presentable reports. However, in my company we need some reports that cannot be found in Grid Control and require custom development. Here is how I did it.

The first thing is to click the "Reports" tab at the top right of the Grid Management Console (figure_grid_management_console). This page is where all the predefined and custom reports can be found. Now click the "Create" button at the top of the page to create a new custom report. The "Create Report Definition" page appears, with four tabs for the report definition.

General
After specifying the title of the report, select a category and subcategory from the comboboxes. The category and subcategory define where our new custom report is grouped and listed on the Grid reports page. You can also create new categories by clicking the buttons between the comboboxes.

The Targets section lets you limit the list of targets available when the report is generated, or pin the report to a specific target. I used "Specified Target" and left the textbox blank, because this report applies to all database and RAC targets and is not allowed to be changed.

You can let the end user select time periods, or preset the time period in the "Time Period" section (figure_create_report_general_tab). This custom report always displays the latest known state of database uptime and downtime.

Elements
In the Elements tab, you define the report elements and their positions on the report page (figure_add_elements_1). Report elements are structures that display selected data from the Grid server's repository. Click the "Add" button to add elements; I used the IPMSG_USER_CHART_FROM_SQL and IPMSG_USER_TABLE_FROM_SQL elements (figure_add_elements_2). Click the "Set Parameters" icon to the right of a listed element to attach a statement to it (figure_add_elements_2). Here you enter the SQL statement that feeds the element from the Management Server repository. The SQL statements for both elements follow.

IPMSG_USER_TABLE_FROM_SQL

select
target_name,
target_type,
sum(up_time) up_time,
sum(down_time) down_time,
sum(blackout_time) blackout_time,
trunc(sum(up_time)/(sum(nvl(up_time,1))+sum(nvl(down_time,1)))*100) availability_pct
from
(
select
target_name,
target_type,
sum(trunc((nvl(end_timestamp,sysdate)-start_timestamp)*24)) total_hours,
case availability_status
when 'Target Down' then
0
when 'Target Up' then
0
when 'Blackout' then
sum(trunc((nvl(end_timestamp,sysdate)-start_timestamp)*24))
end blackout_time,
case availability_status
when 'Target Down' then
0
when 'Target Up' then
sum(trunc((nvl(end_timestamp,sysdate)-start_timestamp)*24))
when 'Blackout' then
0
end up_time,
case availability_status
when 'Target Down' then
sum(trunc((nvl(end_timestamp,sysdate)-start_timestamp)*24))
when 'Target Up' then
0
when 'Blackout' then
0
end down_time,
availability_status
from
MGMT$AVAILABILITY_HISTORY
where
target_type in ('oracle_database','rac_database') and
availability_status in ('Target Down','Target Up','Blackout')
group by
target_name, target_type, availability_status
order by target_name, availability_status
)
group by target_name, target_type
order by target_name

--sample output:

TARGET_NAME TARGET_TYPE UP_TIME DOWN_TIME BLACKOUT_TIME AVAILABILITY_PCT
----------- ----------- ------- --------- ------------- ----------------
ACTV rac_database 4823 0 1 100
ACTV_ACTV1 oracle_database 4823 0 1 100
ACTV_ACTV2 oracle_database 3964 0 1 100
ARCH.AVEA.COM oracle_database 3334 0 0 100
ARCHCM oracle_database 2210 1126 0 66
ARCHDB rac_database 3858 0 653 100
ARCHDB_ARCHDB1 oracle_database 3514 31 652 99
ARCHDB_ARCHDB2 oracle_database 3725 0 652 100
ASPDB oracle_database 1750 2 0 99
...


IPMSG_USER_CHART_FROM_SQL

select
availability_status,
sum(trunc((end_timestamp-start_timestamp)*24)) total_hours
from
MGMT$AVAILABILITY_HISTORY
where
target_type in ('oracle_database','rac_database') and
availability_status in ('Target Down','Target Up','Blackout')
group by
availability_status

--sample output:

AVAILABILITY_STATUS TOTAL_HOURS
------------------- -----------
Target Up 130008
Target Down 2086
Blackout 2405


Schedule
You can schedule a report to be emailed to a recipient by setting schedules (figure_schedule_tab).

Access
The report creator can define the users or user groups who are allowed to access the report. Here I used user groups (figure_access_tab).

The database availability report is now ready for the managers.

Friday, June 13, 2008

deleting from table causes fragmentation

One of our application groups periodically deletes data older than 10 days from some tables. This periodic deletion scatters the tables' rows across a wide range of datafile blocks, which causes performance problems in daily data loading and daytime reporting. As a first step, gather statistics so the analysis can be trusted.
begin
dbms_stats.gather_table_stats(
    ownname=>'PQ',
    tabname=>'GPRS_CELL_GPRS',
    estimate_percent=>33,
    degree=>2,
    cascade=>true);
end;
You can see the space fragmentation from the following queries. The table currently consumes nearly 3 GB of datafile space, but multiplying the average row length by the number of rows suggests it should only need about 328 MB.
select ceil((blocks*8)/1024) as table_mb from dba_tables 
    where table_name='GPRS_CELLGPRS' and owner='PQ'
/*
TABLE_MB
2926
*/
select ceil((avg_row_len*num_rows)/1024/1024) as table_mb from dba_tables 
    where table_name='GPRS_CELLGPRS' and owner='PQ'
/*
TABLE_MB
328
*/ 
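The same comparison can be generalized to scan a whole schema for fragmentation candidates. A sketch (assuming an 8K block size and reasonably fresh statistics; the 2x threshold is arbitrary):

```sql
select table_name,
       ceil((blocks*8)/1024)                  as allocated_mb,
       ceil((avg_row_len*num_rows)/1024/1024) as estimated_mb
from dba_tables
where owner = 'PQ'
  and blocks > 0
  and ceil((blocks*8)/1024) > 2 * ceil((avg_row_len*num_rows)/1024/1024)
order by allocated_mb desc;
```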
After this finding, the following procedure was set to run periodically, deleting the data older than the retention from the table and then moving the table to defragment it. The table's indexes must also be rebuilt after the move, and statistics gathered again. Keep in mind that table move and index rebuild operations block user access.

create or replace procedure avea_sysmon.P_TABLE_MAINTENANCE
  (g_table_name varchar2, g_owner varchar2) AUTHID CURRENT_USER as 
  
  cursor c1 is
    select * from sys.dba_indexes where table_owner=g_owner and table_name=g_table_name;

  r1 c1%rowtype;  
begin
    
    execute immediate 'delete from ' || g_owner || '.' || g_table_name || ' where ' || 
                      'to_number(substr(CDATE,1,6)) < to_char(sysdate-15,''YYMMDD'')';
    commit; 
    execute immediate 'alter table ' || g_owner || '.' || g_table_name || ' move';
    
    open c1;    
    loop
      fetch c1 into r1;
      exit when c1%NOTFOUND;  
      execute immediate 'alter index ' || r1.owner || '.' || 
                         r1.index_name || ' rebuild';      
    end loop;
    
    dbms_stats.gather_table_stats(ownname=>g_owner,tabname=>g_table_name,
                                  estimate_percent=>33,cascade=>true);
    
end P_TABLE_MAINTENANCE;
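The procedure above could then be called from a scheduled job; a hypothetical manual invocation (table and owner names taken from the earlier example):

```sql
begin
  avea_sysmon.P_TABLE_MAINTENANCE('GPRS_CELL_GPRS', 'PQ');
end;
/
```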
FOLLOW UP (23.11.2010): Oracle 10g brought a new way of reorganising table segments: segment shrink. The best part is that it no longer leaves any indexes unusable.
select owner, table_name, 
  trunc((avg_row_len*num_rows)/1024/1024) as ROWS_MB, 
  trunc((blocks*8192)/1024/1024) as TOTAL_MB 
  from dba_tab_statistics where table_name = 'WIZ_CUSTOMER_MAILBOX';
--OWNER, TABLE_ANME, ROWS_MB, TOTAL_MB
--PROD_DBA,WIZ_CUSTOMER_MAILBOX,19982,29790

alter table PROD_DBA.WIZ_CUSTOMER_MAILBOX enable row movement;
alter table PROD_DBA.WIZ_CUSTOMER_MAILBOX shrink space compact;

select owner, table_name, 
  trunc((avg_row_len*num_rows)/1024/1024) as ROWS_MB, 
  trunc((blocks*8192)/1024/1024) as TOTAL_MB 
  from dba_tab_statistics where table_name = 'WIZ_CUSTOMER_MAILBOX';
--OWNER, TABLE_ANME, ROWS_MB, TOTAL_MB
--PROD_DBA,WIZ_CUSTOMER_MAILBOX,19982,26790
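Note that SHRINK SPACE COMPACT only compacts the rows below the high water mark. To actually move the high water mark and return the space to the tablespace, a second pass without COMPACT is needed (this acquires an exclusive lock briefly at the end, so DML can be blocked for a moment):

```sql
-- second pass: adjusts the high water mark and releases the space
alter table PROD_DBA.WIZ_CUSTOMER_MAILBOX shrink space;

-- optionally shrink dependent index segments in the same statement
alter table PROD_DBA.WIZ_CUSTOMER_MAILBOX shrink space cascade;
```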

Monday, May 26, 2008

A Security Hardening Approach for userproof databases

What if you have doubts about some of your database users and want to trace them, be notified about their actions in the database, or simply guard against probable mistakes by db users? Oracle offers several solutions for such cases. Auditing is one of the most used, together with tracing; DDL triggers can also be used, and there are some solutions at the operating system level as well.

1- Logon Triggers to Trace Users
Oracle provides logon triggers that fire whenever a user logs in to the database. The DBA can use this feature to start a trace file for the logged-in user, and the tkprof utility can then be used to examine the user's activity.

CREATE OR REPLACE TRIGGER SYS.TRG_SESSION_LOGON after logon on database
declare
s_username varchar2(20);
s_sessid varchar2(20);
s_stamp varchar2(20);
begin

-- obtain sid of the session
select distinct(substr(sid,1,20)) into s_sessid from v$mystat;

-- obtain username of the session
select username into s_username from v$session where sid=s_sessid;

-- create a timestamp to uniquely identify the trace file
select to_char(sysdate,'DDMMYYYY_HH24MISS') into s_stamp from dual;

-- set the tracefile size to unlimited. careful about that (!)
execute immediate 'alter session set max_dump_file_size=unlimited';

-- define an identifier to the tracefile to distinguish the trace files
execute immediate 'alter session set tracefile_identifier=' || s_username ||
'_' || s_sessid || '_' || s_stamp;

-- i will take 10046 trace level 8 to see the waits
-- (check timed_statistics init parameter!)
execute immediate 'alter session set events ''10046 trace name context forever, level 8''';

-- finally start the trace
execute immediate 'alter session set sql_trace=true';

exception
when others then
raise;

end TRG_SESSION_LOGON;

You can find the trace files in the directory pointed to by the user_dump_dest parameter.

SQL> show parameter user_dump_dest;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest string D:\ORACLEDB10G\ADMIN\ORCL\UDUMP

To examine the dump file, the tkprof utility can be used.

tkprof orcl_ora_5412_epeker_148_22052008_162752.trc tkprof.out

The formatted output is written to the tkprof.out file.

2- Database Audit
To enable audit option, audit_trail init parameter should be set.

select name, value from v$parameter where name = 'audit_trail';
alter system set audit_trail=db,extended scope=spfile;
--After changing audit_trail parameter bounce the database.

audit select table, insert table, update table, delete table by epeker;
audit session by epeker;


Now log in to the database as the audited user and run a few statements.

select * from EPEKER.T_TABLE_DBA_TABLES;
insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10;
commit;
select * from EPEKER.T_TABLE_DBA_TABLES;


All the audit information should now be in the SYS.AUD$ table:

select
SESSIONID, USERID, USERHOST,TERMINAL, OBJ$CREATOR, OBJ$NAME, NTIMESTAMP#, SCN, SQLTEXT
from
SYS.AUD$
order by NTIMESTAMP# desc;


/*
SESSIONID USERID OBJ$CREATOR OBJ$NAME NTIMESTAMP# SCN SQLTEXT
1249 EPEKER SYS, USER$ 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, TS$ 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, SEG$ 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, X$KSPPCV 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, TAB$ 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, OBJ$ 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, DBA_TABLES 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER SYS, X$KSPPI 5/23/2008 11:45:00 AM 935996 insert into EPEKER.T_TABLE_DBA_TABLES select * from DBA_TABLES where rownum<10
1249 EPEKER EPEKER, T_TABLE_DBA_TABLES 5/23/2008 11:44:10 AM 935944 select * from T_TABLE_DBA_TABLES
1268 EPEKER 5/23/2008 12:01:56 PM
*/

You can revoke the audit by noaudit command.

noaudit select table, insert table, update table, delete table by epeker;
noaudit session by epeker;
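To verify which audit options are currently in effect (for example before and after the AUDIT/NOAUDIT commands above), the audit option views can be queried:

```sql
-- statement-level audit options per user
select user_name, audit_option, success, failure
from dba_stmt_audit_opts
order by user_name, audit_option;

-- privilege-level audit options
select user_name, privilege, success, failure
from dba_priv_audit_opts
order by user_name, privilege;
```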


3- DDL Triggers
Oracle has many different trigger options; you have just read about logon triggers. There are also DDL triggers, which can be used to stop specific users from mistakenly running dangerous DDL statements. Any user who executes a DDL statement (CREATE, TRUNCATE, etc.) can be logged, and the user can also be warned by the administrator. The following trigger simply prevents a user from dropping and/or truncating database tables.

CREATE OR REPLACE TRIGGER SYS.TRG_DDL before ddl
ON DATABASE
declare
s_sysevent varchar2(32);
s_owner varchar2(32);
s_objname varchar2(32);
s_user varchar2(32);
begin

select ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name, USER
into s_sysevent, s_owner, s_objname, s_user from dual;

insert into epeker.t_trg_ddl_test values (ora_sysevent);
if s_sysevent in ('DROP','TRUNCATE') then
if s_user = 'EPEKER' then
raise_application_error(-20010,'You are not authorized to drop a segment!.. ');
end if;
end if;

end TRG_DDL;


4- database privileges
Privileges are an important (maybe the most important) item on the checklist. Privileges and roles should be carefully assigned, and revoked when no longer needed, to minimize accidental data loss in the databases.
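A periodic privilege review can start from the dictionary views; a simple sketch:

```sql
-- system privileges granted directly to users
select grantee, privilege, admin_option
from dba_sys_privs
where grantee not in ('SYS','SYSTEM')
order by grantee, privilege;

-- roles granted to users
select grantee, granted_role, admin_option
from dba_role_privs
order by grantee, granted_role;
```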

5- opsys suggested tasks (alias, sudo, execute privileges, op.sys groups)
Besides database auditing, some security hardening can be done on the operating system side. Aliases can be used in place of direct OS commands to audit and prevent mistakes by end users or application users. Commands that need root privileges can be run through sudo on Unix. Application users should not be in the dba group, and the Oracle Home and datafiles should be protected with OS file permissions. Management and monitoring tools such as the Grid agent can be used to audit, or proactively solve, likely problems.

Monday, May 19, 2008

Table Monitoring worked well

I was logged in to one of our forgotten databases to partition some tables and carry out routine DBA maintenance. After my work was done, I decided to look for tables that are no longer used.

The first thing is to find out whether it is worth spending time on. I checked the table sizes and found that one of the unused tables consumes approximately 21 GB of space. Dropping it would be quite valuable on this database, given its problematic disk space usage.

select
segment_name, sum(bytes/1024/1024) as MB
from
dba_segments
where
segment_name in ('WAP_STATREC','PAYFORME_TEMP','PAY4ME_TEMP') and
owner ='PQ'
group by
segment_name
order by MB desc;

SEGMENT_NAME MB
------------ --
WAP_STATREC 21443
PAYFORME_TEMP 145
PAY4ME_TEMP 1


To find out whether these tables are actively used by developers or any application, I altered them to enable monitoring and logged out of the system.


alter table pq.wap_statrec monitoring;
alter table pq.payforme_temp monitoring;
alter table pq.pay4me_temp monitoring;


Today, a few weeks later, I remembered what I had done, logged in to the database, and checked the dba_tab_modifications view to see whether any modifications had been made to the tables.


select
table_owner, table_name, inserts, updates, deletes, timestamp, truncated
from
sys.dba_tab_modifications
where
table_name in ('WAP_STATREC','PAYFORME_TEMP','PAY4ME_TEMP');

TABLE_OWNER TABLE_NAME INSERTS UPDATES DELETES TIMESTAMP TRUNCATED
----------- ---------- ------- ------- ------- --------- ---------
PQ PAYFORME_TEMP 10695476 0 0 5/7/2008 YES



As you can see, only one of the tables is actively used by an application; the others have not been touched for two weeks. Now it is time to talk with the application operations or development group about dropping their unused tables.
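One caveat: the modification counters are kept in memory and flushed to DBA_TAB_MODIFICATIONS only periodically, so recent DML may not show up immediately. Before querying the view, the counters can be flushed manually:

```sql
-- push the in-memory DML monitoring counters to the dictionary
exec dbms_stats.flush_database_monitoring_info;
```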

Tuesday, May 6, 2008

Handling Partitioned Tables

Subject: We have lots of partitioned tables to handle manually. At the beginning of every month or year we have to add new partitions to prevent "ORA-14400: inserted partition key does not map to any partition". As the number of partitioned tables grows, we have faced this problem more and more over the last months.

I decided to write a PL/SQL procedure that runs periodically as a scheduled job (you can find the scheduled job settings in my previous posts). In this post I share the procedure, which automatically creates the next month's partition and archives, compresses, and deletes older partitions according to the supplied retention.


CREATE OR REPLACE procedure AVEA_SYSMON.p_check_partition2
(s_table_name IN varchar2,
s_schema_name IN varchar2,
s_archive_ts IN varchar2,
n_archive_ret number,
n_delete_ret number)
/*As i run this procedure from a job. I want running user to own the privileges*/
AUTHID CURRENT_USER
AS
s_compression varchar2(20);
s_ts_name varchar2(20);

n_max_month number;
n_max_year number;

n_next_month number;
n_next_year number;

n_this_year number;
n_this_month number;

part_next_month number;
part_next_year number;

n_ctl number;

d_next_month date;
s_part_less_than varchar2(200);
s_cmd varchar2(200);
begin

/*I will read the dba_tab_partitions management view to find the table partitions.
This procedure works with the formatted name of the partition.
Here you should have YYYYMM at the end of the partition name*/

select
max(substr(partition_name,-6,4)) into n_max_year
from
dba_tab_partitions t1
where
table_owner=s_schema_name and
table_name=s_table_name;

-- get max month of table partition

select
max(substr(partition_name,-2,2)) into n_max_month
from
dba_tab_partitions t1
where
table_owner=s_schema_name and
table_name=s_table_name and
substr(partition_name,-6,4) = n_max_year;

-- find this month this year;
n_this_year := to_number(to_char(sysdate,'YYYY'));
n_this_month := to_number(to_char(sysdate,'MM'));

-- find next month next year
n_next_year := to_number(to_char(add_months(sysdate,1),'YYYY'));
n_next_month := to_number(to_char(add_months(sysdate,1),'MM'));

-- partition definition for next month next year should be one more
part_next_year := to_number(to_char(add_months(sysdate,2),'YYYY'));
part_next_month := to_number(to_char(add_months(sysdate,2),'MM'));


-- check if next months partition exists, create it if not exists

if mod(n_this_month,12) = mod(n_max_month,12) then

s_part_less_than := 'to_date(''01.' || trim(to_char(part_next_month,'09')) || '.' || part_next_year || ''',''DD.MM.YYYY'')';

s_cmd:='alter table ' || s_schema_name || '.' || s_table_name || ' add partition ' || s_table_name || '_' || n_next_year || trim(to_char(n_next_month,'09')) || ' values less than (' || s_part_less_than || ') ';

--dbms_output.put_line(s_cmd);
begin
execute immediate (s_cmd);
exception
when others then
null;
end;
end if;

/*compress and move partitions to archive tablespace which are
months older*/

begin
select
compression, tablespace_name into s_compression, s_ts_name
from
dba_tab_partitions
where
table_owner=s_schema_name and
table_name=s_table_name and
substr(partition_name,-6,4)=to_char(add_months(sysdate,n_archive_ret),'YYYY') and
substr(partition_name,-2,2)=to_char(add_months(sysdate,n_archive_ret),'MM');

/*first I should be sure the partition to be compressed and archived
exists. Move it to the archive tablespace if it is not already
there, and compress it if it is not already compressed*/

if s_ts_name!=s_archive_ts then
s_cmd := 'alter table ' || s_schema_name || '.' || s_table_name ||
' move partition ' || s_table_name || '_' ||
trim(to_char(add_months(sysdate,n_archive_ret),'YYYY')) || '' ||
trim(to_char(add_months(sysdate,n_archive_ret),'MM')) ||
' tablespace ' || s_archive_ts;
if s_compression!='ENABLED' then
s_cmd := s_cmd || ' COMPRESS';
end if;
s_cmd := s_cmd || ' update global indexes';
execute immediate (s_cmd);
end if;

s_cmd:='';
exception
when no_data_found then
null;
end;

-- drop partitions of year old

select
count(*) into n_ctl
from
dba_tab_partitions
where
table_owner=s_schema_name and
table_name=s_table_name and
substr(partition_name,-6,4)=to_char(add_months(sysdate,n_delete_ret),'YYYY') and
substr(partition_name,-2,2)=to_char(add_months(sysdate,n_delete_ret),'MM');

/* if the partition to be dropped exists then drop it */

if n_ctl=1 then
s_cmd := 'alter table ' || s_schema_name || '.' || s_table_name ||
' drop partition ' || s_table_name || '_' ||
to_char(add_months(sysdate,n_delete_ret),'YYYY') || '' ||
to_char(add_months(sysdate,n_delete_ret),'MM') ||
' update global indexes';

execute immediate (s_cmd);
end if;

end p_check_partition2;
/


The procedure is ready. Be careful to pass the right parameters, and make sure nothing is dropped that should not be dropped. At our site the delete retention is one year and the archive retention is 3 months, so I run the procedure with the following parameters:

begin
p_check_partition2('PAYFORME','PQ','PQ_ARCHIVE',-3,-12);
end;
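After the job has run, the partitions it created and archived can be verified from the dictionary:

```sql
select partition_name, tablespace_name, compression, high_value
from dba_tab_partitions
where table_owner = 'PQ'
  and table_name = 'PAYFORME'
order by partition_position;
```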

Wednesday, April 16, 2008

Resetting Timezone of a Grid Agent 10.2

Problem: The operating system timezone was changed after an Oracle Grid Agent installation. This happened on an IBM AIX 5.3 host; maybe one of our UNIX administrators decided to change the $TZ timezone setting in our UNIX environment. Our Oracle Grid Agent could not be started after that.

#> $AGENT_HOME/bin/emctl start agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Starting agent ....... failed.
The agentTZRegion value in /oracle/product/10.2.0/agent/agent10g/dwhnserp0/sysman/config/emd.properties is not in agreement with what agent thinks it should be.Please verify your environment to make sure that TZ setting has not changed since the last start of the agent.
If you modified the timezone setting in the environment, please stop the agent and exectute 'emctl resetTZ agent' and also execute the script mgmt_target.set_agent_tzrgn(, ) to get the value propagated to repository.
Consult the log files in: /oracle/product/10.2.0/agent/agent10g/dwhnserp0/sysman/log


#> echo $TZ
MEST-2MEDT,M3.5.0,M10.5.0


#> export TZ=+2:00

#> $AGENT_HOME/bin/emctl resetTZ agent

The emctl resetTZ agent command automatically changes the agentTZRegion entry of the emd.properties file. But here I also manually changed the last row of $AGENT_HOME/sysman/config/emd.properties to '+2:00'.
#agentTZRegion=Etc/GMT-3
agentTZRegion=+2:00

Now on the Grid Server run the following command. First parameter is the name of the target, and the second one is timezone value. Name of the target is usually nodename:portnumber.

#> sqlplus "/ as sysdba"
sqlplus> exec mgmt_target.set_agent_tzrgn('dwhnserp0:3872', '+2:00');
sqlplus> exit;
#>

You can also check the registered timezone property of the target with the following query. Again from the Grid Server:

#> sqlplus "/ as sysdba"
sqlplus> select timezone_region from mgmt_targets
where target_name = 'dwhnserp0:3872';
sqlplus> exit;
#>


#> $AGENT_HOME/bin/emctl start agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Starting agent ............... started.

#> $AGENT_HOME/bin/emctl status agent

Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 10.2.0.3.0
OMS Version : 10.2.0.3.0
Protocol Version : 10.2.0.2.0
Agent Home : /oracle/product/10.2.0/agent/agent10g/dwhnserp0
Agent binaries : /oracle/product/10.2.0/agent/agent10g
Agent Process ID : 525810
Parent Process ID : 143586
Agent URL : https://dwhnserp0:3872/emd/main
Repository URL : https://oragrid:1159/em/upload
Started at : 2008-04-11 16:28:27
Started by user : oracle
Last Reload : 2008-04-11 16:28:27
Last successful upload : (none)
Last attempted upload : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload : 6
Size of XML files pending upload(MB) : 0.34
Available disk space on upload filesystem : 41.66%
Last successful heartbeat to OMS : 2008-04-11 16:28:46
---------------------------------------------------------------
Agent is Running and Ready


You can find the supported timezone values in a file under the sysman directory:

cat $AGENT_HOME/sysman/admin/supportedtzs.lst | more
# Timezone regions with +02:00 standard offset from GMT
Africa/Cairo
Africa/Johannesburg
Africa/Khartoum
Africa/Tripoli
Africa/Windhoek
Asia/Beirut
Asia/Damascus
Asia/Gaza
Asia/Istanbul
Asia/Jerusalem
Asia/Nicosia
Asia/Tel_Aviv
EET
Egypt
Etc/GMT-2
Europe/Athens
Europe/Bucharest
Europe/Helsinki
Europe/Istanbul
Europe/Kaliningrad
Europe/Kiev
Europe/Minsk
Europe/Riga
Europe/Simferopol
Europe/Sofia
Europe/Tallinn
Europe/Vilnius
Israel
Libya
Turkey

# Timezone regions with +03:00 standard offset from GMT
Africa/Djibouti
Africa/Mogadishu
Africa/Nairobi
Asia/Aden
Asia/Amman
Asia/Baghdad
Asia/Bahrain
Asia/Kuwait

Scheduling a datapump job in Oracle 10g - Part2

The previous part of this article was about exporting table data with dbms_datapump from PL/SQL. There you can find information about creating a procedure that exports a table, or a group of tables, to an operating system file.

This part covers creating a Scheduled Job to run the export procedure periodically. As Scheduled Jobs arrived with Oracle 10g, the scripts in this article work on database versions 10.1.x and later.

The package we will mainly use is DBMS_SCHEDULER. First we create a job; below is the command I used, with alternative parameter values explained in comments.

BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'log_to_table',
job_type => 'STORED_PROCEDURE',

-- Possible values could be:
-- PLSQL_BLOCK,
-- STORED_PROCEDURE,
-- EXECUTABLE,
-- CHAIN.

job_action => 'D_EPEKER.p_test_01', -- procedure name to run
start_date => systimestamp, -- when to start executing
repeat_interval => 'FREQ=HOURLY', -- run every hour

-- Possible values could be:
-- "FREQ = YEARLY" | "MONTHLY" | "WEEKLY" |
-- "DAILY" | "HOURLY" | "MINUTELY" | "SECONDLY"
-- INTERVAL = 1 through 99

end_date => systimestamp + interval '1' day, -- when to stop executing
job_class => 'logging_class',
comments => 'job_logs_something');-- explanation of the job
END;

/*
Jobs are created disabled by default and need to be enabled to run. You can find the
following procedure to alter a Scheduled Jobs attribute. By default a scheduled job
drops itself after its execution ends. You should set "auto_drop" attribute to
FALSE to hold the Scheduled Job after its execution ends.
*/
--
-- enabling a job
--

BEGIN
SYS.DBMS_SCHEDULER.ENABLE
(name => 'D_EPEKER.LOG_TO_TABLE');
END;

--
-- altering a job attribute
--

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
name => 'log_to_table', -- name of the job
attribute => 'auto_drop', -- attribute name
value => FALSE -- new value of the attribute
);
END;

/*
Besides scheduling, you can also run any Scheduled Job before its scheduled
time comes. To run or stop a job immediately, use the following commands.
*/
--
-- executing a job immediately
--

exec DBMS_SCHEDULER.RUN_JOB('log_to_table',FALSE);
exec DBMS_SCHEDULER.STOP_JOB('log_to_table',TRUE);

/*
The nicest property of these new Scheduled Jobs is that they can easily be
maintained and monitored through system views. You can find the related views
below.
*/
-- scheduled jobs whose owner is D_EPEKER

select * from dba_scheduler_jobs where owner='D_EPEKER';

-- you can also use the user_scheduler_jobs view to find the jobs and their properties

select * from user_scheduler_jobs;

-- run details of the scheduled jobs

select * from DBA_SCHEDULER_JOB_RUN_DETAILS
where lower(job_name) = 'log_to_file_external';

/*
There are many ways to specify the repeat interval. A number of examples
follow, with explanations of the frequencies.
*/
--
-- setting repeat_interval
--

--Run every Friday
FREQ=DAILY; BYDAY=FRI;
FREQ=WEEKLY; BYDAY=FRI;
FREQ=YEARLY; BYDAY=FRI;

--Run every other Friday
FREQ=WEEKLY; INTERVAL=2; BYDAY=FRI;

--Run on the last day of every month.
FREQ=MONTHLY; BYMONTHDAY=-1;

--Run on the next to last day of every month.
FREQ=MONTHLY; BYMONTHDAY=-2;

--Run every 10 days
FREQ=DAILY; INTERVAL=10;

--Run daily at 4, 5, and 6PM.
FREQ=DAILY; BYHOUR=16,17,18;

--Run on the 15th day of every other month
FREQ=MONTHLY; INTERVAL=2; BYMONTHDAY=15;

--Run on the 29th day of every month
FREQ=MONTHLY; BYMONTHDAY=29;

--Run on the second Wednesday of each month
FREQ=MONTHLY; BYDAY=2WED;

--Run on the last Friday of the year.
FREQ=YEARLY; BYDAY=-1FRI;

--Run every 50 hours.
FREQ=HOURLY; INTERVAL=50;

--Run on the last day of every other month.
FREQ=MONTHLY; INTERVAL=2; BYMONTHDAY=-1;

--Run hourly for the first three days of every month.
FREQ=HOURLY; BYMONTHDAY=1,2,3;

--Run on the last workday of every month, excluding company holidays
--(This example references an existing named schedule called Company_Holidays.)
FREQ=MONTHLY; BYDAY=MON,TUE,WED,THU,FRI; EXCLUDE=Company_Holidays; BYSETPOS=-1

--Run at noon every Friday and on company holidays.
FREQ=YEARLY;BYDAY=FRI;BYHOUR=12;INCLUDE=Company_Holidays
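If you are not sure when one of these calendar strings will actually fire, DBMS_SCHEDULER offers the EVALUATE_CALENDAR_STRING procedure to compute the next run date. A small sketch (run with serveroutput on to see the result):

```sql
DECLARE
  next_run TIMESTAMP WITH TIME ZONE;
BEGIN
  -- ask the scheduler when 'last day of the month' fires next
  DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING(
    calendar_string   => 'FREQ=MONTHLY; BYMONTHDAY=-1',
    start_date        => SYSTIMESTAMP,
    return_date_after => SYSTIMESTAMP,
    next_run_date     => next_run);
  DBMS_OUTPUT.PUT_LINE('next run: ' || TO_CHAR(next_run));
END;
```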

Monday, February 18, 2008

Scheduling a datapump job in Oracle 10g - Part 1

Subject: Periodically exporting table data using dbms_datapump, scheduled with dbms_scheduler. We will first create a procedure that exports the table data and then set up a scheduled job that calls the procedure.

First of all we need an Oracle directory to export the data into. As dbms_datapump runs on the database side, the package uses database directory objects to write the export files.

create or replace directory expdp_dir as '/oradata/exports/';
grant read, write on directory expdp_dir to public;


One should also have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles to export and import the objects of any schema other than his own.

grant EXP_FULL_DATABASE to "username";
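Before running the export, it does not hurt to verify that the directory object points where you expect:

```sql
-- check the directory object and its OS path
select directory_name, directory_path
from dba_directories
where directory_name = 'EXPDP_DIR';
```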


You can find the rest of the explanations as comments in the following procedure.

DECLARE

-- handle number comes with the DBMS_DATAPUMP.OPEN function
l_dp_handle NUMBER;

-- dump file name (example value) and full path; s_full_file should be set
-- to the dump file's full path before the final UPDATE below
s_file      VARCHAR2(100) := 'logvrm_tmp_export';
s_full_file VARCHAR2(200);

-- status variables used by DBMS_DATAPUMP.GET_STATUS
job_state   VARCHAR2(30);
sts         ku$_Status;
le          ku$_LogEntry;
le_ind      NUMBER;
s_le_log    VARCHAR2(4000);

-- master table check counter
n_ctl2      NUMBER;

BEGIN

/*
First thing to do is to open a datapump session.

operation: EXPORT | IMPORT | SQL_FILE (DBMS_DATAPUMP.ADD_FILE should be run before)
export modes: FULL | SCHEMA | TABLE | TABLESPACE | TRANSPORTABLE
version: COMPATIBLE (default) | LATEST | 10.0.0
*/

l_dp_handle := DBMS_DATAPUMP.open(
operation=>'EXPORT',
job_mode=>'TABLE',
remote_link=>NULL,
job_name=>'LOGVRM_TMP_EXPORT',
version=>'COMPATIBLE');

/*
add the export file to the opened datapump session
you must use '%U' in the filename in case of a parallel export
*/

DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => s_file || '.dmp', --dump file name
directory => 'EXPDP_DIR'); --the oracle directory created above

/*
now we start filtering the objects we want to export
name: INCLUDE_NAME_EXPR | EXCLUDE_NAME_EXPR | SCHEMA_EXPR | TABLESPACE_EXPR
INCLUDE_PATH_EXPR | EXCLUDE_PATH_EXPR

in the following example we filter by schema, by object name and by partition,
respectively
*/

DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => 'IN (''VERIM'')');

DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'NAME_EXPR',
value => 'IN (''BIR_GERI_ODEME_LOG_VRM'',
''BIR_TEKLIF_LOG_VRM'',
''M_D_HESAP_LOG_VRM'',
''T_GNK_TESIS_LOG_VRM'')',
object_type => 'TABLE');

DBMS_DATAPUMP.data_filter(
handle => l_dp_handle,
name => 'PARTITION_EXPR',
value => 'IN (''PAR_TEMP'')');

/*
you can set parallelism in datapump jobs
every thread of a parallel job writes to its own part of the file
so don't forget to use the '%U' substitution in the filename
*/

DBMS_DATAPUMP.SET_PARALLEL(
handle => l_dp_handle,
degree => 1);

/*
now we can start the job
*/

DBMS_DATAPUMP.start_job(l_dp_handle);

/*
periodically check the expdp job for its status
s_le_log is the varchar2 variable that holds the last status of the job
you can check for the 'error' string in the s_le_log variable to determine
whether the end status has any errors
*/

job_state := 'UNDEFINED';
WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED')
LOOP
dbms_datapump.get_status(l_dp_handle,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip, -1, job_state, sts);

IF (BITAND(sts.mask,dbms_datapump.ku$_status_wip) != 0)
THEN
le := sts.wip; --work is still in progress
ELSE
IF (BITAND(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
THEN
le := sts.error; --the job reported an error
ELSE
le := NULL;
END IF;
END IF;
END LOOP;


/*
check the log entry of the job status
*/

IF le IS NOT NULL
THEN
le_ind := le.first;
s_le_log := '';
WHILE le_ind IS NOT NULL LOOP
s_le_log := s_le_log || ' ' || le(le_ind).LogText;
le_ind := le.next(le_ind);
END LOOP;
END IF;

/*
if the job hangs, drop the master table of the job
every datapump job creates a table for itself for logging and monitoring purposes
when the job ends, the table should be dropped automatically;
if not, drop the table manually, otherwise oracle doesn't allow running the same job again
*/

select
count(*) into n_ctl2
from
dba_datapump_jobs
where
job_name = 'LOGVRM_TMP_EXPORT' and
owner_name='MARDATA';

IF n_ctl2 > 0 THEN
execute immediate ('drop table MARDATA.LOGVRM_TMP_EXPORT');
END IF;

/*
Check the s_le_log for any error messages ('hata' is Turkish for 'error')
You can also insert the log together with a date column
to know what happened during the export
*/

IF (INSTR(LOWER(s_le_log),'error',1) > 0 OR
INSTR(LOWER(s_le_log),'hata',1) > 0)
THEN
NULL;
END IF;


/*
log the export log to know what happened during export
*/

UPDATE DBA_LOGVRM_EXPORT_LOG SET
ENDDATE=SYSDATE,
MSG=substr(s_le_log,1,1000)
WHERE FILENAME=s_full_file; --dump file name with full path
COMMIT;


/*
Don't forget to detach from the job
*/

DBMS_DATAPUMP.DETACH(l_dp_handle);

END;


The second part of this article will cover using dbms_scheduler to schedule this procedure, to periodically back up any database object with datapump.
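As a teaser for that: assuming the export code above is wrapped into a stored procedure (the procedure and job names below are hypothetical), scheduling it nightly could look roughly like this:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOGVRM_EXPORT_JOB',        -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MARDATA.P_EXPORT_LOGVRM',  -- hypothetical wrapper procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',     -- every night at 2 AM
    enabled         => TRUE,
    comments        => 'nightly datapump export of the LOG_VRM tables');
END;
```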

Tuesday, January 29, 2008

When the Grid Agent couldn't find any target on the host...

Subject: Manually adding targets to EM Grid Agent installed on a host.

Problem: When the installation procedure finishes successfully on a target host, the EM agent examines the "/etc/oratab" file (on UNIX operating systems) to find the installed Oracle products on the host. What if the specified file doesn't exist, or the agent somehow couldn't read the file?

Solution: Before the installation, check whether the "/etc/oratab" file exists. If not, check the "/var/oracle" directory for the oratab file and make a symbolic link named oratab under the "/etc" directory (ln -s /var/oracle/oratab /etc/oratab).

If the agent cannot find any target on the host under any circumstances, you can try adding the targets manually after the installation. Below you will find the configuration file, plus template entries for a "database" and a "listener" target.

The EM Agent stores target information in the targets.xml file located in the "$AGENT_HOME/sysman/emd" directory. A sample file without any targets looks as follows.



<Targets AGENT_TOKEN="e7244b84afad62c94aad53fe407af2514e3b7f04">
<Target TYPE="oracle_emd" NAME="oragrid:3872"/>
<Target TYPE="host" NAME="oragrid"/>
</Targets>



You can add a listener as a target to the targets.xml file by using the following template. (The values between the three stars - *** *** - are system specific; use your own system's parameters instead of the *** *** values.)



<Target TYPE="oracle_listener" NAME="***LISTENER_NAME***">
<Property NAME="ListenerOraDir" VALUE="$ORACLE_HOME/network/admin"/>
<Property NAME="LsnrName" VALUE="***LISTENER_NAME***"/>
<Property NAME="Machine" VALUE="***HOSTNAME***"/>
<Property NAME="OracleHome" VALUE="***$ORACLE_HOME***"/>
<Property NAME="Port" VALUE="***LISTENER_PORT***"/>
</Target>



The following template is for manually entering a database target into an EM Agent. (The values between the three stars - *** *** - are system specific; use your own system's parameters instead of the *** ... *** values.)



<Target TYPE="oracle_database" NAME="***DB_NAME***">
<Property NAME="OracleHome" VALUE="***$ORACLE_HOME***"/>
<Property NAME="UserName" VALUE="dbsnmp"/>
<Property NAME="MachineName" VALUE="***HOSTNAME***"/>
<Property NAME="Port" VALUE="1521"/>
<Property NAME="SID" VALUE="***$ORACLE_SID***"/>
<Property NAME="ServiceName" VALUE="***$SERVICE_NAME***"/>
<Property NAME="password" VALUE="***PASSWORD***" ENCRYPTED="FALSE"/>
<Property NAME="Role" VALUE="NORMAL"/>
</Target>



After you are done with your "targets.xml" file, you should run "emctl reload agent" so the agent rereads the configuration files from scratch. You can check the defined targets by running "emctl config agent listtargets".

Sunday, January 13, 2008

Installing Grid Agent to a target host

Subject: Installing Oracle Enterprise Manager Grid Control 10.2.0.3 agent on a target host.

I will examine the installation process in three main parts. The preinstallation part is the group of tasks that should be done before executing the runInstaller installation script; there is some work to do on the OMS server side and on the TARGET side. The second part includes some screenshots and covers what happens after you execute the runInstaller script. The last part is the postinstallation part, which stands for some final settings.


PREINSTALLATION

OMS

First you should enter a host record for the target host: an IP and HOSTNAME record should be added to the /etc/hosts operating system file. One thing to remember: the IP number should be the one the listener runs on. Ping should get an answer after the configuration.


[oracle@oragrid ~]$ ping fmsserp1
ping: unknown host fmsserp1
[oracle@oragrid ~]$ su -
Password:
[root@oragrid ~]# cat /etc/hosts | grep fmsserp1
[root@oragrid ~]# vi /etc/hosts
[root@oragrid ~]# ping fmsserp1
PING fmsserp1 (10.4.9.83) 56(84) bytes of data.
64 bytes from fmsserp1 (10.4.9.83): icmp_seq=0 ttl=254 time=0.219 ms
64 bytes from fmsserp1 (10.4.9.83): icmp_seq=1 ttl=254 time=0.197 ms
64 bytes from fmsserp1 (10.4.9.83): icmp_seq=2 ttl=254 time=0.198 ms
64 bytes from fmsserp1 (10.4.9.83): icmp_seq=3 ttl=254 time=0.193 ms
64 bytes from fmsserp1 (10.4.9.83): icmp_seq=4 ttl=254 time=0.197 ms

--- fmsserp1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.193/0.200/0.219/0.020 ms, pipe 2
[root@oragrid ~]# cat /etc/hosts | grep fmsserp1
10.4.9.83 fmsserp1
[root@oragrid ~]#


AGENT

The agent should also know the IP/HOSTNAME pair of the OMS server. You should likewise find the target's listening IP address and enter it into the OMS server's hosts file.

$AGENT_HOME should be exported. It is not a must, but it is nice to have, especially while searching for the setup directory of the agent.

Be sure the /etc/oratab file exists. On some operating systems the file lives in the /var/oracle directory; if so, you should create a soft link /etc/oratab pointing to /var/oracle/oratab, because the agent finds the targets installed on the host by looking at this file.

You can now go to the installer directory to run runInstaller. Be sure you have enough space in the /tmp directory; if not, you can delete older Oracle installer logs.


$ hostname
fmsserp1
$ netstat -ni
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
lan0:1 1500 10.4.9.0 10.4.9.83 31833330 0 2849543 0 0
lan2* 1500 none none 0 0 0 0 0
lan1 1500 192.168.0.0 192.168.0.2 6331944 0 11558318 0 0
lan0 1500 10.4.9.0 10.4.9.82 2445705484 0 1271175018 0 0
lo0 4136 127.0.0.0 127.0.0.1 206103084 0 206104962 0 0
$ ping oragrid
sh: ping: not found.
$ su -
Password:
fmsserp1:/#ping oragrid
ping: unknown host oragrid
fmsserp1:/#
fmsserp1:/#cat /etc/hosts | grep oragrid
fmsserp1:/#
fmsserp1:/#cat /etc/hosts | grep oragrid
10.4.46.70 oragrid
fmsserp1:/#tail -2 /etc/hosts
#EM Agent
10.4.46.70 oragrid

fmsserp1:/#
fmsserp1:/#id
uid=0(root) gid=3(sys) groups=0(root),1(other),2(bin),4(adm),5(daemon),6(mail),7(lp),20(users)
fmsserp1:/#exit
logout root
$ id
uid=106(oradet) gid=104(dba)
$ cd $ORACLE_HOME/network/admin
$ cat listener.ora
LISTENER_ARCHIVE =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.4.9.83)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.4.9.83)(PORT = 1526))
)


SID_LIST_LISTENER_ARCHIVE =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = AVEA93_C )
(ORACLE_HOME = /fmsarch_ora/product/9.2.0 )
(SID_NAME = AVEA93_C)
)
)


$
$ lsnrctl status LISTENER_ARCHIVE

LSNRCTL for HPUX: Version 9.2.0.6.0 - Production on 10-JAN-2008 11:42:22

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=TCP)(HOST=10.4.9.83)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_ARCHIVE
Version TNSLSNR for HPUX: Version 9.2.0.6.0 - Production
Start Date 20-NOV-2007 06:19:08
Uptime 51 days 5 hr. 23 min. 14 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File /fmsarch_ora/product/9.2.0/network/admin/listener.ora
Listener Log File /fmsarch_ora/product/9.2.0/network/log/listener_archive.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.4.9.83)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.4.9.83)(PORT=1526)))
Services Summary...
Service "AVEA93_C" has 1 instance(s).
Instance "AVEA93_C", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
$ echo $AGENT_HOME

$ cd
$ export AGENT_HOME=/fmsarch_ora/product/agent10g/agent10g
$ echo $AGENT_HOME
/fmsarch_ora/product/agent10g/agent10g
$ cat /etc/oratab
#

# This file is used by ORACLE utilities. It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.

# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
*:/fmsarch_ora/product/9.2.0:N
$
$ cd oracle_setup
$ ls
HPI_Grid_Control_agent_download_10_2_0_3_0.zip hpi
agent_download.rsp.bak instructions.txt
agentdeployroot.sh
$ cd hpi
$ ls
addons agentDownload.hpi agentdeploy prereqs
agent agent_scripts.jar oui response
$ cd agent
$ ls
install runInstaller stage
$ DISPLAY=10.1.22.210:0.0
$ export DISPLAY
$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be B.11.23. Actual B.11.23
Passed

Checking swap space: must be greater than 250 MB. Actual 4096 MB Passed
Checking temp space: 257 MB available, 500 MB required. Failed <<<<

Some requirement checks failed. You must fulfill these requirements before

continuing with the installation,at which time they will be rechecked.

Continue? (y/n) [n] n

User Selected: No
$ cd /tmp
$ ls -l | grep Ora
drwxrwxr-x 4 fmsarch fms 96 Jun 7 2006 OraInstall2006-06-07_01-03-59PM
drwxrwxr-x 4 oraarch fms 8192 Jun 7 2006 OraInstall2006-06-07_01-08-20PM
drwxrwxr-x 4 oraarch fms 8192 Jun 7 2006 OraInstall2006-06-07_01-27-41PM
drwxr-xr-x 4 oraarch fms 8192 Jun 7 2006 OraInstall2006-06-07_01-34-00PM
drwxr-xr-x 4 oraarch fms 8192 Jun 7 2006 OraInstall2006-06-07_01-50-37PM
drwxrwxr-x 4 fmsarch fms 8192 Jun 7 2006 OraInstall2006-06-07_11-42-22AM
drwxrwxr-x 4 fmsarch fms 8192 Jun 7 2006 OraInstall2006-06-07_11-44-54AM
drwxrwx--- 2 oraarch dba 96 Jan 10 09:49 OraInstall2008-01-10_09-49-20AM
$ ls -l | grep Ora | awk '{print $9}'
OraInstall2006-06-07_01-03-59PM
OraInstall2006-06-07_01-08-20PM
OraInstall2006-06-07_01-27-41PM
OraInstall2006-06-07_01-34-00PM
OraInstall2006-06-07_01-50-37PM
OraInstall2006-06-07_11-42-22AM
OraInstall2006-06-07_11-44-54AM
OraInstall2008-01-10_09-49-20AM
$ ls -l | grep Ora | awk '{print $9}' | xargs rm -r


RUNINSTALLER

This is the easiest part of the installation once you pass the "system check" screen. You only need to know the IP address of the OMS server and the registration password, the one you obtained while installing the Grid server.

After the root.sh script runs successfully there are only a few things left to do.


$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be B.11.23. Actual B.11.23
Passed

Checking swap space: must be greater than 250 MB. Actual 4096 MB Passed
Checking Temp space: must be greater than 500 MB. Actual 640 MB Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-01-10_09-52-44AM. Please wait ...

fmsserp1:/fmsarch_ora/product/agent10g/agent10g #
$ id
uid=109(oraarch) gid=104(dba) groups=105(fms)
$ su - root
Password:
fmsserp1:/#
fmsserp1:/#cd /fmsarch_ora/product/agent10g/agent10g
fmsserp1:/fmsarch_ora/product/agent10g/agent10g#./root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oraarch
ORACLE_HOME= /fmsarch_ora/product/agent10g/agent10g

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


POSTINSTALLATION

The agent software uses the dbsnmp user to obtain information from the database instances, so you should unlock the user if it is locked. Here I also change the password for security.

After unlocking and changing the password of the dbsnmp user, you should also inform the agent software about the changes. The rest of the subject is as follows :)


fmsserp1:/fmsarch_ora/product/agent10g/agent10g#exit
logout root
$ id
uid=109(oraarch) gid=104(dba) groups=105(fms)
$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.6.0 - Production on Thu Jan 10 10:17:18 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 – Production

SQL> select name from v$database;

NAME
---------
AVEA93_C

SQL>
SQL> alter user dbsnmp account unlock;

User altered.

SQL> alter user dbsnmp identified by ****;

User altered.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

$ ./emctl config agent listtargets
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
[fmsserp1:3872, oracle_emd]
[fmsserp1, host]
[LISTENER_ARCHIVE, oracle_listener]
[AVEA93_C, oracle_database]
$ ./emctl config agent credentials AVEA93_C:oracle_database
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Name = AVEA93_C, Type = oracle_database
Want to change for "UserName" (y/n):n
Want to change for "password" (y/n):y
Enter the new value for "password" :*******************
Re-enter the new value for "password" :*******************
EMD reload completed successfully
$ ./emctl reload agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD reload completed successfully
$ ./emctl upload agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload completed successfully


When all of this is done successfully, everything should be fine. Now is the time to wait a little while for the Grid Server (OMS) to discover the newly installed targets.