
Tuesday, December 9, 2014

Fetching Multiple Rows with EXECUTE IMMEDIATE

EXECUTE IMMEDIATE is very handy for executing dynamic SQL statements inside a PL/SQL block. It can accept IN and OUT parameters and can also return output values to PL/SQL variables through the INTO clause. But returning values can be tricky when the statement returns multiple rows: for a single-row result the INTO clause works fine, but with multiple rows it is quite easy to hit the following exception.

declare  
  
  v_cmd varchar2(100);
  v_table_owner varchar2(30) := 'SCH_OLTP';  
  v_table_name varchar2(30) := 'T_VISIT_STATS_DETAILED';
  
  v_column_name varchar2(30);
  
begin 

  v_cmd := 'select column_name from dba_tab_columns where owner=:v_table_owner and table_name=:v_table_name';
   
  execute immediate v_cmd into v_column_name
    using v_table_owner, v_table_name;
  
end;
/*
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at line 13
*/

So it is important to check the expected row count before relying on a plain INTO clause with EXECUTE IMMEDIATE. Alternatively, you can use the BULK COLLECT clause to fetch multiple returning rows into a collection.

declare 
  type typ_varcharlist is table of varchar2(30);
  
  col_list typ_varcharlist;  
  
  v_cmd varchar2(100);
  v_table_owner varchar2(30) := 'SCH_OLTP';  
  v_table_name varchar2(30) := 'T_VISIT_STATS_DETAILED';
  
begin 
  v_cmd := 'select column_name from dba_tab_columns where owner=:v_table_owner and table_name=:v_table_name';
  
  
  execute immediate v_cmd bulk collect into col_list 
    using v_table_owner, v_table_name;
  
  for n_ctl in 1 .. col_list.count() loop
    dbms_output.put_line (col_list(n_ctl));
  end loop;
  
end;
/*
DUMMY
SUCCESS
VISITED_USER_ID
FROM_IP
VISIT_DATE
PAGE_ID
*/
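If the dynamic query can return a very large number of rows, collecting everything in one shot may use a lot of PGA memory. A minimal sketch of an alternative, opening a ref cursor over the same dynamic statement and fetching with BULK COLLECT ... LIMIT (the limit value is arbitrary):

declare
  type typ_varcharlist is table of varchar2(30);

  col_list typ_varcharlist;
  c_cols   sys_refcursor;

  v_cmd varchar2(100);
  v_table_owner varchar2(30) := 'SCH_OLTP';
  v_table_name  varchar2(30) := 'T_VISIT_STATS_DETAILED';

begin
  v_cmd := 'select column_name from dba_tab_columns where owner=:v_table_owner and table_name=:v_table_name';

  open c_cols for v_cmd using v_table_owner, v_table_name;
  loop
    --fetch at most 100 rows per round to keep memory usage bounded
    fetch c_cols bulk collect into col_list limit 100;

    for n_ctl in 1 .. col_list.count() loop
      dbms_output.put_line (col_list(n_ctl));
    end loop;

    exit when c_cols%notfound;
  end loop;
  close c_cols;
end;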

Reference:
Oracle PLSQL Users Guide
Thesis and Papers > Oracle 11g Execute Immediate


Monday, July 7, 2014

Change Data Capture using Streams

I found this dusty script on my old shelves; it enables the change data capture feature in an Oracle Database 11.2.0.4 Enterprise Edition running on RHEL 6.2 64-bit.

I remember this script was used to keep the changes of a product table in a CDC table so that a Java process could read the CDC$xxx table. The consumer process that reads the changed records was developed by our talented Java development group and is, understandably, beyond the scope of this post.

Here I will share how I enabled the change data capture feature in my Oracle database.

There are some preparation steps required before starting change data capture. Most of them resemble the Streams preparation scripts, which makes sense since Streams is the underlying mechanism of this feature.



--prepare the publisher user

--publisher user needs the following privileges and quotas

grant execute_catalog_role to dba_planet;
grant select_catalog_role to dba_planet;
grant execute on dbms_cdc_publish to dba_planet;
grant create sequence to dba_planet;
grant dba to dba_planet;
exec dbms_streams_auth.grant_admin_privilege('dba_planet');

alter user dba_planet quota unlimited on system;
alter user dba_planet quota unlimited on sysaux;


--prepare the database

--supplemental logging should be enabled in database level

alter database force logging;
alter database add supplemental log data;


--check database level supplemental logging level of the database

select SUPPLEMENTAL_LOG_DATA_MIN,
       SUPPLEMENTAL_LOG_DATA_PK,
       SUPPLEMENTAL_LOG_DATA_UI from v$database;


--prepare the instance

--depending on the configuration there will be some process overhead when enabling CDC
--so the instance parameters should be reviewed

select name, value, isspecified
  from v$spparameter
  where name in
    ('compatible',
     'java_pool_size',
     'job_queue_processes',
     'parallel_max_servers',
     'processes',
     'sessions',
     'streams_pool_size',
     'undo_retention');


--minimum value of the undo_retention parameter should be 7200sec

alter system set undo_retention=7200 scope=both sid='*';



--prepare source tables

--create necessary log groups
--every column whose changes will be captured should have supplemental logging enabled
--I preferred to create an individual log group for every column I will track

alter table sch_dmall.product add supplemental log data (all) columns;
alter table sch_dmall.product add supplemental log data (primary key) columns;
alter table sch_dmall.product add supplemental log data (foreign key) columns;
alter table sch_dmall.product add supplemental log data (unique) columns;

alter table sch_dmall.product add supplemental log group lg_product1 (id, deleted, seller_id,lastmodifieddate,category_id,dis_price,price,title,subtitle,productstatus,salestatus, urlwords) always;
alter table sch_dmall.product add supplemental log group lg_product2 (id) always;
alter table sch_dmall.product add supplemental log group lg_product3 (deleted) always;
alter table sch_dmall.product add supplemental log group lg_product4 (seller_id) always;
alter table sch_dmall.product add supplemental log group lg_product5 (lastmodifieddate) always;
alter table sch_dmall.product add supplemental log group lg_product6 (category_id) always;
alter table sch_dmall.product add supplemental log group lg_product7 (dis_price) always;
alter table sch_dmall.product add supplemental log group lg_product8 (price) always;
alter table sch_dmall.product add supplemental log group lg_product9 (title) always;
alter table sch_dmall.product add supplemental log group lg_product10 (subtitle) always;
alter table sch_dmall.product add supplemental log group lg_product11 (productstatus) always;
alter table sch_dmall.product add supplemental log group lg_product12 (salestatus) always;
alter table sch_dmall.product add supplemental log group lg_product13 (urlwords) always;



It begins with the famous Streams starting point, the prepare_table_instantiation procedure, which records the current SCN of the table to be change-captured. This procedure lets the CDC processes know from which SCN to start capturing changes.

begin
--dbms_capture_adm.build;
dbms_capture_adm.prepare_table_instantiation(table_name => 'sch_dmall.product');
end;
/




Now we should create the change set to associate with the CDC table. One change set can include one or more CDC tables to process.

--if exists, first drop the change set
exec dbms_cdc_publish.drop_change_set('CS_PRODUCT');

--create the change set
begin
  dbms_cdc_publish.create_change_set(
    change_set_name => 'CS_PRODUCT',
    description => 'Change set for test',
    change_source_name => 'HOTLOG_SOURCE',
    stop_on_ddl => 'n',
    begin_date => sysdate,
    end_date => null);
end;
/



After successfully creating the change set, it is time to create the change table. This is important because the columns to be change-captured and the source table are defined here by the column_type_list and source_table parameters.

--if exists, first drop the change table
exec dbms_cdc_publish.drop_change_table('SCH_DMALL','PRODUCT_CDC','Y');

--create the change table
begin
  dbms_cdc_publish.create_change_table(
  owner             => 'sch_dmall',
  change_table_name => 'product_cdc',
  change_set_name   => 'CS_PRODUCT',
  source_schema     => 'SCH_DMALL',
  source_table      => 'PRODUCT',
  column_type_list => 'ID NUMBER, DELETED NUMBER, SELLER_ID NUMBER, LASTMODIFIEDDATE DATE, CATEGORY_ID NUMBER, ' || 
                      'DIS_PRICE NUMBER, PRICE NUMBER, TITLE VARCHAR2(255), SUBTITLE VARCHAR2(255), ' || 
                      'PRODUCTSTATUS VARCHAR2(255), SALESTATUS VARCHAR2(255), URLWORDS VARCHAR2(255) ',
  capture_values  =>'both',
  rs_id           => 'y',
  row_id          => 'y',
  user_id         => 'y',
  timestamp       => 'y',
  object_id       => 'n',
  source_colmap   => 'n',
  target_colmap   => 'y',
  options_string  => 'TABLESPACE TS_DMALL_DATA');
end;
/

Finally, enable the change set to start the CDC capture processes.

begin
  dbms_cdc_publish.alter_change_set(
    change_set_name => 'CS_PRODUCT',
    enable_capture => 'y');
end;
/

When examining the product_cdc table where the changes are collected, you will notice some columns ending with the '$' sign. These control columns hold internal information of the Streams CDC processes and are very helpful for identifying each captured row, especially the 'operation$' and 'commit_timestamp$' columns:

operation$ : D (delete), UN (update, after image), UO (update, before image), I (insert)
commit_timestamp$ : commit timestamp of the captured change
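
For example, a quick look at the captured rows together with a couple of control columns could look like this (a sketch against the product_cdc change table created above, selecting only a subset of the source columns):

select operation$,
       commit_timestamp$,
       cscn$,
       id,
       price
  from sch_dmall.product_cdc
 where commit_timestamp$ > sysdate - 1
 order by cscn$;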

Before handing the data over to the development teams, creating a view can be a good idea. In this case I created the following view for the development team to consume. Cheers, Ergem.


create or replace view sch_dmall.v_product_cdc as
select
  case
    when operation$='I' then 'I'
    when operation$='UN' and deleted=0 then 'U'
    when operation$='UN' and deleted=1 then 'D'
    when operation$='D' then 'D'
    else operation$
  end as operation,
  id as product_id,
  deleted,
  seller_id,
  category_id,
  dis_price,
  price,
  title,
  subtitle,
  productstatus,
  salestatus,
  cscn$ id,
  commit_timestamp$ create_date
from sch_dmall.product_cdc
where operation$ in ('D','I','UN');



References;
http://docs.oracle.com/cd/E11882_01/server.112/e25554/cdc.htm#DWHSG016

Thursday, September 26, 2013

OOW Notes - JSON data in Oracle Database

Wednesday 17:00 – Storing and Querying JSON Data in Oracle Database 12c


Understanding schema-less data management


growing trend: semi-structured data without strict schema requirements
data first → schema later


growing trend of adopting semi-structured data into the database


  • xml
  • json (nosql - mongodb)


needs


structural and nonstructural coexist
  • using single RDBMS to store both
  • using one declarative set oriented SQL
  • parallel replication security transaction in one infrastructure


data can be stored without schema
  • store semi structural data in its native aggregated form without shredding


data can be queried without schema
  • use sql extended with semi structured query languages


example
  • SQL/XML embedding XQUERY/XPATH
  • SQL/JSON embedding JSON path language


data can be indexed without schema
  • indexing XML using structural XML index based on XMLTABLE expression
  • indexing XML using XML full text index


road map of managing JSON data using oracle XML/JSON db
  • JSON can be persisted schema-less by storing JSON data in varchar/raw/clob/blob columns
  • JSON data can be queried without schema (SQL/JSON embedding)
  • Function based indexes


overview of JSON


growing JSON adoption
  • strong usage in web client programming
  • rest interfaces (Facebook API)
  • starts to influence server side coding
  • data model for nosql stores like couchDB, mongodb
  • very well suited for rapid application development


goal
  • make rdbms JSON aware storage queries indexing
  • leverage relational features sql indexed joins transactions
  • enable schema-flexible app development (data first, schema later)


storing JSON data
  • no new datatype use existing: varchar2, raw, blob, clob, bfile
  • existing OCI, .NET and JDBC api works on JSON documents


check constraint to ensure only valid JSON gets inserted


create table jtab (col1 varchar2(4000), constraint jcons check (col1 is json));
insert into jtab values ('{"name":"oracle"}');


querying with JSON data with JSON/SQL


SQL/JSON emerging standard by Oracle+IBM
native JSON processing no rewrite to Xquery etc..


Oracle: streaming evaluation (no need to load all data into the memory )


path expression similar to javascript
  • starts with symbol $
  • . to navigate inside an object
  • [] to navigate inside an array
  • $.phoneNumber[1].type
  • wildcards $.phonenumbers[*]?(type="home").number


JSON functionalities


use a path expression to select a value from the JSON data and return it as a SQL value


JSON_VALUE operator


select JSON_VALUE ('$.creditHistory[1].creditScore'
PASSING JCOL
RETURNING NUMBER
NULL ON ERROR) from t1;


select JSON_VALUE('$.geo.lat' passing col DEFAULT 'this is not a value' ON ERROR) from t1;


JSON_QUERY operator


select JSON_QUERY ('$.creditHistory[1]' PASSING col) from table_name;


  • return JSON object '{"year":2011, "creditScore":650}'
  • can return multiple values


JSON_EXISTS operator


select count(*) from table_name where JSON_EXISTS('$.creditScore' PASSING col);


JSON_TABLE rowsource


  • project JSON data as relational table
  • items inside array becomes rows
  • JSON tables can be chained


select jtab.*
from table_name, JSON_TABLE ('$.phoneNumbers[*]' passing col
  columns (
    ph_type varchar2(10) path '$.type',
    ph_num  …            path '$.number')) jtab;
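
For reference, the JSON_TABLE syntax that eventually shipped in Oracle 12c takes the JSON column as the first argument instead of a PASSING clause, and chaining is expressed with NESTED PATH. A small sketch, assuming a table customers with a JSON column jcol:

select jt.*
from customers c,
     json_table(c.jcol, '$'
       columns (
         name varchar2(50) path '$.name',
         nested path '$.phoneNumbers[*]'
           columns (
             ph_type varchar2(10) path '$.type',
             ph_num  varchar2(20) path '$.number'))) jt;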


Wednesday, September 25, 2013

OOW Notes - High Volume Text Indexing

Tuesday 12:00 - HighVolume text indexing with Oracle Text - Roger Ford


create index <index_name> on <table_name> (column_name) indextype is ctxsys.context;


select * from table_name
where
contains (product_name, 'monitor NEAR full hd', 99) > 0
order by score(99) desc;


index types:


  • context (primary)
  • ctxcat
  • ctxrule
  • Xquery Full Text


a context index is an inverted index
it consists of a list of words, each with an associated posting list
posting lists can be very long (requires IO)
they are stored in the $I table
the SGA caches this object as usual; always pin this table in the SGA or flash cache if possible


Common Pitfalls


Memory


  • when indexing, postings lists are built in memory then flushed to disk when memory is exhausted
  • frequent flushes = small fragments


in memory controlled by MEMORY parameter


create index index_name
on table_name(col)
indextype is ctxsys.context parameters ('memory 1G');


exec ctxsys.ctx_adm.set_parameter('MAX_INDEX_MEMORY','10G');


Substring index / UTF8


context indexes are token (word) based


using leading wildcards is slow


need to turn on SUBSTRING_INDEX wordlist option


increases index build time
    1. roughly 5x longer in some cases
    2. around 20% longer in others
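
A minimal sketch of turning that wordlist option on (the preference name my_wordlist and the table docs are just illustrative):

begin
  ctx_ddl.create_preference('my_wordlist', 'BASIC_WORDLIST');
  ctx_ddl.set_attribute('my_wordlist', 'SUBSTRING_INDEX', 'TRUE');
end;
/

create index idx_docs_text on docs(text)
  indextype is ctxsys.context
  parameters ('wordlist my_wordlist memory 1G');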


AL32UTF8 has significant indexing overhead (consider single byte character set)


Frequent syncs


  • context indexes are asynchronous
  • sync(on commit) will cause index fragmentation
  • can be cured by ctx_ddl.optimize_index


transactional will require unindexed documents to be scanned by all queries
use transactional only with sync(on commit)


if syncing frequently, optimize frequently
use 12c feature (near real time indexes)


Big IO

  • previously the $I postings were stored as inline BLOBs
  • new storage attribute BIG_IO uses secure file lobs

Separate offsets

posting lists consist of document ids and the word positions within each document

some queries do not need word positions

the new SEPARATE_OFFSETS storage attribute will hugely reduce IO for these queries
it will make proximity (NEAR), phrase and zone section searches a bit slower

Near Real Time Indexes

new storage attribute STAGE_ITAB

  • creates two level index
  • small enough to keep in memory
  • large main index stored in the disk
  • updates are moved from staging index to main index and optimized as they move
  • requires the BIG_IO storage parameter in order to use (see the sketch below)
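
A sketch of setting these storage attributes through a storage preference; the attribute values below are assumptions, so check the Oracle Text documentation of your release for the exact allowed values:

begin
  ctx_ddl.create_preference('my_storage', 'BASIC_STORAGE');
  ctx_ddl.set_attribute('my_storage', 'BIG_IO', 'YES');            -- $I postings in SecureFile LOBs (assumed value)
  ctx_ddl.set_attribute('my_storage', 'SEPARATE_OFFSETS', 'YES');  -- keep word offsets out of the postings (assumed value)
  ctx_ddl.set_attribute('my_storage', 'STAGE_ITAB', 'YES');        -- near real time staging index (assumed value)
end;
/

create index idx_docs_text on docs(text)
  indextype is ctxsys.context
  parameters ('storage my_storage');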

Query Filter Cache

some expressions are common to many queries
security restriction: e.g. "(sales or marketing) WITHIN dept"
postings entries for these queries may be very large
postings will likely be cached in the SGA but still need to be decompressed and merged

query filter cache, caches the results of subquery in memory
query operator ctxFilterCache(subquery)
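
A rough sketch of how that operator could appear in a CONTAINS query (assuming the query filter cache has been sized via the storage preference; treat the exact syntax as an assumption rather than something I have tested):

select doc_id
  from docs
 where contains(text,
         'ctxfiltercache((sales or marketing) WITHIN dept) and quarterly report') > 0;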

Pattern stop class

specify a regular-expression exclusion list for the index
large indexes can be 75% junk

  • reference codes
  • sequential/numeric data
  • wrongly indexed binary data

example:
  • exclude all words longer than 20 chars
  • exclude all words which are mixed alpha and numeric

Partitioning

  • partitioned index can be created on partitioned base tables
  • use local keyword for indexes
  • currently only range partitioning is supported
  • partitioned indexes look like separate indexes on each partition

  • better query performance on subset
  • less interference between indexing and querying
  • caching of more common target positions

Parallelization

parallel index creation is supported by using parallel n clause
can mix local and parallel but max 1 process per partition (db restriction)

to do intra-partition parallelization

  • create the index using the unusable keyword
  • run dbms_pclxutil.build_part_index and specify the degree of parallelism (see the sketch below)

index is unavailable until complete
no option for incremental builds
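
A sketch of that two-step build (table, index name and degrees are illustrative):

create index idx_docs_text on docs(text)
  indextype is ctxsys.context
  local
  unusable;

begin
  --build all the unusable partitions, with intra-partition parallelism
  dbms_pclxutil.build_part_index(
    jobs_per_batch => 2,    -- partitions built concurrently
    procs_per_job  => 4,    -- parallel degree within each partition
    tab_name       => 'DOCS',
    idx_name       => 'IDX_DOCS_TEXT',
    force_opt      => false);
end;
/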

manual partition builds

create index using nopopulate parameter
build individual partitions using

alter index index_name rebuild partition partn parallel n;

allows complete control over parallelization
index is usable as soon as one operation is built

unlike unusable option; can build incremental changes by loading new partition and rebuilding

no sync required for non changing data
faster use partition swapping

distributed index builds

  • indexes can be built on farm of machines
  • partitioned base table on master with local indexes
  • non partitioned table on remote instance
  • build index on remote table, then transfer to master using transportable tablespace

swap in index using exchange partitioning
alter table table_name
exchange partition main_table_p10
with table remote_table
including indexes
without validation;


  • almost instant ($K IOT must be copied)
  • new data immediately searchable

OOW Notes - What is new in Oracle DB App. Development.

Monday 12:15 - Whats new in Oracle Database App. Development – Thomas Kyte


Database improvements


Application client continuity called TAF


problem: if you click an application button twice, did it really commit twice? Or, if a node failure happened, did my transaction roll back or finish successfully?


Transaction guard – first rdbms to preserve commit outcome.. at most one transaction execution. Used by jdbc-thin, oci, occi, odp.net


it is now possible to ask transaction guard whether the transaction was rolled back or committed.


Logs are info held in the client side application server.


Sql translation framework


by creating translation profiles it is now possible to translate Transact-SQL statements into Oracle SQL.

--create a profile and a translation
dbms_sql_translator.create_profile(profile_name);
dbms_sql_translator.register_sql_translation(profile_name, 'select * from scott.emp', 'select * from scott.dept');

--in order to use it in a session
alter session set sql_translation_profile=profile_name;
alter session set events '10601 trace name context forever, level 32';


APEX improvements


michael fitcher → web db → html db → apex

web browser → web listener / or web server with mod_plsql / or apex listener → database

4.2.3 is available

html5 supported charts and page items

packaged applications
  • ask us (asktom.oracle.com)
  • group calendar
  • project tracking
  • issue tracker
  • checklist manager
  • meeting minutes

full support for apex in oracle multi-tenant architecture (apex installed in a container, patched and upgraded across all pluggable databases)

full support for 32K varchar2 columns
integration with real application security coming in apex 5.0


SQL Developer technology

sql command line extended reporting and charting.
Cart is enhanced.
Reports are greatly enhanced.
UI enhancements → connection border colors can now be changed
includes data modeler version 4.0
jdk 7 support and above

10012 → what is new in sqldeveloper 4.0 (I am already registered)

ODP.NET

transaction guard support guarantees that a commit should be done at most once.
Support for PLSQL type (boolean)

Support for all 12c types.
Multitenant support
Implicit ref cursor support
Array binding

Apply keyword in SQL for enhanced LINQ support

global data services support
  • region based workload routing
  • connect time load balancing → global data services (gds)
  • distributed replicas
Networking

Data compression levels
  • connection level (connection string)
  • service level (tnsnames.ora)
  • database level (sqlnet.ora)
Large buffers
  • default sdu is 8k (was 2k before 11g)
  • max values was 64k
  • max values is now 2m (clobs and blobs)
Multitenant data source

DRCP

Database resident connection pool vs shared server (serious differences and advantages)

JDBC

  • Jpublisher maps packages to jdbc in 12.1
  • In database map reduce !?!
  • Drop in hadoop mappers and reducers (run as is)
  • Support for hadoop ecosystem components
  • SQL support for hadoop
OCI

Performance

  • xml based client config file
  • auto tune of statement cache size
  • array dml returns affected row count in an array

Database

  • full pluggable database support

SQL and Datatypes

  • long string support
  • row counts are now 8-byte integers
  • bind directly to plsql packages
  • implicit result set supported

PL/SQL

query PL/SQL table types directly (no more need to create SQL types)
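
A minimal sketch of what this enables (the package and values are illustrative):

create or replace package pkg_stats as
  type t_num_list is table of number;   -- declared in PL/SQL only, no CREATE TYPE needed
  function top_scores return t_num_list;
end;
/
create or replace package body pkg_stats as
  function top_scores return t_num_list is
  begin
    return t_num_list(98, 87, 75);
  end;
end;
/
declare
  l_total number := 0;
begin
  -- 12c: TABLE() accepts the package-declared collection type
  for r in (select column_value as score from table(pkg_stats.top_scores())) loop
    l_total := l_total + r.score;
  end loop;
  dbms_output.put_line('total: ' || l_total);
end;
/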

less namespace clutter

grant roles to code
  • invoker's rights routines
  • code will run with the current set of privileges of the invoker plus those roles
  • white list (a stored procedure can only be run from another piece of code, never directly from SQL Developer; see the sketch below)
  • fine grained privileges to protect code
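
A rough sketch of both mechanisms, white-listing callers with ACCESSIBLE BY and granting a role to a program unit (all object names here are illustrative):

-- white list: this procedure is only callable from PCK_REPORT_API
create or replace procedure sch_dmall.prc_build_report
  accessible by (sch_dmall.pck_report_api)
as
begin
  null;  -- report logic would go here
end;
/

-- code based access control: the role is granted to the code, not to the user
create role r_report_reader;
grant select on sch_dmall.product to r_report_reader;
grant r_report_reader to procedure sch_dmall.prc_build_report;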

improved introspection
utl_call_stack → provides procedural, structured access to the information
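
A small sketch of what that looks like (names are illustrative):

create or replace procedure prc_who_called_me as
begin
  -- frame 1 is this procedure itself, frame 2 is whoever called it
  if utl_call_stack.dynamic_depth > 1 then
    dbms_output.put_line('called from ' ||
      utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(2)) ||
      ' at line ' || utl_call_stack.unit_line(2));
  end if;
end;
/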


references:

www.mhprofessional.com/dbsec

Tuesday, June 11, 2013

Compressing LOB segments with securefiles

With Oracle Database 11g we got a new storage option for LOB segments: SecureFiles. By enabling SecureFiles, LOB segments can be compressed significantly. After facing an unusually fast-growing tablespace (approximately 30 GB per day) that could have halted the database because of this unplanned growth, I started digging into the problem and found a LOB column holding unstructured HTML data. Its size was more than 350 GB inside a database of 600 GB in total, which means more than half of the database was just this one column. This is simply unacceptable. In the following example you can see the same scenario on the test system. Here the uncompressed LOB column holds 22 GB of space.
select 
  OWNER, 
  TABLE_NAME, 
  COLUMN_NAME, 
  SEGMENT_NAME, 
  INDEX_NAME 
from 
  DBA_LOBS 
where 
  TABLE_NAME='AUD_PRODUCT_BIG_DATA';

/*
OWNER       TABLE_NAME            COLUMN_NAME   SEGMENT_NAME                INDEX_NAME
--------    --------------------  -----------   -------------------------   ------------------------
SCH_DMALL   AUD_PRODUCT_BIG_DATA  DESCRIPTION   AUD_PRODUCT_BIG_DATA_CLOB   SYS_IL0000126287C00004$$
*/

select bytes/1024/1024 from dba_segments where segment_name='SYS_LOB0000126287C00004$$';  
--22GB

After moving the LOB column with the "SECUREFILE" storage option and the "MEDIUM COMPRESS" option, the size of the LOB segment decreased to approximately 833 MB. That is more than a 1/25 compression ratio, which is as valuable as gold in this production database.
ALTER TABLE sch_dmall.aud_product_big_data move lob (description) store
AS
  SECUREFILE aud_product_big_data_clob (TABLESPACE TS_AUD_LOB_SEGMENT COMPRESS MEDIUM) parallel 4;

select bytes/1024/1024 from dba_segments where segment_name='AUD_PRODUCT_BIG_DATA_CLOB';  
--833.125


There are three compression levels, LOW, MEDIUM and HIGH, which define how aggressively the LOB is compressed. Of course, the higher the level, the more CPU power you need to spend. Here I used MEDIUM to see the result, and it is convincing enough not to change the setting.
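
If CPU headroom allows, the compression level of an existing SecureFiles LOB can also be changed in place; a small sketch on the same table (I have not measured the extra CPU cost of HIGH here):

alter table sch_dmall.aud_product_big_data
  modify lob (description) (compress high);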


References:

http://blog.trivadis.com/b/mathiaszarick/archive/2012/03/05/lob-compression-with-oracle-strange-multiple-physical-reads.aspx

http://www.oracle-base.com/articles/11g/secure-files-11gr1.php


Monday, September 12, 2011

Random Data Generation and DBMS_RANDOM

Generating random data is a subject of its own. Here I will only write about the need for random data in a database environment and how a database developer or administrator can generate random data to fulfil their requirements. I have personally used random data generation for three main purposes so far.

DATAMASKING

In almost every company I have worked for, data masking is a must-have procedure for test data generation, especially after restoring production data to a prelive or day-1 database. In many cases simply truncating the sensitive data or updating the columns to NULL may be enough for security purposes. But in some cases the application logic depends on the data being present and has to keep working. In such cases there are two main roads: hash the real data to de-identify it, or generate random data to represent the real sensitive values. The generated data may also need to follow the formatting of well-known structures such as credit card numbers, postal codes or customer telephone numbers.
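
In practice this usually ends up as a plain UPDATE over the restored data. A small sketch against a hypothetical sch_test.customer_cards table, reusing the card number expression from Ex.1 below:

--mask restored production card numbers with random but well-formed values
update sch_test.customer_cards
   set card_number = lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
                     lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
                     lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
                     lpad(round(dbms_random.value*power(10,4)),4,0);
commit;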

PASSWORD GENERATION

After applying the new password policy to our database, passwords forgotten by end users are not recovered that easily any more, because of the password complexity rules (enforced by the password verify function) and the password_reuse_max setting of the profile. This means the portal used by level-1 support is no longer enough: the procedure behind its button simply resets the password to a default value that has never changed since the procedure was written, which now raises the ORA-28007: the password cannot be reused exception.

TEST DATA GENERATION

In order to test any functionality or new feature by myself, generating data for newly created tables is a necessity. Formatted varchar2 and number columns are very handy for testing how certain functionality behaves against a large amount of structured table data.

Dictionary views can be used for this purpose, but if they are not enough for the individual test conditions, then generating random data in the desired format is crucial.
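
For example, a dictionary view can quickly seed a throwaway test table before any specially formatted random data is needed:

create table t_test_objects as
select object_id   as id,
       object_name as name,
       created
  from all_objects
 where rownum <= 1000;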

DBMS_RANDOM PACKAGE

For all of the purposes mentioned, the Oracle database has a built-in package, DBMS_RANDOM, for generating random data. There are several functions in this package which can be used to obtain the needed results with some manipulation of the output values. The main functions for producing random numbers are:

DBMS_RANDOM.VALUE(), whose output value is of the NUMBER datatype; this means it can produce values with up to 38 digits of precision.


select dbms_random.value() as result from dual;

RESULT
------
0.33363563290178533954768590716355821427


DBMS_RANDOM.VALUE(arg1 number, arg2 number) returns a random value between the supplied limits. The following sql should return a random number between 1 and 20.


select dbms_random.value(1,20) as result from dual;

RESULT
------
14.25291834803147861791906299688905822678


DBMS_RANDOM.STRING() can also be used to generate a random character string. The function has two parameters: one for the type of characters to be used and a second for the length of the string. The following function definition is copied here from an Oracle 11gR2 database's DBMS_RANDOM package spec. It explains itself very well and nothing needs to be added.


FUNCTION string (opt char, len NUMBER)

/* 
"opt" specifies that the returned string may contain:

'u','U': upper case alpha characters only
'l','L': lower case alpha characters only
'a','A': alpha characters only (mixed case)
'x','X': any alpha-numeric characters
'p','P': any printable characters
*/
RETURN VARCHAR2;  
--string of <len> characters


So, if you need to produce a random string 8 characters long consisting of lower case alpha characters, it is easy with the DBMS_RANDOM.STRING() function:


select dbms_random.string('l',8) as result from dual;

RESULT
------
yhxctomv


DBMS_RANDOM BY EXAMPLES

All the explanations of the examples can be found in the document I shared here. You can also find a link to the same document at the bottom of this post.


--------------------------
--masking the card numbers
--Ex.1
--------------------------
select 
lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
lpad(round(dbms_random.value*power(10,4)),4,0) || '-' ||
lpad(round(dbms_random.value*power(10,4)),4,0)  as card_number
from dual 
connect by level <=5; 
/*
CARD_NUMBER
-----------
2877-6639-0728-5456
6026-6002-2218-9038
7679-8441-0899-2826
8294-6783-6110-7988
1836-0407-9206-3333
*/

--------------------------
--masking the card numbers
--Ex.2
--------------------------
select 
ltrim(to_char(dbms_random.value(1,9999),'0000')) || '-' ||
ltrim(to_char(dbms_random.value(1,9999),'0000')) || '-' ||
ltrim(to_char(dbms_random.value(1,9999),'0000')) || '-' ||
ltrim(to_char(dbms_random.value(1,9999),'0000'))  as card_number
from dual 
connect by level <=5; 
/*
CARD_NUMBER
-----------
1558-9846-7194-5325
5109-3233-0641-9209
3081-5946-9840-6615
4400-9638-6333-9113
2928-9883-1771-0465
*/

--------------------------
--masking the card numbers
--Ex.3
--------------------------
select 
ltrim(replace(to_char(round(dbms_random.value*power(10,16)),'0000,0000,0000,0000'),',','-')) as card_number
from dual
connect by level <=5;
/*
CARD_NUMBER
-----------
0157-8125-6418-6025
3829-9039-1357-9048
2876-1086-5371-8152
2775-1748-2591-2523
2058-2404-1101-5320
*/

--------------------------
--masking the card numbers
--Ex.4
--------------------------
select 
substr(abs(dbms_random.random),1,4) || '-' ||
substr(abs(dbms_random.random),1,4) || '-' ||
substr(abs(dbms_random.random),1,4) || '-' ||
substr(abs(dbms_random.random),1,4)  as card_number
from dual 
connect by level <=5; 
/*
CARD_NUMBER
-----------
8639-7576-1359-3965
1317-1525-2526-1796
1043-5881-1000-7113
2106-3239-8662-3769
1461-7473-5870-6829
*/

--------------------------
--masking the phone number
--Ex.5
--------------------------
select 
  '+' || 
  round(DBMS_RANDOM.VALUE(1,99)) || '-' ||
  round(DBMS_RANDOM.VALUE(10,99)) || '-' ||
  round(DBMS_RANDOM.VALUE(1000000,9999999)) as phone_number
from dual
connect by level <= 5;
/*
PHONE_NUMBER
------------
+8-44-9146987
*/

----------------------
--masking the postcode
--Ex.6
----------------------
select 
  round(dbms_random.value(1000,9999)) || '-' || 
  dbms_random.string('U',2) as postcode 
from dual;
/*
POSTCODE
--------
4997-QP
*/



What if you have a password verify function which enforces that passwords are at least 8 characters long and must contain alphanumeric characters, and this password complexity is combined with a profile that has a password lifetime of two months and a password_reuse_max of four? If there is a predefined automatic case, explained in detail in the paper mentioned before, which needs random password generation, then here is an example that can be used:

-----------------
--random password
--Ex.7
-----------------
select  
  DBMS_RANDOM.STRING('A',1) || 
  round(DBMS_RANDOM.VALUE()*10) || 
  DBMS_RANDOM.STRING('X',6) as password 
from dual;
/*
PASSWORD
--------
w7N3C1YG
*/


TEST DATA GENERATION

I generally use two different methods when generating test data to fill test tables. One of them is using the DBMS_RANDOM package and the other is filling the columns with bulk data that repeats the same output, as you can see in the following examples.

In the first part of the following example, the code tries to simulate a user name by using lower case strings of random lengths between five and fifteen. The first name starts with an uppercase letter thanks to the INITCAP() function and the last name is fully in uppercase thanks to the UPPER() function. The second part is not that clever; it just creates the same data over and over, which can substitute for a customer name or a user name.

----------------------
--test data generation
--Ex.8
----------------------
select 
  initcap(dbms_random.string('L',round(dbms_random.value(5,15)))) || ' ' || 
  upper(dbms_random.string('L',round(dbms_random.value(5,15)))) as name
from dual
connect by level <= 5;
/*
NAME
----
Abzvsidgbcfa AGUGIR
Wvuogptkxwhdwa IPOOXTVBLLCNPV
Yiwcgh SGPFKJYCDISO
Radshiyidcrst ZNKNSEYUZXVWY
Daxeqzugq LKJILZJEYULVI
*/

select
initcap(lpad('x',round(dbms_random.value(5,15)),'x')) || ' ' || 
upper(lpad('y',round(dbms_random.value(5,15)),'y')) as name
from dual
connect by level <= 5;
/*
NAME
----
Xxxxxxxxxxxx YYYYYYYYYYY
Xxxxxxxxxxxxxxx YYYYYYYYYYYYY
Xxxxxxxxxxxxxx YYYYYYYYY
Xxxxxxxx YYYYYYYYYYY
Xxxxxx YYYYYYYY
*/


select 
  initcap(lpad('x',9,'x')) || ' ' ||
  upper(lpad('y',9,'y')) as name
from dual
connect by level <= 5;
/*
NAME
----
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
Xxxxxxxxx YYYYYYYYY
*/


GENERATING MEANINGFUL DATA

Up to now, the generated data was completely dummy. For example, while masking the credit card number it was not important whether it was a valid card number, or whether the names were real names. All the generated data was dummy random data which does not make any sense; it is just strings or numbers formed from random characters or digits.

But with the following example, random data can be generated which makes sense (or at least is a little more sensible than the previous methods).

In this small example, there is a lookup table (which can be extended as far as the individual case needs) and the phone number generator picks up a random country code from the lookup table to generate the phone number. Every random phone number generated will have a valid country code by using the following example. Of course this example can be extended for the area codes as well.

--------------------------
--generating sensible data
--Ex.9
--------------------------
create table t_country_codes 
  (key number(2), 
   code number(2), 
   country varchar2(20));
   
insert into t_country_codes values (1,1,'United States');
insert into t_country_codes values (2,31,'Netherlands');
insert into t_country_codes values (3,44,'United Kingdom');
insert into t_country_codes values (4,49,'Germany');
insert into t_country_codes values (5,90,'Turkey');
commit;

select 
  '+' || code || '-' || 
  round(dbms_random.value(10,99)) || '-' || 
  round(DBMS_RANDOM.VALUE(100000,999999)) as phone_number
from 
  t_country_codes 
where 
  key=(select round(DBMS_RANDOM.VALUE(1,5)) from dual);
/*  
PHONE_NUMBER
------------
+31-54-732777
*/  


This post is taken from a paper I wrote for internal use and enhanced for this blog post. You can download the paper from the following link if you have Google Docs access: Generating Random Data in Oracle 11g Database

Tuesday, June 28, 2011

Export problem and the invalid XDB library

This was supposed to be a regular export to the filesystem on an AIX 5.1 operating system; the Oracle Database version is 9.2.0.7 and the Oracle client version is 10.2.0.3. But a weird error just popped up and interesting search results came up. Here is the story of the export.


Export: Release 10.2.0.3.0 - Production on Wed Oct 3 11:52:37 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user PARITEM 
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user PARITEM 
About to export PARITEM's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
EXP-00056: ORACLE error 600 encountered
ORA-00600: internal error code, arguments: [unable to load XDB library], [], [], [], [], [], [], []
EXP-00000: Export terminated unsuccessfully


It seems there is a problem with the XDB library. When I selected from dba_registry to see the components and their statuses, I realized there is a problem with the XML Database component. After investigating and searching Metalink, I came upon a document which explains how to recreate the XML database (please see the end of the post for the related resources).


--INVALID XML Database component

SQL> col comp_name format a30
SQL> set pagesize 100
SQL> select comp_name, status, version from dba_registry;

COMP_NAME                      STATUS      VERSION
------------------------------ ----------- ------------------------------
Oracle9i Catalog Views         VALID       9.2.0.7.0
Oracle9i Packages and Types    VALID       9.2.0.7.0
Oracle Workspace Manager       VALID       9.2.0.1.0
JServer JAVA Virtual Machine   VALID       9.2.0.7.0
Oracle XDK for Java            VALID       9.2.0.9.0
Oracle9i Java Packages         VALID       9.2.0.7.0
Oracle Text                    VALID       9.2.0.7.0
Oracle XML Database            INVALID     9.2.0.7.0
Spatial                        VALID       9.2.0.7.0
Oracle Ultra Search            VALID       9.2.0.7.0
Oracle Data Mining             VALID       9.2.0.7.0
OLAP Analytic Workspace        UPGRADED    9.2.0.7.0
Oracle OLAP API                UPGRADED    9.2.0.7.0
OLAP Catalog                   VALID       9.2.0.7.0

14 rows selected.

-- Dropping xml database
SQL> @?/rdbms/admin/catnoqm.sql;
SQL> drop trigger sys.xdb_installation_trigger;
SQL> drop trigger sys.dropped_xdb_instll_trigger;
SQL> drop table dropped_xdb_instll_tab;

-- Recreating the xml database
SQL> startup migrate;
SQL> @?/rdbms/admin/catproc.sql;
SQL> @?/rdbms/admin/catqm.sql;
SQL> @?/rdbms/admin/dbmsxsch.sql;
SQL> @?/rdbms/admin/catxdbj.sql; -- only in 9i
SQL> @?/rdbms/admin/xdbpatch;

SQL> select comp_name, status, version from dba_registry;

COMP_NAME                      STATUS      VERSION
------------------------------ ----------- ------------------------------
Oracle9i Catalog Views         VALID       9.2.0.7.0
Oracle9i Packages and Types    VALID       9.2.0.7.0
Oracle Workspace Manager       VALID       9.2.0.1.0
JServer JAVA Virtual Machine   VALID       9.2.0.7.0
Oracle XDK for Java            VALID       9.2.0.9.0
Oracle9i Java Packages         VALID       9.2.0.7.0
Oracle Text                    VALID       9.2.0.7.0
Oracle XML Database            VALID       9.2.0.7.0
Spatial                        VALID       9.2.0.7.0
Oracle Ultra Search            VALID       9.2.0.7.0
Oracle Data Mining             VALID       9.2.0.7.0
OLAP Analytic Workspace        UPGRADED    9.2.0.7.0
Oracle OLAP API                UPGRADED    9.2.0.7.0
OLAP Catalog                   VALID       9.2.0.7.0

14 rows selected.

Now re-run the export!


resources:
http://www.oratransplant.nl/2005/11/22/unable-to-export-char-semantic-102-database/
Note:339938.1 - Full Export From 10.2.0.1 Aborts With EXP-56 ORA-932 (Inconsistent Datatypes) EXP-0
Note:243554.1 - How to Deinstall and Reinstall XML Database (XDB)

Thursday, June 16, 2011

The way of opatch 10.2.0.4 Database on AIX


[oracle@]:/oracle/asmhome1/OPatch > opatch version
Invoking OPatch 10.2.0.5.1

OPatch Version: 10.2.0.5.1

OPatch succeeded.

[oracle@]:/oracle/asmhome1/OPatch > cd
[oracle@]:/oracle > . .profile

[oracle@]:/oracle > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 09:48:52 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system checkpoint;

System altered.

SQL> shutdown abort;
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

[oracle@]:/oracle > 
[oracle@]:/oracle > . .profile_asm
[YOU HAVE NEW MAIL]
[oracle@]:/oracle > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 09:54:58 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown;
ASM diskgroups dismounted
ASM instance shutdown
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@]:/oracle > lsnrctl stop

LSNRCTL for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production on 01-JUN-2011 09:49:09

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=defbora01)(PORT=1521)))
The command completed successfully

[oracle@]:/oracle/orahome1/OPatch > su - 
root's Password: 
[root@]:/home/root> /usr/sbin/slibclean
[root@]:/home/root>

[oracle@]:/oracle > cd $ORACLE_HOME/OPatch
[oracle@]:/oracle/asmhome1/OPatch > ls
11725015                              jlib                                  opatch.pl
README.txt                            ocm                                   opatchprereqs
crs                                   opatch                                p11725015_10204_AIX5L-cpuapr2011.zip
docs                                  opatch.bat                            p6880880_102000_AIX64-5L-opatch.zip
emdpatch.pl                           opatch.ini
[oracle@]:/oracle/asmhome1/OPatch > cd 11725015 
[oracle@]:/oracle/asmhome1/OPatch/11725015 > opatch napply --skip_subset --skip_duplicate

... 
(output truncated)
...

--------------------------------------------------------------------------------
**********************************************************************
**                       ATTENTION                                  **
**                                                                  **
** Please note that this Patch Installation is                      **
** not complete until all the Post Installation instructions        **
** noted in the Readme accompanying this patch, have been           **
** successfully completed.                                          **
**                                                                  **
**********************************************************************

--------------------------------------------------------------------------------


The local system has been patched and can be restarted.

UtilSession: N-Apply done.

OPatch succeeded.

[oracle@]:/oracle/asmhome1/OPatch/11725015 > 
[oracle@]:/oracle/asmhome1/OPatch/11725015 > opatch lsinv

... 
(output truncated)
...

[oracle@]:/oracle > . .profile
[oracle@]:/oracle/orahome1/OPatch > cd 11725015 
[oracle@]:/oracle/orahome1/OPatch/11725015 > opatch napply --skip_subset --skip_duplicate

...
(output truncated)
...

[oracle@]:/oracle/orahome1/OPatch/11725015 > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 10:20:14 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 3221225472 bytes
Fixed Size                  2087480 bytes
Variable Size             637535688 bytes
Database Buffers         2566914048 bytes
Redo Buffers               14688256 bytes
Database mounted.
Database opened.
SQL> @?/rdbms/admin/catbundle.sql cpu apply
SQL> @?/rdbms/admin/utlrp.sql

[oracle@]:/oracle/orahome1/cpu/view_recompile > ls -l
total 24
-rwxr-xr-x    1 oracle   dba            2095 Jul 10 2008  recompile_precheck_jan2008cpu.sql
-rwxr-xr-x    1 oracle   dba            5143 Jul 10 2008  view_recompile_jan2008cpu.sql
[oracle@]:/oracle/orahome1/cpu/view_recompile > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 1 10:34:16 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @recompile_precheck_jan2008cpu.sql;

Running precheck.sql...

Number of views to be recompiled :2226
-----------------------------------------------------------------------

Number of objects to be recompiled :4347
Please follow the README.txt instructions for running viewrecomp.sql

PL/SQL procedure successfully completed.

SQL> @view_recompile_jan2008cpu.sql;

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


1 row created.


Commit complete.

No. of Invalid Objects is :1848
Please refer to README.html to for instructions on validating these objects

PL/SQL procedure successfully completed.

Logfile for the current viewrecomp.sql session is : vcomp_CORET_01Jun2011_11_47_53.log
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area 3221225472 bytes
Fixed Size                  2087480 bytes
Variable Size             637535688 bytes
Database Buffers         2566914048 bytes
Redo Buffers               14688256 bytes
Database mounted.
Database opened.
SQL> 

SQL> select bundle_series, action, version from dba_registry_history;

BUNDLE_SERIES          ACTION  VERSION
---------------------- ---------------- --------
CPU                    APPLY CPU 10.2.0.4


SQL> SELECT * FROM registry$history where ID = '6452863';

SQL> select bundle_series, action, version from registry$history where ID = '6452863';

BUNDLE_SERIES                  ACTION     VERSION
------------------------------ ---------- ------------------------------
CPU

Saturday, November 13, 2010

Basic Team Coding Extension for Oracle

Our development group was complaining about overriding each other's code by recompiling the same PL/SQL program unit that another developer was actively working on. TOAD has a built-in Team Coding functionality, but there are developers in the department who do not use TOAD.

I developed a tiny Team Coding functionality that works inside the database using a DDL trigger. With this BEFORE CREATE trigger, developers are not able to alter any program unit without first checking it out. You can find a small example below.
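
The heart of that approach is a database-level BEFORE CREATE trigger. Below is a minimal sketch of what such a trigger could look like; the tcode_checkouts repository table and its columns are assumptions here, and the real checks live in the PCK_TCODE_CONTROL package whose script is linked at the end of this post.

create or replace trigger trg_tcode_before_create
before create on database
declare
  v_locked_by varchar2(30);
begin
  -- only guard PL/SQL program units (the real package also checks
  -- whether Team Coding is enabled for the schema at all)
  if ora_dict_obj_type in ('FUNCTION', 'PROCEDURE', 'PACKAGE', 'PACKAGE BODY', 'TRIGGER') then
    select max(checked_out_by)
      into v_locked_by
      from tcode_checkouts
     where owner = ora_dict_obj_owner
       and object_name = ora_dict_obj_name;

    if v_locked_by is null then
      raise_application_error(-20501,
        'You are not allowed to compile the code! First PCK_TCODE_CONTROL.CHECKOUT the unit.');
    elsif v_locked_by <> ora_login_user then
      raise_application_error(-20500,
        'Unit Locked by another user! Check PCK_TCODE_CONTROL.CHECK_AVAILIBILITY function');
    end if;
  end if;
end;
/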

Until you declare in which schema Team Coding should work, everything behaves as usual.



connect SYSTEM/***@ORCL11G

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--FUNCTION FNC_TEST compiled



After declaring the schema in which you want to enable Team Coding:



exec PCK_TCODE_CONTROL.set_param('SCHEMA_NAME','SYSTEM','INSERT');
--anonymous block completed

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--ORA-20501: You are not allowed to compile the code! 
--           First PCK_TCODE_CONTROL.CHECKOUT the unit.

exec PCK_TCODE_CONTROL.checkout('FNC_TEST','SYSTEM');
--anonymous block completed

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;
--FUNCTION FNC_TEST compiled

connect u_epeker/***@ORCL11G

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--ORA-20500: Unit Locked by another user! 
--           Check PCK_TCODE_CONTROL.CHECK_AVAILIBILITY function

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
/*
"PCK_TCODE_CONTROL.CHECK_AVAIL('FNC_TEST','SYSTEM')"
"TCODE: SYSTEM.FNC_TEST is checked out by SYSTEM on 13.11.2010 19:26:30"
*/

connect SYSTEM/***@ORCL11G

exec PCK_TCODE_CONTROL.checkin('FNC_TEST','SYSTEM');
--anonymous block completed

connect U_EPEKER/***@ORCL11G

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
--null

exec PCK_TCODE_CONTROL.checkout('FNC_TEST','SYSTEM');
--anonymous block completed

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
/*
"PCK_TCODE_CONTROL.CHECK_AVAIL('FNC_TEST','SYSTEM')"
"TCODE: SYSTEM.FNC_TEST is checked out by U_EPEKER on 13.11.2010 19:33:53"
*/

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--FUNCTION FNC_TEST compiled



As you can see, with this functionality in place people are no longer able to override each other's code. You can add this functionality to your database by running the following script. This small script does not keep track of versions of the program units; anyone can develop a version history functionality on top of this small script and its repository, or maybe I will handle version history in another version.

You can find the script here