
Tuesday, December 9, 2014

Fetching Multiple Rows with EXECUTE IMMEDIATE

execute immediate is very handy for executing dynamic SQL statements inside a PL/SQL block. It can accept IN and OUT parameters and can also return output values into PL/SQL variables through the into clause. But returning values can be tricky when the statement returns multiple rows: the into clause works fine for a single row, but with multiple rows it is quite easy to hit the following exception.

declare  
  
  v_cmd varchar2(100);
  v_table_owner varchar2(30) := 'SCH_OLTP';  
  v_table_name varchar2(30) := 'T_VISIT_STATS_DETAILED';
  
  v_column_name varchar2(30);
  
begin 

  v_cmd := 'select column_name from dba_tab_columns where owner=:v_table_owner and table_name=:v_table_name';
   
  execute immediate v_cmd into v_column_name
    using v_table_owner, v_table_name;
  
end;
/*
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at line 13
*/

So it is important to check the returned row count before using execute immediate with a plain into clause. Alternatively, you can use the bulk collect clause to fetch multiple returned rows into a collection.

declare 
  type typ_varcharlist is table of varchar2(30);
  
  col_list typ_varcharlist;  
  
  v_cmd varchar2(100);
  v_table_owner varchar2(30) := 'SCH_OLTP';  
  v_table_name varchar2(30) := 'T_VISIT_STATS_DETAILED';
  
begin 
  v_cmd := 'select column_name from dba_tab_columns where owner=:v_table_owner and table_name=:v_table_name';
  
  
  execute immediate v_cmd bulk collect into col_list 
    using v_table_owner, v_table_name;
  
  for n_ctl in 1 .. col_list.count() loop
    dbms_output.put_line (col_list(n_ctl));
  end loop;
  
end;
/*
DUMMY
SUCCESS
VISITED_USER_ID
FROM_IP
VISIT_DATE
PAGE_ID
*/
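
If a single-row fetch is really what is needed, another option is to trap the predefined exceptions instead of letting ORA-01422 escape. Below is a minimal sketch using the same owner and table name as above:

declare
  v_cmd         varchar2(100);
  v_table_owner varchar2(30) := 'SCH_OLTP';
  v_table_name  varchar2(30) := 'T_VISIT_STATS_DETAILED';
  v_column_name varchar2(30);
begin
  v_cmd := 'select column_name from dba_tab_columns where owner=:v_table_owner and table_name=:v_table_name';

  execute immediate v_cmd into v_column_name
    using v_table_owner, v_table_name;

  dbms_output.put_line(v_column_name);
exception
  when too_many_rows then
    --ORA-01422: more than one row came back, bulk collect is the better fit
    dbms_output.put_line('more than one row returned, use bulk collect');
  when no_data_found then
    --ORA-01403: no matching rows at all
    dbms_output.put_line('no rows returned');
end;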

Reference:
Oracle PLSQL Users Guide
Thesis and Papers > Oracle 11g Execute Immediate


Wednesday, September 25, 2013

OOW Notes - What is new in Oracle DB App. Development.

Monday 12:15 - What's new in Oracle Database App. Development – Thomas Kyte


Database improvements


Application Continuity for clients (the evolution of TAF)


problem: if you click an application button twice, did it really commit twice? Or, if a node failure happened, was my transaction rolled back or did it actually complete successfully?


Transaction Guard – the first RDBMS feature to preserve the commit outcome; at most one transaction execution. Used by JDBC-thin, OCI, OCCI and ODP.NET.


It is now possible to ask Transaction Guard whether a transaction was rolled back or committed.


The logs holding this information are kept on the client side, in the application server.
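
On the server side the outcome can be checked with the DBMS_APP_CONT package (12c). A minimal sketch, assuming the logical transaction id (LTXID) has already been obtained from the client driver (for example via JDBC):

declare
  v_ltxid          raw(64);   --the LTXID must come from the client driver; left null here, so this is only a sketch
  v_committed      boolean;
  v_call_completed boolean;
begin
  dbms_app_cont.get_ltxid_outcome(v_ltxid, v_committed, v_call_completed);
  if v_committed then
    dbms_output.put_line('transaction committed');
  else
    dbms_output.put_line('transaction did not commit');
  end if;
end;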


SQL Translation Framework


By creating translation profiles it is now able to translate Transact-SQL into Oracle SQL/PL-SQL.

--create a profile and translation
dbms_sql_translator.create_profile(profile_name);
dbms_sql_translator.register_sql_translation(profile_name, 'select * from scott.emp', 'select * from scott.dept');

--in order to use it in a session
alter session set sql_translation_profile=profile_name;
alter session set events '10601 trace name context forever, level 32';


APEX improvements


Mike Hichwa → WebDB → HTML DB → APEX

web browser → web listener / or web server with mod_plsql / or apex listener → database

4.2.3 is available

HTML5-based charts and page items

packaged applications
  • ask us (asktom.oracle.com)
  • group calendar
  • project tracking
  • issue tracker
  • checklist manager
  • meeting minutes

full support for apex in oracle multi-tenant architecture (apex installed in a container, patched and upgraded across all pluggable databases)

full support for 32K varchar2 columns
integration with real application security coming in apex 5.0


SQL Developer technology

SQL command line with extended reporting and charting.
The Cart is enhanced.
Reports are greatly enhanced.
UI enhancements → connection border colors can now be changed
Includes Data Modeler version 4.0
JDK 7 and above supported

10012 → what is new in sqldeveloper 4.0 (I am already registered)

ODP.NET

Transaction Guard support guarantees that a commit is applied at most once.
Support for the PL/SQL BOOLEAN type

Support for all 12c types.
Multitenant support
Implicit ref cursor support
Array binding

Apply keyword in SQL for enhanced LINQ support

global data services support
  • region based workload routing
  • connect time load balancing → global data services (gds)
  • distributed replicas
Networking

Data compression levels (see the sqlnet.ora sketch below)
  • connection level (connection string)
  • service level (tnsnames.ora)
  • database level (sqlnet.ora)
Large buffers
  • default SDU is 8K (was 2K before 11g)
  • maximum value was 64K
  • maximum value is now 2M (for CLOBs and BLOBs)
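
A minimal sqlnet.ora sketch of the database-level settings mentioned above; the parameter values are only illustrative examples:

# sqlnet.ora (12c) -- values are illustrative
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)
DEFAULT_SDU_SIZE = 65536
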
Multitenant data source

DRCP

Database resident connection pool vs shared server (serious differences and advantages)
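
A minimal sketch of enabling DRCP on the database side; clients then append :POOLED to the connect string (the service name below is only an example):

--start the default database resident connection pool
exec dbms_connection_pool.start_pool;

--an EZConnect string using the pool would look like host:1521/service_name:POOLED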

JDBC

  • JPublisher maps packages to JDBC in 12.1
  • In-database MapReduce !?!
  • Drop-in Hadoop mappers and reducers (run as-is)
  • Support for Hadoop ecosystem components
  • SQL support for Hadoop
OCI

Performance

  • xml based client config file
  • auto tune of statement cache size
  • array dml returns affected row count in an array

Database

  • full pluggable database support

SQL and Datatypes

  • long string support
  • row count is now an 8-byte integer
  • bind directly to PL/SQL packages
  • implicit result sets supported (see the sketch below)
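
A minimal sketch of an implicit result set (12c), using a hypothetical procedure name; the client receives the cursor without an OUT ref cursor parameter being declared:

create or replace procedure list_users as
  c sys_refcursor;
begin
  open c for select username from all_users;
  dbms_sql.return_result(c);  --the result set is returned to the client implicitly
end;
/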

PL/SQL

query PL/SQL table types directly from SQL (no need to create SQL-level types anymore)
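
A minimal sketch of querying a package-level PL/SQL collection type directly (12c); the package and type names are hypothetical:

create or replace package pkg_demo as
  type typ_num_list is table of number;
  function get_nums return typ_num_list;
end;
/
create or replace package body pkg_demo as
  function get_nums return typ_num_list is
    l typ_num_list := typ_num_list();
  begin
    for i in 1 .. 5 loop
      l.extend;
      l(i) := i * 10;
    end loop;
    return l;
  end;
end;
/
--in 12c this works without creating a SQL-level type
select column_value from table(pkg_demo.get_nums());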

less namespace clutter

grant roles to code
  • invoker's rights routines
  • the code runs with the invoker's current privileges plus the roles granted to the code
  • white list: a stored procedure can only be called from other designated code, never directly from SQL Developer (see the sketch below)
  • fine-grained privileges to protect code
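
A minimal sketch of these 12c features with hypothetical object names; the role grant to a procedure and the ACCESSIBLE BY white list are the parts named in the notes above:

create role r_hr_read;
grant select on hr.employees to r_hr_read;

--invoker's rights routine: runs with the invoker's privileges plus the role granted to the code
create or replace procedure list_emps authid current_user as
begin
  for r in (select last_name from hr.employees) loop
    dbms_output.put_line(r.last_name);
  end loop;
end;
/
grant r_hr_read to procedure list_emps;

--white list: this helper can only be called from list_emps, never directly
create or replace procedure emp_helper accessible by (procedure list_emps) as
begin
  null;
end;
/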

improved introspection
utl_call_stack → provides procedural, structured access to the information
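
A minimal sketch of UTL_CALL_STACK (12c); the procedure name is hypothetical:

create or replace procedure who_called_me as
begin
  --walk the call stack from the current frame outwards
  for i in 1 .. utl_call_stack.dynamic_depth loop
    dbms_output.put_line(
      utl_call_stack.unit_line(i) || ': ' ||
      utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(i)));
  end loop;
end;
/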


references:

www.mhprofessional.com/dbsec

Wednesday, March 14, 2012

Collection Types in table columns may cause performance problems

Although this is not allowed by the relational rules (Boyce-Codd normal form), in Oracle Database table columns can be defined as NESTED TABLE or COLLECTION (VARRAY) types. This functionality seems handy in some cases, since you can fetch a whole list of values from one column, but there can be consequences, especially for performance.

To demonstrate the collection and nested table types, I will create three different user tables. The first table uses a COLLECTION/VARRAY type as the column type, the second one uses a NESTED TABLE, and the third uses the built-in VARCHAR2 type to hold the demo users' phone numbers. Finally, I will insert some random data in order to run queries and examine the execution plans and statistics.


--create the test array type
create type TYP_PHONE_VARRAY as varray(5) of varchar2(15); 

--create the test table type
create type TYP_PHONE_TABLE as table of varchar2(15); 


--create table which uses the varray type
drop table T_USER_VARRAY;
create table T_USER_VARRAY
(
username varchar2(25),
fullname varchar2(25),
phone TYP_PHONE_VARRAY default null
);

--create table which uses the nested table
drop table T_USER_TABLE;
create table T_USER_TABLE
(
username varchar2(25),
fullname varchar2(25),
phone TYP_PHONE_TABLE default null
)
NESTED TABLE phone STORE AS nt_t_user_table_phone;

--create table which uses the built-in varchar as column type
drop table U_EPEKER.T_USER_STR;
create table U_EPEKER.T_USER_STR
(
username varchar2(25),
fullname varchar2(25),
phone1 varchar2(15) default null,
phone2 varchar2(15) default null,
phone3 varchar2(15) default null,
phone4 varchar2(15) default null,
phone5 varchar2(15) default null
);



After creating the types and the tables, some random data will be useful for querying and examining the execution plans. And of course I should not forget to gather statistics, including on the nested table.



--fill the varray typed table with the test data.
declare

  i number;
  v_name varchar2(10);
  v_surname varchar2(10);
  v_username varchar2(25);
  v_phone varchar2(15);
  
begin

  execute immediate ('truncate table T_USER_VARRAY');
  i:=0;
 
  while i<10 loop
    v_name := DBMS_RANDOM.STRING('u', 10);
    v_surname := DBMS_RANDOM.STRING('u', 10);
    v_username := substr(v_name,1,1) || '_' || v_surname;
    v_phone := '+31' || round(DBMS_RANDOM.VALUE(60,70)) || 
                        round(DBMS_RANDOM.VALUE(1000000,9999999)); 
    
    insert into t_user_varray
    values
    (
    v_username,
    v_name || ' ' || v_surname,
    TYP_PHONE_VARRAY(v_phone)
    );
  i:=i+1;
  end loop;
  
  commit;
  
end; 


--fill the nested table with the test data.
declare

  i number;
  v_name varchar2(10);
  v_surname varchar2(10);
  v_username varchar2(25);
  v_phone varchar2(15);
  
begin

  execute immediate ('truncate table T_USER_TABLE');
  i:=0;
 
  while i<10 loop
    v_name := DBMS_RANDOM.STRING('u', 10);
    v_surname := DBMS_RANDOM.STRING('u', 10);
    v_username := substr(v_name,1,1) || '_' || v_surname;
    v_phone := '+31' || round(DBMS_RANDOM.VALUE(60,70)) || 
                        round(DBMS_RANDOM.VALUE(1000000,9999999)); 
    
    insert into t_user_table
    values
    (
    v_username,
    v_name || ' ' || v_surname,
    TYP_PHONE_TABLE(v_phone)
    );
  i:=i+1;
  end loop;
  
  commit;
  
end; 


--fill the conventional table with the test data.
declare

  i number;
  v_name varchar2(10);
  v_surname varchar2(10);
  v_username varchar2(25);
  v_phone varchar2(15);
  
begin

  execute immediate ('truncate table T_USER_STR');
  i:=0;
 
  while i<10 loop
    v_name := DBMS_RANDOM.STRING('u', 10);
    v_surname := DBMS_RANDOM.STRING('u', 10);
    v_username := substr(v_name,1,1) || '_' || v_surname;
    v_phone := '+31' || round(DBMS_RANDOM.VALUE(60,70)) || 
                        round(DBMS_RANDOM.VALUE(1000000,9999999)); 
    
    insert into t_user_str
    (
    username,
    fullname,
    phone1
    )
    values
    (
    v_username,
    v_name || ' ' || v_surname,
    v_phone
    );
  i:=i+1;
  end loop;
  
  commit;
  
end; 

--gather the statistics of the filled tables
exec sys.dbms_stats.gather_table_stats(ownname=>'U_EPEKER', tabname=>'T_USER_STR', estimate_percent=>100, cascade=>true);
exec sys.dbms_stats.gather_table_stats(ownname=>'U_EPEKER', tabname=>'T_USER_VARRAY', estimate_percent=>100, cascade=>true);
exec sys.dbms_stats.gather_table_stats(ownname=>'U_EPEKER', tabname=>'T_USER_TABLE', estimate_percent=>100, cascade=>true);
exec sys.dbms_stats.gather_table_stats(ownname=>'U_EPEKER', tabname=>'NT_T_USER_TABLE_PHONE', estimate_percent=>100, cascade=>true);


After preparing the tables and filling them with some test data, we can examine the costs of identical queries against each of them.

Selecting all five phone columns and selecting only one column both show 8 consistent gets, but the amount of data returned differs, as expected. Since the developer cannot know in advance how many phone numbers exist for an individual user, the code would probably select all of the phone columns to be safe.

But when selecting from the table that uses the VARRAY type for the "phone" column, there is an unexpectedly high number of bytes and consistent gets at first look. As explained in the Oracle documentation, columns and variables defined as VARRAY types are objects that have to be instantiated in memory. This is the most probable reason for the excessive consistent gets, which are almost 5 times more than for the conventional VARCHAR2 column. Even though only one phone number is stored in the VARRAY, the read operation behaves as if there were five values in the list, because the object is instantiated as it was defined.

On the other hand, when I examine the query on the table that uses the NESTED TABLE type for the phone column, there is a considerable difference in consistent gets and bytes read compared with the VARRAY type. Most probably this indicates that NESTED TABLE types are not instantiated as objects in memory: they are real tables in the database, nested within another table, and they return the results they physically store. If you investigate, you will find the index and table segments in the tablespace (see the query sketch below).
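
A minimal sketch of that investigation, assuming the objects above are owned by U_EPEKER:

select segment_name, segment_type, bytes
from   dba_segments
where  owner = 'U_EPEKER'
and    segment_name in ('T_USER_TABLE', 'NT_T_USER_TABLE_PHONE');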


EXPLAIN PLAN FOR
SELECT 
  phone1, phone2, phone3, phone4, phone5 
FROM t_user_str 
where username='M_VKFHWFQOKL';  --1 row 48 bytes

/*
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
        810  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
*/

EXPLAIN PLAN FOR
SELECT 
  phone1
FROM t_user_str 
where username='M_VKFHWFQOKL';  --1 row 26 bytes

/*
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
        534  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
*/

EXPLAIN PLAN FOR
SELECT 
  phone 
FROM t_user_varray 
where username='V_RLQDMEPGBJ';  --1 row 35 bytes

/*
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         44  consistent gets
          0  physical reads
          0  redo size
       4619  bytes sent via SQL*Net to client
       1865  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
*/


EXPLAIN PLAN FOR
SELECT 
  phone 
FROM t_user_table 
where username='N_RNJASPGQOS';  --1 row 30 bytes

/*
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         16  consistent gets
          0  physical reads
          0  redo size
       1675  bytes sent via SQL*Net to client
        800  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
*/



In conclusion: if you are not sure that a VARRAY column will always be fully populated, using a NESTED TABLE instead may be more convenient for the performance of the application. These small reductions in I/O and consistent gets can be very valuable in a busy enterprise environment.


Resources:
http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/objects.htm
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjdes.htm#i448939
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjcol.htm#i454908


Tuesday, July 19, 2011

Hasty DBA's Most Probable Errors while testing TDE

Here I tested Oracle Transparent Data Encryption (TDE) on a 10gR2 (10.2.0.4) database installed on AIX 6.1, to see how it works and whether there are any undocumented difficulties in using the technology. As I was too hasty and careless, I ran into most of the cheap errors.

In Summary;

ORA-28368: cannot auto-create wallet

Solution: Create the wallet directory (in this case the default wallet directory, $ORACLE_BASE/admin/$ORACLE_SID/wallet). A sqlnet.ora sketch for a non-default location follows this summary.

ORA-28336: cannot encrypt SYS owned objects

Solution: Do not test under SYS schema :)

ORA-39173: Encrypted data has been stored unencrypted in dump file set.

Solution: Use the encryption_password parameter to keep encrypted data safe in the export dump; otherwise it is written as plain text.

ORA-31693: Table data object "ERGEMP"."T_ENCRYPTED" failed to load/unload and is being skipped due to error:
ORA-28336: cannot encrypt SYS owned objects

Solution: Export with your own user instead of SYS, as you cannot encrypt SYS-owned objects even in the export dumpfile.

Error: ORA-39087: directory name DATA_PUMP_DIR is invalid

Solution: Don't forget to grant read/write on the directory to the user you are exporting with :)
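
If you prefer a non-default wallet location, a minimal sqlnet.ora sketch would be the following (using the wallet directory created later in this post; adjust the path for your environment):

# sqlnet.ora
ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /oracle/admin/CORET/wallet)))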


The rest is as follows.


[oracle@defbora01]:/oracle/admin/CORET > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jul 14 12:18:08 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system set encryption key identified by "pass1234";
alter system set encryption key identified by "pass1234"
*
ERROR at line 1:
ORA-28368: cannot auto-create wallet


SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@defbora01]:/oracle/admin/CORET > mkdir wallet
[oracle@defbora01]:/oracle/admin/CORET > sql

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jul 14 12:20:03 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system set encryption key identified by "pass1234";

System altered.

SQL> 


/*** ***/

SQL> create table t_encrypted(col1 varchar2(50) encrypt);
create table t_encrypted(col1 varchar2(50) encrypt)
*
ERROR at line 1:
ORA-28336: cannot encrypt SYS owned objects
SQL> create user ergemp identified by "pass1234" default tablespace users;

User created.

SQL> 
SQL> 
SQL> grant connect, resource to ergemp;

Grant succeeded.

SQL> create table ergemp.t_encrypted(col1 varchar2(50) encrypt);

Table created.

SQL> insert into ergemp.t_encrypted values ('this text should be encrypted');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from dba_encrypted_columns;

OWNER           TABLE_NAME                     COLUMN_NAME               ENCRYPTION_ALG                SAL
--------------- ------------------------------ ------------------------- ----------------------------- ---
ERGEMP          T_ENCRYPTED                    COL1                      AES 192 bits key              YES

SQL> select * from ERGEMP.T_ENCRYPTED;

COL1
--------------------------------------------------
this text should be encrypted

SQL> 



I will test the Data Pump export. First I will not use the "encryption_password" parameter and search the dumpfile to see whether the column value appears somewhere in it as a string.

Then I will run the same Data Pump export with the "encryption_password" parameter to make sure the column value is not readable in the dumpfile with the "strings" utility.


[oracle@defbora01]:/oracle/orahome1/rdbms> expdp '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED'


Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 11:53:18

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables=ERGEMP.T_ENCRYPTED 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ERGEMP"."T_ENCRYPTED"                      4.953 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/oracle/orahome1/rdbms/log/expdp.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 11:56:53

[oracle@defbora01]:/oracle/orahome1/rdbms/log > strings expdp.dmp | grep "this text should be encrypted"
this text should be encrypted

[oracle@defbora01]:/oracle/orahome1/rdbms/log >



Without the "encryption_password" parameter, data in the export files is not safe, because it is held as plain text, as can be seen from the previous example.


[oracle@defbora01]:/oracle/orahome1/rdbms> expdp '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 13:53:56

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  '/******** AS SYSDBA' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables=ERGEMP.T_ENCRYPTED encryption_password=******** 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "ERGEMP"."T_ENCRYPTED" failed to load/unload and is being skipped due to error:
ORA-28336: cannot encrypt SYS owned objects
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/oracle/orahome1/rdbms/log/expdp.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 13:55:55

[oracle@defbora01]:/oracle/orahome1/rdbms> expdp 'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 13:59:17

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid

SQL> grant read,write on directory DATA_PUMP_DIR to ERGEMP;

Grant succeeded.

[oracle@defbora01]:/oracle/orahome1/rdbms> expdp 'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables='ERGEMP.T_ENCRYPTED' encryption_password="pass1234"

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 July, 2011 14:08:59

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ERGEMP"."SYS_EXPORT_TABLE_01":  'ergemp/********' directory=DATA_PUMP_DIR dumpfile=expdp.dmp logfile=expdp.log tables=ERGEMP.T_ENCRYPTED encryption_password=******** 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ERGEMP"."T_ENCRYPTED"                          5 KB       1 rows
Master table "ERGEMP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ERGEMP.SYS_EXPORT_TABLE_01 is:
/oracle/orahome1/rdbms/log/expdp.dmp
Job "ERGEMP"."SYS_EXPORT_TABLE_01" successfully completed at 14:10:19

[oracle@defbora01]:/oracle/orahome1/rdbms/log > strings expdp.dmp | grep "this text should be encrypted"

[oracle@defbora01]:/oracle/orahome1/rdbms/log >



When the "encryption_password" parameter is used while exporting the table, the column data can no longer be found by digging through the plain strings in the export file.

resources:
Oracle Advanced Security Admin Guide

Saturday, November 13, 2010

Basic Team Coding Extension for Oracle

Our development group is complaining about overriding each other's code by recompiling the same PL/SQL program unit that another developer is actively working on. TOAD has built-in Team Coding functionality, but not every developer in the department uses TOAD.

I developed a tiny Team Coding functionality that works in the database using a DDL trigger. With this BEFORE CREATE trigger, developers are not able to alter any program unit without first checking it out. You can find a small example below.
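
The core of the idea is a DDL trigger roughly like the one below. This is only a minimal sketch with a hypothetical check-out table; the full script linked at the end also keeps the repository and restricts the check to the schemas where Team Coding is enabled:

create or replace trigger trg_tcode_before_ddl
before create or alter on database
declare
  v_locked_by varchar2(30);
begin
  --only guard program units
  if ora_dict_obj_type in ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY', 'TRIGGER') then
    select max(checked_out_by)
      into v_locked_by
      from system.t_tcode_checkout               --hypothetical repository table
     where unit_owner = ora_dict_obj_owner
       and unit_name  = ora_dict_obj_name;

    if v_locked_by is null then
      raise_application_error(-20501,
        'You are not allowed to compile the code! First PCK_TCODE_CONTROL.CHECKOUT the unit.');
    elsif v_locked_by <> ora_login_user then
      raise_application_error(-20500,
        'Unit Locked by another user! Check PCK_TCODE_CONTROL.CHECK_AVAIL function');
    end if;
  end if;
end;
/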

Without first declaring in which schema Team Coding should work, everything behaves as usual.



connect SYSTEM/***@ORCL11G

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--FUNCTION FNC_TEST compiled



After declaring the schema in which you want to enable Team Coding:



exec PCK_TCODE_CONTROL.set_param('SCHEMA_NAME','SYSTEM','INSERT');
--anonymous block completed

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--ORA-20501: You are not allowed to compile the code! 
--           First PCK_TCODE_CONTROL.CHECKOUT the unit.

exec PCK_TCODE_CONTROL.checkout('FNC_TEST','SYSTEM');
--anonymous block completed

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;
--FUNCTION FNC_TEST compiled

connect u_epeker/***@ORCL11G

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--ORA-20500: Unit Locked by another user! 
--           Check PCK_TCODE_CONTROL.CHECK_AVAILIBILITY function

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
/*
"PCK_TCODE_CONTROL.CHECK_AVAIL('FNC_TEST','SYSTEM')"
"TCODE: SYSTEM.FNC_TEST is checked out by SYSTEM on 13.11.2010 19:26:30"
*/

connect SYSTEM/***@ORCL11G

exec PCK_TCODE_CONTROL.checkin('FNC_TEST','SYSTEM');
--anonymous block completed

connect U_EPEKER/***@ORCL11G

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
--null

exec PCK_TCODE_CONTROL.checkin('FNC_TEST','SYSTEM');
--anonymous block completed

select PCK_TCODE_CONTROL.check_avail('FNC_TEST','SYSTEM') from dual;
/*
"PCK_TCODE_CONTROL.CHECK_AVAIL('FNC_TEST','SYSTEM')"
"TCODE: SYSTEM.FNC_TEST is checked out by U_EPEKER on 13.11.2010 19:33:53"
*/

create or replace function SYSTEM.FNC_TEST return date as 
begin
return sysdate;
end;

--FUNCTION FNC_TEST compiled



As you can see, with this functionality in place people are no longer able to override each other's code. You can add it to your database by running the following script. This small script does not keep versions of the program units; anyone can build version history functionality on top of this small script and its repository, or maybe I will handle version history in a later release.

You can find the script here