Job "SYSTEM"."SYS_EXPORT_FULL_06" stopped due to fatal error at 01:42:36
I decided to investigate the log file, and it seems there is a lack of available space in the SYSTEM tablespace: the Data Pump job goes into a resumable wait and, after 2 hours (the default resumable timeout), fails. The error lines are as follows.
ORA-39171: Job is experiencing a resumable wait.
ORA-01653: unable to extend table SYSTEM.SYS_EXPORT_FULL_06 by 128 in tablespace SYSTEM
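While a job is stuck in this state, the suspended session and the exact error it is waiting on can be seen in the DBA_RESUMABLE view, and the instance-wide resumable timeout can be checked as well. A minimal sketch (the ALTER SYSTEM line is optional and only illustrates shortening the wait so a stuck job fails fast instead of hanging for 2 hours):

-- sessions currently suspended in a resumable wait, with the error text
select name, status, error_msg from dba_resumable;

-- instance-wide resumable timeout setting (seconds)
select value from v$parameter where name = 'resumable_timeout';

-- optionally shorten the wait (600 seconds is just an example value)
alter system set resumable_timeout = 600;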
I think I should investigate the root cause of the problem. First I want to know which objects are using the SYSTEM tablespace; listing the largest segments would be handy.
select segment_name, bytes/1024/1024 as MB
from dba_segments
where tablespace_name = 'SYSTEM'
order by bytes desc;
/*
SEGMENT_NAME               MB
-------------------------  ----
SYS_LOB0001316900C00039$$  3072
SYS_LOB0001344581C00039$$  3054
SYS_LOB0001343732C00039$$  3029
SYS_LOB0001320071C00039$$  2375
SYS_LOB0001344712C00039$$  1216
C_OBJ#                      816
I_COL1                      745
I_COL2                      484
*/
What are these SYSTEM-generated segments? They are allocating a lot of space here.
select owner, segment_name, segment_type, tablespace_name, bytes/1024/1024 as MB
from dba_segments
where segment_name = 'SYS_LOB0001344712C00039$$';
/*
OWNER   SEGMENT_NAME               SEGMENT_TYPE  TABLESPACE_NAME  MB
------  -------------------------  ------------  ---------------  ----
SYSTEM  SYS_LOB0001344712C00039$$  LOBSEGMENT    SYSTEM           1216
*/

select owner, table_name, column_name, segment_name
from dba_lobs
where segment_name = 'SYS_LOB0001344712C00039$$';
/*
OWNER   TABLE_NAME          COLUMN_NAME  SEGMENT_NAME
------  ------------------  -----------  -------------------------
SYSTEM  SYS_EXPORT_FULL_05  XML_CLOB     SYS_LOB0001344712C00039$$
*/
Now it is clear. The Data Pump jobs locked themselves out because of the tables created by the jobs themselves (the master tables of the Data Pump jobs). It seems that when a job does not end successfully (or fails unexpectedly), Data Pump cannot tidy up after itself. Let me see which Data Pump jobs exist in the database.
select * from dba_datapump_jobs;
select * from dba_datapump_sessions;
/*
OWNER_NAME  JOB_NAME            OPERATION  JOB_MODE  STATE
----------  ------------------  ---------  --------  -----------
SYSTEM      SYS_EXPORT_FULL_04  EXPORT     FULL      NOT RUNNING
SYSTEM      SYS_EXPORT_FULL_05  EXPORT     FULL      NOT RUNNING
*/
That is just what I guessed. The jobs are not running and there are no sessions attached to them. I decided to drop the master tables to free some space in the SYSTEM tablespace.
drop table SYSTEM.SYS_EXPORT_FULL_04;

select segment_name, bytes/1024/1024 as MB
from dba_segments
where tablespace_name = 'SYSTEM'
order by bytes desc;

select to_char(trunc(sum(bytes)/1024/1024),'99,999,999') as FREE_MB
from dba_free_space
where tablespace_name = 'SYSTEM';
/*
FREE_MB
-------
14,644
*/
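Before dropping anything, it is worth double-checking that a master table really belongs to a job that is not running and has no attached sessions. A sketch of such a cross-check, matching object names in DBA_OBJECTS against DBA_DATAPUMP_JOBS (only tables returned here are candidates for cleanup):

select o.owner, o.object_name, o.object_type, j.state
from dba_objects o, dba_datapump_jobs j
where o.owner = j.owner_name
and o.object_name = j.job_name
and j.state = 'NOT RUNNING';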
Now it is OK: the 20 GB SYSTEM tablespace now has more than 14 GB of free space. Let me investigate the problem further so as not to fall into the same trap again. When I check the available space on the filesystem the Data Pump job writes to, there is little space left on the disks, so an unexpected job failure is quite possible.
Meanwhile, as far as I know, if you are using Data Pump in parallel mode you should produce more than one dumpfile. You can append the %U substitution variable to the file name, and Data Pump will automatically create one file for every parallel worker. But in this case only one file is specified while 6 parallel workers are requested. I don't know if this is the real cause of the problem, but I will definitely remove the parallel clause from the parameter file.
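For the record, if parallelism were kept, the dumpfile parameter should carry the %U substitution variable so that each parallel worker gets its own file instead of all of them contending for one. A sketch of the relevant parameter-file lines (the file name is just an example):

dumpfile=fusion_export_full_%U.dmp
parallel=6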
There is another option here that I hadn't tried before: compression. I am not sure about this, but maybe the compression of the metadata is done in the SYSTEM tablespace? Or it somehow needs some space inside the database. It looks odd, because a compression algorithm that worked that way would not be very efficient, but I still don't want to use this parameter, so I also removed the compression parameter from the parameter file.
full=y
compression=METADATA_ONLY
directory=DATA_PUMP_DIR
dumpfile=fusion_export_full.exp
logfile=fusion_export_full.log
parallel=6
userid=system/
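After removing the compression and parallel lines as described above, the parameter file would look like this (all other lines kept exactly as in the original):

full=y
directory=DATA_PUMP_DIR
dumpfile=fusion_export_full.exp
logfile=fusion_export_full.log
userid=system/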
Check out: How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS? [ID 336014.1]
You should watch out; don't just drop any of these tables.
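Indeed, instead of dropping a master table directly, the supported route is to attach to the orphaned job with the expdp client and kill it, which removes the master table for you. A sketch of the interactive session (credentials elided, job name from the example above):

expdp system/ attach=SYS_EXPORT_FULL_04

Export> KILL_JOB
Are you sure you wish to stop this job ([yes]/no): yes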