Wednesday, 25 March 2015

ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout

ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-11024: This external table can only be accessed from within a Data Pump job.



Alert log information:

DBMS_STATS: GATHER_STATS_JOB encountered errors.  Check the trace file.
Errors in file /u01/app/oracle/admin/testdb/diag/rdbms/testdb/testdb/trace/testdb_j000_22426.trc:
ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-11024: This external table can only be accessed from within a Data Pump job.

Trace File Information:

*** 2015-03-26 08:30:18.342
DBMS_STATS: GATHER_STATS_JOB: GATHER_TABLE_STATS('"OPS$ORACLE"','"ET$016FA3770001"','""', ...)
DBMS_STATS: ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-11024: This external table can only be accessed from within a Data Pump job.


Cause:
The root cause is that an external table existed at some point but no longer does, while the data dictionary still records it. When the automatic GATHER_STATS_JOB runs DBMS_STATS against that table, the call out to the external table fails because the underlying object is gone.

There are several reasons an external table may no longer exist, including:
1. Temporary Data Pump external tables (ET$ tables) were not cleaned up properly. Their dictionary entries should have been dropped when the Data Pump jobs completed.
2. An external table was removed without clearing the corresponding data dictionary information. For example, the demo external table SALES_TRANSACTIONS_EXT in the SH sample schema supplied by Oracle may have been removed while the dictionary was never updated to reflect this.

Note: our issue was due to point one; the temporary Data Pump external tables were not cleaned up properly.

Solution: clean up the orphaned Data Pump jobs and their leftover external tables.

Check for orphaned Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode, state, attached_sessions FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
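If the query returns orphaned jobs, you can attach to a job with expdp and kill it before dropping its leftover table. A sketch of the interactive session follows; the job name is hypothetical, so substitute an OWNER_NAME/JOB_NAME pair returned by dba_datapump_jobs:

```shell
# Print the interactive expdp session you would run.
# SYS_EXPORT_SCHEMA_01 is a hypothetical job name -- use one from the query above.
cat <<'EOF'
expdp "/ as sysdba" attach=SYS_EXPORT_SCHEMA_01

Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
EOF
```

KILL_JOB detaches all sessions and removes the job's master table, which is often enough to clean up the ET$ leftovers on its own.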


Identify the orphaned ET$ external tables:

SQL> conn / as sysdba
Connected.

SQL>
set linesize 200 trimspool on
set pagesize 2000
col owner form a30
col created form a25
col last_ddl_time form a25
col object_name form a30
col object_type form a25

select OWNER,OBJECT_NAME,OBJECT_TYPE, status,
to_char(CREATED,'dd-mon-yyyy hh24:mi:ss') created
,to_char(LAST_DDL_TIME , 'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from dba_objects
where object_name like 'ET$%'
/


OWNER                          OBJECT_NAME                    OBJECT_TYPE               STATUS  CREATED                   LAST_DDL_TIME
------------------------------ ------------------------------ ------------------------- ------- ------------------------- -------------------------
OPS$ORACLE                     ET$000E4FF90001                TABLE                     VALID   16-oct-2012 13:10:15      16-oct-2012 13:10:15
OPS$ORACLE                     ET$007360190001                TABLE                     VALID   18-sep-2012 23:17:32      18-sep-2012 23:17:32
OPS$ORACLE                     ET$00F39F430001                TABLE                     VALID   16-oct-2012 13:33:10      16-oct-2012 13:33:10
OPS$ORACLE                     ET$016FA3770001                TABLE                     VALID   16-oct-2012 13:57:36      16-oct-2012 13:57:36

8 rows selected.

SQL> select owner, TABLE_NAME, DEFAULT_DIRECTORY_NAME, ACCESS_TYPE
from dba_external_tables order by 1,2
 /
OWNER                          TABLE_NAME                     DEFAULT_DIRECTORY_NAME         ACCESS_
------------------------------ ------------------------------ ------------------------------ -------
OPS$ORACLE                     ET$000E4FF90001                PRODUCT_REFRESH_DIR            CLOB
OPS$ORACLE                     ET$007360190001                PRODUCT_REFRESH_DIR            CLOB
OPS$ORACLE                     ET$00F39F430001                PRODUCT_REFRESH_DIR            CLOB
OPS$ORACLE                     ET$016FA3770001                PRODUCT_REFRESH_DIR            CLOB


Drop the temporary Data Pump external tables:
SQL> drop table OPS$ORACLE.ET$000E4FF90001;
SQL> drop table OPS$ORACLE.ET$007360190001;
SQL> drop table OPS$ORACLE.ET$00F39F430001;
SQL> drop table OPS$ORACLE.ET$016FA3770001;
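When many ET$ tables are left behind, generating the DROP statements beats typing them one by one. A minimal sketch, using the table names found in this session (pipe the output into SQL*Plus, or paste it in):

```shell
# Generate a DROP statement for each orphaned ET$ table found above.
# The list is from this session; rebuild it from your own dba_objects query.
for t in 'ET$000E4FF90001' 'ET$007360190001' 'ET$00F39F430001' 'ET$016FA3770001'
do
  echo "drop table OPS\$ORACLE.\"$t\";"
done
```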
Verify that no Data Pump temporary tables remain:

SQL> select OWNER,OBJECT_NAME,OBJECT_TYPE, status,
to_char(CREATED,'dd-mon-yyyy hh24:mi:ss') created
,to_char(LAST_DDL_TIME , 'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from dba_objects where object_name like 'ET$%'
/

no rows selected

SQL> select owner, TABLE_NAME, DEFAULT_DIRECTORY_NAME, ACCESS_TYPE from dba_external_tables order by 1,2
 /


Reference: MOS Doc ID 1274653.1

Thursday, 5 March 2015

De-install RAC cluster

How do you deinstall a RAC cluster? Run the deinstall tool shipped in the Grid Infrastructure home:

cd /u01/11.2.0.4/grid/deinstall
[oracle@node-1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/11.2.0.4/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/11.2.0.4/grid
The following nodes are part of this cluster: node-1, node-2
Checking for sufficient temp space availability on node(s): 'node-1, node-2'

## [END] Install check configuration ##

Traces log file: /u01/app/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "node-1"[node-1-vip]
 >
node-1-vip
The following information can be collected by running "/sbin/ifconfig -a" on node "node-1"
Enter the IP netmask of Virtual IP "xx.xxx.xx.xxx" on node "node-1"[255.xxx.xxx.x]
 >255.xxx.xxx.x

Enter the network interface name on which the virtual IP address "xx.xxx.xx.xxx" is active
 >xx.xxx.xx.xxx

Enter an address or the name of the virtual IP used on node "node-2"[node-2-vip]
 > node-2-vip

The following information can be collected by running "/sbin/ifconfig -a" on node "node-2"
Enter the IP netmask of Virtual IP "xx.xxx.xx.xxx" on node "node-2"[255.xxx.xxx.x]
 >255.xxx.xxx.x

Enter the network interface name on which the virtual IP address "xx.xxx.xx.xxx" is active
 >

Enter an address or the name of the virtual IP[]
 >


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-02-04_03-38-34-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:LISTENER,LISTENER_SCAN1

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2014-02-04_03-38-42-AM.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/11.2.0.4/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:node-1,node-2
Oracle Home selected for deinstall is: /u01/11.2.0.4/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-02-04_03-36-32-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-02-04_03-36-32-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2014-02-04_03-39-03-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-02-04_03-39-04-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
    Stopping listener: LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "node-2".

/tmp/deinstall2014-02-04_03-34-41AM/perl/bin/perl -I/tmp/deinstall2014-02-04_03-34-41AM/perl/lib -I/tmp/deinstall2014-02-04_03-34-41AM/crs/install /tmp/deinstall2014-02-04_03-34-41AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-02-04_03-34-41AM/response/deinstall_Ora11g_gridinfrahome2.rsp"

[root@node-02 ~]# /tmp/deinstall2015-03-04_03-34-41AM/perl/bin/perl -I/tmp/deinstall2015-03-04_03-34-41AM/perl/lib -I/tmp/deinstall2015-03-04_03-34-41AM/crs/install /tmp/deinstall2015-03-04_03-34-41AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-03-04_03-34-41AM/response/deinstall_Ora11g_gridinfrahome2.rsp" -lastnode

Using configuration parameter file: /tmp/deinstall2015-03-04_03-34-41AM/response/deinstall_Ora11g_gridinfrahome2.rsp
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node-02'
CRS-2676: Start of 'ora.cssdmonitor' on 'node-02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node-02'
CRS-2672: Attempting to start 'ora.diskmon' on 'node-02'
CRS-2676: Start of 'ora.diskmon' on 'node-02' succeeded
CRS-2676: Start of 'ora.cssd' on 'node-02' succeeded
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
 CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4689: Oracle Clusterware is already stopped
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node-02'
CRS-2676: Start of 'ora.cssdmonitor' on 'node-02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node-02'
CRS-2672: Attempting to start 'ora.diskmon' on 'node-02'
CRS-2676: Start of 'ora.diskmon' on 'node-02' succeeded
CRS-2676: Start of 'ora.cssd' on 'node-02' succeeded
CRS-4611: Successful deletion of voting disk +OCR_VOTE.

ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2015-03-04_03-51-20-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /u01/app/oraInventory/logs/asmcadc_clean2015-03-04_03-51-20-AM.log for details.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node-02'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node-02'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node-02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node-02'
CRS-2677: Stop of 'ora.mdnsd' on 'node-02' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node-02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node-02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node-02'
CRS-2677: Stop of 'ora.cssd' on 'node-02' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'node-02'
CRS-2677: Stop of 'ora.crf' on 'node-02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node-02'
CRS-2677: Stop of 'ora.gipcd' on 'node-02' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node-02'
CRS-2677: Stop of 'ora.gpnpd' on 'node-02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node-02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node


Run the following command as the root user or the administrator on node "node-1".

/tmp/deinstall2014-02-04_03-34-41AM/perl/bin/perl -I/tmp/deinstall2014-02-04_03-34-41AM/perl/lib -I/tmp/deinstall2014-02-04_03-34-41AM/crs/install /tmp/deinstall2014-02-04_03-34-41AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-02-04_03-34-41AM/response/deinstall_Ora11g_gridinfrahome2.rsp" -lastnode

[root@node-01 ~]# /tmp/deinstall2015-03-04_03-34-41AM/perl/bin/perl -I/tmp/deinstall2015-03-04_03-34-41AM/perl/lib -I/tmp/deinstall2015-03-04_03-34-41AM/crs/install /tmp/deinstall2015-03-04_03-34-41AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-03-04_03-34-41AM/response/deinstall_Ora11g_gridinfrahome2.rsp" -lastnode

Using configuration parameter file: /tmp/deinstall2015-03-04_03-34-41AM/response/deinstall_Ora11g_gridinfrahome2.rsp
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node-01'
CRS-2676: Start of 'ora.cssdmonitor' on 'node-01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node-01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node-01'
CRS-2676: Start of 'ora.diskmon' on 'node-01' succeeded
CRS-2676: Start of 'ora.cssd' on 'node-01' succeeded
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4689: Oracle Clusterware is already stopped
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node-01'
CRS-2676: Start of 'ora.cssdmonitor' on 'node-01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node-01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node-01'
CRS-2676: Start of 'ora.diskmon' on 'node-01' succeeded
CRS-2676: Start of 'ora.cssd' on 'node-01' succeeded
CRS-4611: Successful deletion of voting disk +OCR_VOTE.

ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2015-03-04_03-51-20-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /u01/app/oraInventory/logs/asmcadc_clean2015-03-04_03-51-20-AM.log for details.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node-01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node-01'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node-01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node-01'
CRS-2677: Stop of 'ora.mdnsd' on 'node-01' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node-01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node-01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node-01'
CRS-2677: Stop of 'ora.cssd' on 'node-01' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'node-01'
CRS-2677: Stop of 'ora.crf' on 'node-01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node-01'
CRS-2677: Stop of 'ora.gipcd' on 'node-01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node-01'
CRS-2677: Stop of 'ora.gpnpd' on 'node-01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node-01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node


Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/11.2.0.4/grid' from the central inventory on the local node : Done

Failed to delete the directory '/u01/11.2.0.4/grid'. The directory is in use.
Delete directory '/u01/11.2.0.4/grid' on the local node : Failed <<<<

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/11.2.0/grid'.

Detach Oracle home '/u01/11.2.0.4/grid' from the central inventory on the remote nodes 'node-2' : Done


Delete directory '/u01/11.2.0.4/grid' on the remote nodes 'node-2' : Failed <<<<

Could not remove listed directories based on '/tmp/OraInstall2014-02-04_03-54-46-AM/installRemoveDirFile.lst' from nodes 'node-2'. [PRKC-1083 : Failed to remove listed directory in "/tmp/OraInstall2014-02-04_03-54-46-AM/installRemoveDirFile.lst" to any of the given nodes "node-2 ".
Error on node node-2:/bin/rm: cannot remove directory `/u01/11.2.0.4/grid/': Permission denied]
The Oracle Base directory '/u01/app/oracle' will not be removed on node 'node-2'. The directory is in use by Oracle Home '/u01/app/oracle/product/11.2.0/db_1'.

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2014-02-04_03-34-41AM' on node 'node-1'
Clean install operation removing temporary directory '/tmp/deinstall2014-02-04_03-34-41AM' on node 'node-2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "node-2"
Oracle Clusterware is stopped and successfully de-configured on node "node-1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/11.2.0.4/grid' from the central inventory on the local node.
Failed to delete directory '/u01/11.2.0.4/grid' on the local node.
Successfully detached Oracle home '/u01/11.2.0.4/grid' from the central inventory on the remote nodes 'node-2'.
Failed to delete directory '/u01/11.2.0.4/grid' on the remote nodes 'node-2'.
Oracle Universal Installer cleanup completed with errors.

For complete clean up of Oracle Clusterware software from the system, deinstall the following old clusterware home(s). Refer to Clusterware Install guide of respective old release for details.
    /u01/11.2.0/grid on nodes : node-1
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############
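The clean-operation summary above shows the tool could not delete the Grid home because the oracle user lacked permission (`/bin/rm: Permission denied`). The usual follow-up is to remove the leftover directory manually as root on each affected node. A sketch, demonstrated on a scratch path so it is safe to try; substitute the real Grid home from your session:

```shell
# Assumption: run this as root on each node where the delete failed.
# Demonstrated on a scratch path; replace with the real /u01/11.2.0.4/grid.
GRID_HOME=/tmp/demo_deinstall/u01/11.2.0.4/grid
mkdir -p "$GRID_HOME"    # stand-in for the leftover Grid home
rm -rf "$GRID_HOME"      # what root would run against the real path
[ ! -d "$GRID_HOME" ] && echo "grid home removed"
```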

