Wednesday, 28 January 2015

Steps to create a physical standby database using RMAN backup

How to create a physical standby database using an RMAN backup?

Primary Database (PRIME):
Checks:
•          Archive log mode must be enabled on the primary database.
SQL> archive log list
 (or)
SQL> SELECT LOG_MODE FROM V$DATABASE;

If the database is in NOARCHIVELOG mode, switch it to ARCHIVELOG mode as described here:
http://orcl11gdba.blogspot.in/2014/03/enabling-archive-log-mode.html
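In short, the switch looks like this (a clean shutdown is required; the linked post covers the details):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;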

•          Force logging must be enabled on the primary database.
SELECT FORCE_LOGGING FROM V$DATABASE;
ALTER DATABASE FORCE LOGGING;

•          Make sure DB_NAME and DB_UNIQUE_NAME are set.

SQL> SHOW PARAMETER DB_NAME

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      PRIME

SQL>SHOW PARAMETER DB_UNIQUE_NAME

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      PRIME

•          Set the LOG_ARCHIVE_CONFIG parameter using the DB_UNIQUE_NAME of each database:

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIME,STAND)' scope=both;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=STAND LGWR ASYNC max_failure=10 max_connections=5 reopen=180 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STAND' scope=both;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE scope=both;
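If LOG_ARCHIVE_DEST_1 is not already set, define the local archive destination as well (the path below is only an example; substitute your own archive location):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIME' scope=both;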

•          Set FAL_SERVER and FAL_CLIENT parameters
ALTER SYSTEM SET FAL_SERVER=STAND scope=both;
ALTER SYSTEM SET FAL_CLIENT=PRIME scope=both;

•          Set the below parameters
ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='/u01/oradata/log','/u04/oradata/logs' scope=spfile;
ALTER SYSTEM SET DB_FILE_NAME_CONVERT='/u01/oradata/dbf','/u01/oradata/data' scope=spfile;
•          Add standby redo log files (the usual rule of thumb is one more standby group than the number of online redo log groups):
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/oradata/log/styredo_05a.dbf') SIZE 100M BLOCKSIZE 1024 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/u01/oradata/log/styredo_06a.dbf') SIZE 100M BLOCKSIZE 1024 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/u01/oradata/log/styredo_07a.dbf') SIZE 100M BLOCKSIZE 1024 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 8 ('/u01/oradata/log/styredo_08a.dbf') SIZE 100M BLOCKSIZE 1024 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 9 ('/u01/oradata/log/styredo_09a.dbf') SIZE 100M BLOCKSIZE 1024 REUSE;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO scope=both;
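The new standby log groups can be verified with a quick query against V$STANDBY_LOG:

SQL> SELECT GROUP#, THREAD#, BYTES/1024/1024 AS MB, STATUS FROM V$STANDBY_LOG;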

•          Back up the current control file for the standby, take a full backup of the database, and send the backup pieces to the standby site:
RMAN> backup current controlfile for standby;
http://orcl11gdba.blogspot.in/2014/05/taking-database-backup-using-rman-method.html
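A minimal sketch of such a backup, assuming /u01/rman_bkp/ is the staging directory (the same path the restore steps below expect):

RMAN> run {
  allocate channel c1 device type disk format '/u01/rman_bkp/%U';
  backup database plus archivelog;
  backup current controlfile for standby format '/u01/rman_bkp/stby_cf_%U';
}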

Make sure authentication is EXCLUSIVE and a password file exists:
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=admin123 entries=10 force=y
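The accounts present in the password file can be confirmed with:

SQL> SELECT * FROM V$PWFILE_USERS;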

•          Prepare a tnsnames.ora file (note that the STAND entry also points to SID PRIME, matching instance_name=PRIME set on the standby):
PRIME =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = bgl-db01.ttd.com)(PORT = 1521)))
    (CONNECT_DATA =
      (SID = PRIME)))

STAND =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = bgl-dr01.dot.com)(PORT = 1521)))
    (CONNECT_DATA =
      (SID = PRIME)))
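Connectivity to both aliases can then be checked from each server with tnsping (the host names come from the entries above):

$ tnsping PRIME
$ tnsping STAND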

•          Start the listener listener_PRIME
lsnrctl start listener_PRIME
•          Create a pfile:
SQL> create pfile from spfile;
----------------------------------------------------------------------------------------------------------------------
Standby database (STAND):
scp the pfile and the password file to $ORACLE_HOME/dbs on the standby server.
Copy the tnsnames.ora file from the primary to $ORACLE_HOME/network/admin on the standby.
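For example (the host name is taken from the tnsnames entry above; the file names are assumptions based on the SIDs used in this post):

$ scp $ORACLE_HOME/dbs/initPRIME.ora oracle@bgl-dr01.dot.com:$ORACLE_HOME/dbs/initSTAND.ora
$ scp $ORACLE_HOME/dbs/orapwPRIME oracle@bgl-dr01.dot.com:$ORACLE_HOME/dbs/
$ scp $ORACLE_HOME/network/admin/tnsnames.ora oracle@bgl-dr01.dot.com:$ORACLE_HOME/network/admin/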

Modify the parameters in the pfile, such as diagnostic_dest, audit_file_dest, the control_files path, log_archive_dest_1, log_archive_dest_2, fal_client, fal_server and db_unique_name, e.g.:
*.log_archive_config='DG_CONFIG=(STAND,PRIME)'
*.log_archive_dest_1='LOCATION=/u04/oradata/logs'
*.db_unique_name=STAND
*.instance_name=PRIME
*.log_archive_dest_state_1=ENABLE
*.log_archive_dest_state_2=ENABLE
*.log_archive_dest_2='SERVICE=PRIME LGWR ASYNC max_failure=10 max_connections=5 reopen=180 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIME'
*.fal_client='STAND'
*.fal_server='PRIME'
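The file-name conversion parameters on the standby are typically the reverse of the pairs set on the primary (a sketch based on the values above; adjust to your own layout):

*.db_file_name_convert='/u01/oradata/data','/u01/oradata/dbf'
*.log_file_name_convert='/u04/oradata/logs','/u01/oradata/log'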

SQL> startup nomount pfile=initSTAND.ora
SQL> show parameter db_unique_name

Connect RMAN to the standby instance, set the DBID of the PRIME database, and restore the standby control file (replace the placeholder with the actual backup piece name):

rman target /
RMAN> set dbid=110374;
RMAN> restore standby controlfile from '/u01/rman_bkp/<standby_controlfile_backuppiece>';
RMAN> sql 'alter database mount standby database';

RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;
RMAN> list backup summary;

Register the backup pieces with this instance by cataloguing the backup directory:
RMAN> catalog start with '/u01/rman_bkp/';

RMAN> restore database;
RMAN> list backup of archivelog all;
RMAN> recover database until sequence 1135;    # the sequence number up to which archive logs were backed up


SQL> alter database recover managed standby database disconnect from session;
Database altered.
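To confirm that redo is being shipped and applied, switch a log on the primary and check the standby (sequence numbers will differ in your environment):

On the primary:
SQL> ALTER SYSTEM SWITCH LOGFILE;

On the standby:
SQL> SELECT PROCESS, STATUS, SEQUENCE# FROM V$MANAGED_STANDBY;
SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;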


Thursday, 8 January 2015

CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

or 
How to switch from NTP to CTSS?

It is recommended to use CTSS to synchronize the time between cluster nodes instead of NTP.
CTSS runs in observer mode whenever an NTP configuration file (ntp.conf) is present in /etc.

1. Check whether ctssd is active:
cd /u01/app/11.2.0.4/grid/bin
[oracle@test_node01 ~]$ ./crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
[oracle@test_node02 ~]$ ./crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
We can see that CTSSD is in observer mode.

2. Stop NTP and CRS
Now we disable the NTP service (on every node) so that CTSS can take over:

#crsctl stop crs
#/sbin/service ntpd stop
#mv /etc/ntp.conf /etc/ntp.conf.original
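On RHEL/OEL you may also want to prevent ntpd from starting again at boot:

#/sbin/chkconfig ntpd off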

3. Start the cluster on all nodes
#crsctl start crs

4. Check that ctssd automatically starts in active mode as shown below

[root@test_node02 etc]# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[root@test_node01 etc]# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

You can also verify clock synchronization across the cluster with cluvfy:
./cluvfy comp clocksync

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed


Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed


Verification of Clock Synchronization across the cluster nodes was successful.

Wednesday, 7 January 2015

[cssd(10157)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds


Cause:
In my case this error appeared in the node's alert log after the RAC nodes were rebooted and we tried to bring the cluster up.
The issue is that during cluster start-up, the previously created ASM disk partitions were not recognized by the nodes:
[root@test_node02 ]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@test_node02 ]# /usr/sbin/oracleasm listdisks
[root@test_node02 ]#

Solution:
Rescan the disks manually as shown below:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
/usr/sbin/oracleasm scandisks /dev/sdd*
[root@test_node02 ]# /usr/sbin/oracleasm listdisks
ASM_FRA03
ASM_FRA04
ASM_DATA01
ASM_DATA02
OCR01
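Once the disks are visible again, start the cluster and confirm that the voting files are discovered:

# crsctl start crs
# crsctl query css votedisk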


This is one possible solution to the above error; there may be other causes as well. In my case, the issue was resolved after scanning the disks manually.

Cheers...!!!!

Monday, 5 January 2015

OracleHomeInventory was not able to create a lock file, probably due to a failed OPatch Session. The loaded inventory might not show correctly what you have in the Oracle Home.


Error:
 OUI-67076:OracleHomeInventory was not able to create a lock file, probably due to a failed OPatch Session.
 The loaded inventory might not show correctly what you have in the Oracle Home

Cause
A previous "opatch apply" session failed and so a lock still exists on the local inventory.
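Before clearing the lock, it is worth confirming that no OPatch session is actually still running (a simple precaution):

$ ps -ef | grep -i opatch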

Solution
1. Take a backup of $ORACLE_HOME/.patch_storage directory
$ cp -R $ORACLE_HOME/.patch_storage $ORACLE_HOME/.patch_storage_bkp

2. Remove $ORACLE_HOME/.patch_storage/patch_locked
$ rm $ORACLE_HOME/.patch_storage/patch_locked

3. Create empty file $ORACLE_HOME/.patch_storage/patch_free
$ touch $ORACLE_HOME/.patch_storage/patch_free

4. Verify that "opatch lsinventory" no longer reports the error
$ opatch lsinventory -detail

ORA-600 [kwqitnmphe:ltbagi], [1], [0] reported in the alert log file.

ORA-00600 [kwqitnmphe:ltbagi] Cause: This issue arises in 12.1.0.2. The error occurs because there are still Historical Messages without...