How to install RAC 11.2.0.2 on RHEL5.6
RAC installation involves both the DBA and the root user from the beginning.
The initial tasks, such as providing IPs and enabling the network, are done by the root user.
Later tasks, such as installing the software, configuring ASM, and creating the database, are done by the DBA.
Storage:
Consider an example case: we need a file system /u01 of 40 GB mounted on both nodes, and a 200 GB LUN of storage (which should not be mounted).
Note:
Please perform the steps below on both nodes, test-node01 and test-node02.
Users and groups creation:
1. Create the below OS groups:
oinstall, dba, oper, asmadmin, asmdba, asmoper
2. Create the user oracle and assign it to the oinstall, asmadmin, asmoper, dba, oper, and asmdba groups.
3. Make oinstall/dba the primary group for the oracle user (see the sketch below).
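A minimal sketch of these commands, run as root, assuming oinstall as the primary group:
# groupadd oinstall
# groupadd dba
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# useradd -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# passwd oracle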
4. Install the below RPMs (a quick verification sketch follows the list):
binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32 bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel-3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
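A quick way to check which of these packages are already present (rpm -q reports "is not installed" for any that are missing):
# rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel expat gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel make pdksh sysstat unixODBC unixODBC-devel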
Install the below ASM libraries:
oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.7-1.el5.i386.rpm
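A sketch of installing them with rpm as root (file names as above, assuming the packages sit in the current directory):
# rpm -Uvh oracleasm-support-2.1.7-1.el5.i386.rpm oracleasmlib-2.0.4-1.el5.i386.rpm oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm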
5. Provide two VIPs for the 2 nodes, following the conditions below. Provide a name in the format <public hostname>-vip.
• The virtual IP address and the network name must not be currently in use.
• The virtual IP address must be on the same subnet as your public IP address.
• The virtual host name for each node should be registered with your DNS.
6. Provide 2 private IPs for the 2 nodes, following the conditions below. These entries should be entered in the /etc/hosts file.
A common naming convention for the private hostname is <public hostname>-pvt.
• The private IPs should NOT be accessible to servers not participating in the local cluster.
• The private network should be on standalone dedicated switch(es).
• The private network should NOT be part of a larger overall network topology.
• The private network should be deployed on Gigabit Ethernet or better.
7. Provide a SCAN IP. The SCAN name must be resolvable by DNS.
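You can verify that the SCAN name resolves before starting the install (hostname as in the example below):
# nslookup test-node-scan.bgl.ttd.com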
*********************************************************************************************************
The following example is from a previous setup and may be helpful when performing steps 5, 6 and 7.
[root@test-node02 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
####### --------------- eth0 - PUBLIC ------------ ###########
xx.xxx.xx.xx8   test-node01.bgl.ttd.com   test-node01
xx.xxx.xx.xx9   test-node02.bgl.ttd.com   test-node02
####### --------------------- VIP ------------------ ###########
xx.xxx.xx.xx3   test-node01-vip.bgl.ttd.com   test-node01-vip
xx.xxx.xx.xx4   test-node02-vip.bgl.ttd.com   test-node02-vip
####### ---------------- eth1 - PRIVATE ----------- ###########
xx.xxx.xx.xx1   test-node01-priv.bgl.ttd.com   test-node01-priv
xx.xxx.xx.xx2   test-node02-priv.bgl.ttd.com   test-node02-priv
####### ---------------- Scan IP ----------- ###########
xx.xxx.xx.xx5   test-node-scan.bgl.ttd.com   test-node-scan
*********************************************************************************************************
Configuring Kernel Parameters:
As the root user, add the following kernel parameter settings to /etc/sysctl.conf:
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run the following as the root user to put the new kernel parameters in place:
# /sbin/sysctl -p
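You can spot-check that a value was applied by querying a single parameter, for example:
# /sbin/sysctl -n kernel.shmmni
4096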
Repeat the above steps on all cluster nodes.
Add the following lines to the /etc/security/limits.conf file:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add the following line to the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
Make the following changes to the default shell startup file by adding the following lines to /etc/profile:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
if ( $USER == "oracle" || $USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
Create the Oracle Inventory Directory:
To create the Oracle Inventory directory, enter the following commands as the root user:
# mkdir -p /u01/app/oraInventory
# chown -R oracle:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory
Creating the Oracle Grid Infrastructure Home Directory:
# mkdir -p /u01/11.2.0/grid
# chown -R oracle:oinstall /u01/11.2.0/grid
# chmod -R 775 /u01/11.2.0/grid
Creating the Oracle Base Directory:
To create the Oracle Base directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle
# mkdir /u01/app/oracle/cfgtoollogs
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Creating the Oracle RDBMS Home Directory:
To create the Oracle RDBMS Home directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
Make sure Secure Linux (SELinux) is disabled or permissive. Edit the /etc/selinux/config file and set:
SELINUX=permissive
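To switch the running system to permissive mode without a reboot, you can also run:
# setenforce 0
# getenforce
Permissive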
Partition the disk:
Please partition the 150 GB disk as used in the example below (the ASMLib example assumes eleven partitions, /dev/sdd1 through /dev/sdd11).
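A sketch of creating the partitions with fdisk, run as root (assuming the shared LUN appears as /dev/sdd, as in the example that follows):
# fdisk /dev/sdd
(use the n command to create each partition, then w to write the table and exit)
# partprobe /dev/sdd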
Using ASMLib to Mark the Shared Disks as Candidate Disks:
To create ASM disks using ASMLib:
1. As the root user, configure ASMLib and then use oracleasm to create the ASM disks, using the following syntax:
# /usr/sbin/oracleasm configure -i
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
Example:
*******************************************************************************************************
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk OCR_VOTE01 /dev/sdd1
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk OCR_VOTE02 /dev/sdd2
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk OCR_VOTE03 /dev/sdd3
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_FRA01 /dev/sdd4
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_FRA02 /dev/sdd5
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_FRA03 /dev/sdd6
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_FRA04 /dev/sdd7
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_DATA01 /dev/sdd8
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_DATA02 /dev/sdd9
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_DATA03 /dev/sdd10
[root@test-node01 ~]# /usr/sbin/oracleasm createdisk ASM_DATA04 /dev/sdd11
*********************************************************************************************************
2. Repeat step 1 for each disk that will be used by Oracle ASM. After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:
[root@test-node01 ~]# /usr/sbin/oracleasm listdisks
3. On all the other nodes in the cluster, use the scandisks command as the root user to pick up the newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in the cluster.
[root@test-node02 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@test-node02 ~]# /usr/sbin/oracleasm listdisks
4. Stop NTP on both nodes (with NTP deconfigured, Oracle's Cluster Time Synchronization Service takes over time synchronization):
# /sbin/service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.original
Perform the above step on both nodes as root.
Later tasks are performed by the DBA: installing the software, configuring ASM, and creating the database.
Tasks we are going to do as the DBA:
· Install grid and configure the cluster
· Install the RAC software
· Create ASM disk groups
· Database creation
Cluvfy check:
./runcluvfy.sh stage -pre crsinst -n test-node01,test-node02 -verbose
Clear all the errors reported in the above step before proceeding with the installation.
Display: make sure xterm is installed on the server.
setenv DISPLAY xx.xxx.xx.xx:0.0
xterm &
Oracle Grid Infrastructure Install: go to the /u01/dba/softwares/grid path (the path where the grid software was unzipped) and run:
./runInstaller
Action: select the 1st option, "Install and configure grid infrastructure for a cluster", and click Next.
Action: select the "Advanced installation" option and proceed to the next step.
Action: select the language and proceed further.
Action: add your cluster name here and add the SCAN name. Unselect the "Configure GNS" option. Keep the port as 1521.
Action: click the Add button, add your second node's details, and click Next.
You can check the SSH connectivity between the nodes with the "SSH Connectivity" button shown above. Enter the "oracle" user password in the password field.
Check your public and private subnet values and click Next.
Action: select the "ASM" option if your storage is ASM type.
Action: select the OCR_VOTE disks for the voting disk location.
Action: set the password for the ASM sys user and proceed to the next step.
Action: do not select the IPMI option; proceed next.
Action: click Next.
Action: decide on the Oracle Base and Grid home, enter them in the fields, and proceed next.
Action: enter the Inventory directory.
Now the pre-checks will start.
Action: make sure everything has succeeded and click Next.
Action: save the response file. It consists of all the parameters set and ready for installation.
Action: the above scripts need to be executed as the root user on both nodes. Execute the scripts one after another on each node; do not execute them simultaneously.
Click OK once the scripts have been executed by the root user.
Note: make sure the scripts give a success response. They should not fail. If a script fails, analyze the issue and apply patch 9974223 for 11.2.0.2.
Note: everything here should be successful.
Action: click Close to close the installer.
GRID installation is completed!
If you are facing difficulty following the above steps, use this link to follow the GUI steps.
Now go for the database installation using the below steps:
1. Go to the /u01/dba/softwares/database path (where the database software was unzipped), launch the installer, and follow the GUI as shown below.
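As with the grid install, the installer is started from that directory as the oracle user:
./runInstaller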
Action: click "Yes" and proceed next.
Action: select "Install database software only" and proceed next.
Action: select "Real Application Clusters database installation" and proceed further.
You may check the SSH connectivity if you want. If it succeeds, the installer will move further; otherwise it will fail. Make sure the connectivity is passwordless.
Action: select Enterprise Edition and proceed further.
Action: select the Oracle Base and Home values and proceed next.
Action: check the admin and oper groups and proceed further.
Note: make sure all the prechecks are successful before installation.
Action: click Finish to complete the activity. You can save the response file if you want; it will contain the installation structure.
Action: execute the above script as the root user and make sure it runs with no errors.
Action: click the Close button.
DATABASE installation completed!
If you are facing any issues with the above steps, use this link to follow the GUI steps.
Now we shall create the ASM disk groups using asmca (the ASM Configuration Assistant):
xterm &
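A sketch of launching asmca as the oracle user (assuming the grid home chosen earlier, /u01/11.2.0/grid):
$ export ORACLE_HOME=/u01/11.2.0/grid
$ /u01/11.2.0/grid/bin/asmca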