Product: Storage Foundation for Oracle RAC Guides   
Manual: Storage Foundation 4.1 for Oracle RAC Installation and Configuration   

Phase 7 - Setting Up or Installing Oracle

  1. On the new node, create a local group and local user for Oracle. Make sure to assign the same group ID, user ID, and home directory as the ones for the current cluster nodes. For example, enter:
      # groupadd -g 1000 dba
      # useradd -g dba -u 1000 -d /oracle oracle 

    Create a password for the oracle user:


      # passwd oracle
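    As an optional check, run the id command for the oracle user on the new node and on an existing node such as galaxy; the user ID and group ID reported should be identical on both:

      # id oracle
      uid=1000(oracle) gid=1000(dba)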
  2. If Oracle system binaries are installed on shared storage on the existing nodes, skip to step 3. If Oracle is installed locally on the existing nodes, install Oracle on the local disk of the new node. Use the same location where Oracle is installed on the existing nodes.
    1. Refer to Installing Oracle9i Software. In that chapter, review the instructions for installing Oracle9i Release 2 on the local file system of a node. Install the same release and patch level used by the existing nodes.
    2. Relink the Oracle binary to the VERITAS libraries.
    3. While installing Oracle on the new node, make sure the /var/opt/oracle/srvConfig.loc file is identical to the one for existing nodes, including contents, permissions, and ownership. If necessary, copy the file from one of the other nodes (see the example after this step).
    4. Edit the listener.ora file on the new node to specify the IP address (or the virtual IP address) for the new node. For example, edit the section for LISTENER to resemble:

          .
          .
          LISTENER =
            (DESCRIPTION_LIST =
             (DESCRIPTION =
              (ADDRESS_LIST =
               (ADDRESS = (PROTOCOL = TCP)(HOST = 192.2.40.23)(PORT = 1521))
               )
              )
             )
          .
          .
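    One way to handle sub-step 3, assuming rcp access from the new node to an existing node such as galaxy is configured, is to copy srvConfig.loc and then compare its ownership and permissions with the original:

      # rcp galaxy:/var/opt/oracle/srvConfig.loc /var/opt/oracle/srvConfig.loc
      # remsh galaxy ls -l /var/opt/oracle/srvConfig.loc
      # ls -l /var/opt/oracle/srvConfig.loc

    If the two listings differ, adjust the copy with chown and chmod until they match.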
  3. If Oracle system binaries are installed on a cluster file system, set up the new node for Oracle.
    1. Create mount points for the shared file systems on the new node; the mount points must have the same names as the ones used for the shared file systems on the existing cluster nodes:

          # mkdir /orasrv
          # mkdir /oracle
    2. Mount the shared file systems on the new node:

          # mount -F vxfs -o cluster,largefiles /dev/vx/dsk/orasrv_dg/srvm_vol /orasrv
          # mount -F vxfs -o cluster,largefiles /dev/vx/dsk/orabinvol_dg/orabinvol /oracle

        After mounting the file systems, change the ownership and permissions again:


          # chown -R oracle:dba /orasrv
          # chown -R oracle:dba /oracle
          # chmod 775 /oracle
    3. On the new node, open a new window and log in as oracle user. Edit the listener.ora file and add the IP address (or the virtual IP address) for the new node. For example, create a section for LISTENER_saturn that resembles:

           .
           LISTENER_saturn =
            (DESCRIPTION_LIST =
             (DESCRIPTION =
              (ADDRESS_LIST =
               (ADDRESS = (PROTOCOL = TCP)(HOST = 192.2.40.23)(PORT = 1521))
               )
              )
             )
           .
    4. While installing Oracle on the new node, make sure the /var/opt/oracle/srvConfig.loc file is identical to the one for existing nodes, including contents, permissions, and ownership. If necessary, copy the file from one of the other nodes.
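    As a quick check before continuing, confirm from the new node that both shared file systems are mounted and owned by the oracle user:

      # mount -v | grep vxfs
      # ls -ld /orasrv /oracle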
  4. Restart the new node:
      # /usr/sbin/shutdown -r now
    As the new node boots, VCS propagates the configuration from the existing cluster nodes to the new node. All the configuration files located in the /etc/VRTSvcs/conf/config directory, including main.cf, CVMTypes.cf, CFSTypes.cf, and OracleTypes.cf, are identical on each node.
    At this point, GAB shows membership for all the nodes, and all the following ports should be up on all the nodes:

      # gabconfig -a
    GAB Port Memberships
    ==============================================================
    Port a gen df205 membership 012
    Port b gen df20e membership 012
    Port d gen df20f membership 012
    Port f gen df219 membership 012
    Port h gen df211 membership 012
    Port o gen df208 membership 012
    Port v gen df215 membership 012
    Port w gen df217 membership 012
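    To spot-check that the configuration was propagated, you can compare checksums of main.cf across the nodes (this assumes remsh access between the nodes); the three values should match:

      # cksum /etc/VRTSvcs/conf/config/main.cf
      # remsh galaxy cksum /etc/VRTSvcs/conf/config/main.cf
      # remsh nebula cksum /etc/VRTSvcs/conf/config/main.cf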
  5. Verify the CVM group is configured and online on each node (including the new node):
      # hastatus -sum 
      -- SYSTEM STATE 
      -- System      State          Frozen 
      A  galaxy      RUNNING        0 
      A  nebula      RUNNING        0 
      A  saturn      RUNNING        0 

      -- GROUP STATE 
      -- Group       System   Probed    AutoDisabled    State 
      B  cvm         galaxy   Y         N               ONLINE 
      B  cvm         nebula   Y         N               ONLINE 
      B  cvm         saturn   Y         N               ONLINE
  6. On one of the existing nodes, ensure CVM recognizes the new node:
      # /opt/VRTS/bin/vxclustadm nidmap
      Name        CVM Nid      CM Nid      State
      galaxy      0            0           Joined: Slave
      nebula      1            1           Joined: Master
      saturn      2            2           Joined: Slave
  7. Whether you installed Oracle9i locally or on shared storage, run the Global Services Daemon (gsd) in the background on the new node as oracle user:
      $ $ORACLE_HOME/bin/gsdctl start
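    To confirm the daemon is running, you can query its status with the stat option:

      $ $ORACLE_HOME/bin/gsdctl stat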

Phase 8 - Configuring New Oracle Instance

  1. On an existing node, add a new instance. Refer to the Oracle9i Installation Guide. Highlights of the steps to add a new instance include:

    • Logging in as the oracle user and connecting to the instance.
    • Creating a new "undotbs" tablespace for the new instance. For example, if the tablespace is for the third instance, name it "undotbs3". If the database uses raw volumes, create the volume first. Use the same size as the one for the existing "undotbs" volumes.
    • Creating two new "redo" log groups for the new instance. For example, if the redo logs are for the third instance, name them "redo3_1" and "redo3_2". If the database uses raw volumes, create the volumes for the redo logs first. Use the size used by the existing redo volumes.
    • Enabling "thread 3", where 3 is the number of the new instance (see the sketch after this list).
    • Preparing the init{SID}.ora file for the new instance on the new node.
    • Preparing the bdump, cdump, udump, and pfile directories if Oracle is installed locally on the new node.
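    The exact statements depend on your storage layout; the following sketch, run with SQL*Plus on an existing node, only illustrates the undo tablespace, redo log, and thread steps for a third instance. The data file paths, group numbers, and sizes are placeholders, not values from this manual.

      $ sqlplus '/as sysdba'
      SQL> CREATE UNDO TABLESPACE undotbs3 DATAFILE '/oradata/undotbs3.dbf' SIZE 500M;
      SQL> ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 5 ('/oradata/redo3_1.log') SIZE 100M, GROUP 6 ('/oradata/redo3_2.log') SIZE 100M;
      SQL> ALTER DATABASE ENABLE PUBLIC THREAD 3;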

  2. If you use in-depth monitoring for the database, create the table for the database instance. Create the table on the new node. Refer to the VERITAS Cluster Server Enterprise Agent for Oracle, Installation and Configuration Guide for instructions on creating the table.
  3. Configure the ODM port on the new node.
    1. Unmount the ODM directory to unconfigure port d:

          # /sbin/init.d/odm stop

        The installation automatically mounts ODM on the new node but does not link it.

    2. Mount the ODM directory. Re-mounting the ODM directory configures port d and re-links the ODM libraries with SFRAC:

          # /sbin/init.d/odm start
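    To confirm that port d is configured again after remounting ODM, check GAB membership on the new node:

      # gabconfig -a | grep "Port d"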
  4. Create a mount point for the shared file system:
      # mkdir /rac_ts
  5. From the same node, mount the file system:
      # mount -F vxfs -o cluster /dev/vx/dsk/rac_dg/rac_vol1 /rac_ts
  6. Set "oracle" as the owner and "dba" as the group of the file system, and set "755" as the permissions:
      # chown oracle:dba /rac_ts
      # chmod 755 /rac_ts
  7. Log in as oracle user and try to start the new instance manually; the following example is for a third system:
      $ export ORACLE_SID=rac3
      $ sqlplus '/as sysdba'
      SQL> startup pfile=/oracle/orahome/dbs/initrac3.ora
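    If the startup succeeds, you can confirm from the same SQL*Plus session that the instance is open:

      SQL> SELECT instance_name, status FROM v$instance;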
  8. After the new Oracle instance is brought up manually on the new node, place the instance under VCS control.
    1. Add the new node to the SystemList. For example, if the existing nodes (galaxy and nebula) are nodes 0 and 1, the new node (saturn) is node 2:

          # haconf -makerw
          # hagrp -modify oradb1_grp SystemList -add saturn 2
    2. Add the new node to the AutoStartList for oradb1_grp:

          # hagrp -modify oradb1_grp AutoStartList galaxy nebula saturn
    3. Modify the Sid (system ID) and Pfile (parameter file location) attributes of the Oracle resource. For example:

          # hares -modify VRTdb Sid rac3 -sys saturn
          # hares -modify VRTdb Pfile /oracle/orahome/dbs/initrac3.ora -sys saturn
    4. If you created a table for in-depth monitoring, modify the Table attribute of the Oracle resource. For example:

          # hares -modify VRTdb Table vcstable_saturn -sys saturn
    5. Close and save the configuration:

          # haconf -dump -makero
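    As an optional check before stopping and restarting VCS on the new node, display the group and resource to confirm the new values (oradb1_grp and VRTdb are the names used in this example):

      # hagrp -display oradb1_grp
      # hares -display VRTdb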
  9. From the new node, verify the configuration:
      # hastop -local
    VCS takes all resources offline on the new node.
  10. Verify all resources come online after starting VCS on the new node:
      # hastart
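    Once the node rejoins the cluster, hastatus should show the cvm and Oracle service groups ONLINE on saturn as well:

      # hastatus -sum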