
Shared Disk Group Cannot be Imported

If you see a message resembling:


vxvm:vxconfigd:ERROR:vold_pgr_register(/dev/vx/rdmp/disk_name):
local_node_id<0
Please make sure that CVM and vxfen are configured and operating correctly

This message is displayed when CVM cannot retrieve the node ID of the local system from the vxfen driver. This usually happens when port b is not configured. Verify that the vxfen driver is configured by checking the GAB ports with the command:


/sbin/gabconfig -a

Port b must exist on the local system.
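
For example, on a two-node cluster with fencing configured, the output resembles the following; the generation numbers and membership values shown here are illustrative, and additional ports appear as other components start:


GAB Port Memberships
===============================================================
Port a gen 4a1c0001 membership 01
Port b gen 4a1c0005 membership 01

If port b is not listed for the local system, the vxfen driver is not configured on that node.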

CVMVolDg Does Not Go Online Even Though CVMCluster is Online

When the CVMCluster resource goes online, the shared disk groups are automatically imported. If the disk group import fails for some reason, the CVMVolDg resources fault. Clearing and offlining the CVMVolDg type resources does not fix the problem.

Workaround:

  1. Fix the problem causing the import of the shared disk group to fail.
  2. Offline the service group containing the resource of type CVMVolDg as well as the service group containing the resource of type CVMCluster (example commands follow this procedure).
  3. Bring the service group containing the CVMCluster resource online.
  4. Bring the service group containing the CVMVolDg resource online.
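
For example, if the CVMVolDg resource belongs to a service group named oradb_grp, the CVMCluster resource belongs to the cvm group, and the node is named galaxy (these group and system names are illustrative; substitute your own), the command sequence on the node resembles:


# hagrp -offline oradb_grp -sys galaxy
# hagrp -offline cvm -sys galaxy
# hagrp -online cvm -sys galaxy
# hagrp -online oradb_grp -sys galaxy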

Restoring Communication Between Host and Disks After Cable Disconnection

If a fiber cable is inadvertently disconnected between the host and a disk, you can restore communication between the host and the disk without restarting the host by doing the following:

  1. Reconnect the cable.
  2. Use the format command to verify that the host sees the disk. It may take a few minutes before the host can see the disk.
  3. Issue the following vxdctl command to force the VxVM configuration daemon vxconfigd to rescan the disks:
      # vxdctl enable
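
After vxconfigd rescans the disks, you can confirm that VxVM sees the disk again by listing the disk status; the disk should be reported with a status of online:


# vxdisk list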

Node is Unable to Join Cluster While Another Node is Being Ejected

A cluster that is currently fencing out (ejecting) a node prevents a new node from joining until the fencing operation is completed. The following are example messages that appear on the console for the new node:


...VCS FEN ERROR V-11-1-25 ... Unable to join running cluster 
...VCS FEN ERROR V-11-1-25 ... since cluster is currently fencing 
...VCS FEN ERROR V-11-1-25 ... a node out of the cluster.

...VCS GAB.. Port b closed

If you see these messages when the new node is booting, the startup script (/etc/vxfen-startup) on the node makes up to five attempts to join the cluster. If these attempts are not sufficient to allow the node to join the cluster, restart the new node or attempt to restart the vxfen driver with the command:


/sbin/init.d/vxfen start
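
After the running cluster completes the fencing operation and the vxfen driver restarts successfully, confirm that port b is now registered on the new node by checking the GAB ports again:


/sbin/gabconfig -a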

vxfentsthdw Fails When SCSI TEST UNIT READY Command Fails

If you see a message resembling:


Issuing SCSI TEST UNIT READY to disk reserved by other node FAILED.
Contact the storage provider to have the hardware configuration fixed.

The disk array does not support returning success for a SCSI TEST UNIT READY command when another host has the disk reserved using SCSI-3 persistent reservations. This happens with Hitachi Data Systems 99XX arrays if bit 186 of the system mode option is not enabled.
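
After the array configuration is corrected, rerun the utility against the same disk to verify that SCSI-3 persistent reservations behave as expected. The path shown below is typical for this release; verify the location of vxfentsthdw on your systems:


# /opt/VRTSvcs/vxfen/bin/vxfentsthdw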