APPENDIX G
Using the HA-NFS File System Agent
This appendix describes how to set up the HA-NFS data service for the Cluster Platform 280/3 system using the automated installation scripts. These scripts, which are contained on a supplemental CD, are setup-hanfs-node1.sh and setup-hanfs-node2.sh.
The scripts are located in /jumpstart/Packages/HA-NFS. They are executed after the SunTone cluster is installed using the JumpStart mechanism. Because setting up the HA-NFS data service requires configuration changes on both cluster nodes, two separate scripts are provided, one for each node. The setup-hanfs-node1.sh script must be run to completion on the first cluster node before the setup-hanfs-node2.sh script is run on the second cluster node.
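The following is a minimal invocation sketch. It assumes the scripts directory is reachable from each cluster node (for example, copied locally or mounted from the management server), which this appendix does not specify.

  # On the first cluster node, as root:
  cd /jumpstart/Packages/HA-NFS
  ./setup-hanfs-node1.sh

  # After it completes successfully, on the second cluster node, as root:
  cd /jumpstart/Packages/HA-NFS
  ./setup-hanfs-node2.sh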
The shared Sun StorEdge T3 array, which the HA-NFS data service uses for the exported NFS file system and administration files, is configured with the same volume manager that was used to mirror the boot disk of the cluster nodes during the JumpStart installation. The pair of arrays is assumed to be configured as a single LUN with hardware RAID 5. If you reconfigure the T3 arrays, you must modify the scripts to recognize the new layout.
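To confirm that the shared LUNs are visible from a node before running the scripts, you can list the disks that Solaris sees. This is a generic check rather than a step from the scripts; the device names are the ones this appendix expects.

  # List all disks known to the node and look for the shared T3 LUNs:
  format < /dev/null | grep t1d0
  # Expected devices (per this appendix): c1t1d0 and c2t1d0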
The scripts are designed to be run immediately after a fresh install, but you can run them later if no device groups were defined on the shared storage. Only minimal information is required by the scripts. This information includes:
Default values are used for most configuration parameters. These values can be changed by modifying the values in each of the scripts before they are executed.
Setting up the HA-NFS data service requires both local changes to each node and global changes to the cluster configuration database. Each script performs the local changes for its own node; the global changes are divided between the two scripts. The scripts are split this way because the local changes must be made before the corresponding global changes. If all of the global changes were collected in a single script, you would have to run a script on each node and then run the script containing the global changes. Splitting them means you run only one script on each node, with no need to go back and forth.
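The contents of the scripts are not shown in this appendix. The outline below is only a conceptual sketch of how the work might be divided; the step descriptions are illustrative and are not taken from the actual scripts.

  # setup-hanfs-node1.sh (conceptual outline only, not the actual script)
  #   - local changes on node 1 (for example, NFS-related files and mount points)
  #   - global changes that depend only on node 1 being prepared
  #
  # setup-hanfs-node2.sh (conceptual outline only, not the actual script)
  #   - local changes on node 2
  #   - remaining global changes (for example, bringing the HA-NFS service online)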
The following parameters can be modified before running the scripts:
The default values for these parameters are:
The setup scripts must be run as root on each node.
1. First run the setup-hanfs-node1.sh script on the first cluster node.
When it completes, run the setup-hanfs-node2.sh script on the other node.
2. If you are unsure of each node's identity, run the scstat -n command and observe the output.
bur280rn0# scstat -n

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     bur280rn0           Online
  Cluster node:     bur280rn1           Online
In this example, bur280rn0 is node 1, and bur280rn1 is node 2.
Once the setup scripts have been run, you can manage the HA-NFS service either from the command line or through the SunPlex Manager GUI. By default, each cluster node is configured with an Apache web server that listens on port 3000. Because a login and password are required to use SunPlex Manager, connections are made over a secure (HTTPS) port.
1. Enter the name of one of the cluster nodes in a web browser.
https://node1.siroe.com:3000
After you enter a login (root) and password, the cluster configuration data is displayed.
2. To examine the HA-NFS resource group that was set up, click on Resource Groups in the left window pane.
The Action Menu allows you to perform common cluster administration functions, such as switching the service to the other cluster node. A useful feature of SunPlex Manager is that it displays the command that is executed for the chosen action. You can use this output as a guide when developing scripts for common administration functions.
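For example, switching the HA-NFS service between nodes can also be done from the command line with scswitch. The resource group name shown below (nfs-rg) is only illustrative; the actual name created by the setup scripts is not given in this appendix.

  # Switch the HA-NFS resource group to the second node (run as root):
  scswitch -z -g nfs-rg -h bur280rn1

  # Verify the resource group status afterward:
  scstat -g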
Before the setup scripts make any modification to the cluster configuration, a check of the current configuration is performed. If the check finds an unexpected configuration, the scripts terminate.
Some of the errors that might occur are:
ERROR: Must be run as root
The setup scripts require root privileges. If you want to run the script as a user other than root, the script has to be modified.
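The exact check that the scripts perform is not shown in this appendix; a typical Bourne shell root check on Solaris looks like the following sketch.

  # Hypothetical root check; not taken from the setup scripts themselves.
  if [ "`/usr/xpg4/bin/id -u`" != "0" ]; then
      echo "ERROR: Must be run as root" >&2
      exit 1
  fi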
ERROR: node1 is not node 1
An attempt to run setup-hanfs-node1.sh on node 2 occurred.
ERROR: node2 is not node 2
An attempt to run setup-hanfs-node2.sh on node 1 occurred.
ERROR: Both nodes need to be online
The setup scripts assume that both nodes are online. If they are not, then they must be brought online before running the scripts.
ERROR: Script must be run with no device groups
It is assumed that no global device groups were created on the shared storage before running the scripts. You can ignore this message and have the script attempt to create another device group, but this is recommended only if the script is being re-run on the same system.
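To see whether any device groups already exist before running the scripts, the standard status command can be used. This check is not part of the scripts as described here.

  # List device group status; no shared-storage device groups should be reported:
  scstat -D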
ERROR: Expected number of DID devices not present
It is assumed that ten DID mappings will be present, representing the physical devices on the Cluster Platform 280/3 system. If ten do not exist, the scdidadm command is run to probe the devices. If ten still are not present, there could be a hardware problem with the shared storage.
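You can inspect the DID mappings yourself. The commands below are the standard Sun Cluster 3.0 tools for listing and rebuilding the mappings, shown here as a general-purpose check rather than as steps quoted from the scripts.

  # List all DID instances and their local device paths:
  scdidadm -L

  # Re-probe the devices and update the DID configuration (run as root):
  scdidadm -r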
ERROR: Unexpected DID mapping
The arrays are expected to be mapped to c1t1d0 and c2t1d0. If these devices do not show up in the DID mappings, there could be a hardware problem.
ERROR: Unable to identify cluster name
The cluster name is used to construct the name for the logical IP address used by the HA-NFS data service. This error could mean a corrupted cluster database.
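To check the cluster name that the scripts would use, you can print the cluster configuration. This is a general Sun Cluster 3.0 command, not a line taken from the scripts.

  # Print the cluster configuration and extract the cluster name:
  scconf -p | grep -i "cluster name"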
ERROR: Both VxVM and SDS configured
The script will not run if more than one volume manager is configured. This occurs only if the shared storage was configured with a different volume manager than the boot disk.
ERROR: Neither VxVM nor SDS configured
This occurs only if boot disk mirroring was not configured.
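To see which volume manager is actually configured on a node, the standard checks below can be used. They are general Solaris and volume manager commands, not steps from the setup scripts.

  # Solstice DiskSuite (SDS): state database replicas exist if SDS is configured
  metadb

  # VERITAS Volume Manager (VxVM): disk groups exist if VxVM is configured
  vxdg list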
ERROR: Can't initialize disk c1t1d0
This error occurs when VERITAS Volume Manager tries to initialize one of the arrays. If the array was previously configured with Solstice DiskSuite, it must be relabeled before VERITAS Volume Manager can initialize it.
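A quick way to inspect and rewrite the disk label is shown below. This is a generic Solaris procedure offered as a sketch, not a step prescribed by this appendix, and it assumes the disk is no longer in use by Solstice DiskSuite.

  # Inspect the current label (VTOC) of the shared disk:
  prtvtoc /dev/rdsk/c1t1d0s2

  # Rewrite a standard Solaris label interactively with format:
  #   format -d c1t1d0, then use the "label" command at the format> prompt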