APPENDIX G

Using the HA-NFS File System Agent

This appendix describes how to set up the HA-NFS data service on the Cluster Platform 280/3 system using two automated installation scripts, setup-hanfs-node1.sh and setup-hanfs-node2.sh, which are contained on a supplemental CD.

The scripts are located at /jumpstart/Packages/HA-NFS and are executed after the SunTone cluster has been installed through the JumpStart mechanism. Because setting up the HA-NFS data service requires configuration changes on both cluster nodes, two separate scripts are provided, one for each node. The setup-hanfs-node1.sh script must be run to completion on the first cluster node before the setup-hanfs-node2.sh script is run on the second cluster node. A typical invocation sequence is sketched below.
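For example, assuming the scripts are reached through the management server's automounted /net path (the management server is named bur280ms in the sample output later in this appendix), the sequence is:

# On the first cluster node, run to completion:
bur280rn0# /net/bur280ms/jumpstart/Packages/HA-NFS/setup-hanfs-node1.sh

# Then, on the second cluster node:
bur280rn1# /net/bur280ms/jumpstart/Packages/HA-NFS/setup-hanfs-node2.sh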

The shared Sun StorEdge T3 arrays, which the HA-NFS data service uses for the exported NFS file system and its administration files, are configured with the same volume manager that was used to mirror the boot disks of the cluster nodes during the JumpStart installation. Each array in the pair is assumed to be configured as a single LUN with hardware RAID 5. If you reconfigure the T3 arrays, you must modify the scripts to recognize the new layout.
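One way to confirm that both nodes see the shared arrays before you run the scripts (the scripts expect the arrays at c1t1d0 and c2t1d0, as noted in Troubleshooting Tips):

bur280rn0# format < /dev/null   # c1t1d0 and c2t1d0 should appear in the disk list
bur280rn0# scdidadm -L          # both devices should also appear in the DID mappings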

The scripts are designed to be run immediately after a fresh installation, but you can run them later, provided that no device groups have been defined on the shared storage. Only minimal information is required by the scripts: the name of the management server, the IP address to assign to the logical hostname used by the NFS service, and the network adapters to place in the NAFO group. All of these are prompted for, as shown in the sample output.

Default values are used for most configuration parameters. These values can be changed by modifying the values in each of the scripts before they are executed.


What the Scripts Do

Setting up the HA-NFS data service requires both local changes on each node and global changes to the cluster configuration database. Each script performs the local changes for its node, while the global changes are divided between the two scripts. The work is split this way because the local changes on a node must occur before the global changes that depend on them. If all of the global changes were placed in a single script, you would have to run a script on each node and then run a third script containing the global changes. Splitting the work means you run exactly one script on each node, with no going back and forth.

Local changes include:

- Configuring a NAFO (network adapter failover) group on the node
- Installing the SUNWscnfs data service package

Global changes include:

- Creating a disk group and volume on the shared T3 arrays and building a global file system on it (performed by the first script)
- Creating the HA-NFS resource group (nfs-rg) and its logical hostname resource, and bringing the service online (performed by the second script)


Tunable Parameters

The following parameters can be modified before running the scripts: the disk group name, the volume name, the global mount point for the exported file system, and the resource group name.

The default values for these parameters, as used in the sample output, are nfsdg, hanfsdisk01, /global/nfs, and nfs-rg, respectively. To change them, edit the scripts as sketched below.
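For example, a minimal sketch of the kind of assignments to look for near the top of each script (the variable names here are hypothetical; check the scripts themselves for the actual names):

DISKGROUP=nfsdg          # disk group created on the shared T3 arrays
VOLUME=hanfsdisk01       # volume holding the exported file system
MOUNTPOINT=/global/nfs   # global mount point for the NFS file system
RESOURCEGROUP=nfs-rg     # name of the HA-NFS resource group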


To Run the Scripts

The setup scripts must be run as root on each node.

1. Run the setup-hanfs-node1.sh script on the first cluster node.

When it completes, run the setup-hanfs-node2.sh script on the other node.

2. If you are unsure of each node's identity, run the scstat -n command and observe the output.

bur280rn0# scstat -n
-- Cluster Nodes --
                   Node name      Status
                   ---------      ------
  Cluster node:    bur280rn0      Online
  Cluster node:    bur280rn1      Online

In this example, bur280rn0 is node 1, and bur280rn1 is node 2.



Note - In theory, the scripts do not have to run on specific nodes, as long as they are run in the correct order. However, because they were tested only on specific nodes, each script verifies that it is being executed on the recommended node.




Sample Output

CODE EXAMPLE G-1 Setup for First Cluster Node

# /net/ManagementServer-ip-address/jumpstart/Packages/HA-NFS/setup-hanfs-node1.sh 
 
CAUTION: This script can only be run on node 1 before the setup-hanfs-node2.sh script has been run on node 2. Node 1 is the first node listed in the output of the scstat -n command. 
Continue? (y/n) y 
 
Nodename Status 
-------- ------ 
bur280rn0 Online 
bur280rn1 Online 
 
No NAFO groups are configured. Configure now? (y/n) y 
 
In the following, you will be prompted to do configuration for network adapter failover 
 
Do you want to continue ... [y/n]: y 
 
How many NAFO groups to configure [1]: <Return>
 
Enter NAFO group number [0]: <Return>
Enter space-separated list of adapters in nafo0: qfe0 qfe4 
 
Checking configuration of nafo0: 
Testing active adapter qfe0...
Testing adapter qfe4... 
 
NAFO configuration completed 
SUNWscnfs not installed. Install now? (y/n) y 
Enter Management Server name (default=bur280ms)
 

The following packages are available:
1 SUNWscnfs Sun Cluster NFS Server Component 
(sparc) 3.0.0,REV=2000.10.01.01.00 
 
Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]: 1
 
Processing package instance <SUNWscnfs> from </net/bur280ms/jumpstart/Packages/SC3.0u2/scdataservices_3_0_u2/components/SunCluster_HA_NFS_3.0/Packages> 
 
Sun Cluster NFS Server Component 
(sparc) 3.0.0,REV=2000.10.01.01.00 
Using </opt> as the package base directory. 
## Processing package information.
## Processing system information. 
## Verifying package dependencies. 
## Verifying disk space requirements. 
## Checking for conflicts with packages already installed. 
## Checking for setuid/setgid programs. 
 
This package contains scripts which will be executed with super-user permission during the process of installing this package. 
 
Do you want to continue with the installation of <SUNWscnfs> [y,n,?] y 
 
Installing Sun Cluster NFS Server Component as <SUNWscnfs> 
 
## Installing part 1 of 1. 
## Executing postinstall script. 
Installation of <SUNWscnfs> was successful. 
 
The following packages are available: 
1 SUNWscnfs Sun Cluster NFS Server Component 
(sparc) 3.0.0,REV=2000.10.01.01.00 
 
Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]: q

The hostname used for the NFS service is: bur280r-nfs 
Enter IP Address of bur280r-nfs: HA_NFS_IP_address
About to configure the T3 Arrays with VxVM, continue? (y/n) y 
newfs: /dev/vx/rdsk/nfsdg/hanfsdisk01 last mounted as /global/nfs 
newfs: construct a new file system /dev/vx/rdsk/nfsdg/hanfsdisk01: (y/n)? y 
/dev/vx/rdsk/nfsdg/hanfsdisk01: 1024000 sectors in 500 cylinders of 32 tracks, 64 sectors 
500.0MB in 32 cyl groups (16 c/g, 16.00MB/g, 7680 i/g) 
super-block backups (for fsck -F ufs -o b=#) at: 
32, 32864, 65696, 98528, 131360, 164192, 197024, 229856, 262688, 295520, 328352, 361184, 394016, 426848, 459680, 492512, 525344, 558176, 591008, 623840, 656672, 689504, 722336, 755168, 788000, 820832, 853664, 886496, 919328, 952160, 984992, 1017824, 
 
You should now run setup-hanfs-node2.sh on cluster node 2 

CODE EXAMPLE G-2 Setup for Second Cluster Node

# /net/ManagementServer-ip-address/jumpstart/Packages/HA-NFS/setup-hanfs-node2.sh 
 
CAUTION: This script can only be run on node 2 after the 
setup-hanfs-node1.sh script has been run on node 1. Node 2 
is the second node listed in the output of the scstat -n 
command. Continue? (y/n) y 
Nodename Status 
-------- ------ 
bur280rn0 Online 
bur280rn1 Online 
No NAFO groups are configured. Configure now? (y/n) y 
In the following, you will be prompted to do configuration for network adapter failover 
 
Do you want to continue ... [y/n]: y 
 
How many NAFO groups to configure [1]: <Return>
 
Enter NAFO group number [0]: <Return>
Enter space-separated list of adapters in nafo0: qfe0 qfe4
 
Checking configuration of nafo0: 
Testing active adapter qfe0...
Testing adapter qfe4...
 
NAFO configuration completed 
SUNWscnfs not installed. 
Install now? (y/n) y 
Enter Management Server name (default=bur280ms)
 

The following packages are available: 
1 SUNWscnfs Sun Cluster NFS Server Component 
(sparc) 3.0.0,REV=2000.10.01.01.00 
Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]: <Return>
 
Processing package instance <SUNWscnfs> from </net/bur280ms/jumpstart/Packages/SC3.0u2/scdataservices_3_0_u2/components/SunCluster_HA_NFS_3.0/Packages> 
 
Sun Cluster NFS Server Component 
(sparc) 3.0.0,REV=2000.10.01.01.00 
Using </opt> as the package base directory.
## Processing package information.
## Processing system information. 
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed. 
## Checking for setuid/setgid programs. 
 
This package contains scripts which will be executed with super-user permission during the process of installing this package. 
 
Do you want to continue with the installation of <SUNWscnfs> [y,n,?] y 
Installing Sun Cluster NFS Server Component as <SUNWscnfs> 
## Installing part 1 of 1. 
## Executing postinstall script. 
 
Installation of <SUNWscnfs> was successful. 
 
The following packages are available: 
1 SUNWscnfs Sun Cluster NFS Server Component 
(sparc) 3.0.0,REV=2000.10.01.01.00 
 
Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]: q

 

The hostname used for the NFS service is: bur280r-nfs 
Enter IP Address of bur280r-nfs: HA_NFS_IP_address 
Setup complete. Run https://bur280rn0:3000 or https://bur280rn1:3000 to administer the HA-NFS service


HA-NFS Administration

Once the setup scripts have been run, you can manage the HA-NFS service either from the command line or through the SunPlex Manager GUI. By default, each cluster node is configured with an Apache web server that listens on port 3000. Because a password is required to log in to SunPlex Manager, a secure (HTTPS) port is used.
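From the command line, for example, you can check the current state of the HA-NFS resource group with the scstat command:

bur280rn0# scstat -g   # shows resource groups, their primaries, and resource states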


To Invoke SunPlex Manager

1. In a web browser, enter the name of one of the cluster nodes, specifying port 3000.

For example:

https://node1.siroe.com:3000

After you enter a login (root) and the password, the cluster configuration data is displayed.

2. To examine the HA-NFS resource group that was set up, click on Resource Groups in the left window pane.

 

FIGURE G-1 The nfs-rg resource group


The Action Menu allows you to perform common cluster administration functions, such as switching the service to the other cluster node. A useful feature of SunPlex Manager is that it displays the command it executes for the chosen action. You can use this output as a guide when developing scripts for common administration tasks, as in the example below.
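For example, a switchover of the nfs-rg resource group to the other cluster node can be performed from the command line with scswitch (a sketch; substitute your own node names):

# Make bur280rn1 the primary for the HA-NFS resource group
bur280rn0# scswitch -z -g nfs-rg -h bur280rn1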


Troubleshooting Tips

Before the setup scripts make any modifications to the cluster configuration, they check the current configuration. If the check finds an unexpected configuration, the scripts terminate.
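You can run the same kind of checks by hand before starting the scripts, for example:

bur280rn0# scstat -n     # both cluster nodes should be online
bur280rn0# scstat -D     # no device groups should be configured yet
bur280rn0# scdidadm -L   # the DID mappings should include c1t1d0 and c2t1d0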

Some of the errors that might occur are:

ERROR: Must be run as root 

The setup scripts require root privileges. If you want to run the script as a user other than root, the script has to be modified.

ERROR: node1 is not node 1 

An attempt to run setup-hanfs-node1.sh on node 2 occurred.

ERROR: node2 is not node 2

An attempt to run setup-hanfs-node2.sh on node 1 occurred.

ERROR: Both nodes need to be online

The setup scripts assume that both nodes are online. If they are not, then they must be brought online before running the scripts.

ERROR: Script must be run with no device groups

It is assumed that no global device groups were created on the shared storage before the scripts are run. You can ignore this message and let the script attempt to create its device group anyway, but this is recommended only when the script is being re-run on a system where it has already completed.

ERROR: Expected number of DID devices not present

It is assumed that ten DID mappings will be present, representing the physical devices on the Cluster Platform 280/3 system. If ten do not exist, the scdidadm command is run to probe the devices. If ten are still not present, there could be a hardware problem with the shared storage.
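If the expected mappings are missing, you can reprobe the devices manually before suspecting hardware, for example:

bur280rn0# scdidadm -r   # probe for devices and update the DID database
bur280rn0# scdidadm -L   # list the resulting DID mappings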

ERROR: Unexpected DID mapping 

The arrays are expected to be mapped to c1t1d0 and c2t1d0. If these devices do not show up in the DID mappings, there could be a hardware problem.

ERROR: Unable to identify cluster name 

The cluster name is used to construct the name of the logical IP address used by the HA-NFS data service (for example, the cluster name bur280r yields the logical hostname bur280r-nfs in the sample output). This error could indicate a corrupted cluster database.

ERROR: Both VxVM and SDS configured

The script will not run if more than one volume manager is configured. This occurs only if the shared storage was configured with a different volume manager than the boot disk.

ERROR: Neither VxVM nor SDS configured

This error occurs only if boot disk mirroring was unconfigured.

ERROR: Can't initialize disk c1t1d0

ERROR: Can't initialize disk c2t1d0

These errors occur when VERITAS Volume Manager tries to initialize one of the arrays. If the array was previously configured with Solstice DiskSuite, it might need to be re-labeled with a standard Solaris label (for example, with the format utility) before VxVM can initialize it.