Cluster Platform 280/3
Late-Breaking News

This document contains the late-breaking news for the Cluster Platform 280/3 system. These issues were discovered after the production of the Cluster Platform 280/3 Installation and Recovery Guide.


Known Issues

Do Not Interrupt the Installation Process

During the installation, there are several points at which one or more of the cluster components boots or downloads software. These activities take time (approximately 15 minutes for the management server to reboot, for example) and must not be interrupted. Do not interrupt the installation process at any time (for example, by pressing Ctrl-C), or the cluster platform will be left in an unpredictable state.

Recovering From Garbled Screen Text

On the local system, you must set and export the TERM environment variable to a value that matches the kind of terminal you are using (refer to page 23 of the installation guide). If the TERM value is not set properly, the screen text might be garbled and prevent you from interacting with the installation script after you boot the management server. If this occurs, you must stop the installation and start over, as described in the following procedure.
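For example, if your local terminal emulates a VT100, you might set and export the variable as follows in a Bourne shell (vt100 is an assumed value; substitute the terminal type you are actually using):

# TERM=vt100; export TERM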


To Stop the Installation and Start Over

1. In the telnet window of your local system, send a break to take the management server to the ok prompt: press Control-] to reach the telnet prompt, then type send brk.
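For example, the telnet> prompt appears after you press Control-]:

telnet> send brk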

2. At the ok prompt, boot the management server to single-user mode:

ok boot disk -s

3. Run the sys-unconfig command to remove previously defined system parameters:

# sys-unconfig

4. Provide confirmation (answer y) when the sys-unconfig command prompts you.
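The confirmation prompt is similar to the following (the exact wording varies by Solaris release):

Do you want to continue (y/n) ? y

After you confirm, sys-unconfig removes the system configuration and halts the system.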

5. Resume the installation of the management server.

To do this, return to the Cluster Platform 280/3 Installation and Recovery Guide, page 24, Step 4, where you are instructed to boot the management server.


Documentation Errata

This section describes corrections or clarifications to the instructions listed in the Cluster Platform 280/3 Installation and Recovery Guide.

Accepting Default Values for Installation Questions (BugID 4658506)

The installation guide correctly describes the responses you must type for each installation question. Do not press Return to accept a default value unless the installation guide indicates that Return is accepted for a given question. This applies to all questions, including those that expect a Y or N response.

Screen Text Might Be Different Than Shown In Guide

The formatting of the text and the escape sequences used during the installation questions vary based on the type of terminal you use on the local system.

Recovery Information (BugID 4668236)

The recovery CDs are used as a last resort if you need to return the cluster platform to its factory-shipped configuration. Initiate a recovery only at the direction of technical support, and be aware that site-specific data will be lost.

A recovery can be performed at the following levels:

  • Return the management server to its factory preinstalled configuration, without affecting the cluster nodes and storage.
  • Return the management server to its factory preinstalled configuration, followed by a JumpStart to reinstall the cluster nodes (leaving the storage untouched).
  • Return the management server to its factory preinstalled configuration, followed by a JumpStart to reinstall the cluster nodes and the storage.


Note - To restore the storage to the factory configuration, the Sun StorEdge T3 arrays must be set up as shipped from the factory. If you changed the T3 configuration (changed or added LUNs, or repartitioned the slices, for example), the recovery of the storage might fail. In addition, the T3 arrays must be powered on, and all disks must be labeled. If your storage configuration does not meet these conditions, contact technical support for alternative methods of recovering the storage.
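One way to verify that a disk is labeled is to read its volume table of contents with the prtvtoc command (a sketch; the device name c1t1d0s2 is an example, substitute a device at your site):

# prtvtoc /dev/rdsk/c1t1d0s2

If the disk is unlabeled, prtvtoc reports an error instead of printing the partition map.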


Modified Instructions For Performing a Management Server Recovery (BugID 4658562)

In the unlikely event that you must perform a recovery of the management server, you are instructed (in the Cluster Platform 280/3 Installation and Recovery Guide) to follow most of the installation procedures that you performed when you first installed your cluster platform. The documentation fails to point out important differences regarding the Sun StorEdge T3 arrays:

1. During a recovery, do not power cycle the Sun StorEdge T3 arrays as mentioned on page 47 in Step 1. Instead, proceed to Step 2, which instructs you to press Return.

2. The Sun StorEdge T3 arrays retain the passwords and IP addresses that were assigned during the initial installation, so you do not need to set them during a recovery. If you see messages complaining about T3 passwords and IP addresses, ignore them; the recovery process continues after a time-out of a few seconds.

3. If you follow the two preceding steps, the data stored on the Sun StorEdge T3 arrays is preserved.

Post-Installation Modification of the /etc/hosts File (BugID 4658593)

Once the management server finishes JumpStarting the nodes, the /etc/hosts file must be changed so that it corresponds to the cluster platform network configuration.

Although not mentioned in the installation guide, you should perform the following steps after the installation is complete. A good time to perform these steps is after Step 6 on page 65.


To Update the /etc/hosts File

1. Open the /etc/hosts file on the management server with an editor.
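For example, using the vi editor:

# vi /etc/hosts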

2. Delete the -admin suffix from the first two node entries (node1-admin and node2-admin in the following example):

# Physical Hosts (Physical Addresses)
129.153.47.181 test         #Management Server
129.153.47.71  TC           #Terminal Concentrator
129.153.47.120 node1-admin  #First Cluster Node
129.153.47.121 node2-admin  #Second Cluster Node
10.0.0.1       test-admin   #Admin Network
10.0.0.2       node1        #First Cluster Node admin network
10.0.0.3       node2        #Second Cluster Node admin network
10.0.0.4       T3-01        #First T3 Host Name
10.0.0.5       T3-02        #Second T3 Host Name

3. Append -admin to the two internal administration node names, as shown in the following example:

# Physical Hosts (Physical Addresses)
129.153.47.181 test         #Management Server
129.153.47.71  TC           #Terminal Concentrator
129.153.47.120 node1        #First Cluster Node
129.153.47.121 node2        #Second Cluster Node
10.0.0.1       test-admin   #Admin Network
10.0.0.2       node1-admin  #First Cluster Node admin network
10.0.0.3       node2-admin  #Second Cluster Node admin network
10.0.0.4       T3-01        #First T3 Host Name
10.0.0.5       T3-02        #Second T3 Host Name

4. Save your changes and quit the editing session.
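You can then verify that the updated names resolve as expected, for example (given the example file above, getent reports the internal administration address):

# getent hosts node1-admin
10.0.0.2        node1-admin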

Post-Installation Removal of the /.rhosts File

For security purposes, we recommend that you remove the root user's /.rhosts file from the cluster nodes. It is typically not needed once the cluster installation is complete. However, some cluster agents might require the root user to have remote access to the cluster nodes; consult the cluster agent documentation for details.
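For example, on each cluster node:

# rm /.rhosts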