Cluster Platform 280/3 Late-Breaking News
This document contains the late-breaking news for the Cluster Platform 280/3 system. These issues were discovered after the production of the Cluster Platform 280/3 Installation and Recovery Guide.
During the installation, there are several points at which one or more of the cluster components boots or downloads software. These activities take time (approximately 15 minutes for the management server to reboot, for example) and must not be interrupted. Do not interrupt the installation process at any time (for example, by pressing Ctrl-C), or the cluster platform will be left in an unpredictable state.
On the local system, you must set and export the TERM environment variable to a value that matches the kind of terminal you are using (refer to page 23 of the installation guide). If the TERM value is not set properly, the screen text might be garbled, preventing you from interacting with the installation script after you boot the management server. If this occurs, you must stop the installation and start over, as described in the following procedure.
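For example, if your local terminal emulates a VT100, you would type the following in a Bourne shell before starting the installation (vt100 is only an illustration; substitute the value that matches your terminal):

# TERM=vt100
# export TERM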
1. In the telnet window of your local system, perform a break (press Ctrl-], then type send brk at the telnet prompt) to take the management server to the ok prompt.
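The break sequence looks similar to the following (Ctrl-] is the default telnet escape character):

telnet> send brk
ok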
2. At the ok prompt, boot the management server to single-user mode:
ok boot disk -s
3. Run the sys-unconfig command to remove previously defined system parameters:
# sys-unconfig
4. Provide confirmation (answer Y) to the sys-unconfig command questions.
5. Resume the installation of the management server.
To do this, return to the Cluster Platform 280/3 Installation and Recovery Guide, page 24, Step 4, where you are instructed to boot the management server.
This section describes corrections or clarifications to the instructions listed in the Cluster Platform 280/3 Installation and Recovery Guide.
The installation guide correctly describes the responses you must type for each installation question. Do not press Return to accept a default value unless the installation guide indicates that Return is accepted for a given question. This applies to all questions, including those that expect a Y or N response.
The formatting of text and the escape sequences used during the installation questions vary based on the type of terminal you use for the local system.
The recovery CDs are used as a last resort if you need to return the cluster platform to its factory-shipped configuration. Initiate a recovery only at the direction of technical support, and be aware that site-specific data will be lost.
A recovery can be performed at one of several levels.
In the unlikely event that you must perform a recovery of the management server, you are instructed (in the Cluster Platform 280/3 Installation and Recovery Guide) to follow most of the installation procedures that you performed when you first installed your cluster platform. The documentation fails to point out important differences regarding the Sun StorEdge T3 arrays:
1. During a recovery, do not power cycle the Sun StorEdge T3 arrays as instructed on page 47 in Step 1. Instead, proceed to Step 2, which instructs you to press Return.
2. The Sun StorEdge T3 arrays retain the passwords and IP addresses that were assigned during the initial installation, so you do not need to set them during a recovery. If you see messages complaining about T3 passwords and IP addresses, ignore them; the recovery process continues after a time-out of a few seconds.
3. If you follow the two directions listed above, the data stored on the Sun StorEdge T3 arrays is preserved.
Once the management server finishes JumpStarting the nodes, the /etc/hosts file must be changed so that it corresponds to the cluster platform network configuration.
Although not mentioned in the installation guide, you should perform the following steps after the installation is complete. A good time to perform these steps is after Step 6 on page 65.
1. Open the /etc/hosts file on the management server with an editor.
2. Delete the first two -admin text strings, as in the following example:
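The node names and addresses shown here are hypothetical; your file reflects your site's actual configuration. Change entries such as:

10.0.0.1    node1-admin
10.0.0.2    node2-admin

to:

10.0.0.1    node1
10.0.0.2    node2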
3. Append -admin to the two internal administration node names, as in the following example:
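Again with hypothetical names and addresses, change entries such as:

192.168.0.1    node1
192.168.0.2    node2

to:

192.168.0.1    node1-admin
192.168.0.2    node2-admin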
4. Save your changes and quit the editing session.
For security purposes, we recommend that you remove the root user's /.rhosts file from the cluster nodes. It is typically not needed once the cluster installation is complete. However, some cluster agents might require the root user to have remote access to the cluster nodes; consult the cluster agent documentation for details.
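For example, as root on each cluster node, you could remove the file as follows:

# rm /.rhosts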
Copyright © 2002, Sun Microsystems, Inc. All rights reserved.