C H A P T E R  2

Cluster Platform 220/1000

Cluster Platform 220/1000 provides a self-sustained platform, integrated through Sun Cluster technology, to support highly available applications. This two-node cluster system with shared access to 200 Gbytes of mirrored Ultra-SCSI storage can be used to implement a highly available file server, web server, or mail server.

Sun Cluster technology provides global file systems, global devices, and scalable services. These features allow independent cluster nodes, running independent Solaris Operating Environment instances, to run distributed applications while providing client access through a single IP address.

Your system contains the standard Sun hardware and software required for a cluster environment. This integrated system significantly reduces the complexity and implementation time associated with a cluster setup. The hardware and software are integrated by Sun, using established best practices extracted from engineering recommendations or through field experience, to help increase system availability.



Note - This cluster configuration provides only a basic cluster environment. Data services must be installed and configured by the customer.




Your Integrated System

Your system includes a two-node cluster with shared, mirrored storage, a terminal concentrator, and a management server. The hardware components are installed in the rack in compliance with existing power, cooling, and electromagnetic interference (EMI) requirements. These components are cabled to provide redundant cluster interconnect between nodes, and to provide access to shared storage and production networks.

The management server is the repository for software and patches that are loaded on the system. The management server provides access to the cluster console, and it functions as a JumpStart™ server (install server) for the cluster nodes.



Note - The management server has sufficient CPU power and memory to implement a Sun™ Management Center server for the cluster nodes, if required.



Task Overview

To integrate your Cluster Platform 220/1000 into a production environment, you must:

1. Remove the lower front panel of the rack.

2. Connect the power cord to the front and rear sequencer, and power on both breaker switches. Replace the front panel.

3. Provide a name, IP address and root password for the management server.

4. Provide a name and IP address for the terminal concentrator.

5. Provide a name for the cluster environment and a default router (gateway).

6. Provide names and IP addresses for individual cluster nodes.

7. Configure shared disk storage under Solstice DiskSuite or VERITAS Volume Manager. Configuration includes the creation of disksets (or disk groups), disk volumes, and file systems.

8. Select a quorum device from the shared disk storage.

9. Install and configure the required highly available applications.

10. Install and configure data services to support the highly available applications.

11. Configure Network Adapter Failover (NAFO) for automatic failover.



Note - This document does not provide information to support items 7 through 11. For specific implementation details, refer to the Sun Cluster 3.0 documentation.



The Ethernet address for each cluster node is located on the Customer Information, System Record sheet. Use the serial number on the information sheet and on the back of each node to correctly identify the Ethernet address. (See FIGURE 2-3, Sun Cluster Standard Rack Placement, for the placement of each cluster node.)

TABLE 2-1 provides a worksheet to assist with networking information. You will be referred back to the information you place in this table when you customize the cluster configuration.

 

FIGURE 2-1 Standard Configuration Card Placement

Figure showing the standard card placement.

Software Components

The Cluster Platform 220/1000 software packages include the following:



Note - A collection of software patches is applied to each cluster node. The patches are located in the /SOFTWARE/Solaris8_10-00/Patches directory of the management server. Detailed information on how the software solution is integrated is provided at http://www.sun.com/blueprints.



FIGURE 2-2 shows cluster node interface qfe0 providing network adapter failover (NAFO) for hme0 (production network). Interfaces qfe1 and qfe5 are used for the cluster interconnect. Interfaces qfe2, qfe3, qfe4, and qfe7 are available to expand application network services.
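The NAFO group itself is created later, as part of item 11 in the task overview. As an illustration only, the following sketch shows how a NAFO group covering hme0 and qfe0 might be created with the Sun Cluster 3.0 pnmset command; the group name nafo0 is an example, and the authoritative procedure is in the Sun Cluster 3.0 documentation:

# /usr/cluster/bin/pnmset -c nafo0 -o create hme0 qfe0
# /usr/cluster/bin/pnmstat -l     (verify the NAFO group status)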



Note - See Ethernet IP Address Worksheet to specify the appropriate information for your network environment.



 

FIGURE 2-2 Sun Cluster Standard Interconnections and NAFO

Block diagram showing the network connections.

 


Cluster Platform Component Location

FIGURE 2-3 shows how all of the Cluster Platform 220/1000 components are arranged in the expansion cabinet. TABLE 2-2 lists the rack components and the quantities required.



Note - The rack arrangement complies with weight distribution, cooling, EMI, and power requirements.



 

FIGURE 2-3 Sun Cluster Standard Rack Placement

Figure showing the component placement in the rack.

 

TABLE 2-2 Standard Cluster Rack Components

Component                            Quantity
Sun StorEdge expansion cabinet       1
Netra T1 AC200 management server     1
Netra st D130 boot disks             4
Air baffle                           2
Sun Enterprise cluster node          2
Sun StorEdge D1000 array             2
Terminal concentrator                1


Power and Heating Requirements

The Cluster Platform 220/1000 hardware should have two dedicated AC breaker panels. The cabinet should not share these breaker panels with other, unrelated equipment. The system requires two L30-R receptacles for the cabinet, split between two isolated circuits. For international installations, the system requires two blue 32A IEC 309 (international) receptacles.

If the cabinet is installed on a raised floor, cool conditioned air should be directed to the bottom of each rack through perforated panels.

The 72-inch cabinet in the Cluster Platform 220/1000 consumes power and dissipates heat, as shown in TABLE 2-3.

 

TABLE 2-3 Power and Heat Requirements for Cluster Platforms

72-Inch Cabinet    Maximum Power Draw    Heat Dissipation
1                  2138 W                7700 BTUs/hr.



Cabling the System

The Cluster Platform 220/1000 is shipped with the servers and each of the arrays already connected in the cabinet. You should not need to cable the system. Refer to TABLE 2-4 when servicing the cables.

This section describes how the Cluster Platform 220/1000 components are cabled when shipped from the factory. The standard configuration provides cables connected to the GBICs on the I/O board.

For specific cabling connections, refer to Appendix C.

TABLE 2-4 Standard Cluster Cables

Component                    Part Number    Quantity
SCSI cable (2 m cable)       530-2834*      2
SCSI cable                   530-1883*      2
DB-25/RJ-45 serial cable     2151A          2
Null Ethernet cable          3837A          2
Serial cable                 9524A          1
RJ-45 Ethernet cable         1871A          1

*530-2834 and 530-1883 are manufacturing part numbers. 
FIGURE 2-4 Cluster Platforms Internal Cabling

Figure showing the cabling of the components.


Customizing the Cluster Configuration

When the Cluster Platform 220/1000 is shipped from the factory, the Netra T1 AC200 is preloaded with all of the necessary software to install the cluster nodes with the Solaris operating environment and Sun Cluster 3.0 software.

Because all of the cables are connected and labeled in the factory, configuring the terminal concentrator first will enable the cluster administrator to easily configure the cluster.



Note - You must enter the correct parameters for the initial customization, or the configuration will not initialize properly.




procedure icon  Customizing the Terminal Concentrator



Note - To set up a console terminal using a laptop computer, refer to Appendix A.



1. Power up the main circuit breakers, and then power up all individual system components.

2. Provide console connectivity into the terminal concentrator:

a. Disconnect the serial cable (Part no. 9524A) from Port 1 of the terminal concentrator.

b. Connect the RJ-45 end of the serial cable (Part no. 5121A) to Port 1 of the terminal concentrator and the other end, DB-25 male, to serial port A of a Sun™ workstation.



Note - The tip(1) command connects the Sun workstation I/O with the terminal concentrator I/O during an interactive session.



c. From a terminal window on the Sun workstation, enter the following command:

# /usr/bin/tip hardwire



Note - If the port is busy, refer to Troubleshooting the Cluster Platform 220/1000 Installation in Appendix D.
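The hardwire argument refers to an entry in the /etc/remote file on the Sun workstation. The entry below is a sketch only, assuming the cable is attached to serial port A (/dev/term/a) at 9600 baud; adjust the device and speed to match your workstation before running tip:

hardwire:\
        :dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D: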



3. Configure the terminal concentrator device:

  • Power off and then power on the terminal concentrator.
  • Within 5 seconds after power-on, press and release the TEST button.

The terminal concentrator undergoes a series of diagnostics tests that take approximately 60 seconds to complete.

Following the diagnostics tests, the tip window of the administration workstation should display:

System Reset - Entering Monitor Mode 
monitor::

4. Modify the default IP address to the address that will be used in your network. Use the addr command to modify the network configuration of the terminal concentrator. Use the addr -d command to verify the network configuration:

monitor:: addr
 Enter Internet address [0.0.0.0]:: 192.212.87.62
 Internet address: 192.212.87.62
 Enter Subnet mask [255.255.0.0]:: 255.255.255.0
 Subnet mask: 255.255.255.0 
 Enter Preferred load host Internet address [<any host>]:: 0.0.0.0
 Preferred load host address: <any host>0.0.0.0
 Enter Broadcast address [0.0.0.0]:: 192.212.87.255
 Broadcast address: 192.212.87.255
 Enter Preferred dump address [0.0.0.0]:: 0.0.0.0
 Preferred dump address: 0.0.0.0
 Select type of IP packet encapsulation (ieee802/ethernet) [<ethernet>]:: ethernet
 Type of IP packet encapsulation: <ethernet> :: ethernet 
 Load Broadcast Y/N [Y]:: N
 Load Broadcast: N
 
monitor:: addr -d
 Ethernet address (hex): 00-80-2D-ED-21-23
 Internet address: 192.212.87.62
 Subnet masks: 255.255.255.0
 Preferred load host address: <any host>
 Broadcast address: 192.212.87.255
 Preferred dump address: 0.0.0.0
 Type of IP packet encapsulation: <ethernet>
 Load Broadcast: N
 
monitor:: sequence
 
Enter a list of 1 to 4 interfaces to attempt to use for downloading code or upline dumping. Enter them in the order they should be tried, separated by commas or spaces. Possible interfaces are:
 
 Ethernet: net
 SELF: self
 
Enter interface sequence [net]:: self
 Interface sequence: self

5. Copy the Ethernet address displayed above for the terminal concentrator, and add it to the Ethernet IP Address Worksheet for later reference when configuring the cluster.

6. Terminate your tip session by entering ~ . (tilde and period). Power-cycle the terminal concentrator to enable the IP address changes and wait at least two minutes for the terminal concentrator to activate its network.

monitor:: ~ .

a. Disconnect the RJ-45 serial cable (Part no. 5121A) from Port 1 of the terminal concentrator and from the Sun workstation.

b. Reconnect the serial cable (Part no. 9524A) back into Port 1 of the terminal concentrator.



Note - At this time, the cluster configuration should be cabled as originally shipped from the factory.



7. From the Sun workstation, verify that the terminal concentrator responds to the new IP address:

# /usr/sbin/ping -I 5 192.212.87.62
PING 192.212.87.62: 56 data bytes
64 bytes from scotch (192.212.87.62): icmp_seq=0. time=1. ms
64 bytes from scotch (192.212.87.62): icmp_seq=1. time=0. ms
64 bytes from scotch (192.212.87.62): icmp_seq=2. time=0. ms
64 bytes from scotch (192.212.87.62): icmp_seq=3. time=0. ms
^C



Note - The Sun workstation must be connected to the same subnet to which the terminal concentrator was configured.



8. To access the terminal concentrator, include the default router in the terminal concentrator configuration, and telnet to the terminal concentrator:

# telnet 192.212.87.62
Trying 192.212.87.62... 
Connected to 192.212.87.62. 
Escape character is '^]'. 
cli
 
Enter Annex port name or number: cli
Annex Command Line Interpreter   *   Copyright 1991 Xylogics, Inc. 
annex: su
Password: 192.212.87.62 (password defaults to the assigned IP address)
annex#  edit config.annex   



Note - Change the default terminal concentrator password to avoid unnecessary security exposure. The terminal concentrator password matches the IP address of the terminal concentrator.



The terminal concentrator opens an editing session and displays the config.annex file for editing.

9. Type the following information into the config.annex file; replace the gateway IP address shown below with the default router (gateway) IP address obtained from your network administrator.

%gateway
net default gateway 192.212.87.248 metric 1 active 
 
Ctrl-W: save and exit  Ctrl-X: exit  Ctrl-F: page down  Ctrl-B: page up 

10. Enter the <ctrl>w command to save changes and exit the config.annex file.

11. Enable access to all ports, and reboot the terminal concentrator.

annex# admin
Annex administration MICRO-XL-UX R7.0.1, 8 ports
admin: port all
admin: set port mode slave
       You may need to reset the appropriate port, Annex subsystem 
       or reboot the Annex for changes to take effect.
admin: quit
 
annex#: boot
bootfile: <CR>
warning: <CR>
       *** Annex (192.212.87.62) shutdown message from port v1 ***
       Annex (192.212.87.62) going down IMMEDIATELY



Note - After 90 seconds, the terminal concentrator and all ports will be accessible from outside the subnet. Use the command /usr/sbin/ping -I 5 192.212.87.62 to determine when the terminal concentrator is ready to be used.





caution icon

Caution - The following steps display critical information for configuring the cluster nodes. Use the Ethernet IP Address Worksheet to collect the Ethernet addresses. Consult your system administrator to obtain the IP addresses and node names for your cluster devices.





Note - Make sure all network interfaces for node 1 and node 2 are attached to the production network. (See FIGURE 2-2, Sun Cluster Standard Interconnections and NAFO, for connection details.)



12. From the Sun workstation, access the terminal concentrator:

# telnet 192.212.87.62
Trying 192.212.87.62... 
Connected to 192.212.87.62. 
Escape character is '^]' <CR>
 
Rotaries Defined:
    cli
    
Enter Annex port name or number: 

Port designations follow:

  • Port 1 = management server
  • Port 2 = cluster node 1
  • Port 3 = cluster node 2

a. Enter the command /usr/openwin/bin/xhost 192.212.87.38 to allow your window manager to display screens from remote systems.

b. Telnet to the terminal concentrator and select Port 1. The following steps will assist you in the configuration of the management server; at the conclusion of the steps, the management server will reboot, and you will be asked a series of questions to configure the cluster.

c. To terminate the telnet session, enter the <ctrl>] character sequence.

13. Boot the management server from the OBP prompt to start the customization process.

The management server boots into the Open Boot Prom (OBP) environment. The following examples show the customization process. The sample parameters may not fit your specific environment. Note the introduction on each code box, and select the best choice for your environment.

{0} ok setenv auto-boot? true
{0} ok boot disk0
Boot device: /pci@1f,0/pci@1/scsi@8/disk@0,0  File and args: 
SunOS Release 5.8 Version Generic_108528-06 64-bit
Copyright 1983-2000 Sun Microsystems, Inc.  All rights reserved.
Hostname: unknown
metainit: unknown: there are no existing databases
 
Configuring /dev and /devices



caution icon

Caution - Because of the complex nature of this installation, all information and instructions must be followed. If problems arise, a recovery of the management server may be required.





Note - Because the management server is not provided with a monitor, it is only accessible over the network from another Sun workstation. When executing commands on the management server that require a local display, verify that the DISPLAY shell environment variable is set to the local hostname.
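For example, assuming a local workstation named admin-ws (a hypothetical name; substitute your own workstation) and a Bourne or Korn shell on the management server, the display could be redirected as follows:

On the local workstation (allow the management server to open windows):
# /usr/openwin/bin/xhost sc3sconf1-ms

On the management server:
# DISPLAY=admin-ws:0.0; export DISPLAY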



14. Choose a specific localization.

At this time, only the English and U.S.A. locales are supported. Select a supported locale.

 Select a Locale
  0. English (C - 7-bit ASCII)
  1. Canada-English (ISO8859-1)
  2. Thai
  3. U.S.A. (en_US.ISO8859-1)
  4. U.S.A. (en_US.ISO8859-15)
  5. Go Back to Previous Screen
 
Please make a choice (0 - 5), or press h or ? for help: 0

15. Select the appropriate terminal emulation:

What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT100
 3) PC Console
 4) Sun Command Tool
 5) Sun Workstation
 6) X Terminal Emulator (xterms)
 7) Other
Type the number of your choice and press Return: 2

After you select the terminal emulation, network connectivity is acknowledged:

The eri0 interface on the management server is intended for connectivity to the production network. You can obtain the management server name, IP address, and root password information from your network administrator:

Network Connectivity
--------------------------------------------------------------
Specify Yes if the system is connected to the network by one of the Solaris or vendor network/communication Ethernet cards that are supported on the Solaris CD. See your hardware documentation for the current list of supported cards.
 
Specify No if the system is connected to a network/communication card that is not supported on the Solaris CD, and follow the instructions listed under Help.
 
      Networked
      ---------
      [X] Yes
      [ ] No
<esc>2

16. Select Dynamic Host Configuration Protocol (DHCP) services.

Because the management server must have a fixed IP address and name recognized by outside clients, DHCP is not supported:

On this screen you must specify whether or not this system should use DHCP for network interface configuration.  Choose Yes if DHCP is to be used, or No if the interfaces are to be configured manually.
 
WARNING: Because this machine booted from the network, DHCP support will not be enabled, if selected, until after the system reboots.
 
      Use DHCP
      --------
      [ ] Yes
      [X] No
<esc>2

17. Select the primary network interface.

The management server configuration uses eri0 as the default primary network interface:

On this screen you must specify which of the following network
adapters is the system's primary network interface. Usually the correct choice is the lowest number. However, do not guess; ask your system administrator if you're not sure.
 
  > To make a selection, use the arrow keys to highlight the option and
    press Return to mark it [X].
 
      Primary network interface
      -------------------------
      [X] eri0
      [ ] eri1
  
<esc>2

18. Enter the name of the management server.

Consult your local network administrator to obtain the appropriate host name. The following management server name is an example.

On this screen you must enter your host name, which identifies this system on the network. The name must be unique within your domain; creating a duplicate host name will cause problems on the network after you install Solaris.
 
A host name must be at least two characters; it can contain letters, digits, and minus signs (-).
 
    Host name: sc3sconf1-ms
 
<esc>2

19. Enter the IP address of the management server.

To specify the management server IP address, obtain it from your local network administrator and enter it at the prompt.

On this screen you must enter the Internet Protocol (IP) address for this system.  It must be unique and follow your site's address conventions, or a system/network failure could result.
 
IP addresses contain four sets of numbers separated by periods (for example 129.200.9.1).
 
    IP address: 192.212.87.38
 
<esc>2

20. Deselect IPv6 support.

Currently, only version 4 of the IP software is supported. Verify that IPv6 support is disabled.

On this screen you should specify whether or not IPv6, the next
generation Internet Protocol, will be enabled on this machine.  Enabling IPv6 will have no effect if this machine is not on a network that provides IPv6 service. IPv4 service will not be affected if IPv6 is enabled.
 
> To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
 
      Enable IPv6
      -----------
      [ ] Yes
      [X] No
 
<esc>2

21. Confirm the customization information for the management server:

> Confirm the following information.  If it is correct, press F2; to change any information, press F4.
 
                    Networked: Yes
                     Use DHCP: No
    Primary network interface: eri0
                    Host name: sc3sconf1-ms
                   IP address: 192.212.87.38
                  Enable IPv6: No
<esc>2

22. Deselect and confirm Kerberos security.

Only standard UNIX security is currently supported. Verify that Kerberos security is disabled.

Specify Yes if the system will use the Kerberos security mechanism. Specify No if this system will use standard UNIX security.
 
      Configure Kerberos Security
      ---------------------------
      [ ] Yes
      [X] No
<esc>2
 
> Confirm the following information.  If it is correct, press F2;
to change any information, press F4.
 
    Configure Kerberos Security: No
<esc>2

23. Select and confirm a naming service.

Consult your network administrator to specify a naming service. No naming services are selected for the following example.



Note - The two cluster nodes will be automatically configured to not support any naming services. This default configuration avoids the need to rely on external services.



On this screen you must provide name service information.  Select the name service that will be used by this system, or None if your system will either not use a name service at all, or if it will use a name service not listed here.
 
  > To make a selection, use the arrow keys to highlight the option
    and press Return to mark it [X].
 
      Name service
      -------------
      [ ] NIS+
      [ ] NIS
      [ ] DNS
      [X] None
<esc>2
> Confirm the following information.  If it is correct, press F2;
to change any information, press F4.
 
    Name service: None
<esc>2

24. Select a subnet membership.

The default standard configuration communicates with network interfaces as part of a subnet.

On this screen you must specify whether this system is part of a
subnet.  If you specify incorrectly, the system will have problems communicating on the network after you reboot.
 
> To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
 
      System part of a subnet
      -----------------------
      [X] Yes
      [ ] No
<esc>2

25. Select a netmask.

Consult your network administrator to specify the netmask of your subnet. The following shows an example of a netmask:

On this screen you must specify the netmask of your subnet.  A default netmask is shown; do not accept the default unless you are sure it is correct for your subnet.  A netmask must contain four sets of numbers separated by periods (for example 255.255.255.0).
 
    Netmask: 255.255.255.0
<esc>2

26. Select the appropriate time zone and region.

Select the time zone and region to reflect your environment:

On this screen you must specify your default time zone.  You can
specify a time zone in three ways:  select one of the geographic regions from the list, select other - offset from GMT, or other - specify time zone file.
 
> To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
 
Regions
--------------------------------
[ ] Asia, Western
[ ] Australia / New Zealand
[ ] Canada
[ ] Europe
[ ] Mexico
[ ] South America
[X] United States
[ ] other - offset from GMT
[ ] other - specify time zone file
<esc>2
> To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
 
Time zones
------------------
[ ] Eastern
[ ] Central
[ ] Mountain
[X] Pacific
[ ] East-Indiana
[ ] Arizona
[ ] Michigan
[ ] Samoa
[ ] Alaska
[ ] Aleutian
[ ] Hawaii
<esc>2

27. Set the date and time, and confirm all information.

 > Accept the default date and time or enter
    new values.
 
  Date and time: 2000-12-21 11:47
 
    Year   (4 digits) : 2000
    Month  (1-12)     : 12
    Day    (1-31)     : 21
    Hour   (0-23)     : 11
    Minute (0-59)     : 47
<esc>2
> Confirm the following information.  If it is correct, press F2; to change any information, press F4.
 
    System part of a subnet: Yes
                    Netmask: 255.255.255.0
                  Time zone: US/Pacific
              Date and time: 2000-12-21 11:47:00
<esc>2

28. Select a secure root password.

On this screen you can create a root password. A root password can contain any number of characters, but only the first eight
characters in the password are significant. (For example, if you create `a1b2c3d4e5f6' as your root password, you can use `a1b2c3d4' to gain root access.)
 
You will be prompted to type the root password twice; for security, the password will not be displayed on the screen as you type it.
 
> If you do not want a root password, press RETURN twice.
 
Root password: abc
 
Re-enter your root password: abc

After the system reboots, the cluster environment customization starts. After the system customization is completed, the management server installs the Solstice DiskSuite software and configures itself as an installation server for the cluster nodes.



Note - Use "Invalid Cross-Reference Format" on "Invalid Cross-Reference Format" as a reference to input data for Step 29 through Step 33. The variables shown in Step 29 through Step 33 are sample node names and parameters.



29. After the system reboots, the cluster environment customization starts. When the system customization is completed, the management server completes the Solstice DiskSuite configuration and configures itself as an install server for the cluster nodes.

30. Add the router name and IP address:

Enter the Management Server's Default Router (Gateway) IP Address... 192.145.23.248

31. Add the cluster environment name:

Enter the Cluster Environment Name (node names will follow)... sc3sconf1

32. Add the terminal concentrator name and IP address:

Enter the Terminal Concentrator Name...sc3conf1-tc
Enter the Terminal Concentrator's IP Address...192.145.23.90

33. Add the cluster node names and IP addresses:

Enter the First Cluster Node's Name... sc3sconf1-n1
Enter the First Cluster Node's IP Address... 192.145.23.91
Enter the First Cluster Node's Ethernet Address... 9:8:50:bf:c5:91
Enter the Second Cluster Node's Name... sc3sconf1-n2
Enter the Second Cluster Node's IP Address... 192.145.23.92
Enter the Second Cluster Node's Ethernet Address... 8:9:10:cd:ef:b3

Your network administrator can provide the remaining parameters for your environment.

34. When prompted to confirm the variables, type y if all of the variables are correct. Type n if any of the variables are not correct, and re-enter the correct variables. Once all variables are displayed correctly, enter 99 to quit the update mode.

Option         Variable Setting
------  -------------------------------------------
    1) Management Server's Default Router= 192.212.87.248
    2) Cluster Name= sc3sconf1
    3) Terminal Server's Name= sc3sconf1-tc
    4) Terminal Server's IP Address= 192.145.23.90
    5) First Node's Name= sc3sconf1-n1
    6) First Node's IP Address= 192.145.23.91
    7) First Node's Ethernet Address= 9:8:50:bf:c5:91
    8) Second Node's Name= sc3sconf1-n2
    9) Second Node's IP Address= 192.145.23.92
   10) Second Node's Ethernet Address= 8:9:10:cd:ef:b3
   99) quit update mode....
 
  Are all variables correctly set y/n? y


procedure icon  Starting the Cluster Console

1. Start cluster console windows for both cluster nodes by entering the following command on the management server:

# /opt/SUNWcluster/bin/ccp sc3sconf1



Note - Replace the sc3sconf1 variable name displayed above with your cluster name. When executing the ccp(1) command remotely, ensure that the DISPLAY shell environment variable is set to the local hostname.



When the /opt/SUNWcluster/bin/ccp command is executed, the Cluster Control Panel window displays (see FIGURE 2-5, Cluster Control Panel Window).

 

FIGURE 2-5 Cluster Control Panel Window

Screen showing the Cluster Control panel window.

2. In the Cluster Control Panel window, double-click the Cluster Console (console mode) icon to display a Cluster Console window for each cluster node (see FIGURE 2-6).



Note - Before you use an editor in a Cluster Console window, verify that the TERM shell environment value is set and exported to a value of vt220. FIGURE 2-7 shows the terminal emulation in the Cluster Console window.
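For example, in a Bourne or Korn shell on each cluster node, the value can be set and exported as follows:

# TERM=vt220; export TERM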



To issue a Stop-A command to the cluster nodes and to access the OpenBoot™ PROM (OBP) prompt, position the cursor in the Cluster Console window, and enter the <ctrl>] character sequence. This character sequence forces access to the telnet prompt. Enter the Stop-A command, as follows:

telnet> send brk
ok>

 FIGURE 2-6 Cluster Nodes Console Windows

Screen showing the Cluster Nodes Console windows.

3. To enter text into both node windows simultaneously, click the cursor in the Cluster Console window and enter the text.

The text does not display in the Cluster Console window, but it displays in both node windows. For example, the /etc/hosts file can be edited on both cluster nodes simultaneously. This ensures that both nodes maintain identical file modifications.
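For example, using the sample names and addresses from this chapter (substitute the values recorded in your Ethernet IP Address Worksheet), the following /etc/hosts entries could be added to both nodes in a single editing session:

192.145.23.90   sc3sconf1-tc
192.145.23.91   sc3sconf1-n1
192.145.23.92   sc3sconf1-n2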



Note - The console windows for both cluster nodes are grouped (the three windows move in unison--FIGURE 2-6). To ungroup the Cluster Console window from the cluster node console windows, select Options from the Hosts menu (FIGURE 2-7) and deselect the Group Term Windows checkbox.



 

FIGURE 2-7 Cluster Console Window

Screen showing the Cluster Console Options drop down menu.

procedure icon  Installing the Software Stack on Both Cluster Nodes

1. Use the ccp(1M) Cluster Console window to enter the following command into both nodes simultaneously:

{0} ok setenv auto-boot? true
{0} ok boot net - install



Note - You must include spaces around the dash (-) character in the "boot net - install" string.



The Solaris operating environment, Solstice DiskSuite, and Sun Cluster 3.0 are automatically installed. All patches are applied and system files are configured to produce a basic cluster environment.

See Appendix B for sample output of the automatic installation of the first cluster node.
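When both nodes have completed the installation and joined the cluster, membership can be verified from either node. The following is a minimal check using the Sun Cluster 3.0 scstat command:

# /usr/cluster/bin/scstat -n     (display the status of both cluster nodes)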



Note - Disregard error messages received during the initial cluster installation. Do not attempt to reconfigure the nodes.



2. Log into each cluster node as a superuser (password is abc) and change the default password to a secure password choice:

# passwd
passwd: Changing password for root
New password: secure-password-choice
Re-enter new password: secure-password-choice

3. Configure the Sun StorEdge D1000 shared disk storage using the Solstice DiskSuite software.

Solstice DiskSuite configuration involves creating disksets, metadevices, and file systems. (Refer to the included Sun Cluster 3.0 documentation.)
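As an illustration only, the following sketch shows the general Solstice DiskSuite commands for creating a diskset, a mirrored metadevice, and a file system on the shared storage. The diskset name nfsds, the metadevice names, and the DID device numbers (d4, d13) are hypothetical; use the device names reported by scdidadm -L on your nodes, and follow the Sun Cluster 3.0 documentation for the complete procedure:

# scdidadm -L                                              (list the DID devices for the shared disks)
# metaset -s nfsds -a -h sc3sconf1-n1 sc3sconf1-n2         (create the diskset with both nodes as hosts)
# metaset -s nfsds -a /dev/did/rdsk/d4 /dev/did/rdsk/d13   (add one disk from each D1000 array)
# metainit -s nfsds d10 1 1 /dev/did/rdsk/d4s0             (first submirror)
# metainit -s nfsds d20 1 1 /dev/did/rdsk/d13s0            (second submirror)
# metainit -s nfsds d0 -m d10                              (create the mirror)
# metattach -s nfsds d0 d20                                (attach the second submirror)
# newfs /dev/md/nfsds/rdsk/d0                              (create a UFS file system on the mirror)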

4. Select a quorum device to satisfy failure fencing requirements for the cluster. (Refer to the included Sun Cluster 3.0 documentation.)
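As a sketch only, a quorum device can be added with the scconf command (or interactively with scsetup). The DID device d12 below is a hypothetical example and must be a shared disk that both nodes can access:

# /usr/cluster/bin/scconf -a -q globaldev=d12
# /usr/cluster/bin/scstat -q     (verify the quorum configuration)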

5. Install and configure the highly available application for the cluster environment. (Refer to the Sun Cluster 3.0 documentation.)

Establish resource groups, logical hosts, and data services to enable the required application under the Sun Cluster 3.0 infrastructure. (Refer to the Sun Cluster 3.0 documentation.)

The path to the data CD is /net/{management server}/SOFTWARE/SC3.0-Build92DataServices
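As an illustration only, the sketch below shows the general Sun Cluster 3.0 commands for registering a resource type and creating a failover resource group for an HA-NFS style service. The names nfs-rg, nfs-lh, and nfs-res, and the use of SUNW.nfs, are hypothetical examples; refer to the data service documentation for the actual resource types and the required properties for your application:

# scrgadm -a -t SUNW.nfs                                (register the resource type)
# scrgadm -a -g nfs-rg -h sc3sconf1-n1,sc3sconf1-n2     (create the failover resource group)
# scrgadm -a -L -g nfs-rg -l nfs-lh                     (add a logical host name resource)
# scrgadm -a -j nfs-res -g nfs-rg -t SUNW.nfs           (add the data service resource)
# scswitch -Z -g nfs-rg                                 (bring the resource group online)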

Your customized Cluster Platform 220/1000 configuration is now complete.


Cluster Platform Recovery

The recovery CDs enable you to replace the factory-installed Cluster
Platform 220/1000 software environment on the management server in the event of a system disk failure.



caution icon

Caution - Initiate a recovery only at the direction of technical support.



These CDs are intended only for recovering from a disaster. They are not needed for initial installation and configuration of the Cluster Platform 220/1000 management server.

Before You Begin Recovery

Before you attempt to restore the management software environment, you must know your system configuration information and the state of your system backups. See TABLE 2-1 for information on your customized configuration.

Your system configuration information must include the following:

    • System name
    • IP address
    • Time zone
    • Name service
    • Subnet information
    • Ethernet addresses


Note - Because the recovery results in a generic, unconfigured system, you must restore all site-specific configuration files from backup. If your site uses special backup software such as VERITAS NetBackup, you must reinstall and configure that software.



Recovery CD-ROMs

  • CD0: Cluster Platforms Recovery Environment Boot CD
Contains a bootable copy of the Solaris 8 10/00 Operating Environment, configured to execute recovery software
  • CD1: Cluster Platforms Recovery Operating System CD
Installs software for the Solaris operating environment, Solstice DiskSuite 4.2.1, Sun Cluster 3.0, and recommended patches.
  • CD2: Cluster Platforms Node Software Operating System CD
Contains software for the cluster environment to JumpStart the nodes and perform mirroring and hot sparing. Installs the Solaris operating environment and recommended patches onto each node.
Also contains information to build the management server and loads the mirroring script.

procedure icon  Installing the Recovery CD

1. Access the management server console through the terminal concentrator:

# telnet sc3sconf1-tc
Trying 192.212.87.62... 
Connected to sc3sconf1-tc. 
Escape character is '^]' <CR>
 
Rotaries Defined:
    cli
    
Enter Annex port name or number: 1

2. To issue a Stop-A command to the management server and to access the OpenBoot PROM (OBP) prompt, position the cursor in the console window and enter the <ctrl>] character sequence.

3. This character sequence forces access to the telnet prompt. Enter the Stop-A command, as follows:

telnet> send brk

4. Boot the system from the CD-ROM:

ok boot cdrom

The system boots from the CD-ROM and prompts you for the mini-root location
(a minimized version of the Solaris operating environment). This procedure takes approximately 15 minutes.

Standard Cluster Environment Recovery Utility
 
 
Starting...
|
Searching the systems hard disks for a 
location to place the Standard Cluster Recovery Environment.
 
|
You must select a disk to become your system's new boot
disk.
Please select a disk for restoring the Standard Cluster
software:
 
  1  c1t0d0s0     
  2  c1t1d0s0     
 
Enter selection [?,??,q]: 1
You have selected c1t0d0s0 as the disk on which to restore the Standard Cluster software.
Please remember your selection.
 
 
Do you wish to format and erase c1t0d0s0?   [y,n,?,q] y
 
CONFIRM: c1t0d0s0 WILL BE FORMATTED AND ERASED.  CONTINUE?  [y,n,?,q] y
Formatting c1t0d0s0 for restore...
 
fmthard:  New volume table of contents now in place.
 
Disk c1t0d0s0 now is formatted to Standard Cluster specifications.
The Recovery (Disk) Miniroot will now be installed on the swap slice of your choice, and the system will be rebooted.
When the Recovery Miniroot prompts you for a disk on which to install the Standard Cluster software, remember to choose c1t0d0s0.

 

Looking for swap slice on which to install the Standard Cluster Recovery Environment...
 
The Recovery Utility will use the disk slice, c1t0d0s1, labeled as swap.
 
WARNING: All information on this disk will be lost
 
Can the Recovery Utility use this slice  [y,n,?] y
 
The recover utility will use disk slice, /dev/dsk/c1t0d0s1.
After files are copied, the system will automatically reboot, and the 
installation will continue.
Please Wait...
 
Copying mini-root to local disk....done.
 
 
Copying platform specific files....done.
 
Preparing to reboot and continue installation.
Rebooting to continue the installation. 



Note - Messages which display when a suitable disk slice is found may vary, depending upon the state of your disks. Disregard any messages and select any suitable disk slice; the mini-root is only used during system boot.



5. Select a CD-ROM drive from the menu.

Once the Cluster Platform 220/1000 recovery utility has placed a copy of the Solaris operating environment onto a suitable disk slice, the system reboots. You are prompted to specify the CD-ROM drive. Completing this process takes approximately 15 minutes.

Standard Cluster Environment Recovery Utility V. 1.0 
 
Please identify the CD-ROM drive:
 
  1  c0t0d0s0     
 
Enter selection [?,??,q]: 1
 
Please select a disk for restore:
 
  1  c1t0d0s0     
  2  c1t1d0s0     
 
Enter selection [?,??,q]: 1
You have chosen c1t0d0s0 as your new boot disk.
 
This disk must be formatted before use.
 
 
Do you wish to format and erase c1t0d0s0?  [y,n,?,q] y
 
CONFIRM: c1t0d0s0 WILL BE FORMATTED AND ERASED.  CONTINUE?  [y,n,?,q] y
Formatting c1t0d0s0 for restore...
 
newfs: /dev/rdsk/c1t0d0s0 last mounted as /a
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
Cylinder groups must have a multiple of 2 cylinders with the given parameters
Rounded cgsize up to 230
/dev/rdsk/c1t0d0s0:     26971488 sectors in 5724 cylinders of 19 tracks, 248 sectors
        13169.7MB in 261 cyl groups (22 c/g, 50.62MB/g, 6208 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 103952, 207872, 311792, 415712, 519632, 623552, 727472, 831392, 935312,

6. Install the Solaris operating environment software.

You are prompted to remove CD0 and mount CD1 on the CD-ROM drive. After
CD1 is mounted, press the Return key. The Solaris operating environment files are copied from CD1 onto the management server boot disk. This process takes approximately 20 minutes.

Preparing to recover Management Server. 
Please place Recovery CD #1 in the CD-ROM drive.  Press <Return> when mounted.
  
 Restoring files from CD 1...
 
Installing boot block on c1t0d0s0.
 
Root environment restore complete. 

7. Install the second data CD.

When all files are copied from the first data CD, you are prompted to remove CD 1 and mount CD 2 on the CD-ROM drive. After CD 2 is mounted, press the Return key. The software and patch files are copied from CD 2 onto the management server boot disk. When all files are copied from both CDs, the system automatically shuts down. You must reboot the system. This process takes approximately 20 minutes.

Please place Recovery CD #2 in the CD-ROM drive.  Press <Return> when mounted. 
 
 
Restoring files from CD 2...
 
Cluster environment restore complete.
 
Management Server restore complete; now shutting down system.
 
 
INIT: New run level: 0
The system is coming down.  Please wait.
System services are now being stopped.
umount: /dev busy
The system is down.
syncing file systems... done
Program terminated

Completing the Recovery Process

Once the management server recovery software is loaded, you must configure the system to match your environment. See "Customizing the Cluster Configuration" to customize your cluster environment. If the recovery process involves the replacement of cluster nodes, refer to the Ethernet IP Address Worksheet to verify that the first cluster node's FC-AL is properly set up.