Preface
This guide provides an overview of the hardware and software in the Cluster Platform 15K/9960 system, along with procedures for installing and configuring them.
To use this guide, you must have prior experience installing the Solaris operating environment and must have Sun service training.
This guide is organized in the following manner:
Chapter 1 contains an introduction to the hardware and software in the Cluster Platform 15K/9960 system.
Chapter 2 contains instructions on how to set up the hardware.
Chapter 3 contains procedures on how to access the terminal concentrator, set up the management server, and how to configure the software.
Appendix A covers laptop settings for accessing the terminal concentrator from a laptop.
Appendix B contains descriptions of how the expansion cabinet was cabled at the factory.
This section contains a task map for installing and configuring the Cluster Platform 15K/9960 system. You must follow this task map to ensure a successful installation.
1. Read all of the Cluster Platform 15K/9960 documentation, Sun Fire 15K documentation, and Sun StorEdge 9960 documentation that was sent with the system.
2. Unpack all of the system crates.
3. Obtain the domain A Ethernet address for each cluster node (from the label on the server frame), and record that information in the Cluster Platform 15K/9960 System Site Planning Guide.
4. Verify compliance with all requirements specified in the Cluster Platform 15K/9960 System Site Planning Guide.
5. Cable the system components (consoles, disks, network interfaces), and make sure that the power cables are attached to proper power sources.
6. Power on the expansion cabinet.
7. Power on the Sun StorEdge 9960.
8. Create and configure the LUNs.
The LUNs must be bound to the appropriate FCAL links that are attached to the cluster nodes and must be accessible by both cluster nodes (that is, the shared storage).
9. Connect and configure the terminal concentrator by using a Sun workstation or a laptop PC.
10. Start a console session for the system controllers (that is, SC0 and SC1) for both cluster nodes by using the telnet(1) command on specific ports of the terminal concentrator.
To access the system controller consoles, use one of the following commands:
# telnet TC 2002 (SC0 on node 1)
# telnet TC 2003 (SC1 on node 1)
# telnet TC 2004 (SC0 on node 2)
# telnet TC 2005 (SC1 on node 2)
11. Power on the cluster nodes.
Refer to the Sun Fire 15K Installation and De-Installation Guide for instructions on how to power on the cluster nodes.
12. Use the sys-config(1M) command to configure the system controllers with the configuration information recorded in the Cluster Platform 15K/9960 System Site Planning Guide.
13. After the system controllers boot, verify or reconfigure the SMS software on each cluster node.
The following items are pre-configured at the factory and must remain the same:
The password is temporarily set to abc and needs to be reset after the first login.
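For example, after the first login on each system controller, you can reset the password with the standard passwd(1) command (a minimal sketch):
# passwd (follow the prompts to set a new password)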
14. Verify domain-a status and hardware allocation for each cluster node.
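For example, from an SC0 console session on each cluster node, the SMS showplatform(1M) and showboards(1M) commands display domain status and board assignments (a minimal sketch, assuming the domain tag is a):
# showplatform -d a (display the status of domain a)
# showboards -d a (display the boards assigned to domain a)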
15. Connect to the management console using the terminal concentrator (see To Configure the Terminal Concentrator).
16. Invoke the boot(1M) command at the OBP prompt on the management server (see To Configure the Terminal Concentrator).
17. Configure the management server (see To Configure the Management Server).
18. Use the configuration information in the Cluster Platform 15K/9960 System Site Planning Guide to configure the cluster environment on the management server (see To Customize the Cluster Environment).
19. Log in to the management server as the superuser, and use the ccp(1M) command to display the node console windows (see To Install the Cluster Platform 15K/9960 Software on the Cluster Nodes).
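For example, the following starts the Cluster Control Panel (a minimal sketch; clustername is a placeholder for the cluster name you configured):
# /opt/SUNWcluster/bin/ccp clustername &
From the panel, select cconsole to open a console window for each cluster node.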
20. Using the cconsole window, log in to SC0 on each cluster node by using the domain-a user ID.
21. Power on domain-a on each cluster node by using the setkeyswitch(1M) command.
22. Using the cconsole window, produce a domain-a console for each cluster node by using the console command.
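For example, from the SC0 session on each cluster node, logged in as the domain-a user (a minimal sketch, assuming the domain tag is a):
# setkeyswitch -d a on (power on domain a)
# console -d a (open the domain a console)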
23. Using the cconsole window, execute the boot net - install command at the OBP prompt on both cluster nodes.
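For example, at the OpenBoot PROM prompt in each domain console:
ok boot net - install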
24. If you chose Solstice DiskSuite 4.2.1 as the volume manager, you must select a quorum device from shared storage (that is, the LUNs) by using the scsetup(1M) command.
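For example, you can identify a shared DID device and then run the interactive scsetup(1M) utility (a minimal sketch; d4 is a placeholder device name):
# scdidadm -L (list DID devices and note one visible from both nodes, for example d4)
# scsetup (select the quorum option and specify the chosen device)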
25. Configure the NAFO groups using the pnmset(1M) command.
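For example, the following creates a NAFO group from two network adapters (a minimal sketch; nafo0, qfe0, and qfe1 are placeholders for your group and adapter names):
# pnmset -c nafo0 -o create qfe0 qfe1 (create NAFO group nafo0)
# pnmstat -l (verify the NAFO group status)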
26. Install data services by using one of the data service repositories on the management server.
You can use either of the following repositories:
/net/management_server_name-admin/jumpstart/Packages/SC3.0u1/scdataservices_3_0_u1
/net/management_server_name-admin/jumpstart/Packages/SC3.0u2/scdataservices_3_0_u2
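For example, you can add a data service package from a repository with the pkgadd(1M) command (a minimal sketch; package is a placeholder for the data service package name):
# cd /net/management_server_name-admin/jumpstart/Packages/SC3.0u1/scdataservices_3_0_u1
# pkgadd -d . package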
This document might not contain information on basic UNIX® commands and procedures such as shutting down the system, booting the system, and configuring devices.
See one or more of the following for this information:
- Solaris Handbook for Sun Peripherals
- AnswerBook2 online documentation for the Solaris operating environment
- Other software documentation that you received with your system
This guide uses the following typographic conventions:
AaBbCc123 (monospace): The names of commands, files, and directories; on-screen computer output.
AaBbCc123 (bold monospace): What you type, when contrasted with on-screen computer output.
AaBbCc123 (italic): Book titles, new words or terms, and words to be emphasized. For example: Read Chapter 6 in the User's Guide.
A broad selection of Sun system documentation is located at:
http://www.sun.com/products-n-solutions/hardware/docs
A complete set of Solaris documentation and many other titles are located at:
http://docs.sun.com
Fatbrain.com, an Internet professional bookstore, stocks select product documentation from Sun Microsystems, Inc.
For a list of documents and how to order them, visit the Sun Documentation Center on Fatbrain.com at:
http://www.fatbrain.com/documentation/sun
Sun is interested in improving its documentation and welcomes your comments and suggestions. You can email your comments to Sun at:
docfeedback@sun.com
Please include the part number (816-3538-10) of your document in the subject line of your email.
Copyright © 2002, Sun Microsystems, Inc. All rights reserved.