CHAPTER 2

Configuring Your System

This chapter assumes you have already installed the Solaris Operating System and the required patches on your Netra CT 820 node boards.

You configure the Netra CT 820 system primarily through the active distributed management card command-line interface (CLI). The active distributed management card CLI enables system-level configuration, administration, and management covering the node boards, the switching fabric boards, the distributed management cards, the power supplies, and the fan trays. The distributed management card CLI can be used both locally and remotely.

You configure the distributed management cards first, then the node boards, then the system-wide applications.

This chapter includes the following sections:

Accessing the Distributed Management Cards
Specifying Netra CT Server FRU ID Information
Displaying Netra CT Server FRU ID Information
Configuring a Chassis Slot for a Board
Configuring a Node Board as a Boot Server
Configuring the System Management Network
Specifying Other FRU ID Information
Configuring the Distributed Management Cards for Failover
Setting the Date and Time on the Distributed Management Cards
Configuring the Node Boards
Enabling the Managed Object Hierarchy Application
Enabling the Processor Management Service Application

Accessing the Distributed Management Cards

When you initially access either distributed management card, you must do so over the serial port (console), using an ASCII terminal or the Tip program. Log in with the default user account netract and the password suncli1. This account has full authorization (permissions) and cannot be deleted; however, you should change its password for security purposes before your Netra CT 820 server is operational.

The following sections provide information on configuring the distributed management cards' Ethernet ports and setting up user accounts and passwords using the distributed management card CLI. For more information on using the distributed management card CLI, refer to Chapter 3.

Each distributed management card supports 22 sessions (Tip and Telnet connections) at once. The active distributed management card is identified by the prompt hostname (Active slot#) cli> and the standby distributed management card is identified by the prompt hostname (Standby slot#) cli>.



Note - The term distributed management card as used in this manual refers to either the active or standby distributed management card unless otherwise specified. In this manual, the prompt for both is shortened to hostname cli>.




Configuring the Distributed Management Cards' Ethernet Ports

Each distributed management card has one external Ethernet port on the rear transition card, labeled SRVC LAN, and one internal Ethernet port. If you configure these ports, you can access the distributed management cards using a Telnet connection to the external Ethernet port or using a Telnet connection through the switching fabric board to the internal Ethernet port.

To configure the Ethernet ports, you must be logged in to the distributed management card with a user account that has full permissions. You configure the ports with CLI commands, and then reset the distributed management card for the changes to take effect. Use the following procedure for each distributed management card.



Note - The external Ethernet interface on the distributed management card and the external Ethernet interface on the switching fabric board must be connected to different subnets. If they are configured on the same subnet, ARP messages are displayed on the distributed management card console.




To Configure the Distributed Management Cards' Ethernet Ports

1. Log in to the distributed management card.

2. Set the IP mode:

hostname cli> setipmode -b port_num rarp|config|none


where port_num is 1 for the external Ethernet port or 2 for the internal Ethernet port. Choose the IP mode according to the services available in the network (rarp, config, or none). The default is none.

If you set the IP mode to rarp, skip to Step 5.

3. Set the IP address:

hostname cli> setipaddr -b port_num addr


where port_num is 1 for the external Ethernet port or 2 for the internal Ethernet port. Set the IP address of the distributed management card. The default is 0.0.0.0. This command is only used if the ipmode is set to config.

4. Set the IP netmask:

hostname cli> setipnetmask -b port_num addr


where port_num is 1 for the external Ethernet port or 2 for the internal Ethernet port. Set the IP netmask of the distributed management card. The default is 255.255.255.0. This command is only used if the ipmode is set to config.

5. Set the IP gateway:

hostname cli> setipgateway addr


Set the IP gateway of the distributed management card to access the system from outside the subnet. The default is 0.0.0.0.

6. Reset the distributed management card:

hostname cli> reset dmc

 
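For example, the following hypothetical session configures the external Ethernet port (port 1) with a fixed address; the addresses shown are placeholders for values appropriate to your network:

hostname cli> setipmode -b 1 config
hostname cli> setipaddr -b 1 192.168.20.100
hostname cli> setipnetmask -b 1 255.255.255.0
hostname cli> setipgateway 192.168.20.1
hostname cli> reset dmc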


Setting Up User Accounts on the Distributed Management Card

User accounts are set up using the distributed management card CLI. The default user account is netract and the password is suncli1. This account has full authorization (permissions) and cannot be deleted; however, you should change its password for security purposes before your Netra CT 820 server is operational.

User information is entered on the active distributed management card, and immediately mirrored, or shared, on the standby distributed management card. The distributed management card supports 16 accounts with passwords.


To Set Up a User Account

1. Log in to the active distributed management card.

2. Add a user:

hostname cli> useradd username

3. Add a password for that user:

hostname cli> userpassword username

By default, new accounts are created with read-only permission. Permission levels can be changed using the userperm command. Refer to CLI Commands for more information about permissions and the userperm command.
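For example, to add a hypothetical user account named admin2 and then set its password, enter the following; you can then change the account's permission level with the userperm command:

hostname cli> useradd admin2
hostname cli> userpassword admin2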

Username Restrictions

The username field has a maximum length of 16 characters. It must contain at least one lowercase alphabetic character, and the first character must be alphabetic.

Valid characters for username include:

Password Restrictions

Passwords have the following restrictions:


Specifying Netra CT Server FRU ID Information

A field-replaceable unit (FRU) is a module or component that can typically be replaced in its entirety as part of a field service repair operation.

The Netra CT system FRUs include:

All FRUs except power supplies contain FRU ID (identification) information that includes FRU manufacturing and configuration data. This information can be displayed through the distributed management card CLI (see TABLE 2-2). The Netra CT 820 system supports two FRU ID formats:

In addition, you can enter certain FRU ID information through the active distributed management card CLI, which is stored in the midplane. Note that you can also enter FRU ID information through the MOH application; refer to the Netra CT Server Developer's Guide for instructions. FRU ID information includes:

Some of this information is used by the MOH application to audit board insertions and prevent misconfigurations, and to display information; some is used by the system management network.

The format of the information to be specified is:

hostname cli> setfru fru_name  instance  fru_property  value

The FRU instance is a logical number; it matches the slot number only for the slot FRU name. The FRU property is case-insensitive.

TABLE 2-1 shows the FRU ID information that can be specified with the CLI setfru command.

TABLE 2-1 FRU ID Information Specified Using the setfru Command

FRU Name | Instance | FRU Property | Value (each entry is followed by its description)

midplane | 1 | SysmgtbusIPSubnet | IP subnet address (hexadecimal)
Specify the IP subnet address for the system management network. The default is 0xc0a80d00 (192.168.13.0).

midplane | 1 | SysmgtbusIPSubnetMask | IP subnet mask (hexadecimal)
Specify the IP subnet mask for the system management network. The default is 0xffffffe0 (255.255.255.224).

midplane | 1 | Location | text description
A description of the location (for example, the number on the chassis label) of the Netra CT system. This description is used in the MOH application. The text can be up to 80 characters in length.

midplane | 1 | User_Label | text description
Any customer-supplied information. The text can be up to 10 characters in length.

dmc | 1 or 2 | Cust_Data | text description
Any customer-supplied information. The text can be up to 80 characters in length. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

slot | 2 to 21 | Acceptable_Fru_Types | vendor:partnumber
First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify the allowable plug-in board(s) for that slot, where the value is the vendor name and part number (separated by a colon) of the board. Use the showfru command to display this information. Multiple boards may be specified, separated by a semicolon (;). The default is to power on all Sun supported cPSB-only boards.

slot | 3 to 20 | Acceptable_Fru_Types | nonsun:picmg2.16
This information applies to third-party node boards only. First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify the value nonsun:picmg2.16, which indicates that a third-party node board is allowed in this slot.

slot | 3 to 20 | Boot_Devices | boot_device_list
First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify the alias(es) listing the devices and/or full device path names the board in this slot will boot from. The boot_device_list can be up to 16 characters in length. When the board in this slot is powered up, this information overwrites the entry in the OpenBoot PROM boot-device NVRAM configuration variable. Specifying "" (the null string) defaults to the OpenBoot PROM NVRAM setting.

slot | 3 to 20; all | Boot_Mask | true or false
First, specify the chassis slot number to be configured (slots are numbered starting from the left) or all to refer to all configurable slots. Second, specify whether the board in this slot is a boot server for the system. The default is false, which means that the board is not a boot server. Refer to Configuring a Node Board as a Boot Server for instructions on setting the boot mask for a slot.

slot | 3 to 20 | Cust_Data | text description
First, specify the chassis slot number to be configured (slots are numbered starting from the left). Second, specify any customer-supplied information. The text can be up to 80 characters in length.


Changes to FRU ID fields through the CLI setfru command require you to completely power the system off and on for the changes to take effect. It is recommended that you enter all necessary FRU ID information, then power the system off and on.


Displaying Netra CT Server FRU ID Information

FRU ID information entered during the manufacturing process and through the active distributed management card CLI setfru command can be displayed using the showfru command.

TABLE 2-2 shows the FRU ID information that can be displayed with the CLI showfru command. Use the FRU property to specify the information you want; the FRU property is case-insensitive.

TABLE 2-2 FRU ID Information Displayed Using the showfru Command

FRU Name | Instance | FRU Property | Description

midplane | 1 | Sun_Part_No | Display the part number for the midplane.
midplane | 1 | Sun_Serial_No | Display the serial number for the midplane.
midplane | 1 | SysmgtbusIPSubnet | Display the system management network IP subnet address in hexadecimal format for this system.
midplane | 1 | SysmgtbusIPSubnetMask | Display the system management network IP subnet mask in hexadecimal format for this system.
midplane | 1 | Vendor_Name | Display the vendor name for the midplane.
midplane | 1 | Fru_Shortname | Display the FRU short name for the midplane.
midplane | 1 | Location | Display any customer-supplied text specified for the Location of this system.
midplane | 1 | User_Label | Display any customer-supplied text for this field.
dmc | 1 or 2 | Sun_Part_No | Display the part number for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
dmc | 1 or 2 | Sun_Serial_No | Display the serial number for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
dmc | 1 or 2 | Vendor_Name | Display the vendor name for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
dmc | 1 or 2 | Fru_Shortname | Display the FRU short name for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
dmc | 1 or 2 | Initial_HW_Dash_Level | Display the initial hardware dash level of the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
dmc | 1 or 2 | Initial_HW_Rev_Level | Display the initial hardware revision level of the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
dmc | 1 or 2 | Cust_Data | Display any customer-supplied text for this field for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.
slot | 3 to 20 | Sun_Part_No | Display the part number for the board in a particular slot.
slot | 3 to 20 | Part_No | Display the part number for the third-party node board in a particular slot.
slot | 3 to 20 | Sun_Serial_No | Display the serial number for the board in a particular slot.
slot | 3 to 20 | Serial_No | Display the serial number for the third-party node board in a particular slot.
slot | 2 to 21 | Acceptable_Fru_Types | Display the allowable plug-in boards for a particular slot.
slot | 3 to 20 | Boot_Devices | Display the boot devices for a particular slot.
slot | 3 to 20 | Boot_Mask | Display whether or not the board in a particular slot is a boot server for the system.
slot | 3 to 20 | Vendor_Name | Display the vendor name for the board in a particular slot.
slot | 3 to 20 | Fru_Shortname | Display the FRU short name for the board in a particular slot.
slot | 3 to 20 | Initial_HW_Dash_Level | Display the initial hardware dash level of the board in a particular slot.
slot | 3 to 20 | Initial_HW_Rev_Level | Display the initial hardware revision level of the board in a particular slot.
slot | 3 to 20 | Cust_Data | Display any customer-supplied text for this field for the board in a particular slot.
switch | 1 or 2 | Sun_Part_No | Display the part number for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.
switch | 1 or 2 | Sun_Serial_No | Display the serial number for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.
switch | 1 or 2 | Vendor_Name | Display the vendor name for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.
switch | 1 or 2 | Fru_Shortname | Display the FRU short name for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.
fantray | 1 to 3 | Sun_Part_No | Display the part number for the specified fan tray.
fantray | 1 to 3 | Sun_Serial_No | Display the serial number for the specified fan tray.
fantray | 1 to 3 | Vendor_Name | Display the vendor name for the specified fan tray.
fantray | 1 to 3 | Fru_Shortname | Display the FRU short name for the specified fan tray.



To Display FRU ID Information

1. Log in to the distributed management card.

2. Enter the showfru command:

hostname cli> showfru fru_name  instance  fru_property 

Refer to TABLE 2-2 for allowable information for each variable. For example, if you want to display the part number FRU ID information for fan tray 1, enter the following:

hostname cli> showfru fantray 1 Sun_Part_No

Use the FRU target "slot" to display information for the node boards. For example, to display part number FRU ID information for a board in slot 8, enter the following:

hostname cli> showfru slot 8 Sun_Part_No

The next several sections describe the configurations you can set by entering FRU ID information.


Configuring a Chassis Slot for a Board

You can specify the type of board that is allowed in a given chassis slot using the active distributed management card CLI. The slot usage information is used by the distributed management card software to audit board insertions and prevent misconfigurations. You can also specify the boot device for the slot, that is, the path to the device the board in the slot boots from. When the board is powered on, the FRU boot device information overwrites the entry in the OpenBoot PROM boot-device NVRAM configuration variable on that board. The chassis slot information can be changed at any time using the active distributed management card CLI.

By default, slots are configured to accept Sun supported cPSB-only board FRUs unless you specifically set an allowable plug-in for a specific slot. The exceptions are: for a Netra CT 820 server, the distributed management cards must be in slots 1A and 1B, and the switching fabric boards must be in slots 2 and 21.

To set allowable plug-ins for a particular slot, you need the vendor name and the part number of the board. This FRU ID information can be displayed using the CLI showfru command. See Displaying Netra CT Server FRU ID Information for more information.


To Configure a Chassis Slot for a Board

1. Log in to the active distributed management card.

2. Set the acceptable FRUs for the slot:

hostname cli> setfru fru_name  instance  fru_property  value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set chassis slot 5 to allow only a Sun Microsystems (vendor 003E) particular CPU board (part number 595-5769-03), enter the following:

hostname cli> setfru slot 5 Acceptable_Fru_Types 003E:595-5769-03

Multiple boards can be specified for one slot. Separate the boards with a semi-colon. You can also use the asterisk (*) as a wild card in the part number to allow multiple boards. For example, if you want to set chassis slot 4 to allow only boards from three particular vendors, with multiple board part numbers from one vendor, enter the following:

hostname cli> setfru slot 4 Acceptable_Fru_Types 003E:*;0004:1234-5678-1;0001:8796541-02

3. Set the boot device for the slot:

hostname cli> setfru fru_name  instance  fru_property  value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set chassis slot 5 to boot from a device on the network, enter the following:

hostname cli> setfru slot 5 Boot_Devices boot_device_list

where boot_device_list is the alias or aliases specifying the boot devices, for example, disk net. The boot_device_list is limited to 25 bytes.

4. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.


Configuring a Node Board as a Boot Server

You can configure a node board (Sun supported cPSB-only boards) to be a boot server for the Netra CT 820 system. To do this, you use the Boot_Mask field in the midplane FRU ID. When the system is powered on, the distributed management card looks at the Boot_Mask field; if a boot server has been specified, the distributed management card powers on that node board first. There can be any number of boot servers per Netra CT 820 system. If multiple boot servers are specified, all boot servers are powered on simultaneously.


To Configure a Node Board as a Boot Server

1. Log in to the active distributed management card.

2. Specify which slot contains a node board boot server by setting the Boot_Mask:

hostname cli> setfru fru_name  instance  fru_property  value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to specify chassis slot 3 as a node board boot server, enter the following:

hostname cli> setfru slot 3 Boot_Mask true

To specify all slots (3 to 20) as boot servers, enter the following:

hostname cli> setfru slot all Boot_Mask true

To clear all slots (3 to 20) as boot servers, enter the following:

hostname cli> setfru slot all Boot_Mask false

3. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.


Configuring the System Management Network

The system management network provides a communication channel over the midplane. It is used to communicate between the distributed management cards, the node boards, and the switching fabric boards. FIGURE 2-1 shows the physical connections between boards over the midplane in the Netra CT 820 system.

The network appears as any other generic Ethernet port in the Solaris Operating System, and is configured by default on Solaris OS and on the distributed management cards. The system management network is used by the applications and features, such as MOH, PMS, and console connections from the distributed management cards to node boards.

  FIGURE 2-1 System Management Network Physical Connectivity over the cPSB Bus

Diagram showing cPSB physical connections among the distributed management cards, switching fabric boards, and a node board.

The system management network consists of two virtual local area networks (VLANs) running over the two internal Ethernet interfaces, and a logical Carrier Grade Transport Protocol (CGTP) interface. The two VLANs and the logical CGTP interface allow distributed management card and switching fabric board redundancy. FIGURE 2-2 shows the VLAN traffic over the physical connectivity shown in FIGURE 2-1.

  FIGURE 2-2 System Management Network VLAN Traffic over the cPSB Bus

Diagram showing VLAN 1 and VLAN 2 traffic over the cPSB bus.

On each node board, internal Ethernet ports dmfe33000 (VLAN tag 33) and dmfe44001 (VLAN tag 44) use CGTP to provide redundancy in case of failure of one of the ports or a failure of the switching fabric board connected to one of the ports. The interfaces are configured on each node board using a Solaris startup script. To verify that CGTP is installed on each node board, use the pkginfo command:

# pkginfo -l SUNWnhtp8 SUNWnhtu8

System management network traffic on the VLANs must always be contained within the chassis. Do not use VLAN tag 33 or 44.

IP Addressing for the System Management Network

The IP address of the system management network on the top (slot 1A) distributed management card is always the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.22; for the bottom (slot 1B) distributed management card it is always the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.23. The IP alias address for the system management network on the active distributed management card is the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.25. The IP alias address (the CGTP interface) provides packet redundancy.

The IP address of the system management network on the node boards is formed as follows. The midplane FRU ID field SysmgtbusIPSubnet contains the value IP_subnet_address.slot_number. The default IP subnet address is 0xc0a80d00 (192.168.13.0) and the default IP subnet mask is 0xffffffe0 (255.255.255.224). When you power on the Netra CT server, and if you have not made any changes for the system management network in the midplane FRU ID, the IP address of a board installed in slot 3 is configured to 192.168.13.3; if you then move that board to slot 4, the IP address for that board is configured to 192.168.13.4.

TABLE 2-3 shows the system management network interfaces with the IP address defaults for the distributed management cards and a node board in slot 4.

TABLE 2-3 System Management Network Interface IP Address Defaults

Board | CGTP Interface Address | VLAN 1 Address | VLAN 2 Address

Distributed management card 1A | 192.168.13.22 | 192.168.13.54 | 192.168.13.86
Distributed management card 1B | 192.168.13.23 | 192.168.13.55 | 192.168.13.87
Active distributed management card | 192.168.13.25 (alias) | 192.168.13.57 (alias) | 192.168.13.89 (alias)
Node board in slot 4 | 192.168.13.4 | 192.168.13.36 | 192.168.13.68



To Configure the System Management Network

1. Log in to the active distributed management card.

2. Set the FRU ID for the system management network:

hostname cli> setfru fru_name  instance  fru_property  value

Refer to TABLE 2-1 for allowable information for each variable. You must set both the system management network IP subnet address and the subnet mask in hexadecimal format. For example, to set the subnet address to 192.168.16.0 and the subnet mask to 255.255.255.224, enter the following:

hostname cli> setfru midplane 1 SysmgtbusIPSubnet c0a81000
hostname cli> setfru midplane 1 SysmgtbusIPSubnetMask ffffffe0

3. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.

Checking the System Management Network Configuration for the Solaris OS

After you boot the Solaris OS, you can check to see that the system management network has been configured by using the ifconfig -a command. You should see output for the dmfe33000 interface (VLAN 1), the dmfe44001 interface (VLAN 2), and the cgtp1 interface similar to the following:

# ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> 
mtu 8232 index 1
  inet 127.0.0.1 netmask ff000000
dmfe0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4>
mtu 1500 index 2
  inet 10.4.72.146 netmask ffffff00 broadcast 10.4.72.255
  ether 0:3:ba:2f:37:1a
dmfe33000: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4>
mtu 1500 index 3
  inet 192.168.13.35 netmask ffffffe0 broadcast 192.168.13.255
  ether 0:3:ba:2f:37:1a
dmfe44001: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> 
mtu 1500 index 4
  inet 192.168.13.67 netmask ffffffe0 broadcast 192.168.13.255
  ether 0:3:ba:2f:37:1b
cgtp1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> 
mtu 1500 index 5
  inet 192.168.13.3 netmask ffffffe0 broadcast 192.168.13.255
  ether 0:0:0:0:0:0

To test for actual communication, use the ping -s command. You should see output similar to the following:

# ping -s 192.168.13.25
PING 192.168.13.25: 56 data bytes
64 bytes from 192.168.13.25: icmp_seq=0. time=1. ms
64 bytes from 192.168.13.25: icmp_seq=1. time=0. ms
64 bytes from 192.168.13.25: icmp_seq=2. time=0. ms
...
----192.168.13.25 PING statistics----
14 packets transmitted, 14 packets received, 0% packet loss
round-trip (ms) min/avg/max=0/0/1

The cgtp1 interface should be plumbed and have a valid IP address assigned to it.



Note - This is a required interface. Never unplumb or unconfigure the system management network.



Checking the System Management Network Configuration on the Distributed Management Card

After you configure the system management network, you can check to see that it has been configured by using the CLI shownetwork command. You should see output similar to the following:

hostname cli> shownetwork
Netract network configuration is:
 
External Ethernet Interface : SRVC LAN
ip_addr : 10.4.72.170
ip_netmask : 0xffffff00
ip_alias : 10.4.72.195
ip_alias_netmask : 0xffffff00
mac_address : 00:03:ba:44:51:d0
 
System Management Interface :
ip_addr : 192.168.13.23
ip_alias : 192.168.13.25
ip_netmask : 0xffffffe0
 hostname cli> 


Specifying Other FRU ID Information

You can use the FRU properties Location, Cust_Data, and User_Label to enter any customer-specific information about your system. These are optional entries; by default, there is no information stored in these fields. Information entered for the Location property is displayed through the MOH application.

You might want to use the Location FRU property to enter specific, physical location information for your system. For example, you might enter the number on the chassis label to indicate the location of the system.


To Specify Other FRU ID Information

1. Log in to the active distributed management card.

2. Specify other FRU ID information for the Netra CT server:

hostname cli> setfru fru_name  instance  fru_property  value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set the location information to reflect a chassis label that reads 12345-10-20, enter the following:

hostname cli> setfru midplane 1 Location 12345-10-20

3. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.


Configuring the Distributed Management Cards for Failover

The Netra CT 820 server provides distributed management card failover from the active distributed management card to the standby distributed management card for certain hardware and software events.

Failover includes moving all services provided by the active distributed management card to the standby distributed management card, which then becomes the newly active card.

This section describes the distributed management cards' failover and redundancy capabilities and provides procedures to:

When you use CLI commands to enter information and set variables on the active distributed management card, this data is immediately mirrored on the standby distributed management card so that it is ready for a failover. Mirrored information includes user names, passwords, and permissions; configuration information, such as ntp server, alias IP address, SNMP interface, and MOH security information; and failover information.

If a failover occurs, services are started on the newly active card that are normally not running on the standby distributed management card, such as the PMS application. The failover is complete when the newly active distributed management card can provide services for CLI, MOH, and PMS.

Certain events always cause a failover. Other events cause a failover only if the setfailover mode is set to on (by default, failover mode is off). Failover mode can be turned on using the distributed management card CLI, a remote shell (rsh command), or MOH. Refer to the Netra CT 820 Server Software Developer's Guide for instructions on setting failover mode through MOH.

Failover Causes

Each distributed management card monitors itself (local monitoring) and the other distributed management card (remote monitoring). During this monitoring, a problem in any of the areas being monitored on the active distributed management card could cause a failover.

A failover always occurs with:

A failover occurs with any of the following events if the setfailover mode is set to on:

FIGURE 2-3 shows the internal hardware signals and interfaces that support distributed management card failover, and TABLE 2-4 describes the signals and interfaces.

  FIGURE 2-3 Hardware Signals and Interfaces Supporting Failover

Diagram showing hardware signals and interfaces between the two distributed management cards.

 

TABLE 2-4 Hardware Signals and Interfaces Supporting Failover

Serial interface: The primary interface between the distributed management cards; it is used to send heartbeats and state synchronization information. Both distributed management cards must view the same field-replaceable unit (FRU), such as a particular fan tray or a node board in a certain slot, in the same state (for example, powered on).

SPI interface: The redundant interface for the serial interface; if the serial interface fails, the SPI interface takes over sending the heartbeat and state synchronization information.

#PRSNT: This signal indicates the presence of a distributed management card.

#NEG: This signal indicates which is the active distributed management card.

#HEALTHY: This signal indicates the overall health of the distributed management card, including both the hardware and software.


The external alarm port on the active distributed management card is not failed over to the standby distributed management card on a failover event. Refer to the Netra CT 820 Server Installation Guide for information on connecting alarm ports, and to the Netra CT 820 Server Software Developer's Guide for information on reloading the alarm severity profile on the newly active distributed management card.

TABLE 2-5 shows the relationship of services and IP addresses to failover. When specifying the alias IP address for an Ethernet port, use the alias IP address for the port you configured, that is, the external or the internal Ethernet port.

TABLE 2-5 Services and IP Connections on Failover

Service | IP Address Use | Failover Impact

CLI: Telnet | Static IP address for Ethernet ports on both top and bottom distributed management cards | Continue to communicate with the newly active distributed management card on failover.

CLI: Telnet | Alias IP address for Ethernet port on active distributed management card | Lose Telnet connection to active distributed management card on failover; must reconnect.

MOH: RMI application | Static IP address for Ethernet ports on both top and bottom distributed management cards | Keep RMI connection with the newly active distributed management card on failover. Notification is sent from the newly active distributed management card. See the Netra CT 820 Server Software Developer's Guide for information on how to manage this in your RMI application.

MOH: RMI application | Alias IP address for Ethernet port on active distributed management card | Lose RMI connection to active distributed management card on failover; must reconnect. No notification is sent.

MOH: SNMP application | Static IP address for Ethernet ports on both top and bottom distributed management cards | Continue to communicate with the newly active distributed management card. The management agent sends a trap indicating a change in the distributed management card standby status.

MOH: SNMP application | Alias IP address for Ethernet port on active distributed management card | If the failover is caused by the setfailover force command, continue to communicate with the newly active distributed management card on failover; the management agent sends a trap indicating a change in the distributed management card standby status. If the failover is caused by a failover event, continue to communicate with the newly active distributed management card on failover; the management agent does not send a trap.

PMS application | Alias IP address for external Ethernet port on active distributed management card | The PMS client library reconnects to the newly active distributed management card PMS daemon on failover. This alias IP address is used for basic PMS connectivity and for partner lists (slotrndaddressadd command) for a remote system.

PMS application | Alias IP address for system management interface | The PMS client library reconnects to the newly active distributed management card PMS daemon on failover. This alias IP address is used for partner lists (slotrndaddressadd command) within the same system.

rsh | Static IP address for system management interface on top or bottom distributed management card | Lose connection to the failed distributed management card.

rsh | Alias IP address for system management interface | Continue to communicate with the newly active distributed management card on failover.


Signs of a Failover

Signs of a failover from an active to a standby distributed management card include:

If recovery is enabled (CLI setdmcrecovery on command), the active distributed management card tries a hard reset on the failed distributed management card. If the reset succeeds, the reset distributed management card comes online as the standby distributed management card. The reset is tried three times. An unsuccessful reset after the third try may indicate a serious hardware problem. By default, recovery mode is off.


To Enable Failover Using the CLI

1. Log in to the active distributed management card.

2. Set the failover mode to on:

hostname cli> setfailover on


To Enable Recovery

1. Log in to the active distributed management card.

2. Set the recovery mode to on:

hostname cli> setdmcrecovery on

Configuring Distributed Management Card Failover for External Ethernet Port Failure

You can configure the active distributed management card to fail over to the standby distributed management card if its external Ethernet port fails by using the CLI setetherfailover command.


To Configure the Active Distributed Management Card to Fail Over if its External Ethernet Port Fails

1. Log in to the active distributed management card.

2. Verify that the failover mode is on:

hostname cli> showfailover
DMC failover is turned: ON

3. To enable failover if the external Ethernet interface fails, enter the following:

hostname cli> setetherfailover -b 1 enable

Configuring Distributed Management Card Alias IP Addresses for the Ethernet Ports

You can configure an alias IP address for each Ethernet port on the active distributed management card. Using an alias IP address allows you to stay connected to whichever card is the active distributed management card in the event of a failover. The alias IP address must be in the same subnet as the IP address configured for that port. If you do not configure an alias IP address, you must connect to the static IP address.



Note - For the alias IP addresses to take effect, you must either reset the active distributed management card or force a failover.




To Configure Alias IP Addresses for the Ethernet Ports

1. Log in to the active distributed management card.

2. Verify that the failover mode is on:

hostname cli> showfailover
DMC failover is turned: ON

3. To configure an alias IP address for the internal Ethernet port, enter the following:

hostname cli> setipalias -b 2 addr
hostname cli> setipaliasnetmask -b 2 addr

where port number 2 indicates the internal Ethernet port to the switching fabric board, and addr is the alias IP address and alias IP netmask for that port.

4. To configure an alias IP address for the external Ethernet port, enter the following:

hostname cli> setipalias -b 1 addr
hostname cli> setipaliasnetmask -b 1 addr

where port number 1 indicates the external Ethernet port to the network, and addr is the alias IP address and alias IP netmask for that port.

5. Reset the active distributed management card or force a failover.

 
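As a combined, hypothetical example of Steps 3 and 4, the following commands set aliases for the internal and external Ethernet ports. The addresses shown follow the defaults and sample values used elsewhere in this chapter; each alias must be in the same subnet as the static IP address configured for that port:

hostname cli> setipalias -b 2 192.168.13.25
hostname cli> setipaliasnetmask -b 2 255.255.255.224
hostname cli> setipalias -b 1 10.4.72.195
hostname cli> setipaliasnetmask -b 1 255.255.255.0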


Setting the Date and Time on the Distributed Management Cards

The distributed management card does not support a battery-backed time-of-day clock because battery life cannot be monitored to predict end of life, and drift in system clocks can be common. To provide a consistent system time, set the date and time on the distributed management card using one of these methods:

Manually, using the CLI setdate command
As an NTP client, using the CLI setntpserver command to point to an NTP server on your network

You can also set the time zone on the distributed management card. Daylight savings time is not supported.


To Set the Distributed Management Card Date and Time Manually

1. Log in to the distributed management card.

2. Set the date and time manually:

hostname cli> setdate [mmdd][HHMM][ccyy][:ss]

where mm is the current month; dd is the current day of the month; HH is the current hour of the day; MM is the current minutes past the hour; cc is the current century minus one; yy is the current year; and :ss is the current second number.

Set the date and time on both distributed management cards.


To Set the Distributed Management Card Date and Time as an NTP Client (and optionally, as an NTP Server)

1. Log in to the active distributed management card.

2. Set the distributed management card date and time as an NTP client:

hostname cli> setntpserver addr

where addr is the IP address of the NTP server. This information is synchronized between the two distributed management cards.

3. Reset the active distributed management card.

You can now configure the node boards to use the distributed management card as an NTP server, if desired. The recommended NTP client configuration is to use the NTP servers on both distributed management cards, with their respective system management network IP addresses (by default, 192.168.13.22 and 192.168.13.23).
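For example, a minimal /etc/inet/ntp.conf on a node board might contain entries along these lines, assuming the default system management network addresses for the two distributed management cards:

# Example node board NTP client configuration (assumes default addresses)
server 192.168.13.22
server 192.168.13.23
driftfile /var/ntp/ntp.drift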


To Set the Time Zone on the Distributed Management Card

1. Log in to the distributed management card.

2. Set the time zone with the settimezone command:

hostname cli> settimezone time_zone +|- offset

where time_zone is a valid three-character time zone, + or - indicates whether the time zone is west (+) or east (-) of Greenwich Mean Time (GMT), and offset is the number of hours (and optionally, minutes and seconds) the time zone is west or east of GMT.

The offset has the form of hh[:mm[:ss]]. The minutes (mm) and seconds (ss) are optional. The hour (hh) is required and may be a single digit. The hour must be between 0 and 24, and the minutes and seconds (if present) between 0 and 59.

For example, to set the local time zone to Pacific Standard Time, enter the following:

hostname cli> settimezone PST+8

Daylight savings time is not supported.

3. Reset the active distributed management card.


Configuring the Node Boards

Verify that you can log in to the node boards. Complete any Solaris configuration needed for your environment, such as modifying OpenBoot PROM variables. Refer to the Solaris documentation, the OpenBoot PROM documentation, or to the specific node board documentation if you need additional information. Chapter 3 contains additional information on node boards.


Enabling the Managed Object Hierarchy Application

The Managed Object Hierarchy (MOH) is an application that runs on the distributed management cards and the node boards. It monitors the field-replaceable units (FRUs) in your system.

Software Required

The MOH application requires the Solaris 8 2/02 or compatible operating system, and additional Netra CT platform-specific Solaris patches that contain packages shown in TABLE 2-6.

TABLE 2-6 Solaris Packages for the MOH Application

SUNW2jdrt: Java Runtime Java Dynamic Management Kit (JDMK) package
SUNWctmgx: Netra CT management agent package
SUNWctac: Distributed management card firmware package that includes the Netra CT management agent


Download Solaris patch updates from the web site: http://www.sunsolve.sun.com. For current patch information, refer to the Netra CT Server Release Notes.

Install the patch updates using the patchadd command. After these packages are installed, they reside in the default installation directory, /opt/SUNWnetract/mgmt3.0/. To verify the packages are installed, use the pkginfo command:

# pkginfo -l SUNW2jdrt SUNWctmgx SUNWctac
...
PKGINST: SUNW2jdrt
...

Once the MOH application is running, MOH agents on the distributed management cards and on node boards interface with your Simple Network Management Protocol (SNMP) or Remote Method Invocation (RMI) application to discover network elements, monitor the system, and provide status messages.

Refer to the Netra CT Server Software Developer's Guide for information on writing applications to interface with the MOH application.

Starting the MOH Application

The MOH application is started automatically on the distributed management cards.

You must start the MOH application as root on the node boards using the ctmgx start command:

# cd /opt/SUNWnetract/mgmt3.0/bin
# ./ctmgx start [options]

If you installed the Solaris patches in a directory other than the default directory, specify that path instead.

TABLE 2-7 lists the options that can be specified with ctmgx start when you start the MOH application.

TABLE 2-7 ctmgx Options

-rmiport portnum: Specify the RMI port number. The default is 1099.
-snmpport portnum: Specify the SNMP port number. The default is 9161.
-snmpacl filename: Specify the SNMP ACL file to be used. The full path to filename must be specified.
-showversion: Print the system version number.


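For example, the following hypothetical invocation starts MOH with a nondefault RMI port number and a custom ACL file (the port number and file path are placeholders):

# cd /opt/SUNWnetract/mgmt3.0/bin
# ./ctmgx start -rmiport 1100 -snmpacl /etc/opt/netract/jdmk.acl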
By default, SNMP and RMI applications have read-write access to MOH agents on the distributed management cards and on node boards. The following sections describe how to configure MOH to control SNMP and RMI access on the distributed management cards and on node boards.

MOH Configuration and SNMP

By default, SNMP applications have read-write access to the Netra CT 820 server MOH agents. If you want to control which applications communicate with the MOH agents, you must configure the distributed management card and node board SNMP interfaces. This configuration provides additional security by controlling who has access to the agent.

The SNMP interface uses an SNMP access control list (ACL) to control:

Which SNMP management applications, identified by community name and IP address, can read or write MOH agent information
The communities and hosts to which SNMP traps are sent

An SNMP community is a group of IP addresses of devices supporting SNMP. It helps define where information is sent. The community name identifies the group. An SNMP device or agent may belong to more than one SNMP community. An SNMP device or agent does not respond to requests originating from IP addresses that do not belong to one of its communities.

SNMP Applications and Failover

If a distributed management card failover occurs, an SNMP application responds as follows:

Using the alias IP address for the Ethernet port is a simpler model to manage than using the static IP addresses for both Ethernet ports; it also ensures the failover is transparent.

Distributed Management Card SNMP Interface

On the active distributed management card, you enter ACL information using the CLI snmpconfig command. A limit of 20 communities can be specified. For each community, a limit of 5 IP addresses can be specified. The ACL information is stored in the distributed management card flash memory.


To Configure the Distributed Management Card SNMP Interface

1. Log in to the active distributed management card.

2. Enter SNMP ACL information with the snmpconfig command:

hostname cli> snmpconfig add|del|show access|trap community [readonly|readwrite] [ip_addr] 

where community is the name of a group that the MOH agent on the distributed management card supports, and ip_addr is the IP address of a device supporting an SNMP management application. For example, to add read-only access (the default) for the community trees, to add read-write access for the community birds, and to add a trap for the community lakes, enter the following:

hostname cli> snmpconfig add access trees ip_addr ip_addr ip_addr
hostname cli> snmpconfig add access birds readwrite ip_addr
hostname cli> snmpconfig add trap lakes ip_addr

3. Reset the active distributed management card.

You can use the snmpconfig command to show or delete existing ACL information. For example, to show the ACL access and trap information entered in Step 2 above, enter the following:

hostname cli> snmpconfig show access *
Community   Permissions    Hosts
trees       read-only      ip_addr ip_addr ip_addr
birds       read-write     ip_addr
hostname cli> snmpconfig show trap *
Community   Hosts
lakes       ip_addr
hostname cli> 

 

Node Board SNMP Interface

On node boards, ACL information is stored in a configuration file in the Solaris OS.

The format of this file is specified in the JDMK documentation. An ACL file template that is part of the JDMK package is installed by default in
/opt/SUNWjdmk/jdmk4.2/1.2/etc/conf/template.acl.

An example of a configuration file is:

acl = {
 {
 communities = trees
 access = read-only
 managers = oak, elm
 }
 {
 communities = birds
 access = read-write
 managers = robin
 } 
} 
 
trap = {
  {
  trap-community = lakes
  hosts = michigan, mead
  }
}

In this example, oak, elm, robin, michigan, and mead are hostnames. If this is the ACL file specified, when the MOH starts, a coldStart trap is sent to michigan and mead. Management applications running on oak and elm can read (get) information from MOH, but they cannot write (set) information. Management applications running on robin can read (get) and write (set) information from MOH.

The ACL file can be stored anywhere on your system. When you start the MOH application and you want to use an ACL file you created, you specify the complete path to the file.

Refer to the JDMK documentation (http://www.sun.com/documentation) for more information on ACL file format.


To Configure a Node Board SNMP Interface

1. Log in to the server.

2. Create a configuration file in the format of a JDMK ACL configuration file.

3. As root, start the MOH application.

# cd /opt/SUNWnetract/mgmt3.0/bin
# ./ctmgx start [options]

If you installed the Solaris patches in a directory other than the default directory, specify that path instead.

The MOH application starts and reads the configuration file using one of these methods, in this order:

a. If the command ctmgx start -snmpacl filename is used, MOH uses the specified file as the ACL file.

b. If the file /opt/SUNWjdmk/jdmk4.2/1.2/etc/conf/jdmk.acl exists, MOH uses that file as the ACL file when the command ctmgx start is used.

If the ACL cannot be determined after these steps, SNMP applications have read-write access and MOH sends the coldStart trap to the local host only.

MOH Configuration and RMI

By default, RMI applications have read-write access to the Netra CT 820 server MOH agents. If you want to control which applications communicate with the MOH agents, you must configure the distributed management card interfaces for RMI. This configuration provides additional security by authenticating who has access to the agent.

To authenticate which RMI applications can access the MOH agents on the distributed management card, the following configuration is needed:

The RMI application must supply a valid distributed management card user name and password, with appropriate permissions
MOH security for RMI must be enabled on the active distributed management card (the setmohsecurity option set to true)

If MOH security for RMI was enabled but becomes disabled on the distributed management card (for example, if the distributed management card is being reset or hot-swapped), security is disabled, a security exception occurs, and no access is given.

RMI Applications and Failover

If a distributed management card failover occurs, an RMI application responds as follows:


To Configure the Distributed Management Card RMI Interface

1. Verify that RMI programs you want to access the distributed management card MOH agent contain a valid distributed management card user name and password, with appropriate permissions.

2. Log in to the active distributed management card.

3. Set the setmohsecurity option to true:

hostname cli> setmohsecurity true

4. Reset the active distributed management card.

The RMI authentication takes effect immediately. Any modification to the distributed management card user names and passwords also takes effect immediately.


Enabling the Processor Management Service Application

The Processor Management Service (PMS) is a management application that provides support for high-availability services and applications. It provides both local and remote monitoring and control of a cluster of node boards. It monitors the health of node boards, takes recovery actions, and notifies partner nodes if so configured. It provides the state of the resources, such as hardware, operating system, and applications.

This section describes:

You use the distributed management card PMS CLI commands to configure PMS services, such as fault detection/notification, and fault recovery. The recovery administration is described in Using the PMS Application for Recovery and Control of Node Boards.

You can also use the PMS API to configure partner lists. Partner lists are tables of distributed management card and node board information relating to connectivity and addressing. Refer to the pms API man pages, installed by default in /opt/SUNWnetract/mgmt3.0/man, for more information on partner lists.

Note that the PMS daemon runs only on the active distributed management card. Because of this, you cannot use static IP addresses for the distributed management card with the PMS application. You must use alias IP addresses so that PMS daemons continue to run on the active distributed management card in case of a failover, as follows:

Use the alias IP address for the system management interface for PMS connectivity and for partner lists within the same system
Use the alias IP address for the external Ethernet port on the active distributed management card for partner lists on a remote system


To Start or Stop the PMS Application on a Node Board

1. Log in as root to the server that has the Solaris patches installed (see Software Required).

2. Create a Solaris script to start, stop, and restart PMS, as follows:

#!/sbin/sh
# Start/stop/restart processes required for PMS
 
case "$1" in
'start')
	/opt/SUNWnetract/mgmt3.0/bin/pmsd start -e force_avail
	;;
'stop')
	/opt/SUNWnetract/mgmt3.0/bin/pmsd stop
	;;
'restart')
	/opt/SUNWnetract/mgmt3.0/bin/pmsd stop
	/opt/SUNWnetract/mgmt3.0/bin/pmsd start -e force_avail
	;;
*)
	echo "Usage: $0 {start | stop | restart }"
	exit 1
	;;
esac
exit 0

3. Save the script to a file.

4. Start, stop, or restart the PMS application by typing one of the following commands:

# sh filename start
# sh filename stop
# sh filename restart

where filename is the name of the file in which you saved the script.

You can also save this script in the /etc/rc* directory of your choice to have PMS automatically start at boot time.
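For example, assuming you saved the script as /etc/init.d/pms (the file and link names here are only placeholders), a conventional Solaris run-control link starts PMS automatically when the board enters run level 3:

# cp filename /etc/init.d/pms
# ln /etc/init.d/pms /etc/rc3.d/S99pms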

Stopping and Restarting the PMS Daemon on the Distributed Management Card

The PMS daemon (pmsd) starts automatically on the active distributed management card. However, you can manually stop and restart the PMS daemon on the active distributed management card.



Note - Stopping the PMS daemon on the active distributed management card forces a failover if the setfailover mode is set to on. If you do not want a failover to occur, set the setfailover mode to off, stop PMS, then re-enable failover. When you stop PMS, the healthy state of the active distributed management card becomes not healthy and you must reset the distributed management card to recover.



These optional parameters can be specified:

You specify the port number for pmsd using the parameter port_num.

You specify the state in which to start pmsd using the parameter server_admin_state. This parameter may be set to force_unavail (force pmsd to start in the unavailable state); force_avail (force pmsd to start in the available state); or vote_avail (start pmsd in the available state, but only if all conditions have been met to make it available; if all the conditions have not been met, pmsd will not become available).

You specify whether to reset persistent storage to the default values on the distributed management card using the -d option. Data in persistent storage remains across reboots or power on and off cycles. If you do not specify -d, pmsd is started using its existing persistent storage configuration; if you specify -d, the persistent storage configuration is reset to the defaults for pmsd. The -d option would typically be specified only to perform a bulk reset of persistent storage during initial system bring up or if corruption occurred.


To Manually Stop the Processor Management Service on the Distributed Management Card

1. Log in to the active distributed management card.

2. Stop the PMS daemon with the stop command:

hostname cli> pmsd stop [-p port_num] 

where port_num is the port number of the currently running pmsd you want to stop. The default is port 10300. Note that stopping PMS on the active distributed management card forces a failover if the setfailover mode is set to on.


To Manually Start the Processor Management Service on the Distributed Management Card

1. Log in to the active distributed management card.

2. Start the PMS daemon with the start command:

hostname cli> pmsd start [-p port_num] [-e server_admin_state] [-d]

where port_num is the port number for pmsd to listen on, server_admin_state can be force_unavail, force_avail, or vote_avail, and -d resets the persistent storage to the defaults for pmsd.
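For example, to start the daemon on the default port, force it into the available state, and reset its persistent storage to the defaults, you might enter:

hostname cli> pmsd start -e force_avail -d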

Setting the IP Address for the Distributed Management Card to Control Node Boards in the Same System

The pmsd slotaddressset command is used to set the IP address by which the distributed management card controls and monitors a node board in a particular slot. The command establishes the connection between pmsd running on the distributed management card and pmsd running on a node board. The distributed management card and the node board must be in the same system.

You specify the slot number of the node board and the IP address to be configured. The default IP address for all slots is 0.0.0.0. Therefore, control is initially disabled.


To Set the IP Address for the Distributed Management Card to Control Node Boards in the Same System

1. Log in to the active distributed management card.

2. Set the IP address with the slotaddressset command:

hostname cli> pmsd slotaddressset -s slot_num -i ip_addr

where slot_num can be a slot number from 3 to 20, and ip_addr is the IP address to be configured.
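For example, to manage the node board in slot 4 using the default system management network address for that slot, you might enter:

hostname cli> pmsd slotaddressset -s 4 -i 192.168.13.4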

Printing IP Address Information

The pmsd slotaddressshow -s slot_num|all command can be used to print IP address information for the specified slot or all slots. If the IP address information is not 0.0.0.0 for a given slot, PMS is configured to manage the node board in this slot using this IP address.

Adding Address Information for a Local Node Board to Control Node Boards in Local or Remote Systems

You can use the PMS CLI application to enable local node boards to remotely monitor and control node boards in the same system or in other Netra CT systems. One use for this capability is in a high availability environment. For example, if a high availability application fails on a controlled node board, PMS notifies the controlling node board of the failure, and the controlling node board (through a customer application) notifies another controlled node board to start the same high availability application.

The pmsd slotrndaddressadd command is used to configure a local node board to control and monitor another node board by specifying the IP addresses and slot information for the node board to be controlled, using the parameters shown in TABLE 2-8.

TABLE 2-8 pmsd slotrndaddressadd Parameters

-s slot_num|all: Specifies the slot number of the node board that is being configured in the local system to monitor or control other local or remote node boards.

-n ip_addr: Specifies the IP address of the node board in the local or remote system to be monitored or controlled by the local node board.

-d ip_addr: Specifies the IP address of the distributed management card in the same local or remote system as the node board to be monitored or controlled by the local node board. If the distributed management card is in the local system, use the alias IP address for the system management interface (the default is 192.168.13.25); if the distributed management card is in a remote system, use the alias IP address for the external Ethernet port on the active distributed management card.

-r slot_num: Specifies the slot number of the node board in the local or remote system to be monitored or controlled by the local node board.

Each local node board can control and monitor up to 16 local or remote node boards. Each local node board being managed must have already had its IP address set using the pmsd slotaddressset command.


To Add Address Information for a Local Node Board to Control Node Boards in Local or Remote Systems

1. Log in to the active distributed management card.

2. Add the address information with the slotrndaddressadd command:

hostname cli> pmsd slotrndaddressadd -s slot_num|all -n ip_addr -d ip_addr -r slot_num

where -s slot_num is the slot number in the same system of the local node board you want to use to control other local or remote node boards, and all specifies all slots containing node boards in the local system; -n ip_addr is the IP address of the node board to be controlled; -d ip_addr is either the alias IP address of the system management interface if the active distributed management card is in the system of the node board to be controlled or the alias IP address of the external Ethernet port of the active distributed management card if that card is in a remote system; and -r slot_num is the slot number of the node board to be controlled.

When you add address information with the slotrndaddressadd command, an index number is automatically assigned to the information. You can see index numbers by using the slotrndaddressshow command and use the index numbers to delete address information with the slotrndaddressdelete command.
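For example, the following hypothetical command configures the local node board in slot 4 to monitor and control the node board in slot 6 of the same system, using the default system management network addresses:

hostname cli> pmsd slotrndaddressadd -s 4 -n 192.168.13.6 -d 192.168.13.25 -r 6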

Deleting Address Information

The pmsd slotrndaddressdelete -s slot_num|all -i index_num|all command can be used to delete address information from the controlling node board. The -s slot_num|all parameter specifies whether the address information is deleted on a single slot number or on all slots containing node boards in the local system. The -i index_num|all parameter specifies whether the address information will be deleted for a single address entry or for all address entries; index_num can be 1 to 16. Before using this command, it is advisable to print the current address information using the pmsd slotrndaddressshow command, so you know the index number to use.
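For example, to list the entries configured for the node board in slot 4 and then delete a hypothetical entry with index number 2, you might enter:

hostname cli> pmsd slotrndaddressshow -s 4 -i all
hostname cli> pmsd slotrndaddressdelete -s 4 -i 2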

Printing Address Information

The pmsd slotrndaddressshow -s slot_num|all -i index_num|all command can be used to print address information. The -s slot_num|all parameter specifies whether the address information is printed for a single slot number or for all slots containing node boards in the local system. The -i index_num|all parameter specifies whether the address information is printed for a single address entry or for all address entries; index_num can be 1 to 16.