CHAPTER 2

Configuring Your System

This chapter assumes you have already installed the Solaris operating environment and the required patches on your Netra CT 820 node boards.

You configure the Netra CT 820 system primarily through the active distributed management card command-line interface (CLI). The active distributed management card CLI enables system-level configuration, administration, and management of the node boards, the switching fabric boards, the distributed management card, power supplies, and fan trays. The distributed management card CLI can be used both locally and remotely.

You configure the distributed management cards first, then the node boards, then the system-wide applications.

This chapter includes the following sections:

- Accessing the Distributed Management Cards
- Specifying Netra CT Server FRU ID Information
- Displaying Netra CT Server FRU ID Information
- Configuring a Chassis Slot for a Board
- Configuring a Node Board as a Boot Server
- Configuring the System Management Network
- Specifying Other FRU ID Information
- Configuring the Node Boards
- Enabling the Managed Object Hierarchy Application
- Enabling the Processor Management Service Application

Accessing the Distributed Management Cards

When you initially access either distributed management card, you must do so over the serial port (console), using an ASCII terminal or the tip program. Log in with the default user account of netract and the password suncli1. This account is set to full authorization (permissions). The account cannot be deleted; however, you should change its password for security purposes before your Netra CT 820 server is operational.

The next sections provide information on configuring the distributed management cards' external Ethernet ports and setting up user accounts and passwords using the distributed management card command-line interface. For more information on using the distributed management card command-line interface, refer to Chapter 3.

After you configure the external Ethernet port, you can also access the distributed management card over that port, through a telnet connection.

The distributed management card supports 22 sessions (tip and telnet connections) at once.



Note - The term "distributed management card" as used in this manual refers to the active distributed management card unless otherwise specified.




Configuring the Distributed Management Cards' External Ethernet Ports

Each distributed management card has one external Ethernet port on the rear transition card, labeled SRVC LAN. You configure this port using the setipmode, setipaddr, setipnetmask, and setipgateway CLI commands. Note the following:

You must be logged in to the distributed management card with a user account that has full permissions.

When you specify the port number (port_num), use 1 to indicate the external Ethernet port.

You must reset the distributed management card (reset dmc) for any changes to take effect.



Note - The external Ethernet interface on the distributed management card and the external Ethernet interface on the switching fabric board must be connected to different subnets. If they are configured on the same subnet, ARP messages will be displayed on the distributed management card console.




To Configure the Distributed Management Cards' External Ethernet Ports

1. Log in to the distributed management card.

2. Set the IP mode:

hostname cli> setipmode -b port_num rarp|config|none


Choose the IP mode according to the services available in the network (rarp, config, or none). The default is none. Set the ipmode to config to configure the Ethernet port. You must reset the distributed management card for the changes to take effect.

3. Set the IP address:

hostname cli> setipaddr -b port_num addr


Set the IP address of the distributed management card. The default is 0.0.0.0. This command is only used if the ipmode is set to config. You must reset the distributed management card for the changes to take effect.

4. Set the IP netmask:

hostname cli> setipnetmask -b port_num addr


Set the IP netmask of the distributed management card. The default is 0.0.0.0. This command is only used if the ipmode is set to config. You must reset the distributed management card for the changes to take effect.

5. Set the IP gateway:

hostname cli> setipgateway addr


Set the IP gateway of the distributed management card. The default is 0.0.0.0. You must reset the distributed management card for the changes to take effect.

6. Reset the distributed management card.
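
For example, assuming the external Ethernet port (port 1) is to be configured with a static address, a session similar to the following configures and activates the port. The IP address, netmask, and gateway shown are placeholders; substitute values appropriate for your network:

hostname cli> setipmode -b 1 config
hostname cli> setipaddr -b 1 192.168.207.130
hostname cli> setipnetmask -b 1 255.255.255.0
hostname cli> setipgateway 192.168.207.1
hostname cli> reset dmc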


Setting Up User Accounts on the Distributed Management Card

User accounts are set up using the distributed management card command-line interface. The default user account is netract and the password is suncli1. This account is set to full authorization (permissions). The account cannot be deleted; however, you should change its password for security purposes before your Netra CT 820 server is operational.

The distributed management card supports 16 accounts with passwords.


To Set Up a User Account

1. Log in to the distributed management card.

2. Add a user:

hostname cli> useradd username

3. Add a password for that user:

hostname cli> userpassword username

By default, new accounts are created with read-only permission. Permission levels can be changed using the userperm command; refer to CLI Commands for more information about permissions and the userperm command.
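
For example, assuming you want to add an account named admin2 (the username is a placeholder that meets the restrictions described below), enter the following and supply the new password when prompted:

hostname cli> useradd admin2
hostname cli> userpassword admin2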

Username Restrictions

The username field has a maximum length of 16 characters; it must contain at least one lowercase alphabetic character, and the first character must be alphabetic.

Valid characters for username include:

Password Restrictions

Passwords have the following restrictions:


Specifying Netra CT Server FRU ID Information

A field-replaceable unit (FRU) is a module or component that can typically be replaced in its entirety as part of a field service repair operation.

The Netra CT system FRUs include:

All FRUs except power supplies contain FRU ID (identification) information that includes FRU manufacturing and configuration data. This information can be displayed through the distributed management card CLI (see TABLE 2-2). The Netra CT 820 system supports two FRU ID formats:

In addition, you enter certain FRU ID information, through the distributed management card CLI, that is stored in the midplane. (Note that you can also enter FRU ID information through the MOH application; refer to the Netra CT Server Developer's Guide for instructions.) FRU ID information includes:

Some of this information is used by the MOH application to audit board insertions and prevent misconfigurations, and to display information; some is used by the system management network.

The format of the information to be specified is:

hostname cli> setfru fru_target  fru_instance  fru_field  value

The FRU instance is a logical number; it matches the slot number only for the slot FRU target. The FRU field is case-insensitive.

TABLE 2-1 shows the FRU ID information that can be specified with the CLI setfru command.

TABLE 2-1 FRU ID Information Specified Using the setfru Command

FRU Target

FRU Instance

FRU Field

Value

Description

midplane

1

SysmgtbusIPSubnet

IP subnet address (hexadecimal)

Specify the IP subnet address for the system management network. The default is 0xc0a80d (192.168.13).

midplane

1

SysmgtbusIPSubnetMask

IP subnet mask (hexadecimal)

Specify the IP subnet mask for the system management network. The default is 0xffffffe0 (255.255.255.224).

midplane

1

Location

text description

A description of the location (for example, the number on the chassis label) of the Netra CT system. This description is used in the MOH application. The text can be up to 80 characters in length.

midplane

1

User_Label

text description

Any customer-supplied information. The text can be up to 10 characters in length.

dmc

1 or 2

Cust_data

text description

Any customer-supplied information. The text can be up to 80 characters in length. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

slot

2 to 21

Acceptable_Fru_Types

vendor:partnumber

First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify the allowable plug-in board(s) for that slot, where the value is the vendor name and part number (separated by a colon) of the board. Use the showfru command to display this information. Multiple boards may be specified, separated by a semi-colon (;). The default is to power on all Sun-supported cPSB-only boards.

slot

3 to 20

Acceptable_Fru_Types

nonsun:picmg2.16

This information applies to third-party node boards only. First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify the value nonsun:picmg2.16, which indicates that a third-party node board is allowed in this slot.

slot

3 to 20

Boot_Devices

boot_device_list

First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify the alias(es) listing the devices and/or full device path names the board in this slot will boot from. The boot_device_list can be up to 16 characters in length. When the board in this slot is powered up, this information overwrites the entry in the OpenBoot PROM boot-device NVRAM configuration variable. Specifying "" (the null string) will default to the OpenBoot PROM NVRAM setting.

slot

3 to 20; all

Boot_Mask

true or false

First, specify the chassis slot number to be configured (slots are numbered starting from the left) or all to refer to all configurable slots. Second, specify whether the board in this slot is a boot server for the system. The default is false, which means that the board is not a boot server. Refer to Configuring a Node Board as a Boot Server for instructions on setting the boot mask for a slot.

slot

3 to 20

Cust_Data

text description

First, specify the chassis slot number to be configured. (Slots are numbered starting from the left.) Second, specify any customer-supplied information. The text can be up to 80 characters in length.


Changes to FRU ID fields through the CLI setfru command require you to completely power the system off and on for the changes to take effect. It is recommended that you enter all necessary FRU ID information, then power the system off and on.


Displaying Netra CT Server FRU ID Information

FRU ID information entered during the manufacturing process and through the distributed management card CLI setfru command can be displayed using the showfru command.

TABLE 2-2 shows the FRU ID information that can be displayed with the CLI showfru command. Use the FRU field to specify the information you want; the FRU field is case-insensitive.

TABLE 2-2 FRU ID Information Displayed Using the showfru Command

FRU Target

FRU Instance

FRU Field

Description

midplane

1

Sun_Part_No

Display the part number for the midplane.

midplane

1

Sun_Serial_No

Display the serial number for the midplane.

midplane

1

SysmgtbusIPSubnet

Display the system management network IP subnet address in hexadecimal format for this system.

midplane

1

SysmgtbusIPSubnetMask

Display the system management network IP subnet mask in hexadecimal format for this system.

midplane

1

Vendor_Name

Display the vendor name for the midplane.

midplane

1

Fru_Shortname

Display the FRU short name for the midplane.

midplane

1

Location

Display any customer-supplied text specified for the Location of this system.

midplane

1

User_Label

Display any customer-supplied text for this field.

dmc

1 or 2

Sun_Part_No

Display the part number for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

dmc

1 or 2

Sun_Serial_No

Display the serial number for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

dmc

1 or 2

Vendor_Name

Display the vendor name for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

dmc

1 or 2

Fru_Shortname

Display the FRU short name for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

dmc

1 or 2

Initial_HW_Dash_Level

Display the initial hardware dash level of the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

dmc

1 or 2

Initial_HW_Rev_Level

Display the initial hardware revision level of the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

dmc

1 or 2

Cust_Data

Display any customer-supplied text for this field for the distributed management card in a particular slot. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

slot

3 to 20

Sun_Part_No

Display the part number for the board in a particular slot.

slot

3 to 20

Part_No

Display the part number for the third-party node board in a particular slot.

slot

3 to 20

Sun_Serial_No

Display the serial number for the board in a particular slot.

slot

3 to 20

Serial_No

Display the serial number for the third-party node board in a particular slot.

slot

2 to 21

Acceptable_Fru_Types

Display the allowable plug-in boards for a particular slot.

slot

3 to 20

Boot_Devices

Display the boot devices for a particular slot.

slot

3 to 20

Boot_Mask

Display whether or not the board in a particular slot is a boot server for the system.

slot

3 to 20

Vendor_Name

Display the vendor name for the board in a particular slot.

slot

3 to 20

Fru_Shortname

Display the FRU short name for the board in a particular slot.

slot

3 to 20

Initial_HW_Dash_Level

Display the initial hardware dash level of the board in a particular slot.

slot

3 to 20

Initial_HW_Rev_Level

Display the initial hardware revision level of the board in a particular slot.

slot

3 to 20

Cust_Data

Display any customer-supplied text for this field for the board in a particular slot.

switch

1 or 2

Sun_Part_No

Display the part number for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.

switch

1 or 2

Sun_Serial_No

Display the serial number for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.

switch

1 or 2

Vendor_Name

Display the vendor name for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.

switch

1 or 2

Fru_Shortname

Display the FRU short name for the specified switching fabric board. FRU instance 1 is the switch in slot 2; FRU instance 2 is the switch in slot 21.

fantray

1 to 3

Sun_Part_No

Display the part number for the specified fan tray.

fantray

1 to 3

Sun_Serial_No

Display the serial number for the specified fan tray.

fantray

1 to 3

Vendor_Name

Display the vendor name for the specified fan tray.

fantray

1 to 3

Fru_Shortname

Display the FRU short name for the specified fan tray.



To Display FRU ID Information

1. Log in to the active distributed management card.

2. Enter the showfru command:

hostname cli> showfru fru_target fru_instance fru_field 

Refer to TABLE 2-2 for allowable information for each variable. For example, if you want to display the part number FRU ID information for fan tray 1, enter the following:

hostname cli> showfru fantray 1 Sun_Part_No

Use the FRU target "slot" to display information for the node boards. For example, to display part number FRU ID information for a board in slot 8, enter the following:

hostname cli> showfru slot 8 Sun_Part_No

The next several sections describe the configurations you can set by entering FRU ID information.


Configuring a Chassis Slot for a Board

You can specify the type of board that is allowed in a given chassis slot using the distributed management card CLI. The slot usage information is used by the distributed management card software to audit board insertions and prevent misconfigurations. You can also specify the boot device for the slot, that is, the path to the device the board in the slot will boot from. When the board is powered on, the FRU boot device information overwrites the entry in the OpenBoot PROM boot-device NVRAM configuration variable on that board. The chassis slot information can be changed at any time if desired using the distributed management card CLI.

By default, slots are configured to accept Sun-supported cPSB-only board FRUs unless you specifically set an allowable plug-in for a specific slot. The exceptions are: for a Netra CT 820 server, the distributed management cards must be in slots 1A and 1B and the switching fabric boards must be in slots 2 and 21.

To set allowable plug-ins for a particular slot, you need the vendor name and the part number of the board. This FRU ID information can be displayed using the CLI showfru command; see Displaying Netra CT Server FRU ID Information for more information.


To Configure a Chassis Slot for a Board

1. Log in to the active distributed management card.

2. Set the acceptable FRUs for the slot:

hostname cli> setfru fru_target fru_instance fru_field value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set chassis slot 5 to allow only a particular Sun Microsystems CPU board (vendor 003E, part number 595-5769-03), enter the following:

hostname cli> setfru slot 5 Acceptable_Fru_Types 003E:595-5769-03

Multiple boards can be specified for one slot. Separate the boards with a semi-colon. You can also use the asterisk (*) as a wild card in the part number to allow multiple boards. For example, if you want to set chassis slot 4 to allow only boards from three particular vendors, with multiple board part numbers from one vendor, enter the following:

hostname cli> setfru slot 4 Acceptable_Fru_Types 003E:*;0004:1234-5678-1;0001:8796541-02

3. Set the boot device for the slot:

hostname cli> setfru fru_target fru_instance fru_field value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set chassis slot 5 to boot from a device on the network, enter the following:

hostname cli> setfru slot 5 Boot_Devices boot_device_list

where boot_device_list is the alias(es) specifying the boot devices (limit is 25 bytes), for example, disk net.
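
For example, assuming the OpenBoot PROM alias disk identifies the device you want the board in slot 5 to boot from, you could enter:

hostname cli> setfru slot 5 Boot_Devices disk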

4. Power the system completely off and on: locate the power switch at the rear of the Netra CT 820 server, press it to the Off (O) position, then press it to the On (|) position.


Configuring a Node Board as a Boot Server

You can configure a node board (Sun-supported cPSB-only boards) to be a boot server for the Netra CT 820 system. To do this, you use the Boot_Mask field in the midplane FRU ID. When the system is powered on, the distributed management card looks at the Boot_Mask field; if a boot server has been specified, the distributed management card powers on that node board first. There can be any number of boot servers per Netra CT 820 system. If multiple boot servers are specified, all boot servers are powered on simultaneously.


To Configure a Node Board as a Boot Server

1. Log in to the distributed management card.

2. Specify which slot contains a node board boot server by setting the Boot_Mask:

hostname cli> setfru fru_target fru_instance fru_field value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to specify chassis slot 3 as a node board boot server, enter the following:

hostname cli> setfru slot 3 Boot_Mask true

To specify all slots (3 to 20) as boot servers, enter the following:

hostname cli> setfru slot all Boot_Mask true

To clear all slots (3 to 20) as boot servers, enter the following:

hostname cli> setfru slot all Boot_Mask false

3. Power the system completely off and on: locate the power switch at the rear of the Netra CT 820 server, press it to the Off (O) position, then press it to the On (|) position.


Configuring the System Management Network

The system management network provides a communication channel over the midplane. It can be used to communicate between the distributed management card, the node boards, and the switching fabric boards. It appears as any other generic Ethernet port in the Solaris operating environment. The system management network is configured by default on Solaris and on the distributed management card. The system management network is used by the applications and features, such as MOH, PMS, and console connections from the distributed management card to node boards.

Choosing the IP Address for the System Management Network

The IP address of the system management network on the node boards is formed as follows: the midplane FRU ID field SysmgtbusIPSubnet contains the value IP_subnet_address.slot_number. The default IP subnet address is c0a80d00 (192.168.13.00) and the default IP subnet mask is 0xffffffe0 (255.255.255.224). When you power on the Netra CT server, if you have not made any changes for the system management network in the midplane FRU ID, the IP address of a board installed in slot 3 will be configured to 192.168.13.3; if you then move that board to slot 4, its IP address will be configured to 192.168.13.4.

The IP address of the system management network on the active distributed management card is always the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.22.



Note - If you configure multiple Netra CT 820 systems in the same subnet, make sure each system has a different system management network IP subnet.



For example, if your network configuration includes four Netra CT 820 systems connected to one external switch or router, you must configure a different system management network IP subnet for each system; otherwise, applications, such as MOH, will not work correctly. A sample configuration using the 192.168.13 subnet is as follows:

Netra CT 820 System    System Management Network IP Subnet    IP Address Range for Boards
System #1              192.168.13.0                           192.168.13.1 to 192.168.13.30
System #2              192.168.13.32                          192.168.13.33 to 192.168.13.62
System #3              192.168.13.64                          192.168.13.65 to 192.168.13.94
System #4              192.168.13.96                          192.168.13.97 to 192.168.13.126

FIGURE 2-1 illustrates this sample configuration.

 FIGURE 2-1 System Management Network Subnet Configuration with Multiple Systems

Figure illustrating four Netra CT 820 servers with different subnet numbers.
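
For example, to assign System #2 in the table above its system management network IP subnet of 192.168.13.32, you would convert the subnet address to hexadecimal (192.168.13.32 = c0a80d20) and enter commands similar to the following on that system's active distributed management card. This is a sketch; the subnet mask shown is the default:

hostname cli> setfru midplane 1 SysmgtbusIPSubnet c0a80d20
hostname cli> setfru midplane 1 SysmgtbusIPSubnetMask ffffffe0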

To Configure the System Management Network

1. Log in to the active distributed management card.

2. Set the FRU ID for the system management network:

hostname cli> setfru fru_target fru_instance fru_field value

Refer to TABLE 2-1 for allowable information for each variable. You must set both the system management network IP subnet address and the subnet mask in hexadecimal format. For example, to set the subnet address to 192.168.16.00 and the subnet mask to 255.255.255.224, enter the following:

hostname cli> setfru midplane 1 SysmgtbusIPSubnet c0a81000
hostname cli> setfru midplane 1 SysmgtbusIPSubnetMask ffffffe0

3. Power the system completely off and on: locate the power switch at the rear of the Netra CT 820 server, press it to the Off (O) position, then press it to the On (|) position.

Checking the System Management Network Configuration for the Solaris Environment

After you boot the Solaris operating environment, you can check to see that the system management network has been configured by using the ifconfig -a command. You should see output for the dmfe0:1 interface similar to the following:

# ifconfig -a
eri0: flags=10000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 1
        inet 192.168.207.64 netmask ffffff00 broadcast 192.168.207.255
        ether 8:0:20:a9:4d:1d
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 127.0.0.1 netmask ff000000
dmfe0:1: flags=10000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.16.1 netmask ffffff00 broadcast 192.168.16.255
        ether 8:0:20:a9:4d:1d

To test for actual communication, use the ping -s command. You should see output similar to the following:

# ping -s 192.168.16.3
PING 192.168.16.3: 56 data bytes
64 bytes from 192.168.16.3: icmp_seq=0. time=1. ms
64 bytes from 192.168.16.3: icmp_seq=1. time=0. ms
64 bytes from 192.168.16.3: icmp_seq=2. time=0. ms
...
----192.168.16.3 PING statistics----
14 packets transmitted, 14 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/1

The dmfe0:1 interface should be plumbed and have a valid IP address assigned to it.



Note - This is a required interface. Never unplumb or unconfigure the system management network.



Checking the System Management Network Configuration on the Distributed Management Card

After you configure the system management network, you can check to see that it has been configured by using the CLI shownetwork command. You should see output similar to the following:

hostname cli> shownetwork
Netract network configuration is:
 
ethernet ports
ip_addr :192.168.207.130
ip_netmask : 0xffffff00
mac_address : 00:03:ba:13:c4:dc
 
ip_addr :192.168.13.22
ip_netmask : 0xffffff00
mac_address : 00:03:ba:13:c4:dd
hostname cli> 


Specifying Other FRU ID Information

You can use the FRU fields Location, Cust_Data, and User_Label to enter any customer-specific information about your system. These are optional entries; by default, there is no information stored in these fields. Information entered in the Location field is displayed through the MOH application.

You might want to use the Location FRU field to enter specific, physical location information for your system. For example, you might enter the number on the chassis label, to indicate the location of the system.


To Specify Other FRU ID Information

1. Log in to the active distributed management card.

2. Specify other FRU ID information for the Netra CT server:

hostname cli> setfru fru_target fru_instance fru_field value

Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set the location information to reflect a chassis label that reads 12345-10-20, enter the following:

hostname cli> setfru midplane 1 Location 12345-10-20

3. Power the system completely off and on: locate the power switch at the rear of the Netra CT 820 server, press it to the Off (O) position, then press it to the On (|) position.


Configuring the Node Boards

Verify that you can log in to the node boards, and perform any Solaris configuration needed for your environment, such as modifying OpenBoot PROM variables. Refer to the Solaris documentation, the OpenBoot PROM documentation, or the specific node board documentation if you need additional information. Chapter 3 contains additional information on node boards.
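
For example, assuming you want to review or change the OpenBoot PROM boot-device variable from the Solaris environment on a node board, you could use the eeprom command. The output and device alias shown are placeholders:

# eeprom boot-device
boot-device=disk net
# eeprom boot-device=disk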


Enabling the Managed Object Hierarchy Application

The Managed Object Hierarchy (MOH) is an application that runs on the distributed management card and the node boards. It monitors the field-replaceable units (FRUs) in your system.

Software Required

The MOH application requires the Solaris 8 2/02 or compatible operating environment, and additional Netra CT platform-specific Solaris patches that contain packages shown in TABLE 2-3.

TABLE 2-3 Solaris Packages for the MOH Application

Package

Description

SUNW2jdrt

Java™ runtime Java Dynamic Management Kit (JDMK) package

SUNWctmgx

Netra CT management agent package

SUNWctac

Distributed management card firmware package that includes the Netra CT management agent


Download Solaris patch updates from the web site: http://www.sunsolve.sun.com. (For current patch information, refer to the Netra CT Server Installation Guide.)

Install the patch updates using the patchadd command. After these packages are installed, they reside in the default installation directory, /opt/SUNWnetract/mgmt3.0/.
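
For example, assuming a patch has been downloaded and unpacked into /var/spool/patch (the directory and patch ID shown are placeholders), you could install it as follows:

# cd /var/spool/patch
# patchadd 111111-01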

The MOH application is always started on the distributed management card; to start it in the Solaris operating environment on the node boards, a configuration file is required.

The MOH Configuration File

The MOH application requires a configuration file that contains a Simple Network Management Protocol (SNMP) access control list (ACL). The file lists:

The format of this file is specified in the JDMK documentation. An ACL file template that is part of the JDMK package is installed by default in
/opt/SUNWjdmk/jdmk4.2/etc/conf/template.acl.
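
For example, one way to create your own ACL file is to copy the template to /opt/SUNWjdmk/jdmk4.2/etc/conf/jdmk.acl (the file MOH uses by default if no ACL file is specified on the command line, as described later in this section) and then edit the copy:

# cp /opt/SUNWjdmk/jdmk4.2/etc/conf/template.acl /opt/SUNWjdmk/jdmk4.2/etc/conf/jdmk.acl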

An example of a configuration file is:

acl = {
 {
 communities = trees
 access = read-only
 managers = oak, elm
 }
 {
 communities = birds
 access = read-write
 managers = robin
 } 
} 
 
trap = {
  {
  trap-community = lakes
  hosts = michigan, mead
  }
}

In this example, oak, elm, robin, michigan, and mead are hostnames. If this is the ACL file specified, when the MOH starts, a coldStart trap will be sent to michigan and mead. Management applications running on oak and elm can read (get) information from MOH, but they cannot write (set) information. Management applications running on robin can read (get) and write (set) information from MOH.

The ACL file can be stored anywhere on your system. When you start the MOH application and you want to use an ACL file you created, you specify the complete path to the file.

Refer to the JDMK documentation (http://www.sun.com/documentation) for more information on ACL file format.


To Enable the Managed Object Hierarchy on the Node Boards

1. Log in to the server.

2. Verify that the Solaris packages SUNW2jdrt, SUNWctmgx, and SUNWctac are installed:

# pkginfo -l SUNW2jdrt SUNWctmgx SUNWctac
...
PKGINST: SUNW2jdrt
...

3. Create a configuration file in the format of a JDMK ACL configuration file.

Refer to the section The MOH Configuration File for information on the configuration file and format.

4. As root, start the MOH application.

# cd /opt/SUNWnetract/mgmt3.0/bin
# ./ctmgx start [option]

If you installed the Solaris patches in a directory other than the default directory, specify that path instead.

Options that can be specified with ctmgx start when you start the MOH application include:

TABLE 2-4 ctmgx Options

Option

Description

-rmiport portnum

Specify the Remote Method Invocation (RMI) port number. The default is 1099.

-snmpport portnum

Specify the Simple Network Management Protocol (SNMP) port number. The default is 9161.

-snmpacl filename

Specify the SNMP ACL file to be used. The full path to filename must be specified.

-showversion

Print the system version number.


The MOH application starts and reads the configuration file using one of these methods, in this order:

a. If the command ctmgx start -snmpacl filename is used, MOH uses the specified file as the ACL file.

b. If the file /opt/SUNWjdmk/jdmk4.2/etc/conf/jdmk.acl exists, MOH uses that file as the ACL file when the command ctmgx start is used.

If the ACL cannot be determined after these steps, SNMP applications will have read-write access and MOH will send the coldStart trap to the local node only.
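
For example, to start MOH and point it at an ACL file you created (the ACL file path shown is a placeholder):

# cd /opt/SUNWnetract/mgmt3.0/bin
# ./ctmgx start -snmpacl /export/home/admin/moh.acl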

Once MOH is running, it interfaces with your SNMP or RMI application to discover network elements, monitor the system, and provide status messages.

Refer to the Netra CT Server Software Developer's Guide for information on writing applications to interface with the MOH application.


Enabling the Processor Management Service Application

The Processor Management Service (PMS) is a management application that provides support for high-availability services and applications. It provides both local and remote monitoring and control of a cluster of node boards. It monitors the health of node boards, takes recovery actions, and notifies partner nodes if so configured. It provides the state of the resources, such as hardware, operating system, and applications.

This section describes:

You use the distributed management card PMS CLI commands to control PMS services, such as fault detection/notification and fault recovery. The recovery administration is described in Using the PMS Application for Recovery and Control of Node Boards. You can also use the PMS API to configure partner lists (tables of distributed management card and node board information relating to connectivity and addressing; the distributed management card and the node boards in a partner list must be in the same system). Refer to the pms API man pages, installed by default in /opt/SUNWnetract/mgmt3.0/man, for more information on partner lists.


To Start or Stop the PMS Application on a Node Board

1. Log in as root to the server that has the Solaris patches installed (see Software Required).

2. Create a Solaris script to start, stop, and restart PMS, as follows:

#!/sbin/sh
# Start/stop/restart processes required for PMS
 
case "$1" in
'start')
	/opt/SUNWnetract/mgmt3.0/bin/pmsd start -e force_avail
	;;
'stop')
	/opt/SUNWnetract/mgmt3.0/bin/pmsd stop
	;;
'restart')
	/opt/SUNWnetract/mgmt3.0/bin/pmsd stop
	/opt/SUNWnetract/mgmt3.0/bin/pmsd start -e force_avail
	;;
*)
	echo "Usage: $0 {start | stop | restart }"
	exit 1
	;;
esac
exit 0

3. Save the script to a file.

4. Start, stop, or restart the PMS application by typing one of the following:

# sh filename start
# sh filename stop
# sh filename restart

where filename is the name of the file in which you saved the script.

You can also save this script in the /etc/rc* directory of your choice to have PMS automatically start at boot time.
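
For example, one way to have the script run at boot time is to copy it into a run-control directory; the directory and the S99 prefix shown are placeholders, so choose values appropriate for your configuration:

# cp filename /etc/rc3.d/S99pms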

Stopping and Restarting the PMS Daemon on the Distributed Management Card

The PMS daemon (pmsd) starts automatically on the distributed management card. However, you can manually stop and restart the PMS daemon on the distributed management card, specifying these optional parameters:

You specify the port number for pmsd using the parameter port_num.

You specify the state in which to start pmsd using the parameter server_admin_state. This parameter may be set to force_unavail (force pmsd to start in the unavailable state); force_avail (force pmsd to start in the available state); or vote_avail (start pmsd in the available state, but only if all conditions have been met to make it available; if all the conditions have not been met, pmsd will not become available).

You specify whether to reset persistent storage to the default values on the distributed management card using the -d option. Data in persistent storage remains across reboots or power on and off cycles. If you do not specify -d, pmsd is started using its existing persistent storage configuration; if you specify -d, the persistent storage configuration is reset to the defaults for pmsd. The -d option would typically be specified only to perform a bulk reset of persistent storage during initial system bring up or if corruption occurred.


To Manually Stop the Processor Management Service on the Distributed Management Card

1. Log in to the distributed management card.

2. Stop the PMS daemon with the stop command:

hostname cli> pmsd stop [-p port_num] 

where port_num is the port number of the currently running pmsd you want to stop. The default is port 10300.


To Manually Start the Processor Management Service on the Distributed Management Card

1. Log in to the distributed management card.

2. Start the PMS daemon with the start command:

hostname cli> pmsd start [-p port_num] [-e server_admin_state] [-d]

where port_num is the port number for pmsd to listen on, server_admin_state can be force_unavail, force_avail, or vote_avail, and -d resets the persistent storage to the defaults for pmsd.
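
For example, to start pmsd on the default port (10300) in the available state and reset its persistent storage to the defaults, an illustrative invocation is:

hostname cli> pmsd start -p 10300 -e force_avail -d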

Setting the IP Address for the Distributed Management Card to Control Node Boards in the Same System

The pmsd slotaddressset command is used to set the IP address by which the distributed management card controls and monitors a node board in a particular slot. The command establishes the connection between pmsd running on the distributed management card and pmsd running on a node board. The distributed management card and the node board must be in the same system.

You specify the slot number of the node board and the IP address to be configured. The default IP address for all slots is 0.0.0.0; therefore, control is initially disabled.


procedure icon  To Set the IP Address for the Distributed Management Card to Control Node Boards in the Same System

1. Log in to the distributed management card.

2. Set the IP address with the slotaddressset command:

hostname cli> pmsd slotaddressset -s slot_num -i ip_addr

where slot_num can be a slot number from 3 to 20, and ip_addr is the IP address to be configured.
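
For example, assuming the system uses the default system management network subnet (192.168.13), the following command configures the distributed management card to control the node board in slot 5 at its system management network address:

hostname cli> pmsd slotaddressset -s 5 -i 192.168.13.5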

Printing IP Address Information

The pmsd slotaddressshow -s slot_num|all command can be used to print IP address information for the specified slot or all slots. If the IP address information is not 0.0.0.0 for a given slot, PMS is configured to manage the node board in this slot using this IP address.
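
For example, to print the configured IP addresses for all slots:

hostname cli> pmsd slotaddressshow -s all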

Adding Address Information for a Local Node Board to Control Node Boards in Local or Remote Systems

You can use the PMS CLI application to enable local node boards to remotely monitor and control node boards in the same system or in other Netra CT systems. One use for this capability is in a high availability environment. For example, if a high availability application fails on a controlled node board, PMS notifies the controlling node board of the failure, and the controlling node board (through a customer application) notifies another controlled node board to start the same high availability application.

The pmsd slotrndaddressadd command is used to configure a local node board to control and monitor another node board by specifying the IP addresses and slot information for the node board to be controlled, using the parameters shown in TABLE 2-5.

TABLE 2-5 pmsd slotrndaddressadd Parameters

Parameter

Description

-s slot_num|all

Specifies the slot number of the node board that is being configured in the local system to monitor or control other local or remote node boards

-n ip_addr

Specifies the IP address of the node board in the local or remote system to be monitored or controlled by the local node board

-d ip_addr

Specifies the IP address of the distributed management card in the same local or remote system of the node board to be monitored or controlled by the local node board

-r slot_num

Specifies the slot number of the node board in the local or remote system to be monitored or controlled by the local node board


Each local node board can control and monitor 16 local or remote node boards. Each local node board being managed must have already had its IP address set using the pmsd slotaddressset command.


To Add Address Information for a Local Node Board to Control Node Boards in Local or Remote Systems

1. Log in to the distributed management card.

2. Add the address information with the slotrndaddressadd command:

hostname cli> pmsd slotrndaddressadd -s slot_num|all -n ip_addr -d ip_addr -r slot_num

where -s slot_num is the slot number in the same system of the local node board you want to use to control other local or remote node boards, and all specifies all slots containing node boards in the local system; -n ip_addr is the IP address of the node board to be controlled; -d ip_addr is the IP address of the active distributed management card in the system of the node board to be controlled; and -r slot_num is the slot number of the node board to be controlled.
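
For example, assuming the node board in local slot 3 is to control the node board in slot 5 of the same system, and the system uses the default system management network addressing (boards at 192.168.13.slot_number, the active distributed management card at 192.168.13.22), you could enter the following; substitute the addresses used in your configuration:

hostname cli> pmsd slotrndaddressadd -s 3 -n 192.168.13.5 -d 192.168.13.22 -r 5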

When you add address information with the slotrndaddressadd command, an index number is automatically assigned to the information. You can see index numbers by using the slotrndaddressshow command and use the index numbers to delete address information with the slotrndaddressdelete command, described below.

Deleting Address Information

The pmsd slotrndaddressdelete -s slot_num|all -i index_num|all command can be used to delete address information from the controlling node board. The -s slot_num|all parameter specifies whether the address information will be deleted on a single slot number or on all slots containing node boards in the local system. The -i index_num|all parameter specifies whether the address information will be deleted for a single address entry or for all address entries; index_num can be 1 to 16. Before using this command, it is advisable to print the current address information using the pmsd slotrndaddressshow command, so you know the index number to use.
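
For example, to review the entries configured for the node board in slot 3 and then delete the entry at index 1 (the slot and index shown are placeholders):

hostname cli> pmsd slotrndaddressshow -s 3 -i all
hostname cli> pmsd slotrndaddressdelete -s 3 -i 1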

Printing Address Information

The pmsd slotrndaddressshow -s slot_num|all -i index_num|all command can be used to print address information. The -s slot_num|all parameter specifies whether the address information will be printed for a single slot number or for all slots containing node boards in the local system. The -i index_num|all parameter specifies whether the address information will be printed for a single address entry or for all address entries; index_num can be 1 to 16.