CHAPTER 2

Configuring Your System
This chapter assumes you have already installed the Solaris Operating System and the required patches on your Netra CT 820 node boards.
You configure the Netra CT 820 system primarily through the active distributed management card command-line interface (CLI). The active distributed management card CLI enables system-level configuration, administration, and management that includes the node boards, the switching fabric boards, the distributed management cards, power supplies, and fan trays. The distributed management card CLI can be used both locally and remotely.
You configure the distributed management cards first, then the node boards, then the system-wide applications.
This chapter includes the following sections:
When you initially access either distributed management card, you must do so over the serial port (console), using an ASCII terminal or the Tip program. When you first access the distributed management card, log in with the default user account netract and the password suncli1. This account is set to full authorization (permissions). This account cannot be deleted; however, you should change its password for security purposes before your Netra CT 820 server is operational.
The following sections provide information on configuring the distributed management cards' Ethernet ports and setting up user accounts and passwords using the distributed management card CLI. For more information on using the distributed management card CLI, refer to Chapter 3.
Each distributed management card supports 22 sessions (Tip and Telnet connections) at once. The active distributed management card is identified by the prompt hostname (Active slot#) cli> and the standby distributed management card is identified by the prompt hostname (Standby slot#) cli>.
Each distributed management card has one external Ethernet port on the rear transition card, labeled SRVC LAN, and one internal Ethernet port. If you configure these ports, you can access the distributed management cards using a Telnet connection to the external Ethernet port or using a Telnet connection through the switching fabric board to the internal Ethernet port.
To configure the Ethernet ports, you must be logged in to the distributed management card with a user account that has full permissions. You configure the ports with CLI commands, and then reset the distributed management card for the changes to take effect. Use the following procedure for each distributed management card.
To Configure the Distributed Management Cards' Ethernet Ports
1. Log in to the distributed management card.
2. Set the IP mode for the Ethernet port, where port_num is 1 for the external Ethernet port or 2 for the internal Ethernet port. Choose the IP mode according to the services available in the network (rarp, config, or none). The default is none. (A sketch of assumed CLI commands for steps 2 through 6 follows step 6.)
If you set the IP mode to rarp, skip to Step 5.
3. Set the IP address of the distributed management card, where port_num is 1 for the external Ethernet port or 2 for the internal Ethernet port. The default is 0.0.0.0. This command is used only if the ipmode is set to config.
4. Set the IP netmask of the distributed management card, where port_num is 1 for the external Ethernet port or 2 for the internal Ethernet port. The default is 255.255.255.0. This command is used only if the ipmode is set to config.
5. Set the IP gateway of the distributed management card so that the system can be accessed from outside the subnet. The default is 0.0.0.0.
6. Reset the distributed management card:
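The following is a minimal sketch of steps 2 through 6 for the external Ethernet port (port 1), using the config IP mode and placeholder addresses. The command names and options shown (setipmode, setipaddr, setipnetmask, setipgateway, reset, and the -b port option) are assumptions based on the parameter descriptions above and are not confirmed in this chapter; see Chapter 3 for the exact CLI syntax.

hostname cli> setipmode -b 1 config
hostname cli> setipaddr -b 1 192.168.0.10
hostname cli> setipnetmask -b 1 255.255.255.0
hostname cli> setipgateway 192.168.0.1
hostname cli> reset dmc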
User accounts are set up using the distributed management card CLI. The default user account is netract and the password is suncli1. This account is set to full authorization (permissions). This account cannot be deleted; however, you should change its password for security purposes before your Netra CT 820 server is operational.
User information is entered on the active distributed management card, and immediately mirrored, or shared, on the standby distributed management card. The distributed management card supports 16 accounts with passwords.
To Set Up a User Account
1. Log in to the active distributed management card.
2. Add a user.

3. Add a password for that user:
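A minimal sketch of steps 2 and 3; the useradd and userpassword command names are assumptions, not confirmed in this chapter (the userperm command mentioned below is documented in CLI Commands):

hostname cli> useradd username
hostname cli> userpassword username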
By default, new accounts are created with read-only permission. Permission levels can be changed using the userperm command. Refer to CLI Commands for more information about permissions and the userperm command.
The username field has a maximum length of 16 characters. It must contain at least one lowercase alphabetic character, and the first character must be alphabetic.
Valid characters for username include:
Passwords have the following restrictions:
A field-replaceable unit (FRU) is a module or component that can typically be replaced in its entirety as part of a field service repair operation.
The Netra CT system FRUs include:
All FRUs except power supplies contain FRU ID (identification) information that includes FRU manufacturing and configuration data. This information can be displayed through the distributed management card CLI (see TABLE 2-2). The Netra CT 820 system supports two FRU ID formats:
In addition, you can enter certain FRU ID information through the active distributed management card CLI, which is stored in the midplane. Note that you can also enter FRU ID information through the MOH application; refer to the Netra CT Server Developer's Guide for instructions. FRU ID information includes:
Some of this information is used by the MOH application to audit board insertions and prevent misconfigurations, and to display information; some is used by the system management network.
The format of the information to be specified is:
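Based on the setfru examples later in this chapter, the general form of the command appears to be the following, where fru_name is a FRU such as midplane or slot, fru_instance is the logical instance (or slot number), and fru_property is one of the properties listed in TABLE 2-1:

hostname cli> setfru fru_name fru_instance fru_property value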
The FRU instance is a logical number; it matches the slot number only for the slot FRU name. The FRU property is case-insensitive.
TABLE 2-1 shows the FRU ID information that can be specified with the CLI setfru command.
SysmgtbusIPSubnet (midplane): Specify the IP subnet address for the system management network. The default is 0xc0a80d00 (192.168.13.0).

SysmgtbusSubnetMask (midplane): Specify the IP subnet mask for the system management network. The default is 0xffffffe0 (255.255.255.224).

Location: A description of the location (for example, the number on the chassis label) of the Netra CT system. This description is used in the MOH application. The text can be up to 80 characters in length.

Cust_Data: Any customer-supplied information. The text can be up to 10 characters in length.

User_Label (distributed management card): Any customer-supplied information. The text can be up to 80 characters in length. FRU instance 1 is the DMC in slot 1A; FRU instance 2 is the DMC in slot 1B.

Acceptable board types (slot): First, specify the chassis slot number to be configured (slots are numbered starting from the left). Second, specify the allowable plug-in board(s) for that slot, where the value is the vendor name and part number (separated by a colon) of the board. Use the showfru command to display this information. Multiple boards may be specified, separated by a semicolon (;). The default is to power on all Sun supported cPSB-only boards.

Acceptable board types, third party (slot): This information applies to third-party node boards only. First, specify the chassis slot number to be configured (slots are numbered starting from the left). Second, specify the value nonsun:picmg2.16, which indicates that a third-party node board is allowed in this slot.

Boot devices (slot): First, specify the chassis slot number to be configured (slots are numbered starting from the left). Second, specify the alias(es) listing the devices and/or full device path names the board in this slot will boot from. The boot_device_list can be up to 16 characters in length. When the board in this slot is powered up, this information overwrites the entry in the OpenBoot PROM boot-device NVRAM configuration variable. Specifying "" (the null string) defaults to the OpenBoot PROM NVRAM setting.

Boot_Mask (slot): First, specify the chassis slot number to be configured (slots are numbered starting from the left), or all to refer to all configurable slots. Second, specify whether the board in this slot is a boot server for the system. The default is false, which means that the board is not a boot server. Refer to Configuring a Node Board as a Boot Server for instructions on setting the boot mask for a slot.

Cust_Data (slot): First, specify the chassis slot number to be configured (slots are numbered starting from the left). Second, specify any customer-supplied information. The text can be up to 80 characters in length.
Changes to FRU ID fields through the CLI setfru command require you to completely power the system off and on for the changes to take effect. It is recommended that you enter all necessary FRU ID information, then power the system off and on.
FRU ID information entered during the manufacturing process and through the active distributed management card CLI setfru command can be displayed using the showfru command.
TABLE 2-2 shows the FRU ID information that can be displayed with the CLI showfru command. Use the FRU property to specify the information you want; the FRU property is case-insensitive.
To Display FRU ID Information
1. Log in to the distributed management card.
2. Display the FRU ID information with the showfru command. Refer to TABLE 2-2 for allowable information for each variable. For example, if you want to display the part number FRU ID information for fan tray 1, enter the following:
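A sketch of the command; the fantray FRU name and the Sun_Part_No property shown here are assumptions, so use the FRU names and properties listed in TABLE 2-2:

hostname cli> showfru fantray 1 Sun_Part_No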
Use the FRU target "slot" to display information for the node boards. For example, to display part number FRU ID information for a board in slot 8, enter the following:
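Assuming the same part number property name as above:

hostname cli> showfru slot 8 Sun_Part_No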
The next several sections describe the configurations you can set by entering FRU ID information.
You can specify the type of board that is allowed in a given chassis slot using the active distributed management card CLI. The slot usage information is used by the distributed management card software to audit board insertions and prevent misconfigurations. You can also specify the boot device for the slot, that is, the path to the device the board in the slot boots from. When the board is powered on, the FRU boot device information overwrites the entry in the OpenBoot PROM boot-device NVRAM configuration variable on that board. The chassis slot information can be changed at any time using the active distributed management card CLI.
By default, slots are configured to accept Sun supported cPSB-only board FRUs unless you specifically set an allowable plug-in for a specific slot. The exceptions are: for a Netra CT 820 server, the distributed management cards must be in slots 1A and 1B, and the switching fabric boards must be in slots 2 and 21.
To set allowable plug-ins for a particular slot, you need the vendor name and the part number of the board. This FRU ID information can be displayed using the CLI showfru command. See Displaying Netra CT Server FRU ID Information for more information.
To Configure a Chassis Slot for a Board
1. Log in to the active distributed management card.
2. Set the acceptable FRUs for the slot:
Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set chassis slot 5 to allow only a Sun Microsystems (vendor 003E) particular CPU board (part number 595-5769-03), enter the following:
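A sketch of the command, assuming a FRU property named Acceptable_Fru_Types (the property name is an assumption; use the property listed in TABLE 2-1):

hostname cli> setfru slot 5 Acceptable_Fru_Types 003E:595-5769-03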
Multiple boards can be specified for one slot. Separate the boards with a semi-colon. You can also use the asterisk (*) as a wild card in the part number to allow multiple boards. For example, if you want to set chassis slot 4 to allow only boards from three particular vendors, with multiple board part numbers from one vendor, enter the following:
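Assuming the same property name, with hypothetical vendor and part numbers chosen only to illustrate the semicolon separator and the wildcard:

hostname cli> setfru slot 4 Acceptable_Fru_Types 003E:595-5769-*;0042:1234-5678-01;0077:8888-9999-02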
3. Set the boot device for the slot:
Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set chassis slot 5 to boot from a device on the network, enter the following:
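A sketch of the command, assuming a FRU property named Boot_Devices (the property name is an assumption) and the net boot alias:

hostname cli> setfru slot 5 Boot_Devices net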
where boot_device_list is the alias or aliases specifying the boot devices, for example, disk net. The boot_device_list is limited to 25 bytes.
4. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.
You can configure a node board (Sun supported cPSB-only boards) to be a boot server for the Netra CT 820 system. To do this, you use the Boot_Mask field in the midplane FRU ID. When the system is powered on, the distributed management card looks at the Boot_Mask field; if a boot server has been specified, the distributed management card powers on that node board first. There can be any number of boot servers per Netra CT 820 system. If multiple boot servers are specified, all boot servers are powered on simultaneously.
To Configure a Node Board as a Boot Server
1. Log in to the active distributed management card.
2. Specify which slot contains a node board boot server by setting the Boot_Mask:
Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to specify chassis slot 3 as a node board boot server, enter the following:
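A sketch of the command, following the setfru format used elsewhere in this chapter:

hostname cli> setfru slot 3 Boot_Mask true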
To specify all slots (3 to 20) as boot servers, enter the following:
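Assuming the same syntax with the all keyword:

hostname cli> setfru slot all Boot_Mask true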
To clear all slots (3 to 20) as boot servers, enter the following:
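And, under the same assumptions, to clear the setting:

hostname cli> setfru slot all Boot_Mask false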
3. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.
The system management network provides a communication channel over the midplane. It is used to communicate between the distributed management cards, the node boards, and the switching fabric boards. FIGURE 2-1 shows the physical connections between boards over the midplane in the Netra CT 820 system.
The network appears as any other generic Ethernet port in the Solaris Operating System, and is configured by default on Solaris OS and on the distributed management cards. The system management network is used by the applications and features, such as MOH, PMS, and console connections from the distributed management cards to node boards.
The system management network consists of two virtual local area networks (VLANs) running over the two internal Ethernet interfaces, and a logical Carrier Grade Transport Protocol (CGTP) interface. The two VLANs and the logical CGTP interface allow distributed management card and switching fabric board redundancy. FIGURE 2-2 shows the VLAN traffic over the physical connectivity shown in FIGURE 2-1.
On each node board, internal Ethernet ports dmfe33000 (VLAN tag 33) and dmfe44001 (VLAN tag 44) use CGTP to provide redundancy in case of failure of one of the ports or a failure of the switching fabric board connected to one of the ports. The interfaces are configured on each node board using a Solaris startup script. To verify that CGTP is installed on each node board, use the pkginfo command:
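For example, the following check assumes the CGTP package name or description contains the string cgtp; the actual package name may differ:

# pkginfo | grep -i cgtp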
System management network traffic on the VLANs must always be contained within the chassis. Do not use VLAN tags 33 or 44 for any other VLANs in your network.
The IP address of the system management network on the top (slot 1A) distributed management card is always the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.22; for the bottom (slot 1B) distributed management card it is always the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.23. The IP alias address for the system management network on the active distributed management card is the midplane FRU ID field SysmgtbusIPSubnet value IP_subnet_address.25. The IP alias address (the CGTP interface) provides packet redundancy.
The IP address of the system management network on the node boards is formed as follows. The midplane FRU ID field SysmgtbusIPSubnet contains the value IP_subnet_address.slot_number. The default IP subnet address is 0xc0a80d00 (192.168.13.0) and the default IP subnet mask is 0xffffffe0 (255.255.255.224). When you power on the Netra CT server, and if you have not made any changes for the system management network in the midplane FRU ID, the IP address of a board installed in slot 3 is configured to 192.168.13.3; if you then move that board to slot 4, the IP address for that board is configured to 192.168.13.4.
TABLE 2-3 shows the system management network interfaces with the IP address defaults for the distributed management cards and a node board in slot 4.
To Configure the System Management Network
1. Log in to the active distributed management card.
2. Set the FRU ID for the system management network:
Refer to TABLE 2-1 for allowable information for each variable. You must set both the system management network IP subnet address and the subnet mask in hexadecimal format. For example, to set the subnet address to 192.168.16.00 and the subnet mask to 255.255.255.224, enter the following:
hostname cli> setfru midplane 1 SysmgtbusIPSubnet c0a81000
hostname cli> setfru midplane 1 SysmgtbusSubnetMask ffffffe0
3. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.
After you boot the Solaris OS, you can check to see that the system management network has been configured by using the ifconfig -a command. You should see output for the dmfe33000 interface (VLAN 1), the dmfe44001 interface (VLAN 2), and the cgtp1 interface similar to the following:
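A sketch of the kind of output to look for on a board in slot 4, showing only the cgtp1 interface; the dmfe33000 and dmfe44001 interfaces appear in the same format with the addresses listed in TABLE 2-3, and flag and index details vary:

# ifconfig -a
...
cgtp1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500
        inet 192.168.13.4 netmask ffffffe0 broadcast 192.168.13.31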
To test for actual communication, use the ping -s command. You should see output similar to the following:
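For example, from a node board you might ping the active distributed management card's system management alias address (192.168.13.25 by default); the timings shown are illustrative:

# ping -s 192.168.13.25
PING 192.168.13.25: 56 data bytes
64 bytes from 192.168.13.25: icmp_seq=0. time=1. ms
64 bytes from 192.168.13.25: icmp_seq=1. time=0. ms
^C
----192.168.13.25 PING Statistics----
2 packets transmitted, 2 packets received, 0% packet loss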
The cgtp1 interface should be plumbed and have a valid IP address assigned to it.
Note - This is a required interface. Never unplumb or unconfigure the system management network.
After you configure the system management network, you can check to see that it has been configured by using the CLI shownetwork command. You should see output similar to the following:
You can use the FRU properties Location, Cust_Data, and User_Label to enter any customer-specific information about your system. These are optional entries; by default, there is no information stored in these fields. Information entered for the Location property is displayed through the MOH application.
You might want to use the Location FRU property to enter specific, physical location information for your system. For example, you might enter the number on the chassis label to indicate the location of the system.
To Specify Other FRU ID Information
1. Log in to the active distributed management card.
2. Specify other FRU ID information for the Netra CT server:
Refer to TABLE 2-1 for allowable information for each variable. For example, if you want to set the location information to reflect a chassis label that reads 12345-10-20, enter the following:
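A sketch of the command, assuming the Location property is set on the midplane FRU (instance 1), as in the other midplane examples in this chapter:

hostname cli> setfru midplane 1 Location 12345-10-20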
3. Completely power off and on the system by locating the power switch at the rear of the Netra CT 820 server; press it to the Off (O) position, then press it to the On (|) position.
The Netra CT 820 server provides distributed management card failover from the active distributed management card to the standby distributed management card for certain hardware and software events.
Failover includes moving all services provided by the active distributed management card to the standby distributed management card, which then becomes the newly active card.
This section describes the distributed management cards' failover and redundancy capabilities and provides procedures to:
When you use CLI commands to enter information and set variables on the active distributed management card, this data is immediately mirrored on the standby distributed management card so that it is ready for a failover. Mirrored information includes user names, passwords, and permissions; configuration information, such as ntp server, alias IP address, SNMP interface, and MOH security information; and failover information.
If a failover occurs, services are started on the newly active card that are normally not running on the standby distributed management card, such as the PMS application. The failover is complete when the newly active distributed management card can provide services for CLI, MOH, and PMS.
Certain events always cause a failover. Other events cause a failover only if the setfailover mode is set to on (by default, failover mode is off). Failover mode can be turned on using the distributed management card CLI, a remote shell (rsh command), or MOH. Refer to the Netra CT 820 Server Software Developer's Guide for instructions on setting failover mode through MOH.
Each distributed management card monitors itself (local monitoring) and the other distributed management card (remote monitoring). During this monitoring, a problem in any of the areas being monitored on the active distributed management card could cause a failover.
A failover always occurs with:
A failover occurs with any of the following events if the setfailover mode is set to on:
FIGURE 2-3 shows the internal hardware signals and interfaces that support distributed management card failover, and TABLE 2-4 describes the signals and interfaces.
The external alarm port on the active distributed management card is not failed over to the standby distributed management card on a failover event. Refer to the Netra CT 820 Server Installation Guide for information on connecting alarm ports, and to the Netra CT 820 Server Software Developer's Guide for information on reloading the alarm severity profile on the newly active distributed management card.
TABLE 2-5 shows the relationship of services and IP addresses to failover. When specifying the alias IP address for an Ethernet port, use the alias IP address for the port you configured, that is, the external or the internal Ethernet port.
Signs of a failover from an active to a standby distributed management card include:
If recovery is enabled (CLI setdmcrecovery on command), the active distributed management card tries a hard reset on the failed distributed management card. If the reset succeeds, the reset distributed management card comes online as the standby distributed management card. The reset is tried three times. An unsuccessful reset after the third try may indicate a serious hardware problem. By default, recovery mode is off.
To Enable Failover Using the CLI
1. Log in to the active distributed management card.
2. Set the failover mode to on:
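A sketch of the command, based on the setfailover mode described earlier in this section:

hostname cli> setfailover on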
To Enable Recovery
1. Log in to the active distributed management card.
2. Set the recovery mode to on:
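For example, using the setdmcrecovery command described earlier in this section:

hostname cli> setdmcrecovery on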
You can configure the active distributed management card to fail over to the standby distributed management card if its external Ethernet port fails by using the CLI setetherfailover command.
To Configure the Active Distributed Management Card to Fail Over if Its External Ethernet Port Fails
1. Log in to the active distributed management card.
2. Verify that the failover mode is on:
3. To enable failover if the external Ethernet interface fails, enter the following:
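A sketch using the setetherfailover command named above; the -b 1 enable arguments are assumptions, not confirmed in this chapter:

hostname cli> setetherfailover -b 1 enable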
You can configure an alias IP address for each Ethernet port on the active distributed management card. Using an alias IP address allows you to stay connected to whichever card is the active distributed management card in the event of a failover. The alias IP address must be in the same subnet as the IP address configured for that port. If you do not configure an alias IP address, you must connect to the static IP address.
Note - For the alias IP addresses to take effect, you must either reset the active distributed management card or force a failover.
To Configure Alias IP Addresses for the Ethernet Ports
1. Log in to the active distributed management card.
2. Verify that the failover mode is on:
3. Configure an alias IP address for the internal Ethernet port (a sketch covering steps 3 and 4 follows step 4),
where port number 2 indicates the internal Ethernet port to the switching fabric board, and addr is the alias IP address and alias IP netmask for that port.
4. To configure an alias IP address for the external Ethernet port, enter the following:
where port number 1 indicates the external Ethernet port to the network, and addr is the alias IP address and alias IP netmask for that port.
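A sketch covering steps 3 and 4. The setipalias command name and argument order are assumptions, not confirmed in this chapter, and the addresses are placeholders in the default system management subnet and an example external subnet:

hostname cli> setipalias -b 2 192.168.13.25 255.255.255.224
hostname cli> setipalias -b 1 192.168.0.25 255.255.255.0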
5. Reset the active distributed management card or force a failover.
The distributed management card does not support a battery-backed time-of-day clock, because battery life cannot be monitored to predict end of life and system clocks tend to drift. To provide a consistent system time, set the date and time on the distributed management card using one of these methods: set it manually, or set it as a Network Time Protocol (NTP) client.
You can also set the time zone on the distributed management card. Daylight savings time is not supported.
To Set the Distributed Management Card Date and Time Manually
1. Log in to the distributed management card.
2. Set the date and time manually:
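A sketch of the command, assuming it is named setdate and takes a single mmddHHMMccyy.ss string as described below; for example, 6:30:00 p.m. on September 15, 2004:

hostname cli> setdate 091518302004.00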
where mm is the current month; dd is the current day of the month; HH is the current hour of the day; MM is the current minutes past the hour; cc is the current century minus one; yy is the current year; and .ss is the current second (optional).
Set the date and time on both distributed management cards.
To Set the Distributed Management Card Date and Time as an NTP Client (and optionally, as an NTP Server)
1. Log in to the active distributed management card.
2. Set the distributed management card date and time as an NTP client:
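A sketch of the command; the setntpserver command name is an assumption, not confirmed in this chapter:

hostname cli> setntpserver addr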
where addr is the IP address of the NTP server. This information is synchronized between the two distributed management cards.
3. Reset the active distributed management card.
You can now configure the node boards to use the distributed management card as an NTP server, if desired. The recommended NTP client configuration is to use the NTP servers on both distributed management cards, with their respective system management network IP addresses (by default, 192.168.13.22 and 192.168.13.23).
To Set the Time Zone on the Distributed Management Card
1. Log in to the distributed management card.
2. Set the time zone with the settimezone command:
where time_zone is a valid three-character time zone, + or - indicates whether the time zone is west (+) or east (-) of Greenwich Mean Time (GMT), and offset is the number of hours (and optionally, minutes and seconds) the time zone is west or east of GMT.
The offset has the form of hh[:mm[:ss]]. The minutes (mm) and seconds (ss) are optional. The hour (hh) is required and may be a single digit. The hour must be between 0 and 24, and the minutes and seconds (if present) between 0 and 59.
For example, to set the local time zone to Pacific Standard Time, enter the following:
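Pacific Standard Time is 8 hours west of GMT, so following the format described above:

hostname cli> settimezone PST+8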
Daylight savings time is not supported.
3. Reset the active distributed management card.
Verify that you can log in to the node boards. Complete any Solaris configuration needed for your environment, such as modifying OpenBoot PROM variables. Refer to the Solaris documentation, the OpenBoot PROM documentation, or to the specific node board documentation if you need additional information. Chapter 3 contains additional information on node boards.
The Managed Object Hierarchy (MOH) is an application that runs on the distributed management cards and the node boards. It monitors the field-replaceable units (FRUs) in your system.
The MOH application requires the Solaris 8 2/02 or compatible operating system, and additional Netra CT platform-specific Solaris patches that contain packages shown in TABLE 2-6.
These packages include the distributed management card firmware package, which contains the Netra CT management agent.
Download Solaris patch updates from the web site: http://www.sunsolve.sun.com. For current patch information, refer to the Netra CT Server Release Notes.
Install the patch updates using the patchadd command. After these packages are installed, they reside in the default installation directory, /opt/SUNWnetract/mgmt3.0/. To verify the packages are installed, use the pkginfo command:
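For example, where package_name is one of the package names listed in TABLE 2-6:

# pkginfo package_name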
Once the MOH application is running, MOH agents on the distributed management cards and on node boards interface with your Simple Network Management Protocol (SNMP) or Remote Method Invocation (RMI) application to discover network elements, monitor the system, and provide status messages.
Refer to the Netra CT Server Software Developer's Guide for information on writing applications to interface with the MOH application.
The MOH application is started automatically on the distributed management cards.
You must start the MOH application as root on the node boards using the ctmgx start command:
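A sketch of the command, assuming the default installation directory with a bin/ subdirectory (the subdirectory is an assumption):

# /opt/SUNWnetract/mgmt3.0/bin/ctmgx start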
If you installed the Solaris patches in a directory other than the default directory, specify that path instead.
TABLE 2-7 lists the options that can be specified with ctmgx start when you start the MOH application.
-snmpacl filename: Specify the SNMP ACL file to be used. The full path to filename must be specified.
By default, SNMP and RMI applications have read-write access to MOH agents on the distributed management cards and on node boards. The following sections describe how to configure MOH to control SNMP and RMI access on the distributed management cards and on node boards.
By default, SNMP applications have read-write access to the Netra CT 820 server MOH agents. If you want to control which applications communicate with the MOH agents, you must configure the distributed management card and node board SNMP interfaces. This configuration provides additional security by controlling who has access to the agent.
The SNMP interface uses an SNMP access control list (ACL) to control:
An SNMP community is a group of IP addresses of devices supporting SNMP. It helps define where information is sent. The community name identifies the group. An SNMP device or agent may belong to more than one SNMP community. An SNMP device or agent does not respond to requests originating from IP addresses that do not belong to one of its communities.
If a distributed management card failover occurs, an SNMP application responds as follows:
Using the alias IP address for the Ethernet port is a simpler model to manage than using the static IP addresses for both Ethernet ports; it also ensures the failover is transparent.
On the active distributed management card, you enter ACL information using the CLI snmpconfig command. A limit of 20 communities can be specified. For each community, a limit of 5 IP addresses can be specified. The ACL information is stored in the distributed management card flash memory.
To Configure the Distributed Management Card SNMP Interface
1. Log in to the active distributed management card.
2. Enter SNMP ACL information with the snmpconfig command:
where community is the name of a group that the MOH agent on the distributed management card supports, and ip_addr is the IP address of a device supporting an SNMP management application. For example, to add read-only access (the default) for the community trees, to add read-write access for the community birds, and to add a trap for the community lakes, enter the following:
hostname cli> snmpconfig add access trees ip_addr ip_addr ip_addr
hostname cli> snmpconfig add access birds readwrite ip_addr
hostname cli> snmpconfig add trap lakes ip_addr
3. Reset the active distributed management card.
You can use the snmpconfig command to show or delete existing ACL information. For example, to show the ACL access and trap information entered in Step 2 above, enter the following:
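A sketch of the commands, assuming the show subcommand mirrors the add syntax used in Step 2 and accepts an all argument (both are assumptions):

hostname cli> snmpconfig show access all
hostname cli> snmpconfig show trap all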
On node boards, ACL information is stored in a configuration file in the Solaris OS.
The format of this file is specified in the JDMK documentation. An ACL file template that is part of the JDMK package is installed by default in
/opt/SUNWjdmk/jdmk4.2/1.2/etc/conf/template.acl.
An example of a configuration file is:
acl = {
    {
        communities = trees
        access = read-only
        managers = oak, elm
    }
    {
        communities = birds
        access = read-write
        managers = robin
    }
}

trap = {
    {
        trap-community = lakes
        hosts = michigan, mead
    }
}
In this example, oak, elm, robin, michigan, and mead are hostnames. If this is the ACL file specified, when the MOH starts, a coldStart trap is sent to michigan and mead. Management applications running on oak and elm can read (get) information from MOH, but they cannot write (set) information. Management applications running on robin can read (get) and write (set) information from MOH.
The ACL file can be stored anywhere on your system. When you start the MOH application and you want to use an ACL file you created, you specify the complete path to the file.
Refer to the JDMK documentation (http://www.sun.com/documentation) for more information on ACL file format.
To Configure a Node Board SNMP Interface
1. Log in to the node board.

2. Create a configuration file in the format of a JDMK ACL configuration file.
3. As root, start the MOH application.
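A sketch of the command, assuming the default installation directory with a bin/ subdirectory and an ACL file saved at the placeholder path /etc/opt/myapp.acl:

# /opt/SUNWnetract/mgmt3.0/bin/ctmgx start -snmpacl /etc/opt/myapp.acl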
If you installed the Solaris patches in a directory other than the default directory, specify that path instead.
The MOH application starts and reads the configuration file using one of these methods, in this order:
a. If the command ctmgx start -snmpacl filename is used, MOH uses the specified file as the ACL file.
b. If the file /opt/SUNWjdmk/jdmk4.2/1.2/etc/conf/jdmk.acl exists, MOH uses that file as the ACL file when the command ctmgx start is used.
If the ACL cannot be determined after these steps, SNMP applications have read-write access and MOH sends the coldStart trap to the local host only.
By default, RMI applications have read-write access to the Netra CT 820 server MOH agents. If you want to control which applications communicate with the MOH agents, you must configure the distributed management card interfaces for RMI. This configuration provides additional security by authenticating who has access to the agent.
To authenticate which RMI applications can access the MOH agents on the distributed management card, the following configuration is needed:
If MOH security for RMI was enabled but becomes disabled on the distributed management card (for example, if the distributed management card is being reset or hot-swapped), security is disabled, a security exception occurs, and no access is given.
If a distributed management card failover occurs, an RMI application responds as follows:
To Configure the Distributed Management Card RMI Interface
1. Verify that RMI programs you want to access the distributed management card MOH agent contain a valid distributed management card user name and password, with appropriate permissions.
2. Log in to the active distributed management card.
3. Set the setmohsecurity option to true:
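For example, using the setmohsecurity option named above:

hostname cli> setmohsecurity true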
4. Reset the active distributed management card.
The RMI authentication takes effect immediately. Any modification to the distributed management card user names and passwords also takes effect immediately.
The Processor Management Service (PMS) is a management application that provides support for high-availability services and applications. It provides both local and remote monitoring and control of a cluster of node boards. It monitors the health of node boards, takes recovery actions, and notifies partner nodes if so configured. It provides the state of the resources, such as hardware, operating system, and applications.
You use the distributed management card PMS CLI commands to configure PMS services, such as fault detection/notification, and fault recovery. The recovery administration is described in Using the PMS Application for Recovery and Control of Node Boards.
You can also use the PMS API to configure partner lists. Partner lists are tables of distributed management card and node board information relating to connectivity and addressing. Refer to the pms API man pages, installed by default in /opt/SUNWnetract/mgmt3.0/man, for more information on partner lists.
Note that the PMS daemon runs only on the active distributed management card. Because of this, you cannot use static IP addresses for the distributed management card with the PMS application. You must use alias IP addresses so that PMS daemons continue to run on the active distributed management card in case of a failover.
To Start or Stop the PMS Application on a Node Board
1. Log in as root to the server that has the Solaris patches installed (see Software Required).
2. Create a Solaris script to start, stop, and restart PMS, as follows:
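A minimal sketch of such a script, assuming pmsd is installed in /opt/SUNWnetract/mgmt3.0/bin (the path is an assumption; see the pmsd options described later in this chapter for start arguments you may want to add):

#!/bin/sh
# Start, stop, or restart the PMS daemon on this node board.
PMSD=/opt/SUNWnetract/mgmt3.0/bin/pmsd

case "$1" in
start)
        $PMSD start
        ;;
stop)
        $PMSD stop
        ;;
restart)
        $PMSD stop
        $PMSD start
        ;;
*)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac
exit 0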
3. Save the script to a file.

4. Start, stop, or restart the PMS application by typing one of the following:
where filename is the name of the file in which you saved the script.
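For example, if you saved the script as /etc/init.d/pms_ctrl (a placeholder name):

# sh /etc/init.d/pms_ctrl start
# sh /etc/init.d/pms_ctrl stop
# sh /etc/init.d/pms_ctrl restart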
You can also save this script in the /etc/rc* directory of your choice to have PMS automatically start at boot time.
The PMS daemon (pmsd) starts automatically on the active distributed management card. However, you can manually stop and restart the PMS daemon on the active distributed management card.
These optional parameters can be specified:
You specify the port number for pmsd using the parameter port_num.
You specify the state in which to start pmsd using the parameter server_admin_state. This parameter may be set to force_unavail (force pmsd to start in the unavailable state); force_avail (force pmsd to start in the available state); or vote_avail (start pmsd in the available state, but only if all conditions have been met to make it available; if all the conditions have not been met, pmsd will not become available).
You specify whether to reset persistent storage to the default values on the distributed management card using the -d option. Data in persistent storage remains across reboots or power on and off cycles. If you do not specify -d, pmsd is started using its existing persistent storage configuration; if you specify -d, the persistent storage configuration is reset to the defaults for pmsd. The -d option would typically be specified only to perform a bulk reset of persistent storage during initial system bring up or if corruption occurred.
To Manually Stop the Processor Management Service on the Distributed Management Card
1. Log in to the active distributed management card.
2. Stop the PMS daemon with the stop command:
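A sketch of the command; the -p option letter is an assumption (the default port is 10300):

hostname cli> pmsd stop -p 10300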
where port_num is the port number of the currently running pmsd you want to stop. The default is port 10300. Note that stopping PMS on the active distributed management card forces a failover if the setfailover mode is set to on.
To Manually Start the Processor Management Service on the Distributed Management Card
1. Log in to the active distributed management card.
2. Start the PMS daemon with the start command:
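A sketch of the command; the -p and -e option letters are assumptions, while the -d option is described below:

hostname cli> pmsd start -p 10300 -e force_avail -d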
where port_num is the port number for pmsd to listen on, server_admin_state can be force_unavail, force_avail, or vote_avail, and -d resets the persistent storage to the defaults for pmsd.
The pmsd slotaddressset command is used to set the IP address by which the distributed management card controls and monitors a node board in a particular slot. The command establishes the connection between pmsd running on the distributed management card and pmsd running on a node board. The distributed management card and the node board must be in the same system.
You specify the slot number of the node board and the IP address to be configured. The default IP address for all slots is 0.0.0.0. Therefore, control is initially disabled.
To Set the IP Address for the Distributed Management Card to Control Node Boards in the Same System
1. Log in to the active distributed management card.
2. Set the IP address with the slotaddressset command:
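A sketch of the command for a node board in slot 4 using the default system management subnet; the -i option letter is an assumption:

hostname cli> pmsd slotaddressset -s 4 -i 192.168.13.4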
where slot_num can be a slot number from 3 to 20, and ip_addr is the IP address to be configured.
The pmsd slotaddressshow -s slot_num|all command can be used to print IP address information for the specified slot or all slots. If the IP address information is not 0.0.0.0 for a given slot, PMS is configured to manage the node board in this slot using this IP address.
You can use the PMS CLI application to enable local node boards to remotely monitor and control node boards in the same system or in other Netra CT systems. One use for this capability is in a high availability environment. For example, if a high availability application fails on a controlled node board, PMS notifies the controlling node board of the failure, and the controlling node board (through a customer application) notifies another controlled node board to start the same high availability application.
The pmsd slotrndaddressadd command is used to configure a local node board to control and monitor another node board by specifying the IP addresses and slot information for the node board to be controlled, using the parameters shown in TABLE 2-8.
Each local node board can control and monitor 16 local or remote node boards. Each local node board being managed must have already had its IP address set using the pmsd slotaddressset command.
To Add Address Information for a Local Node Board to Control Node Boards in Local or Remote Systems
1. Log in to the active distributed management card.
2. Add the address information with the slotrndaddressadd command:
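For example, to have the node board in local slot 4 control the node board in slot 5 of the same system, with addresses taken from the default system management subnet and alias:

hostname cli> pmsd slotrndaddressadd -s 4 -n 192.168.13.5 -d 192.168.13.25 -r 5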
where:
-s slot_num is the slot number, in the same system, of the local node board you want to use to control other local or remote node boards; all specifies all slots containing node boards in the local system.
-n ip_addr is the IP address of the node board to be controlled.
-d ip_addr is either the alias IP address of the system management interface, if the active distributed management card is in the same system as the node board to be controlled, or the alias IP address of the external Ethernet port of the active distributed management card, if that card is in a remote system.
-r slot_num is the slot number of the node board to be controlled.
When you add address information with the slotrndaddressadd command, an index number is automatically assigned to the information. You can see index numbers by using the slotrndaddressshow command and use the index numbers to delete address information with the slotrndaddressdelete command.
The pmsd slotrndaddressdelete -s slot_num|all -i index_num|all command can be used to delete address information from the controlling node board. The -s slot_num|all parameter specifies whether the address information is deleted on a single slot number or on all slots containing node boards in the local system. The -i index_num|all parameter specifies whether the address information will be deleted for a single address entry or for all address entries; index_num can be 1 to 16. Before using this command, it is advisable to print the current address information using the pmsd slotrndaddressshow command, so you know the index number to use.
The pmsd slotrndaddressshow -s slot_num|all -i index_num|all command can be used to print address information. The -s slot_num|all parameter specifies whether the address information is printed for a single slot number or for all slots containing node boards in the local system. The -i index_num|all parameter specifies whether the address information is printed for a single address entry or for all address entries; index_num can be 1 to 16.
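For example, to print all address entries for the node board in slot 4:

hostname cli> pmsd slotrndaddressshow -s 4 -i all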
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.