CHAPTER 2

Sun StorEdge QFS Initial Installation Procedure

This chapter describes the procedure for installing and configuring Sun StorEdge QFS standalone software for the first time. Use this procedure if this is the initial installation of the Sun StorEdge QFS standalone software package at your site. If you are upgrading Sun StorEdge QFS software on an existing server, see the Sun StorEdge QFS Upgrade Procedure.

This chapter explains how to obtain the software packages, install them on your server or node, and configure the software to match the hardware at your site.

You can install and configure your Sun StorEdge QFS file system entirely using Solaris Operating System (OS) commands, or you can use a combination of commands and the SAM-QFS Manager, which is a graphical user interface (GUI) configuration tool, to complete the procedure.

You must be logged in as superuser to complete most of the procedures in this chapter.


Ensuring That the Installation Prerequisites Are Met

The chapter titled System Requirements and Preinstallation Tasks describes the items you need to verify before you install and configure the Sun StorEdge QFS software. If you have not yet completed the system verification steps, complete them now before you proceed. The steps described in that chapter for verifying the system requirements and performing preinstallation tasks are as follows:


Adding the Packages on the Sun StorEdge QFS Server

The Sun StorEdge QFS software uses the Sun Solaris packaging utilities for adding and deleting software. The pkgadd(1M) utility prompts you to confirm various actions necessary to install the packages.


procedure icon  To Add the Packages

1. Become superuser.

2. Use the cd(1) command to change to the directory where the software package release files reside.

When you completed your preinstallation tasks, you obtained the release files as described in Obtaining the Release Files. The directory that contains the release files differs depending on your release media, as follows:

3. Use the pkgadd(1M) command to add the SUNWqfsr and SUNWqfsu packages.

For example:

# pkgadd -d . SUNWqfsr SUNWqfsu

4. Enter yes or y as the answer to each of the questions.

When you install SUNWqfsr and SUNWqfsu, you are asked if you want to define an administrator group. Select y to accept the default (no administrator group) or select n if you want to define an administrator group. You can reset permissions on certain commands later by using the set_admin(1M) command. For more information on this command, see the set_admin(1M) man page.

5. (Optional) Use the pkgadd(1M) command to add one or more localized packages.

Perform this step only if you want to install the packages localized for Chinese, French, or Japanese. CODE EXAMPLE 2-1 shows the commands to use to install the localized packages.

CODE EXAMPLE 2-1 Using the pkgadd(1M) Command to Install Localized Packages
# pkgadd -d . SUNWcqfs
# pkgadd -d . SUNWfqfs
# pkgadd -d . SUNWjqfs

The procedure for adding the SAM-QFS Manager software appears later in this chapter. The SAM-QFS Manager installation script prompts you to add localized versions of that software.

6. On each host, issue the pkginfo(1M) command and examine its output to make sure that a Sun StorEdge QFS package is installed.

Each host must have the SUNWqfsr and SUNWqfsu packages installed on it.

CODE EXAMPLE 2-2 shows the needed SUNWqfsr/SUNWqfsu packages.

CODE EXAMPLE 2-2 pkginfo(1M) Command Example on a Sun StorEdge QFS File System
# pkginfo | grep SUNWqfs
system  SUNWqfsr     Sun QFS software Solaris 9 (root)
system  SUNWqfsu     Sun QFS software Solaris 9 (usr)
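The check in Step 6 lends itself to a small script. The sketch below is illustrative only: it parses simulated pkginfo output (copied from CODE EXAMPLE 2-2), because the pkginfo command itself exists only on Solaris. On a live host you would replace the sample text with the real output of pkginfo | grep SUNWqfs.

```shell
# Illustrative check: both QFS packages must appear in pkginfo output.
# On a real Solaris host, capture this with:  pkginfo | grep SUNWqfs
pkginfo_output='system  SUNWqfsr     Sun QFS software Solaris 9 (root)
system  SUNWqfsu     Sun QFS software Solaris 9 (usr)'

missing=0
for pkg in SUNWqfsr SUNWqfsu; do
    if ! printf '%s\n' "$pkginfo_output" | grep -q "$pkg"; then
        echo "package $pkg is NOT installed"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "both QFS packages are installed"
```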

7. (Optional) Install the packages on additional host systems.

Perform this step if you are configuring a multihost file system.

Repeat this procedure and install the packages on each host.


Enabling the Sun StorEdge QFS Software License

You need a license key to run the Sun StorEdge QFS software. For more information, see Obtaining a Software License Key.

The Sun StorEdge QFS file system uses an encrypted license key. The license key consists of an encoded alphanumeric string.


procedure icon  To Enable the Sun StorEdge QFS Software License

1. Create the /etc/opt/SUNWsamfs/LICENSE.4.2 file.

2. Starting in column one, place the license key you have obtained from your ASP or from Sun Microsystems on the first line in the /etc/opt/SUNWsamfs/LICENSE.4.2 file.

The key must start in column one. No other keywords, host IDs, comments, or other information can appear in the /etc/opt/SUNWsamfs/LICENSE.4.2 file.
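The two rules above are mechanical enough to verify with a script. This sketch writes an example file under /tmp (the key shown is a made-up placeholder, not a real license) and checks that the key starts in column one and that nothing else is in the file.

```shell
# Hypothetical example: validate the LICENSE.4.2 format rules.
# The path and the key below are placeholders, not real values.
lic=/tmp/LICENSE.4.2.example
printf '%s\n' 'AbCd1234EfGh5678' > "$lic"

ok=1
# Rule 1: the key must start in column one (no leading whitespace).
head -1 "$lic" | grep -q '^[^[:space:]]' || ok=0
# Rule 2: the file must contain only the one key line.
[ "$(wc -l < "$lic")" -eq 1 ] || ok=0

[ "$ok" -eq 1 ] && echo "license file format looks valid"
rm -f "$lic"
```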

3. (Optional) Install the license keys on additional host systems.

Perform this step if you are configuring a multihost file system.

Repeat this procedure and install the license key for each host.


Setting Up PATH and MANPATH Variables

This procedure shows you how to modify your PATH and MANPATH environment variables so you can access the Sun StorEdge QFS commands and man pages easily.


procedure icon  To Set Up PATH and MANPATH Variables

1. For users who need to access the Sun StorEdge QFS user commands (for example, sls(1)), add /opt/SUNWsamfs/bin to the users' PATH variables.

2. Use vi(1) or another editor to edit your system setup files to include the correct paths to commands and man pages.

a. In the Bourne or Korn shell, edit the .profile file, change the PATH and MANPATH variables, and export the variables.

CODE EXAMPLE 2-3 shows how your .profile file might look after editing.

CODE EXAMPLE 2-3 Finished .profile File
PATH=$PATH:/opt/SUNWsamfs/bin:/opt/SUNWsamfs/sbin 
MANPATH=$MANPATH:/opt/SUNWsamfs/man
export PATH MANPATH

b. In the C shell, edit the .login and .cshrc files.

When you have finished editing, the path statement in your .cshrc file might look like the following line:

set path = ($path /opt/SUNWsamfs/bin /opt/SUNWsamfs/sbin)

CODE EXAMPLE 2-4 shows how the MANPATH in your .login file might look after you have finished editing.

CODE EXAMPLE 2-4 Finished MANPATH in the .login File
setenv MANPATH /usr/local/man:/opt/SUNWspro/man:$OPENWINHOME/\
share/man:/opt/SUNWsamfs/man
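Both shell variants accomplish the same thing: they append the Sun StorEdge QFS directories to the search paths. The sketch below applies the Bourne-shell form from CODE EXAMPLE 2-3 and then confirms the result; the MANPATH fallback default is an assumption for shells in which MANPATH starts out unset.

```shell
# Apply the .profile-style edits and confirm the paths took effect.
# ${MANPATH:-/usr/share/man} is a fallback assumption for an unset MANPATH.
PATH=$PATH:/opt/SUNWsamfs/bin:/opt/SUNWsamfs/sbin
MANPATH=${MANPATH:-/usr/share/man}:/opt/SUNWsamfs/man
export PATH MANPATH

case ":$PATH:" in
    *:/opt/SUNWsamfs/bin:*) echo "PATH includes the QFS command directory" ;;
    *)                      echo "PATH was not updated" ;;
esac
```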

3. (Optional) Set up the PATH and MANPATH variables on additional host systems.

Perform this step if you are configuring a multihost file system.

Repeat this procedure and set up the PATH and MANPATH variables for each host.


Preparing the Host Systems

Perform this procedure if you are configuring the following types of file systems:


procedure icon  To Prepare the Host Systems

1. Verify that all the hosts have the same user and group IDs.

If you are not running the Network Information Service (NIS), make sure that the /etc/passwd and /etc/group files are identical on all hosts. If you are running NIS, the /etc/passwd and /etc/group files should already be identical.

For more information about this, see the nis+(1) man page.
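A quick way to verify Step 1 is to compare the files byte for byte. In this sketch the two hosts' copies are simulated with temporary files; in practice you would copy each host's /etc/passwd and /etc/group to one machine (or compare their cksum(1) output gathered per host) before running the comparison.

```shell
# Simulated copies of /etc/passwd from two hosts; real copies would be
# gathered from the hosts themselves before comparing.
host_a=/tmp/passwd.hostA
host_b=/tmp/passwd.hostB
printf 'root:x:0:0:Super-User:/:/sbin/sh\n' > "$host_a"
printf 'root:x:0:0:Super-User:/:/sbin/sh\n' > "$host_b"

if cmp -s "$host_a" "$host_b"; then
    same=1
    echo "passwd files are identical"
else
    same=0
    echo "passwd files differ -- make them identical before proceeding"
fi
rm -f "$host_a" "$host_b"
```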

2. (Optional) Enable the network time daemon command, xntpd(1M), to synchronize the times on all the hosts.

Perform this step if you are configuring a Sun StorEdge QFS shared file system on Solaris OS. You do not need to perform this step if you are configuring a Sun StorEdge QFS shared file system on Sun Cluster because it has already been done as part of the Sun Cluster installation.

The clocks of all hosts must be synchronized, and must be kept synchronized, during Sun StorEdge QFS shared file system operations. For more information, see the xntpd(1M) man page.

The following steps enable the xntpd(1M) daemon on one host:

a. Stop the xntpd(1M) daemon.

For example:

# /etc/init.d/xntpd stop

b. Use vi(1) or another editor to create file /etc/inet/ntp.conf.

c. Create a line in file /etc/inet/ntp.conf that specifies the name of the local time server.

This line has the following format:

server IP-address prefer

In the preceding line, server and prefer are required keywords. For IP-address, specify the IP address of your local time server.

If you have no local time server, see one of the following URLs for information on how to access a public time source:

http://www.eecis.udel.edu/~mills/ntp/servers.html
http://www.boulder.nist.gov/timefreq/general/pdf/1383.pdf

Alternatively, you can search for public time sources in a search engine.

d. Save and close the /etc/inet/ntp.conf file.

e. Start the xntpd(1M) daemon.

# /etc/init.d/xntpd start
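Steps b through d reduce to writing a single line and checking its shape. The sketch below does exactly that against a file in /tmp; the address 192.168.0.5 is a placeholder for your local time server, and the real file is /etc/inet/ntp.conf.

```shell
# Write the one-line ntp.conf from Step c; 192.168.0.5 is a placeholder.
conf=/tmp/ntp.conf.example
printf 'server %s prefer\n' 192.168.0.5 > "$conf"

# The line must be: keyword "server", an address, keyword "prefer".
ok=0
grep -q '^server [0-9.][0-9.]* prefer$' "$conf" && ok=1
[ "$ok" -eq 1 ] && echo "ntp.conf server line is well formed"
rm -f "$conf"
```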

3. Repeat the preceding steps on each host.


(Optional) Enabling the SAM-QFS Manager

Perform this task if you want to be able to use the SAM-QFS Manager to configure, control, monitor, or reconfigure your Sun StorEdge QFS environment.

The procedures in this section are as follows:

In addition to the information in this section, this manual's appendix, SAM-QFS Manager Software Notes, describes other aspects of using the SAM-QFS Manager.



Note - The SAM-QFS Manager does not support the Sun StorEdge QFS shared file system, nor does it support file systems in Sun Cluster environments.




procedure icon  To Install the SAM-QFS Manager Software

1. Ensure that you have met the installation requirements in (Optional) Verifying Requirements for the SAM-QFS Manager.

2. Log in to the server that you want to use as the management station.

This can be the same server on which you installed the SUNWqfsr and SUNWqfsu packages.

3. Become superuser.

4. Use the cd(1) command to change to the directory where the software package release files reside on your server.

When you completed your preinstallation tasks, you obtained the release files as described in Obtaining the Release Files. Use the cd(1) command to change to the directory that contains the release files.

For example, if you obtained the release files from a CD-ROM, use the following command:

# cd /cdrom/cdrom0

If you downloaded the release files, change to the directory to which you downloaded the files.

5. Execute the samqfsmgr_setup script to install the SAM-QFS Manager software.

For example:

# samqfsmgr_setup

6. Answer the questions as prompted by the samqfsmgr_setup script.

During the installation procedure, you are asked to answer questions about your environment. The script prompts you to enter passwords for the SAMadmin role and for the samadmin and samuser login IDs.

The samqfsmgr_setup script automatically installs the following:

The installation scripts prompt you to answer questions regarding whether you want to install any localized packages.

After installing the packages, the samqfsmgr_setup script starts the Tomcat web server, enables logging, and creates the SAMadmin role.

7. Use vi(1) or another editor to edit your system setup files to include the correct paths to commands and man pages.

a. In the Bourne or Korn shell, edit the .profile file, change the PATH and MANPATH variables, and export the variables.

CODE EXAMPLE 2-5 shows how your .profile file might look after editing.

CODE EXAMPLE 2-5 Finished .profile File
PATH=$PATH:/opt/SUNWsamqfsui/bin
MANPATH=$MANPATH:/opt/SUNWsamqfsui/man
export PATH MANPATH

b. In the C shell, edit the .login and .cshrc files.

When you have finished editing, the path statement in your .cshrc file might look like the following line:

set path = ($path /opt/SUNWsamqfsui/bin)

CODE EXAMPLE 2-6 shows how the MANPATH in your .login file might look after you have finished editing.

CODE EXAMPLE 2-6 Finished MANPATH in the .login File
setenv MANPATH /usr/local/man:/opt/SUNWspro/man:$OPENWINHOME/\
share/man:/opt/SUNWsamfs/man:/opt/SUNWsamqfsui/man

8. Log in to the Sun StorEdge QFS server and become superuser.

9. Use the ps(1) and grep(1) commands to make sure that the rpcbind service is running.

For example:

# ps -ef | grep rpcbind

10. Examine the output from the preceding commands.

The output should contain a line similar to the following:

root   269     1  0   Feb 08 ?        0:06 /usr/sbin/rpcbind

If rpcbind does not appear in the output, enter the following command:

# /usr/sbin/rpcbind
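Steps 9 and 10 can be combined into one scripted check. The ps output below is simulated (taken from the sample above) so the parsing can be shown end to end; on the server itself you would feed in the output of ps -ef directly. Note the filter that discards the grep process itself, a common false positive.

```shell
# Simulated `ps -ef` output; on a live server use:  ps -ef | grep rpcbind
ps_output='root   269     1  0   Feb 08 ?        0:06 /usr/sbin/rpcbind
root  1042  1001  0   Feb 09 pts/1   0:00 grep rpcbind'

# Ignore the grep process itself when looking for the daemon.
if printf '%s\n' "$ps_output" | grep -v 'grep rpcbind' | grep -q '/usr/sbin/rpcbind'; then
    running=1
    echo "rpcbind is running"
else
    running=0
    echo "rpcbind is NOT running -- start it with /usr/sbin/rpcbind"
fi
```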

11. (Optional) Start the SAM-QFS Manager (sam-mgmtrpcd) daemon.

Perform this step if you did not choose to have this daemon started automatically at installation time.

Enter the following command to start the SAM-QFS Manager daemon:

# /opt/SUNWsamfs/sbin/samadm config -a

With this configuration, the system automatically restarts the daemon every time the daemon process dies. The daemon also restarts automatically when the system reboots.

If you want to stop the daemon completely, enter the following command:

# /opt/SUNWsamfs/sbin/samadm config -n

The preceding command also prevents the daemon from restarting automatically.

If you want the SAM-QFS Manager daemon to run only once and not automatically restart, use the following command:

# /opt/SUNWsamfs/sbin/samadm start

If you have used the preceding command to start the daemon, use the following command to stop it:

# /opt/SUNWsamfs/sbin/samadm stop

For more information, see the samadm(1M) man page.

Using the SAM-QFS Manager Software

After the SAM-QFS Manager is installed, you can log in to the software using two possible user names (samadmin and samuser) and two different roles (SAMadmin or no role). The tasks you can perform using the SAM-QFS Manager differ depending on the user name and the role you assume at login. These differences are as follows:

Only the Sun StorEdge QFS administrator should log in using the SAMadmin role. All other users should log in as samuser.

With regard to system administration, be aware that the Solaris OS root user on the server that hosts the SAM-QFS Manager is not necessarily the administrator of the SAM-QFS Manager. Only samadmin has administrator privileges for the SAM-QFS Manager application. The root user is the administrator of the management station.


procedure icon  To Invoke the SAM-QFS Manager for the First Time

Perform this procedure if you want to invoke the SAM-QFS Manager and use it, rather than commands, to perform some of the configuration steps.

1. Log in to the management station web server.

2. From a web browser, invoke the SAM-QFS Manager software.

The URL is as follows:

https://hostname:6789

For hostname, type the name of the host. If you need to specify a domain name in addition to the host name, specify the hostname in this format: hostname.domainname.

Note that this URL begins with https, not http. The Sun Web Console login screen appears.

3. At the User Name prompt, enter samadmin.

4. At the Password prompt, enter the password you entered when you answered questions during the samqfsmgr_setup script's processing in To Install the SAM-QFS Manager Software.

5. Click on the SAMadmin role.

Only the Sun StorEdge QFS administrator should ever log in with the SAMadmin role.

6. At the Role Password prompt, enter the password you entered in Step 4.

7. Click Log In.

8. Click SAM-QFS Manager 1.1.

You are now logged in to the SAM-QFS Manager.


procedure icon  To Use the SAM-QFS Manager for Configuration

This manual guides you through the configuration process using Solaris OS commands, but you can also use the SAM-QFS Manager, instead of commands, to accomplish many of the tasks.

1. Click Help, in the upper right corner of the screen, to access the SAM-QFS Manager online documentation.

2. Complete the configuration tasks.

TABLE 2-1 shows the rest of the steps you must perform to install and configure a Sun StorEdge QFS file system and the means by which you can accomplish each task.

Perform the configuration steps in TABLE 2-1 in the order in which they appear. You can open a terminal window next to the SAM-QFS Manager window for use when you need to alternate between using commands and using the SAM-QFS Manager.

TABLE 2-1 Sun StorEdge QFS Installation Tasks

Task                                                         Accomplish      Accomplish
                                                             Through GUI     Through Commands
----------------------------------------------------------   -----------     ----------------
Defining the Sun StorEdge QFS Configuration By Creating      Yes             Yes
  the mcf File
(Optional) Editing the defaults.conf File                    No              Yes
Verifying the License and mcf Files                          No              Yes
(Optional) Creating the samfs.cmd File                       Yes             Yes
Updating the /etc/vfstab File and Creating the Mount Point   Yes             Yes
Initializing the File System                                 Yes             Yes
Mounting the File System                                     Yes             Yes
(Optional) Sharing the File System With NFS Client Systems   No              Yes
Establishing Periodic Dumps Using qfsdump(1M)                No              Yes
(Optional) Backing Up Configuration Files                    No              Yes
(Optional) Configuring the Remote Notification Facility      No              Yes
(Optional) Adding the Administrator Group                    No              Yes
Configuring System Logging                                   No              Yes
(Optional) Configuring Other Products                        Not applicable  Not applicable


TABLE 2-1 describes several installation steps as optional. The only required installation steps that you still must perform using Solaris OS commands are as follows:

The other installation steps in TABLE 2-1 are necessary, or are highly recommended, depending on your environment.


Defining the Sun StorEdge QFS Configuration By Creating the mcf File

Each Sun StorEdge QFS environment is unique. The system requirements and hardware that are used differ from site to site. It is up to you, the system administrator at your site, to set up the specific configuration for your Sun StorEdge QFS environment.

The master configuration file, /etc/opt/SUNWsamfs/mcf, defines the topology of the equipment managed by the Sun StorEdge QFS file system. This file specifies the devices and file systems included in the environment. You assign each piece of equipment a unique Equipment Identifier in the mcf file.

To configure Sun StorEdge QFS devices, create an mcf file in /etc/opt/SUNWsamfs/mcf that contains a line for each device and family set in your configuration. The mcf contains information that enables you to identify the disk slices to be used and to organize them into Sun StorEdge QFS file systems.

There are examples of mcf files in /opt/SUNWsamfs/examples.



Note - For information about file system design considerations, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.



The following sections provide examples and describe activities related to creating and maintaining the mcf file:



Note - The instructions for creating the mcf file differ depending on whether you are creating a Sun StorEdge QFS environment or a Sun SAM-QFS environment.

If you are installing the Sun StorEdge QFS software, all configuration instructions are contained in this section.

If you are creating a Sun SAM-QFS environment, the instructions for configuring the file system portion of the mcf file are contained in this section. The instructions for library and drive configuration are contained in Defining the Sun StorEdge SAM-FS Configuration By Creating the mcf File.




procedure icon  To Create an mcf File

single-step bulletUse vi(1) or another editor to create the mcf file.

When you create the mcf file, follow these guidelines:

CODE EXAMPLE 2-7 shows the fields of each line entry in the mcf file.

CODE EXAMPLE 2-7 mcf File Fields
#
# Sun QFS file system configuration
#
# Equipment       Equip  Equip Fam   Dev    Additional
# Identifier      Ord    Type  Set   State  Parameters
# ----------      -----  ----- ----  -----  ----------

Where to Go From Here

After you have created your mcf file, using the examples in this section as a guide, proceed on to one of the following sections depending on the type of file system you are configuring:

mcf File Fields

The fields in an mcf file are the same regardless of what kind of file system you are configuring. CODE EXAMPLE 2-7 shows the fields. The following sections explain the fields. For more information about the content of these mcf file fields, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.

The Equipment Identifier Field

This is a required field. Enter one of the following:

The specification for a disk partition or disk slice is limited to 127 characters in length. TABLE 2-2 shows the kinds of devices to use when creating Sun StorEdge QFS file systems.

TABLE 2-2 File System Types and Allowed Disk Devices

Platform      Sun StorEdge QFS (Shared)     Sun StorEdge QFS (Single Host)
-----------   ---------------------------   ----------------------------------
Solaris OS    Raw devices (/dev/dsk/...)    Raw devices (/dev/dsk/cntndnsn);
                                            volume-manager controlled devices
                                            (/dev/vx/... or /dev/md/...)
Sun Cluster   DID devices (/dev/did/...)    Global devices (/dev/global/...)

The following notes pertain to the information in TABLE 2-2:

The Equipment Ordinal Field

This is a required field. Enter a unique integer such that 1 ≤ eq_ord ≤ 65534.

The Equipment Type Field

This is a required field. Enter the code for the Equipment Type, as follows:

For more information about Equipment Types, see the mcf(4) man page.

The Family Set Field

This is a required field. Enter the name of the file system to which this device belongs. The system organizes all devices with the same Family Set name into a Sun StorEdge QFS file system. Limited to 31 characters.

If this line is the first in a series of lines that define devices for a particular file system, enter the same name you entered in the Equipment Identifier field.

If this line defines a device within a file system, enter the file system name in this field.

The Device State Field

This is an optional field. If specified, this field must contain either the keyword on or a dash character (-). It sets the state of the device at the time the Sun StorEdge QFS file system is initialized.

The Additional Parameters Field

This is an optional field. Specify shared in this field only if you are configuring a Sun StorEdge QFS shared file system. For information about the Sun StorEdge QFS shared file system, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.

For more information, see the mcf(4) man page. An example mcf file is located in /opt/SUNWsamfs/examples/mcf.
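Several of the field rules above are easy to get wrong by hand. The following sketch is not a QFS tool; it is a small awk(1) check, under stated assumptions (whitespace-separated columns, comment lines starting with #), that catches out-of-range or duplicate Equipment Ordinals and over-long Family Set names. The sample file mirrors CODE EXAMPLE 2-8.

```shell
# Write a sample mcf (modeled on CODE EXAMPLE 2-8) and run basic field checks.
mcf=/tmp/mcf.example
cat > "$mcf" <<'EOF'
# Equipment        Eq  Eq  Family Dev
qfs1               1   ma  qfs1   on
/dev/dsk/c1t0d0s0  11  mm  qfs1   on
/dev/dsk/c1t1d0s4  12  mr  qfs1   on
EOF

errors=$(awk '
    /^#/ || NF == 0               { next }
    $2 + 0 < 1 || $2 + 0 > 65534  { print "bad equipment ordinal: " $2 }
    seen[$2]++                    { print "duplicate equipment ordinal: " $2 }
    length($4) > 31               { print "family set name too long: " $4 }
' "$mcf")

[ -z "$errors" ] && echo "mcf file passes basic checks"
rm -f "$mcf"
```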



caution icon

Caution - Make sure you specify disk partitions that are not in use on your system. Do not use overlapping partitions.

If you give the wrong partition names, you risk damaging user or system data. This is true when creating any type of file system. The risk is greatest if the partition named contains a UFS file system that is not currently mounted.



CODE EXAMPLE 2-8 shows file system entries in an mcf file for a Sun StorEdge QFS file system that is local to one Solaris OS host.

CODE EXAMPLE 2-8 Example Sun StorEdge QFS mcf File
#
# Sun QFS file system configuration
#
# Equipment       Equip  Equip Fam   Dev    Additional
# Identifier      Ord    Type  Set   State  Parameters
# ----------      -----  ----- ----  -----  ----------
qfs1               1     ma    qfs1  on
/dev/dsk/c1t0d0s0 11     mm    qfs1  on
/dev/dsk/c1t1d0s4 12     mr    qfs1  on
/dev/dsk/c1t2d0s4 13     mr    qfs1  on
/dev/dsk/c1t3d0s4 14     mr    qfs1  on



Note - If you change the mcf file after the Sun StorEdge QFS file system is in use, you must convey the new mcf specifications to the Sun StorEdge QFS software. For information about propagating mcf file changes to the system, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.



Configuration Examples for Local File Systems

Use the configuration examples in this section for configuring the mcf file for a Sun StorEdge QFS file system to be installed in the following types of configurations:

For mcf examples that you can use in a Sun Cluster environment, see "Configuration Examples for Sun Cluster File Systems" on page 70.

Configuration Example 1

This example shows how to configure two Sun StorEdge QFS file systems using a server that has a Sun StorEdge Multipack desktop array connected by a SCSI attachment.

You can use the format(1M) command to determine how the disks are partitioned. CODE EXAMPLE 2-9 shows the format(1M) command's output.

CODE EXAMPLE 2-9 format(1M) Command Output for Configuration Example 1
# format < /dev/null
Searching for disks...done
 
AVAILABLE DISK SELECTIONS:
       0. c0t10d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /sbus@3,0/SUNW,fas@3,8800000/sd@a,0
       1. c0t11d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /sbus@3,0/SUNW,fas@3,8800000/sd@b,0
       2. c6t2d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@7,4000/SUNW,isptwo@3/sd@2,0
       3. c6t3d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@7,4000/SUNW,isptwo@3/sd@3,0
       4. c6t4d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@7,4000/SUNW,isptwo@3/sd@4,0
       5. c6t5d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@7,4000/SUNW,isptwo@3/sd@5,0
       6. c8t2d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@b,4000/SUNW,isptwo@3/sd@2,0
       7. c8t3d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@b,4000/SUNW,isptwo@3/sd@3,0
       8. c8t4d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@b,4000/SUNW,isptwo@3/sd@4,0
       9. c8t5d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@b,4000/SUNW,isptwo@3/sd@5,0
Specify disk (enter its number):
#
 
# format /dev/rdsk/c6t2d0s2      # format(1M) shows the partition layout of a drive.
                                 # Only the last lines of its output are shown here.
 
...output deleted from example...
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 4923        8.43GB    (4924/0/0) 17682084
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0 - 1229        2.11GB    (1230/0/0)  4416930
  5 unassigned    wm    1230 - 2459        2.11GB    (1230/0/0)  4416930
  6 unassigned    wm    2460 - 3689        2.11GB    (1230/0/0)  4416930
  7 unassigned    wm    3690 - 4919        2.11GB    (1230/0/0)  4416930


procedure icon  To Configure the System

Begin writing the mcf file for this configuration example by defining the file system and its disk partitions, as follows:

1. Write the mcf file.

a. Make an ma entry for the first file system.

b. Make an mm entry listing the partition(s) that comprise the metadata for the qfs1 file system.

c. Make a series of mr entries listing the partitions that comprise the file data for the qfs1 file system.

d. Make similar entries for the second (qfs2) file system.

The finished mcf file defines the following two file systems:

CODE EXAMPLE 2-10 shows the resulting mcf file.

CODE EXAMPLE 2-10 mcf File for Sun StorEdge QFS Example 1
# cat /etc/opt/SUNWsamfs/mcf
#
# Equipment         Eq   Eq     Family   Device   Additional
# Identifier        Ord  Type    Set     State    Parameters
#-----------        ---  ----   ------   ------   ----------
#
qfs1                 10    ma   qfs1       on
/dev/dsk/c8t2d0s4    11    mm   qfs1       on
/dev/dsk/c6t2d0s4    12    mr   qfs1       on
/dev/dsk/c6t3d0s4    13    mr   qfs1       on
#
qfs2                 20    ma   qfs2       on
/dev/dsk/c8t2d0s5    21    mm   qfs2       on
/dev/dsk/c6t2d0s5    22    mr   qfs2       on
/dev/dsk/c6t3d0s5    23    mr   qfs2       on

2. Modify the /etc/vfstab file.

Make entries in the /etc/vfstab file for the qfs1 and qfs2 file systems you defined in the mcf file. The last two lines in CODE EXAMPLE 2-11 show entries for these new file systems.

CODE EXAMPLE 2-11 /etc/vfstab File for Sun StorEdge QFS Example 1
# cat /etc/vfstab
# device            device                       file            mount
# to                to                   mount   system   fsck   at    mount
# mount             fsck                 point   type     pass   boot  params
# -----             ----                 -----   ----     ----   ----  ------
fd                  -                    /dev/fd  fd       -     no    -
/proc               -                    /proc    proc     -     no    -
/dev/dsk/c0t10d0s1  -                    -        swap     -     no    -
/dev/dsk/c0t10d0s0  /dev/rdsk/c0t10d0s0  /        ufs      1     no    logging
swap                -                    /tmp     tmpfs    -     yes   -
qfs1                -                    /qfs1    samfs    -     yes  stripe=1
qfs2                -                    /qfs2    samfs    -     yes  stripe=1
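Once entries like the last two lines above are in place, a one-line awk(1) check confirms which file systems /etc/vfstab mounts as type samfs. The sketch below works on a copy of the two new lines rather than the live file.

```shell
# Copy of the two new vfstab lines; on the server, point awk at /etc/vfstab.
vfstab=/tmp/vfstab.example
cat > "$vfstab" <<'EOF'
qfs1                -                    /qfs1    samfs    -     yes  stripe=1
qfs2                -                    /qfs2    samfs    -     yes  stripe=1
EOF

# Field 4 of a vfstab entry is the file system type.
found=$(awk '$4 == "samfs" { print $1 }' "$vfstab" | tr '\n' ' ')
echo "samfs entries found: $found"
rm -f "$vfstab"
```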



Note - Modifying the /etc/vfstab file is a later step in this chapter's configuration procedure. The /etc/vfstab file modifications are shown here only for the sake of completeness.



Configuration Example 2

This example illustrates a Sun StorEdge QFS file system that uses round-robin allocation on four disk drives.

This example assumes the following:


procedure icon  To Configure the System

This example introduces the round-robin data layout. For more information about data layout, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.

1. Write the mcf file.

CODE EXAMPLE 2-12 shows the mcf file for this round-robin disk configuration.

CODE EXAMPLE 2-12 mcf File for Sun StorEdge QFS Example 2
# cat /etc/opt/SUNWsamfs/mcf
#
# Equipment         Eq   Eq     Family   Device   Additional
# Identifier        Ord  Type    Set     State    Parameters
#-----------        ---  ----   ------   ------   ----------
#
qfs3                 10    ma   qfs3       on
/dev/dsk/c8t4d0s4    11    mm   qfs3       on
/dev/dsk/c6t2d0s4    12    mr   qfs3       on
/dev/dsk/c6t3d0s4    13    mr   qfs3       on
/dev/dsk/c6t4d0s4    14    mr   qfs3       on
/dev/dsk/c6t5d0s4    15    mr   qfs3       on



Note - Modifying the /etc/vfstab file and using the sammkfs(1M) command are later steps in this chapter's configuration procedure. They are shown here only for the sake of completeness.



2. Modify the /etc/vfstab file.

Edit the /etc/vfstab file to explicitly set round-robin allocation on the file system by specifying stripe=0 in the mount params field. CODE EXAMPLE 2-13 shows stripe=0 for the qfs3 file system.

CODE EXAMPLE 2-13 /etc/vfstab File for Sun StorEdge QFS Example 2
# cat /etc/vfstab
#device             device                        file          mount
#to                 to                   mount    system  fsck  at    mount
#mount              fsck                 point    type    pass  boot  params
#-----              ----                 -----    ----    ----  ----  ------
fd                  -                    /dev/fd  fd      -     no    -
/proc               -                    /proc    proc    -     no    -
/dev/dsk/c0t10d0s1  -                    -        swap    -     no    -
/dev/dsk/c0t10d0s0  /dev/rdsk/c0t10d0s0  /        ufs     1     no    logging
swap                -                    /tmp     tmpfs   -     yes   -
qfs3                -                    /qfs3    samfs   -     yes   stripe=0

3. Run the sammkfs(1M) command.

Initialize the Sun StorEdge QFS file system by using the sammkfs(1M) command. The default DAU is 64 kilobytes, but the following example sets the DAU size to 128 kilobytes:

# sammkfs -a 128 qfs3

Configuration Example 3

This example illustrates a Sun StorEdge QFS file system. It stripes file data to four disk drives. This example assumes the following:


procedure icon  To Configure the System

1. Write the mcf file.

Write the mcf file using the disk configuration assumptions. CODE EXAMPLE 2-14 shows a sample mcf file for a striped disk configuration.

CODE EXAMPLE 2-14 mcf File for Sun StorEdge QFS Example 3
# Equipment         Eq   Eq     Family   Device   Additional
# Identifier        Ord  Type    Set     State    Parameters
#-----------        ---  ----   ------   ------   ----------
#
qfs4                 40    ma   qfs4       on
/dev/dsk/c8t4d0s4    41    mm   qfs4       on
/dev/dsk/c6t2d0s4    42    mr   qfs4       on
/dev/dsk/c6t3d0s4    43    mr   qfs4       on
/dev/dsk/c6t4d0s4    44    mr   qfs4       on
/dev/dsk/c6t5d0s4    45    mr   qfs4       on



Note - Modifying the /etc/vfstab file and using the sammkfs(1M) command are later steps in this chapter's configuration procedure. They are shown here only for the sake of completeness.



2. Modify the /etc/vfstab file.

Set the stripe width by using the stripe= option. CODE EXAMPLE 2-15 shows the /etc/vfstab file with a mount parameter of stripe=1 set for the qfs4 file system.

CODE EXAMPLE 2-15 /etc/vfstab File for Sun StorEdge QFS Example 3
# cat /etc/vfstab
#
#device             device                        file         mount
#to                 to                   mount    system fsck  at    mount
#mount              fsck                 point    type   pass  boot  params
#-----              ----                 -----    -----  ----  ----  ------
fd                  -                    /dev/fd  fd     -     no    -
/proc               -                    /proc    proc   -     no    -
/dev/dsk/c0t10d0s1  -                    -        swap   -     no    -
/dev/dsk/c0t10d0s0  /dev/rdsk/c0t10d0s0  /        ufs    1     no    logging
swap                -                    /tmp     tmpfs  -     yes   -
qfs4                -                    /qfs4    samfs  -     yes   stripe=1

The stripe=1 specification stripes file data across all four of the mr data disks with a stripe width of one disk allocation unit (DAU). Note that the DAU is the allocation unit you set when you use the sammkfs(1M) command to initialize the file system.

3. Run the sammkfs(1M) command.

Initialize the Sun StorEdge QFS file system by using the sammkfs(1M) command. The following example sets the DAU size to 128 kilobytes:

# sammkfs -a 128 qfs4

With this striped disk configuration, any file written to this file system is striped across all of the devices in increments of 128 kilobytes. Even a file smaller than one DAU consumes a full 128 kilobytes of disk space, and larger files have space allocated for them as needed in whole 128-kilobyte increments. The file system writes metadata to device 41 only.
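The allocation arithmetic described above can be sketched in shell. This is illustrative only; the real allocator is inside the file system, not a user-level tool.

```shell
# Sketch of the allocation arithmetic described above (illustrative only;
# the real allocator is inside the file system, not a user-level tool).
DAU=$((128 * 1024))                 # DAU set with: sammkfs -a 128

space_allocated() {
  # Round a file size in bytes up to whole DAUs.
  size=$1
  echo $(( ( (size + DAU - 1) / DAU ) * DAU ))
}

space_allocated 1          # 131072 -- even 1 byte consumes a full DAU
space_allocated 307200     # 393216 -- a 300-kilobyte file rounds up to 3 DAUs
```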

Configuration Example 4

Striped groups enable you to build RAID-0 devices from separate disk devices. With striped groups, however, there is only one DAU per striped group. This method of writing one large, effective DAU across RAID devices saves system update time and supports high-speed sequential I/O. Striped groups are useful for writing very large files to groups of disk devices.



Note - A DAU is the minimum disk space allocated. The minimum disk space allocated in a striped group is as follows:

allocation_unit x number of disks in the group

Writing a single byte of data consumes a DAU on every member of the entire striped group. Make sure that you understand the effects of using striped groups with your file system.



The devices within a striped group must be the same size. It is not possible to increase the size of a striped group. You can add additional striped groups to the file system, however.
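The minimum-allocation formula from the note above can be checked with a quick sketch. The 128-kilobyte DAU and the four-disk group are assumptions carried over from this chapter's examples.

```shell
# Quick check of the formula from the note above:
#   minimum allocation = allocation_unit x number of disks in the group.
# The 128-kilobyte DAU and four-disk group are assumed for illustration.
striped_group_min_alloc() {
  dau=$1
  disks=$2
  echo $(( dau * disks ))
}

striped_group_min_alloc $((128 * 1024)) 4   # 524288 -- cost of writing 1 byte
```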

This example configuration illustrates a Sun StorEdge QFS file system that separates the metadata onto a low-latency disk. The mcf file defines two striped groups on four drives. This example assumes the following:


procedure icon  To Configure the System

1. Write the mcf file.

Write the mcf file by using the disk configuration assumptions. CODE EXAMPLE 2-16 shows a sample mcf file for a striped group configuration.

CODE EXAMPLE 2-16 mcf File for Sun StorEdge QFS Example 4
# cat /etc/opt/SUNWsamfs/mcf
#
# Equipment         Eq   Eq     Family   Device   Additional
# Identifier        Ord  Type    Set     State    Parameters
#-----------        ---  ----   ------   ------   ----------
#
qfs5                 50    ma   qfs5       on
/dev/dsk/c8t4d0s5    51    mm   qfs5       on
/dev/dsk/c6t2d0s5    52    g0   qfs5       on
/dev/dsk/c6t3d0s5    53    g0   qfs5       on
/dev/dsk/c6t4d0s5    54    g1   qfs5       on
/dev/dsk/c6t5d0s5    55    g1   qfs5       on



Note - Modifying the /etc/vfstab file and using the sammkfs(1M) command are later steps in this chapter's configuration procedure. They are shown here only for the sake of completeness.



2. Modify the /etc/vfstab file.

Use the stripe= option to set the stripe width. CODE EXAMPLE 2-17 shows the /etc/vfstab file with a mount parameter of stripe=0, which specifies round-robin allocation between striped group g0 and striped group g1.

CODE EXAMPLE 2-17 /etc/vfstab File for Sun StorEdge QFS Example 4
# cat /etc/vfstab
#device             device                        file          mount
#to                 to                   mount    system  fsck  at    mount
#mount              fsck                 point    type    pass  boot  params
#-----              ----                 -----    ----    ----  ----  ------
fd                  -                    /dev/fd  fd      -     no    -
/proc               -                    /proc    proc    -     no    -
/dev/dsk/c0t10d0s1  -                    -        swap    -     no    -
/dev/dsk/c0t10d0s0  /dev/rdsk/c0t10d0s0  /        ufs     1     no    logging
swap                -                    /tmp     tmpfs   -     yes   -
qfs5                -                    /qfs5    samfs   -     yes   stripe=0

3. Run the sammkfs(1M) command.

Initialize the Sun StorEdge QFS file system by using the sammkfs(1M) command. The -a option is not used with striped groups because the DAU is fixed by the group itself: one allocation spans every member disk of the group.

# sammkfs qfs5

In this example, there are two striped groups, g0 and g1. With stripe=0 in /etc/vfstab, devices 52 and 53 are striped; devices 54 and 55 are striped; and files are round-robined between the two striped groups. A striped group is treated as a bound entity: after you configure a striped group, you cannot change it without issuing another sammkfs(1M) command.
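The stripe=0 placement between the two striped groups can be sketched as follows. This illustrates the placement policy only, not QFS internals.

```shell
# Illustration of the stripe=0 placement policy (not QFS internals):
# each new file lands on the next striped group in turn, alternating
# between g0 and g1 from CODE EXAMPLE 2-16.
place_file() {
  # $1 is a zero-based file creation index.
  if [ $(( $1 % 2 )) -eq 0 ]; then echo g0; else echo g1; fi
}

place_file 0   # g0
place_file 1   # g1
place_file 2   # g0
```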

Configuration Example for a Sun StorEdge QFS Shared File System on a Solaris OS Platform

FIGURE 2-1 illustrates a Sun StorEdge QFS shared file system configuration in a Sun SAM-QFS environment.

  FIGURE 2-1 Sun StorEdge QFS Shared File System Configuration in a Sun SAM-QFS Environment

Figure of a shared Sun SAM-QFS environment. Shows hosts titan, tethys, dione, and mimas connected to a LAN.

FIGURE 2-1 shows four network-attached hosts: titan, tethys, dione, and mimas. The tethys, dione, and mimas hosts are the clients, and titan is the current metadata server. The titan and tethys hosts are potential metadata servers.

The archive media consists of a network-attached library and tape drives that are fibre-attached to titan and tethys. In addition, the archive media catalog resides in a file system that is mounted on the current metadata server, titan.

Metadata travels between the clients and the metadata server over the network. The metadata server makes all modifications to the name space, thereby keeping the metadata consistent. The metadata server also provides the locking capability, block allocation, and block deallocation.

Several metadata disks are connected to titan and tethys, and these disks can be accessed only by the potential metadata servers. If titan were unavailable, you could change the metadata server to tethys, and the library, tape drives, and catalog could be accessed by tethys as part of the Sun StorEdge QFS shared file system. The data disks are connected to all four hosts by a Fibre Channel connection.


procedure icon  To Configure the System

1. Issue the format(1M) command and examine its output.

Make sure that the metadata disk partitions configured for the Sun StorEdge QFS shared file system mount point are connected to the potential metadata servers. Also make sure that the data disk partitions configured for the Sun StorEdge QFS shared file system are connected to the potential metadata servers and to all the client hosts in this file system.

If your host supports multipath I/O drivers, individual devices shown in the format(1M) command's output might show multiple controllers. These correspond to the multiple paths to the actual devices.

CODE EXAMPLE 2-18 shows the format(1M) command output for titan. There is one metadata disk on controller 2, and there are three data disks on controller 3.

CODE EXAMPLE 2-18 format (1M) Command Output on titan
titan<28>format
Searching for disks...done
 
 
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e9c296,0
       1. c2t2100002037E2C5DAd0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e9c296,0
       2. c2t50020F23000065EEd0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w50020f23000065ee,0
       3. c3t50020F2300005D22d0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50020f2300005d22,0
       4. c3t50020F2300006099d0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50020f2300006099,0
       5. c3t50020F230000651Cd0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50020f230000651c,0

CODE EXAMPLE 2-19 shows the format(1M) command output for tethys. There is one metadata disk on controller 2, and there are three data disks on controller 7.

CODE EXAMPLE 2-19 format (1M) Command Output on tethys
tethys<1>format
Searching for disks...done
 
 
AVAILABLE DISK SELECTIONS:
       0. c0t1d0 <IBM-DNES-318350Y-SA60 cyl 11112 alt 2 hd 10 sec 320>
          /pci@1f,4000/scsi@3/sd@1,0
       1. c2t2100002037E9C296d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e9c296,0
       2. c2t50020F23000065EEd0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@4/ssd@w50020f23000065ee,0
       3. c7t50020F2300005D22d0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@5/ssd@w50020f2300005d22,0
       4. c7t50020F2300006099d0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@5/ssd@w50020f2300006099,0
       5. c7t50020F230000651Cd0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@5/ssd@w50020f230000651c,0

Note the following in CODE EXAMPLE 2-19:

CODE EXAMPLE 2-20 shows the format(1M) command's output for mimas. This shows three data disks on controller 1 and no metadata disks.

CODE EXAMPLE 2-20 format (1M) Command Output on mimas
mimas<9>format
Searching for disks...done
 
 
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c1t50020F2300005D22d0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300005d22,0
       2. c1t50020F2300006099d0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300006099,0
       3. c1t50020F230000651Cd0 <SUN-T300-0116 cyl 34901 alt 2 hd 128 sec 256>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f230000651c,0

CODE EXAMPLE 2-18 and CODE EXAMPLE 2-20 show that the data disks on titan's controller 3 are the same disks as those on mimas' controller 1. You can verify this by looking at the World Wide Name, which is the last component in the device name. For titan's number 3 disk, the World Wide Name is 50020F2300005D22. This is the same name as that of disk number 1 on controller 1 on mimas.
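One way to compare disks across hosts is to extract the World Wide Name from the device name. The helper below is a hypothetical convenience for illustration, not a Sun StorEdge QFS tool.

```shell
# Hypothetical helper (not a Sun StorEdge QFS tool): extract the
# World Wide Name embedded in a Solaris cXtWWNdYsZ device name so that
# disks can be matched across hosts.
wwn() {
  echo "$1" | sed -n 's/.*t\([0-9A-Fa-f]\{16\}\)d.*/\1/p'
}

wwn c3t50020F2300005D22d0s6   # as seen on titan -> 50020F2300005D22
wwn c1t50020F2300005D22d0s6   # as seen on mimas -> the same World Wide Name
```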



Note - All the data disk partitions must be connected and accessible from all the hosts that share this file system. All the disk partitions, for both data and metadata, must be connected and accessible to all potential metadata servers. You can use the format(1M) command to verify these connections.

For some storage devices, it is possible that the format(1M) command's output does not present unique World Wide Names. If you find that this is the case, see the libdevid(3LIB) man page for information about finding such devices on different hosts.



2. Use vi(1) or another editor to create the mcf file on the metadata server.

The only difference between the mcf file of a shared Sun StorEdge QFS file system and an unshared Sun StorEdge QFS file system is the presence of the shared keyword in the Additional Parameters field of the file system name line of a Sun StorEdge QFS shared file system.



Note - If Sun StorEdge QFS or Sun StorEdge SAM-FS file systems are already operational on the Sun StorEdge QFS shared file system's metadata server or on any of the client host systems, select a Family Set name and select Equipment Ordinals that do not conflict with existing Family Set names or Equipment Ordinals on any host that will be included in the Sun StorEdge QFS shared file system.



CODE EXAMPLE 2-21 shows an mcf file fragment for titan that defines several disks for use in the Sun StorEdge QFS shared file system. It shows the shared keyword in the Additional Parameters field on the file system name line.

CODE EXAMPLE 2-21 Sun StorEdge QFS Shared File System mcf File Example for titan
# Equipment                      Eq   Eq    Family   Dev   Addl
# Identifier                     Ord  Type  Set      Stat  Params
# ----------                     ---  ----  ------   ----  ------
sharefs1                         10   ma    sharefs1 on    shared
/dev/dsk/c2t50020F23000065EEd0s6 11   mm    sharefs1 on
/dev/dsk/c3t50020F2300005D22d0s6 12   mr    sharefs1 on
/dev/dsk/c3t50020F2300006099d0s6 13   mr    sharefs1 on
/dev/dsk/c3t50020F230000651Cd0s6 14   mr    sharefs1 on



Note - In a Sun SAM-QFS shared file system, for each host that is a metadata server or potential metadata server, that host's mcf file must define all libraries and library catalogs used by its own shared file systems and by its potential shared file systems. This is necessary if you want to change the metadata server. For information on defining libraries in an mcf file, see the Sun StorEdge SAM-FS Initial Installation Procedure.



Configuration Examples for Sun StorEdge QFS Highly Available File Systems

The Sun Cluster software moves a Sun StorEdge QFS highly available file system from a failing node to a viable node in the event of a node failure.

Each node in the Sun Cluster that can host this file system must have an mcf file. Later on in this chapter's configuration process, you copy mcf file lines from the metadata server's mcf file to other nodes in the Sun Cluster.


procedure icon  To Create an mcf File for a Sun StorEdge QFS Highly Available File System

The procedure for creating an mcf file for a Sun StorEdge QFS highly available file system is as follows:

1. Make an ma entry for the file system.

2. Make an mm entry listing the partition(s) that comprise the metadata for the qfs1 file system.

3. Make a series of mr, gXXX, or md entries listing the partitions that comprise the file data for the qfs1 file system.

You can use the scdidadm(1M) command to determine the partitions to use.

Example 1. CODE EXAMPLE 2-22 is an example mcf file entry for a Sun StorEdge QFS highly available file system that uses raw devices.

CODE EXAMPLE 2-22 mcf File That Specifies Raw Devices
Equipment            Eq   Eq     Family   Additional
Identifier           Ord  Type   Set      Parameters
-------------------- ---  ----   ------   ----------
qfs1                   1   ma    qfs1     on
/dev/global/dsk/d4s0  11   mm    qfs1     
/dev/global/dsk/d5s0  12   mr    qfs1     
/dev/global/dsk/d6s0  13   mr    qfs1     
/dev/global/dsk/d7s0  14   mr    qfs1     

Example 2. CODE EXAMPLE 2-23 is an example mcf file entry for a Sun StorEdge QFS highly available file system that uses Solaris Volume Manager metadevices. The example assumes that the Solaris Volume Manager metaset in use is named red.

CODE EXAMPLE 2-23 mcf File That Specifies Solaris Volume Manager Devices
Equipment            Eq   Eq     Family   Additional
Identifier           Ord  Type   Set      Parameters
-------------------- ---  ----   ------   ----------
qfs1                   1   ma    qfs1     on
/dev/md/red/dsk/d0s0  11   mm    qfs1     
/dev/md/red/dsk/d1s0  12   mr    qfs1     

Example 3. CODE EXAMPLE 2-24 is an example mcf file entry for a Sun StorEdge QFS highly available file system that uses VxVm devices.

CODE EXAMPLE 2-24 mcf File That Specifies VxVM Devices
Equipment            Eq   Eq     Family   Additional
Identifier           Ord  Type   Set      Parameters
-------------------- ---  ----   ------   ----------
qfs1                   1   ma    qfs1     on
/dev/vx/dsk/oradg/m1  11   mm    qfs1     
/dev/vx/dsk/oradg/m2  12   mr    qfs1     

Configuration Example for a Sun StorEdge QFS Shared File System on a Sun Cluster Platform

This example assumes that both ash and elm are nodes in a Sun Cluster. Host ash is the metadata server. The keyword shared in this example's mcf file indicates to the system that this is a shared file system. This example builds upon Example - Using the scdidadm(1M) Command in a Sun Cluster.


procedure icon  To Create an mcf File for a Sun StorEdge QFS Shared File System on a Sun Cluster

Make sure that you create the mcf file on the node that you want to designate as the metadata server. The procedure for creating an mcf file for a Sun StorEdge QFS shared file system on a Sun Cluster is as follows:

1. Use the scdidadm(1M) -L command to obtain information about the devices included in the Sun Cluster.

The scdidadm(1M) command administers the device identifier (DID) devices. The -L option lists all the DID device paths, including those on all nodes in the Sun Cluster. CODE EXAMPLE 2-25 shows the format output from all the /dev/did devices. This information is needed when you build the mcf file.

CODE EXAMPLE 2-25 format (1M) Command Output
ash# format /dev/did/rdsk/d4s2
selecting /dev/did/rdsk/d4s2
 
Primary label contents:
 
Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 64 sec 32>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =   64
nsect       =   32
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264       16.86GB    (17265/0/0) 35358720
  1        usr    wm   17265 - 34529       16.86GB    (17265/0/0) 35358720
  2     backup    wu       0 - 34529       33.72GB    (34530/0/0) 70717440
  3 unassigned    wu       0                0         (0/0/0)            0
  4 unassigned    wu       0                0         (0/0/0)            0
  5 unassigned    wu       0                0         (0/0/0)            0
  6 unassigned    wu       0                0         (0/0/0)            0
  7 unassigned    wu       0                0         (0/0/0)            0
 
ash# format /dev/did/rdsk/d5s2
selecting /dev/did/rdsk/d5s2
 
Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 192 sec 64>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =  192
nsect       =   64
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264      101.16GB    (17265/0/0) 212152320
  1        usr    wm   17265 - 34529      101.16GB    (17265/0/0) 212152320
  2     backup    wu       0 - 34529      202.32GB    (34530/0/0) 424304640
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
 
ash# format /dev/did/rdsk/d6s2
selecting /dev/did/rdsk/d6s2
 
Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 64 sec 32>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =   64
nsect       =   32
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264       16.86GB    (17265/0/0) 35358720
  1        usr    wm   17265 - 34529       16.86GB    (17265/0/0) 35358720
  2     backup    wu       0 - 34529       33.72GB    (34530/0/0) 70717440
  3 unassigned    wu       0                0         (0/0/0)            0
  4 unassigned    wu       0                0         (0/0/0)            0
  5 unassigned    wu       0                0         (0/0/0)            0
  6 unassigned    wu       0                0         (0/0/0)            0
  7 unassigned    wu       0                0         (0/0/0)            0
 
 
ash# format /dev/did/rdsk/d7s2
selecting /dev/did/rdsk/d7s2
 
Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 192 sec 64>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =  192
nsect       =   64
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264      101.16GB    (17265/0/0) 212152320
  1        usr    wm   17265 - 34529      101.16GB    (17265/0/0) 212152320
  2     backup    wu       0 - 34529      202.32GB    (34530/0/0) 424304640
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
 
ash# format /dev/did/rdsk/d8s2
selecting /dev/did/rdsk/d8s2
 
Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 128 sec 128>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =  128
nsect       =  128
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264      134.88GB    (17265/0/0) 282869760
  1        usr    wm   17265 - 34529      134.88GB    (17265/0/0) 282869760
  2     backup    wm       0 - 34529      269.77GB    (34530/0/0) 565739520
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
 
 
ash# format /dev/did/rdsk/d9s2
selecting /dev/did/rdsk/d9s2
 
Volume name = <        >
ascii name  = <SUN-T300-0118 cyl 34530 alt 2 hd 128 sec 128>
pcyl        = 34532
ncyl        = 34530
acyl        =    2
nhead       =  128
nsect       =  128
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 17264      134.88GB    (17265/0/0) 282869760
  1        usr    wm   17265 - 34529      134.88GB    (17265/0/0) 282869760
  2     backup    wu       0 - 34529      269.77GB    (34530/0/0) 565739520
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0

The format(1M) command reveals the space available on a device, but it does not reveal whether a disk is mirrored or striped. Put the file system's mm devices on mirrored (RAID-1) disks. The mm devices should constitute about 10% of the space allocated for the entire file system. CODE EXAMPLE 2-25's format(1M) output reveals the following information that is used when writing the mcf file shown in CODE EXAMPLE 2-26:

2. Make an ma entry for the file system.

In this line entry, make sure to include the shared keyword in the Additional Parameters field.

3. Make an mm entry listing the partition(s) that comprise the metadata for the qfs1 file system.

4. Make a series of mr entries listing the partitions that comprise the file data for the qfs1 file system.

CODE EXAMPLE 2-26 shows the mcf file.

CODE EXAMPLE 2-26 mcf File on Metadata Server ash
Equipment            Eq   Eq     Family   Additional
Identifier           Ord  Type   Set      Parameters
-------------------- ---  ----   ------   ----------
#
# Family Set sqfs1 (shared FS for SunCluster)
#
sqfs1                500   ma    sqfs1    shared
/dev/did/dsk/d4s0    501   mm    sqfs1    -
/dev/did/dsk/d6s0    502   mm    sqfs1    -
/dev/did/dsk/d8s0    503   mr    sqfs1    -
/dev/did/dsk/d9s0    504   mr    sqfs1    -
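The "about 10%" metadata guideline mentioned earlier can be turned into a rough sizing sketch. The figures are illustrative; actual metadata requirements depend on file counts and file sizes.

```shell
# Rough sizing sketch for the "about 10%" metadata guideline.
# Figures are illustrative; real metadata needs depend on file counts
# and file sizes.
mm_size_gb() {
  awk -v total="$1" 'BEGIN { printf "%.2f\n", total * 0.10 }'
}

# The two mr slices (d8s0 and d9s0) are about 134.88 GB each:
mm_size_gb 269.76   # -> 26.98, so roughly 27 GB of mirrored mm space
```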


(Optional) Editing mcf Files on Other Hosts

Perform this task if you are configuring one of the following types of file systems:

The mcf file lines that define a particular file system must be identical in the mcf file on each host system that supports the file system. Only one mcf file can reside on a host. Because you can have other, additional Sun StorEdge QFS file systems defined in an mcf file, the mcf files on each host might not be identical.


procedure icon  To Edit mcf Files on Other Hosts in a Sun Cluster for a Sun StorEdge QFS Highly Available File System

Perform this procedure for a Sun StorEdge QFS highly available file system on Sun Cluster hosts.

1. Log in to a Sun Cluster node that you want to support the file system you are configuring.

2. Become superuser.

3. Use vi(1) or another editor to create an mcf file on that node.

If an mcf file already exists on the host, add the lines for the new file system to this mcf file.

4. Copy the lines that define the file system from the primary node's mcf file to this node's mcf file.

5. Repeat the preceding steps for each host that you want to support the file system.


procedure icon  To Edit mcf Files on Other Hosts for a Sun StorEdge QFS Shared File System

Perform this procedure for a shared file system on Solaris OS hosts or on Sun Cluster hosts.

1. Log into another host that you want to include in the file system.

2. Become superuser.

3. Use the format(1M) command to verify the presence of client host disks.

4. Use vi(1) or another editor to create an mcf file.

If an mcf file already exists on the host, add the lines for the new file system to this mcf file.

5. Issue the samfsconfig(1M) command.

Examine this command's output to locate the local device names for each additional host to be configured in the Sun StorEdge QFS shared file system.

6. Update the mcf file on other client hosts.

Any host system that wants to access or mount a shared file system must have that file system defined in its mcf file. The content of these mcf files differs depending on whether the Solaris OS or Sun Cluster hosts the file system, as follows:

Use vi(1) or another editor to edit the mcf file on one of the client host systems. The mcf file must be updated on all client hosts to be included in the Sun StorEdge QFS shared file system. The file system and disk declaration information must have the same data for the Family Set name, Equipment Ordinal, and Equipment Type as the configuration on the metadata server. The mcf files on the client hosts must also include the shared keyword. The device names, however, can change as controller assignments can change from host to host.

The samfsconfig(1M) command generates configuration information that can help you to identify the devices included in the Sun StorEdge QFS shared file system. Enter a separate samfsconfig(1M) command on each client host. Note that the controller number might not be the same controller number as on the metadata server because the controller numbers are assigned by each client host.

7. Repeat this procedure for each host that you want to include in the file system.

Examples

Example 1 - Solaris OS hosts. CODE EXAMPLE 2-27 shows how the samfsconfig(1M) command is used to retrieve device information for family set sharefs1 on client tethys. Note that tethys is a potential metadata server, so it is connected to the same metadata disks as titan.

CODE EXAMPLE 2-27 samfsconfig (1M) Command Example on tethys
tethys# samfsconfig /dev/dsk/*
#
# Family Set 'sharefs1' Created Wed Jun 27 19:33:50 2001
#
sharefs1                         10 ma sharefs1 on shared
/dev/dsk/c2t50020F23000065EEd0s6 11 mm sharefs1 on
/dev/dsk/c7t50020F2300005D22d0s6 12 mr sharefs1 on
/dev/dsk/c7t50020F2300006099d0s6 13 mr sharefs1 on
/dev/dsk/c7t50020F230000651Cd0s6 14 mr sharefs1 on

Edit the mcf file on client host tethys by copying the last five lines of output from the samfsconfig(1M) command into the mcf file on client host tethys. Verify the following:

CODE EXAMPLE 2-28 shows the resulting mcf file.

CODE EXAMPLE 2-28 mcf File for sharefs1 Client Host tethys
# Equipment                      Eq  Eq   Family   Dev   Add
# Identifier                     Ord Type Set      State Params
# ----------                     --- ---- ------   ----- ------
sharefs1                         10  ma   sharefs1 on    shared
/dev/dsk/c2t50020F23000065EEd0s6 11  mm   sharefs1 on
/dev/dsk/c7t50020F2300005D22d0s6 12  mr   sharefs1 on
/dev/dsk/c7t50020F2300006099d0s6 13  mr   sharefs1 on
/dev/dsk/c7t50020F230000651Cd0s6 14  mr   sharefs1 on

In CODE EXAMPLE 2-28, note that the Equipment Ordinal numbers match those of the example mcf file for metadata server titan. These Equipment Ordinal numbers must not already be in use on client host tethys or any other client host.

Example 2 - Solaris OS hosts. CODE EXAMPLE 2-29 shows how the samfsconfig(1M) command is used to retrieve device information for family set sharefs1 on client host mimas. Note that mimas can never become a metadata server, and it is not connected to the metadata disks.

CODE EXAMPLE 2-29 samfsconfig (1M) Command Example on mimas
mimas# samfsconfig /dev/dsk/*
#
# Family Set 'sharefs1' Created Wed Jun 27 19:33:50 2001
#
# Missing slices
# Ordinal 0
# /dev/dsk/c1t50020F2300005D22d0s6   12    mr   sharefs1   on
# /dev/dsk/c1t50020F2300006099d0s6   13    mr   sharefs1   on
# /dev/dsk/c1t50020F230000651Cd0s6   14    mr   sharefs1   on

In the output from the samfsconfig(1M) command on mimas, note that Ordinal 0, which is the metadata disk, is not present. Because devices are missing, the samfsconfig(1M) command comments out the elements of the file system and omits the file system Family Set declaration line. Make the following types of edits to the mcf file:

CODE EXAMPLE 2-30 shows the resulting mcf file for mimas.

CODE EXAMPLE 2-30 mcf File for Client Host mimas
# The mcf File For mimas
# Equipment                      Eq  Eq   Family   Device Addl
# Identifier                     Ord Type Set      State  Params
# ----------                     --- ---- ---      -----  ------
sharefs1                         10  ma   sharefs1 on     shared
nodev                            11  mm   sharefs1 on
/dev/dsk/c1t50020F2300005D22d0s6 12  mr   sharefs1 on
/dev/dsk/c1t50020F2300006099d0s6 13  mr   sharefs1 on
/dev/dsk/c1t50020F230000651Cd0s6 14  mr   sharefs1 on



Note - If you update a metadata server's mcf file at any time after the Sun SAM-QFS shared file system is mounted, make sure that you update the mcf files as necessary on all hosts that can access that shared file system.




(Optional) Creating the Shared Hosts File

Perform this task if you are configuring the following types of file systems:


procedure icon  To Create the Shared Hosts File on the Metadata Server

The system copies information from the hosts file to the shared hosts file in the shared file system at file system creation time. You update this information when you issue the samsharefs(1M) -u command.

1. Use the cd(1) command to change to directory /etc/opt/SUNWsamfs.

2. Use vi(1) or another editor to create an ASCII hosts file called hosts.fs-name.

For fs-name, specify the Family Set name of the Sun StorEdge QFS shared file system.

Comments are permitted in the hosts file. Comment lines must begin with a pound character (#). Characters to the right of the pound character are ignored.

3. Use the information in TABLE 2-3 to fill in the lines of the hosts file.

File hosts.fs-name contains configuration information pertaining to all hosts in the Sun StorEdge QFS shared file system. The ASCII hosts file defines the hosts that can share the Family Set name.

TABLE 2-3 shows the fields in the hosts file.

TABLE 2-3 Hosts File Fields

Field Number

Content

1

The Host Name field. This field must contain an alphanumeric host name. It defines the Sun StorEdge QFS shared file system hosts. You can use the output from the hostname(1) command to create this field.

2

The Host IP Addresses field. This field must contain a comma-separated list of host IP addresses. You can use the output from the ifconfig(1M) -a command to create this field. You can specify the individual addresses in one of the following ways:

  • Dotted-decimal IP address form
  • IP version 6 hexadecimal address form
  • A symbolic name that the local domain name service (DNS) can resolve to a particular host interface

The metadata server uses this field to determine whether a host is allowed to connect to the Sun StorEdge QFS shared file system. If the metadata server receives a connect attempt from any interface not listed in this field, it rejects the connection attempt. Conversely, use care when adding elements here because the metadata server accepts any host with an IP address that matches an address in this field.

The client hosts use this field to determine the metadata server interfaces to use when attempting to connect to the metadata server. Each host evaluates the addresses from left to right, and the connection is made using the first responding address in the list.

3

The Server field. This field must contain either a dash character (-) or an integer ranging from 0 through n. The - and the 0 are equivalent.

If the Server field is a nonzero integer, the host is a potential metadata server. The rest of the row defines the server as a metadata host. The metadata server processes all metadata modifications for the file system. At any one time there is at most one metadata server host, and that metadata server supports archiving, staging, releasing, and recycling for a Sun SAM-QFS shared file system.

If the Server field is - or 0, the host is not eligible to be a metadata server.

4

Reserved for future use by Sun Microsystems. This field must contain either a dash character (-) or a 0. The - and the 0 are equivalent.

5

The Server Host field. This field can contain either a blank or the server keyword in the row that defines the active metadata server. Only one row in the hosts file can contain the server keyword. This field must be blank in all other rows.


The system reads and manipulates the hosts file. You can use the samsharefs(1M) command to examine metadata server and client host information about a running system.
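For example, on a running system you can display the current metadata server and client host information for the sharefs1 file system (family set name assumed from the examples in this section) by entering the samsharefs(1M) command on the metadata server or a potential metadata server:

# samsharefs sharefs1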

Example for Solaris OS Hosts

CODE EXAMPLE 2-31 is an example hosts file that shows four hosts.

CODE EXAMPLE 2-31 Sun StorEdge QFS Shared File System Hosts File Example
# File /etc/opt/SUNWsamfs/hosts.sharefs1
# Host   Host IP                           Server   Not  Server
# Name   Addresses                         Priority Used Host
# ----   --------------------------------- -------- ---- -----
titan    172.16.0.129,titan.xyzco.com      1        -    server
tethys   172.16.0.130,tethys.xyzco.com     2        -
mimas    mimas.xyzco.com                   -        -
dione    dione.xyzco.com                   -        -

CODE EXAMPLE 2-31 shows a hosts file that contains fields of information and comment lines for the sharefs1 file system. In this example, the number 1 in the Server Priority field defines titan as the primary metadata server. If titan is down, the number 2 in this field indicates that tethys is the next metadata server. Note that neither dione nor mimas can ever be a metadata server.

Example for Sun Cluster Hosts

If you are configuring a Sun StorEdge QFS shared file system in a Sun Cluster, every host is a potential metadata server. The hosts files and the local hosts configuration files must contain node names in the Host Name field and Sun Cluster private interconnect names in the Host IP Addresses field.

CODE EXAMPLE 2-32 shows the local hosts configuration file for a shared file system, sharefs1. This file system's participating hosts are Sun Cluster nodes scnode-A and scnode-B. Each node's private interconnect name is listed in the Host IP Addresses field.

CODE EXAMPLE 2-32 Sun StorEdge QFS Shared File System Hosts File Example
# File /etc/opt/SUNWsamfs/hosts.sharefs1
# Host   Host IP                           Server   Not  Server
# Name   Addresses                         Priority Used Host
# ----   --------------------------------- -------- ---- -----
scnode-A clusternode1-priv                 1        -    server
scnode-B clusternode2-priv                 2        -


procedure icon  (Optional) To Create the Local Hosts File on a Client

Perform this procedure under the following circumstances:

1. Create the local hosts configuration file on the client host.

Using vi(1) or another editor, create an ASCII local hosts configuration file that defines the host interfaces that the metadata server and the client hosts can use when accessing the file system. The local hosts configuration file must reside in the following location:

/etc/opt/SUNWsamfs/hosts.fsname.local

For fsname, specify the Family Set Name of the Sun StorEdge QFS shared file system.

Comments are permitted in the local host configuration file. Comment lines must begin with a pound character (#). Characters to the right of the pound character are ignored.

TABLE 2-4 shows the fields in the local hosts configuration file.

TABLE 2-4 Local Hosts Configuration File Fields

Field Number

Content

1

The Host Name field. This field must contain the alphanumeric name of a metadata server or potential metadata server that is part of the Sun StorEdge QFS shared file system.

2

The Host Interfaces field. This field must contain a comma-separated list of host interface addresses. You can use the output from the ifconfig(1M) -a command to create this field. You can specify the individual interfaces in one of the following ways:

  • Dotted-decimal IP address form
  • IP version 6 hexadecimal address form
  • A symbolic name that the local domain name service (DNS) can resolve to a particular host interface

Each host uses this field to determine which host interfaces it tries when connecting to the named host. The system evaluates the addresses from left to right, and the connection is made using the first responding address in the list that is also included in the shared hosts file.


2. Repeat this procedure for each client host that you want to include in the Sun StorEdge QFS shared file system.

Obtaining Addresses

The information in this section might be useful when you are debugging.

In a Sun StorEdge QFS shared file system, each client host obtains the list of metadata server IP addresses from the shared hosts file.

The metadata server and the client hosts use the shared hosts file on the metadata server and the hosts.fsname.local file on each client host (if it exists) to determine the host interface to use when accessing the metadata server. This process is as follows (note that client, as in network client, is used to refer to both client hosts and the metadata server host in the following process):

1. The client obtains the list of metadata server host IP interfaces from the file system's on-disk shared hosts file. To examine this file, issue the samsharefs(1M) command from the metadata server or from a potential metadata server.

2. The client searches for an /etc/opt/SUNWsamfs/hosts.fsname.local file. Depending on the outcome of the search, one of the following occurs:

If no hosts.fsname.local file exists, the client attempts to connect to each address in the shared hosts file, in order, until it succeeds in connecting to the server.

If a hosts.fsname.local file exists, the client does the following:

i. It compares the list of addresses for the metadata server from both the shared hosts file on the file system and the hosts.fsname.local file.

ii. It builds a list of addresses that are present in both places, and then it attempts to connect to each of these addresses, in turn, until it succeeds in connecting to the server. If the order of the addresses differs in these files, the client uses the ordering in the hosts.fsname.local file.
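The candidate-list construction in step ii can be sketched in shell. The address lists below are assumed for illustration (they mirror the titan example later in this section); this is not the actual client implementation.

```shell
# Sketch only: build the list of connection candidates the way a client
# does -- keep the addresses present in BOTH files, in the local file's order.
shared="titan.xyzco.com 172.16.0.129"      # addresses from the shared hosts file (assumed)
local_list="172.16.0.129 titan.xyzco.com"  # addresses from hosts.fsname.local (assumed)

candidates=""
for addr in $local_list; do
  case " $shared " in
    *" $addr "*) candidates="$candidates $addr" ;;  # present in both: keep local order
  esac
done
echo "Try in order:$candidates"
```

With these lists, the client would try 172.16.0.129 first and fall back to titan.xyzco.com.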

Example

This example expands on FIGURE 2-1. CODE EXAMPLE 2-31 shows the hosts file for this configuration. FIGURE 2-2 shows the interfaces to these systems.

  FIGURE 2-2 Network Interfaces

Figure of a shared Sun SAM-QFS environment showing public and private networks.

Systems titan and tethys share a private network connection with interfaces 172.16.0.129 and 172.16.0.130. To guarantee that titan and tethys always communicate over their private network connection, the system administrator has created identical copies of /etc/opt/SUNWsamfs/hosts.sharefs1.local on each system. CODE EXAMPLE 2-33 shows the information in these files.

CODE EXAMPLE 2-33 File hosts.sharefs1.local on Both titan and tethys
# This is file /etc/opt/SUNWsamfs/hosts.sharefs1.local
# Host Name    Host Interfaces
# ---------    ---------------
titan          172.16.0.129
tethys         172.16.0.130

Systems mimas and dione are not on the private network. To guarantee that they connect to titan and tethys through titan's and tethys' public interfaces, and never attempt to connect to titan's or tethys' unreachable private interfaces, the system administrator has created identical copies of /etc/opt/SUNWsamfs/hosts.sharefs1.local on mimas and dione. CODE EXAMPLE 2-34 shows the information in these files.

CODE EXAMPLE 2-34 File hosts.sharefs1.local on Both mimas and dione
# This is file /etc/opt/SUNWsamfs/hosts.sharefs1.local
# Host Name    Host Interfaces
# ----------   --------------
titan          titan.xyzco.com
tethys         tethys.xyzco.com


Initializing the Environment

This procedure initializes the environment.


procedure icon  To Initialize the Environment

single-step bullet  Type the samd(1M) config command to initialize the Sun StorEdge QFS environment.

For example:

# samd config

Repeat this command on each host if you are configuring a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.


(Optional) Editing the defaults.conf File

The /opt/SUNWsamfs/examples/defaults.conf file contains default settings for the Sun StorEdge QFS environment. You can change these settings at any time after the initial installation. If you want to change any default settings now, examine the defaults.conf(4) man page to discern the types of behaviors this file controls.

Perform this task if you want to change system default values.


procedure icon  To Set Up Default Values

1. Read the defaults.conf(4) man page and examine this file to determine if you want to change any of the defaults.

2. Use the cp(1) command to copy /opt/SUNWsamfs/examples/defaults.conf to its functional location.

For example:

# cp /opt/SUNWsamfs/examples/defaults.conf /etc/opt/SUNWsamfs/defaults.conf

3. Use vi(1) or another editor to edit the file.

Edit the lines that control aspects of the system that you want to change. Remove the pound character (#) from column 1 of the lines you change.

For example, if you are configuring a Sun StorEdge QFS shared file system in a Sun Cluster, CODE EXAMPLE 2-35 shows defaults.conf entries that are helpful when debugging.

CODE EXAMPLE 2-35 defaults.conf Entries for Debugging
# File defaults.conf
trace
all=on
endtrace

4. Use the samd(1M) config command to restart the sam-fsd(1M) daemon and enable the daemon to recognize the changes in the defaults.conf file.

5. (Optional) Repeat this procedure for each host that you want to include in a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.

For debugging purposes, the defaults.conf file should be the same on all hosts.


Verifying the License and mcf Files

At this point in the installation and configuration process, the following files exist on each Sun StorEdge QFS host:

The procedures in this section show you how to verify the correctness of these configuration files.

Perform these verifications on all hosts if you are configuring a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.


procedure icon  To Verify the License File

single-step bullet  Enter the samcmd(1M) l (lowercase L) command to verify the license file.

The samcmd(1M) output includes information about features that are enabled. If the output you receive is not similar to that shown in CODE EXAMPLE 2-36, return to Enabling the Sun StorEdge QFS Software License.

CODE EXAMPLE 2-36 Using samcmd(1M)
# samcmd l
License information samcmd     4.2    Fri Aug 27 16:24:12 2004
hostid = xxxxxxx
License never expires
Fast file system feature enabled
QFS stand alone feature enabled
Shared filesystem support enabled
SAN API support enabled


procedure icon  To Verify the mcf File

single-step bullet  Enter the sam-fsd(1M) command to verify the mcf file.

Examine the output for errors, as follows:

If your mcf file has errors, refer to Defining the Sun StorEdge QFS Configuration By Creating the mcf File and to the mcf(4) man page for information about how to create this file correctly.



Note - If you change the mcf file after the Sun StorEdge QFS file system is in use, you must convey the new mcf specifications to the Sun StorEdge QFS software. For information about propagating mcf file changes to the system, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.




(Optional) Creating the samfs.cmd File

You can create the /etc/opt/SUNWsamfs/samfs.cmd file as the place from which the system reads mount parameters. If you are configuring multiple Sun StorEdge QFS systems with multiple mount parameters, consider creating this file.

You can specify mount parameters in the following ways:

You can manage certain features more easily from a samfs.cmd file. These features include the following:

For more information about the /etc/vfstab file, see Updating the /etc/vfstab File and Creating the Mount Point. For more information about the mount(1M) command, see the mount_samfs(1M) man page.


procedure icon  To Create the samfs.cmd File

1. Use vi(1) or another editor to create the samfs.cmd file.

Create lines in the samfs.cmd file to control mounting, performance features, or other aspects of file system management. For more information about the samfs.cmd file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide, or see the samfs.cmd(4) man page.

CODE EXAMPLE 2-38 shows a samfs.cmd file for a Sun StorEdge QFS file system.

CODE EXAMPLE 2-38 Example samfs.cmd File for a Sun StorEdge QFS File System
qwrite   # Global mount option. Enables qwrite for all file systems
fs=qfs1  # Enables mount options for the qfs1 file system only
   trace # Enables file system tracing for qfs1 only

2. (Optional) Copy lines, as necessary, to the samfs.cmd file on other hosts.

Perform this step if you are creating a multihost file system.

If you have created a samfs.cmd file on one host in a Sun Cluster to describe a particular file system's mount parameters, copy those lines to samfs.cmd files on all the nodes that can access that file system.

For debugging purposes, the samfs.cmd file, as it pertains to a specific file system, should be the same on all hosts. For example, if the qfs3 file system is accessible from all nodes in a Sun Cluster, then the lines in the samfs.cmd file that describe the qfs3 file system should be identical on all the nodes in the Sun Cluster.

Depending on your site needs, it might be easier to manage mount options from the samfs.cmd file rather than from the /etc/vfstab file. The /etc/vfstab file overrides the samfs.cmd file in the event of conflicts.

For more information about mount options, see Updating the /etc/vfstab File and Creating the Mount Point.


Updating the /etc/vfstab File and Creating the Mount Point

This task shows you how to edit the /etc/vfstab file.



Note - Even though /global is used in this chapter's examples as the mount point for file systems mounted in a Sun Cluster environment, it is not required. You can use any mount point.



TABLE 2-5 shows the values you can enter in the fields in the /etc/vfstab file.

TABLE 2-5 Fields in the /etc/vfstab File

Field

Field Title and Contents

1

Device to Mount. The name of the Sun StorEdge QFS file system to mount. This must be the same as the file system's Family Set name specified in the mcf file.

2

Device to fsck(1M). Must be a dash (-) character. The dash indicates that there are no options. This prevents the Solaris system from performing an fsck(1M) on the Sun StorEdge QFS file system. For more information about this process, see the fsck(1M) or samfsck(1M) man page.

3

Mount Point. Examples:

  • /qfs1 for a local Sun StorEdge QFS file system on a single host.
  • /global/qfs1 for a Sun StorEdge QFS shared file system in a Sun Cluster.
  • /global/qfs1 for a Sun StorEdge QFS highly available file system in a Sun Cluster.

4

File System Type. Must be samfs.

5

fsck(1M) Pass. Must be a dash (-) character. A dash indicates that there are no options.

6

Mount at Boot. Specify either yes or no.

  • Specifying yes in this field requests that the Sun StorEdge QFS file system be mounted automatically at boot time. Do not specify yes if you are creating a file system for use in a Sun Cluster.
  • Specifying no in this field indicates that you do not want to mount the file system automatically. Specify no in this field if you are creating a file system for use in a Sun Cluster to indicate that the file system is under Sun Cluster control.

For information about the format of these entries, see the mount_samfs(1M) man page.

7

Mount Parameters. A list of comma-separated parameters (with no spaces) that are used in mounting the file system. You can specify mount options on the mount(1M) command, in the /etc/vfstab file, or in a samfs.cmd file. Mount options specified on the mount(1M) command override those specified in the /etc/vfstab file and in the samfs.cmd file. Mount options specified in the /etc/vfstab file override those in the samfs.cmd file.

For example, stripe=1 specifies a stripe width of one DAU. For a list of available mount options, see the mount_samfs(1M) man page.



procedure icon  To Update the /etc/vfstab File and Create the Mount Point

1. Use vi(1) or another editor to open the /etc/vfstab file and create an entry for each Sun StorEdge QFS file system.

CODE EXAMPLE 2-39 shows header fields and entries for a local Sun StorEdge QFS file system.

CODE EXAMPLE 2-39 Example /etc/vfstab File Entries for a Sun StorEdge QFS File System
#DEVICE    DEVICE   MOUNT  FS    FSCK  MOUNT    MOUNT
#TO MOUNT  TO FSCK  POINT  TYPE  PASS  AT BOOT  PARAMETERS
#
qfs1       -        /qfs1  samfs -     yes      stripe=1

TABLE 2-5 shows the various fields in the /etc/vfstab file and their contents.

If you are configuring a file system for a Sun Cluster environment, the mount options that are required, or are recommended, differ depending on the type of file system you are configuring. TABLE 2-6 explains the mount options.

TABLE 2-6 Mount Options for a Sun Cluster File System

Sun StorEdge QFS shared file system
  Required options: shared
  Recommended options: forcedirectio, sync_meta=1, mh_write, qwrite, nstreams=1024, rdlease=300, aplease=300, wrlease=300

Sun StorEdge QFS shared file system to support Oracle Real Application Clusters database files
  Required options: shared, forcedirectio, sync_meta=1, mh_write, qwrite, nstreams=1024, stripe>=1, rdlease=300, aplease=300, wrlease=300
  Recommended options: none

Sun StorEdge QFS highly available file system
  Required options: none
  Recommended options: sync_meta=1

You can specify most of the mount options mentioned in TABLE 2-6 in either the /etc/vfstab file or the samfs.cmd file. The shared option, however, must be specified in the /etc/vfstab file.
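For illustration only (the family set name, mount point, and option choices here are assumed), an /etc/vfstab entry for a shared file system in a Sun Cluster might read as follows:

sharefs1   -    /global/sharefs1  samfs  -    no       shared,mh_write,qwrite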



Tip - In addition to the mount options mentioned in TABLE 2-6, you can also specify the trace mount option for configuration debugging purposes.



2. Use the mkdir(1) command to create the file system mount point.

The mount point location differs depending on where the file system is to be mounted. The following examples illustrate this.

Example 1. This example assumes that /qfs1 is the mount point of the qfs1 file system. This is a local file system. It can exist on a standalone server or on a local node in a Sun Cluster. For example:

# mkdir /qfs1

Example 2. This example assumes that /global/qfs1 is the mount point of the qfs1 file system, which is a Sun StorEdge QFS shared file system to be mounted on a Sun Cluster:

# mkdir /global/qfs1



Note - If you configured multiple mount points, repeat these steps for each mount point, using a different mount point (such as /qfs2) and Family Set name (such as qfs2) each time.



3. (Optional) Repeat the preceding steps for all hosts if you are configuring a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.

For debugging purposes, if you are configuring a Sun StorEdge QFS shared file system, the mount options should be the same on all hosts that can mount the file system.


Initializing the File System

This procedure shows how to use the sammkfs(1M) command and the Family Set names that you have defined to initialize a file system.



Note - The sammkfs(1M) command sets one tuning parameter, the disk allocation unit (DAU). You cannot reset this parameter without reinitializing the file system. For information about how the DAU affects tuning, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide or see the sammkfs(1M) man page.




procedure icon  To Initialize a File System

single-step bullet  Use the sammkfs(1M) command to initialize a file system for each Family Set defined in the mcf file.



caution icon

Caution - Running sammkfs(1M) creates a new file system. It removes all references to the data currently contained in the partitions associated with the file system in the /etc/opt/SUNWsamfs/mcf file.



Example for a Sun StorEdge QFS File System

CODE EXAMPLE 2-40 shows the command to use to initialize a Sun StorEdge QFS file system with the Family Set name of qfs1.

CODE EXAMPLE 2-40 Initializing Example File System qfs1
# sammkfs -a 128 qfs1
Building `qfs1' will destroy the contents of devices:
                /dev/dsk/c1t0d0s0
                /dev/dsk/c3t1d0s6
                /dev/dsk/c3t1d1s6
                /dev/dsk/c3t2d0s6
Do you wish to continue? [y/N]

Enter y in response to this message to continue the file system creation process.

Example for a Sun StorEdge QFS Shared File System

If you are configuring a Sun StorEdge QFS shared file system, enter the sammkfs(1M) command on the metadata server only.

Enter the sammkfs(1M) command at the system prompt. The -S option specifies that the file system be a Sun StorEdge QFS shared file system. Use this command in the following format:

sammkfs -S -a allocation_unit fs_name

TABLE 2-7 sammkfs(1M) Command Arguments

Argument

Meaning

allocation_unit

Specifies the disk allocation unit (DAU) size in units of 1024-byte (1-kilobyte) blocks. The specified allocation_unit must be a multiple of 8 kilobytes. For more information, see the sammkfs(1M) man page.

fs_name

Family Set name of the file system as defined in the mcf file.


 

For more information about the sammkfs(1M) command, see the sammkfs(1M) man page. For example, you can use the following sammkfs(1M) command to initialize a Sun StorEdge QFS shared file system and identify it as shared:

# sammkfs -S -a 512 sharefs1

If the shared keyword appears in the mcf file, the file system must be initialized as a shared file system by using the -S option to the sammkfs(1M) command. You cannot mount a file system as shared if it was not initialized as shared.

If you are initializing a file system as a Sun StorEdge QFS shared file system, the file /etc/opt/SUNWsamfs/hosts.sharefs1 must exist at the time you issue the sammkfs(1M) command. The sammkfs(1M) command uses the hosts file when it creates the file system. You can use the samsharefs(1M) command to replace or update the contents of the hosts file at a later date.
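For example, after editing /etc/opt/SUNWsamfs/hosts.sharefs1 on the metadata server, you could apply the new contents to the running file system with the samsharefs(1M) -u command described earlier in this chapter:

# samsharefs -u sharefs1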


(Optional) Verifying That the Daemons Are Running

Perform this task if you are configuring the following types of file systems:


procedure icon  To Verify the Daemons

Perform these steps on each host that can mount the file system.

1. Use the ps(1) and grep(1) commands to verify that the sam-sharefsd daemon is running for this file system.

CODE EXAMPLE 2-41 shows these commands.

CODE EXAMPLE 2-41 Output from the ps(1) and grep(1) Commands
# ps -ef | grep sam-sharefsd
root 26167 26158  0 18:35:20 ?        0:00 sam-sharefsd sharefs1
root 27808 27018  0 10:48:46 pts/21   0:00 grep sam-sharefsd

CODE EXAMPLE 2-41 shows that the sam-sharefsd daemon is active for the sharefs1 file system. If this is the case for your system, you can skip the remaining step in this procedure. If, however, the output returned on your system does not show that the sam-sharefsd daemon is active for your Sun StorEdge QFS shared file system, perform the remaining step and see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide for diagnostic procedures.

2. (Optional) Determine whether the sam-fsd daemon is running.

Perform this step if the previous step's output indicates that the sam-sharefsd daemon is not running.

a. Use the ps(1) and grep(1) commands to verify that the sam-fsd daemon is running for this file system.

b. Examine the output.

CODE EXAMPLE 2-42 shows sam-fsd output that indicates that the daemon is running.

CODE EXAMPLE 2-42 sam-fsd(1M) Output That Shows the sam-fsd Daemon Is Running
cur% ps -ef | grep sam-fsd      
 user1 16435 16314  0 16:52:36 pts/13   0:00 grep sam-fsd
    root   679     1  0   Aug 24 ?        0:00 /usr/lib/fs/samfs/sam-fsd


Mounting the File System

The mount(1M) command mounts a file system. It also reads the /etc/vfstab and samfs.cmd configuration files. For information about the mount(1M) command, see the mount_samfs(1M) man page.

Use one or more of the procedures that follow to mount your file system. The introduction to each procedure explains the file system to which it pertains.


procedure icon  To Mount the File System on One Host

Perform this procedure on all Sun StorEdge QFS file systems, as follows:

1. Use the mount(1M) command to mount the file system.

Specify the file system mount point as the argument. For example:

# mount /qfs1

2. Use the mount(1M) command with no arguments to verify the mount.

This step confirms that the file system is mounted and shows the mount options in effect. CODE EXAMPLE 2-43 shows the output from a mount(1M) command issued to verify whether example file system qfs1 is mounted.

CODE EXAMPLE 2-43 Using the mount(1M) Command to Verify That a File System Is Mounted
# mount
<<< information deleted >>>
/qfs1 on qfs1 read/write/setuid/dev=8001b1 on Mon Jan 14 12:21:03 2002
<<< information deleted >>>

3. (Optional) Use the chmod(1) and chown(1) commands to change the permissions and ownership of the file system's root directory.

If this is the first time the file system has been mounted, it is typical to perform this step. CODE EXAMPLE 2-44 shows the commands to use to change file system permissions and ownership.

CODE EXAMPLE 2-44 Using chmod(1) and chown(1) to Change File System Permissions and Ownership
# chmod 755 /qfs1
# chown root:other /qfs1


procedure icon  (Optional) To Verify Metadata Server Changes

Perform this procedure if you are creating a Sun StorEdge QFS shared file system in either a Solaris OS or in a Sun Cluster environment. This procedure ensures that the file system is configured to support changing the metadata server.

1. Log in to the metadata server as superuser.

2. Use the samsharefs(1M) command to change the metadata server.

For example:

ash# samsharefs -s oak qfs1

3. Use the ls(1) -al command to verify that the files are accessible on the new metadata server.

For example:

oak# ls -al /qfs1

4. Repeat Step 2 and Step 3.

If you are creating a Sun StorEdge QFS shared file system in a Solaris OS environment, repeat these commands on each metadata server or potential metadata server.

If you are creating a Sun StorEdge QFS shared file system in a Sun Cluster, repeat these steps on all hosts that can mount the file system.
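For example, to return the metadata server role to the original host (using the host names ash and oak from the example above) and verify access again:

oak# samsharefs -s ash qfs1
ash# ls -al /qfs1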


(Optional) Configuring the SUNW.qfs Resource Type

Perform this task if you are configuring a Sun StorEdge QFS shared file system on a Sun Cluster platform.


procedure icon  To Enable a Sun StorEdge QFS Shared File System as a SUNW.qfs(5) Resource

1. Log in to the metadata server as superuser.

2. Use the scrgadm(1M) -p command and search for the SUNW.qfs(5) resource type.

This step verifies that the SUNW.qfs resource type has been registered. For example:

metadataserver# scrgadm -p | grep SUNW.qfs

If the SUNW.qfs resource type is missing, issue the following command:

metadataserver# scrgadm -a -t SUNW.qfs

3. Use the scrgadm(1M) command to set the FilesystemCheckCommand property of the SUNW.qfs(5) resource type to /bin/true.

The SUNW.qfs(5) resource type is part of the Sun StorEdge QFS software package. Configuring the resource type for use with your shared file system makes the shared file system's metadata server highly available. Sun Cluster scalable applications can then access data contained in the file system. For more information, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.

CODE EXAMPLE 2-45 shows how to use the scrgadm(1M) command to register and configure the SUNW.qfs resource type. In this example, the nodes are scnode-A and scnode-B. /global/sharefs1 is the mount point as specified in the /etc/vfstab file.

CODE EXAMPLE 2-45 Configuring a SUNW.qfs Resource
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
	   -x QFSFileSystem=/global/sharefs1


(Optional) Configuring the HAStoragePlus Resource

Perform this task if you are configuring a Sun StorEdge QFS highly available file system on a Sun Cluster platform.


procedure icon  To Configure a Sun StorEdge QFS Highly Available File System as an HAStoragePlus Resource

single-step bullet  Use the scrgadm(1M) command to set the FilesystemCheckCommand property of HAStoragePlus to /bin/true.

All other resource properties for HAStoragePlus apply as specified in SUNW.HAStoragePlus(5).

The following example command shows how to use the scrgadm(1M) command to configure an HAStoragePlus resource:

# scrgadm -a -g qfs-rg -j ha-qfs -t SUNW.HAStoragePlus \
        -x FilesystemMountPoints=/global/qfs1 \
        -x FilesystemCheckCommand=/bin/true


(Optional) Sharing the File System With NFS Client Systems

Perform this task if you are configuring a file system and you want the file system to be NFS shared.

This procedure uses the Sun Solaris share(1M) command to make the file system available for mounting by remote systems. The share(1M) commands are typically placed in the /etc/dfs/dfstab file and are executed automatically by the Sun Solaris OS when you enter init(1M) state 3.


procedure icon  To NFS Share the File System in a Sun Cluster Environment

The following procedure explains in general terms how to NFS share a file system in a Sun Cluster environment. For more information about NFS sharing file systems that are controlled by HAStoragePlus, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS, and see your NFS documentation.

1. Locate the dfstab.resource_name file.

The Pathprefix property of HAStoragePlus specifies the directory in which the dfstab.resource_name file resides.

2. Use vi(1) or another editor to add a share(1M) command to the Pathprefix/SUNW.nfs/dfstab.resource_name file.

For example, add a line like the following to NFS share the new file system:

share -F nfs -o rw /global/qfs1


procedure icon  To NFS Share the File System in a Solaris OS Environment

If you are configuring a Sun StorEdge QFS shared file system, you can perform this procedure from the metadata server or from one of the shared clients.

1. Use vi(1) or another editor to add a share(1M) command to the /etc/dfs/dfstab file.

For example, add a line like the following to direct the Solaris OS to NFS share the new Sun StorEdge QFS file system:

share -F nfs -o rw=client1:client2 -d "QFS" /qfs1

2. Use the ps(1) and grep(1) commands to determine whether or not nfs.server is running.

CODE EXAMPLE 2-46 shows these commands and their output.

CODE EXAMPLE 2-46 Commands and Output Showing NFS Activity
# ps -ef | grep nfsd
    root   694     1  0   Apr 29 ?        0:36 /usr/lib/nfs/nfsd -a 16
en17     29996 29940  0 08:27:09 pts/5    0:00 grep nfsd
# ps -ef | grep mountd
    root   406     1  0   Apr 29 ?       95:48 /usr/lib/autofs/automountd
    root   691     1  0   Apr 29 ?        2:00 /usr/lib/nfs/mountd
en17     29998 29940  0 08:27:28 pts/5    0:00 grep mountd

In CODE EXAMPLE 2-46, the lines that contain /usr/lib/nfs indicate that the NFS server is running.

3. (Optional) Start the NFS server.

Perform this step if nfs.server is not running. Use the following command:

# /etc/init.d/nfs.server start

4. (Optional) Type the share(1M) command at a root shell prompt.

Perform this step if you want to NFS share the new Sun StorEdge QFS file system immediately.

If there are no NFS shared file systems when the Sun Solaris OS boots, the NFS server is not started. CODE EXAMPLE 2-47 shows the commands to use to enable NFS sharing. You must change to run level 3 after adding the first share entry to the /etc/dfs/dfstab file.

CODE EXAMPLE 2-47 NFS Commands
# init 3
# who -r
.       run-level 3  Dec 12 14:39     3    2  2
# share
-          /qfs1  -   "QFS"

Some NFS mount parameters can affect the performance of an NFS mounted Sun StorEdge QFS file system. You can set these parameters in the /etc/vfstab file.

For more information about these parameters, see the mount_nfs(1M) man page.

5. Proceed to To NFS Mount the File System on NFS Clients in a Solaris OS Environment.


procedure icon  To NFS Mount the File System on NFS Clients in a Solaris OS Environment

If you are configuring a Sun StorEdge QFS shared file system, you can perform this procedure from the metadata server or from one of the shared clients.

1. On the NFS client systems, use vi(1) or another editor to edit the /etc/vfstab file and add a line to mount the server's Sun StorEdge QFS file system at a convenient mount point.

The following example line mounts server:/qfs1 on the /qfs1 mount point:

server:/qfs1    -    /qfs1    nfs    -   no intr,timeo=60

In this example, server:/qfs1 is mounted on /qfs1, and information is entered into the /etc/vfstab file.

2. Save and close the /etc/vfstab file.

3. Enter the mount(1M) command.

The following mount(1M) command mounts the qfs1 file system:

client# mount /qfs1

The automounter can also do this, if you prefer. Follow your site procedures for adding server:/qfs1 to your automounter maps. For more information about automounting, see the automountd(1M) man page.
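If you use the automounter instead of a static /etc/vfstab entry, a direct map is one way to do this. The following is a sketch only; the map name auto_direct and the mount options are assumptions, so follow your site's automounter conventions:

```shell
# In /etc/auto_master, reference a direct map (assumed name auto_direct):
/-      auto_direct
# In /etc/auto_direct, map the local mount point to the NFS export:
/qfs1   -hard,intr      server:/qfs1
```

After editing the maps, restart or signal automountd(1M) according to your site procedures so that it picks up the new entries.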



Note - At times, there might be a significant delay in the Sun StorEdge QFS file system's response to NFS client requests. This can occur in a Sun StorEdge QFS shared file system. As a consequence, the system might generate an error instead of retrying until the operation completes.

To avoid this situation, Sun recommends that clients mount the file system with either the hard option enabled or with the soft, retrans, and timeo options enabled. If you use the soft option, also specify retrans=120 (or greater) and timeo=3000 (or greater).
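For example, an NFS client's /etc/vfstab entry that follows the soft-option recommendation above might look like the following sketch. The server and mount-point names are illustrative:

```shell
# /etc/vfstab entry on the NFS client using the recommended soft options
# (device to fsck is "-", fsck pass is "-", mount at boot is "no")
server:/qfs1  -  /qfs1  nfs  -  no  soft,retrans=120,timeo=3000
```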




(Optional) Bringing the Shared Resource Online

Perform this task if you are configuring the following types of file systems:


procedure icon  To Bring the Shared Resource Online

1. Log into the appropriate host.

You must perform this step with the file system mounted on all nodes. If it is not mounted, go back to Mounting the File System and follow the instructions there.

2. Use the scswitch(1M) command to bring the file system resource group online.

For example:

metadataserver# scswitch -Z -g qfs-rg

3. Use the scstat(1M) command to verify that the file system resource is online.

For example:

CODE EXAMPLE 2-48 Using scstat(1M)
metadataserver# scstat
< information deleted from this output >
-- Resources --
Resource Name    Node Name  State     Status Message
-------------    ---------  -----     --------------
Resource: qfs-res   ash     Online    Online
Resource: qfs-res   elm     Offline   Offline
Resource: qfs-res   oak     Offline   Offline


(Optional) Verifying the Resource Group on All Nodes

Perform this task if you are configuring the following types of file systems:


procedure icon  To Verify the Resource Group on All Nodes

1. From any node in the Sun Cluster, use the scswitch(1M) command to move the file system resource from one node to another.

For example:

server# scswitch -z -g qfs-rg -h elm

2. Use the scstat(1M) command to verify that the file system resource moved to a different node.

For example:

CODE EXAMPLE 2-49 Using scstat(1M)
server# scstat
-- Resources --
Resource Name    Node Name  State     Status Message
-------------    ---------  -----     --------------
Resource: qfs-res   ash     Offline   Offline
Resource: qfs-res   elm     Online    Online
Resource: qfs-res   oak     Offline   Offline

3. Repeat the preceding commands on each node in the cluster.


Establishing Periodic Dumps Using qfsdump(1M)

File systems are made up of directories, files, and links. The Sun StorEdge QFS file system keeps track of all the files in the .inodes file. The .inodes file resides on a separate metadata device. The file system writes all file data to the data devices.

It is important to use the qfsdump(1M) command periodically to create a dump file of metadata and file data. The qfsdump(1M) command saves the relative path information for each file contained in a complete file system or in a portion of a file system. This protects your data in the event of a disaster.

Create dump files at least once a day. The frequency depends on your site's requirements. By dumping file system data on a regular basis, you can restore old files and file systems. You can also move files and file systems from one server to another.

The following are some guidelines for creating dump files:

You can run the qfsdump(1M) command manually or automatically. Even if you implement this command to be run automatically, you might need to run it manually from time to time depending on your site's circumstances. In the event of a disaster, you can use the qfsrestore(1M) command to recreate your file system. You can also restore a single directory or file. For more information, see the qfsdump(1M) man page and see the Sun QFS, Sun SAM-FS, and Sun SAM-QFS Disaster Recovery Guide.

For more information about creating dump files, see the qfsdump(1M) man page. The following sections describe procedures for issuing this command both manually and automatically.


procedure icon  To Run the qfsdump(1M) Command Automatically

1. Make an entry in root's crontab file so that the cron daemon runs the qfsdump(1M) command periodically.

For example:

10 0 * * * (cd /qfs1; /opt/SUNWsamfs/sbin/qfsdump -f /dev/rmt/0cbn)

This entry executes the qfsdump(1M) command at 10 minutes after midnight. It uses the cd(1) command to change to the mount point of the qfs1 file system, and it executes the /opt/SUNWsamfs/sbin/qfsdump command to write the data to tape device /dev/rmt/0cbn.

2. (Optional) Using the previous step as a guide, make similar crontab file entries for each file system.

Perform this step if you have more than one Sun StorEdge QFS file system. Make sure to write each file system's dump to a separate dump file.
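For example, crontab entries for two file systems might look like the following sketch. The second file system name and the tape devices are illustrative; adjust the schedule so each dump goes to its own destination:

```shell
# Dump qfs1 at 00:10 and qfs2 at 00:30, each to its own tape device (illustrative)
10 0 * * * (cd /qfs1; /opt/SUNWsamfs/sbin/qfsdump -f /dev/rmt/0cbn)
30 0 * * * (cd /qfs2; /opt/SUNWsamfs/sbin/qfsdump -f /dev/rmt/1cbn)
```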


procedure icon  To Run the qfsdump(1M) Command Manually

1. Use the cd(1) command to go to the directory that contains the mount point for the file system.

For example:

# cd /qfs1

2. Use the qfsdump(1M) command to write a dump file to a file system outside of the one you are dumping.

For example:

# qfsdump -f /save/qfs1/dump_file
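To keep a history of dump files rather than overwriting a single file, you can embed the date in the dump file name. This is a sketch only; the destination path is an assumption:

```shell
# cd /qfs1
# qfsdump -f /save/qfs1/`date +%y%m%d`.dump
```

As with the previous example, the dump file must reside in a file system outside of the one being dumped.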


(Optional) Backing Up Configuration Files

Sun StorEdge QFS regularly accesses several files that have been created as part of this installation and configuration procedure. You should back up these files regularly to a file system that is outside the file system in which they reside. In the event of a disaster, you can restore these files from your backup copies.



Note - Sun Microsystems strongly recommends that you back up your environment's configuration files because they will be needed in the event of a file system disaster.



The following files are among those that you should back up regularly and whenever you modify them:

For more information about the files you should protect, see the Sun QFS, Sun SAM-FS, and Sun SAM-QFS Disaster Recovery Guide.
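As a sketch, the following commands copy configuration files to a backup location. The destination directory is an assumption, and the file list is illustrative only; back up every configuration file that applies to your site:

```shell
# mkdir -p /save/config
# cp /etc/opt/SUNWsamfs/mcf /save/config
# cp /etc/opt/SUNWsamfs/defaults.conf /save/config   (if this file exists at your site)
# cp /etc/vfstab /save/config
```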


(Optional) Configuring the Remote Notification Facility

The Sun StorEdge QFS software can be configured to notify you when potential problems occur in its environment. The system sends notification messages to a management station of your choice. The Simple Network Management Protocol (SNMP) software manages the exchange of information between network devices such as servers, automated libraries, and drives.

The Sun StorEdge QFS and Sun StorEdge SAM-FS Management Information Base (MIB) defines the types of problems, or events, that the Sun StorEdge QFS software can detect. The software can detect errors in configuration, tapealert(1M) events, and other atypical system activity. For complete information about the MIB, see /opt/SUNWsamfs/mibs/SUN-SAM-MIB.mib.

The following sections describe how to enable and how to disable remote notification.


procedure icon  To Enable Remote Notification

1. Ensure that the management station is configured and known to be operating correctly.

(Optional) Verifying the Network Management Station describes this prerequisite.

2. Use vi(1) or another editor to examine file /etc/hosts.

For example, CODE EXAMPLE 2-50 shows an /etc/hosts file that defines a management station. In this example, the management station's hostname is mgmtconsole.

CODE EXAMPLE 2-50 Example /etc/hosts File
999.9.9.9       localhost
999.999.9.999   loggerhost      loghost
999.999.9.998   mgmtconsole
999.999.9.9     samserver

Examine the /etc/hosts file to ensure that the management station to which notifications should be sent is defined. If it is not defined, add a line that defines the appropriate host.

3. Save your changes to /etc/hosts and exit the file.

4. Use vi(1) or another editor to open file /etc/opt/SUNWsamfs/scripts/sendtrap.

5. Locate the TRAP_DESTINATION=`hostname` directive in /etc/opt/SUNWsamfs/scripts/sendtrap.

This line specifies that the remote notification messages be sent to port 161 of the server upon which the Sun StorEdge QFS software is installed. Note the following:

For example:

TRAP_DESTINATION="localhost:161 doodle:163 mgmt_station:1162"

6. Locate the COMMUNITY="public" directive in /etc/opt/SUNWsamfs/scripts/sendtrap.

This line acts as a password. It prevents unauthorized viewing or use of SNMP trap messages. Examine this line and determine the following:

7. Save your changes to /etc/opt/SUNWsamfs/scripts/sendtrap and exit the file.


procedure icon  To Disable Remote Notification

The remote notification facility is enabled by default. If you want to disable remote notification, perform this procedure.

1. (Optional) Use the cp(1) command to copy file /opt/SUNWsamfs/examples/defaults.conf to /etc/opt/SUNWsamfs/defaults.conf.

Perform this step if file /etc/opt/SUNWsamfs/defaults.conf does not exist.

2. Use vi(1) or another editor to open file /etc/opt/SUNWsamfs/defaults.conf.

Find the line in defaults.conf that specifies SNMP alerts. The line is as follows:

#alerts=on

3. Edit the line to disable SNMP alerts.

Remove the # symbol and change on to off. After editing, the line is as follows:

alerts=off

4. Save your changes to /etc/opt/SUNWsamfs/defaults.conf and exit the file.

5. Use the samd(1M) config command to restart the sam-fsd(1M) daemon.

The format for this command is as follows:

# samd config

This command restarts the sam-fsd(1M) daemon and enables the daemon to recognize the changes in the defaults.conf file.


(Optional) Adding the Administrator Group

By default, only the superuser can execute Sun StorEdge QFS administrator commands. However, during installation you can create an administrator group. Members of the administrator group can execute all administrator commands except for star(1M), samfsck(1M), samgrowfs(1M), sammkfs(1M), and samd(1M). The administrator commands are located in /opt/SUNWsamfs/sbin.

After installing the package, you can use the set_admin(1M) command to add or remove the administrator group. You must be logged in as superuser to use the set_admin(1M) command. You can also undo the effect of this selection and make the programs in /opt/SUNWsamfs/sbin executable only by the superuser. For more information about this command, see the set_admin(1M) man page.


procedure icon  To Add the Administrator Group

1. Choose an administrator group name or select a group that already exists within your environment.

2. Use the groupadd(1M) command, or edit the /etc/group file.

The following is an entry from the /etc/group file that designates an administrator group for the Sun StorEdge QFS software. In this example, the samadm group consists of both the adm and operator users.

samadm::1999:adm,operator
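For example, the following commands create such a group with groupadd(1M) and then grant it access to the administrator commands with set_admin(1M). The group name and GID are illustrative:

```shell
# groupadd -g 1999 samadm
# /opt/SUNWsamfs/sbin/set_admin samadm
```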


Configuring System Logging

The Sun StorEdge QFS system logs errors, cautions, warnings, and other messages using the standard Sun Solaris syslog(3) interface. By default, the Sun StorEdge QFS facility is local7.


procedure icon  To Enable Logging

1. Use vi(1) or another editor to open the /etc/syslog.conf file.

Read in the line from the following file:

/opt/SUNWsamfs/examples/syslog.conf_changes

The line is similar, if not identical, to the following line:

local7.debug  /var/adm/sam-log



Note - The preceding entry is all one line and has a TAB character (not a space) between the fields.



This step assumes that you want to use local7, which is the default. If you set logging to something other than local7 in the /etc/syslog.conf file, edit the defaults.conf file and reset it there, too. For more information, see the defaults.conf(4) man page.

2. Append the logging line from /opt/SUNWsamfs/examples/syslog.conf_changes to your /etc/syslog.conf file.

CODE EXAMPLE 2-51 shows the commands to use to append the logging lines.

CODE EXAMPLE 2-51 Using cp(1) and cat(1) to Append Logging Lines to /etc/syslog.conf
# cp /etc/syslog.conf /etc/syslog.conf.orig
# cat /opt/SUNWsamfs/examples/syslog.conf_changes >> /etc/syslog.conf

3. Create an empty log file and send the syslogd process a HUP signal.

CODE EXAMPLE 2-52 shows the command sequence to create a log file in /var/adm/sam-log and send the HUP to the syslogd daemon.

CODE EXAMPLE 2-52 Creating an Empty Log File and Sending a HUP Signal to syslogd
# touch /var/adm/sam-log
# pkill -HUP syslogd

For more information, see the syslog.conf(4) and syslogd(1M) man pages.

4. (Optional) Use the log_rotate.sh(1M) command to enable log file rotation.

Log files can become very large, and the log_rotate.sh(1M) command can help in managing log files. For more information, see the log_rotate.sh(1M) man page.
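For example, you might run the rotation weekly from root's crontab file. The schedule below is an assumption, and the log file path matches the example used earlier in this chapter:

```shell
# Rotate the Sun StorEdge QFS log every Sunday at 03:10 (illustrative schedule)
10 3 * * 0 /opt/SUNWsamfs/sbin/log_rotate.sh /var/adm/sam-log
```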


(Optional) Configuring Other Products

The Sun StorEdge QFS installation and configuration process is complete. You can configure other Sun products at this time.

For example, if you want to configure an Oracle database, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS. The Oracle Real Application Clusters application is the only scalable application that the Sun StorEdge QFS software supports in Sun Cluster environments.