CHAPTER 2
Sun StorEdge QFS Initial Installation Procedure
This chapter describes the procedure for installing and configuring Sun StorEdge QFS standalone software for the first time. Use this procedure if this is the initial installation of the Sun StorEdge QFS standalone software package at your site. If you are upgrading Sun StorEdge QFS software on an existing server, see the Sun StorEdge QFS Upgrade Procedure.
The procedure in this chapter explains obtaining the packages, installing the software packages on your server or node, and configuring the software to match the hardware at your site.
You can install and configure your Sun StorEdge QFS file system entirely using Solaris Operating System (OS) commands, or you can use a combination of commands and the SAM-QFS Manager, which is a graphical user interface (GUI) configuration tool, to complete the procedure.
You must be logged in as superuser to complete most of the procedures in this chapter.
The chapter titled System Requirements and Preinstallation Tasks describes the items you need to verify before you install and configure the Sun StorEdge QFS software. If you have not yet completed the system verification steps, complete them now before you proceed. The steps described in that chapter for verifying the system requirements and performing preinstallation tasks are as follows:
The Sun StorEdge QFS software uses the Sun Solaris packaging utilities for adding and deleting software. The pkgadd(1M) utility prompts you to confirm various actions necessary to install the packages.
2. Use the cd(1) command to change to the directory where the software package release files reside.
When you completed your preinstallation tasks, you obtained the release files as described in Obtaining the Release Files. Use the cd(1) command to change to the directory that contains the release files. Changing to the appropriate directory differs, depending on your release media, as follows:
3. Use the pkgadd(1M) command to add the SUNWqfsr and SUNWqfsu packages.
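The exact command depends on where the release files reside; the following sketch assumes the current directory contains the two packages (the `-d .` argument is an assumption about your layout):

```shell
# pkgadd -d . SUNWqfsr SUNWqfsu
```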
4. Enter yes or y as the answer to each of the questions.
When you install SUNWqfsr and SUNWqfsu, you are asked if you want to define an administrator group. Select y to accept the default (no administrator group) or select n if you want to define an administrator group. You can reset permissions on certain commands later by using the set_admin(1M) command. For more information on this command, see the set_admin(1M) man page.
5. (Optional) Use the pkgadd(1M) command to add one or more localized packages.
Perform this step only if you want to install the packages localized for Chinese, French, or Japanese. CODE EXAMPLE 2-1 shows the commands to use to install the localized packages.
# pkgadd -d SUNWcqfs
# pkgadd -d SUNWfqfs
# pkgadd -d SUNWjqfs
The procedure for adding the SAM-QFS Manager software appears later in this chapter. The SAM-QFS Manager installation script prompts you to add localized versions of that software.
6. On each host, issue the pkginfo(1M) command and examine its output to make sure that a Sun StorEdge QFS package is installed.
Each host must have the SUNWqfsr and SUNWqfsu packages installed on it.
CODE EXAMPLE 2-2 shows the needed SUNWqfsr/SUNWqfsu packages.
# pkginfo | grep SUNWqfs
system      SUNWqfsr       Sun QFS software Solaris 9 (root)
system      SUNWqfsu       Sun QFS software Solaris 9 (usr)
7. (Optional) Install the packages on additional host systems.
Perform this step if you are configuring a multihost file system.
Repeat this procedure and install the packages on each host.
You need a license key to run the Sun StorEdge QFS software. For more information, see Obtaining a Software License Key.
The Sun StorEdge QFS file system uses an encrypted license key. The license key consists of an encoded alphanumeric string.
1. Create the /etc/opt/SUNWsamfs/LICENSE.4.2 file.
2. Starting in column one, place the license key you have obtained from your ASP or from Sun Microsystems on the first line in the /etc/opt/SUNWsamfs/LICENSE.4.2 file.
The key must start in column one. No other keywords, host IDs, comments, or other information can appear in the /etc/opt/SUNWsamfs/LICENSE.4.2 file.
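As a sketch, the finished file contains a single line holding only the key itself, beginning in column one (the key string below is a placeholder, not a real key):

```
A2c4E6g8i0K2m4O6q8S0
```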
3. (Optional) Install the license keys on additional host systems.
Perform this step if you are configuring a multihost file system.
Repeat this procedure and install the license key for each host.
This procedure shows you how to modify your PATH and MANPATH environment variables so you can access the Sun StorEdge QFS commands and man pages easily.
1. For users who need to access the Sun StorEdge QFS user commands (for example, sls(1)), add /opt/SUNWsamfs/bin to the users' PATH variables.
2. Use vi(1) or another editor to edit your system setup files to include the correct paths to commands and man pages.
a. In the Bourne or Korn shell, edit the .profile file, change the PATH and MANPATH variables, and export the variables.
CODE EXAMPLE 2-3 shows how your .profile file might look after editing.
PATH=$PATH:/opt/SUNWsamfs/bin:/opt/SUNWsamfs/sbin
MANPATH=$MANPATH:/opt/SUNWsamfs/man
export PATH MANPATH
b. In the C shell, edit the .login and .cshrc files.
When you have finished editing, the path statement in your .cshrc file might look like the following line:
CODE EXAMPLE 2-4 shows how the MANPATH in your .login file might look after you have finished editing.
setenv MANPATH /usr/local/man:/opt/SUNWspro/man:$OPENWINHOME/share/man:/opt/SUNWsamfs/man
3. (Optional) Set up the PATH and MANPATH variables on additional host systems.
Perform this step if you are configuring a multihost file system.
Repeat this procedure and set up the PATH and MANPATH variables for each host.
Perform this procedure if you are configuring the following types of file systems:
1. Verify that all the hosts have the same user and group IDs.
If you are not running the Network Information Service (NIS), make sure that all /etc/passwd and all /etc/group files are identical. If you are running NIS, the /etc/passwd and /etc/group files should already be identical.
For more information about this, see the nis+(1) man page.
2. (Optional) Enable the network time daemon command, xntpd(1M), to synchronize the times on all the hosts.
Perform this step if you are configuring a Sun StorEdge QFS shared file system on Solaris OS. You do not need to perform this step if you are configuring a Sun StorEdge QFS shared file system on Sun Cluster because it has already been done as part of the Sun Cluster installation.
The clocks of all hosts must be synchronized, and must be kept synchronized, during Sun StorEdge QFS shared file system operations. For more information, see the xntpd(1M) man page.
The following steps enable the xntpd(1M) daemon on one host:
b. Use vi(1) or another editor to create file /etc/inet/ntp.conf.
c. Create a line in file /etc/inet/ntp.conf that specifies the name of the local time server.
This line has the following format:

server IP-address prefer

In this line, server and prefer are required keywords. Specify the IP address of your local time server for IP-address.
If you have no local time server, see one of the following URLs for information on how to access a public time source:
http://www.eecis.udel.edu/~mills/ntp/servers.html
http://www.boulder.nist.gov/timefreq/general/pdf/1383.pdf
Alternatively, you can search for public time sources in a search engine.
d. Close file /etc/inet/ntp.conf.
e. Start the xntpd(1M) daemon.
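On Solaris 9, the daemon is typically started through its init script; the following is a sketch assuming the standard script location:

```shell
# /etc/init.d/xntpd start
```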
3. Repeat the preceding steps on each host.
Perform this task if you want to be able to use the SAM-QFS Manager to configure, control, monitor, or reconfigure your Sun StorEdge QFS environment.
The procedures in this section are as follows:
In addition to the information in this section, this manual's appendix, SAM-QFS Manager Software Notes, describes other aspects of using the SAM-QFS Manager.
Note - The SAM-QFS Manager does not support the Sun StorEdge QFS shared file system, nor does it support file systems in Sun Cluster environments.
1. Ensure that you have met the installation requirements in (Optional) Verifying Requirements for the SAM-QFS Manager.
2. Log in to the server that you want to use as the management station.
This can be the same server upon which you installed the SUNWsamfsr and SUNWsamfsu packages.
4. Use the cd(1) command to change to the directory where the software package release files reside on your server.
When you completed your preinstallation tasks, you obtained the release files as described in Obtaining the Release Files. Use the cd(1) command to change to the directory that contains the release files.
For example, if you obtained the release files from a CD-ROM, use the following command:
If you downloaded the release files, change to the directory to which you downloaded the files.
5. Execute the samqfsmgr_setup script to install the SAM-QFS Manager software.
6. Answer the questions as prompted by the samqfsmgr_setup script.
During the installation procedure, you are asked to answer questions about your environment. The script prompts you to enter passwords for the SAMadmin role and for the samadmin and samuser login IDs.
The samqfsmgr_setup script automatically installs the following:
The installation scripts prompt you to answer questions regarding whether you want to install any localized packages.
After installing the packages, the script starts the Tomcat web server, enables logging, and creates the SAMadmin role.
7. Use vi(1) or another editor to edit your system setup files to include the correct paths to commands and man pages.
a. In the Bourne or Korn shell, edit the .profile file, change the PATH and MANPATH variables, and export the variables.
CODE EXAMPLE 2-5 shows how your .profile file might look after editing.
PATH=$PATH:/opt/SUNWsamqfsui/bin
MANPATH=$MANPATH:/opt/SUNWsamqfsui/man
export PATH MANPATH
b. In the C shell, edit the .login and .cshrc files.
When you have finished editing, the path statement in your .cshrc file might look like the following line:
CODE EXAMPLE 2-6 shows how the MANPATH in your .login file might look after you have finished editing.
setenv MANPATH /usr/local/man:/opt/SUNWspro/man:$OPENWINHOME/share/man:/opt/SUNWsamfs/man:/opt/SUNWsamqfsui/man
8. Log in to the Sun StorEdge QFS server and become superuser.
9. Use the ps(1) and grep(1) commands to make sure that the rpcbind service is running.
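A minimal way to perform this check is the following pipeline:

```shell
# ps -ef | grep rpcbind
```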
10. Examine the output from the preceding commands.
The output should contain a line similar to the following:
If rpcbind does not appear in the output, enter the following command:
11. (Optional) Start the SAM-QFS Manager (sam-mgmtrpcd) daemon.
Perform this step if you did not choose to have this daemon started automatically at installation time.
Enter the following command to start the SAM-QFS Manager daemon:
With this configuration, the system automatically restarts the daemon every time the daemon process dies, and the daemon also restarts automatically at system reboot.
If you want to stop the daemon completely, enter the following command:
The preceding command also prevents the daemon from restarting automatically.
If you want the SAM-QFS Manager daemon to run only once and not automatically restart, use the following command:
If you have used the preceding command to start the daemon, use the following command to stop it:
For more information, see the samadm(1M) man page.
After the SAM-QFS Manager is installed, you can log in to the software using two possible user names (samadmin and samuser) and two different roles (SAMadmin or no role). The tasks you can perform using the SAM-QFS Manager differ depending on the user name and the role you assume at login. These differences are as follows:
Only the Sun StorEdge QFS administrator should log in using the SAMadmin role. All other users should log in as samuser.
With regard to system administration, be aware that the Solaris OS root user on the server that hosts the SAM-QFS Manager is not necessarily the administrator of the SAM-QFS Manager. Only samadmin has administrator privileges for the SAM-QFS Manager application. The root user is the administrator of the management station.
Perform this procedure if you want to invoke the SAM-QFS Manager and use it, rather than commands, to perform some of the configuration steps.
1. Log in to the management station web server.
2. From a web browser, invoke the SAM-QFS Manager software.
For hostname, type the name of the host. If you need to specify a domain name in addition to the host name, specify the hostname in this format: hostname.domainname.
Note that this URL begins with https, not http. The Sun Web Console login screen appears.
3. At the User Name prompt, enter samadmin.
4. At the Password prompt, enter the password you entered when you answered questions during the samqfsmgr_setup script's processing in To Install the SAM-QFS Manager Software.
5. Click on the SAMadmin role.
Only the Sun StorEdge QFS administrator should ever log in with the SAMadmin role.
6. At the Role Password prompt, enter the password you entered in Step 4.
You are now logged in to the SAM-QFS Manager.
This manual guides you through the configuration process using Solaris OS commands, but you can also use the SAM-QFS Manager, instead of commands, to accomplish many of the tasks.
1. Click Help, in the upper right corner of the screen, to access the SAM-QFS Manager online documentation.
2. Complete the configuration tasks.
TABLE 2-1 shows the rest of the steps you must perform to install and configure a Sun StorEdge QFS file system and the means by which you can accomplish each task.
Perform the configuration steps in TABLE 2-1 in the order in which they appear. You can open a terminal window next to the SAM-QFS Manager window for use when you need to alternate between using commands and using the SAM-QFS Manager.
Defining the Sun StorEdge QFS Configuration by Creating the mcf File
TABLE 2-1 describes several installation steps as optional. The only required installation steps that you still must perform using Solaris OS commands are as follows:
The other installation steps in TABLE 2-1 are necessary, or are highly recommended, depending on your environment.
Each Sun StorEdge QFS environment is unique. The system requirements and hardware that are used differ from site to site. It is up to you, the system administrator at your site, to set up the specific configuration for your Sun StorEdge QFS environment.
The master configuration file, /etc/opt/SUNWsamfs/mcf, defines the topology of the equipment managed by the Sun StorEdge QFS file system. This file specifies the devices and file systems included in the environment. You assign each piece of equipment a unique Equipment Identifier in the mcf file.
To configure Sun StorEdge QFS devices, create an mcf file in /etc/opt/SUNWsamfs/mcf that contains a line for each device and family set in your configuration. The mcf contains information that enables you to identify the disk slices to be used and to organize them into Sun StorEdge QFS file systems.
There are examples of mcf files in /opt/SUNWsamfs/examples.
Note - For information about file system design considerations, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
The following sections provide examples and describe activities related to creating and maintaining the mcf file:
Note - The instructions for creating the mcf file differ depending on whether you are creating a Sun StorEdge QFS environment or a Sun SAM-QFS environment.
Use vi(1) or another editor to create the mcf file.
When you create the mcf file, follow these guidelines:
CODE EXAMPLE 2-7 shows the fields of each line entry in the mcf file.
After you have created your mcf file, using the examples in this section as a guide, proceed on to one of the following sections depending on the type of file system you are configuring:
The fields in an mcf file are the same regardless of what kind of file system you are configuring. CODE EXAMPLE 2-7 shows the fields. The following sections explain the fields. For more information about the content of these mcf file fields, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
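As a generic sketch of the layout these sections describe, each mcf line supplies the following six columns (this skeleton is illustrative only; see the mcf(4) man page for authoritative details):

```
# Equipment           Eq   Eq    Family   Device   Additional
# Identifier          Ord  Type  Set      State    Parameters
# ----------          ---  ----  ------   ------   ----------
```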
This is a required field. Enter one of the following:
The specification for a disk partition or disk slice is limited to 127 characters in length. TABLE 2-2 shows the kinds of devices to use when creating Sun StorEdge QFS file systems.
Raw devices (/dev/dsk/cntndnsn)
Volume-manager-controlled devices (/dev/vx/... or /dev/md/...)
The following notes pertain to the information in TABLE 2-2:
This is a required field. Enter a unique integer such that 1 ≤ eq_ord ≤ 65534.
This is a required field. Enter the code for the Equipment Type, as follows:
For more information about Equipment Types, see the mcf(4) man page.
This is a required field. Enter the name of the file system to which this device belongs. The system organizes all devices with the same Family Set name into a Sun StorEdge QFS file system. Limited to 31 characters.
If this line is the first in a series of lines that define devices for a particular file system, enter the same name you entered in the Equipment Identifier field.
If this line defines a device within a file system, enter the file system name in this field.
This is an optional field. If specified, this field should contain either the keyword on or a dash character (-). Enter a state for the device for when the Sun StorEdge QFS file system is initialized.
This is an optional field. Specify shared in this field only if you are configuring a Sun StorEdge QFS shared file system. For information about the Sun StorEdge QFS shared file system, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
For more information, see the mcf(4) man page. An example mcf file is located in /opt/SUNWsamfs/examples/mcf.
CODE EXAMPLE 2-8 shows file system entries in an mcf file for a Sun StorEdge QFS file system that is local to one Solaris OS host.
Use the configuration examples in this section for configuring the mcf file for a Sun StorEdge QFS file system to be installed in the following types of configurations:
For mcf examples that you can use in a Sun Cluster environment, see "Configuration Examples for Sun Cluster File Systems" on page 70.
This example shows how to configure two Sun StorEdge QFS file systems using a server that has a Sun StorEdge Multipack desktop array connected by a SCSI attachment.
You can use the format(1M) command to determine how the disks are partitioned. CODE EXAMPLE 2-9 shows the format(1M) command's output.
Begin writing the mcf file for this configuration example by defining the file system and its disk partitions, as follows:
a. Make an ma entry for the first file system.
b. Make an mm entry listing the partition(s) that comprise the metadata for the qfs1 file system.
c. Make a series of mr entries listing the partitions that comprise the file data for the qfs1 file system.
d. Make similar entries for the second (qfs2) file system.
The finished mcf file defines the following two file systems:
CODE EXAMPLE 2-10 shows the resulting mcf file.
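The structure of such an mcf file can be sketched as follows; the device names and Equipment Ordinals here are hypothetical placeholders, not the partitions from the example array:

```
# Equipment           Eq   Eq    Family   Device   Additional
# Identifier          Ord  Type  Set      State    Parameters
#
qfs1                  10   ma    qfs1     on
/dev/dsk/c1t0d0s0     11   mm    qfs1     on
/dev/dsk/c1t1d0s4     12   mr    qfs1     on
/dev/dsk/c1t2d0s4     13   mr    qfs1     on
qfs2                  20   ma    qfs2     on
/dev/dsk/c1t0d0s1     21   mm    qfs2     on
/dev/dsk/c1t3d0s4     22   mr    qfs2     on
/dev/dsk/c1t4d0s4     23   mr    qfs2     on
```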
2. Modify the /etc/vfstab file.
Make entries in the /etc/vfstab file for the qfs1 and qfs2 file systems you defined in the mcf file. The last two lines in CODE EXAMPLE 2-11 show entries for these new file systems.
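Entries for the two file systems could be sketched as follows; the mount points are assumptions for illustration:

```
# device     device    mount    FS     fsck   mount     mount
# to mount   to fsck   point    type   pass   at boot   params
#
qfs1         -         /qfs1    samfs  -      yes       -
qfs2         -         /qfs2    samfs  -      yes       -
```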
Note - Modifying the /etc/vfstab file is a later step in this chapter's configuration procedure. This step shows the /etc/vfstab file modifications only for completeness' sake.
This example illustrates a Sun StorEdge QFS file system that uses round-robin allocation on four disk drives.
This example assumes the following:
This example introduces the round-robin data layout. For more information about data layout, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
CODE EXAMPLE 2-12 shows the mcf file for this round-robin disk configuration.
Note - Modifying the /etc/vfstab file and using the sammkfs(1M) command are later steps in this chapter's configuration procedure. This step shows these steps only for completeness' sake.
2. Modify the /etc/vfstab file.
Edit the /etc/vfstab file to explicitly set round-robin allocation on the file system by specifying stripe=0 in the mount params field. CODE EXAMPLE 2-13 shows stripe=0 for the qfs3 file system.
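Such an entry might look like the following sketch (the /qfs3 mount point is an assumption):

```
qfs3   -   /qfs3   samfs   -   yes   stripe=0
```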
3. Run the sammkfs(1M) command.
Initialize the Sun StorEdge QFS file system by using the sammkfs(1M) command. The default DAU is 64 kilobytes, but the following example sets the DAU size to 128 kilobytes:
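Assuming the family set is named qfs3 as in this example, the command might look like the following (the -a option takes the DAU size in kilobytes):

```shell
# sammkfs -a 128 qfs3
```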
This example illustrates a Sun StorEdge QFS file system. It stripes file data to four disk drives. This example assumes the following:
Write the mcf file using the disk configuration assumptions. CODE EXAMPLE 2-14 shows a sample mcf file for a striped disk configuration.
Note - Modifying the /etc/vfstab file and using the sammkfs(1M) command are later steps in this chapter's configuration procedure. This step shows these steps only for completeness' sake.
2. Modify the /etc/vfstab file.
Set the stripe width by using the stripe= option. CODE EXAMPLE 2-15 shows the /etc/vfstab file with a mount parameter of stripe=1 set for the qfs4 file system.
The stripe=1 specification stripes file data across all four of the mr data disks with a stripe width of one disk allocation unit (DAU). Note that the DAU is the allocation unit you set when you use the sammkfs(1M) command to initialize the file system.
3. Run the sammkfs(1M) command.
Initialize the Sun StorEdge QFS file system by using the sammkfs(1M) command. The following example sets the DAU size to 128 kilobytes:
With this striped disk configuration, any file written to this file system is striped across all of the devices in increments of 128 kilobytes. Files smaller than the aggregate stripe width times the number of devices still use 128 kilobytes of disk space. Files larger than 128 kilobytes have space allocated for them as needed, in total space increments of 128 kilobytes. The file system writes metadata to device 41 only.
Striped groups allow you to build RAID-0 devices of separate disk devices. With striped groups, however, there is only one DAU per striped group. This method of writing huge, effective DAUs across RAID devices saves system update time and supports high-speed sequential I/O. Striped groups are useful for writing very large files to groups of disk devices.
The devices within a striped group must be the same size. It is not possible to increase the size of a striped group. You can add additional striped groups to the file system, however.
This example configuration illustrates a Sun StorEdge QFS file system that separates the metadata onto a low-latency disk. The mcf file defines two striped groups on four drives. This example assumes the following:
Write the mcf file by using the disk configuration assumptions. CODE EXAMPLE 2-16 shows a sample mcf file for a striped group configuration.
Note - Modifying the /etc/vfstab file and using the sammkfs(1M) command are later steps in this chapter's configuration procedure. This procedure shows these steps only for completeness' sake.
2. Modify the /etc/vfstab file.
Use the stripe= option to set the stripe width. CODE EXAMPLE 2-17 shows the /etc/vfstab file with a mount parameter of stripe=0, which specifies round-robin allocation between striped group g0 and striped group g1.
3. Run the sammkfs(1M) command.
Initialize the Sun StorEdge QFS file system by using the sammkfs(1M) command. The -a option is not used with striped groups because the DAU is equal to the size of an allocation on each group.
In this example, there are two striped groups, g0 and g1. With stripe=0 in /etc/vfstab, devices 12 and 13 are striped; devices 14 and 15 are striped; and files are round-robined between the two striped groups. A striped group is treated as a bound entity: after you configure a striped group, you cannot change it without issuing another sammkfs(1M) command.
FIGURE 2-1 illustrates a Sun StorEdge QFS shared file system configuration in a Sun SAM-QFS environment.
FIGURE 2-1 shows four network-attached hosts: titan, tethys, dione, and mimas. The tethys, dione, and mimas hosts are the clients, and titan is the current metadata server. The titan and tethys hosts are potential metadata servers.
The archive media consists of a network-attached library and tape drives that are fibre-attached to titan and tethys. In addition, the archive media catalog resides in a file system that is mounted on the current metadata server, titan.
Metadata travels between the clients and the metadata server over the network. The metadata server makes all modifications to the name space, and this keeps the metadata consistent. The metadata server also provides the locking capability, the block allocation, and the block deallocation.
Several metadata disks are connected to titan and tethys, and these disks can be accessed only by the potential metadata servers. If titan were unavailable, you could change the metadata server to tethys, and the library, tape drives, and catalog could be accessed by tethys as part of the Sun StorEdge QFS shared file system. The data disks are connected to all four hosts by a Fibre Channel connection.
1. Issue the format(1M) command and examine its output.
Make sure that the metadata disk partitions configured for the Sun StorEdge QFS shared file system mount point are connected to the potential metadata servers. Also make sure that the data disk partitions configured for the Sun StorEdge QFS shared file system are connected to the potential metadata servers and to all the client hosts in this file system.
If your host supports multipath I/O drivers, individual devices shown in the format(1M) command's output might show multiple controllers. These correspond to the multiple paths to the actual devices.
CODE EXAMPLE 2-18 shows the format(1M) command output for titan. There is one metadata disk on controller 2, and there are three data disks on controller 3.
CODE EXAMPLE 2-19 shows the format(1M) command output for tethys. There is one metadata disk on controller 2, and there are four data disks on controller 7.
Note the following in CODE EXAMPLE 2-19:
CODE EXAMPLE 2-20 shows the format(1M) command's output for mimas. This shows three data disks on controller 1 and no metadata disks.
CODE EXAMPLE 2-19 and CODE EXAMPLE 2-20 show that the data disks on titan's controller 3 are the same disks as mimas' controller 1. You can verify this by looking at the World Wide Name, which is the last component in the device name. For titan's number 3 disk, the World Wide Name is 50020F2300005D22. This is the same name as number 3 on controller 1 on mimas.
2. Use vi(1) or another editor to create the mcf file on the metadata server.
The only difference between the mcf file of a shared Sun StorEdge QFS file system and an unshared Sun StorEdge QFS file system is the presence of the shared keyword in the Additional Parameters field of the file system name line of a Sun StorEdge QFS shared file system.
CODE EXAMPLE 2-21 shows an mcf file fragment for titan that defines several disks for use in the Sun StorEdge QFS shared file system. It shows the shared keyword in the Additional Parameters field on the file system name line.
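Such a fragment might be sketched as follows; the sharefs1 family set name matches this chapter's later examples, but the device names and ordinals are hypothetical:

```
# Equipment            Eq   Eq    Family     Device   Additional
# Identifier           Ord  Type  Set        State    Parameters
#
sharefs1               10   ma    sharefs1   on       shared
/dev/dsk/c2t1d0s6      11   mm    sharefs1   on
/dev/dsk/c3t0d0s6      12   mr    sharefs1   on
/dev/dsk/c3t1d0s6      13   mr    sharefs1   on
```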
Note - In a Sun SAM-QFS shared file system, for each host that is a metadata server or potential metadata server, that host's mcf file must define all libraries and library catalogs used by its own shared file systems and by its potential shared file systems. This is necessary if you want to change the metadata server. For information on defining libraries in an mcf file, see the Sun StorEdge SAM-FS Initial Installation Procedure.
The Sun Cluster software moves a Sun StorEdge QFS highly available file system from a failing node to a viable node in the event of a node failure.
Each node in the Sun Cluster that can host this file system must have an mcf file. Later on in this chapter's configuration process, you copy mcf file lines from the metadata server's mcf file to other nodes in the Sun Cluster.
The procedure for creating an mcf file for a Sun StorEdge QFS highly available file system is as follows:
1. Make an ma entry for the file system.
2. Make an mm entry listing the partition(s) that comprise the metadata for the qfs1 file system.
3. Make a series of mr, gXXX, or md entries listing the partitions that comprise the file data for the qfs1 file system.
You can use the scdidadm(1M) command to determine the partitions to use.
Example 1. CODE EXAMPLE 2-22 is an example mcf file entry for a Sun StorEdge QFS highly available file system that uses raw devices.
Example 2. CODE EXAMPLE 2-23 is an example mcf file entry for a Sun StorEdge QFS highly available file system that uses Solaris Volume Manager metadevices. The example assumes that the Solaris Volume Manager metaset in use is named red.
Example 3. CODE EXAMPLE 2-24 is an example mcf file entry for a Sun StorEdge QFS highly available file system that uses VxVm devices.
This example assumes that both ash and elm are nodes in a Sun Cluster. Host ash is the metadata server. The keyword shared in this example's mcf file indicates to the system that this is a shared file system. This example builds upon Example - Using the scdidadm(1M) Command in a Sun Cluster.
Make sure that you create the mcf file on the node that you want to designate as the metadata server. The procedure for creating an mcf file for a Sun StorEdge QFS shared file system on a Sun Cluster is as follows:
1. Use the scdidadm(1M) -L command to obtain information about the devices included in the Sun Cluster.
The scdidadm(1M) command administers the device identifier (DID) devices. The -L option lists all the DID device paths, including those on all nodes in the Sun Cluster. CODE EXAMPLE 2-25 shows the format output from all the /dev/did devices. This information is needed when you build the mcf file.
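The command itself is simple; run it on any node in the cluster:

```shell
# scdidadm -L
```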
The format(1M) command reveals the space available on a device, but it does not reveal whether a disk is mirrored or striped. Put the file system's mm devices on mirrored (RAID-1) disks. The mm devices should constitute about 10% of the space allocated for the entire file system. CODE EXAMPLE 2-25's format(1M) output reveals the following information that is used when writing the mcf file shown in CODE EXAMPLE 2-26:
2. Make an ma entry for the file system.
In this line entry, make sure to include the shared keyword in the Additional Parameters field.
3. Make an mm entry listing the partition(s) that comprise the metadata for the qfs1 file system.
4. Make a series of mr entries listing the partitions that comprise the file data for the qfs1 file system.
CODE EXAMPLE 2-26 shows the mcf file.
Perform this task if you are configuring one of the following types of file systems:
The mcf file lines that define a particular file system must be identical in the mcf file on each host system that supports the file system. Only one mcf file can reside on a host. Because you can have other, additional Sun StorEdge QFS file systems defined in an mcf file, the mcf files on each host might not be identical.
Perform this procedure for a Sun StorEdge QFS highly available file system on Sun Cluster hosts.
1. Log in to a Sun Cluster node that you want to support the file system you are configuring.
3. Use vi(1) or another editor to create an mcf file on that node.
If an mcf file already exists on the host, add the lines for the new file system to this mcf file.
4. Copy the lines that define the file system from the primary node's mcf file to this node's mcf file.
5. Repeat the preceding steps for each host that you want to support the file system.
Perform this procedure for a shared file system on Solaris OS hosts or on Sun Cluster hosts.
1. Log into another host that you want to include in the file system.
3. Use the format(1M) command to verify the presence of client host disks.
4. Use vi(1) or another editor to create an mcf file.
If an mcf file already exists on the host, add the lines for the new file system to this mcf file.
5. Issue the samfsconfig(1M) command.
Examine this command's output to locate the local device names for each additional host to be configured in the Sun StorEdge QFS shared file system.
6. Update the mcf file on other client hosts.
Any host system that wants to access or mount a shared file system must have that file system defined in its mcf file. The content of these mcf files differs depending on whether the file system is hosted in a Solaris OS or a Sun Cluster environment, as follows:
Use vi(1) or another editor to edit the mcf file on one of the client host systems. The mcf file must be updated on all client hosts to be included in the Sun StorEdge QFS shared file system. The file system and disk declaration information must have the same data for the Family Set name, Equipment Ordinal, and Equipment Type as the configuration on the metadata server. The mcf files on the client hosts must also include the shared keyword. The device names, however, can change as controller assignments can change from host to host.
The samfsconfig(1M) command generates configuration information that can help you to identify the devices included in the Sun StorEdge QFS shared file system. Enter a separate samfsconfig(1M) command on each client host. Note that the controller number might not be the same controller number as on the metadata server because the controller numbers are assigned by each client host.
7. Repeat this procedure for each host that you want to include in the file system.
Example 1 - Solaris OS hosts. CODE EXAMPLE 2-27 shows how the samfsconfig(1M) command is used to retrieve device information for family set sharefs1 on client tethys. Note that tethys is a potential metadata server, so it is connected to the same metadata disks as titan.
Edit the mcf file on client host tethys by copying the last five lines of the samfsconfig(1M) command's output into it. Verify the following:
CODE EXAMPLE 2-28 shows the resulting mcf file.
In CODE EXAMPLE 2-28, note that the Equipment Ordinal numbers match those of the example mcf file for metadata server titan. These Equipment Ordinal numbers must not already be in use on client host tethys or any other client host.
Example 2 - Solaris OS hosts. CODE EXAMPLE 2-29 shows how the samfsconfig(1M) command is used to retrieve device information for family set sharefs1 on client host mimas. Note that mimas can never become a metadata server, and it is not connected to the metadata disks.
In the output from the samfsconfig(1M) command on mimas, note that Ordinal 0, which is the metadata disk, is not present. Because devices are missing, the samfsconfig(1M) command comments out the elements of the file system and omits the file system Family Set declaration line. Make the following types of edits to the mcf file:
CODE EXAMPLE 2-30 shows the resulting mcf file for mimas.
Perform this task if you are configuring the following types of file systems:
The system copies information from the hosts file to the shared hosts file in the shared file system at file system creation time. You update this information when you issue the samsharefs(1M) -u command.
1. Use the cd(1) command to change to directory /etc/opt/SUNWsamfs.
2. Use vi(1) or another editor to create an ASCII hosts file called hosts.fs-name.
For fs-name, specify the Family Set name of the Sun StorEdge QFS shared file system.
Comments are permitted in the hosts file. Comment lines must begin with a pound character (#). Characters to the right of the pound character are ignored.
3. Use the information in TABLE 2-3 to fill in the lines of the hosts file.
File hosts.fs-name contains configuration information pertaining to all hosts in the Sun StorEdge QFS shared file system. The ASCII hosts file defines the hosts that can share the Family Set name.
TABLE 2-3 shows the fields in the hosts file.
The system reads and manipulates the hosts file. You can use the samsharefs(1M) command to examine metadata server and client host information on a running system.
CODE EXAMPLE 2-31 is an example hosts file that shows four hosts.
CODE EXAMPLE 2-31 shows a hosts file that contains fields of information and comment lines for the sharefs1 file system. In this example, the number 1 in the Server Priority field defines titan as the primary metadata server. If titan is down, tethys becomes the metadata server, as indicated by the number 2 in this field. Note that neither dione nor mimas can ever be a metadata server.
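The hosts file described above might look like the following sketch; the IP addresses and domain names are illustrative assumptions.

```
# Host     Host IP                          Server    Not   Server
# Name     Addresses                        Priority  Used  Host
titan      172.16.0.129,titan.example.com   1         -     server
tethys     172.16.0.130,tethys.example.com  2         -
dione      dione.example.com                -         -
mimas      mimas.example.com                -         -
```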
If you are configuring a Sun StorEdge QFS shared file system in a Sun Cluster, every host is a potential metadata server. The hosts files and the local hosts configuration files must contain node names in the Host Names field and Sun Cluster private interconnect names in the Host IP Addresses field.
CODE EXAMPLE 2-32 shows the local hosts configuration file for a shared file system, sharefs1. This file system's participating hosts are Sun Cluster nodes scnode-A and scnode-B. Each node's private interconnect name is listed in the Host IP Addresses field.
Perform this procedure under the following circumstances:
1. Create the local hosts configuration file on the client host.
Using vi(1) or another editor, create an ASCII local hosts configuration file that defines the host interfaces that the metadata server and the client hosts can use when accessing the file system. The local hosts configuration file must reside in the following location:
For fsname, specify the Family Set Name of the Sun StorEdge QFS shared file system.
Comments are permitted in the local host configuration file. Comment lines must begin with a pound character (#). Characters to the right of the pound character are ignored.
TABLE 2-4 shows the fields in the local hosts configuration file.
2. Repeat this procedure for each client host that you want to include in the Sun StorEdge QFS shared file system.
The information in this section might be useful when you are debugging.
In a Sun StorEdge QFS shared file system, each client host obtains the list of metadata server IP addresses from the shared hosts file.
The metadata server and the client hosts use the shared hosts file on the metadata server and the hosts.fsname.local file on each client host (if it exists) to determine the host interface to use when accessing the metadata server. This process is as follows (note that client, as in network client, is used to refer to both client hosts and the metadata server host in the following process):
1. The client obtains the list of metadata server host IP interfaces from the file system's on-disk shared hosts file. To examine this file, issue the samsharefs(1M) command from the metadata server or from a potential metadata server.
2. The client searches for an /etc/opt/SUNWsamfs/hosts.fsname.local file. Depending on the outcome of the search, one of the following occurs:
i. It compares the list of addresses for the metadata server from both the shared hosts file on the file system and the hosts.fsname.local file.
ii. It builds a list of addresses that are present in both places, and then it attempts to connect to each of these addresses, in turn, until it succeeds in connecting to the server. If the order of the addresses differs in these files, the client uses the ordering in the hosts.fsname.local file.
This example expands on FIGURE 2-1. CODE EXAMPLE 2-31 shows the hosts file for this configuration. FIGURE 2-2 shows the interfaces to these systems.
Systems titan and tethys share a private network connection with interfaces 172.16.0.129 and 172.16.0.130. To guarantee that titan and tethys always communicate over their private network connection, the system administrator has created identical copies of /etc/opt/SUNWsamfs/hosts.sharefs1.local on each system. CODE EXAMPLE 2-33 shows the information in these files.
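Those files might contain entries like the following; because titan and tethys list only the private addresses, connections between them always use the private network.

```
# This is file /etc/opt/SUNWsamfs/hosts.sharefs1.local
# on both titan and tethys.
#
# Host Name    Host Interfaces
titan          172.16.0.129
tethys         172.16.0.130
```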
Systems mimas and dione are not on the private network. To guarantee that they connect to titan and tethys through titan's and tethys' public interfaces, and never attempt to connect to titan's or tethys' unreachable private interfaces, the system administrator has created identical copies of /etc/opt/SUNWsamfs/hosts.sharefs1.local on mimas and dione. CODE EXAMPLE 2-34 shows the information in these files.
This procedure initializes the environment.
Type the samd(1M) config command to initialize the Sun StorEdge QFS environment.
Repeat this command on each host if you are configuring a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.
The /opt/SUNWsamfs/examples/defaults.conf file contains default settings for the Sun StorEdge QFS environment. You can change these settings at any time after the initial installation. If you want to change any default settings now, examine the defaults.conf(4) man page to discern the types of behaviors this file controls.
Perform this task if you want to change system default values.
1. Read the defaults.conf(4) man page and examine this file to determine if you want to change any of the defaults.
2. Use the cp(1) command to copy /opt/SUNWsamfs/examples/defaults.conf to its functional location.
3. Use vi(1) or another editor to edit the file.
Edit the lines that control aspects of the system that you want to change. Remove the pound character (#) from column 1 of the lines you change.
For example, if you are configuring a Sun StorEdge QFS shared file system in a Sun Cluster, CODE EXAMPLE 2-35 shows defaults.conf entries that are helpful when debugging.
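Such debugging entries might look like the following trace block; verify the exact directive names against the defaults.conf(4) man page before using them.

```
trace
all = on
endtrace
```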
4. Use the samd(1M) config command to restart the sam-fsd(1M) daemon and enable the daemon to recognize the changes in the defaults.conf file.
5. (Optional) Repeat this procedure for each host that you want to include in a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.
For debugging purposes, the defaults.conf file should be the same on all hosts.
At this point in the installation and configuration process, the following files exist on each Sun StorEdge QFS host:
The procedures in this section show you how to verify the correctness of these configuration files.
Perform these verifications on all hosts if you are configuring a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.
Enter the samcmd(1M) l (lowercase L) command to verify the license file.
The samcmd(1M) output includes information about features that are enabled. If the output you receive is not similar to that shown in CODE EXAMPLE 2-36, return to Enabling the Sun StorEdge QFS Software License.
Enter the sam-fsd(1M) command to verify the mcf file.
Examine the output for errors, as follows:
If your mcf file has errors, refer to Defining the Sun StorEdge QFS Configuration By Creating the mcf File and to the mcf(4) man page for information about how to create this file correctly.
You can create the /etc/opt/SUNWsamfs/samfs.cmd file as the place from which the system reads mount parameters. If you are configuring multiple Sun StorEdge QFS systems with multiple mount parameters, consider creating this file.
You can specify mount parameters in the following ways:
You can manage certain features more easily from a samfs.cmd file. These features include the following:
For more information about the /etc/vfstab file, see Updating the /etc/vfstab File and Creating the Mount Point. For more information about the mount(1M) command, see the mount_samfs(1M) man page.
1. Use vi(1) or another editor to create the samfs.cmd file.
Create lines in the samfs.cmd file to control mounting, performance features, or other aspects of file system management. For more information about the samfs.cmd file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide, or see the samfs.cmd(4) man page.
CODE EXAMPLE 2-38 shows a samfs.cmd file for a Sun StorEdge QFS file system.
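A samfs.cmd file of the kind described above might look like the following sketch; the family set names, the qwrite directive, and the stripe value are illustrative.

```
# Global directives appear before any fs = line.
qwrite
# Directives that follow apply only to qfs1.
fs = qfs1
  trace
# Directives that follow apply only to qfs2.
fs = qfs2
  stripe = 2
```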
2. (Optional) Copy lines, as necessary, to the samfs.cmd file on other hosts.
Perform this step if you are creating a multihost file system.
If you have created a samfs.cmd file on one host in a Sun Cluster to describe a particular file system's mount parameters, copy those lines to samfs.cmd files on all the nodes that can access that file system.
For debugging purposes, the samfs.cmd file, as it pertains to a specific file system, should be the same on all hosts. For example, if the qfs3 file system is accessible from all nodes in a Sun Cluster, then the lines in the samfs.cmd file that describe the qfs3 file system should be identical on all the nodes in the Sun Cluster.
Depending on your site needs, it might be easier to manage mount options from the samfs.cmd file rather than from the /etc/vfstab file. The /etc/vfstab file overrides the samfs.cmd file in the event of conflicts.
For more information about mount options, see Updating the /etc/vfstab File and Creating the Mount Point.
This task shows you how to edit the /etc/vfstab file.
Note - Even though /global is used in this chapter's examples as the mount point for file systems mounted in a Sun Cluster environment, it is not required. You can use any mount point.
TABLE 2-5 shows the values you can enter in the fields in the /etc/vfstab file.
1. Use vi(1) or another editor to open the /etc/vfstab file and create an entry for each Sun StorEdge QFS file system.
CODE EXAMPLE 2-39 shows header fields and entries for a local Sun StorEdge QFS file system.
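Such an entry might look like the following sketch; the mount parameters shown are illustrative.

```
#DEVICE    DEVICE   MOUNT   FS     FSCK  MOUNT    MOUNT
#TO MOUNT  TO FSCK  POINT   TYPE   PASS  AT BOOT  PARAMETERS
#
qfs1       -        /qfs1   samfs  -     yes      stripe=1
```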
TABLE 2-5 shows the various fields in the /etc/vfstab file and their contents.
If you are configuring a file system for a Sun Cluster environment, the mount options that are required, or are recommended, differ depending on the type of file system you are configuring. TABLE 2-6 explains the mount options.
(One of the rows in TABLE 2-6, for example, covers a Sun StorEdge QFS shared file system configured to support Oracle Real Application Clusters database files.)
You can specify most of the mount options mentioned in TABLE 2-6 in either the /etc/vfstab file or the samfs.cmd file. The shared option, however, must be specified in the /etc/vfstab file.
Tip - In addition to the mount options mentioned in TABLE 2-6, you can also specify the trace mount option for configuration debugging purposes.
2. Use the mkdir(1) command to create the file system mount point.
The mount point location differs depending on where the file system is to be mounted. The following examples illustrate this.
Example 1. This example assumes that /qfs1 is the mount point of the qfs1 file system. This is a local file system. It can exist on a standalone server or on a local node in a Sun Cluster. For example:
Example 2. This example assumes that /global/qfs1 is the mount point of the qfs1 file system, which is a Sun StorEdge QFS shared file system to be mounted on a Sun Cluster:
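The commands for these two examples are as follows (the mount points are the ones assumed above, Example 1 first):

```
# mkdir /qfs1
# mkdir -p /global/qfs1
```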
Note - If you configured multiple mount points, repeat these steps for each mount point, using a different mount point (such as /qfs2) and Family Set name (such as qfs2) each time.
3. (Optional) Repeat the preceding steps for all hosts if you are configuring a Sun StorEdge QFS shared file system or a Sun StorEdge QFS highly available file system.
For debugging purposes, if you are configuring a Sun StorEdge QFS shared file system, the mount options should be the same on all hosts that can mount the file system.
This procedure shows how to use the sammkfs(1M) command and the Family Set names that you have defined to initialize a file system.
Use the sammkfs(1M) command to initialize a file system for each Family Set defined in the mcf file.
CODE EXAMPLE 2-40 shows the command to use to initialize a Sun StorEdge QFS file system with the Family Set name of qfs1.
Enter y in response to this message to continue the file system creation process.
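The command and its confirmation prompt look roughly like the following sketch; the device names are illustrative.

```
# sammkfs qfs1
Building 'qfs1' will destroy the contents of devices:
        /dev/dsk/c1t0d0s0
        /dev/dsk/c1t1d0s4
        /dev/dsk/c1t2d0s4
Do you wish to continue? [y/N] y
```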
If you are configuring a Sun StorEdge QFS shared file system, enter the sammkfs(1M) command on the metadata server only.
Enter the sammkfs(1M) command at the system prompt. The -S option specifies that the file system be a Sun StorEdge QFS shared file system. Use this command in the following format:
For more information about the sammkfs(1M) command, see the sammkfs(1M) man page. For example, you can use the following sammkfs(1M) command to initialize a Sun StorEdge QFS shared file system and identify it as shared:
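Such an invocation might look like the following; the -a 128 allocation unit is an illustrative value.

```
# sammkfs -S -a 128 sharefs1
```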
If the shared keyword appears in the mcf file, the file system must be initialized as a shared file system by using the -S option to the sammkfs(1M) command. You cannot mount a file system as shared if it was not initialized as shared.
If you are initializing a file system as a Sun StorEdge QFS shared file system, the file /etc/opt/SUNWsamfs/hosts.sharefs1 must exist at the time you issue the sammkfs(1M) command. The sammkfs(1M) command uses the hosts file when it creates the file system. You can use the samsharefs(1M) command to replace or update the contents of the hosts file later.
Perform this task if you are configuring the following types of file systems:
Perform these steps on each host that can mount the file system.
1. Use the ps(1) and grep(1) commands to verify that the sam-sharefsd daemon is running for this file system.
CODE EXAMPLE 2-41 shows these commands.
CODE EXAMPLE 2-41 shows that the sam-sharefsd daemon is active for the sharefs1 file system. If this is the case for your system, you can proceed to the next step in this procedure. If, however, the output returned on your system does not show that the sam-sharefsd daemon is active for your Sun StorEdge QFS shared file system, you need to perform some diagnostic procedures. For information about these procedures, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
Depending on whether or not this daemon is running, perform the remaining steps in this procedure.
2. (Optional) Determine whether the sam-fsd daemon is running.
Perform this step if the previous step's output indicates that the sam-sharefsd daemon is not running.
a. Use the ps(1) and grep(1) commands to verify that the sam-fsd daemon is running for this file system.
CODE EXAMPLE 2-42 shows sam-fsd output that indicates that the daemon is running.
The mount(1M) command mounts a file system. It also reads the /etc/vfstab and samfs.cmd configuration files. For information about the mount(1M) command, see the mount_samfs(1M) man page.
Use one or more of the procedures that follow to mount your file system. The introduction to each procedure explains the file system to which it pertains.
Perform this procedure for all Sun StorEdge QFS file systems, as follows:
1. Use the mount(1M) command to mount the file system.
Specify the file system mount point as the argument. For example:
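For the qfs1 example, the command is:

```
# mount /qfs1
```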
2. Use the mount(1M) command with no arguments to verify the mount.
This step confirms that the file system is mounted and shows how its permissions are set. CODE EXAMPLE 2-43 shows the output from a mount(1M) command issued to verify whether example file system qfs1 is mounted.
3. (Optional) Use the chmod(1) and chown(1) commands to change the permissions and ownership of the file system's root directory.
If this is the first time the file system has been mounted, it is typical to perform this step. CODE EXAMPLE 2-44 shows the commands to use to change file system permissions and ownership.
# chmod 755 /qfs1
# chown root:other /qfs1
Perform this procedure if you are creating a Sun StorEdge QFS shared file system in either a Solaris OS or in a Sun Cluster environment. This procedure ensures that the file system is configured to support changing the metadata server.
1. Log in to the metadata server as superuser.
2. Use the samsharefs(1M) command to change the metadata server.
3. Use the ls(1) -al command to verify that the files are accessible on the new metadata server.
If you are creating a Sun StorEdge QFS shared file system in a Solaris OS environment, repeat these commands on each metadata server or potential metadata server.
If you are creating a Sun StorEdge QFS shared file system in a Sun Cluster, repeat these steps on all hosts that can mount the file system.
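Using the sharefs1 example, with tethys as a potential metadata server and /sharefs1 as an assumed mount point, steps 2 and 3 might look like the following sketch:

```
# samsharefs -s tethys sharefs1
# ls -al /sharefs1
```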
Perform this task if you are configuring a Sun StorEdge QFS shared file system on a Sun Cluster platform.
1. Log in to the metadata server as superuser.
2. Use the scrgadm(1M) -p command and search for the SUNW.qfs(5) resource type.
This step verifies that the SUNW.qfs resource type has been registered. For example:
If the SUNW.qfs resource type is missing, issue the following command:
3. Use the scrgadm(1M) command to set the FilesystemCheckCommand property of the SUNW.qfs(5) resource type to /bin/true.
The SUNW.qfs(5) resource type is part of the Sun StorEdge QFS software package. Configuring the resource type for use with your shared file system makes the shared file system's metadata server highly available. Sun Cluster scalable applications can then access data contained in the file system. For more information, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
CODE EXAMPLE 2-45 shows how to use the scrgadm(1M) command to register and configure the SUNW.qfs resource type. In this example, the nodes are scnode-A and scnode-B. /global/sharefs1 is the mount point as specified in the /etc/vfstab file.
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
  -x QFSFileSystem=/global/sharefs1
Perform this task if you are configuring a Sun StorEdge QFS highly available file system on a Sun Cluster platform.
Use the scrgadm(1M) command to set the FilesystemCheckCommand property of HAStoragePlus to /bin/true.
All other resource properties for HAStoragePlus apply as specified in SUNW.HAStoragePlus(5).
The following example command shows how to use the scrgadm(1M) command to configure an HAStoragePlus resource:
# scrgadm -a -g qfs-rg -j ha-qfs -t SUNW.HAStoragePlus \
  -x FilesystemMountPoints=/global/qfs1 \
  -x FilesystemCheckCommand=/bin/true
Perform this task if you are configuring a file system and you want the file system to be NFS shared.
This procedure uses the Sun Solaris share(1M) command to make the file system available for mounting by remote systems. The share(1M) commands are typically placed in the /etc/dfs/dfstab file and are executed automatically by the Sun Solaris OS when you enter init(1M) state 3.
The following procedure explains, in general terms, how to NFS share a file system in a Sun Cluster environment. For more information about NFS sharing file systems that are controlled by HAStoragePlus, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide, the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS, and your NFS documentation.
1. Locate the dfstab.resource_name file.
The Pathprefix property of HAStoragePlus specifies the directory in which the dfstab.resource_name file resides.
2. Use vi(1) or another editor to add a share(1M) command to the Pathprefix/SUNW.nfs/dfstab.resource_name file.
For example, add a line like the following to NFS share the new file system:
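Assuming the file system is mounted at /global/qfs1, the added line might look like this:

```
share -F nfs -o rw /global/qfs1
```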
If you are configuring a Sun StorEdge QFS shared file system, you can perform this procedure from the metadata server or from one of the shared clients.
1. Use vi(1) or another editor to add a share(1M) command to the /etc/dfs/dfstab file.
For example, add a line like the following to direct the Solaris OS to NFS share the new Sun StorEdge QFS file system:
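Assuming the file system is mounted at /qfs1, the added line might look like this:

```
share -F nfs -o rw /qfs1
```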
2. Use the ps(1) and grep(1) commands to determine whether or not nfs.server is running.
CODE EXAMPLE 2-46 shows these commands and their output.
In CODE EXAMPLE 2-46, the lines that contain /usr/lib/nfs indicate that the NFS server is running.
3. (Optional) Start the NFS server.
Perform this step if nfs.server is not running. Use the following command:
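On Solaris OS releases that use rc scripts, the NFS server can be started as follows:

```
# /etc/init.d/nfs.server start
```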
4. (Optional) Type the share(1M) command at a root shell prompt.
Perform this step if you want to NFS share the new Sun StorEdge QFS file system immediately.
If there are no NFS shared file systems when the Sun Solaris OS boots, the NFS server is not started. CODE EXAMPLE 2-47 shows the commands to use to enable NFS sharing. You must change to run level 3 after adding the first share entry to this file.
# init 3
# who -r
 .       run-level 3  Dec 12 14:39     3      2  2
# share
-        /qfs1   -   "QFS"
Some NFS mount parameters can affect the performance of an NFS mounted Sun StorEdge QFS file system. You can set these parameters in the /etc/vfstab file as follows:
For more information about these parameters, see the mount_nfs(1M) man page.
5. Proceed to To NFS Mount the File System on NFS Clients in a Solaris OS Environment.
If you are configuring a Sun StorEdge QFS shared file system, you can perform this procedure from the metadata server or from one of the shared clients.
1. On the NFS client systems, use vi(1) or another editor to edit the /etc/vfstab file and add a line to mount the server's Sun StorEdge QFS file system at a convenient mount point.
The following example line mounts server:/qfs1 on the /qfs1 mount point:
In this example, server:/qfs1 is mounted on /qfs1, and information is entered into the /etc/vfstab file.
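The /etc/vfstab entry might look like the following sketch; the NFS mount options shown are illustrative.

```
#DEVICE        DEVICE   MOUNT  FS    FSCK  MOUNT    MOUNT
#TO MOUNT      TO FSCK  POINT  TYPE  PASS  AT BOOT  PARAMETERS
#
server:/qfs1   -        /qfs1  nfs   -     yes      hard,intr,timeo=60
```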
2. Save and close the /etc/vfstab file.
3. Enter the mount(1M) command.
The following mount(1M) command mounts the qfs1 file system:
The automounter can also do this, if you prefer. Follow your site procedures for adding server:/qfs1 to your automounter maps. For more information about automounting, see the automountd(1M) man page.
Perform this task if you are configuring the following types of file systems:
1. Log in to the appropriate host.
You must perform this step with the file system mounted on all nodes. If it is not mounted, go back to Mounting the File System and follow the instructions there.
2. Use the scswitch(1M) command to move the file system resource to another node.
3. Use the scstat(1M) command to verify that the file system resource moved to a different node.
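Assuming the resource group qfs-rg and nodes scnode-A and scnode-B from the earlier examples, steps 2 and 3 might look like this:

```
# scswitch -z -g qfs-rg -h scnode-B
# scstat -g
```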
Perform this task if you are configuring the following types of file systems:
1. From any node in the Sun Cluster, use the scswitch(1M) command to move the file system resource from one node to another.
2. Use the scstat(1M) command to verify that the file system resource moved to a different node.
3. Repeat the preceding commands on each node in the cluster.
File systems are made up of directories, files, and links. The Sun StorEdge QFS file system keeps track of all the files in the .inodes file. The .inodes file resides on a separate metadata device. The file system writes all file data to the data devices.
It is important to use the qfsdump(1M) command periodically to create a dump file of metadata and file data. The qfsdump(1M) command saves the relative path information for each file contained in a complete file system or in a portion of a file system. This protects your data in the event of a disaster.
Create dump files at least once a day. The frequency depends on your site's requirements. By dumping file system data on a regular basis, you can restore old files and file systems. You can also move files and file systems from one server to another.
The following are some guidelines for creating dump files:
You can run the qfsdump(1M) command manually or automatically. Even if you implement this command to be run automatically, you might need to run it manually from time to time depending on your site's circumstances. In the event of a disaster, you can use the qfsrestore(1M) command to recreate your file system. You can also restore a single directory or file. For more information, see the qfsdump(1M) man page and see the Sun QFS, Sun SAM-FS, and Sun SAM-QFS Disaster Recovery Guide.
For more information about creating dump files, see the qfsdump(1M) man page. The following sections describe procedures for issuing this command both manually and automatically.
1. Make an entry in root's crontab file so that the cron daemon runs the qfsdump(1M) command periodically.
This entry executes the qfsdump(1M) command at 10 minutes after midnight. It uses the cd(1) command to change to the mount point of the qfs1 file system, and it executes the /opt/SUNWsamfs/sbin/qfsdump command to write the data to tape device /dev/rmt/0cbn.
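Based on the description above, the crontab entry might look like the following (written as one line):

```
10 0 * * * (cd /qfs1; /opt/SUNWsamfs/sbin/qfsdump -f /dev/rmt/0cbn)
```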
2. (Optional) Using the previous step as a guide, make similar crontab file entries for each file system.
Perform this step if you have more than one Sun StorEdge QFS file system. Make sure you save each dump file in a separate file.
1. Use the cd(1) command to go to the directory that contains the mount point for the file system.
2. Use the qfsdump(1M) command to write a dump file to a file system outside of the one you are dumping.
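For example, assuming that /save is a hypothetical file system outside of qfs1, the commands might be:

```
# cd /qfs1
# /opt/SUNWsamfs/sbin/qfsdump -f /save/qfs1_dump
```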
Sun StorEdge QFS regularly accesses several files that have been created as part of this installation and configuration procedure. You should back up these files regularly to a file system that is outside the file system in which they reside. In the event of a disaster, you can restore these files from your backup copies.
Note - Sun Microsystems strongly recommends that you back up your environment's configuration files because they will be needed in the event of a file system disaster.
The following files are among those that you should back up regularly and whenever you modify them:
For more information about the files you should protect, see the Sun QFS, Sun SAM-FS, and Sun SAM-QFS Disaster Recovery Guide.
The Sun StorEdge QFS software can be configured to notify you when potential problems occur in its environment. The system sends notification messages to a management station of your choice. The Simple Network Management Protocol (SNMP) software manages the exchange of information between network devices such as servers, automated libraries, and drives.
The Sun StorEdge QFS and Sun StorEdge SAM-FS Management Information Base (MIB) defines the types of problems, or events, that the Sun StorEdge QFS software can detect. The software can detect errors in configuration, tapealert(1M) events, and other atypical system activity. For complete information about the MIB, see /opt/SUNWsamfs/mibs/SUN-SAM-MIB.mib.
The following sections describe how to enable and how to disable remote notification.
1. Ensure that the management station is configured and known to be operating correctly.
(Optional) Verifying the Network Management Station describes this prerequisite.
2. Use vi(1) or another editor to examine file /etc/hosts.
For example, CODE EXAMPLE 2-50 shows an /etc/hosts file that defines a management station. In this example, the management station's hostname is mgmtconsole.
999.9.9.9        localhost
999.999.9.999    loggerhost    loghost
999.999.9.998    mgmtconsole
999.999.9.9      samserver
Examine the /etc/hosts file to ensure that the management station to which notifications should be sent is defined. If it is not defined, add a line that defines the appropriate host.
3. Save your changes to /etc/hosts and exit the file.
4. Use vi(1) or another editor to open file /etc/opt/SUNWsamfs/scripts/sendtrap.
5. Locate the TRAP_DESTINATION=`hostname` directive in /etc/opt/SUNWsamfs/scripts/sendtrap.
This line specifies that the remote notification messages be sent to port 161 of the server upon which the Sun StorEdge QFS software is installed. Note the following:
6. Locate the COMMUNITY="public" directive in /etc/opt/SUNWsamfs/scripts/sendtrap.
This line acts as a password. It prevents unauthorized viewing or use of SNMP trap messages. Examine this line and determine the following:
7. Save your changes to /etc/opt/SUNWsamfs/scripts/sendtrap and exit the file.
The remote notification facility is enabled by default. If you want to disable remote notification, perform this procedure.
1. (Optional) Use the cp(1) command to copy file /opt/SUNWsamfs/examples/defaults.conf to /etc/opt/SUNWsamfs/defaults.conf.
Perform this step if file /etc/opt/SUNWsamfs/defaults.conf does not exist.
2. Use vi(1) or another editor to open file /etc/opt/SUNWsamfs/defaults.conf.
Find the line in defaults.conf that specifies SNMP alerts. The line is as follows:
3. Edit the line to disable SNMP alerts.
Remove the # symbol and change on to off. After editing, the line is as follows:
alerts = off
4. Save your changes to /etc/opt/SUNWsamfs/defaults.conf and exit the file.
5. Use the samd(1M) config command to restart the sam-fsd(1M) daemon.
The format for this command is as follows:
# samd config
This command restarts the sam-fsd(1M) daemon and enables the daemon to recognize the changes in the defaults.conf file.
By default, only the superuser can execute Sun StorEdge QFS administrator commands. However, during installation you can create an administrator group. Members of the administrator group can execute all administrator commands except for star(1M), samfsck(1M), samgrowfs(1M), sammkfs(1M), and samd(1M). The administrator commands are located in /opt/SUNWsamfs/sbin.
After installing the package, you can use the set_admin(1M) command to add or remove the administrator group. You must be logged in as superuser to use the set_admin(1M) command. You can also undo the effect of this selection and make the programs in /opt/SUNWsamfs/sbin executable only by the superuser. For more information about this command, see the set_admin(1M) man page.
To Add the Administrator Group
1. Choose an administrator group name or select a group that already exists within your environment.
2. Use the groupadd(1M) command, or edit the /etc/group file.
The following is an entry from the /etc/group file that designates an administrator group for the Sun StorEdge QFS software. In this example, the samadm group consists of both the adm and operator users.
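Such an entry might look like the following sketch. The group ID 1999 is illustrative, not prescribed by the software; use a GID appropriate to your site.

```
samadm::1999:adm,operator
```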
The Sun StorEdge QFS system logs errors, cautions, warnings, and other messages using the standard Sun Solaris syslog(3) interface. By default, the Sun StorEdge QFS facility is local7.
To Enable Logging
1. Use vi(1) or another editor to open the /etc/syslog.conf file.
Read in the line from the following file:
/opt/SUNWsamfs/examples/syslog.conf_changes
The line is similar, if not identical, to the following line:
local7.debug	/var/adm/sam-log
Note - The preceding entry is all one line and has a TAB character (not a space) between the fields.
This step assumes that you want to use local7, which is the default. If you set logging to something other than local7 in the /etc/syslog.conf file, edit the defaults.conf file and reset it there, too. For more information, see the defaults.conf(4) man page.
2. Append the logging line from /opt/SUNWsamfs/examples/syslog.conf_changes to your /etc/syslog.conf file.
CODE EXAMPLE 2-51 shows the commands to use to append the logging lines.
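A minimal sketch of the append, run as superuser and assuming the default paths named above, is a single cat(1) redirection:

```shell
# cat /opt/SUNWsamfs/examples/syslog.conf_changes >> /etc/syslog.conf
```

Using >> rather than > preserves the existing contents of /etc/syslog.conf and adds the Sun StorEdge QFS logging line at the end.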
3. Create an empty log file and send the syslogd process a HUP signal.
CODE EXAMPLE 2-52 shows the command sequence to create a log file in /var/adm/sam-log and send the HUP to the syslogd daemon.
# touch /var/adm/sam-log
# pkill -HUP syslogd
For more information, see the syslog.conf(4) and syslogd(1M) man pages.
4. (Optional) Use the log_rotate.sh(1M) command to enable log file rotation.
Log files can become very large, and the log_rotate.sh(1M) command can help in managing log files. For more information, see the log_rotate.sh(1M) man page.
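One possible invocation, assuming the /var/adm/sam-log file created in the previous step, is sketched below; the argument shown is an assumption, so confirm the exact arguments on the log_rotate.sh(1M) man page for your release before running it.

```shell
# /opt/SUNWsamfs/sbin/log_rotate.sh /var/adm/sam-log
```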
The Sun StorEdge QFS installation and configuration process is complete. You can configure other Sun products at this time.
For example, if you want to configure an Oracle database, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS. The Oracle Real Application Clusters application is the only scalable application that the Sun StorEdge QFS software supports in Sun Cluster environments.
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.