Oracle® Clusterware Administration and Deployment Guide 11g Release 1 (11.1) Part Number B28255-01
This chapter describes how to clone an existing Oracle Clusterware home and use it to create a new cluster or to extend Oracle Clusterware to new nodes on the same cluster. You implement cloning by running scripts in silent mode.
The cloning procedures described in this chapter are applicable to Linux, UNIX, and Windows systems. Although the examples in this chapter use Linux and UNIX commands, the cloning concepts and procedures apply to all platforms. For the Windows platform, you need to adjust the examples or commands to be Windows specific.
Cloning is the process of copying an existing Oracle installation to a different location and then updating the copied installation to work in the new environment. The changes made by one-off patches applied on the source Oracle home are also present after the clone operation. During cloning, you run a script that replays the actions that installed the Oracle Clusterware home.
Cloning requires that you start with a successfully installed Oracle Clusterware home that you use as the basis for implementing a script that extends the Oracle Clusterware home to either create a new cluster or to extend the Oracle Clusterware environment to more nodes in the same cluster. Manually creating the cloning script can be prone to errors, because you must prepare the script without the benefit of any interactive checks to validate your input. Despite this, the initial effort is worthwhile for scenarios where you run a single script to install tens or even hundreds of clusters. If you have only one cluster to install, then you should use the traditional automated and interactive installation methods, such as Oracle Universal Installer (OUI) or the Provisioning Pack feature of Oracle Enterprise Manager.
Note:
Cloning is not a replacement for Oracle Enterprise Manager cloning that is part of the Provisioning Pack. During Enterprise Manager cloning, the provisioning process simplifies cloning by interactively asking you for details about the Oracle home (such as the location to which you want to deploy the clone, the name of the Oracle Database home, a list of the nodes in the cluster, and so on). The Provisioning Pack feature of Oracle Grid Control provides a framework that automates the provisioning of new nodes and clusters. For data centers with many clusters, the investment in creating a cloning procedure to provision new clusters and new nodes to existing clusters is worth the effort.
The following list describes some situations in which cloning is useful:
Cloning provides a way to prepare an Oracle Clusterware home once and deploy it to many hosts simultaneously. You can complete the installation in silent mode, as a noninteractive process. You do not need to use a graphical user interface (GUI) console, and you can perform cloning from a Secure Shell (SSH) terminal session, if required.
Cloning enables you to create a new installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.
Installing Oracle Clusterware by cloning is a quick process. For example, cloning an Oracle Clusterware home to a new cluster with more than two nodes requires a few minutes to install the Oracle software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).
Cloning provides a guaranteed method of repeating the same Oracle Clusterware installation on multiple clusters.
The cloned installation acts the same as the source installation. For example, you can remove the cloned Oracle Clusterware home using OUI or patch it using OPatch. You can also use the cloned Oracle Clusterware home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most cases. However, you can also customize some aspects of the cloning process, for example, to specify custom port assignments or to preserve custom settings.
The cloning process works by copying all of the files from the source Oracle Clusterware home to the destination Oracle Clusterware home. Thus, any files used by the source instance that are located outside the source Oracle Clusterware home's directory structure are not copied to the destination location.
The size of the binary files at the source and the destination may differ because these files are relinked as part of the cloning operation, and the operating system patch levels may also differ between these two locations. Additionally, the number of files in the cloned home increases because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.
Preparing the Oracle Clusterware Home for Cloning
To prepare the source Oracle Clusterware home to be cloned, you create a copy of an installed Oracle Clusterware home that you then use to perform the cloning procedure on one or more nodes.
Use the following step-by-step procedure to prepare a copy of the Oracle Clusterware home.
Step 1 Install Oracle Clusterware.
Use the detailed instructions in your platform-specific Oracle Clusterware installation guide to perform the following steps on the source node:
Install the Oracle Clusterware 11g release.
Install any patches that are required (for example, 11.1.0.n), if necessary.
Apply one-off patches, if necessary.
Step 2 Shut down Oracle Clusterware.
Before copying the source Oracle Clusterware home, shut down Oracle Clusterware using the crsctl stop crs command. The following example shows the command and the messages that display during the shutdown:
[root@node1 root]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.
Note that you copy the Oracle Clusterware home from only one of the nodes.
Step 3 Make a copy of the Oracle Clusterware home
To keep the installed Oracle Clusterware home as a working home, you should make a full copy of the source Oracle Clusterware home and remove the unnecessary files from the copy. For example, as the root user on Linux systems you could issue the cp command:
cp -prf CRS_HOME location_for_the_copy_of_crs_home
Step 4 Remove unnecessary files from the copy of the Oracle Clusterware home.
The Oracle Clusterware home contains files that are relevant only to the source node, so you should remove the unnecessary files from the copy. You should exclude files in the log, crs/init, racg/dump, srvm/log, and cdata directories.
Use one of the following methods to exclude files from your backup file:
Make a copy of the source CRS home and delete the unnecessary files from the copy.
The following example shows the commands you can issue to remove unnecessary files from the copy of the CRS home. In the example, crscluster represents the name of the cluster:
[root@node1 root]# cd /opt/oracle/product/11g/crs
[root@node1 crs]# rm -rf /opt/oracle/product/11g/crs/log/hostname
[root@node1 crs]# find . -name '*.ouibak' -exec rm {} \;
[root@node1 crs]# find . -name '*.ouibak.1' -exec rm {} \;
[root@node1 crs]# rm -rf ./cdata/*
[root@node1 crs]# rm -rf root.sh*
[root@node1 crs]# cd cfgtoollogs
[root@node1 cfgtoollogs]# find . -type f -exec rm -f {} \;
Create an excludeFileList file and then use the tar command or WinZip to create a copy of the CRS home. For example, on Linux, issue the tar cpfX - excludeFileList command to create a tar file that excludes the unnecessary files (a sample exclude list follows this list).
Step 5 Create a copy of the source Oracle Clusterware home.
On the source node, create a copy of the Oracle Clusterware home using WinZip on Windows systems and tar or gzip on Linux and UNIX systems. Make sure that the tool that you use preserves the permissions and file timestamps.
When creating the copy, the best practice is to include the release number in the name of the file. For example, the following Linux example uses the cd command to change to the Oracle Clusterware home location, and then uses the tar command to create the copy named crs11101.tgz.
The following examples describe how to archive and compress the source Oracle Clusterware home on various platforms:
On Linux and UNIX systems, issue the following command if you are using an excludeFileList file:
tar cpfX - excludeFileList . | compress -fv > temp_dir/crs11101.tar.Z
The following example shows the Linux and UNIX commands to create a copy when you are not using an excludeFileList file. In the tar command, the pathname variable represents the location of the file:
[root@node1 root]# cd /opt/oracle/product/11g/crs/
[root@node1 crs]# tar -zcvf /pathname/crs11101.tgz .
On AIX or HP-UX systems:
tar cpf - . | compress -fv > temp_dir/crs11101.tar.Z
On Windows systems, use WinZip to create a zip file.
Note:
Do not use the jar utility to copy and compress the Oracle Clusterware home.
Cloning Oracle Clusterware to Create a New Cluster
This section explains how to create a new cluster by cloning a successfully installed Oracle Clusterware environment and copying it to the nodes on the destination cluster. The procedures in this section describe how to use cloning for Linux, UNIX, and Windows systems.
For example, you can use cloning to quickly duplicate a successfully installed Oracle Clusterware environment to create a new cluster. Figure 3-1 shows the end result of a cloning procedure in which the Oracle Clusterware home on Node 1 has been cloned to Node 2 and Node 3 on Cluster 2, making Cluster 2 a new two-node cluster.
Figure 3-1 Cloning to Create a New Oracle Clusterware Environment
At a high level, the steps to create a new cluster through cloning are as follows:
Step 1 Prepare the new cluster nodes
On each destination node, perform the following preinstallation steps:
Specify the kernel parameters.
Configure block devices for Oracle Clusterware devices.
Ensure you have set the block device permissions correctly.
Use short, nondomain-qualified names for all names in the Hosts file.
Test whether or not the interconnect interfaces are reachable using the ping command.
Verify that the VIP addresses are not active at the start of the cloning process by using the ping command (pinging the VIP address must fail); see the example commands after this list.
Run the Cluster Verification Utility (CVU) to verify your hardware and operating system environment.
See your platform-specific Oracle Clusterware installation guide for the complete preinstallation checklist.
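For example, on a Linux system you might verify the interconnect and VIP addresses and then run CVU before cloning. The node names below are illustrative, and the cluvfy command assumes you run it from a location that already has access to the CVU binaries (for example, from the installation media using runcluvfy.sh):
ping -c 3 node2-priv
ping -c 3 node2-vip
cluvfy stage -pre crsinst -n node1,node2 -verbose
The ping of the private interconnect address should succeed, and the ping of the VIP address must fail before you start cloning.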
Note:
Unlike traditional methods of installation, the cloning process does not validate your input during the preparation phase. (By comparison, during the traditional method of installation using the OUI, various checks take place during the interview phase.) Thus, if you make any mistakes during the hardware setup or in the preparation phase, then the cloned installation will fail.
Step 2 Deploy Oracle Clusterware on the destination nodes
Before you begin the cloning procedure described in this section, ensure that you have completed the prerequisite tasks to create a copy of the Oracle Clusterware home, as described in the "Preparing the Oracle Clusterware Home for Cloning" section.
On each destination node, deploy the copy of the Oracle Clusterware home by performing the following steps:
If you do not have a shared Oracle Clusterware home, then restore the copy of the Oracle Clusterware home on each node in the destination cluster, using the same directory structure that the Oracle Clusterware home used on the source node. Skip this step if you have a shared Oracle Clusterware home.
For example:
On Linux or UNIX systems, issue commands similar to the following:
[root@node1 root]# mkdir -p /opt/oracle/product/11g/crs
[root@node1 root]# cd /opt/oracle/product/11g/crs
[root@node1 crs]# tar -zxvf /pathname/crs11101.tgz
In the example, the pathname variable represents the directory structure in which you want to install the Oracle Clusterware home.
On Windows systems, unzip the Oracle Clusterware home on the destination node, using the same directory structure that the Oracle Clusterware home used on the source node.
Change the ownership of all files to the oracle user and oinstall group, and create a directory for the Oracle Inventory. For example, the following commands are for a Linux system:
[root@node1 crs]# chown -R oracle:oinstall /opt/oracle/product/11g/crs
[root@node1 crs]# mkdir -p /opt/oracle/oraInventory
[root@node1 crs]# chown oracle:oinstall /opt/oracle/oraInventory
Run the preupgrade.sh script from the CRS_Home/install directory on each target node as follows:
preupgrade.sh -crshome target_crs_oh -crsuser user_who_runs_cloning -noshutdown
Step 3 Run the clone.pl script on each destination node
To set up the new Oracle Clusterware environment, the clone.pl script requires you to provide a number of setup values. You can supply these values either as input when you run the clone.pl script, or by creating a file in which you assign values to the cloning variables. The following discussions describe these options.
Supplying Input to the clone.pl Script on the Command Line
If you do not have a shared Oracle Clusterware home, then on each destination node, navigate to the $ORACLE_HOME/clone/bin directory and run the clone.pl script, which performs the main Oracle Clusterware cloning tasks. To run the script, you need to supply input for a number of variables, as shown in the following example:
Note:
When cloning Oracle Clusterware using the clone.pl script, you must set a value for the ORACLE_BASE variable even though specifying Oracle Base is not a requirement of the Oracle Clusterware installation. You can set the ORACLE_BASE variable to any directory location (for example, you could set it to the CRS Home location), because the value is ignored.
The clone.pl script takes the following variables:
Oracle_home_name is the name of the destination Oracle Clusterware home
new_node is the name of the destination node
new_node-priv is the private interconnect protocol address of the destination node
new_node-vip is the virtual interconnect protocol address of the destination node
central_inventory_location is the location of the Oracle central inventory
For example:
On Linux and UNIX systems:
perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=CRS_home ORACLE_HOME_NAME=CRS_HOME_NAME '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2' '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"' '-O"ret_PrivIntrList=private interconnect list"' '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"' '-O-noConfig'
On Windows systems:
perl clone.pl ORACLE_BASE=D:\oracle ORACLE_HOME=CRS_home ORACLE_HOME_NAME=CRS_HOME_NAME '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2' '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"' '-O"ret_PrivIntrList=private interconnect list"' '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"' '-O-noConfig' '-OPERFORM_PARTITION_TASKS=FALSE'
If you have a shared Oracle Clusterware home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system. Ensure that the variables n_storageTypeOCR and n_storageTypeVDSK are set to 2 for redundant storage, or to 1 for nonredundant storage; in the latter case, you must also specify the mirror locations. On the other nodes, issue the same command, passing the additional argument PERFORM_PARTITION_TASKS=FALSE.
For example:
perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=CRS_home ORACLE_HOME_NAME=CRS_home_name '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2' '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"' '-O"ret_PrivIntrList=private interconnect list"' '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"' '-O-noConfig' '-OPERFORM_PARTITION_TASKS=FALSE'
See Also:
The "Cloning Script Variables Reference" section for more information about setting these variables.Supplying Input to the clone.pl
Script in a File
Because the clone.pl script is sensitive to the parameters being passed to it, you must be accurate in your use of brackets, single quotation marks, and double quotation marks. To make running the clone.pl script less prone to errors, you can create a file that is similar to the start.sh script shown in Example 3-1, in which you specify environment variables and cloning parameters for the clone.pl script.
Example 3-1 shows an excerpt from an example script called start.sh that calls the clone.pl script and has been set up for a cluster named crscluster. Invoke the script as the operating system user that installed Oracle Clusterware.
Example 3-1 Excerpt From the start.sh Script to Clone Oracle Clusterware
#!/bin/sh
ORACLE_BASE=/opt/oracle
CRS_home=/opt/oracle/product/11g/crs
E01=CRS_home=/opt/oracle/product/11g/crs
E02=ORACLE_HOME=${CRS_home}
E03=ORACLE_HOME_NAME=OraCrs11g
E04=ORACLE_BASE=/opt/oracle
#C00="-O'-debug'"
C01="-O's_clustername=crscluster'"
C02="-O'INVENTORY_LOCATION=/opt/oracle/oraInventory'"
C03="-O'sl_tableList={node1:node1int:node1vip:N:Y,node2:node2int:node2vip:N:Y}'"
C04="-O'ret_PrivIntrList={eth0:144.25.212.0:1,eth1:10.10.10.0:2}'"
C05="-O'n_storageTypeVDSK=1'"
C06="-O's_votingdisklocation=/dev/sdc1' -O's_OcrVdskMirror1RetVal=/dev/sdd1' -O's_VdskMirror2RetVal=/dev/sde1'"
C07="-O'n_storageTypeOCR=1'"
C08="-O's_ocrpartitionlocation=/dev/sdc2' -O's_ocrMirrorLocation=/dev/sdd2'"
perl ${CRS_home}/clone/bin/clone.pl $E01 $E02 $E03 $E04 $C01 $C02 $C03 $C04 $C05 $C06 $C07 $C08
The start.sh script sets several environment variables and cloning parameters, as described in Table 3-1 and Table 3-2, respectively.
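For example, assuming you saved the script as start.sh in a staging directory such as /opt/oracle/clone (this location is illustrative), you might make it executable and run it as the operating system user that installed Oracle Clusterware on each destination node:
chmod +x /opt/oracle/clone/start.sh
/opt/oracle/clone/start.sh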
Table 3-1 describes the environment variables E01, E02, E03, and E04 that are shown in bold typeface in Example 3-1.
Table 3-1 Environment Variables Passed to the clone.pl Script
Symbol | Variable | Description |
---|---|---|
E01 | CRS_home | The location of the Oracle Clusterware home. This directory location must exist and must be owned by the Oracle operating system group oinstall. |
E02 | ORACLE_HOME | The location of the Oracle Clusterware home. This directory location must exist and must be owned by the Oracle operating system group oinstall. |
E03 | ORACLE_HOME_NAME | The name of the Oracle Clusterware home. This is stored in the Oracle Inventory. |
E04 | ORACLE_BASE | The location of the Oracle Base directory. |
Also, see "Cloning Script Variables Reference" for a description of the cloning parameters C01 through C08, that are shown in bold typeface in Example 3-1.
Step 4 Run the orainstRoot.sh script on each node
In the Central Inventory directory on each destination node, run the orainstRoot.sh script as the operating system user that installed Oracle Clusterware. This script populates the /etc/oraInst.loc file with the location of the central inventory.
Note that you can run the script on each node simultaneously. For example:
[root@node1 root]# /opt/oracle/oraInventory/orainstRoot.sh
Ensure the orainstRoot.sh script has completed on each destination node before proceeding to the next step.
Step 5 Run the CRS_home/root.sh script
On each destination node, run the CRS_home/root.sh script. You can run the script on only one node at a time. The following example is for a Linux or UNIX system:
On the first node, issue the following command:
[root@node1 root]# /opt/oracle/product/11g/crs/root.sh
Ensure the CRS_home/root.sh script has completed on the first node before running it on the second node.
On each subsequent node, issue the following command:
[root@node2 root]# /opt/oracle/product/11g/crs/root.sh
The root.sh script automatically sets up the node applications: Global Services Daemon (GSD), Oracle Notification Services (ONS), and Virtual IP (VIP) resources in the OCR.
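After root.sh completes on all nodes, you can optionally confirm that Oracle Clusterware is running and that the node applications were created. The following commands are a sketch for a Linux system using the CRS home from the earlier examples; the output varies by configuration:
/opt/oracle/product/11g/crs/bin/crsctl check crs
/opt/oracle/product/11g/crs/bin/crs_stat -t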
Step 6 Run the configuration assistants and the Oracle Cluster Verify utility
At the end of the Oracle Clusterware installation on each new node, run the configuration assistants and CVU using the commands in the CRS_home/cfgtoollogs/configToolAllCommands file.
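For example, on a Linux system you might review the generated commands and then run the file as the user that installed Oracle Clusterware; the path below assumes the CRS home used elsewhere in this chapter:
cat /opt/oracle/product/11g/crs/cfgtoollogs/configToolAllCommands
sh /opt/oracle/product/11g/crs/cfgtoollogs/configToolAllCommands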
For example, you can use cloning to quickly extend a successfully installed Oracle Clusterware environment to more nodes in the same cluster. Figure 3-2 shows the end result of a cloning procedure in which the Oracle Clusterware home on Node 1 has been cloned to Node 2 in the same cluster, making it a two-node cluster.
Figure 3-2 Cloning to Extend the Oracle Clusterware Environment to Another Node
At a high level, the steps to extend Oracle Clusterware to more nodes are nearly identical to the steps described in the "Cloning Oracle Clusterware to Create a New Cluster" section.
The following list describes the steps you perform to extend Oracle Clusterware to additional nodes in the cluster:
Run the clone.pl script on each destination node. The following example is for Linux or UNIX systems:
perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Oracle_home_name "sl_tableList={node2:node2-priv:node2-vip}" INVENTORY_LOCATION=central_inventory_location -noConfig
Run the orainstRoot.sh script on each destination node.
Run the addNode script on the source node.
Run the following command on the source node, where new_node is the name of the new node, new_node-priv is the private interconnect protocol address for the new node, and new_node-vip is the virtual interconnect protocol address for the new node:
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES=(new_nodes)" "CLUSTER_NEW_PRIVATE_NODE_NAMES=(new_node-priv)" "CLUSTER_NEW_VIRTUAL_HOSTNAMES=(new_node-vip)" -noCopy
Note:
Because the clone.pl script has already been run on the new node, this step only updates the inventories on the nodes and instantiates scripts on the local node.
On the source node, run a script to instantiate the node:
On Linux and UNIX systems, run the rootaddnode.sh script from the CRS_HOME/install directory as the root user.
On Windows systems, run the crssetup.add.bat script from the %CRS_HOME%\install directory.
Run the CRS_home/root.sh script on each destination node in Linux and UNIX environments.
Run the configuration assistants and the CLUVFY utility.
As the user that owns the clusterware on the source node of the cluster, run the configuration assistants as described in the following steps:
On Linux or UNIX systems, issue the following onsconfig command:
onsconfig add_config node2:remote_port node3:remote_port
You can obtain the remote port by issuing the cat ons.config command from the /opmn/conf directory in the CRS home location (see the example after this list).
On Windows systems, issue the racgons command:
./racgons add_config node2:remote_port node3:remote_port
On Linux, UNIX, or Windows systems, run the CLUVFY utility in postinstallation verification mode to confirm that the installation of Oracle Clusterware was successful. For example:
CRS_HOME/bin/cluvfy stage -post crsinst -n node1,node2
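As mentioned earlier, you can read the ONS remote port from the ons.config file before running onsconfig or racgons. For example, on a Linux system, assuming the CRS home used in this chapter and the typical remoteport parameter name in ons.config (verify the parameter name in your own file):
grep -i remoteport /opt/oracle/product/11g/crs/opmn/conf/ons.config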
Cloning Script Variables Reference
Table 3-2 describes the variables that can be passed to the clone.pl script when you include the -O option on the command.
Table 3-2 Variables for the clone.pl Script with the -O option
Variable | Datatype | Description |
---|---|---|
s_clustername | String | Set the value for this variable to the unique name of the cluster that you are creating from a cloning operation. Use a maximum of 15 characters. Valid characters for the cluster name are any combination of lowercase and uppercase alphabetic characters A to Z, numerals 0 through 9, hyphens (-), pound signs (#), and underscores (_). |
INVENTORY_LOCATION | String | The location of the Oracle Inventory. This directory location must exist and must be owned by the Oracle operating system group oinstall. |
sl_tableList | String List | A comma-delimited list of the nodes that make up the cluster. Set the value of this variable to match the information in the cluster configuration information table. Each value is a colon-delimited entry in which the first field designates the public node name, the second field designates the private node name, and the third field designates the virtual host name. The fourth and fifth fields are used only by OUI and should default to N and Y, for example: {"node1:node1-priv:node1-vip:N:Y","node2:node2-priv:node2-vip:N:Y"}. |
ret_PrivIntrList | String List | This is the return value from the Private Interconnect Enforcement table. This variable has values in the format interface_name:subnet:interface_type. For example: {"eth0:10.87.24.0:2","eth1:140.87.24.0:1","eth3:140.74.30.0:3"} |
n_storageTypeVDSK | Integer | Specifies the storage type for the voting disk. Set the value to 2 if you are using redundant storage, or to 1 if you are using nonredundant storage; in the latter case, you must also specify the voting disk mirror locations. |
n_storageTypeOCR | Integer | Specifies the storage type for the Oracle Cluster Registry (OCR). Set the value to 2 if you are using redundant storage, or to 1 if you are using nonredundant storage; in the latter case, you must also specify the OCR mirror location. |
 | String | This variable contains user-entered cluster name information; allow a maximum of 15 characters. |
 | String | This variable is not required in the Oracle Cluster Registry (OCR) dialog. |
 | String | This variable is used to pass the cluster configuration file information, which is the same file as that specified during installation. You may use this file instead of specifying the node list directly. Each line of the file lists the public, private, and virtual host names for one node, for example: node1 node1-priv node1-vip node2 node2-priv node2-vip |
s_votingdisklocation | String | Set the value of this variable to the location of the voting disk. For example: /oradbshare/oradata/vdisk |
s_OcrVdskMirror1RetVal | String | Set the value of this variable to the location of the first additional voting disk. You must set this variable if you choose a value of 1 for n_storageTypeVDSK. For example: /oradbshare/oradata/vdiskmirror1 |
s_ocrpartitionlocation | String | Set the value of this variable to the OCR location. For example: /oradbshare/oradata/ocr |
s_ocrMirrorLocation | String | Set the value of this variable to the OCR mirror location. For example: /oradbshare/oradata/ocrmirror |
s_VdskMirror2RetVal | String | Set the value of this variable to the location of the second additional voting disk. You must set this variable if you choose a value of 1 for n_storageTypeVDSK. For example: /oradbshare/oradata/vdiskmirror2 |
CLUSTER_NODES | String List | The value of this variable represents the cluster node names that you selected for installation. For example, if you selected only node1, then set CLUSTER_NODES = {"node1"}. |
 | Boolean | Only set this variable when performing a silent installation with a response file. The valid values are TRUE and FALSE. |
sl_OHPartitionsAndSpace_valueFromDlg | String List | Set the value for this variable as a comma-delimited list of partition and space information. For example, to configure the OCR and voting disk on raw devices and to not use a cluster file system for either data or software, set sl_OHPartitionsAndSpace_valueFromDlg = {Disk,Partition,partition size,0,N/A,1,Disk,Partition,partition size,0,N/A,2,...}. |
The cloning script runs multiple tools, each of which may generate its own log files. After the clone.pl script finishes running, you can view log files to obtain more information about the cloning process.
The following log files that are generated during cloning are the key log files of interest for diagnostic purposes:
Central_Inventory/logs/cloneActionstimestamp.log
Contains a detailed log of the actions that occur during the OUI part of the cloning.
Central_Inventory/logs/oraInstalltimestamp.err
Contains information about errors that occur when OUI is running.
Central_Inventory/logs/oraInstalltimestamp.out
Contains other miscellaneous messages generated by OUI.
$ORACLE_HOME/clone/logs/clonetimestamp.log
Contains a detailed log of the actions that occur prior to cloning as well as during the cloning operations.
$ORACLE_HOME/clone/logs/errortimestamp.log
Contains information about errors that occur prior to cloning as well as during cloning operations.
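For example, on a Linux system you might scan these logs for problems after a cloning run; the inventory location below is illustrative and the log file names include a timestamp:
grep -i error $ORACLE_HOME/clone/logs/clone*.log
cat /opt/oracle/oraInventory/logs/oraInstall*.err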
Table 3-3 describes how to find the location of the Oracle inventory directory.
Table 3-3 Finding the Location of the Oracle Inventory Directory
Type of System | Location of the Oracle Inventory Directory |
---|---|
All UNIX computers except Linux and IBM AIX | /var/opt/oracle/oraInst.loc |
IBM AIX and Linux | /etc/oraInst.loc |
Windows | Obtain the location from the Windows Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\inst_loc |
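For example, on a Linux system you can find the Central_Inventory location used in the log file names listed earlier by reading the oraInst.loc file; the inventory_loc entry gives the inventory directory:
cat /etc/oraInst.loc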