Oracle® Database Administrator's Reference
10g Release 2 (10.2) for UNIX-Based Operating Systems

Part Number B15658-03

F Administering Oracle Database on Tru64 UNIX

This appendix contains information about administering Oracle Database on Tru64 UNIX.

F.1 Enabling Oracle Database Directed Placement Optimizations

HP AlphaServer GS series, ES47, and ES80 systems consist of smaller building blocks called Resource Affinity Domains (RADs). A RAD is a collection of tightly coupled CPUs, memory modules, and an I/O controller connected through a fast interconnect. A second-level interconnect links the RADs together to form a larger configuration.

Unlike previous-generation servers, which have only one shared interconnect between CPUs, memory, and I/O controllers, GS series, ES47, and ES80 servers offer superior performance and memory access times when a particular CPU accesses memory within its own RAD or uses its local I/O controller. Because of the switched interconnect, I/O activity and memory accesses within one RAD do not interfere with those within another RAD. However, because memory accesses between a CPU and a memory module in different RADs must traverse two levels of interconnect, these memory references take longer than memory references within a RAD.

Directed memory and process placement support enables sophisticated applications to communicate their specific requirements for process and memory layout to the operating system. This communication results in greater performance through increased localization of memory references within a RAD.

Oracle Database includes enhanced support for the special capabilities of high-performance servers such as the GS series, ES47, and ES80. Directed placement optimizations specifically take advantage of the hierarchical interconnects available in these servers. Previous-generation servers have a single shared interconnect, so they neither benefit from directed placement optimizations nor lose performance because of them. Therefore, these optimizations are disabled by default in Oracle Database.

The following sections provide information about directed placement optimizations:

F.1.1 Requirements to Run the Directed Placement Optimizations

The system must meet the following requirements for Oracle Database directed placement optimizations to work:

  • The system must be an HP AlphaServer GS series, ES47, or ES80 system, or a similar locality-sensitive system. Oracle Database optimizations affect only systems that are locality sensitive.

  • The operating system must be Tru64 UNIX V5.1B or later.

F.1.2 Enabling Oracle Directed Placement Optimizations

To enable Oracle directed placement optimizations, follow these steps:

  1. Shut down the Oracle instance.

  2. Relink Oracle Database by entering the following commands:

    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk numa_on
    $ make -f ins_rdbms.mk ioracle
    
    

If you are not using a compatible version of Tru64 UNIX, then the following message is displayed:

Operating System Version Does not Support NUMA.
Disabling NUMA!

If you enable Oracle directed placement optimizations and later change Tru64 UNIX to an incompatible version, then disable the optimizations as described in the following section.

F.1.3 Disabling Oracle Directed Placement Optimizations

To disable Oracle directed placement optimizations, follow these steps:

  1. Shut down the Oracle instance.

  2. Relink Oracle Database using the numa_off option:

    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk numa_off
    $ make -f ins_rdbms.mk ioracle
    

F.1.4 Using Oracle Directed Placement Optimizations

The Oracle directed placement optimizations assume an equi-partitioned configuration. This means that all RADs are configured with the same number of CPUs and the same amount of memory. Oracle Database is assumed to run across all RADs on the system.

F.1.5 Oracle Initialization Parameters

To make the most efficient use of the local environment, Oracle Database adjusts some initialization parameters automatically depending on the database configuration as reported by the operating system. This practice eliminates common errors in correctly computing subtle dependencies in these parameters.

F.1.6 Tru64 UNIX Subsystem Attributes

You must set the subsystem attributes in the following table to realize the full benefits of a NUMA system.

Subsystem  Attribute                                        Setting
ipc        ssm_threshold                                    0
ipc        shm_allocate_striped                             1 (default)
vm         rad_gh_regions[0], rad_gh_regions[1], and so on  See the following steps.

Set the rad_gh_regions[n] attributes as follows:
  1. Run the following command:

    $ ipcs -b
    
    
  2. From the output, add the values in the SEGSZ column and convert the total to megabytes by dividing it by 1048576.

  3. If the value of the LOG_BUFFER initialization parameter is less than 8388608 bytes (8 MB), then subtract 8 MB from the total. If the LOG_BUFFER value is between 8388608 and 16777216 bytes (16 MB), then subtract 16 MB from the total.

  4. Divide the result of step 3 by the number of RADs on the system.

  5. Set the rad_gh_regions[0] attribute to the result of step 4 plus either 8 MB or 16 MB, depending on the value of the LOG_BUFFER initialization parameter. Set the other valid rad_gh_regions[n] attributes to the result of step 4.

These steps assume that all the RADs on the system are allocated to Oracle.
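
The arithmetic in steps 2 through 4 can be scripted. The following is a minimal sketch, assuming that shared memory entries in the ipcs -b output appear on lines beginning with m and that the size column header is exactly SEGSZ; verify both against the output on your release before relying on it:

    $ ipcs -b | awk '
          /SEGSZ/ { for (i = 1; i <= NF; i++) if ($i == "SEGSZ") col = i }  # locate the SEGSZ column in the header
          /^m/    { total += $col }                                         # sum segment sizes (bytes) of shared memory lines
          END     { printf "%d MB\n", total / 1048576 }'

Subtract 8 MB or 16 MB from the reported total, depending on the LOG_BUFFER value, and divide by the number of RADs as described in steps 3 and 4.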

If Oracle is restricted to using a subset of the RADs on the system, then refer to the "Restricting Oracle Database to a Subset of the Number of RADs on the System" section. In that case, apply the value computed for rad_gh_regions[0] in step 5 to the rad_gh_regions[n] attribute for the first RAD listed in the numa_config_sid.ora file.


There are 63 rad_gh_regions attributes in the vm subsystem in Tru64 UNIX V5.1B. Set only the attributes for the RADs that exist on the system. For example, if there are 4 RADs on the system, all used by Oracle, and the SEGSZ column total is 10248 MB with the LOG_BUFFER initialization parameter set to 2097152 (2 MB), then set rad_gh_regions[0] to 2568 and rad_gh_regions[1], rad_gh_regions[2], and rad_gh_regions[3] to 2560. You may have to raise these values slightly, by 1 or 2, to start the instance successfully.
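
For this example, the /etc/sysconfigtab entries might look like the following sketch. The bracketed form follows the attribute names used in this section, but verify the exact array syntax for your Tru64 UNIX release before editing the file:

    vm:
        rad_gh_regions[0] = 2568
        rad_gh_regions[1] = 2560
        rad_gh_regions[2] = 2560
        rad_gh_regions[3] = 2560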

If CPUs and memory are taken offline, then Oracle Database continues to function, but performance may be reduced.

F.1.7 Process Affinity to RADs

You can improve performance by directing the operating system to run processes on specific RADs. If connections to the database are made through the Oracle listener process, and there is a corresponding network adapter on the RAD, then you can run a listener on each RAD. To run the listener on a particular RAD, run the following command, where rad_number is the number of the RAD:

$ runon -r rad_number lsnrctl start [listener_name]

All Oracle shadow processes are automatically created on the same RAD as the Oracle listener.
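
For example, the following hypothetical invocation starts a listener named listener_rad1 (a placeholder name for a listener configured in listener.ora) on RAD 1:

    $ runon -r 1 lsnrctl start listener_rad1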

F.1.8 Restricting Oracle Database to a Subset of the Number of RADs on the System

You can restrict Oracle Database to run on a subset of the RADs on the system. When an instance starts, Oracle Database looks for the numa_config_sid.ora file in the $ORACLE_HOME/dbs directory. If the file does not exist, then Oracle Database uses all RADs on the system. If the file exists, then it instructs the instance to use only the RADs specified in the file and to run processes only on those RADs.

The numa_config_sid.ora file has the following format:

number_RADs
group1
...
groupn

In this format, number_RADs is the number of processor groups (RADs) that Oracle Database should use, and group1 through groupn are the numbers of the RADs for Oracle to use.

Use the following guidelines when creating the file:

  • Do not include any blank lines

  • Include each number on a separate line

  • Do not include anything else on the line

The RADs are numbered from zero to the maximum for the particular system. On a fully configured AlphaServer GS320, the RADs are numbered 0 through 7. For example, to restrict Oracle Database to RADs 1 and 3 of a fully configured GS320, create a numa_config_sid.ora file similar to the following:

2
1
3

The group numbers do not have to be consecutive or in increasing numerical order.

Parallel query slaves and the shadow processes for Bequeath connections are created on the RADs specified in the file. For remote TCP connections, you must bind the listener to the RADs explicitly as described in the "Process Affinity to RADs" section.

F.2 Supporting Mixed CPU Systems

Systems running Tru64 UNIX V5.1B or later can have mixed CPU speeds and types. All CPUs in a single RAD must have the same speed and cache size, but another RAD can have a set of CPUs with a different speed and cache size.

The performance of a mixed CPU system depends on the proportion of slower CPUs to faster CPUs. Performance is also affected by the placement of Oracle processes on the system. In a high-volume online transaction processing (OLTP) environment, placing the database writer and log writer processes on the slower CPUs can adversely affect performance. In a data warehousing or decision support environment, the effect of placing these processes on the slower CPUs may not be noticeable at all.

The ability to mix CPU systems enables you to protect your hardware investment. You can add faster and more powerful CPUs to a system without replacing older CPUs. HP and Oracle have tested and support mixed CPU systems.


Note:

Do not expect the mixed CPU system to perform as well as a system made up entirely of the fastest CPUs of the mixed CPU system. However, a mixed CPU system should perform better than a system made up entirely of the slowest CPUs of the mixed CPU system. Contact HP for a complete list of rules and restrictions for mixed CPU systems.

F.3 Gathering Database Statistics on Tru64 UNIX

Oracle Database 10g runs only on Tru64 UNIX V5.1B or later. This is because HP changed the size of the long double data type from 64 bits on Tru64 UNIX V4.0x to 128 bits on Tru64 UNIX V5.x. This change causes certain Oracle operations to perform with increased precision. One of these operations stores statistics in the data dictionary after a table or index is analyzed.

The query optimizer within Oracle Database uses the statistics stored in the data dictionary to determine how best to run a query. If a stored statistic does not match a statistic calculated by the query optimizer while it searches for the best plan, then the query optimizer may use the wrong plan to run the query. This can cause the query to perform poorly or fail.

For this reason, after upgrading from Oracle8i release 8.1.7 or earlier to Oracle Database 10g, you should gather fresh statistics for all objects in each schema. You do not need to regather statistics after upgrading from Oracle9i release 1 (9.0.1) or release 2 (9.2) to Oracle Database 10g. You can use the DBMS_STATS.GATHER_SCHEMA_STATS procedure to gather statistics for each schema. The DBMS_STATS package saves the current table or index statistics in a table in case the new statistics cause problems.
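
For example, the following SQL*Plus call, shown as a minimal sketch with HR as a placeholder schema name, gathers table statistics for one schema; the cascade option, which also gathers index statistics, is shown as one reasonable choice. Repeat the call for each schema in the database:

    SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR', cascade => TRUE);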


See Also:

For more information about gathering database statistics, refer to Oracle Database PL/SQL Packages and Types Reference

F.4 Tuning Asynchronous I/O

Oracle Database for Tru64 UNIX systems can perform either synchronous or asynchronous I/O. To improve performance, Oracle recommends that you use asynchronous I/O. Set the DISK_ASYNCH_IO initialization parameter to true to enable asynchronous I/O.

Oracle Database can use asynchronous I/O on any data files that are stored on AdvFS or cluster file systems (CFS), in Automatic Storage Management disk groups, or on raw devices. You must tune some kernel subsystem attributes for optimal asynchronous I/O performance.

F.4.1 aio_task_max_num Attribute

Set the aio_task_max_num kernel subsystem attribute for a single instance to 8193.

You should adjust the setting of the aio_task_max_num attribute to accommodate any other applications that use asynchronous I/O, including multiple Oracle Database instances on a single node. Set the value of the attribute to at least the maximum number of I/O requests that any application can issue. For example, if three applications are running on a system and application 1 can issue a maximum of 10 simultaneous asynchronous I/O requests, application 2 can issue 100 simultaneous asynchronous I/O requests, and application 3 can issue 1000 simultaneous asynchronous I/O requests, then set the aio_task_max_num attribute to at least 1000.

If you do not set the aio_task_max_num attribute as described in this section, then the performance of Oracle Database is reduced and spurious I/O errors may occur. These errors are recorded in the alert log and trace files.
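
As a sketch, you can query and change the attribute at run time with the sysconfig command. The subsystem name shown here (rt) is an assumption; confirm where aio_task_max_num lives in the sys_attrs reference pages for your release:

    # sysconfig -q rt aio_task_max_num           # query the current value (rt subsystem name is an assumption)
    # sysconfig -r rt aio_task_max_num=8193      # change the value at run time

To make the change persistent across restarts, add the attribute to the corresponding stanza in the /etc/sysconfigtab file.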

F.5 Direct I/O Support and Concurrent Direct I/O Support

This section describes support for direct and concurrent I/O. It includes the following sections:

F.5.1 Single Instance Requirements

Oracle Database has the following requirements for single instance installations:

  • Tru64 UNIX V5.1B or later with the appropriate patch kits.


    See Also:

    For more information about Tru64 UNIX patch kits, refer to Oracle Database Installation Guide for UNIX Systems.

  • Oracle data files stored on an AdvFS file system or in an Automatic Storage Management disk group.

  • The disks that use the AdvFS file system must be physically connected to the computer running the Oracle Database instance, including disks attached by Fibre Channel. This specifically excludes configurations where I/O must be served by another node because of a lack of physical connectivity.

On Tru64 UNIX V5.1B systems or later in a non-clustered system environment, the AdvFS file system and direct I/O give almost all the performance of raw devices because the file system cache is not used. In addition to this, the file system enables you to more easily manage the database files.

F.5.2 Cluster File Systems

On V5.1B systems or later, Tru64 UNIX supports cluster file systems (CFS). CFS provides a single-namespace file system for all nodes in a cluster. All file systems mounted in a cluster are automatically visible to all nodes in the cluster. Because it is layered on top of the AdvFS file system, CFS inherits many of the characteristics of non-clustered systems.

F.5.3 Tru64 UNIX V5.1B Cluster File Systems

Oracle supports CFS only on Tru64 UNIX V5.1B or later, because this version of the file system supports a concurrent direct I/O model. Any node that has physical connectivity to a drive can issue data I/O to its file systems without consulting the owner node.

All metadata changes to a file, for example, extending it, closing it, or changing its access or modification date, are still served by the owner node and can still cause cluster interconnect saturation. Therefore, the CREATE TABLESPACE, ALTER TABLESPACE ADD DATAFILE, and ALTER DATABASE DATAFILE RESIZE commands can perform poorly on a CFS file system when compared to raw devices.

F.5.4 Disabling Direct I/O Support

Oracle Database running on an AdvFS file system with direct I/O support enabled should perform as well as Oracle Database running on raw devices. In most cases, an Oracle Database that is stored on AdvFS volumes with direct I/O support enabled should perform the same as or better than the same database with direct I/O support disabled. However, the following workload attributes can reduce performance when direct I/O support is enabled:

  • A high read-to-write ratio

  • Oracle data blocks not cached in the SGA because the query uses parallel query slaves

  • A UNIX buffer cache (UBC) several megabytes or larger

  • Full table scan queries where the same set of tables are scanned repeatedly

  • Tables being scanned can fit in the UBC

When direct I/O support is disabled, workloads that have most of the attributes in the preceding list rely heavily on the UBC. Because most, if not all, of the tables being scanned are cached in the UBC, the I/O requests issued by the parallel query are met by the UBC. As a result, the query completes much faster than if the data had to be read from disk as it would with direct I/O enabled.

When direct I/O support is enabled, Oracle data blocks are not cached in the UBC. They are read into process-private memory instead. This means that any query that reads a previously-scanned table must perform I/O requests to disk to retrieve the data. Disk I/O latencies are several orders of magnitude slower than memory latencies. Therefore, the query runs slower and performance is adversely affected.

If your workload has most of the attributes described in the preceding list, then disabling direct I/O support probably improves performance. However, there are often many different types of queries running on the system at the same time: some queries only read data while others insert, modify, or delete data, and the mix of query types differs from site to site. Generally, if your site has more of an OLTP workload, then disabling direct I/O support does not improve performance.

Direct I/O support is enabled by default with Oracle Database 10g. The undocumented _TRU64_DIRECTIO_DISABLED initialization parameter that was used to disable direct I/O support in Oracle9i release 1 (9.0.1) has been removed in Oracle Database 10g. Use the generic FILESYSTEMIO_OPTIONS initialization parameter instead. The following table describes the valid values for the FILESYSTEMIO_OPTIONS initialization parameter as interpreted on Tru64 UNIX.

Value     Description
directIO  Enables direct I/O support; asynchronous I/O support is not enabled for I/O to files on an AdvFS file system.
asynch    Equivalent to none, because asynchronous I/O support is enabled for AdvFS files only if direct I/O support is also enabled.
setall    Enables both direct I/O and asynchronous I/O support for AdvFS files. This is the default option.
none      Disables both direct I/O support and asynchronous I/O support for AdvFS files.


See Also:

See the Oracle Database Reference for more information about the FILESYSTEMIO_OPTIONS initialization parameter.

The DISK_ASYNCH_IO initialization parameter controls the asynchronous I/O state for all database files, whether they are on file systems or raw devices. Therefore, if the DISK_ASYNCH_IO initialization parameter is set to false, then all I/O requests to file system files are synchronous regardless of the value of the FILESYSTEMIO_OPTIONS initialization parameter. The DISK_ASYNCH_IO initialization parameter defaults to true.
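
For example, the following initialization parameter file entries, shown as a sketch, make both settings explicit; both values shown are the defaults on Tru64 UNIX:

    FILESYSTEMIO_OPTIONS = SETALL    # direct I/O and asynchronous I/O for AdvFS files
    DISK_ASYNCH_IO = TRUE            # master switch for asynchronous I/O on all database files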

F.6 Enabling Access to the Real-Time Clock

Many Oracle processes are timed, especially if the TIMED_STATISTICS initialization parameter is set to true. These timing functions call the Tru64 UNIX kernel and can affect Oracle Database performance. On Tru64 UNIX, you can improve performance on heavily loaded systems by enabling processes to directly access the real-time clock.

To enable access to the real-time clock:

  1. Log in as the root user.

  2. Run the following commands:

    # mknod /dev/timedev c 15 0 
    # chmod a+r /dev/timedev
    
    

    Note:

    The special file /dev/timedev remains on the system after restarting.

  3. Restart the Oracle Database instance.

    The system checks for the existence of the /dev/timedev file only on instance startup.
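
As a check after step 2, list the special file. The following sketch shows the expected device type, permissions, and device numbers; the link count, group, and date fields vary by system:

    $ ls -l /dev/timedev
    crw-r--r--   1 root     system    15,  0 Jun  1 12:00 /dev/timedev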

F.7 Setting Up Raw Devices


Caution:

Do not attempt to set up raw devices without the help of an experienced system administrator and specific knowledge about the system that you are using.


Note:

To simplify database file management, Oracle recommends that you use Automatic Storage Management or AdvFS with direct I/O in preference to raw devices.

To set up raw devices/volumes on Tru64 UNIX systems:

  1. If you are using RAC, then ensure that the partitions you are adding are on a shared disk.

  2. Determine the names of the free disk partitions.

    A free disk partition is one that is not in use by a Tru64 UNIX file system and that complies with the following restrictions:

    • It is not listed when you run the /usr/sbin/mount command.

    • It is not in use as a swap device.

    • It does not overlap a swap partition.

    • It is not in use by other Tru64 UNIX applications (for example, other instances of Oracle Database).

    • It does not overlap the Tru64 UNIX file system.

    • It does not use space already used by the file system.

    To determine if a partition is free, obtain a complete map of the starting locations and sizes of the partitions on the device and check for free space, as shown in the disklabel sketch following this procedure. Some partitions may contain file systems that are currently not mounted and are not listed in the /usr/sbin/mount output.


    Note:

    Ensure that the partition does not start at cylinder 0.

  3. Set up the raw device for use by Oracle Database.

    Begin by verifying that the disk is partitioned. If it is not, then use the disklabel command to partition it.

  4. Run the ls command to view the owner and permissions of the device file. For example:

    $ ls -la
    
    
  5. Ensure that the partition is owned by the Oracle software owner. If required, use the chown command to change the ownership on the block and character files for the device. For example:

    # chown oracle:dba /dev/rdisk/dsk10c
    
    
  6. Ensure that the partition has the correct permissions. If required, use the chmod command to make the partition accessible to only the Oracle software owner. For example:

    # chmod 600 /dev/rdisk/dsk10c
    
    
  7. Create a symbolic link to the raw devices you require. For example:

    $ ln -s /dev/rdisk/dsk10c /oracle_data/datafile.dbf
    
    

    To verify that you have created the symbolic link, use the character special device (not the block special device) and run the following command:

    $ ls -Ll datafile
    
    

    The following output should be displayed:

    crwxrwxrwx oracle dba datafile
    

    Caution:

    This symbolic link must be set up on each node of the cluster. Check that no two symbolic links specify the same raw device.

  8. Create a new database using the partition, or add the partition to an existing database.

    To create a new database, run the following command from SQL*Plus:


    Note:

    The size of an Oracle data file created in a raw partition must be at least 64 KB plus one Oracle block size smaller than the size of the raw partition.

    SQL> CREATE DATABASE sid
      2  LOGFILE '/oracle_data/log1.dbf' SIZE 100K,
      3          '/oracle_data/log2.dbf' SIZE 100K
      4  DATAFILE '/oracle_data/datafile.dbf' SIZE 10000K REUSE;
    
    

    To add a partition to a tablespace in an existing Oracle Database, run the following command:

    SQL> ALTER TABLESPACE tablespace_name 
      2  ADD DATAFILE '/dev/rdisk/dsk10c' SIZE 10000K REUSE;
    
    

You can use the same procedure to set up a raw device for the redo log files.
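
As referenced in step 2, the following sketch shows one way to display the partition map of a disk, where dsk10 is a placeholder device name; the -r option reads the label directly from the disk:

    # disklabel -r dsk10

The output lists the offset and size of each partition, from which you can identify overlapping partitions and unused space.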

F.8 Spike Optimization Tool

The Spike optimization tool (Spike) is a performance optimization tool that increases the performance of a Tru64 UNIX binary. In a testing environment, Spike, with feedback, increased the performance of Oracle Database by up to 23 percent on an OLTP workload.

For information about Spike, refer to the Tru64 UNIX documentation or to the spike reference (man) page.

Oracle Database requires Spike version V5.2: (510 USG) GEM: 48C5K LIBMLD: 2.4 DATE: Sep 28 2003 or later.


Note:

If you have a version of Spike earlier than V5.2: (510 USG) GEM: 48C5K LIBMLD: 2.4 DATE: Sep 28 2003, then contact HP for a patch kit.

Run the following command to check the version of Spike:

$ spike -V

You can download the latest version of Spike from the HP Web site.


Note:

Oracle does not support versions of the Oracle executable optimized using the spike command. If you encounter a problem in an Oracle Database binary that has been optimized using Spike, then reproduce the problem with the original unoptimized binary. If the problem persists, then contact Oracle Support Services.

F.8.1 Using Spike

This section describes the system resources required by Spike, how and why to use Spike optimization flags, and the various ways to run Spike.

Setting System Resources

Table F-1 lists the system resources required to run Spike.

Table F-1 System Resource Requirements for Spike

Resource                                        Minimum Value
Physical memory                                 1024 MB
max-per-proc-address-space subsystem attribute  1024 MB
max-per-proc-data-size subsystem attribute      1024 MB
vm-maxvas subsystem attribute                   1024 MB


To set the value of these subsystem attributes in the /etc/sysconfigtab file, add the following lines:

proc:
    max-per-proc-address-space = 0x40000000
    max-per-proc-data-size = 0x40000000

vm:
    vm-maxvas = 0x40000000

Set the limits in your shell environment to the highest values. For the C shell, run the following commands:

% limit datasize unlimited
% limit memoryuse unlimited
% limit vmemoryuse unlimited

Spike can run out of virtual memory if the stacksize limit is set too high. To avoid this problem, run the following C shell command:

% limit stacksize 8192

Checking Optimization Flags

Spike provides a large number of optimization flags. However, you cannot use all spike command optimizations with Oracle Database. The following Spike optimization flags are certified to run with Oracle Database:

-arch, -controlOpt, -fb, -feedback, -map, -nosplit, -nochain, -noporder,
-noaggressiveAlign, -o, -optThresh, -splitThresh, -symbols_live, -tune, -v, -V

When you run Spike, it places a copy of the optimization flags in the image header comment section of the binary that you are optimizing. At the beginning of instance startup, Oracle Database checks which Spike optimizations have been applied to its binary. If Oracle Database detects an optimization that is not known to work for the Oracle Database binary, or if the binary was previously optimized with OM (the predecessor to Spike from HP), then instance startup fails with an ORA-4940 error message. If instance startup fails, then check the alert log file for more information.


Note:

Oracle Database requires that you use the Spike -symbols_live optimization flag.

Running Spike

Use one of the following methods to optimize an executable using Spike:

  • Static spiking

  • Running Spike with feedback

Static spiking requires only a few setup steps and yields approximately half of the performance benefit that is possible with running Spike with feedback.

Running Spike with feedback includes all the optimizations of static spiking plus additional workload-related optimizations. Running Spike with feedback provides the best possible performance benefit; however, it requires considerably more effort than static spiking.

For both running Spike with feedback and static spiking, Oracle recommends running the spiked Oracle binary in a test environment before moving it to a production environment.

Static Spiking

Static spiking performs optimizations that are not specific to your workload, such as manipulating the global pointer (gp) register and taking advantage of the CPU architecture. In a test environment, roughly half of the performance gain possible from Spike came from static spiking. Furthermore, static spiking is relatively straightforward and simple. The combination of simplicity and performance gain makes static spiking worth the effort.

Perform the following steps to use static spiking:

  1. Shut down the database.

  2. Spike the oracle image by entering the following command:

    $ spike oracle -o oracle.spike -symbols_live
    
    
  3. Save the original image and create a symbolic link to the spiked image by entering the following commands:

    $ mv oracle oracle.orig
    $ ln -s oracle.spike oracle
    
    
  4. Start up the database.


    Note:

    Before contacting Oracle for support, you must use the original image to reproduce any problems.

Running Spike with Feedback

Running Spike with feedback performs all the same optimizations as static spiking plus workload-related optimizations such as hot and cold basic block movement. In a test environment, approximately half of the performance gain from Spike was due to the optimizations that depend on feedback information. Running Spike with feedback requires multiple steps and considerably more effort than static spiking. However, performance-sensitive customers may find the extra effort worthwhile.

Perform the following steps to run Spike with feedback:

  1. Instrument the Oracle binary by entering the following command:

    $ pixie -output oracle.pixie -dirname dir -pids oracle_image
    
    

    In the preceding example, oracle_image is your original image and dir is the name of the directory into which the instrumented executable writes the profiling data files.


    Note:

    The -dirname option saves the oracle.Counts.pid files in the dir directory. Because these files are large and may be numerous depending on the workload, ensure that there is enough free disk space.

    This step also creates an oracle.Addrs file that is required later.

    The output of the pixie command may contain errors. You can safely ignore these errors.

  2. Shut down the database.

  3. Save the original image and create a symbolic link to the pixie image by entering the following commands:

    $ mv oracle oracle.orig
    $ ln -s oracle.pixie oracle
    
    
  4. Start up the database and run your workload.

    You cannot run as many users as you can with the standard executable because the pixie executable is larger and slower. As you use Oracle Database, several oracle.Counts.pid files are created, where pid is the process ID of the corresponding Oracle process. Keep track of the process ID of each Oracle process at which the optimization is aimed. These could be the shadow Oracle processes of the clients.

  5. Shut down the database.

  6. Create a symbolic link to replace the original executable by entering the following command:

    $ ln -s oracle.orig oracle
    
    
  7. If you can identify one oracle.Counts.pid file as representative of your workload, then perform step a. If you must merge several counts files together to better represent your workload, then perform step b.

    a. Ensure that the oracle.Addrs file created by the pixie command, the oracle.Counts.pid files, and the original Oracle executable are available.

      Use the process ID (pid) to pick a representative oracle.Counts.pid file and then copy it by entering the following command:

      $ cp oracle.Counts.pid oracle.Counts
      
      
    b. Use the prof utility to merge several oracle.Counts.pid files. See the prof man page for more information about this utility.

      If you are using the parallel query option, then merge the oracle.Counts.pid files generated by the query slaves and the query coordinator, which is the shadow Oracle process of the query-initiating client.

      If you are not using the parallel query option, then merge the oracle.Counts.pid files from the Oracle foreground processes that use the most memory.

      To merge the oracle.Counts.pid files, run the following command:

      $ prof -pixie -merge oracle.Counts $ORACLE_HOME/bin/oracle \
      oracle.Addrs oracle.Counts.pid1 oracle.Counts.pid2
      
      
  8. Ensure that the oracle.Addrs and oracle.Counts files are available in the current directory, then run Spike using the feedback information by entering the following command:

    $ spike oracle -fb oracle -o oracle.spike_fb -symbols_live
    
    

    The output of the spike command may contain errors. You can safely ignore these errors.

  9. Create a symbolic link to the new oracle image by entering the following command:

    $ ln -s oracle.spike_fb oracle
    
    
  10. Start up the database.