Submitted Date: 2/17/03
Title: How to account for memory usage on an HP-UX system
Document ID: KBRC00011764
Last Modified Date: 8/4/05



How to account for memory usage on an HP-UX system

PROBLEM

How to account for memory usage on an HP-UX system?

RESOLUTION

1. Obtain the tools that will make this analysis easier.

Start with glance (also known as GlancePlus).

glance - Fully supported HP product for monitoring system metrics.
Trial and licensed copies are available on the HP-UX application media.

2. Collect system information

2.1. Note system model:

# model

2.2. Identify the OS version, and whether it's 32-bit or 64-bit.

# uname -a
# getconf KERNEL_BITS
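
Illustrative output from a hypothetical 64-bit HP-UX 11.11 server follows;
actual values will differ per system:

# uname -a
HP-UX myhost B.11.11 U 9000/800 1054465287 unlimited-user license
# getconf KERNEL_BITS
64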

2.3. Determine the current swap space (paging area) configuration. Is there any
memory paged out to disk that needs to be accounted for?

# swapinfo -tm
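
The following swapinfo -tm output is from a hypothetical system, with all
values (in Mb) illustrative only. The "dev" USED column shows memory
actually paged out to a swap device; the "reserve" line is swap space
reserved but not yet written:

             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev        4096       0    4096    0%       0       -    1  /dev/vg00/lvol2
reserve       -     704    -704
memory     3083     512    2571   17%
total      7179    1216    5963   17%       -       0    -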

2.4. Note the value of some tunable kernel parameters that relate to memory
usage.

  nbuf & bufpages: If both are set to zero, then the system is dynamically
  sizing the buffer cache; otherwise the buffer cache is statically fixed in
  size.

  dbc_min_pct & dbc_max_pct: On systems using dynamic buffer cache sizing
  these parameters are used to set minimum and maximum buffer cache size,
  respectively, as a percentage of RAM.
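
On HP-UX 11.X these tunables might be checked from the command line with
kmtune, the kernel tuning tool used again later in this document; for
example:

# kmtune | egrep 'nbuf|bufpages|dbc_min_pct|dbc_max_pct'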

2.5. Are applications running as 32-bit, 64-bit, or both? Check with
application developers and DBAs for help in determining this.

3. Determine how much memory is presently in use that needs to be accounted
for.  Is there any memory paged out to disk as per swapinfo -tm? How
much?

Use one of the following procedures, with either Glance or with dmesg
and/or adb, to identify how much physical memory is in use.

3.1. Go to the memory report in glance.

In the text mode of Glance, select "Memory Report", or enter the shortcut
key "m".

Note: Shift-? shows all glance shortcut keys.

In the GUI form of Glance started with gpm, use the "Reports" pull-down
menu. Click on "Memory Info", then "Memory Report".

Note the physical memory (Phys Mem) and the free memory (Free Mem).

To determine the amount of RAM used, subtract free memory from physical memory.

If any memory is paged out to disk, then add that amount to the total of
[virtual] memory usage to be accounted for.

3.2. If the text or GUI form of Glance is not available, then use other tools
such as dmesg and adb.

The following examples of dmesg and adb are from a model 712/60
workstation where around 88 Mb of the 98 Mb of RAM is in use.

# dmesg | grep Physical
    Physical: 98304 Kbytes, lockable: 64752 Kbytes, available: 77708 Kbytes

# echo freemem/D | adb /stand/vmunix /dev/kmem
freemem:
freemem:        2500


# bc                <-- Start the command line calculator
2500*4096
10240000            <-- 2500 pages of 4096 bytes each is 10.24 Mb free
quit

If the message buffer has wrapped around and dmesg is no longer
displaying the amount of physical memory on the system, then the following
adb procedures from ITRC document UPERFKBAN00000726 might be used. The
output of each of these commands will be in 4096-byte pages.

For HP-UX 10.X
  Example:

  # echo  physmem/D | adb  /stand/vmunix /dev/kmem
  physmem:
  physmem:  24576

For HP-UX 11.X systems running on 32 bit architecture:
  Example:

  # echo phys_mem_pages/D | adb /stand/vmunix /dev/kmem
  phys_mem_pages:
  phys_mem_pages: 24576

For HP-UX 11.X systems running on 64 bit architecture:
  Example:

  # echo phys_mem_pages/D | adb64  /stand/vmunix /dev/mem
  phys_mem_pages:
  phys_mem_pages: 262144
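
As with the earlier examples, the command line calculator may be used to
convert pages to bytes; for example, for the 64-bit output above:

# bc
262144*4096
1073741824          <-- 262144 pages of 4096 bytes each is about 1 Gb of RAM
quit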

For the ongoing example, the glance memory report is showing around 2.9 Gb
of memory in use (Phys Mem less Free Mem), and no memory is paged out to
disk.

From this point on continue to keep a running total as the different types of
memory in use are identified.

4. From the same glance memory report, note how much memory the system/kernel
and buffer cache are using.

  1500  Mbyte buffer cache  (Buf Cache)
    94  Mbyte system memory (Sys Mem)

The amount of "Sys Mem" reported is static system memory plus certain types of
dynamic system memory, not including the buffer cache. For a more detailed
view of how the kernel is using system memory refer to the unsupported yet
very useful WTEC kmeminfo tool. But for now, the "Sys Mem" value is sufficient.

Note: 1500 Mb is an overly large buffer cache. Most likely, a gigabyte or
more of memory could be freed from the buffer cache for other uses.

So far 1594 of 2900 Mb has been accounted for in the ongoing example,
leaving 1306 Mb yet to account for.

4.1. If glance, gpm or kmeminfo are not on the system, then use the
following adb command to identify the amount of memory used in the buffer
cache. The output will be in 4096-byte pages.

# echo bufpages/D | adb /stand/vmunix /dev/kmem
bufpages:
bufpages:       1994

The command line calculator may be used to convert 4096-byte pages to bytes:

# bc
1994*4096
8167424
quit

5. Look at the shared memory usage on the system.

At this point in the example 1594 Mb have been accounted for (buffer cache +
system memory) out of 2900 Mb, with 1306 Mb yet to be accounted for.

It is not uncommon to see significant amounts of shared memory in use on a
Unix system, especially if a database is running. The types of shared memory
in use on a system may be shared memory segments, shared memory mapped files
(as opposed to the private memory mapped files found in a process's data
segment), and shared libraries.

`ipcs -ma` might be run to see what shared memory segments are in use. Then
either total the SEGSZ column, or sort the ipcs output by segment
size.

For example, to simply list shared memory segments from largest to smallest
(the 10th column is SEGSZ):

# ipcs -ma | sort -rnk10 | more
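
Alternatively, to total the SEGSZ column (in bytes) in one step, assuming
the standard ipcs -ma layout where shared memory lines begin with "m" and
SEGSZ is the 10th column:

# ipcs -ma | awk '$1 == "m" {total += $10} END {print total}'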

The `ipcs -ma` command just shows shared memory segments. To see a listing of
all shared memory types in one report, obtain a copy of an unsupported yet
very useful WTEC utility called shminfo.

By default, shminfo looks at the 32-bit global window. Using the -64bit option
tells shminfo to look at the 64-bit global window.

The shminfo output will identify shared memory segments as type "SHMEM", while
shared memory mapped files and shared libraries will be identified as
type "OTHER".

The shminfo output identifies the location in memory of each shared memory
type, including quadrant number.

Look primarily in quadrants 3 (Q3) and 4 (Q4). Shared memory might also be
found in quadrant 2 (Q2), but only for specially compiled processes
(SHMEM_MAGIC executable types).

Add together the sizes of all shared memory segments (type SHMEM) and of
the shared libraries and memory mapped files (type OTHER) that are found in
Q2, Q3 or Q4 of the global window.

In the following sample shminfo excerpt, note that global Q2 and Q3 are
completely unused, while global Q4 has 1364 Kbytes (1.364 Mb) of type SHMEM
and 8720 Kbytes (8.72 Mb) of type OTHER.


Global 32-bit shared quadrants:
===============================
        Space      Start        End  Kbytes Usage
Q2 0x000009bb.0x40000000-0x7ffe6000 1048472 FREE
Q3 0x00000000.0x80000000-0xc0000000 1048576 FREE
Q4 0x00000000.0xc0000000-0xc05b9000    5860 OTHER
Q4 0x00000000.0xc05b9000-0xc06f3000    1256 SHMEM id=0
Q4 0x00000000.0xc06f3000-0xc0708000      84 OTHER
Q4 0x00000000.0xc0708000-0xc0709000       4 SHMEM id=1
Q4 0x00000000.0xc0709000-0xc0711000      32 SHMEM id=2 locked
Q4 0x00000000.0xc0711000-0xc0713000       8 SHMEM id=3
Q4 0x00000000.0xc0713000-0xc0884000    1476 OTHER
Q4 0x00000000.0xc0884000-0xc0894000      64 SHMEM id=4
Q4 0x00000000.0xc0894000-0xc08d9000     276 OTHER
Q4 0x00000000.0xc08d9000-0xc08e2000      36 FREE
Q4 0x00000000.0xc08e2000-0xc08fe000     112 OTHER
Q4 0x00000000.0xc08fe000-0xc0901000      12 FREE
Q4 0x00000000.0xc0901000-0xc09e5000     912 OTHER
Q4 0x00000000.0xc09e5000-0xc09eb000      24 FREE
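
If such a report were saved to a file (hypothetically named shminfo.out
here), the SHMEM and OTHER Kbytes could be totaled with awk, assuming the
column layout shown above:

# awk '$4 == "SHMEM" || $4 == "OTHER" {total += $3} END {print total " Kbytes"}' shminfo.out
10084 Kbytes        <-- 1364 Kbytes SHMEM + 8720 Kbytes OTHER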



If memory windows are in use on the system then also add together the shared
memory found in the private quadrants Q2 and Q3 of those windows (a separate
section of shminfo output).

In the following sample shminfo excerpt note that private Q2 and Q3 quadrants
are completely unused in two available memory windows. There are two memory
windows available on this example system because the max_mem_window kernel
tunable was set to "2" (allowing two memory windows in addition to the global
window).

# kmtune | grep win
max_mem_window       2

Here's an excerpt from shminfo showing memory windows information.


Shared space from Window index 1 (unused):      <-- Here's the first window
       Space      Start        End  Kbytes Usage
Q2 0x00007194.0x40000000-0x7ffeffff 1048512 FREE
Q3 0x00001973.0x80000000-0xbfffffff 1048576 FREE

Shared space from Window index 2 (unused):      <-- Here's the second window
       Space      Start        End  Kbytes Usage
Q2 0x00003156.0x40000000-0x7ffeffff 1048512 FREE
Q3 0x0000094e.0x80000000-0xbfffffff 1048576 FREE


In this running example, suppose that shminfo showed a total of 1000 Mb of
type SHMEM in use, and 50 Mb of type OTHER. Add that to the running total as
follows:

2900  Mbyte memory used

1500  Mbyte buffer cache
  94  Mbyte system memory
1000  Mbyte shared memory segments (type SHMEM in shminfo)
  50  Mbyte memory mapped files and shared libraries (type OTHER in shminfo)
====
2644

So far accounted for are 2644 out of 2900 Mb. For the remaining 256 Mb look
next at process memory.

6. See how the individual processes running on the system use text, data and
stack areas of memory.

The ps command may be used to get a decent ballpark total. The unsupported HP
WTEC procsize utility might also be used.

Examples of ps and procsize command usage are shown below.

6.1. The SZ column in ps -el output indicates, in 4096-byte pages, the
total amount of memory used in each process's text, data and stack areas.

Running ps -el | sort -rnk10 will list all processes on the system,
sorted by the SZ column.

The following awk script (usage examples below) simplifies the task of
totaling all values in the SZ column:

  #!/usr/bin/awk -f
  # Sum the first field of every input line, then print the total.
       {
       total = total + $1
       }
  END  {
       print total
       }


If this script were saved as, for example, a file called "summ" with execute
permissions, then it could be used as follows to total all values found in
column 10 of ps -el output (the SZ column). The result is in 4096-byte
pages.

One example (column 10 is the SZ data):

# ps -el | awk '{print $10}' | ./summ
61048

Multiply the result by the 4096-byte page size.
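
For example, using the command line calculator:

# bc
61048*4096
250052608           <-- 61048 pages of 4096 bytes each is about 250 Mb
quit

Note: The column extraction and totaling might equivalently be done with a
single awk command, with no separate "summ" script needed:

# ps -el | awk '{total += $10} END {print total}'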

Another example:

# ps -el | awk '{print $10}' | ./summ
4.81781 E+06           <-- That's 4817810 pages (decimal point moved right six
                           places for "E+06") multiplied by a 4096-byte page
                           size, or about 19.73 Gb


19.73 Gb is a large number, i.e., it might not be an acceptable ballpark
figure, so look further:

# ps -el | wc -l
810

That's 810 total processes on the system using an average of about 24 Mb
apiece. This is probably not all private memory, so look further with Glance;
e.g., there might be 700 instances of a process each sharing the same 2 Mb
data segment. If many instances of the same process were each using a large
amount of private memory, or a larger amount than they had previously been
known to use, then it would be appropriate to ask the developer or vendor to
explain the resource requirements of the related application. Or perhaps the
per-process memory requirements are normal and there are simply many more
users running many more instances of the process.
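
The per-process average may be checked with the command line calculator
(using decimal megabytes, as elsewhere in this document):

# bc
4817810*4096/810/1000000
24                  <-- about 24 Mb per process
quit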

Either the glance process memory region report or the unsupported WTEC
procsize utility might be used (both discussed below) to determine which
memory regions of each process are shared and which are private. Memory that
is shared by more than one process should only be counted once in the
running total.

6.2. To use the Glance process memory regions report:

Note the PID of a process in question, e.g., one that has many running
instances.

Start glance.
Click on "Select Process" (or use the "s" select shortcut key).
When prompted, enter the PID.
Click on "Memory Regions" (or use the "M" process memory regions shortcut key).

Here's an excerpt from a glance process memory region listing:

TEXT  /Shared   na   404kb   444kb       na    <-- Shared, so only count once
DATA  /Priv     na   2.0mb   2.0mb       na    <-- Private, so add to the running total
STACK /Priv     na    40kb   120kb       na    <-- Private, so add to the running total

If the SZ numbers on the system total much more than the total to be accounted
for, then much of the memory listed in the SZ column is probably shared
between processes. In that case, the procsize utility discussed below could be
used to better identify process memory usage.

6.3. Here's an example of using the unsupported HP WTEC procsize utility.

Note: The units displayed in the output of procsize are 4096-byte pages.


The following command line would show memory regions for all processes, with
shared regions only shown once. Note that it is the Resident Set Size (RSS),
which is what is actually in memory, that is of interest, not the Virtual Set
Size (VSS). Both the RSS and VSS data could be displayed if both -R and -V
are specified. The default is VSS.

# ./procsize -Rfn

"-R" says to display the RSS (Resident Set Size) information, rather than the
     default VSS (Virtual Set Size).

"-f" says to report on every process.

"-n" says to only count a region once, for the first process that uses it. The
     default action is to report shared regions each time they are found. You
     might run procsize with and without this flag towards gaining an
     understanding of what regions are shared, and not shared.

Here's a sample excerpt of procsize output. Note that procsize itself is
using a total of around 9.9 Mb (2414 pages) of memory. The "r" in column 3
indicates that RSS values are being displayed.

# ./procsize -Rfn

libp4 (6.93): Opening /stand/vmunix /dev/kmem

regions set to 1000
hpux 11.00 32 bit in Narrow mode
nproc=276
  pid Comm             UAREA   TEXT   DATA  STACK  SHMEM     IO   MMAP    Total
    0 swapper        r     0      0      0      0      0      0      0        0
    1 init           r     4     61     24     10      0      0      0      100
    2 vhand          r     4      0      0      0      0      0      0        4
...
 5237 gpm            r     4    218   1464     12      0      0    187     1885
23828 dtsession      r     4     17     64      2      0      0     50      137
29565 procsize       r     4     49    328      6      0      0   2027     2414
23876 dtfile         r     0      0      0      0      0      0      0        0
 5233 dtexec         r     4      4      1      2      0      0      1       12
23880 dtexec         r     4      0      0      0      0      0      0        4
23885 netscape       r     4     47     53      1      0      0      8      113

If the numbers on the system do not come close to the amount of memory being
utilized, then there are other tools such as the kmeminfo kernel memory tool
to help look further. More on that below.

From the current example, say that the cumulative total of the ps command SZ
column came out to 61035 pages, or about 250 Mb:

2900  Mbyte memory used

1500  Mbyte buffer cache
  94  Mbyte system memory
1000  Mbyte shared memory
  40  Mbyte memory mapped files
  10  Mbyte shared libraries
 250  Mbyte process text+data+stack
====
2894


That's 2894 of 2900 Mb accounted for. The buffer cache seems oversized at
1500 Mb, and might well be reduced in size by a Gb or more. The 1000 Mb of
shared memory may not be out of line at all, especially if there is a
database running. And if a database were running, then consider that any
memory freed by limiting the buffer cache size might be better used as
shared memory by the database.

Perhaps a large portion of that 250 Mb of process text+data+stack is
actually shared; e.g., perhaps it is high by as much as 200 Mb, which, when
combined with the 6 Mb yet to be accounted for, might indicate that the
kernel is using 206 Mb, not an excessive amount.


7. Look at how the HP-UX kernel is utilizing memory.

Say that in the ongoing example it was found that there was yet another 206 Mb
of used memory to account for. The appropriate tool to use would be kmeminfo.

When reaching the point of running kmeminfo, it would be a good idea to be
working with the HP Response Center to fully understand the kmeminfo data.

In the current ongoing example, 206 Mb being used by the kernel may not be out
of line. And depending upon how the system is being used, several hundred Mb
might not be out of line.

"Used by the kernel" generally means memory that is handled by the kernel as
it manages user resources, and works to satisfy the memory requests of the
many processes running on the system." Some kernel memory is statically sized
at boot time, and some dynamically sized after boot time. Different processes
tend to request memory in different denomintations ie make use of different
kernel memory "buckets" or "arena's".

For example, starting at HP-UX 11.0 JFS tends to make use of memory in 2 Kb
pieces - and so the kernel might build up an accumulation of 4096-byte pages
of memory specifically for satisfying 2048-byte requests. At HP-UX 10.20 JFS
tended to make use of memory in 1 Kb pieces, and kmeminfo would reflect that
activity in the "1 Kb kernel bucket".

Starting with HP-UX 11i, default kmeminfo output displays information about
kernel memory "arenas" rather than "buckets". The memory activity of
different processes on the system is reflected in the growth and shrinkage
of different arenas.

Memory makes its way to the arenas by way of the Super Page Pool. A super
page remains in the Super Page Pool as long as any portion of it is in use
by an arena.

Ideally, the amount of memory seen in any kernel bucket/arena or in the
Super Page Pool should be minimal. But it is important to understand that
seeing a quantity of memory in a particular bucket/arena does not by itself
indicate a memory leak. A memory leak is where usage continues to grow over
time without ever decreasing.

Excessive use of memory in particular buckets/arenas might indicate that
kernel or application tuning needs to take place.

Excessive use of memory in the Super Page Pool might indicate a problem that's
addressed through HP-UX patching.

Contact the HP Response Center for any concerns or questions in these areas.

The following kmeminfo excerpt (values in 4096-byte pages) shows a system
with around 268 Mb of RAM. 232 Mb of that memory is in use: 51 Mb by user
processes and 184 Mb by the kernel. What is used "by the kernel" is broken
out as 134 Mb of buffer cache, 27 Mb of dynamic data (e.g., kernel buckets),
and 22 Mb of static structures (e.g., system tables).


Physical memory usage summary (in pages):

Physmem        =   65536  Available physical memory:
  Freemem      =    8765    Free physical memory
  Used         =   56771    Used physical memory:
    System     =   45015      by kernel:
      Static   =    5373        for text and static data
      Dynamic  =    6730        for dynamic data
      Bufcache =   32768        for file-system buffer cache
      Eqmem    =      16        for equiv. mapped page pool
      SCmem    =     128        for system critical page pool
    User       =   12490      by user processes
      Uarea    =     460        for thread uareas
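
Converting a few of those page counts with the command line calculator,
again using decimal megabytes:

# bc
65536*4096/1000000
268                 <-- total physical memory, about 268 Mb
45015*4096/1000000
184                 <-- used by the kernel, about 184 Mb
quit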


From another excerpt of that same kmeminfo output, usage of the 27 Mb of
dynamic kernel memory is broken out as follows. Almost 11 Mb is used in
kernel buckets (2671 pages * 4096-byte page size), with the largest
accumulation, about 2.7 Mb, in bucket 10 (656 pages * 4096-byte page size),
the 1 Kb bucket.


Dynamic memory usage summary (in pages):

Dynamic        =    6730  Kernel dynamic data (sysmap):
  MALLOC       =    2671    memory buckets:
    bucket[ 5] =     179      size    32 bytes
    bucket[ 6] =      25      size    64 bytes
    bucket[ 7] =     282      size   128 bytes
    bucket[ 8] =     164      size   256 bytes
    bucket[ 9] =     190      size   512 bytes
    bucket[10] =     656      size  1024 bytes
    bucket[11] =     340      size  2048 bytes
    bucket[12] =     387      size  4096 bytes
    bucket[13] =      30      size     2 pages
    bucket[14] =      21      size     3 pages
    bucket[15] =      20      size     4 pages
    bucket[16] =       5      size     5 pages
    bucket[17] =      18      size     6 pages
    bucket[18] =       0      size     7 pages
    bucket[19] =      56      size     8 pages
    bucket[20] =     298      size >   8 pages
  Kalloc       =    4003    kalloc()
  Eqalloc      =      45    eqalloc()
  Reserved     =      11    Reserved pools
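
To verify the page-to-byte conversions above with the command line
calculator:

# bc
2671*4096
10940416            <-- almost 11 Mb total in kernel memory buckets
656*4096
2686976             <-- about 2.7 Mb in bucket[10], the 1 Kb bucket
quit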


Note that the 134 Mb buffer cache is half of RAM. It looks like the
dbc_max_pct kernel tunable that controls the upper limit of dynamic buffer
cache growth is at its default value of 50 [percent]. On this small-memory
system that may be OK. Note that there is about 36 Mb of free memory. If
memory were needed elsewhere, e.g., in the data segments of user processes,
then perhaps the buffer cache could be made smaller. Before making such a
change, be sure to monitor the buffer cache usage over time, e.g., with
sar -b or in the glance disk report (under the percent column of the "Local"
"Logl Rds" and "Logl Wts" lines). Or perhaps just letting the system page
out to disk is not a problem. After all, the kernel will work to page out
only those memory pages that have not been recently used.
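
For example, to sample buffer cache activity with sar ten times at
five-second intervals before deciding to shrink the cache:

# sar -b 5 10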

8. Still unable to account for all memory utilization on the system?

There are more intrusive methods of identifying how memory is being used by
the kernel. If this point is reached and it is still not clear exactly where
memory is being used, the HP Response Center should be involved.

ALT KEYWORDS

memory usage tools