This appendix describes the command options, or operators, used in the Oracle interMedia ("interMedia") process( ) and processCopy( ) methods.
The available operators fall into three broad categories, each described in its own section: image formatting operators, image processing operators, and format-specific operators.
Section D.1, "Common Concepts," describes concepts common to these operators and how they interact when combined.
Note: Information about supported image file formats and image compression formats is presented in Appendix B. See Table B-1, Table B-2, and Table B-3, in particular.
This section describes concepts common to all the image operators and the process( ) and processCopy( ) methods.
The process( ) and processCopy( ) methods operate on one image, called the source image, and produce another image, called the destination image. In the case of the process( ) method, the destination image is written into the same storage space as the source image, replacing it permanently. For the processCopy( ) method, the storage for the destination image is distinct from the storage for the source image.
The process( ) and processCopy( ) methods are functionally identical except that the process( ) method writes its output into the same BLOB from which it takes its input, while the processCopy( ) method writes its output into a different BLOB. Their command string options are identical, and no distinction is drawn between them.
For the rest of this appendix, the names process( ) and processCopy( ) are used interchangeably, and the use of the name process( ) implies both process( ) and processCopy( ) unless explicitly noted otherwise.
Unless otherwise noted, the process( ) operators appear in the command string in the form <operator> = <value>. The right-hand side of the expression is called the value of the operator, and determines how the operator will be applied.
In general, any number of operators can be combined in the command string passed into the process( ) method if the combination makes sense. However, certain operators are supported only if other operators are present or if other conditions are met. For example, the compressionQuality operator is supported only if the compression format of the destination image is JPEG. Other operators require that the source or destination image be a Raw Pixel or foreign image.
The flexibility in combining operators allows a single operation to change the format of an image, reduce or increase the number of colors, compress the data, and cut or scale the resulting image. This is highly preferable to making multiple calls to do each of these operations sequentially.
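For example, the following PL/SQL sketch (the photos table, its photo column of type ORDSYS.ORDImage, and the id value are hypothetical) converts an image to the JFIF file format, forces 24-bit RGB content, applies medium compression quality, and scales the result to fit within a 100 x 100 pixel box, all in a single call:
DECLARE
  img ORDSYS.ORDImage;
BEGIN
  -- Lock the row so the processed image can be written back.
  SELECT photo INTO img FROM photos WHERE id = 1 FOR UPDATE;
  -- Combine formatting, compression, and scaling operators in one command string.
  img.process('fileFormat=JFIF contentFormat=24bitRGB compressionQuality=MEDCOMP maxScale=100 100');
  UPDATE photos SET photo = img WHERE id = 1;
  COMMIT;
END;
/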
At the most abstract level, the image formatting operators are used to change the layout of the data within the image storage. They do not change the semantic content of the image, and unless the source image contains more information than the destination image can store, they do not change the visual appearance of the image at all. Examples of a source image with more information than the destination image can store are:
Converting a 24-bit image to an 8-bit image (too many bits per pixel)
Converting a color image to a grayscale or monochrome image (too many color planes)
Converting an uncompressed image, or an image stored in a lossless compression format, to a lossy compression format (too much detail)
The fileFormat operator determines the image file type, or format, of the output image. The value of this operator is a 4-character code, which is a mnemonic for the new file format name. The list of allowable values for the image fileFormat operator is shown in Table 5-1 in Chapter 5. Appendix B contains basic information about each file format, including its mnemonic (file format), typical file extension, allowable compression and content formats, and other notable features.
The value given to the fileFormat operator is the single most important detail when specifying the output for process( ). This value determines the range of allowable content and compression formats, whether or not compression quality will be useful, and whether or not the format-specific operators will be useful.
If the fileFormat operator is not used in the process( ) command string, interMedia will determine the file format of the source image and use that as the default file format value. If the file format of the source image does not support output, then an error will occur. If the source image is a foreign image, then the output image will be written as Raw Pixel.
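For example, a call such as the following (using a hypothetical ORDImage object named image1, as in the other examples in this appendix) converts an image to the JFIF file format:
image1.process('fileFormat=JFIF');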
The contentFormat operator determines the format of the image content. The content means the number of colors supported by the image and the manner in which they are supported. Depending on which file format is used to store the output image, some or most of the content formats may not be supported.
Image content formats fall into two broad categories, as follows:
In direct color images, the pixel data indicate color values directly, without reference to any additional information. This category includes monochrome images (pure black and white), grayscale images (shades of gray) and RGB (true color) images.
In direct color images, the bit depth of the image indicates the size of the pixel data: monochrome images are implicitly 1 bit deep; grayscale images are 8 bits deep, or 16 bits deep if an optional 8-bit alpha channel is present; and RGB images are 24 bits deep (usually 8 bits each for red, green, and blue), or 32 bits deep if an optional 8-bit alpha channel is present.
LUT images (also referred to as indexed color images) store possible color values in a table of possible color combinations, and pixel data then indicate which possible color from the table is to be used.
The bit depth of a LUT image indicates both the size of the pixel data and the number of possible colors in the lookup table. A 1-bit LUT image would have 1-bit pixels and 2 possible colors (2^1), a 4-bit image would have 16 (2^4) possible colors, and an 8-bit image would have 256 (2^8) possible colors. Typically, the color table uses 24 bits to represent the possible colors, so although only 16 colors might be available in an image, they could each be any of up to 16 million possible RGB combinations. If the LUT image supports an alpha channel, then the table will usually use 32 bits to represent each color.
If the contentFormat operator is not passed to the process( ) method, then interMedia attempts to duplicate the content format of the source image if it is supported by the file format of the destination image. Otherwise, a default content format is chosen depending on the destination file format.
The following four figures illustrate the syntax and options for the contentFormat operator.
Figure D-1 illustrates the contentFormat syntax that you use to convert an image to monochrome.
For finer control of the image output when you convert an image to monochrome, use the quantize operator with the ERRORDIFFUSION, ORDEREDDITHER, or THRESHOLD value. See Section D.3.7 for information about the quantize operator.
Figure D-2 illustrates the contentFormat syntax that you use to convert an image to LUT format.
The bit depth portion of the contentFormat syntax determines how many colors will be present in the LUT of the final image, as follows:
An 8-bit image can contain up to 256 colors.
A 4-bit image can contain up to 16 colors.
A 1-bit image can contain only 2 colors; however, each of these colors may be any 24-bit RGB value.
The color portion of the contentFormat syntax controls whether the resulting image will be composed of RGB triplets or grayscale values. There is no difference between GRAY and GREY, and the optional SCALE suffix has no functional effect.
The A and T portion of the contentFormat syntax provides the ability to preserve alpha (A) or transparency (T) values in an image. You cannot use the transparency syntax to reduce a 32-bit image to an 8-bit image with alpha or transparency, but you can use it to preserve alpha or transparency when converting an image to a different file format. You can also use it to convert a transparency effect into a full alpha effect (however, only the transparent index will have alpha in the output).
For finer control of the image output when you convert a direct color image to a LUT color image, use the quantize operator with the ERRORDIFFUSION, ORDEREDDITHER, or MEDIANCUT value. See Section D.3.7 for information about the quantize operator.
Figure D-3 illustrates the contentFormat syntax that you use to convert an image to grayscale.
The bit depth portion of the contentFormat syntax determines the overall type of the grayscale image: an 8-bit grayscale image cannot have an alpha channel, while a 16-bit grayscale image currently must have an alpha channel. In either case, the DRCT specification is optional, because any non-LUT image will always be direct color. There is no difference between GRAY and GREY, and the optional SCALE suffix has no functional effect. The alpha specification (A) is required for 16-bit grayscale output, and can be used either to preserve an existing alpha channel in an image that is already grayscale or to reduce a 32-bit RGBA image to grayscale with alpha.
The quantize operator has no effect on conversions to grayscale.
Figure D-4 illustrates the contentFormat syntax that you use to convert an image to direct color.
The bit depth portion of the contentFormat syntax determines the overall type of the direct RGB image: a 24-bit RGB image will not have an alpha channel, while a 32-bit RGB image must always have an alpha channel. In either case, the DRCT specification is optional because any non-LUT image will always be direct color. The alpha specification (A) is required for 32-bit RGB output; it preserves an existing alpha channel in a 32-bit or 64-bit RGB image, and it preserves the alpha channel in a 16-bit grayscale image that is being promoted to RGB.
The optional pixel chunking syntax allows images to be forced to band-interleaved-by-pixel (BIP, also known as chunky), band-interleaved-by-line (BIL), or band-interleaved-by-plane (BSQ, also known as band-sequential or planar). This portion of the syntax is supported only for RPIX formats.
The quantize operator is not used for conversions to direct color.
The following list of examples provides some common uses of the contentFormat operator:
To specify that the output image be monochrome (black and white only):
image1.process('contentFormat=monochrome');
To specify that the output image be an RGB lookup table (indexed color), either of the following is valid:
image1.process('contentFormat=8bitlutrgb');
image1.process('contentFormat=8bitlut');
To specify that the output image be a grayscale lookup table (indexed color):
image1.process('contentFormat=8bitlutgray');
To specify that the output image be grayscale, either of the following is valid:
image1.process('contentFormat=8bitgray');
image1.process('contentFormat=8bitgreyscale');
To specify that the output image be direct color, either of the following is valid:
image1.process('contentFormat=24bitrgb');
image1.process('contentFormat=24bitdrctrgb');
To specify that the output image be direct color and band sequential:
image1.process('contentFormat=24bitbsqrgb');
The compressionFormat operator determines the compression algorithm used to compress the image data. The range of supported compression formats depends heavily upon the file format of the output image. Some file formats support only a single compression format, and some compression formats are supported only by one file format.
The supported values for the compressionFormat operator are listed in Table 5-1 in Chapter 5.
All compression formats that include RLE in their mnemonic are run-length encoding compression schemes, and work well only for images that contain large areas of identical color. The PACKBITS compression type is a run-length encoding scheme that originated on the Macintosh system but is supported by other systems; its limitations are similar to those of other run-length encoding compression formats. Compression formats whose mnemonics include LZW or HUFFMAN are more complex compression schemes that examine the image for redundant information and are useful for a broader class of images. FAX3 and FAX4 are the CCITT Group 3 and Group 4 standards for compressing facsimile data and are useful only for monochrome images. All the compression formats mentioned in this paragraph are lossless compression schemes, which means that compressing the image does not discard data. An image compressed into a lossless format and then decompressed will look the same as the original image.
The JPEG compression format is a special case. Developed to compress photographic images, the JPEG format is a lossy format, which means that it achieves compression by discarding unimportant image details. Because this format is optimized for compressing photographic and similarly noisy images, it often produces poor results for other image types, such as line art and images with large areas of similar color. JPEG is the only lossy compression scheme currently supported by interMedia.
The DEFLATE compression type is ZIP deflate and is used by the PNG image file format. The DEFLATE-ADAM7 compression type is interlaced ZIP deflate and is also used by the PNG image file format. The ASCII compression type is ASCII encoding and the RAW compression type is binary encoding; both are used by the PNM image file formats.
If the compressionFormat operator is not specified, interMedia uses a default compression format. When the file format of the destination image differs from that of the source image, the default is chosen based on the destination image file format. In either case, the default is often "None" or "No Compression."
The compressionQuality operator determines the relative quality of an image compressed with a lossy compression format. This operator has no meaning for lossless compression formats, and therefore is not currently supported for any compression format except JPEG. File formats that support JPEG compression include JFIF, TIFF, and PICT.
The compressionQuality operator accepts five values, ranging from the most compression (lowest visual quality) to the least compression (highest visual quality): MAXCOMPRATIO, HIGHCOMP, MEDCOMP, LOWCOMP, and MAXINTEGRITY. Using the MAXCOMPRATIO value results in the smallest amount of image data, but may introduce visible aberrations. Using the MAXINTEGRITY value keeps the resulting image more faithful to the original, but requires more space to store. The compressionQuality operator also accepts integer values between 0 (lowest quality) and 100 (highest quality) for JFIF and TIFF file formats only.
The default values for the compressionQuality operator are LOWCOMP for the JFIF and TIFF file formats and MAXINTEGRITY for the PICT file format.
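For example, either of the following hypothetical calls requests heavily compressed JFIF output; the second form uses a numeric quality value, which is accepted for the JFIF and TIFF file formats only:
image1.process('fileFormat=JFIF compressionQuality=MAXCOMPRATIO');
image1.process('fileFormat=JFIF compressionQuality=25');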
The image processing operators supported by interMedia directly change the way the image looks on the display. The operators supported by interMedia represent only a fraction of all possible image processing operations, and are not intended for users performing intricate image analysis.
The contrast operator is used to adjust contrast. You can adjust contrast by percentage or by upper and lower bound, as follows:
By percentage
To adjust contrast by percentage, the syntax is as follows:
contrast = <percent1> [<percent2> <percent3>]
Either one or three values may be supplied when adjusting contrast by percentage. If one value is passed, then it is applied to all color components (either gray, or red, green, and blue) of the input image. If three values are specified, then percent1 is applied to the red component of the image, percent2 to the green component, and percent3 to the blue component.
The percent values are floating-point numbers that indicate the percentage of the input pixel values that are mapped onto the full available output range of the image; the remaining input values are forced to either extreme (zero or full intensity). For example, a percentage of 60 indicates that the middle 60% of the input range is to be mapped to the full output range of the color space, while the lower 20% of the input range is forced to zero intensity (black for a grayscale image) and the upper 20% of the input range is forced to full intensity (white for a grayscale image).
By upper and lower bound
To adjust contrast by lower and upper bound, the syntax is as follows:
contrast = <lower1> <upper1> [<lower2> <upper2> <lower3> <upper3>]
The lower and upper values are integers that indicate the lower and upper bounds of the input pixel values that are to be mapped to the full output range. Values below the lower bound are forced to zero intensity and values above the upper bound are forced to full intensity. For 8-bit grayscale and 24-bit RGB images, these bounds may range from 0 to 255.
Two or six values can be specified when using this contrast mode. If two values are specified, then those bounds are used for all color components of the image. If six values are specified, then lower1 and upper1 are applied to the red component of the image, lower2 and upper2 are applied to the green component, and lower3 and upper3 are applied to the blue component.
Note: Enclose all floating-point arguments with double quotation marks ("") to ensure correct Globalization Support interpretation.
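For example, the following hypothetical calls map the middle 60% of the input range to the full output range (the percentage is quoted because it is a floating-point argument), and map input values between 20 and 220 to the full output range, respectively:
image1.process('contrast="60"');
image1.process('contrast=20 220');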
The cut operator is used to create a subset of the original image. The values supplied to the cut operator are the origin coordinates (x,y) of the cut window in the source image, and the width and height of the cut window in pixels. This operator is applied before any scaling that is requested.
If the cut operator is not supplied, the entire source image is used.
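For example, the following hypothetical call cuts a window 100 pixels wide and 100 pixels high whose origin is at pixel (10,10) of the source image:
image1.process('cut=10 10 100 100');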
The flip operator places an image's scanlines in reverse order such that the scanlines are swapped from top to bottom. This operator accepts no values.
The gamma operator corrects the gamma (brightness) of an image. This operator accepts either one or three floating-point values using the following syntax:
gamma = <gamma1> [<gamma2> <gamma3>]
The values gamma1, gamma2, and gamma3 are the denominators of the gamma exponent applied to the input image. If only one value is specified, then that value is applied to all color components (either gray, or red, green, and blue) of the input image. If three values are specified, then gamma1 is applied to the red component of the image, gamma2 to the green component, and gamma3 to the blue component.
To brighten an image, specify gamma values greater than 1.0; typical values are in the range 1.0 to 2.5. To darken an image, specify gamma values smaller than 1.0 (but larger than 0).
Note: Enclose all floating-point arguments with double quotation marks ("") to ensure correct Globalization Support interpretation.
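For example, the following hypothetical call brightens an image by applying a gamma value of 1.8 to all color components (the value is quoted because it is a floating-point argument):
image1.process('gamma="1.8"');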
The mirror operator places an image's scanlines in inverse order such that the pixel columns are swapped from left to right. This operator accepts no values.
The page operator allows page selection from a multipage input image. The value specifies the input page that should be used as the source image for the process operation. The first page is numbered 0, the second page is 1, and so on.
Currently, only TIFF images support page selection.
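For example, the following hypothetical call selects the third page of a multipage TIFF image and writes it as a JFIF image:
image1.process('page=2 fileFormat=JFIF');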
The quantize operator affects the outcome of the contentFormat operator when you change the bit depth of an image. When an explicit change in content format is requested, or when the content format has to be changed due to other requested operations (such as scaling a LUT image, which requires promotion to direct color before scaling, or converting to a file format that only supports LUT images), the quantize operator indicates how any resulting quantization (reduction in number of colors) will be performed.
The value of the quantize operator can be any one of the following, referred to as quantizers:
ERRORDIFFUSION
You can use the ERRORDIFFUSION quantizer in two ways: to reduce an 8-bit grayscale image to a monochrome image, or to reduce a 24-bit RGB image to an 8-bit LUT image.
The ERRORDIFFUSION quantizer retains the error resulting from the quantization of an existing pixel and diffuses that error among neighboring pixels. This quantization uses a fixed color table. The result looks good for most photographic images, but creates objectionable speckling artifacts for synthetic images. The artifacts are due to the fixed color lookup table used by the existing quantization method, which is statistically well balanced across the entire RGB color space, but is often a poor match for an image that contains many intensities of just a few colors. The result is more accurate than when the ORDEREDDITHER quantizer is specified; however, it is returned more slowly.
This is the default quantization value.
ORDEREDDITHER
You can use the ORDEREDDITHER quantizer in two ways: to reduce an 8-bit grayscale image to a monochrome image, or to reduce a 24-bit RGB image to an 8-bit LUT image.
The ORDEREDDITHER quantizer finds the closest color match for each pixel in a fixed color table and then dithers the result to minimize the more obvious effects of color substitution. The result is satisfactory for most images but fine details can be lost in the dithering process. Although the result is not as accurate as when the ERRORDIFFUSION quantizer is specified, it is returned more quickly.
THRESHOLD <threshold>
The THRESHOLD quantizer reduces 8-bit grayscale images to monochrome images.
The THRESHOLD quantizer assigns a monochrome output value (black or white) to a pixel by comparing that pixel's grayscale value to the threshold argument that is supplied along with the quantizer. If the input grayscale value is greater than or equal to the supplied threshold argument, then the output is white, otherwise the output is black. For an 8-bit grayscale or 24-bit RGB image, a grayscale value of 255 denotes white, while a grayscale value of 0 denotes black.
For example, a threshold argument of 128 will cause any input value less than 128 to become black, while the remainder of the image will become white. A threshold value of 0 will cause the entire image to be white, and a value of 256 will cause the entire image to be black (for an 8-bit grayscale or a 24-bit RGB input image).
The THRESHOLD quantizer is most appropriately applied to synthetic images. The ERRORDIFFUSION and ORDEREDDITHER quantizers will produce better output when converting photographic images to monochrome, but will result in fuzziness in synthetic images; using the THRESHOLD quantizer will eliminate this fuzziness at the cost of the ability to discriminate between various intensities in the input image.
MEDIANCUT [optional sampling rate]
The MEDIANCUT quantizer reduces 24-bit RGB images to 8-bit LUT images.
The MEDIANCUT quantizer generates a more optimal color table than the ERRORDIFFUSION or ORDEREDDITHER quantizers for some images, including most synthetic images, by choosing colors according to their popularity in the original image. However, the analysis of the original image is time consuming for large images, and some photographic images may look better when quantized using ERRORDIFFUSION or ORDEREDDITHER.
The MEDIANCUT quantizer accepts an optional integer argument that specifies the sampling rate to be used when scanning the input image to collect statistics on color use. The default value for this quantizer argument is 1, meaning that every input pixel is examined, but any value greater than 1 may be specified. For a sampling rate n greater than 1, 1 pixel out of every n pixels is examined.
The following examples demonstrate how values and arguments are specified for the quantize operator:
image.process('contentformat=8bitlutrgb quantize = mediancut 2');
image.process('contentformat=monochrome quantize = threshold 128');
The rotate operator rotates an image within the image plane by the angle specified.
The value specified must be a floating-point number. A positive value specifies a clockwise rotation. A negative value for the operator specifies a counter-clockwise rotation. After the rotation, the image content is translated to an origin of 0,0 and the pixels not covered by the rotated image footprint are filled with the resulting colorspace black value.
Rotation values of 90, 180, and 270 use special code that quickly copies pixels without geometrically projecting them, for faster operation.
Note: Enclose all floating-point arguments with double quotation marks ("") to ensure correct Globalization Support interpretation.
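For example, the following hypothetical call rotates an image 90 degrees clockwise (the value is quoted because it is a floating-point argument):
image1.process('rotate="90"');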
Oracle interMedia supports several operators that change the scale of an image, as described in the following sections.
The fixedScale operator is intended to simplify the creation of images with a specific size, such as thumbnail images. The scale, xScale, and yScale operators all accept floating-point scaling ratios, while the fixedScale (and maxScale) operators specify scaling values in pixels.
The two integer values supplied to the fixedScale operator are the desired dimensions (width and height) of the destination image. The supplied dimensions may be larger or smaller (or one larger and one smaller) than the dimensions of the source image.
The scaling method used by this operator will be the same as used by the scale operator in all cases. This operator cannot be combined with other scaling operators.
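For example, the following hypothetical call scales an image to exactly 100 pixels wide by 75 pixels high, regardless of the aspect ratio of the source image:
image1.process('fixedScale=100 75');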
The maxScale operator is a variant of the fixedScale operator that preserves the aspect ratio (relative width and height) of the source image. The maxScale operator also accepts two integer dimensions, but these values represent the maximum value of the appropriate dimension after scaling. The final dimension may actually be less than the supplied value.
Like the fixedScale operator, this operator is also intended to simplify the creation of images with a specific size. The maxScale operator is even better suited to thumbnail image creation than the fixedScale operator because thumbnail images created using the maxScale operator will have the same aspect ratio as the original image.
The maxScale operator scales the source image to fit within the dimensions specified while preserving the aspect ratio of the source image. Because the aspect ratio is preserved, only one dimension of the destination image may actually be equal to the values supplied to the operator. The other dimension may be smaller than, or equal to, the supplied value. Another way to think of this scaling method is that the source image is scaled by a single scale factor that is as large as possible, with the constraint that the destination image fit entirely within the dimensions specified by the maxScale operator.
If the cut operator is used in conjunction with the maxScale operator, then the aspect ratio of the cut window is preserved instead of the aspect ratio of the input image.
The scaling method used by this operator is the same as used by the scale operator in all cases. This operator cannot be combined with other scaling operators.
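For example, the following hypothetical call creates a thumbnail that fits entirely within a 64 x 64 pixel box while preserving the aspect ratio of the source image:
image1.process('maxScale=64 64');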
The scale operator enlarges or reduces the image by the ratio given as the value for the operator. If the value is greater than 1.0, then the destination image will be scaled up (enlarged). If the value is less than 1.0, then the output will be scaled down (reduced). A scale value of 1.0 has no effect, and is not an error. No scaling is applied to the source image if the scale operator is not passed to the process( ) method.
There are two scaling techniques used by interMedia. The first technique is "scaling by sampling," and is used only if the requested compression quality is MAXCOMPRATIO or HIGHCOMP, or if the image is being scaled up in both dimensions. This scaling technique works by selecting the source image pixel that is closest to the pixel being computed by the scaling algorithm and using the color of that pixel. This technique is faster, but results in a poorer quality image.
The second scaling technique is "scaling by averaging," and is used in all other cases. This technique works by selecting several pixels that are close to the pixel being computed by the scaling algorithm and computing the average color. This technique is slower, but results in a better quality image.
If the scale operator is not used, the default scaling value is 1.0. This operator cannot be combined with other scaling operators.
Note: Enclose all floating-point arguments with double quotation marks ("") to ensure correct Globalization Support interpretation.
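For example, the following hypothetical call reduces an image to half its original size in each dimension (the value is quoted because it is a floating-point argument):
image1.process('scale="0.5"');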
The xScale operator is similar to the scale operator but affects only the width (x-dimension) of the image. The important difference between xScale and scale is that with xScale, scaling by sampling is used whenever the image quality is specified to be MAXCOMPRATIO or HIGHCOMP, and is not dependent on whether the image is being scaled up or down.
This operator may be combined with the yScale operator to scale each axis differently. It may not be combined with other scaling operators (scale, fixedScale, maxScale).
Note: Enclose all floating-point arguments with double quotation marks ("") to ensure correct Globalization Support interpretation.
The yScale operator is similar to the scale operator but affects only the height (y-dimension) of the image. The important difference between yScale and scale is that with yScale, scaling by sampling is used whenever the image quality is specified to be MAXCOMPRATIO or HIGHCOMP, and is not dependent on whether the image is being scaled up or down.
This operator may be combined with the xScale operator to scale each axis differently. It may not be combined with other scaling operators (scale, fixedScale, maxScale).
Note: Enclose all floating-point arguments with double quotation marks ("") to ensure correct Globalization Support interpretation.
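For example, the following hypothetical call doubles the width of an image while leaving its height unchanged (the values are quoted because they are floating-point arguments):
image1.process('xScale="2.0" yScale="1.0"');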
The following operators are supported only when the destination image file format is Raw Pixel (or BMPF, for the scanlineOrder operator only), with the exception of the inputChannels operator, which is supported only when the source image is Raw Pixel or a foreign image. It does not matter whether the destination image format is set to Raw Pixel or BMPF explicitly using the fileFormat operator, or whether interMedia selects the Raw Pixel or BMPF format automatically because the source format is Raw Pixel, BMPF, or a foreign image.
The channelOrder operator determines the relative order of the red, green, and blue channels (bands) within the destination Raw Pixel image. The order of the characters R, G, and B within the mnemonic value passed to this operator determines the order of these channels within the output. The header of the Raw Pixel image will be written such that this order is not lost.
For more information about the Raw Pixel file format and the ordering of channels in that format, see Appendix E.
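For example, the following hypothetical call writes a Raw Pixel image whose channels are stored in blue, green, red order:
image1.process('fileFormat=RPIX channelOrder=BGR');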
The pixelOrder operator controls the direction of pixels within a scanline in a Raw Pixel Image. The value Normal indicates that the leftmost pixel of a scanline will appear first in the image data stream. The value Reverse causes the rightmost pixel of the scanline to appear first.
For more information about the Raw Pixel file format and pixel ordering, see Appendix E.
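For example, the following hypothetical call writes a Raw Pixel image whose scanlines store the rightmost pixel first:
image1.process('fileFormat=RPIX pixelOrder=reverse');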
The scanlineOrder operator controls the order of scanlines within a Raw Pixel or BMPF image. The value Normal indicates that the top display scanline will appear first in the image data stream. The value Inverse causes the bottom scanline to appear first. For BMPF images, scanlineOrder=inverse is the default and the ordinary ordering for that format.
For more information about the Raw Pixel or BMPF file format and scanline ordering, see Appendix E.
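For example, the following hypothetical call writes a BMPF image with the bottom scanline first, which is the ordinary ordering for that format:
image1.process('fileFormat=BMPF scanlineOrder=inverse');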
As stated in Section D.4, the inputChannels operator is supported only when the source image is in Raw Pixel format, or if the source is a foreign image.
The inputChannels operator assigns individual bands from a multiband image to be the red, green, and blue channels for later image processing. Any band within the source image can be assigned to any channel. If desired, only a single band may be specified; the selected band is then used as the grayscale channel, resulting in a grayscale output image. The first band in the image is number 1, and the band numbers passed to the inputChannels operator must be greater than or equal to 1 and less than or equal to the total number of bands in the source image. Only the bands selected by the inputChannels operator are written to the output; other bands are not transferred, even if the output image is in Raw Pixel format.
Every Raw Pixel or foreign image has default input channel assignments written into its header block; this operator overrides those default assignments.
For more information about the Raw Pixel file format and input channels, see Appendix E.
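For example, the following hypothetical calls (the band numbers are arbitrary) assign bands 3, 2, and 1 of a multiband source image to the red, green, and blue channels, and select band 4 as a single grayscale channel, respectively:
image1.process('inputChannels=3 2 1');
image1.process('inputChannels=4');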