VMS Help  —  RMU72  Unload
    There are two RMU Unload commands, as follows:

    o  An RMU Unload command without the After_Journal qualifier
       copies the data from a specified table or view of the database
       into either a specially structured file that contains both the
       data and the metadata or into an RMS file that contains data
       only.

    o  An RMU Unload command with the After_Journal qualifier
       extracts added, modified, and deleted record contents from
       committed transactions from specified tables in one or more
       after-image journal files.

1  –  Database

    Copies the data from a specified table or view of the database
    into one of the following:

    o  A specially structured file that contains both the data and
       the metadata (.unl).

    o  An RMS file that contains data only (.unl). This file is
       created when you specify the Record_Definition qualifier.
       (The Record_Definition qualifier also creates a second file,
       with file extension .rrd, that contains the metadata.)

    Data from the specially structured file can be reloaded only by
    using the RMU Load command. Data from the RMS file can be reloaded
    by using the RMU Load command or an alternative utility such as
    DATATRIEVE.

1.1  –  Description

    The RMU Unload command copies data from a specified table or view
    and places it in a specially structured file or in an RMS file.
    Be aware that the RMU Unload command does not remove data from
    the specified table; it merely makes a copy of the data.

    The RMU Unload command can be used to do the following:

    o  Extract data for an application that cannot access the Oracle
       Rdb database directly.

    o  Create an archival copy of data.

    o  Perform restructuring operations.

    o  Sort data by defining a view with a sorted-by clause, then
       unloading that view.

    The specially structured files created by the RMU Unload command
    contain metadata for the table that was unloaded. The RMS files
    created by the RMU Unload command contain only data; the metadata
    can be found either in the data dictionary or in the .rrd file
    created using the Record_Definition qualifier. Specify the
    Record_Definition qualifier to exchange data with an application
    that uses RMS files.

    The LIST OF BYTE VARYING (segmented string) data type cannot be
    unloaded into an RMS file; however, it can be unloaded into the
    specially structured file type.

    Data type conversions are valid only if Oracle Rdb supports the
    conversion.

    The RMU Unload command executes a read-only transaction to gather
    the metadata and user data to be unloaded. It is compatible with
    all operations that do not require exclusive access.

1.2  –  Format

  RMU/Unload root-file-spec table-name output-file-name

  Command Qualifiers                                  Defaults

  /Allocation=n                                       /Allocation=2048
  /Buffers=n                                          See description
  /Commit_Every=n                                     None
  /[No]Compression[=options]                          /Nocompression
  /Debug_Options={options}                            See description
  /Delete_Rows                                        None
  /[No]Error_Delete                                   See description
  /Extend_Quantity=number-blocks                      /Extend_Quantity=2048
  /Fields=(column-name-list)                          See description
  /Flush={Buffer_End|On_Commit}                       See description
  /[No]Limit_To=n                                     /Nolimit_To
  /Optimize={options}                                 None
  /Record_Definition={([No]File|Path)=name,options}   See description
  /Reopen_Count=n                                     None
  /Row_Count=n                                        See description
  /Statistics_Interval=seconds                        See description
  /Transaction_Type[=(transaction_mode,options...)]   See description
  /[No]Virtual_Fields[=[No]Automatic,[No]Computed_By] /Novirtual_Fields

1.3  –  Parameters

1.3.1  –  root-file-spec

    The root file specification of the database from which tables or
    views will be unloaded. The default file extension is .rdb.

1.3.2  –  table-name

    The name of the table or view to be unloaded, or its synonym.

1.3.3  –  output-file-name

    The destination file name. The default file extension is .unl.

1.4  –  Command Qualifiers

1.4.1  –  Allocation

    Allocation=n

    Enables you to preallocate the generated output file. The
    default allocation is 2048 blocks; when the file is closed it
    is truncated to the actual length used.

    If the value specified for the Allocation qualifier is less
    than 65535, it becomes the new maximum for the Extend_Quantity
    qualifier.

1.4.2  –  Buffers

    Buffers=n

    Specifies the number of database buffers used for the unload
    operation. If no value is specified, the default value for
    the database is used. Although this qualifier might affect
    the performance of the unload operation, the default number of
    buffers for the database usually allows adequate performance.
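
    For example, a command similar to the following (the buffer count
    shown is illustrative) unloads the EMPLOYEES table using 500
    database buffers:

    $ RMU/UNLOAD/BUFFERS=500 MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL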

1.4.3  –  Commit Every

    Commit_Every=n

    Commits the unload transaction after every n rows are unloaded
    and turns the selection query into a WITH HOLD cursor so that the
    data stream is not closed by a commit. Refer to the Oracle Rdb7
    SQL Reference Manual for more information about the WITH HOLD
    clause.
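
    For example, a command similar to the following (the value of
    1000 is illustrative) commits the unload transaction after each
    batch of 1000 rows:

    $ RMU/UNLOAD/COMMIT_EVERY=1000 -
    _$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL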

1.4.4  –  Compression

    Compression[=options]
    NoCompression

    Applies data compression to the user data unloaded to the
    internal (interchange) format file. Table rows, null byte
    vectors, and LIST OF BYTE VARYING data are compressed using
    either the LZW (Lempel-Ziv-Welch) technique or the ZLIB algorithm
    developed by Jean-loup Gailly and Mark Adler. Table metadata
    (column names and attributes) is never compressed, and the
    resulting file remains a structured interchange file. Enabling
    compression makes the resulting data file more compact, so it
    uses less disk space and can be transmitted faster over
    communication lines. The compressed file can still be processed
    using the RMU Dump Export command.

    The default value is Nocompression.

    This qualifier accepts the following optional keywords (ZLIB is
    the default if no compression algorithm is specified):

    o  LZW

       Selects the LZW compression technique.

    o  ZLIB

       Selects the ZLIB compression technique. This can be modified
       using the LEVEL option.

    o  LEVEL=number

       ZLIB allows further tuning with the LEVEL option that accepts
       a numeric level between 1 and 9. The default of 6 is usually
       a good trade off between result file size and the CPU cost of
       the compression.

    o  EXCLUDE_LIST[=(column-name,...)]

       Data in LIST OF BYTE VARYING columns may already be in a
       compressed format (for instance, images stored as JPG data)
       and therefore need not be compressed by RMU Unload. In fact,
       compression in such cases might actually cause the output
       to grow. The EXCLUDE_LIST option disables compression for
       LIST OF BYTE VARYING columns. Specific column names can be
       listed; if the list is omitted, all LIST OF BYTE VARYING
       columns are excluded from compression.

    Only the user data is compressed. Therefore, additional
    compression may be applied using third-party compression tools,
    such as ZIP. It is not the goal of RMU to replace such tools.

    The Record_Definition (or Rms_Record_Def) qualifier is not
    compatible with the Compression qualifier. Note that the TRIM
    option for DELIMITED_TEXT format output can be used to trim
    trailing spaces from VARCHAR data.
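
    For example, a command similar to the following (the compression
    level of 9 is illustrative) requests ZLIB compression of the
    COMPLETE_WORKS table shown in Examples 20 and 21, while excluding
    the PDF_VERSION column from compression:

    $ RMU/UNLOAD/COMPRESSION=(ZLIB,LEVEL=9,EXCLUDE_LIST:PDF_VERSION) -
    _$ COMPLETE_WORKS COMPLETE_WORKS COMPLETE_WORKS.UNL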

1.4.5  –  Debug Options

    Debug_Options={options}

    The Debug_Options qualifier allows you to turn on certain debug
    functions. The Debug_Options qualifier accepts the following
    options:

    o  [NO]TRACE

       Traces the qualifier and parameter processing performed by
       RMU Unload. In addition, the query executed to read the table
       data is annotated with the TRACE statement at each Commit
       (controlled by Commit_Every qualifier). When the logical name
       RDMS$SET_FLAGS is defined as "TRACE", then a line similar to
       the following is output after each commit is performed.

       ~Xt: 2009-04-23 15:16:16.95: Commit executed.

       The default is NOTRACE.

       $RMU/UNLOAD/REC=(FILE=WS,FORMAT=CONTROL) SQL$DATABASE WORK_STATUS WS/DEBUG=TRACE
       Debug = TRACE
       * Synonyms are not enabled
       Row_Count = 500
       Message buffer: Len: 13524
       Message buffer: Sze: 27, Cnt: 500, Use: 4 Flg: 00000000
       %RMU-I-DATRECUNL,   3 data records unloaded.

    o  [NO]FILENAME_ONLY

       When the qualifier Record_Definition=Format:CONTROL is used,
       the name of the created unload file is written to the control
       file (.CTL). When the keyword FILENAME_ONLY is specified, RMU
       Unload will prune the output file specification to show only
       the file name and type. The default is NOFILENAME_ONLY.

       $RMU/UNLOAD/REC=(FILE=TT:,FORMAT=CONTROL) SQL$DATABASE WORK_STATUS WS/DEBUG=
       FILENAME
       --
       -- SQL*Loader Control File
       --   Generated by: RMU/UNLOAD
       --   Version:      Oracle Rdb X7.2-00
       --   On:           23-APR-2009 11:12:46.29
       --
       LOAD DATA
       INFILE 'WS.UNL'
       APPEND
       INTO TABLE "WORK_STATUS"
       (
        STATUS_CODE                     POSITION(1:1) CHAR NULLIF (RDB$UL_NB1 = '1')
       ,STATUS_NAME                     POSITION(2:9) CHAR NULLIF (RDB$UL_NB2 = '1')
       ,STATUS_TYPE                     POSITION(10:23) CHAR NULLIF (RDB$UL_NB3 = '1')
       -- NULL indicators
       ,RDB$UL_NB1               FILLER POSITION(24:24) CHAR -- indicator for
       STATUS_CODE
       ,RDB$UL_NB2               FILLER POSITION(25:25) CHAR -- indicator for
       STATUS_NAME
       ,RDB$UL_NB3               FILLER POSITION(26:26) CHAR -- indicator for
       STATUS_TYPE
       )
       %RMU-I-DATRECUNL,   3 data records unloaded.

    o  [NO]HEADER

       This keyword controls the output of the header in the control
       file. To suppress the header use NOHEADER. The default is
       HEADER.

    o  APPEND, INSERT, REPLACE, TRUNCATE

       These keywords control the text that is output prior to the
       INTO TABLE clause in the control file. The default is APPEND,
       and only one of these options can be specified.

1.4.6  –  Delete Rows

    Delete_Rows

    Specifies that Oracle Rdb delete rows after they have been
    unloaded from the database. You can use this qualifier with the
    Commit_Every qualifier to process small batches of rows.

    If constraints, triggers, or table protection prevent the
    deletion of rows, the RMU Unload operation fails. The Delete_Rows
    qualifier cannot be used with views that are not updatable, such
    as views that contain joins, unions, aggregates, or GROUP BY
    clauses.

1.4.7  –  Error Delete

    Error_Delete
    Noerror_Delete

    Specifies whether the unload and record definition files should
    be deleted on error. By default, the RMU Unload command deletes
    the unload and record definition files if an unrecoverable error
    occurs that causes an abnormal termination of the unload command
    execution. Use the Noerror_Delete qualifier to retain the files.

    If the Delete_Rows qualifier is specified, the default for this
    qualifier is Noerror_Delete. This default is necessary to allow
    you to use the unload and record definition files to reload the
    data if an unrecoverable error has occurred after the delete of
    some of the unloaded rows has been committed. Even if the unload
    file is retained, you may not be able to reload the data using
    the RMU Load command if the error is severe enough to prevent the
    RMU error handler from continuing to access the unload file once
    the error is detected.

    If the Delete_Rows qualifier is not specified, the default is
    Error_Delete.

1.4.8  –  Extend Quantity

    Extend_Quantity=number-blocks

    Sets the size, in blocks, by which the unload file (.unl) can
    be extended. The minimum value for the number-blocks parameter
    is 1; the maximum value is 65535. If you provide a value for the
    Allocation qualifier that is less than 65535, that value becomes
    the maximum you can specify.

    If you do not specify the Extend_Quantity qualifier, the default
    block size by which .unl files can be extended is 2048 blocks.
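
    For example, a command similar to the following (the values shown
    are illustrative) preallocates a 50000-block unload file and, if
    the file fills, extends it in 10000-block increments:

    $ RMU/UNLOAD/ALLOCATION=50000/EXTEND_QUANTITY=10000 -
    _$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL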

1.4.9  –  Fields

    Fields=(column-name-list)

    Specifies the column or columns of the table or view to be
    unloaded from the database. If you list multiple columns,
    separate the column names with a comma, and enclose the list
    of column names within parentheses. This qualifier also specifies
    the order in which the columns should be unloaded if that order
    differs from what is defined for the table or view. Changing the
    structure of the table or view could be useful when restructuring
    a database or when migrating data between two databases with
    different metadata definitions. The default is all the columns
    defined for the table or view in the order defined.

1.4.10  –  Flush

    Flush=Buffer_End
    Flush=On_Commit

    Controls when internal RMS buffers are flushed to the unload
    file. By default, the RMU Unload command flushes any data left
    in the internal RMS file buffers only when the unload file is
    closed. The Flush qualifier changes that behavior. You must use
    one of the following options with the Flush qualifier:

    o  Buffer_End

       The Buffer_End option specifies that the internal RMS buffers
       be flushed to the unload file after each unload buffer has
       been written to the unload file.

    o  On_Commit

       The On_Commit option specifies that the internal RMS buffers
       be flushed to the unload file just before the current unload
       transaction is committed.

    If the Delete_Rows qualifier is specified, the default for this
    qualifier is Flush=On_Commit. This default is necessary to allow
    you to use the unload and record definition files to reload the
    data if an unrecoverable error has occurred after the delete of
    some of the unloaded rows has been committed.

    If the Delete_Rows qualifier is not specified, the default is to
    flush the record definition buffers only when the unload files
    are closed.

    More frequent flushing of the internal RMS buffers avoids the
    possible loss of some unload file data if an error occurs and
    the Noerror_Delete qualifier has been specified. However, the
    additional flushing of the internal RMS buffers to the unload
    file can cause the RMU Unload command to take longer to complete.
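
    For example, a command similar to the following (the values shown
    are illustrative) retains the unload file on error and flushes
    the internal RMS buffers before each commit; as shown in Example
    17, the Commit_Every value must equal or be a multiple of the
    Row_Count value:

    $ RMU/UNLOAD/NOERROR_DELETE/ROW_COUNT=50/COMMIT_EVERY=100 -
    _$ /FLUSH=ON_COMMIT MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL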

1.4.11  –  Limit To

    Limit_To=n
    Nolimit_To

    Limits the number of rows unloaded from a table or view. The
    primary use of the Limit_To qualifier is to unload a data sample
    for loading into test databases. The default is the Nolimit_To
    qualifier.
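
    For example, a command similar to the following (the limit of 100
    rows is illustrative) unloads a small sample of the EMPLOYEES
    table for use in a test database:

    $ RMU/UNLOAD/LIMIT_TO=100 MF_PERSONNEL EMPLOYEES SAMPLE.UNL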

1.4.12  –  Optimize

    Optimize={options}

    Controls the query optimization of the RMU Unload command. You
    must use one or more of the following options with the Optimize
    qualifier:

    o  Conformance={Optional|Mandatory}

       This option accepts two keywords, Optional or Mandatory, which
       can be used to override the settings in the specified query
       outline.

       If the matching query outline is invalid, the
       Conformance=Mandatory option causes the query compile, and
       hence the RMU Unload operation, to stop. The query outline
       used is the one that matches either the string provided by
       the Using_Outline or Name_As option or the query
       identification.

       The default behavior is to use the setting within the query
       outline. If no query outline is found, or query outline usage
       is disabled, then this option is ignored.

    o  Fast_First

       This option asks the query optimizer to favor strategies that
       return the first rows quickly, possibly at the expense of
       longer overall retrieval time. This option does not override
       the setting if any query outline is used.

       This option cannot be specified at the same time as the Total_
       Time option.

                                      NOTE

          Oracle Corporation does not recommend this optimization
          option for the RMU Unload process. It is provided only
          for backward compatibility with prior Rdb releases when
          it was the default behavior.

    o  Name_As=query_name

       This option supplies the name of the query. It is used to
       annotate output from the Rdb debug flags (enabled using the
       logical RDMS$SET_FLAGS) and is also logged by Oracle TRACE.

       If the Using_Outline option is not used, this name is also
       used as the query outline name.

    o  Selectivity=selectivity-value

       This option allows you to influence the Oracle Rdb query
       optimizer to use different selectivity values.

       The Selectivity option accepts the following keywords:

       -  Aggressive - assumes a smaller number of rows is selected
          compared to the default Oracle Rdb selectivity

       -  Sampled - uses literals in the query to perform preliminary
          estimation on indices

       -  Default - uses default selectivity rules

       The following example shows a use of the Selectivity option:

       $RMU/UNLOAD/OPTIMIZE=(TOTAL_TIME,SELECTIVITY=SAMPLED) -
       _$  SALES_DB CUSTOMER_TOP10 TOP10.UNL

       This option is most useful when the RMU Unload command
       references a view definition with a complex predicate.

    o  Sequential_Access

       This option requests that index access be disabled for this
       query. This is particularly useful for RMU Unload from views
       against strictly partitioned tables. Strict partitioning is
       enabled by the PARTITIONING IS NOT UPDATABLE clause on the
       CREATE or ALTER STORAGE MAP statements. Retrieval queries
       only use this type of partition optimization during sequential
       table access.

       This option cannot be specified at the same time as the Using_
       Outline option.

    o  Total_Time

       This option requests that total time optimization be applied
       to the unload query. It does not override the setting if any
       query outline is used.

       In some cases, total time optimization may improve performance
       of the RMU Unload command when the query optimizer favors
       overall performance instead of faster retrieval of the first
       row. Since the RMU Unload process is unloading the entire set,
       there is no need to require fast delivery of the first few
       rows.

       This option may not be specified at the same time as the Fast_
       First option. The Optimize=Total_Time behavior is the default
       behavior for the RMU Unload command if the Optimize qualifier
       is not specified.

    o  Using_Outline=outline_name

       This option supplies the name of the query outline to be
       used by the RMU Unload command. If the query outline does
       not exist, the name is ignored.

       This option may not be specified at the same time as the
       Sequential_Access option.
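
    For example, a command similar to the following requests total
    time optimization and names the query so that it can be
    identified in debug-flag and Oracle TRACE output (the query name
    UNLOAD_EMP is only an illustration):

    $ RMU/UNLOAD/OPTIMIZE=(TOTAL_TIME,NAME_AS=UNLOAD_EMP) -
    _$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL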

1.4.13  –  Record Definition

    Record_Definition=(File=name[,options])
    Record_Definition=(Path=name[,options])
    Record_Definition=Nofile

    Creates an RMS file containing the record structure definition
    for the output file. The record description uses the CDO record
    and field definition format. The default file extension is .rrd.

    If you omit the File=name or Path=name option, you must specify
    at least one other option.

    The date-time syntax in .rrd files generated by this qualifier
    changed in Oracle Rdb V6.0 to make the .rrd file compatible with
    the date-time syntax support for Oracle CDD/Repository V6.1. The
    RMU Unload command accepts both the date-time syntax generated
    by the Record_Definition qualifier in previous versions of Oracle
    Rdb and the syntax generated in Oracle Rdb V6.0 and later.

    See the help entry for RRD_File_Syntax for more information on
    .rrd files and details on the date-time syntax generated by this
    qualifier.

    The options are:

    o  Format=(Text)

       If you specify the Format=(Text) option, Oracle RMU converts
       all data to printable text before unloading it.

    o  Format=Control

       The Format=Control option provides support for SQL*Loader
       control files and portable data files. The output file
       defaults to type .CTL.

       FORMAT=CONTROL implicitly writes the data in a portable text
       format rather than as binary values. The unloaded data files
       are similar to those generated by FORMAT=TEXT but include a
       NULL vector that represents NULL values ('1') and non-NULL
       values ('0').

       The SQL*Loader control file uses this NULL vector to set NULL
       for the data upon loading.

       When FORMAT=CONTROL is used, the output control file and
       associated data file are intended to be used with the Oracle
       RDBMS SQL*Loader (sqlldr) command to load the data into an
       Oracle RDBMS database table. LIST OF BYTE VARYING (SEGMENTED
       STRING) columns are not unloaded.

       The keywords NULL, PREFIX, SEPARATOR, SUFFIX, and TERMINATOR
       only apply to DELIMITED_TEXT format and may not be used in
       conjunction with the CONTROL keyword.

       DATE VMS data is unloaded including the fractional seconds
       precision. However, when mapped to Oracle DATE type in the
       control file, the fractional seconds value is ignored. It
       is possible to modify the generated control file to use the
       TIMESTAMP type and add FF to the date edit mask.

                                      NOTE

          The RMU Load command does not support loading data using
          FORMAT=Control.

    o  Format=XML

       The Format=XML option causes the output Record_Definition file
       type to default to .DTD (Document Type Definition). The output
       file defaults to type .XML. The contents of the data file are
       in XML format suitable for processing with a Web browser or an
       XML application.

       If you use the Nofile option or do not specify the File or
       Path keyword, the DTD is included in the XML output file
       (internal DTD). If you specify a name with the File or Path
       keyword to identify an output file, the file is referenced as
       an external DTD from within the XML file.

       The XML file contains a single table that has the name of the
       database and multiple rows named <RMU_ROW>. Each row contains
       the values for each column in printable text. If a value is
       NULL, then the tag <NULL/> is displayed. Example 16 shows this
       behavior.

                                      NOTE

          The RMU Load command does not support loading data using
          FORMAT=XML.

    o  Format=(Delimited_Text [,delimiter-options])

       If you specify the Format=Delimited_Text option, Oracle RMU
       applies delimiters to all data before unloading it.

       Note that DATE VMS dates are output in the collatable time
       format, which is yyyymmddhhmmsscc. For example, March 20, 1993
       is output as: 1993032000000000.

       If the Format option is not used, Oracle RMU outputs data to
       a fixed-length binary flat file. If the Format=Delimited_Text
       option is not used, VARCHAR(n) strings are padded with blanks
       when the specified string has fewer characters than n so that
       the resulting string is n characters long.

       Delimiter options (and their default values if you do not
       specify delimiter options) are:

       -  Prefix=string

          Specifies a prefix string that begins any column value in
          the ASCII output file. If you omit this option, the column
          prefix will be a quotation mark (").

       -  Separator=string

          Specifies a string that separates column values of a row.
          If you omit this option, the column separator will be a
          single comma (,).

       -  Suffix=string

          Specifies a suffix string that ends any column value in
          the ASCII output file. If you omit this option, the column
          suffix will be a quotation mark (").

       -  Terminator=string

          Specifies the row terminator that completes all the column
          values corresponding to a row. If you omit this option, the
          row terminator will be the end of the line.

       -  Null=string

          Specifies the string that is written to the output file
          when the value of the database column is NULL.

          The Null option can be specified on the command line as any
          one of the following:

          *  A quoted string

          *  An empty set of double quotes ("")

          *  No string

          The string that represents the null character must be
          quoted on the Oracle RMU command line. You cannot specify a
          blank space or spaces as the null character. You cannot use
          the same character for the Null value and other Delimited_
          Text options.

                                      NOTE

          The values of each of the strings specified in the
          delimiter options must be enclosed within quotation
          marks. Oracle RMU strips these quotation marks while
          interpreting the values. If you want to specify a
          quotation mark (") as a delimiter, specify a string
          of four quotation marks. Oracle RMU interprets four
          quotation marks as your request to use one quotation
          mark as a delimiter. For example, Suffix = """".

          Oracle RMU reads these quotation marks as follows:

          o  The first quotation mark is stripped from the string.

          o  The second and third quotation mark are interpreted
             as your request for one quotation mark (") as a
             delimiter.

          o  The fourth quotation mark is stripped.

          This results in one quotation mark being used as a
          delimiter.

          Furthermore, if you want to specify a quotation mark as
          part of the delimited string, you must use two quotation
          marks for each quotation mark that you want to appear in
          the string. For example, Suffix = "**""**" causes Oracle
          RMU to use a delimiter of **"**.

    o  Trim=option

       If you specify the Trim=option keyword, leading and/or
       trailing spaces are removed from each output field. The option
       accepts three keywords:

       -  TRAILING - trailing spaces will be trimmed from CHARACTER
          and CHARACTER VARYING (VARCHAR) data that is unloaded.
          This is the default setting if only the TRIM option is
          specified.

       -  LEADING - leading spaces will be trimmed from CHARACTER and
          CHARACTER VARYING (VARCHAR) data that is unloaded.

       -  BOTH - both leading and trailing spaces will be trimmed.

    When the Record_Definition qualifier is used with load or unload
    operations, and the Null option to the Delimited_Text option
    is not specified, any null values stored in the rows of the
    tables being loaded or unloaded are not preserved. Therefore,
    if you want to preserve null values stored in tables and you are
    moving data within the database or between databases, specify the
    Null option with Delimited_Text option of the Record_Definition
    qualifier.
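
    For example, a command similar to the following (the file names
    and the asterisk null string are illustrative) combines the
    Delimited_Text format with the Trim and Null options so that
    trailing spaces are removed from character data and null values
    are preserved:

    $ RMU/UNLOAD/RECORD_DEFINITION=(FILE=NAMES, FORMAT=DELIMITED_TEXT, -
    _$ TRIM=TRAILING, NULL="*") MF_PERSONNEL EMPLOYEES NAMES.UNL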

1.4.14  –  Reopen Count

    Reopen_Count=n

    The Reopen_Count=n qualifier allows you to specify how many
    records are written to an output file. The output file will
    be re-created (that is, a new version of the file will be
    created) when the record count reaches the specified value.
    The Reopen_Count=n qualifier is only valid when used with the
    Record_Definition or Rms_Record_Def qualifiers.
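
    For example, a command similar to the following (the count of
    10000 records is illustrative) creates a new version of the
    output file after every 10000 records:

    $ RMU/UNLOAD/REOPEN_COUNT=10000 -
    _$ /RECORD_DEFINITION=(FILE=NAMES, FORMAT=DELIMITED_TEXT) -
    _$ MF_PERSONNEL EMPLOYEES NAMES.UNL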

1.4.15  –  Rms Record Def

    Rms_Record_Def=(File=name[,options])

    Rms_Record_Def=(Path=name[,options])

    Synonymous with the Record_Definition qualifier. See the
    description of the Record_Definition qualifier.

1.4.16  –  Row Count

    Row_Count=n

    Specifies that Oracle Rdb buffer multiple rows between the Oracle
    Rdb server and the RMU Unload process. The default value for n
    is 500 rows; however, this value should be adjusted based on
    working set size and length of unloaded data. Increasing the row
    count may reduce the CPU cost of the unload operation. For remote
    databases, this may significantly reduce network traffic for
    large volumes of data because the buffered data can be packaged
    into larger network packets.

    The minimum value you can specify for n is 1. The default row
    count is the value specified for the Commit_Every qualifier or
    500, whichever is smaller.
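
    For example, a command similar to the following (the value of
    2000 rows is illustrative) increases the number of rows buffered
    between the server and the unload process:

    $ RMU/UNLOAD/ROW_COUNT=2000 MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL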

1.4.17  –  Statistics Interval

    Statistics_Interval=seconds

    Specifies that statistics are to be displayed at regular
    intervals so that you can evaluate the progress of the unload
    operation.

    The displayed statistics include:

    o  Elapsed time

    o  CPU time

    o  Buffered I/O

    o  Direct I/O

    o  Page faults

    o  Number of records unloaded since the last transaction was
       committed

    o  Number of records unloaded so far in the current transaction

    If the Statistics_Interval qualifier is specified, the seconds
    parameter is required. The minimum value is 1. If the unload
    operation completes successfully before the first time interval
    has passed, you receive only an informational message on the
    number of records unloaded. If the unload operation is
    unsuccessful before the first time interval has passed, you
    receive error messages and statistics on the number of records
    unloaded.

    At any time during the unload operation, you can press Ctrl/T to
    display the current statistics.

1.4.18  –  Transaction Type

    Transaction_Type[=(transaction_mode,options,...)]

    Allows you to specify the transaction mode, isolation level, and
    wait behavior for transactions.

    Use one of the following keywords to control the transaction
    mode:

    o  Automatic

       When Transaction_Type=Automatic is specified, the transaction
       type depends on the current database settings for snapshots
       (enabled, deferred, or disabled), transaction modes available
       to this user, and the standby status of the database.
       Automatic mode is the default.

    o  Read_Only

       Starts a Read_Only transaction.

    o  Exclusive

       Starts a Read_Write transaction and reserves the table for
       Exclusive_Read.

    o  Protected

       Starts a Read_Write transaction and reserves the table for
       Protected_Read.

    o  Shared

       Starts a Read_Write transaction and reserves the table for
       Shared_Read.

    Use one of the following options with the keyword Isolation_
    Level=[option] to specify the transaction isolation level:

    o  Read_Committed

    o  Repeatable_Read

    o  Serializable. Serializable is the default setting.

    Refer to the SET TRANSACTION statement in the Oracle Rdb SQL
    Reference Manual for a complete description of the transaction
    isolation levels.

    Specify the wait setting by using one of the following keywords:

    o  Wait

       Waits indefinitely for a locked resource to become available.
       Wait is the default behavior.

    o  Wait=n

       The value you supply for n is the transaction lock timeout
       interval. When you supply this value, Oracle Rdb waits n
       seconds before aborting the wait and the RMU Unload session.
       Specifying a wait timeout interval of zero is equivalent to
       specifying Nowait.

    o  Nowait

       Does not wait for a locked resource to become available.

1.4.19  –  Virtual Fields

    Virtual_Fields[=([No]Automatic,[No]Computed_By)]
    Novirtual_Fields

    The Virtual_Fields qualifier unloads any AUTOMATIC or COMPUTED
    BY fields as real data. This qualifier permits the transfer of
    computed values to another application. It also permits unloading
    through a view that is a union of tables or that is comprised
    of columns from multiple tables. For example, if there are two
    tables, EMPLOYEES and RETIRED_EMPLOYEES, the view ALL_EMPLOYEES
    (a union of EMPLOYEES and RETIRED_EMPLOYEES tables) can be
    unloaded.

    The Novirtual_Fields qualifier is the default, which is
    equivalent to the Virtual_Fields=(Noautomatic,Nocomputed_By)
    qualifier.

    If you specify the Virtual_Fields qualifier without a keyword,
    all fields are unloaded, including COMPUTED BY and AUTOMATIC
    table columns, and calculated VIEW columns.

    If you specify the Virtual_Fields=(Automatic,Nocomputed_By)
    qualifier or the Virtual_Fields=Nocomputed_By qualifier, data
    is only unloaded from Automatic fields. If you specify the
    Virtual_Fields=(Noautomatic,Computed_By) qualifier or the
    Virtual_Fields=Noautomatic qualifier, data is only unloaded from
    Computed_By fields.

1.5  –  Usage Notes

    o  To use the RMU Unload command for a database, you must have
       the RMU$UNLOAD privilege in the root file access control
       list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
       privilege. You must also have the SQL SELECT privilege to the
       table or view being unloaded.

    o  For tutorial information on the RMU Unload command, refer to
       the Oracle Rdb Guide to Database Design and Definition.

    o  Detected asynchronous prefetch should be enabled to achieve
       the best performance of this command. Beginning with Oracle
       Rdb V7.0, by default, detected asynchronous prefetch is
       enabled. You can determine the setting for your database by
       issuing the RMU Dump command with the Header qualifier.

       If detected asynchronous prefetch is disabled, and you do not
       want to enable it for the database, you can enable it for your
       Oracle RMU operations by defining the following logicals at
       the process level:

       $ DEFINE RDM$BIND_DAPF_ENABLED 1
       $ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT P1

       P1 is a value between 10 and 20 percent of the user buffer
       count.
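
       For example, for a process that uses 250 database buffers (an
       illustrative figure), 10 to 20 percent suggests a depth value
       in the range of 25 to 50:

       $ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT 25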

    o  You can unload a table from a database structured under
       one version of Oracle Rdb and load it into the same table
       of a database structured under another version of Oracle
       Rdb. For example, if you unload the EMPLOYEES table from
       a mf_personnel database created under Oracle Rdb V6.0, you
       can load the generated .unl file into an Oracle Rdb V7.0
       database. Likewise, if you unload the EMPLOYEES table from
       a mf_personnel database created under Oracle Rdb V7.0, you
       can load the generated .unl file into an Oracle Rdb V6.1
       database. This is true even for specially formatted binary
       files (created with the RMU Unload command without the Record_
       Definition qualifier). The earliest version into which you can
       load a .unl file from another version is Oracle Rdb V6.0.

    o  The Fields qualifier can be used with indirect file
       references. When you use the Fields qualifier with an indirect
       file reference in the field list, the referenced file is
       written to SYS$OUTPUT if you have used the DCL SET VERIFY
       command. See the Indirect-Command-Files help entry for more
       information.

    o  To view the contents of the specially structured .unl file
       created by the RMU Unload command, use the RMU Dump Export
       command.

    o  To preserve the null indicator in a load or unload operation,
       use the Null option with the Record_Definition qualifier.
       Using the Record_Definition qualifier without the Null option
       replaces all null values with zeros; this can cause unexpected
       results with computed-by columns.

    o  Oracle RMU does not allow you to unload a system table.

    o  The RMU Unload command recognizes character set information.
       When you unload a table, RMU Unload transfers information
       about the character set to the record definition file.

    o  When it creates the record definition file, the RMU Unload
       command preserves any lowercase characters in table and column
       names by allowing delimited identifiers. Delimited identifiers
       are user-supplied names enclosed within quotation marks ("").

       By default, RMU Unload changes any table or column (field)
       names that you specify to uppercase. To preserve lowercase
       characters, use delimited identifiers. That is, enclose the
       names within quotation marks. In the following example, RMU
       Unload preserves the uppercase and lowercase characters in
       "Last_Name" and "Employees":

       $  RMU/UNLOAD/FIELDS=("Last_name",FIRST_NAME) TEST "Employees" -
       _$ TEST.UNL

                                      NOTE

          The data dictionary does not preserve the distinction
          between uppercase and lowercase identifiers. If you use
          delimited identifiers, you must be careful to ensure that
          the record definition does not include objects with names
          that are duplicates except for the case. For example,
          the data dictionary considers the delimited identifiers
          "Employee_ID" and "EMPLOYEE_ID" to be the same name.

    o  Oracle RMU does not support the multischema naming convention
       and returns an error if you specify one. For example:

       $ RMU/UNLOAD CORPORATE_DATA ADMINISTRATION.PERSONNEL.EMPLOYEES -
       _$ OUTPUT.UNL
       %RMU-E-OUTFILDEL, Fatal error, output file deleted
       -RMU-F-RELNOTFND, Relation (ADMINISTRATION.PERSONNEL.EMPLOYEES) not found

       When using a multischema database, you must specify the SQL
       stored name for the database object.

       For example, to find the stored name that corresponds to the
       ADMINISTRATION.PERSONNEL.EMPLOYEES table in the corporate_data
       database, issue an SQL SHOW TABLE command, as follows:

       SQL> SHOW TABLE ADMINISTRATION.PERSONNEL.EMPLOYEES
       Information for table ADMINISTRATION.PERSONNEL.EMPLOYEES
           Stored name is EMPLOYEES
          .
          .
          .

       Then to unload the table, issue the following RMU Unload
       command:

       $ RMU/UNLOAD CORPORATE_DATA EMPLOYEES OUTPUT.UNL

    o  If the Transaction_Type qualifier is omitted, a Read_Only
       transaction is started against the database. This behavior is
       provided for backward compatibility with prior Rdb releases.
       If the Transaction_Type qualifier is specified without a
       transaction mode, the default value Automatic is used.

    o  If the database has snapshots disabled, Oracle Rdb defaults to
       a READ WRITE ISOLATION LEVEL SERIALIZABLE transaction. Locking
       may be reduced by specifying Transaction_Type=(Automatic), or
       Transaction_Type=(Shared,Isolation_Level=Read_Committed).

    o  If you use a synonym to represent a table or a view, the RMU
       Unload command translates the synonym to the base object
       and processes the data as though the base table or view had
       been named. This implies that the unload interchange files
       (.UNL) or record definition files (.RRD) that contain the
       table metadata will name the base table or view and not use
       the synonym name. If the metadata is used against a different
       database, you may need to use the Match_Name qualifier to
       override this name during the RMU load process.

1.6  –  Examples

    Example 1

    The following command unloads the EMPLOYEE_ID and LAST_NAME
    column values from the EMPLOYEES table of the mf_personnel
    database. The data is stored in names.unl.

    $ RMU/UNLOAD -
    _$ /FIELDS=(EMPLOYEE_ID, LAST_NAME) -
    _$ MF_PERSONNEL EMPLOYEES NAMES.UNL
    %RMU-I-DATRECUNL, 100 data records unloaded.

    Example 2

    The following command unloads the EMPLOYEES table from the
    mf_personnel database and places the data in the RMS file,
    names.unl. The names.rrd file contains the record structure
    definitions for the data in names.unl.

    $ RMU/UNLOAD/RECORD_DEFINITION=FILE=NAMES.RRD MF_PERSONNEL -
    _$ EMPLOYEES NAMES.UNL
    %RMU-I-DATRECUNL, 100 data records unloaded.

    Example 3

    The following command unloads the EMPLOYEE_ID and LAST_NAME
    column values from the EMPLOYEES table of the mf_personnel
    database and accepts the default values for delimiters, as shown
    by viewing the names.unl file:

    $ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
    -$ /RECORD_DEFINITION=(FILE=NAMES, FORMAT=DELIMITED_TEXT) -
    -$ MF_PERSONNEL EMPLOYEES NAMES.UNL
    %RMU-I-DATRECUNL, 100 data records unloaded.
    $ !
    $ ! TYPE the names.unl file to see the effect of the RMU Unload
    $ ! command.
    $ !
    $ TYPE NAMES.UNL

    "00164","Toliver       "
    "00165","Smith         "
    "00166","Dietrich      "
    "00167","Kilpatrick    "
    "00168","Nash          "
       .
       .
       .

    Example 4

    The following command unloads the EMPLOYEE_ID and LAST_NAME
    column values from the EMPLOYEES table of the mf_personnel
    database and specifies the asterisk (*)  character as the string
    to mark the beginning and end of each column (the prefix and
    suffix string):

    $ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
    _$ /RECORD_DEFINITION=(FILE=NAMES, -
    _$ FORMAT=DELIMITED_TEXT, SUFFIX="*", -
    _$ PREFIX="*") -
    _$ MF_PERSONNEL EMPLOYEES NAMES.UNL
    %RMU-I-DATRECUNL, 100 data records unloaded.
    $ !
    $ ! TYPE the names.unl file to see the effect of the RMU Unload
    $ ! command.
    $ !
    $ TYPE NAMES.UNL
    *00164*,*Toliver       *
    *00165*,*Smith         *
    *00166*,*Dietrich      *
    *00167*,*Kilpatrick    *
    *00168*,*Nash          *
    *00169*,*Gray          *
    *00170*,*Wood          *
    *00171*,*D'Amico       *
       .
       .
       .

    Example 5

    The following command unloads all column values from the
    EMPLOYEES table of the mf_personnel database, and specifies the
    Format=Text option of the Record_Definition qualifier. Oracle RMU
    will convert all the data to printable text, as can be seen by
    viewing the text_output.unl file:

    $ RMU/UNLOAD/RECORD_DEFINITION=(FILE=TEXT_RECORD,FORMAT=TEXT) -
    _$ MF_PERSONNEL EMPLOYEES TEXT_OUTPUT
    %RMU-I-DATRECUNL, 100 data records unloaded.
    $ !
    $ ! TYPE the text_output.unl file to see the effect of the RMU Unload
    $ ! command.
    $ !
    $ TYPE TEXT_OUTPUT.UNL
    00164Toliver       Alvin     A146 Parnell Place
    Chocorua            NH03817M19470328000000001
    00165Smith         Terry     D120 Tenby Dr.
    Chocorua            NH03817M19540515000000002
    00166Dietrich      Rick       19 Union Square
    Boscawen            NH03301M19540320000000001
       .
       .
       .

    Example 6

    The following command unloads the EMPLOYEE_ID and LAST_NAME
    column values from the EMPLOYEES table of the mf_personnel
    database and requests that statistics be displayed on the
    terminal at 2-second intervals:

    $ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME)   -
    _$ /STATISTICS_INTERVAL=2             -
    _$ MF_PERSONNEL EMPLOYEES NAMES.UNL

    Example 7

    The following example unloads a subset of data from the EMPLOYEES
    table, using the following steps:

    1. Create a temporary view on the EMPLOYEES table that includes
       only employees who live in Massachusetts.

    2. Use an RMU Unload command to unload the data from this view.

    3. Delete the temporary view.

    $ SQL
    SQL> ATTACH 'FILENAME MF_PERSONNEL';
    SQL> CREATE VIEW MA_EMPLOYEES
    cont>       (EMPLOYEE_ID,
    cont>        LAST_NAME,
    cont>        FIRST_NAME,
    cont>        MIDDLE_INITIAL,
    cont>        STATE,
    cont>        STATUS_CODE)
    cont>  AS SELECT
    cont>        E.EMPLOYEE_ID,
    cont>        E.LAST_NAME,
    cont>        E.FIRST_NAME,
    cont>        E.MIDDLE_INITIAL,
    cont>        E.STATE,
    cont>        E.STATUS_CODE
    cont>     FROM EMPLOYEES E
    cont>     WHERE E.STATE='MA';
    SQL> COMMIT;
    SQL> EXIT;

    $ RMU/UNLOAD/RECORD_DEFINITION=(FILE=MA_EMPLOYEES,FORMAT=DELIMITED_TEXT) -
    _$ MF_PERSONNEL MA_EMPLOYEES MA_EMPLOYEES.UNL
    %RMU-I-DATRECUNL, 9 data records unloaded.

    $ SQL
    SQL> ATTACH 'FILENAME MF_PERSONNEL';
    SQL> DROP VIEW MA_EMPLOYEES;
    SQL> COMMIT;

    Example 8

    The following example shows that null values in blank columns
    are not preserved unless the Null option is specified with the
    Delimited_Text option of the Record_Definition qualifier:

    $ SQL
    SQL> ATTACH 'FILENAME MF_PERSONNEL';
    SQL> --
    SQL> -- Create the NULL_DATE table:
    SQL> CREATE TABLE NULL_DATE
    cont> (COL1 VARCHAR(5),
    cont>  DATE1 DATE,
    cont>  COL2 VARCHAR(5));
    SQL> --
    SQL> -- Store a row that does not include a value for the DATE1
    SQL> -- column of the NULL_DATE table:
    SQL> INSERT INTO NULL_DATE
    cont>            (COL1, COL2)
    cont>    VALUES ('first','last');
    1 row inserted
    SQL> --
    SQL> COMMIT;
    SQL> --
    SQL> -- The previous SQL INSERT statement causes a null value to
    SQL> -- be stored in NULL_DATE:
    SQL> SELECT * FROM NULL_DATE;
     COL1    DATE1                     COL2
     first   NULL                      last
    1 row selected
    SQL> --
    SQL> DISCONNECT DEFAULT;
    SQL> EXIT;
    $ !
    $ ! In the following RMU Unload command, the Record_Definition
    $ ! qualifier is used to unload the row with the NULL value, but
    $ ! the Null option is not specified:
    $ RMU/UNLOAD/RECORD_DEFINITION=(FILE=NULL_DATE,FORMAT=DELIMITED_TEXT) -
    _$  MF_PERSONNEL NULL_DATE NULL_DATE
    %RMU-I-DATRECUNL, 1 data records unloaded.
    $ !
    $ ! The null_date.unl file created by the previous unload
    $ ! operation does not preserve the NULL value in the DATE1 column.
    $ ! Instead, the Oracle Rdb default date value is used:
    $ TYPE NULL_DATE.UNL
    "first","1858111700000000","last"
    $ !
    $ ! This time, unload the row in NULL_DATE with the Null option to
    $ ! the Record_Definition qualifier:
    $ RMU/UNLOAD MF_PERSONNEL NULL_DATE NULL_DATE -
    _$ /RECORD_DEFINITION=(FILE=NULL_DATE.RRD, FORMAT=DELIMITED_TEXT, NULL="*")
    %RMU-I-DATRECUNL,   1 data records unloaded.
    $ !
    $ TYPE NULL_DATE.UNL
    "first",*,"last "
    $ SQL
    SQL> ATTACH 'FILENAME MF_PERSONNEL';
    SQL> --
    SQL> -- Delete the existing row from NULL_DATE:
    SQL> DELETE FROM NULL_DATE;
    1 row deleted
    SQL> --
    SQL> COMMIT;
    SQL> EXIT;
    $ !
    $ ! Load the row that was unloaded back into the table,
    $ ! using the null_date.unl file created by the
    $ ! previous RMU Unload command:
    $ RMU/LOAD MF_PERSONNEL /RECORD_DEFINITION=(FILE=NULL_DATE.RRD, -
    _$ FORMAT=DELIMITED_TEXT, NULL="*") NULL_DATE NULL_DATE
    %RMU-I-DATRECREAD,  1 data records read from input file.
    %RMU-I-DATRECSTO, 1 data records stored.
    $ !
    $ SQL
    SQL> ATTACH 'FILENAME MF_PERSONNEL';
    SQL> --
    SQL> -- Display the row stored in NULL_DATE.
    SQL> -- The NULL value stored in the data row
    SQL> -- was preserved by the load and unload operations:
    SQL> SELECT * FROM NULL_DATE;
     COL1    DATE1                     COL2
     first   NULL                      last
    1 row selected

    Example 9

    The following example demonstrates the use of the Null="" option
    of the Record_Definition qualifier to signal to Oracle RMU that
    any data that is an empty string in the .unl file (as represented
    by two commas with no space separating them) should have the
    corresponding column in the database flagged as NULL.

    The first part of this example shows the contents of the .unl
    file and the RMU Load command used to load the .unl file. The
    terminator for each record in the .unl file is the number sign
    (#).  The second part of this example unloads the data and
    specifies that any columns that are flagged as NULL should be
    represented in the output file with an asterisk.

    "90021","ABUSHAKRA","CAROLINE","A","5 CIRCLE STREET",,
    "CHELMSFORD", "MA", "02184", "1960061400000000"#
    "90015","BRADFORD","LEO","B","4 PLACE STREET",, "NASHUA","NH",
    "03030", "1949051800000000"#
    $ !
    $ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
    _$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2,  -
    _$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
    _$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
    _$ FORMAT=DELIMITED_TEXT, -
    _$ TERMINATOR="#", -
    _$ NULL="") -
    _$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
    %RMU-I-DATRECREAD,  2 data records read from input file.
    %RMU-I-DATRECSTO,   2 data records stored.
    $ !
    $ ! Unload this data first without specifying the Null option:
    $ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
    _$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2,  -
    _$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
    _$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
    _$ FORMAT=DELIMITED_TEXT, -
    _$ TERMINATOR="#") -
    _$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
    %RMU-I-DATRECUNL,   102 data records unloaded.
    $ !
    $ ! The ADDRESS_DATA_2 field appears as a quoted string:
    $ TYPE EMPLOYEES.UNL
       .
       .
       .
    "90021","ABUSHAKRA     ","CAROLINE  ","A","5 CIRCLE STREET        ","
                ","CHELMSFORD          ","MA","02184","1960061400000000"#
    $ !
    $ ! Now unload the data with the Null option specified:
    $ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
    _$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2,  -
    _$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
    _$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
    _$ FORMAT=DELIMITED_TEXT, -
    _$ TERMINATOR="#", -
    _$ NULL="*") -
    _$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
    %RMU-I-DATRECUNL,   102 data records unloaded.
    $ !
    $ ! The value for ADDRESS_DATA_2 appears as an asterisk:
    $ !
    $ TYPE EMPLOYEES.UNL
       .
       .
       .
    "90021","ABUSHAKRA     ","CAROLINE  ","A","5 CIRCLE STREET       ",*,
    "CHELMSFORD          ","MA","02184","1960061400000000"#

    Example 10

    The following example specifies a transaction for the RMU Unload
    command equivalent to the SQL statement SET TRANSACTION READ
    WRITE WAIT 36 ISOLATION LEVEL REPEATABLE READ RESERVING table1
    FOR SHARED READ;

    $ RMU/UNLOAD-
        /TRANSACTION_TYPE=(SHARED,ISOLATION=REPEAT,WAIT=36)-
        SAMPLE.RDB-
        TABLE1-
        TABLE.DAT

    Example 11

    The following example specifies the options that were the default
    transaction style in prior releases.

    $ RMU/UNLOAD-
        /TRANSACTION_TYPE=(READ_ONLY,ISOLATION_LEVEL=SERIALIZABLE)-
        SAMPLE.RDB-
        TABLE1-
        TABLE1.DAT

    Example 12

    If the database currently has snapshots deferred, it may be more
    efficient to start a read-write transaction with isolation level
    read committed. This allows the transaction to start immediately
    (a read-only transaction may stall), and the selected isolation
    level keeps row locking to a minimum.

    $ RMU/UNLOAD-
        /TRANSACTION_TYPE=(SHARED_READ,ISOLATION=READ_COMMITTED)-
        SAMPLE.RDB-
        TABLE1-
        TABLE1.DAT

    Using a transaction type of automatic adapts to different
    database settings.

    $ RMU/UNLOAD-
        /TRANSACTION_TYPE=(AUTOMATIC)-
        SAMPLE.RDB-
        TABLE1-
        TABLE1.DAT

    Example 13

    The following example shows the output from the flags STRATEGY
    and ITEM_LIST which indicates that the Optimize qualifier
    specified that sequential access be used, and also that Total_
    Time is used as the default optimizer preference.

    $ DEFINE RDMS$SET_FLAGS "STRATEGY,ITEM_LIST"
    $ RMU/UNLOAD/OPTIMIZE=SEQUENTIAL_ACCESS PERSONNEL EMPLOYEES E.DAT
       .
       .
       .
    ~H Request Information Item List: (len=11)
    0000 (00000) RDB$K_SET_REQ_OPT_PREF "0"
    0005 (00005) RDB$K_SET_REQ_OPT_SEQ "1"
    000A (00010) RDB$K_INFO_END
    Get     Retrieval sequentially of relation EMPLOYEES
    %RMU-I-DATRECUNL,   100 data records unloaded.

    Example 14

    AUTOMATIC columns are evaluated during INSERT and UPDATE
    operations for a table; for instance, they may record the
    timestamp for the last operation. If the table is being
    reorganized, it may be necessary to unload the data and reload it
    after the storage map and indexes for the table are re-created,
    yet the old auditing data must remain the same.

    Normally, the RMU Unload command does not unload columns marked
    as AUTOMATIC; you must use the Virtual_Fields qualifier with the
    keyword Automatic to request this action.

    $ rmu/unload/virtual_fields=(automatic) payroll_db people people.unl

    Following the restructure of the database, the data can be
    reloaded. If the target columns are also defined as AUTOMATIC,
    then the RMU Load process will not write to those columns. You
    must use the Virtual_Fields qualifier with the keyword Automatic
    to request this action.

    $ rmu/load/virtual_fields=(automatic) payroll_db people people.unl

    Example 15

    This example shows the action of the Delete_Rows qualifier.
    First, SQL is used to display the count of the rows in the table.
    The file PEOPLE.COLUMNS is verified (written to SYS$OUTPUT) by
    the RMU Unload command.

    $ define sql$database db$:scratch
    $ sql$ select count (*) from people;

             100
    1 row selected
    $ rmu/unload/fields="@people.columns" -
        sql$database -
        /record_definition=(file:people,format:delimited) -
        /delete_rows -
        people -
        people2.dat
      EMPLOYEE_ID
      LAST_NAME
      FIRST_NAME
      MIDDLE_INITIAL
      SEX
      BIRTHDAY
    %RMU-I-DATRECERA,   100 data records erased.
    %RMU-I-DATRECUNL,   100 data records unloaded.

    A subsequent query shows that the rows have been deleted.

    $ sql$ select count (*) from people;

               0
    1 row selected

    Example 16

    The following example shows the output from the RMU Unload
    command options for XML support. The two files shown in the
    example are created by this RMU Unload command:

    $ rmu/unload -
        /record_def=(format=xml,file=work_status) -
        mf_personnel -
        work_status -
        work_status.xml

    Output WORK_STATUS.DTD file

    <?xml version="1.0"?>
    <!-- RMU Unload for Oracle Rdb V7.1-00 -->
    <!-- Generated: 16-MAR-2001 22:26:47.30 -->

    <!ELEMENT WORK_STATUS (RMU_ROW*)>
    <!ELEMENT RMU_ROW (
     STATUS_CODE,
     STATUS_NAME,
     STATUS_TYPE
    )>
    <!ELEMENT STATUS_CODE (#PCDATA)>
    <!ELEMENT STATUS_NAME (#PCDATA)>
    <!ELEMENT STATUS_TYPE (#PCDATA)>
    <!ELEMENT NULL (EMPTY)>

    Output WORK_STATUS.XML file

    <?xml version="1.0"?>
    <!-- RMU Unload for Oracle Rdb V7.1-00 -->
    <!-- Generated: 16-MAR-2001 22:26:47.85 -->

    <!DOCTYPE WORK_STATUS SYSTEM "work_status.dtd">

    <WORK_STATUS>
     <RMU_ROW>
      <STATUS_CODE>0</STATUS_CODE>
      <STATUS_NAME>INACTIVE</STATUS_NAME>
     <STATUS_TYPE>RECORD EXPIRED</STATUS_TYPE>
     </RMU_ROW>
     <RMU_ROW>
      <STATUS_CODE>1</STATUS_CODE>
      <STATUS_NAME>ACTIVE  </STATUS_NAME>
      <STATUS_TYPE>FULL TIME     </STATUS_TYPE>
     </RMU_ROW>
     <RMU_ROW>
      <STATUS_CODE>2</STATUS_CODE>
      <STATUS_NAME>ACTIVE  </STATUS_NAME>
      <STATUS_TYPE>PART TIME     </STATUS_TYPE>
     </RMU_ROW>
    </WORK_STATUS>

    <!-- 3 rows unloaded -->

    Example 17

    The following example shows that if the Flush=On_Commit qualifier
    is specified, the value for the Commit_Every qualifier must be
    equal to or a multiple of the Row_Count value so the commits
    of unload transactions occur after the internal RMS buffers are
    flushed to the unload file. This prevents loss of data if an
    error occurs.

    $RMU/UNLOAD/ROW_COUNT=5/COMMIT_EVERY=2/FLUSH=ON_COMMIT MF_PERSONNEL -
    _$ EMPLOYEES EMPLOYEES
    %RMU-F-DELROWCOM, For DELETE_ROWS or FLUSH=ON_COMMIT the COMMIT_EVERY value must
     equal or be a multiple of the ROW_COUNT value.
    The COMMIT_EVERY value of 2 is not equal to or a multiple of the ROW_COUNT value
     of 5.
    %RMU-F-FTL_UNL, Fatal error for UNLOAD operation at 27-Oct-2005 08:55:14.06

    Example 18

    The following examples show that the unload file and record
    definition files are not deleted on error if the Noerror_Delete
    qualifier is specified and that these files are deleted on error
    if the Error_Delete qualifier is specified. If the unload file is
    empty when the error occurs, it will be deleted.

    $RMU/UNLOAD/NOERROR_DELETE/ROW_COUNT=50/COMMIT_EVERY=50 MF_PERSONNEL -
    _$ EMPLOYEES EMPLOYEES.UNL

    %RMU-E-OUTFILNOTDEL, Fatal error, the output file is not deleted but may not
    be useable,
    50 records have been unloaded.
    -COSI-F-WRITERR, write error
    -RMS-F-FUL, device full  (insufficient space for allocation)

    $RMU/UNLOAD/ERROR_DELETE/ROW_COUNT=50/COMMIT_EVERY=50 MF_PERSONNEL -
    _$ EMPLOYEES EMPLOYEES.UNL

    %RMU-E-OUTFILDEL, Fatal error, output file deleted
    -COSI-F-WRITERR, write error
    -RMS-F-FUL, device full (insufficient space for allocation)

    Example 19

    The following example shows the FORMAT=CONTROL option. This
    command creates the file EMP.CTL (the SQL*Loader control file)
    and the data file EMPLOYEES.DAT in a portable format that can be
    loaded by SQL*Loader.

    $ RMU/UNLOAD/RECORD_DEFINITION=(FORMAT=CONTROL,FILE=EMP) -
        SQL$DATABASE -
        EMPLOYEES -
        EMPLOYEES

    Example 20

    The following shows an example of using the COMPRESSION qualifier
    with the RMU Unload command.

    $ RMU/UNLOAD/COMPRESS=LZW/DEBUG=TRACE COMPLETE_WORKS COMPLETE_WORKS
    COMPLETE_WORKS
    Debug = TRACE
    Compression = LZW
    * Synonyms are not enabled
    Unloading Blob columns.
    Row_Count = 500
    Message buffer: Len: 54524
    Message buffer: Sze: 109, Cnt: 500, Use: 31 Flg: 00000000
    ** compress data: input 2700 output 981 deflate 64%
    ** compress TEXT_VERSION : input 4454499 output 1892097 deflate 58%
    ** compress PDF_VERSION : input 274975 output 317560 deflate -15%
    %RMU-I-DATRECUNL,   30 data records unloaded.

    Example 21

    The following shows an example of using the COMPRESSION qualifier
    with RMU Unload and using the EXCLUDE_LIST option to avoid
    attempting to compress data that does not compress well.

    $ RMU/UNLOAD/COMPRESS=(LZW,EXCLUDE_LIST:PDF_VERSION)/DEBUG=TRACE COMPLETE_WORKS
    COMPLETE_WORKS COMPLETE_WORKS
    Debug = TRACE
    Compression = LZW
    Exclude_List:
            Exclude column PDF_VERSION
    * Synonyms are not enabled
    Unloading Blob columns.
    Row_Count = 500
    Message buffer: Len: 54524
    Message buffer: Sze: 109, Cnt: 500, Use: 31 Flg: 00000000
    ** compress data: input 2700 output 981 deflate 64%
    ** compress TEXT_VERSION : input 4454499 output 1892097 deflate 58%
    %RMU-I-DATRECUNL,   30 data records unloaded.

2  –  After Journal

    Allows you to extract added, modified, and deleted record
    contents from committed transactions from specified tables in
    one or more after-image journal files.

2.1  –  Description

    The RMU Unload After_Journal command translates the binary data
    record contents of an after-image journal (.aij) file into an
    output file. Data records for the specified tables for committed
    transactions are extracted to an output stream (file, device,
    or application callback) in the order that the transactions were
    committed.

    Before you use the RMU Unload After_Journal command, you must
    enable the database for LogMiner extraction. Use the RMU Set
    Logminer command to enable the LogMiner for Rdb feature for the
    database. Before you use the RMU Unload After_Journal command
    with the Continuous qualifier, you must enable the database for
    Continuous LogMiner extraction. See the Set Logminer help topic
    for more information.

    Data records extracted from the .aij file are those records that
    transactions added, modified, or deleted in base database tables.
    Index nodes, database metadata, segmented strings (BLOB), views,
    COMPUTED BY columns, system relations, and temporary tables
    cannot be unloaded from after-image journal files.

    For each transaction, only the final content of a record
    is extracted. Multiple changes to a single record within a
    transaction are condensed so that only the last revision of the
    record appears in the output stream. You cannot determine which
    columns were changed in a data record directly from the after-
    image journal file. In order to determine which columns were
    changed, you must compare the record in the after-image journal
    file with a previous record.

    The database used to create the after-image journal files being
    extracted must be available during the RMU Unload After_Journal
    command execution. The database is used to obtain metadata
    information (such as table names, column counts, record version,
    and record compression) needed to extract data records from the
    .aij file. The database is read solely to load the metadata
    and is then detached. Database metadata information can also
    be saved and used in a later session. See the Save_MetaData and
    Restore_MetaData qualifiers for more information.

    If you use the Continuous qualifier, the database must be opened
    on the node where the Continuous LogMiner process is running. The
    database is always used and must be available for both metadata
    information and for access to the online after-image journal
    files. The Save_MetaData and Restore_MetaData qualifiers are not
    permitted with the Continuous qualifier.

    When one or more .aij files and the Continuous qualifier are
    both specified on the RMU Unload After_Journal command line,
    it is important that no .aij backup operations occur until the
    Continuous LogMiner process has transitioned to online mode
    (where the active online .aij files are being extracted). If you
    are using automatic .aij backups and wish to use the Continuous
    LogMiner feature, Oracle recommends that you consider disabling
    the automatic backup feature (ABS) and use manual .aij backups
    so that you can explicitly control when .aij backup operations
    occur.

    The after-image journal file or files are processed sequentially.
    All specified tables are extracted in one pass through the
    after-image journal file.

    As each transaction commit record is processed, all modified and
    deleted records for the specified tables are sorted to remove
    duplicates. The modified and deleted records are then written
    to the output streams. Transactions that were rolled back are
    ignored. Data records for tables that are not being extracted are
    ignored. The actual order of output records within a transaction
    is not predictable.

    In the extracted output, records that were modified or added are
    returned as being modified. It is not possible to distinguish
    between inserted and updated records in the output stream.
    Deleted (erased) records are returned as being deleted. A
    transaction that modifies and deletes a record generates only
    a deleted record. A transaction that adds a new record to
    the database and then deletes it within the same transaction
    generates only a deleted record.

    The LogMiner process signals that a row has been deleted by
    placing a D in the RDB$LM_ACTION field. The contents of the
    row at the instant before the delete operation are recorded
    in the user fields of the output record. If a row was modified
    several times within a transaction before being deleted, the
    output record contains only the delete indicator and the results
    of the last modify operation. If a row is inserted and deleted
    in the same transaction, only the delete record appears in the
    output.

    Records from multiple tables can be output to the same or to
    different destination streams. Possible output destination
    streams include the following:

    o  File

    o  OpenVMS Mailbox

    o  OpenVMS Pipe

    o  Direct callback to an application through a run-time activated
       shareable image

    Refer to the Using_LogMiner_for_Rdb help topic for more
    information about using the LogMiner for Rdb feature.

2.2  –  Format

  (B)0RMU/Unload/After_Journal root-file-spec aij-file-name

  Command Qualifiers                  x Defaults
                                      x
  /Before=date-time                   x None
  /Continuous                         x /NoContinuous
  /Extend_Size=integer                x /Extend_Size=1000
  /Format=options                     x See description
  /Ignore=Old_Version[=table-list]    x /Ignore=Old_Version=all
  /Include=Action=(include-type)      x Include=Action=
                                      x   (NoCommit,Modify,Delete)
  /IO_Buffers=integer                 x /IO_Buffers=2
  /[No]Log                            x Current DCL verify value
  /Options=options-list               x See description
  /Order_AIJ_files                    x /NoOrder_aij_files
  /Output=file-spec                   x /Output=SYS$OUTPUT
  /Parameter=character-strings        x None
  /Quick_Sort_Limit=integer           x /Quick_Sort_Limit=5000
  /Restart=(restart-point)            x None
  /Restore_Metadata=file-spec         x None

  (B)0/Save_Metadata=file-spec            x None
  /Select=selection-type              x /Select=Commit_Transaction
  /Since=date-time                    x None
  /Sort_Workfiles=integer             x /Sort_Workfiles=2
  /Statistics_Interval=integer        x See description
  /[No]Symbols                        x /Symbols
  /Table=(Name=table-name,            x See description
          [table-options ...])        x None
  /[No]Trace                          x /Notrace

2.3  –  Parameters

2.3.1  –  root-file-spec

    The root file specification of the database for the after-image
    journal file from which tables will be unloaded. The default file
    extension is .rdb.

    The database must be the same actual database that was used to
    create the after-image journal files. The database is required
    so that the table metadata (information about data) is available
    to the RMU Unload After_Journal command. In particular, the names
    and relation identification of valid tables within the database
    are required along with the number of columns in the table and
    the compression information for the table in various storage
    areas.

    The RMU Unload After_Journal process attaches to the database
    briefly at the beginning of the extraction operation in order to
    read the metadata. Once the metadata has been read, the process
    disconnects from the database for the remainder of the operation
    unless the Continuous qualifier is specified. The Continuous
    qualifier indicates that the extraction operation is to run non-
    stop, and the process remains attached to the database.

2.3.2  –  aij-file-name

    One or more input after-image journal backup files to be used
    as the source of the extraction operation. Multiple journal
    files can be extracted by specifying a comma-separated list
    of file specifications. Oracle RMU supports OpenVMS wildcard
    specifications (using the * and % characters) to extract a
    group of files. A file specification beginning with the at
    (@) character refers to an options file containing a list of
    after-image journal files (rather than the file specification
    of an after-image journal itself). If you use the at character
    syntax, you must enclose the at character and the file name in
    double quotation marks (for example, specify aij-file-name as
    "@files.opt"). The default file extension is .aij.

2.4  –  Command Qualifiers

2.4.1  –  Before

    Before=date-time

    Specifies the ending time and date for transactions to be
    extracted. Based on the Select qualifier, transactions that
    committed or started prior to the specified Before date are
    selected. Information changed due to transactions that committed
    or started after the Before date is not included in the output.
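
    For example, the following command (using the illustrative
    database and journal file names from the Examples section of
    this help topic) extracts only those EMPLOYEES changes made by
    transactions that committed before noon on 1-JUN-2002:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /BEFORE="1-JUN-2002 12:00:00.00" -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)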

2.4.2  –  Continuous

    Continuous
    Nocontinuous

    Causes the LogMiner process to attach to the database and begin
    extracting records in "near-real" time. When the Continuous
    qualifier is specified, the RMU Unload After_Journal command
    extracts records from the online after-image journal files of the
    database until it is stopped via an external source (for example,
    Ctrl/y, STOP/ID, $FORCEX, or database shutdown).

    A database must be explicitly enabled for the Continuous LogMiner
    feature. To enable the Continuous LogMiner feature, use the RMU
    Set Logminer command with the Enable and Continuous qualifiers;
    to disable use of the Continuous LogMiner feature, use the RMU
    Set Logminer command with the Enable and Nocontinuous qualifiers.

    The output from the Continuous LogMiner process is a continuous
    stream of information. The intended use of the Continuous
    LogMiner feature is to write the changes into an OpenVMS
    mailbox or pipe, or to call a user-supplied callback routine.
    Writing output to a disk file is completely functional with the
    Continuous LogMiner feature, however, no built-in functionality
    exists to prevent the files from growing indefinitely.

    It is important that the callback routine or processing of
    the mailbox be very responsive. If the user-supplied callback
    routine blocks, or if the mailbox is not being read fast enough
    and fills, the RMU Unload After_Journal command will stall. The
    Continuous LogMiner process prevents backing up the after-image
    journal that it is currently extracting along with all subsequent
    journals. If the Continuous LogMiner process is blocked from
    executing for long enough, it is possible that all available
    journals will fill and will not be backed up.

    When a database is enabled for the Continuous LogMiner feature,
    an AIJ "High Water" lock (AIJHWM) is utilized to help coordinate
    and maintain the current .aij end-of-file location. The lock
    value block for the AIJHWM lock contains the location of the
    highest written .aij block. The RMU Unload After_Journal command
    with the Continuous qualifier polls the AIJHWM lock to determine
    if data has been written to the .aij file and to find the highest
    written block. If a database is not enabled for the Continuous
    LogMiner feature, there is no change in locking behavior; the
    AIJHWM lock is not maintained and thus the Continuous qualifier
    of the RMU Unload After_Journal command is not allowed.

    In order to maintain the .aij end-of-file location lock,
    processes that write to the after-image journal file must use
    the lock to serialize writing to the journal. When the Continuous
    LogMiner feature is not enabled, processes instead coordinate
    allocating space in the after-image journal file and can write
    to the file without holding a lock. The Continuous LogMiner
    process requires that the AIJHWM lock be held during the .aij
    I/O operation. In some cases, this can reduce overall throughput
    to the .aij file as it serves to reduce multiple over-lapped I/O
    write operations by multiple processes.

    The Save_Metadata and Restore_Metadata qualifiers are
    incompatible with the Continuous qualifier.
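
    As a minimal sketch, assuming a database MFP.RDB that has
    already been enabled for the Continuous LogMiner feature, the
    following command extracts EMPLOYEES changes in near-real time
    and writes them to a mailbox that a separate application reads:

    $ RMU/UNLOAD/AFTER_JOURNAL/CONTINUOUS MFP.RDB -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=MBA127:EMPLOYEES)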

2.4.3  –  Extend Size

    Extend_size=integer

    Specifies the file allocation and extension quantity for output
    data files. The default extension size is 1000 blocks. Using a
    larger value can help reduce output file fragmentation and can
    improve performance when large amounts of data are extracted.

2.4.4  –  Format

    Format=options

    If the Format qualifier is not specified, Oracle RMU outputs data
    to a fixed-length binary flat file.

    The format options are:

    o  Format=Binary

       If you specify the Format=Binary option, Oracle RMU does not
       perform any data conversion; data is output in a flat file
       format with all data in the original binary state.

       Output Fields describes the output fields and data types of an
       output record in Binary format.

    Table 19 Output Fields

    Field Name    Data Type      Bytes Description

    ACTION        CHAR (1)       1     Indicates record state.
                                       "M" indicates an insert or
                                       modify action. "D" indicates a
                                       delete action. "E" indicates
                                       stream end-of-file (EOF)
                                       when a callback routine is
                                       being used. "P" indicates
                                       a value from the command
                                       line Parameter qualifier
                                       when a callback routine is
                                       being used (see Parameter
                                       qualifier). "C" indicates
                                       transaction commit information
                                       when the Include=Action=Commit
                                       qualifier is specified.
    RELATION_     CHAR (31)      31    Table name. Space padded to 31
    NAME                               characters.
    RECORD_TYPE   INTEGER        4     The Oracle Rdb internal
                  (Longword)           relation identifier.
    DATA_LEN      SMALLINT       2     Length, in bytes, of the data
                  (Word)               record content.
    NBV_LEN       SMALLINT       2     Length, in bits, of the null
                  (Word)               bit vector content.
    DBK           BIGINT         8     Record's logical database key.
                  (Quadword)           The database key is a 3-field
                                       structure containing a 16-
                                       bit line number, a 32-bit
                                       page number and a 16-bit area
                                       number.
    START_TAD     DATE VMS       8     Date/time of the start of the
                  (Quadword)           transaction.
    COMMIT_TAD    DATE VMS       8     Date/time of the commitment of
                  (Quadword)           the transaction.
    TSN           BIGINT         8     Transaction sequence number of
                  (Quadword)           the transaction that performed
                                       the record operation.
    RECORD_       SMALLINT       2     Record version.
    VERSION       (Word)
    Record Data   Varies               Actual data record field
                                       contents.
    Record NBV    BIT VECTOR           Null bit vector. There is
                  (array of            one bit for each field in the
                  bits)                data record. If a bit value
                                       is 1, the corresponding field
                                       is NULL; if a bit value is
                                       0, the corresponding field
                                       is not NULL and contains an
                                       actual data value. The null
                                       bit vector begins on a byte
                                       boundary. Any extra bits in
                                       the final byte of the vector
                                       after the final null bit are
                                       unused.

    o  Format=Dump

       If you specify the Format=Dump option, Oracle RMU produces an
       output format suitable for viewing. Each line of Dump format
       output contains the column name (including LogMiner prefix
       columns) and up to 200 bytes of the column data. Unprintable
       characters are replaced with periods (.), and numbers and
       dates are converted to text. NULL columns are indicated
       with the string "NULL". This format is intended to assist
       in debugging; the actual output contents and formatting will
       change in the future.

    o  Format=Text

       If you specify the Format=Text option, Oracle RMU converts
       all data to printable text in fixed-length columns before
       unloading it. VARCHAR(n) strings are padded with blanks when
       the specified string has fewer characters than n so that the
       resulting string is n characters long.

    o  Format=(Delimited_Text [,delimiter-options])

       If you specify the Format=Delimited_Text option, Oracle RMU
       applies delimiters to all data before unloading it.

       DATE VMS dates are output in the collatable time format, which
       is yyyymmddhhmmsscc. For example, March 20, 1993 is output as:
       1993032000000000.

       Delimiter options are:

       -  Prefix=string

          Specifies a prefix string that begins any column value in
          the ASCII output file. If you omit this option, the column
          prefix is a quotation mark (").

       -  Separator=string

          Specifies a string that separates column values of a row.
          If you omit this option, the column separator is a single
          comma (,).

       -  Suffix=string

          Specifies a suffix string that ends any column value in
          the ASCII output file. If you omit this option, the column
          suffix is a quotation mark (").

       -  Terminator=string

          Specifies the row terminator that completes all the column
          values corresponding to a row. If you omit this option, the
          row terminator is the end of the line.

       -  Null=string

          Specifies a string that is written to the output file to
          represent a NULL value in the database column.

          The Null option can be specified on the command line as any
          one of the following:

          *  A quoted string

          *  An empty set of double quotes ("")

          *  No string

          The string that represents the null character must be
          quoted on the Oracle RMU command line. You cannot specify a
          blank space or spaces as the null character. You cannot use
          the same character for the Null value and other Delimited_
          Text options.

                                      NOTE

          The values for each of the strings specified in the
          delimiter options must be enclosed within quotation
          marks. Oracle RMU strips these quotation marks while
          interpreting the values. If you want to specify a
          quotation mark (") as a delimiter, specify a string
          of four quotation marks. Oracle RMU interprets four
          quotation marks as your request to use one quotation
          mark as a delimiter. For example, Suffix = """".

          Oracle RMU reads these quotation marks as follows:

          o  The first quotation mark is stripped from the string.

          o  The second and third quotation mark are interpreted
             as your request for one quotation mark (") as a
             delimiter.

          o  The fourth quotation mark is stripped.

          This results in one quotation mark being used as a
          delimiter.

          Furthermore, if you want to specify a quotation mark as
          part of the delimited string, you must use two quotation
          marks for each quotation mark that you want to appear in
          the string. For example, Suffix = "**""**" causes Oracle
          RMU to use a delimiter of **"**.
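
    The following sketch (the file names are hypothetical) shows the
    Delimited_Text format with a vertical bar as the column
    separator and an asterisk representing NULL column values:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /FORMAT=(DELIMITED_TEXT, SEPARATOR="|", NULL="*") -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.TXT)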

2.4.5  –  Ignore

    Ignore=Old_Version[=table-list]

    Specifies optional conditions or items to ignore.

    The RMU Unload After_Journal command treats non-current record
    versions in the AIJ file as a fatal error condition. That is,
    attempting to extract a record that has a record version not the
    same as the table's current maximum version results in a fatal
    error.

    There are, however, some very rare cases where a verb rollback
    of a modification of a record may result in an old version of a
    record being written to the after-image journal even though the
    transaction did not actually complete a successful modification
    to the record. The RMU Unload After_Journal command detects the
    old record version and aborts with a fatal error in this unlikely
    case.

    When the Ignore=Old_Version qualifier is present, the RMU Unload
    After_Journal command displays a warning message for each
    record that has a non-current record version and the record
    is not written to the output stream. The Old_Version qualifier
    accepts an optional list of table names to indicate that only the
    specified tables are permitted to have non-current record version
    errors ignored.
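
    For example, the following command (with hypothetical file
    names) reports non-current record versions in the EMPLOYEES
    table as warnings and skips those records instead of terminating
    with a fatal error:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /IGNORE=OLD_VERSION=(EMPLOYEES) -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)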

2.4.6  –  Include

    Include=Action=include-type

    Specifies if deleted or modified records or transaction commit
    information is to be extracted from the after-image journal. The
    following keywords can be specified:

    o  Commit
       NoCommit

       If you specify Commit, a transaction commit record is
       written to each output stream as the final record for each
       transaction. The commit information record is written to
       output streams after all other records for the transaction
       have been written. The default is NoCommit.

        Because output streams are created with a default file name
        of the table being extracted, it is important to specify a
        unique file name on each occurrence of the output stream. In
        particular, when you write to a non-file-oriented output
        device (such as a pipe or mailbox), be certain to specify an
        explicit file name that is the same on each occurrence of
        that output destination. That is, rather than specifying
        Output=MBA1234: for each output stream, use
        Output=MBA1234:MBX, or any file name that is the same on all
        occurrences of MBA1234:.

       Failure to use a specific file name can result in additional,
       and unexpected, commit records being returned. However, this
       is generally a restriction only when using a stream-oriented
       output device (as opposed to a disk file).

       The binary record format is based on the standard LogMiner
       output format. However, some fields are not used in the commit
       action record. The binary format and contents of this record
       are shown in Commit Record Contents. This record type is
       written for all output data formats.

    Table 20 Commit Record Contents

                 Length (in
    Field        bytes)       Contents

    ACTION       1            "C"
    RELATION     31           Zero
    RECORD_TYPE  4            Zero
    DATA_LEN     2            Length of RM_TID_LEN, AERCP_LEN, RM_
                              TID, AERCP
    NBV_LEN      2            Zero
    TID          4            Transaction (Attach) ID
    PID          4            Process ID
    START_TAD    8            Transaction Start Time/Date
    COMMIT_TAD   8            Transaction Commit Time/Date
    TSN          8            Transaction ID
    RM_TID_LEN   4            Length of the Global TID
    AERCP_LEN    4            Length of the AERCP information
    RM_TID       RM_TID_LEN   Global TID
    AERCP        AERCP_LEN    Restart Control Information
    RDB$LM_      12           USERNAME
    USERNAME

       When the original transaction took part in a distributed,
       two-phase transaction, the RM_TID component is the Global
       transaction manager (XA or DDTM) unique transaction ID.
       Otherwise, this field contains binary zeroes.

       The AIJ Extract Recovery Control Point (AERCP) information is
       used to uniquely identify this transaction within the scope
       of the database and after-image journal files. It contains
       the .aij sequence number, VBN and TSN of the last "Micro Quiet
       Point", and is used by the Continuous LogMiner process to
        restart at a particular point in the journal sequence.

    o  Delete
       NoDelete

       If you specify Delete, pre-deletion record contents are
        extracted from the .aij file. If you specify NoDelete, no
       pre-deletion record contents are extracted. The default is
       Delete.

    o  Modify
       NoModify

       If you specify Modify, modified or added record contents are
       extracted from the .aij file. If you specify NoModify, then no
       modified or added record contents are extracted. The default
       is Modify.
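
    As an illustration (with hypothetical file names), the following
    command writes a commit information record after each
    transaction and suppresses pre-deletion record contents, so the
    output stream contains only modified or added rows followed by
    the commit records:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /INCLUDE=ACTION=(COMMIT,NODELETE) -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)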

2.4.7  –  IO Buffers

    IO_Buffers=integer

    Specifies the number of I/O buffers used for output data files.
    The default number of buffers is two, which is generally
    adequate. With sufficiently fast I/O subsystem hardware,
    additional buffers may improve performance. However, using a
    larger number of buffers will also consume additional virtual
    memory and process working set.

2.4.8  –  Log

    Log
    Nolog

    Specifies that the extraction of the .aij file is be reported
    to SYS$OUTPUT or the destination specified with the Output
    qualifier. When activity is logged, the output from the Log
    qualifier provides the number of transactions committed or rolled
    back. The default is the setting of the DCL VERIFY flag, which is
    controlled by the DCL SET VERIFY command.

2.4.9  –  Options

    Options=options-list

    The following options can be specified:

    o  File=file-spec

       An options file contains a list of tables and output
       destinations. The options file can be used instead of, or
       along with, the Table qualifier to specify the tables to be
       extracted. Each line of the options file must specify a table
       name prefixed with "Table=". After the table name, the output
       destination is specified as either "Output=", or "Callback_
       Module=" and "Callback_Routine=", for example:

       TABLE=tblname,OUTPUT=outfile
       TABLE=tblname,CALLBACK_MODULE=image,CALLBACK_ROUTINE=routine

       You can use the Record_Definition=file-spec option from the
       Table qualifier to create a record definition file for the
       output data. The default file type is .rrd; the default file
       name is the name of the table.

       You can use the Table_Definition=file-spec option from
       the Table qualifier to create a file that contains an SQL
       statement that creates a table to hold transaction data. The
       default file type is .sql; the default file name is the name
       of the table.

       Each option in the Options=File qualifier must be fully
       specified (no abbreviations are allowed) and followed with
       an equal sign (=)  and a value string. The value string must
       be followed by a comma or the end of a line. Continuation
       lines can be specified by using a trailing dash. Comments are
       indicated by using the exclamation point (!)  character.

       You can use the asterisk (*)  and the percent sign (%)
       wildcard characters in the table name specification to select
       all tables that satisfy the components you specify. The
       asterisk matches zero or more characters; the percent sign
       matches a single character.

       For table name specifications that contain wild card
       characters, if the first character of the string is a pound
       sign (#),  the wildcard specification is changed to a "not
       matching" comparison. This allows exclusion of tables based
       on a wildcard specification. The pound sign designation is
       only evaluated when the table name specification contains an
       asterisk or percent sign.

        For example, a table name specification of "#FOO%" indicates
        that all table names that are four characters long and do not
        start with the string "FOO" are to be selected. (A sample
        options file is shown following this list of options.)

    o  Shared_Read

       Specifies that the input after-image journal backup files are
       to be opened with an RMS shared locking specification.

    o  Dump

       Specifies that the contents of an input metadata file are to
       be formatted and displayed. Typically, this information is
       used as a debugging tool.
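
    The following sketch (the table, file, and output names are
    hypothetical) shows a small options file that uses the comment,
    continuation, and wildcard conventions described above, followed
    by the command that uses it. The wildcard entry writes all
    tables whose names begin with SALES_ to a single output file.

    ! LOGMINER.OPT - tables to extract
    TABLE=EMPLOYEES, -
        OUTPUT=EMPLOYEES.DAT
    TABLE=SALES_*, OUTPUT=SALES_CHANGES.DAT

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /OPTIONS=FILE=LOGMINER.OPT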

2.4.10  –  Order AIJ Files

    Order_AIJ_Files
    NoOrder_AIJ_Files

    By default, after-image journal files are processed in the order
    that they are presented to the RMU Unload After_Journal command.
    The Order_AIJ_Files qualifier specifies that the input after-
    image journal files are to be processed in increasing order by
    sequence number. This can be of benefit when you use wildcard (*
    or %) processing of a number of input files. The .aij files are
    each opened, the first block is read (to determine the sequence
    number), and the files are closed prior to the sorting operation.
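
    For example, the following command (with hypothetical file
    names) processes all matching journal backup files in ascending
    sequence number order rather than in the order returned by the
    file system:

    $ RMU/UNLOAD/AFTER_JOURNAL/ORDER_AIJ_FILES MFS.RDB -
        MFS.AIJBCK_* -
        /TABLE=(NAME=SALES, OUTPUT=SALES.DAT)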

2.4.11  –  Output

    Output=file-spec

    Redirects the log and trace output (selected with the Log and
    Trace qualifiers) to the named file. If this qualifier is not
    specified, the output generated by the Log and Trace qualifiers,
    which can be voluminous, is displayed to SYS$OUTPUT.

2.4.12  –  Parameter

    Parameter=character-strings

    Specifies one or more character strings that are concatenated
    together and passed to the callback routine upon startup.

    For each table that is associated with a user-supplied callback
    routine, the callback routine is called with two parameters: the
    length of the Parameter record and a pointer to the Parameter
    record. The binary format and contents of this record are shown
    in Parameter Record Contents.

    Table 21 Parameter Record Contents

                 Length (in
    Field        bytes)       Contents

    ACTION       1            "P"
    RELATION     31           Relation name
    RECORD_TYPE  4            Zero
    DATA_LEN     2            Length of parameter string
    NBV_LEN      2            Zero
    LDBK         8            Zero
    START_TAD    8            Zero
    COMMIT_TAD   8            Zero
    TSN          8            Zero
    DATA         ?            Variable length parameter string
                              content
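
    As a sketch, the following command passes a parameter string to
    a user-supplied callback routine; the shareable image name,
    routine name, and parameter text are hypothetical:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /PARAMETER=("TARGET=","ARCHIVE_DB") -
        /TABLE=(NAME=EMPLOYEES, -
                CALLBACK_MODULE=MY_CALLBACK_IMAGE, -
                CALLBACK_ROUTINE=EMPLOYEES_CALLBACK)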

2.4.13  –  Quick Sort Limit

    Quick_Sort_Limit=integer

    Specifies the maximum number of records that will be sorted with
    the in-memory "quick sort" algorithm.

    The default value is 5000 records. The minimum value that can be
    specified is 10 and the maximum value is 100,000.

    Larger values specified for the Quick_Sort_Limit qualifier may
    reduce sort work file I/O at the expense of additional CPU time
    and memory consumption. A value that is too small may result in
    additional disk file I/O. In general, the default value should
    be accepted.

2.4.14  –  Restart

    Restart=restart-point

    Specifies an AIJ Extract Restart Control Point (AERCP) that
    indicates the location to begin the extraction. The AERCP
    indicates the transaction sequence number (TSN) of the last
    extracted transaction along with a location in the .aij file
    where a known "Micro-quiet point" exists.

    When the Restart qualifier is not specified and no input after-
    image journal files are specified on the command line, the
    Continuous LogMiner process starts extracting at the beginning
    of the earliest modified online after-image journal file.

    When formatted for text display, the AERCP structure consists of
    the six fields (the MBZ field is excluded) displayed as unsigned
    integers separated by dashes; for example, "1-28-12-7-3202-3202".
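
    For example, a Continuous LogMiner session might be restarted
    from the AERCP value reported by a previous session; the restart
    value and file names shown here are illustrative only:

    $ RMU/UNLOAD/AFTER_JOURNAL/CONTINUOUS MFP.RDB -
        /RESTART="1-28-12-7-3202-3202" -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=MBA127:EMPLOYEES)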

2.4.15  –  Restore Metadata

    Restore_Metadata=file-spec

    Specifies that the RMU Unload After_Journal command is to read
    database metadata information from the specified file. The
    Database parameter is required but the database itself is not
    accessed when the Restore_Metadata qualifier is specified. The
    default file type is .metadata. The Continuous qualifier is not
    allowed when the Restore_Metadata qualifier is present.

    Because the database is not available when the Restore_Metadata
    qualifier is specified, certain database-specific actions cannot
    be taken. For example, checks for after-image journaling are
    disabled. Because the static copy of the metadata information is
    not updated as database structure and table changes are made, it
    is important to make sure that the metadata file is saved after
    database DDL operations.

    When the Restore_Metadata qualifier is specified, additional
    checks are made to ensure that the after-image journal files
    were created using the same database that was used to create the
    metadata file. These checks provide additional security and help
    prevent accidental mismatching of files.

2.4.16  –  Save Metadata

    Save_Metadata=file-spec

    Specifies that the RMU Unload After_Journal command is to
    write metadata information to the named file. The Continuous,
    Restore_Metadata, Table, and Options=file qualifiers and the
    aij-file-name parameter are not allowed when the Save_Metadata
    qualifier is present. The default file type is .metadata.
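
    A typical sequence (with hypothetical file names) saves the
    metadata while the database is available and later uses the
    saved file to extract a backed up journal without accessing the
    database:

    $ RMU/UNLOAD/AFTER_JOURNAL/SAVE_METADATA=MFP_TABLES MFP.RDB
    $ RMU/UNLOAD/AFTER_JOURNAL/RESTORE_METADATA=MFP_TABLES -
        MFP.RDB MFP.AIJBCK -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)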

2.4.17  –  Select

    Select=selection-type

    Specifies if the date and time of the Before and Since qualifiers
    refer to transaction start time or transaction commit time.

    The following options can be specified as the selection-type of
    the Select qualifier:

    o  Commit_Transaction

       Specifies that the Before and Since qualifiers select
       transactions based on the time of the transaction commit.

    o  Start_Transaction

       Specifies that the Before and Since qualifiers select
       transactions based on the time of the transaction start.

    The default for date selection is Commit_Transaction.

2.4.18  –  Since

    Since=date-time

    Specifies the starting time for transactions to be extracted.
    Depending on the value specified in the Select qualifier,
    transactions that committed or started on or after the specified
    Since date are selected. Information from transactions that
    committed or started prior to the specified Since date is not
    included in the output.
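
    For example, the following command (with hypothetical dates and
    file names) selects transactions by their start time rather than
    their commit time:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /SELECT=START_TRANSACTION -
        /SINCE="1-JUN-2002 00:00:00.00" -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)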

2.4.19  –  Sort Workfiles

    Sort_Workfiles=integer

    Specifies the number of sort work files. The default number
    of sort work files is two. When large transactions are being
    extracted, using additional sort work files may improve
    performance by distributing I/O loads over multiple disk devices.
    Use the SORTWORKn (where n is a number from 0 to 9) logical names
    to specify the location of the sort work files.
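
    For example, the following sketch (the device and directory
    names are hypothetical) distributes four sort work files over
    two disks:

    $ DEFINE SORTWORK0 DISK1:[SCRATCH]
    $ DEFINE SORTWORK1 DISK2:[SCRATCH]
    $ DEFINE SORTWORK2 DISK1:[SCRATCH]
    $ DEFINE SORTWORK3 DISK2:[SCRATCH]
    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /SORT_WORKFILES=4 -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)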

2.4.20  –  Statistics Interval

    Statistics_Interval=integer

    Specifies that statistics are to be displayed at regular
    intervals so that you can evaluate the progress of the unload
    operation.

    The displayed statistics include:

    o  Elapsed time

    o  CPU time

    o  Buffered I/O

    o  Direct I/O

    o  Page faults

    o  Number of records unloaded for a table

    o  Total number of records extracted for all tables

    If the Statistics_Interval qualifier is specified, the default
    interval is 60 seconds. The minimum value is one second. If the
    unload operation completes successfully before the first time
    interval has passed, you will receive an informational message
    on the number of records unloaded. If the unload operation is
    unsuccessful before the first time interval has passed, you will
    receive error messages and statistics on the number of records
    unloaded.

    At any time during the unload operation, you can press Ctrl/T to
    display the current statistics.
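
    For example, the following command (with hypothetical file
    names) displays progress statistics every 30 seconds and directs
    the log, trace, and statistics output to a file:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /LOG/STATISTICS_INTERVAL=30 -
        /OUTPUT=UNLOAD_EMPLOYEES.LOG -
        /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)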

2.4.21  –  Symbols

    Symbols
    Nosymbols

    Specifies whether DCL symbols are to be created, indicating
    information about records extracted for each table.

    If a large enough number of tables is being unloaded, too many
    associated symbols are created, and the CLI symbol table space
    can become exhausted. The error message "LIB-F-INSCLIMEM,
    insufficient CLI memory" is returned in this case. Specify the
    Nosymbols qualifier to prevent creation of the symbols.

    The default is Symbols, which causes the symbols to be created.

2.4.22  –  Table

    Table=(Name=table-name, table-options)

    Specifies the name of a table to be unloaded and an output
    destination. The table-name must be a table within the database.
    Views, indexes, and system relations may not be unloaded from the
    after-image journal file.

    The asterisk (*)  and the percent sign (%) wildcard characters
    can be used in the table name specification to select all tables
    that satisfy the components you specify. The asterisk matches
    zero or more characters and the percent sign matches a single
    character.

    For table name specifications that contain wild card characters,
    if the first character of the string is a pound sign (#),
    the wildcard specification is changed to a "not matching"
    comparison. This allows exclusion of tables based on a wildcard
    specification. The pound sign designation is only evaluated when
    the table name specification contains an asterisk or percent
    sign.

    For example, a table name specification of "#FOO%" indicates that
    all table names that are four characters long and do not start
    with the string "FOO" are to be selected.

    The following table-options can be specified with the Table
    qualifier:

    o  Callback_Module=image-name, Callback_Routine=routine-name

       The LogMiner process uses the OpenVMS library routine
       LIB$FIND_IMAGE_SYMBOL to activate the specified shareable
       image and locate the specified entry point routine name. This
       routine is called with each extracted record. A final call is
       made with the Action field set to "E" to indicate the end of
       the output stream. These options must be specified together.

    o  Control

       Use the Control table option to produce output files that
       can be used by SQL*Loader to load the extracted data into an
       Oracle database. This option must be used in conjunction with
       fixed text format for the data file. The Control table option
       can be specified on either the command line or in an options
       file.

    o  Output=file-spec

       If an Output file specification is present, unloaded records
       are written to the specified location.

    o  Record_Definition=file-spec

       The Record_Definition=file-spec option can be used to create a
       record definition file for the output data. The default file
       type is .rrd; the default file name is the name of the table.

    o  Table_Definition=file-spec

       You can use the Table_Definition=file-spec option to create
       a file that contains an SQL statement that creates a table
       to hold transaction data. The default file type is .sql; the
       default file name is the name of the table.

    Unlike other qualifiers where only the final occurrence of the
    qualifier is used by an application, the Table qualifier can
    be specified multiple times for the RMU Unload After_Journal
    command. Each occurrence of the Table qualifier must specify a
    different table.
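
    The following sketch (the image, routine, and file names are
    hypothetical) extracts two tables in a single pass: EMPLOYEES
    rows are delivered to an application callback routine, and SALES
    rows are written to a data file along with a record definition
    (.rrd) file describing the output:

    $ RMU/UNLOAD/AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /TABLE=(NAME=EMPLOYEES, -
                CALLBACK_MODULE=MY_CALLBACK_IMAGE, -
                CALLBACK_ROUTINE=EMPLOYEES_CALLBACK) -
        /TABLE=(NAME=SALES, OUTPUT=SALES.DAT, -
                RECORD_DEFINITION=SALES)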

2.4.23  –  Trace

    Trace
    Notrace

    Specifies that the unloading of the .aij file be traced. The
    default is Notrace. When the unload operation is traced, the
    output from the Trace qualifier identifies transactions in the
    .aij file by TSNs and describes what Oracle RMU did with each
    transaction during the unload process. You can specify the Log
    qualifier with the Trace qualifier.

2.5  –  Usage Notes

    o  To use the RMU Unload After_Journal command for a database,
       you must have the RMU$DUMP privilege in the root file access
       control list (ACL) for the database or the OpenVMS SYSPRV or
       BYPASS privilege.

    o  Oracle Rdb after-image journaling protects the integrity
       of your data by recording all changes made by committed
       transactions to a database in a sequential log or journal
       file. Oracle Corporation recommends that you enable after-
       image journaling to record your database transaction activity
       between full backup operations as part of your database
       restore and recovery strategy. In addition to LogMiner for
       Rdb, the after-image journal file is used to enable several
       database performance enhancements such as the fast commit, row
       cache, and hot standby features.

    o  When the Continuous qualifier is not specified, you can only
       extract changed records from a backup copy of the after-image
       journal files. You create these backup files using the RMU
       Backup After_Journal command.

       You cannot extract from an .aij file that has been optimized
       with the RMU Optimize After_Journal command.

    o  As part of the extraction process, Oracle RMU sorts extracted
       journal records to remove duplicate record updates. Because
       .aij file extraction uses the OpenVMS Sort/Merge Utility
       (SORT/MERGE) to sort journal records for large transactions,
       you can improve the efficiency of the sort operation by
       changing the number and location of the work files used by
       SORT/MERGE. The number of work files is controlled by the
       Sort_Workfiles qualifier of the RMU Unload After_Journal
       command. The allowed values are 1 through 10 inclusive, with
       a default value of 2. The location of these work files can be
       specified with device specifications, using the SORTWORKn
       logical name (where n is a number from 0 to 9). See the
       OpenVMS documentation set for more information on using
       SORT/MERGE. See the Oracle Rdb7 Guide to Database Performance
       and Tuning for more information on using these Oracle Rdb
       logical names.

    o  When extracting large transactions, the RMU Unload After_
       Journal command may create temporary work files. You can
       redirect the .aij rollforward temporary work files to a
       different disk and directory location than the current default
       directory by assigning a different directory to the RDM$BIND_
       AIJ_WORK_FILE logical name in the LNM$FILE_DEV name table.
       This can help to alleviate I/O bottlenecks that might occur on
       the default disk.

    o  You can specify a search list by defining logicals
       RDM$BIND_AIJ_WORK_FILEn, with each logical pointing to
       a different device or directory. The numbers must start
       with 1 and increase sequentially without any gaps. When an
       AIJ file cannot be created due to a "device full" error,
       Oracle Rdb looks for the next device in the search list
       by translating the next sequential work file logical. If
       RDM$BIND_AIJ_WORK_FILE is defined, it is used first.

    o  The RMU Unload After_Journal command can read either a backed
       up .aij file on disk or a backed up .aij file on tape that is
       in the Old_File format.

    o  You can select one or more tables to be extracted from an
       after-image journal file. All tables specified by the Table
       qualifier and all those specified in the Options file are
       combined to produce a single list of output streams. A
       particular table can be specified only once. Multiple tables
       can be written to the same output destination by specifying
       the exact same output stream specification (that is, by using
       an identical file specification).

    o  At the completion of the unload operation, RMU creates a
       number of DCL symbols that contain information about the
       extraction statistics. For each table extracted, RMU creates
       the following symbols:

       -  RMU$UNLOAD_DELETE_COUNT_tablename

       -  RMU$UNLOAD_MODIFY_COUNT_tablename

       -  RMU$UNLOAD_OUTPUT_tablename

       The tablename component of the symbol is the name of the
       table. When multiple tables are extracted in one operation,
       multiple sets of symbols are created. The value for the
       symbols RMU$UNLOAD_MODIFY_COUNT_tablename and RMU$UNLOAD_
       DELETE_COUNT_tablename is a character string containing
       the number of records returned for modified and deleted
       rows. The RMU$UNLOAD_OUTPUT_tablename symbol is a character
       string indicating the full file specification for the output
       destination, or the shareable image name and routine name when
       the output destination is an application callback routine.

    o  When you use the Callback_Module and Callback_Routine option,
       you must supply a shareable image with a universal symbol or
       entry point for the LogMiner process to be able to call your
       routine. See the OpenVMS documentation discussing the Linker
       utility for more information about creating shareable images.

    o  Your Callback_Routine is called once for each output record.
       The Callback_Routine is passed two parameters:

        -  The length of the output record, passed by value as a
           longword

       -  A pointer to the record buffer

        The record buffer is a data structure with the same fields
        and lengths as a record written to an output destination.

    o  Because the Oracle RMU image is installed as a known image,
       your shareable image must also be a known image. Use the
       OpenVMS Install Utility to make your shareable image known.
       You may wish to establish an exit handler to perform any
       required cleanup processing at the end of the extraction.

    o  Segmented string data (BLOB) cannot be extracted using the
       LogMiner process. Because the segmented string data is
       related to the base table row by means of a database key,
       there is no convenient way to determine what data to extract.
       Additionally, the data type of an extracted column is changed
       from LIST OF BYTE VARYING to BIGINT. This column contains
       the DBKEY of the original BLOB data. Therefore, the contents
       of this column should be considered unreliable. However, the
       field definition itself is extracted as a quadword integer
       representing the database key of the original segmented string
       data. In generated table definition or record definition
       files, a comment is added indicating that the segmented string
       data type is not supported by the LogMiner for Rdb feature.

    o  Records removed from tables using the SQL TRUNCATE TABLE
       statement are not extracted. The SQL TRUNCATE TABLE statement
       does not journal each individual data record being removed
       from the database.

    o  Records removed from tables using the SQL ALTER DATABASE
       command with the DROP STORAGE AREA clause and CASCADE keyword
       are not extracted. Any data deleted by this process is not
       journalled.

    o  Records removed by dropping tables using the SQL DROP TABLE
       statement are not extracted. The SQL DROP TABLE statement does
       not journal each individual data record being removed from the
       database.

    o  When the RDMS$CREATE_LAREA_NOLOGGING logical is defined, DML
       operations are not available for extraction between the time
       the table is created and when the transaction is committed.

    o  Tables that use the vertical record partitioning (VRP) feature
       cannot be extracted using the LogMiner feature. LogMiner
       software currently does not detect these tables. A future
       release of Oracle Rdb will detect and reject access to
       vertically partitioned tables.

    o  In binary format output, VARCHAR fields are not padded with
       spaces in the output file. The VARCHAR data type is extracted
       as a 2-byte count field and a fixed-length data field. The 2-
       byte count field indicates the number of valid characters in
       the fixed-length data field. Any additional contents in the
       data field are unpredictable.

    o  You cannot extract changes to a table when the table
       definition is changed within an after-image journal file.
       Data definition language (DDL) changes to a table are not
       allowed within an .aij file being extracted. All records in an
       .aij file must be the current record version. If you are going
       to perform DDL operations on tables that you wish to extract
       using the LogMiner for Rdb, you should:

       1. Back up your after-image journal files.

       2. Extract the .aij files using the RMU Unload After_Journal
          command.

       3. Make the DDL changes.

    o  Do not use the OpenVMS Alpha High Performance Sort/Merge
       utility (selected by defining the logical name SORTSHR
       to SYS$SHARE:HYPERSORT) when using the LogMiner feature.
       HYPERSORT supports only a subset of the library sort routines
       that LogMiner requires. Make sure that the SORTSHR logical
       name is not defined to HYPERSORT.

    o  The metadata information file used by the RMU Unload After_
       Journal command is in an internal binary format. The contents
       and format are not documented and are not directly accessible
       by other utilities. The content and format of the metadata
       information file is specific to a version of the RMU Unload
       After_Journal utility. As new versions and updates of Oracle
       Rdb are released, you will probably have to re-create the
       metadata information file. The same version of Oracle Rdb must
       be used to both write and read a metadata information file.
       The RMU Unload After_Journal command verifies the format and
       version of the metadata information file and issues an error
       message in the case of a version mismatch.

    o  For debugging purposes, you can format and display the
       contents of a metadata information file by using the
       Options=Dump qualifier with the Restore_Metadata qualifier.
       This dump may be helpful to Oracle Support engineers during
       problem analysis. The contents and format of the metadata
       information file are subject to change.

    o  If you use both the Output and Statistics_Interval qualifiers,
       the output stream used for the log, trace, and statistics
       information is flushed to disk (via the RMS $FLUSH service) at
       each statistics interval. This makes sure that an output file
       of trace and log information is written to disk periodically.

    o  You can specify input backup after-image journal files along
       with the Continuous qualifier from the command line. The
       specified after-image journal backup files are processed in
       an offline mode. Once they have been processed, the RMU Unload
       After_Journal command switches to "online" mode and the active
       online journals are processed.

    o  When no input after-image journal files are specified on the
       command line, the Continuous LogMiner starts extracting at the
       beginning of the earliest modified online after-image journal
       file. The Restart= qualifier can be used to control the first
       transaction to be extracted.

    o  The Continuous LogMiner requires fixed-size circular after-
       image journals.

    o  An after-image journal file cannot be backed up if there
       are any Continuous LogMiner checkpoints in the .aij file.
       The Continuous LogMiner moves its checkpoint to the physical
       end-of-file for the online .aij file that it is extracting.

    o  In order to ensure that all records have been written by all
       database users, Continuous LogMiner processes do not switch
       to the next live journal file until it has been written to by
       another process. Live journals SHOULD NOT be backed up while
       the Continuous LogMiner process is processing a list of .aij
       backup files. This is an unsupported activity and could lead
       to the LogMiner losing data.

    o  If backed up after-image journal files are specified on the
       command line and the Continuous qualifier is specified, the
       journal sequence numbers must ascend directly from the backed
       up journal files to the online journal files.

       In order to preserve the after-image journal file sequencing
       as processed by the RMU Unload After_Journal /Continuous
       command, it is important that no after-image journal backup
       operations are attempted between the start of the command and
       when the Continuous LogMiner process reaches the live online
       after-image journals.

    o  You can run multiple Continuous LogMiner processes at one
       time on a database. Each Continuous LogMiner process acts
       independently.

    o  The Continuous LogMiner reads the live after-image journal
       file just behind writers to the journal. This will likely
       increase the I/O load on the disk devices where the journals
       are located. The Continuous LogMiner attempts to minimize
       unneeded journal I/O by checking a "High Water Mark" lock to
       determine if the journal has been written to and where the
       highest written block location is located.

    o  Vertically partitioned tables cannot be extracted.
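
    The following sketches are minimal illustrations of the two usage
    notes above that refer to them. The database, journal, table,
    output file, and interval values shown are placeholders only.

    The first sketch combines the Output and Statistics_Interval
    qualifiers so that the log, trace, and statistics stream is
    flushed to disk at each statistics interval:

    $ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
       /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT) -
       /OUTPUT = UNLOAD.LOG -
       /STATISTICS_INTERVAL = 60

    The second sketch supplies backed-up after-image journal files
    together with the Continuous qualifier; the backup files are
    processed first, and extraction then continues from the live
    online journals:

    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
       MFP.AIJBCK_1, MFP.AIJBCK_2 -
       /TABLE = (NAME = EMPLOYEES, OUTPUT = MBA145:)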

2.6  –  Examples

    Example 1

    The following example unloads the EMPLOYEES table from the .aij
    backup file MFP.AIJBCK.

    $ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)

    Example 2

    The following example simultaneously unloads the SALES,
    STOCK, SHIPPING, and ORDERS tables from the .aij backup files
    MFS.AIJBCK_1-JUL-1999 through MFS.AIJBCK_3-JUL-1999. Note that
    the input .aij backup files are processed sequentially in the
    order specified.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
       MFS.AIJBCK_1-JUL-1999, -
       MFS.AIJBCK_2-JUL-1999, -
       MFS.AIJBCK_3-JUL-1999 -
       /TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
       /TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT) -
       /TABLE = (NAME = SHIPPING, OUTPUT = SHIPPING.DAT) -
       /TABLE = (NAME = ORDERS, OUTPUT = ORDERS.DAT)

    Example 3

    Use the Before and Since qualifiers to unload data based on a
    time range. The following example extracts changes made to the
    PLANETS table by transactions that committed between 1-SEP-1999
    at 14:30 and 3-SEP-1999 at 16:00.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB MFS.AIJBCK -
       /TABLE = (NAME = PLANETS, OUTPUT = PLANETS.DAT) -
       /BEFORE = "3-SEP-1999 16:00:00.00" -
       /SINCE = "1-SEP-1999 14:30:00.00"

    Example 4

    The following example simultaneously unloads the SALES and
    STOCK tables from all .aij backup files that match the wildcard
    specification MFS.AIJBCK_1999-07-*. The input .aij backup files
    are processed sequentially in the order returned from the file
    system.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
       MFS.AIJBCK_1999-07-* -
       /TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
       /TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT)

    Example 5

    The following example unloads the TICKER table from the .aij
    backup files listed in the file called AIJ_BACKUP_FILES.DAT
    (note the double quotation marks surrounding the at sign (@)
    and the file specification). The input .aij backup files are
    processed sequentially. The output records are written to the
    mailbox device called MBA127:. A separate program is already
    running on the system, and it reads and processes the data
    written to the mailbox.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
       "@AIJ_BACKUP_FILES.DAT" -
       /TABLE = (NAME = TICKER, OUTPUT = MBA127:)

    Example 6

    You can use the RMU Unload After_Journal command followed by RMU
    Load commands to move transaction data from one database into
    a change table in another database. You must create a record
    definition (.rrd) file for each table being loaded into the
    target database. The record definition files can be created by
    specifying the Record_Definition option on the Table qualifier.

    $ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
      /TABLE = ( NAME = MYTBL, -
                 OUTPUT = MYTBL.DAT, -
                 RECORD_DEFINITION=MYLOGTBL) -
      /TABLE = ( NAME = SALE, -
                 OUTPUT=SALE.DAT, -
                 RECORD_DEFINITION=SALELOGTBL)

    $ RMU /LOAD WAREHOUSE.RDB MYLOGTBL MYTBL.DAT -
       /RECORD_DEFINITION = FILE = MYLOGTBL.RRD

    $ RMU /LOAD WAREHOUSE.RDB SALELOGTBL SALE.DAT -
       /RECORD_DEFINITION = FILE = SALELOGTBL.RRD

    Example 7

    You can use an RMS file containing the record structure
    definition for the output file as an input file to the RMU Load
    command. The record description uses the CDO record and field
    definition format. This is the same format used by the RMU Load
    and RMU Unload commands when the Record_Definition qualifier is
    used. The default file extension is .rrd.

    The record definitions for the fields that the LogMiner process
    writes to the output .rrd file are shown below. These fields can
    be manually appended to a record definition file for the actual
    user data fields being unloaded. The file can then be used to
    load a transaction table within a database. A transaction table
    is a table, loaded from the LogMiner output, that records the
    sequential transactions performed in a database.

    DEFINE FIELD RDB$LM_ACTION          DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD RDB$LM_RELATION_NAME   DATATYPE IS TEXT SIZE IS 31.
    DEFINE FIELD RDB$LM_RECORD_TYPE     DATATYPE IS SIGNED LONGWORD.
    DEFINE FIELD RDB$LM_DATA_LEN        DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_NBV_LEN         DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_DBK             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_START_TAD       DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_COMMIT_TAD      DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_TSN             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_RECORD_VERSION  DATATYPE IS SIGNED WORD.

    Example 8

    Instead of using the Table qualifier, you can use an Options file
    to specify the table or tables to be extracted, as shown in the
    following example.

    $ TYPE TABLES.OPTIONS
    TABLE=MYTBL, OUTPUT=MYTBL.DAT
    TABLE=SALES, OUTPUT=SALES.DAT
    $ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
       /OPTIONS = FILE = TABLES.OPTIONS

    Example 9

    The following example unloads the EMPLOYEES table from the live
    database and writes all change records to the MBA145 device. A
    separate program is presumed to be reading the mailbox at all
    times and processing the records.

    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = MBA145:)

    Example 10

    This example demonstrates unloading three tables (EMPLOYEES,
    SALES, and CUSTOMERS) to a single mailbox. Even though the
    mailbox is not a file-oriented device, the same file name is
    specified for each. This is required because the LogMiner process
    defaults the file name to the table name. If the same file name
    is not explicitly specified for each output stream destination,
    the LogMiner process assigns one mailbox channel for each table.
    When the file name is the same for all tables, the LogMiner
    process detects this and assigns only a single channel for all
    input tables.

    $ DEFINE MBX$ LOADER_MBX:X
    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = MBX$:) -
     /TABLE = (NAME = SALES, OUTPUT = MBX$:) -
     /TABLE = (NAME = CUSTOMERS, OUTPUT = MBX$:)

    Example 11

    To include transaction commit information, the
    Include=Action=Commit qualifier is specified in this example.
    Additionally, the EMPLOYEES and SALES tables are extracted to two
    different mailbox devices (read by separate processes). A commit
    record is written to each mailbox after all changed records for
    each transaction have been extracted.

    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /INCLUDE = ACTION = COMMIT -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = LOADER_EMP_MBX:X) -
     /TABLE = (NAME = SALES, OUTPUT = LOADER_SAL_MBX:X)

    Example 12

    In this example, multiple input backup after-image journal
    files are supplied. The Order_AIJ_Files qualifier specifies
    that the .aij files are to be processed in ascending order of
    .aij sequence number (regardless of file name). Prior to the
    extraction operation, each input file is opened and the .aij Open
    record is read. The .aij files are then opened and extracted, one
    at a time, by ascending .aij sequence number.

    $ RMU /UNLOAD /AFTER_JOURNAL /LOG /ORDER_AIJ_FILES -
     MFP.RDB *.AIJBCK -
     /TABLE = (NAME = C1, OUTPUT=C1.DAT)
    %RMU-I-UNLAIJFL, Unloading table C1 to DGA0:[DB]C1.DAT;1
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "5"
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "4"
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "6"
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "4"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 5
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "5"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 6
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "6"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7
    %RMU-I-LOGSUMMARY, total 7 transactions committed
    %RMU-I-LOGSUMMARY, total 0 transactions rolled back
    ---------------------------------------------------------------------
    ELAPSED: 0 00:00:00.15 CPU: 0:00:00.08 BUFIO: 62 DIRIO: 19 FAULTS: 73
    Table "C1" : 3 records written (3 modify, 0 delete)
    Total : 3 records written (3 modify, 0 delete)

    Example 13

    The SQL record definitions for the fields that the LogMiner
    process writes to the output are shown in the following
    example. These fields can be manually appended to the table
    creation command for the actual user data fields being unloaded.
    Alternately, the Table_Definition qualifier can be used with the
    Table qualifier or within an Options file to automatically create
    the SQL definition file. This can be used to create a transaction
    table of changed data.

    SQL> CREATE TABLE MYLOGTABLE (
    cont> RDB$LM_ACTION          CHAR,
    cont> RDB$LM_RELATION_NAME   CHAR (31),
    cont> RDB$LM_RECORD_TYPE     INTEGER,
    cont> RDB$LM_DATA_LEN        SMALLINT,
    cont> RDB$LM_NBV_LEN         SMALLINT,
    cont> RDB$LM_DBK             BIGINT,
    cont> RDB$LM_START_TAD       DATE VMS,
    cont> RDB$LM_COMMIT_TAD      DATE VMS,
    cont> RDB$LM_TSN             BIGINT,
    cont> RDB$LM_RECORD_VERSION  SMALLINT ...);
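
    For illustration only, assuming that an Options file accepts a
    TABLE_DEFINITION keyword in the same way as the Table qualifier,
    an entry requesting the SQL definition file might look like the
    following (the file names are placeholders):

    TABLE=MYTBL, OUTPUT=MYTBL.DAT, TABLE_DEFINITION=MYTBL.SQL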

    Example 14

    The following example is the transaction table record definition
    (.rrd) file for the EMPLOYEES table from the PERSONNEL database:

    DEFINE FIELD RDB$LM_ACTION          DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD RDB$LM_RELATION_NAME   DATATYPE IS TEXT SIZE IS 31.
    DEFINE FIELD RDB$LM_RECORD_TYPE     DATATYPE IS SIGNED LONGWORD.
    DEFINE FIELD RDB$LM_DATA_LEN        DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_NBV_LEN         DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_DBK             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_START_TAD       DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_COMMIT_TAD      DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_TSN             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_RECORD_VERSION  DATATYPE IS SIGNED WORD.

    DEFINE FIELD EMPLOYEE_ID            DATATYPE IS TEXT SIZE IS 5.
    DEFINE FIELD LAST_NAME              DATATYPE IS TEXT SIZE IS 14.
    DEFINE FIELD FIRST_NAME             DATATYPE IS TEXT SIZE IS 10.
    DEFINE FIELD MIDDLE_INITIAL         DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD ADDRESS_DATA_1         DATATYPE IS TEXT SIZE IS 25.
    DEFINE FIELD ADDRESS_DATA_2         DATATYPE IS TEXT SIZE IS 20.
    DEFINE FIELD CITY                   DATATYPE IS TEXT SIZE IS 20.
    DEFINE FIELD STATE                  DATATYPE IS TEXT SIZE IS 2.
    DEFINE FIELD POSTAL_CODE            DATATYPE IS TEXT SIZE IS 5.
    DEFINE FIELD SEX                    DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD BIRTHDAY               DATATYPE IS DATE.
    DEFINE FIELD STATUS_CODE            DATATYPE IS TEXT SIZE IS 1.

    DEFINE RECORD EMPLOYEES.
       RDB$LM_ACTION .
       RDB$LM_RELATION_NAME .
       RDB$LM_RECORD_TYPE .
       RDB$LM_DATA_LEN .
       RDB$LM_NBV_LEN .
       RDB$LM_DBK .
       RDB$LM_START_TAD .
       RDB$LM_COMMIT_TAD .
       RDB$LM_TSN .
       RDB$LM_RECORD_VERSION .
       EMPLOYEE_ID .
       LAST_NAME .
       FIRST_NAME .
       MIDDLE_INITIAL .
       ADDRESS_DATA_1 .
       ADDRESS_DATA_2 .
       CITY .
       STATE .
       POSTAL_CODE .
       SEX .
       BIRTHDAY .
       STATUS_CODE .
    END EMPLOYEES RECORD.

    Example 15

    The following C source code segment demonstrates the structure
    of a callback module containing a routine that processes employee
    transaction information from the LogMiner process. The routine,
    Employees_Callback, would be called by the
    LogMiner process for each extracted record. The final time the
    callback routine is called, the RDB$LM_ACTION field will be set
    to "E" to indicate the end of the output stream.

    #include <stdio.h>
    typedef unsigned char date_type[8];
    typedef unsigned char dbkey_type[8];
    typedef unsigned char tsn_type[8];

    typedef struct {
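        /* LogMiner control fields (RDB$LM_*) written before user data */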
        unsigned char       rdb$lm_action;
        char                rdb$lm_relation_name[31];
        unsigned int        rdb$lm_record_type;
        unsigned short int  rdb$lm_data_len;
        unsigned short int  rdb$lm_nbv_len;
        dbkey_type          rdb$lm_dbk;
        date_type           rdb$lm_start_tad;
        date_type           rdb$lm_commit_tad;
        tsn_type            rdb$lm_tsn;
        unsigned short int  rdb$lm_record_version;
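        /* User data fields from the unloaded EMPLOYEES table */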
        char                employee_id[5];
        char                last_name[14];
        char                first_name[10];
        char                middle_initial[1];
        char                address_data_1[25];
        char                address_data_2[20];
        char                city[20];
        char                state[2];
        char                postal_code[5];
        char                sex[1];
        date_type           birthday;
        char                status_code[1];
    } transaction_data;

    void employees_callback (unsigned int data_len,
                              transaction_data data_buf)
    {
         .
         .
         .
         return;
    }

    Use the C compiler (either VAX C or DEC C) to compile this
    module. When linking this module, the symbol EMPLOYEES_CALLBACK
    needs to be externalized in the shareable image. Refer to the
    OpenVMS manual discussing the Linker utility for more information
    about creating shareable images.
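
    For example, with DEC C you might compile the module with a
    command similar to the following (the source file name EXAMPLE.C
    is an assumption, chosen to match the EXAMPLE.OBJ object file
    used in the LINK commands below):

    $ CC EXAMPLE.C /OBJECT = EXAMPLE.OBJ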

    On OpenVMS Alpha systems, you can use a LINK command similar to
    the following:

    $ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
    SYMBOL_VECTOR = (EMPLOYEES_CALLBACK = PROCEDURE)
    <Ctrl/Z>

    On OpenVMS VAX systems, you can use a LINK command similar to the
    following:

    $ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
    UNIVERSAL = EMPLOYEES_CALLBACK
    <Ctrl/Z>

    Example 16

    You can use triggers and a transaction table to construct a
    method to replicate table data from one database to another
    using RMU Unload After_Journal and RMU Load commands. This
    data replication method is based on transactional changes
    to the source table and requires no programming. Instead,
    existing features of Oracle Rdb can be combined to provide this
    functionality.

    For this example, consider a simple customer information table
    called CUST with a unique customer ID value, customer name,
    address, and postal code. Changes to this table are to be
    moved from an OLTP database to a reporting database system on
    a periodic (perhaps nightly) basis.

    First, in the reporting database, a customer table of the same
    structure as the OLTP customer table is created. In this example,
    this table is called RPT_CUST. It contains the same fields as the
    OLTP customer table called CUST.

    SQL> CREATE TABLE RPT_CUST (
    cont> CUST_ID               INTEGER,
    cont> CUST_NAME             CHAR (50),
    cont> CUST_ADDRESS          CHAR (50),
    cont> CUST_POSTAL_CODE      INTEGER);

    Next, a temporary table is created in the reporting database for
    the LogMiner-extracted transaction data from the CUST table. This
    temporary table definition specifies ON COMMIT DELETE ROWS so
    that data in the temporary table is deleted from memory at each
    transaction commit. A temporary table is used because there is no
    need to journal changes to the table.

    SQL> CREATE GLOBAL TEMPORARY TABLE RDB_LM_RPT_CUST (
    cont> RDB$LM_ACTION         CHAR,
    cont> RDB$LM_RELATION_NAME  CHAR (31),
    cont> RDB$LM_RECORD_TYPE    INTEGER,
    cont> RDB$LM_DATA_LEN       SMALLINT,
    cont> RDB$LM_NBV_LEN        SMALLINT,
    cont> RDB$LM_DBK            BIGINT,
    cont> RDB$LM_START_TAD      DATE VMS,
    cont> RDB$LM_COMMIT_TAD     DATE VMS,
    cont> RDB$LM_TSN            BIGINT,
    cont> RDB$LM_RECORD_VERSION SMALLINT,
    cont> CUST_ID               INTEGER,
    cont> CUST_NAME             CHAR (50),
    cont> CUST_ADDRESS          CHAR (50),
    cont> CUST_POSTAL_CODE      INTEGER) ON COMMIT DELETE ROWS;

    For data to be populated in the RPT_CUST table in the reporting
    database, a trigger is created for the RDB_LM_RPT_CUST
    transaction table. This trigger is used to insert, update,
    or delete rows in the RPT_CUST table based on the transaction
    information from the OLTP database for the CUST table. The unique
    CUST_ID field is used to determine if customer records are to be
    modified or added.

    SQL> CREATE TRIGGER RDB_LM_RPT_CUST_TRIG
    cont>  AFTER INSERT ON RDB_LM_RPT_CUST
    cont>
    cont> -- Modify an existing customer record
    cont>
    cont>  WHEN (RDB$LM_ACTION = 'M' AND
    cont>        EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
    cont>                WHERE RPT_CUST.CUST_ID =
    cont>                RDB_LM_RPT_CUST.CUST_ID))
    cont>      (UPDATE RPT_CUST SET
    cont>              RPT_CUST.CUST_NAME = RDB_LM_RPT_CUST.CUST_NAME,
    cont>              RPT_CUST.CUST_ADDRESS =
    cont>              RDB_LM_RPT_CUST.CUST_ADDRESS,
    cont>              RPT_CUST.CUST_POSTAL_CODE =
    cont>              RDB_LM_RPT_CUST.CUST_POSTAL_CODE
    cont>       WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
    cont>  FOR EACH ROW
    cont>
    cont> -- Add a new customer record
    cont>
    cont>  WHEN (RDB$LM_ACTION = 'M' AND NOT
    cont>        EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
    cont>                WHERE RPT_CUST.CUST_ID =
    cont>                RDB_LM_RPT_CUST.CUST_ID))
    cont>      (INSERT INTO RPT_CUST VALUES
    cont>              (RDB_LM_RPT_CUST.CUST_ID,
    cont>               RDB_LM_RPT_CUST.CUST_NAME,
    cont>               RDB_LM_RPT_CUST.CUST_ADDRESS,
    cont>               RDB_LM_RPT_CUST.CUST_POSTAL_CODE))
    cont>  FOR EACH ROW
    cont>
    cont> -- Delete an existing customer record
    cont>
    cont>  WHEN (RDB$LM_ACTION = 'D')
    cont>      (DELETE FROM RPT_CUST
    cont>       WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
    cont>  FOR EACH ROW;

    Within the trigger, the action to take (for example, to add,
    update, or delete a customer record) is based on the RDB$LM_
    ACTION field (defined as D or M) and the existence of the
    customer record in the reporting database. For modifications,
    if the customer record does not exist, it is added; if it does
    exist, it is updated. For a deletion on the OLTP database, the
    customer record is deleted from the reporting database.

    The RMU Load command is used to read the output from the LogMiner
    process and load the data into the temporary table where each
    insert causes the trigger to execute. The Commit_Every qualifier
    is used to avoid filling memory with the customer records in
    the temporary table because as soon as the trigger executes, the
    record in the temporary table is no longer needed.

    $ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB OLTP.AIJBCK -
     /TABLE = (NAME = CUST, -
               OUTPUT = CUST.DAT, -
               RECORD_DEFINITION = RDB_LM_RPT_CUST.RRD)

    $ RMU /LOAD REPORT_DATABASE.RDB RDB_LM_RPT_CUST CUST.DAT -
          /RECORD_DEFINITION = FILE = RDB_LM_RPT_CUST.RRD -
          /COMMIT_EVERY = 1000

    Example 17

    The following example shows how to produce a control file that
    can be used by SQL*Loader to load the extracted data into an
    Oracle database.

    $ RMU/UNLOAD/AFTER TEST_DB TEST_DB_AIJ1_BCK -
         /FORMAT=TEXT -
         /TABLE=(NAME=TEST_TBL, -
                 OUTPUT=LOGMINER_TEXT.TXT, -
                 CONTROL=LOGMINER_CONTROL.CTL, -
                 TABLE_DEFINITION=TEST_TBL.SQL)

    This example produces the following control file. The control
    file is specific to a fixed-length record text file. NULLs are
    handled by using the NULLIF clause for the column definition that
    references a corresponding null-byte filler column. There is a
    null-byte filler column for each column in the underlying table
    but not for the LogMiner-specific columns at the beginning of
    the record. If a column is NULL, the corresponding RDB$LM_NBn
    filler column is set to 1. VARCHAR columns are padded with blanks,
    but the blanks are ignored by default when the file is loaded by
    SQL*Loader. If you wish to preserve the blanks, you can update
    the control file and add the "PRESERVE BLANKS" clause.

    -- Control file for LogMiner transaction data 25-AUG-2000 12:15:50.47
    -- From database table "TEST_DB"
    LOAD DATA
    INFILE 'DISK:[DIRECTORY]LOGMINER_TEXT.TXT;'
    APPEND INTO TABLE 'RDB_LM_TEST_TBL'
    (
    RDB$LM_ACTION                   POSITION(1:1) CHAR,
    RDB$LM_RELATION_NAME            POSITION(2:32) CHAR,
    RDB$LM_RECORD_TYPE              POSITION(33:44) INTEGER EXTERNAL,
    RDB$LM_DATA_LEN                 POSITION(45:50) INTEGER EXTERNAL,
    RDB$LM_NBV_LEN                  POSITION(51:56) INTEGER EXTERNAL,
    RDB$LM_DBK                      POSITION(57:76) INTEGER EXTERNAL,
    RDB$LM_START_TAD                POSITION(77:90) DATE "YYYYMMDDHHMISS",
    RDB$LM_COMMIT_TAD               POSITION(91:104) DATE "YYYYMMDDHHMISS",
    RDB$LM_TSN                      POSITION(105:124) INTEGER EXTERNAL,
    RDB$LM_RECORD_VERSION           POSITION(125:130) INTEGER EXTERNAL,
    TEST_COL                        POSITION(131:150) CHAR NULLIF RDB$LM_NB1 = 1,
    RDB$LM_NB1               FILLER POSITION(151:151) INTEGER EXTERNAL
    )

    Example 18

    The following example creates a metadata file for the database
    MFP. This metadata file can be used as input to a later RMU
    Unload After_Journal command.

    $ RMU /UNLOAD /AFTER_JOURNAL MFP /SAVE_METADATA=MF_MFP.METADATA /LOG
    %RMU-I-LMMFWRTCNT, Wrote 107 objects to metadata file
     "DUA0:[DB]MFMFP.METADATA;1"

    Example 19

    This example uses a previously created metadata information file
    for the database MFP. The database is not accessed during the
    unload operation; the database metadata information is read from
    the file. Because the extract operation no longer relies directly
    on the source database, the .aij and metadata files can be moved
    to another system and extracted there.

    $ RMU /UNLOAD /AFTER_JOURNAL /RESTORE_METADATA=MF_MFP.METADATA -
        MFP AIJ_BACKUP1 /TABLE=(NAME=TAB1, OUTPUT=TAB1) /LOG
    %RMU-I-LMMFRDCNT, Read 107 objects from metadata file
     "DUA0:[DB]MF_MFP.METADATA;1"
    %RMU-I-UNLAIJFL, Unloading table TAB1 to DUA0:[DB]TAB1.DAT;1
    %RMU-I-LOGOPNAIJ, opened journal file DUA0:[DB]AIJ_BACKUP1.AIJ;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "7216321"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7216322
    %RMU-I-LOGSUMMARY, total 2 transactions committed
    %RMU-I-LOGSUMMARY, total 0 transactions rolled back
    ----------------------------------------------------------------------
     ELAPSED:  0 00:00:00.15 CPU: 0:00:00.01 BUFIO: 11 DIRIO: 5 FAULTS: 28
    Table "TAB1" : 1 record written (1 modify, 0 delete)
    Total : 1 record written (1 modify, 0 delete)