VMS Help  —  RMU72  Unload  After Journal, Examples
    Example 1

    The following example unloads the EMPLOYEES table from the .aij
    backup file MFP.AIJBCK.

    $ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
        /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)

    Example 2

    The following example simultaneously unloads the SALES, STOCK,
    SHIPPING, and ORDER tables from the .aij backup files
    MFS.AIJBCK_1-JUL-1999 through MFS.AIJBCK_3-JUL-1999. Note that
    the input .aij backup files are processed sequentially in the
    order specified.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
       MFS.AIJBCK_1-JUL-1999, -
       MFS.AIJBCK_2-JUL-1999, -
       MFS.AIJBCK_3-JUL-1999 -
       /TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
       /TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT) -
       /TABLE = (NAME = SHIPPING, OUTPUT = SHIPPING.DAT) -
       /TABLE = (NAME = ORDER, OUTPUT = ORDER.DAT)

    Example 3

    Use the Before and Since qualifiers to unload data based on a
    time range. The following example extracts changes made to the
    PLANETS table by transactions that committed between 1-SEP-1999
    at 14:30 and 3-SEP-1999 at 16:00.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB MFS.AIJBCK -
       /TABLE = (NAME = PLANETS, OUTPUT = PLANETS.DAT) -
       /BEFORE = "3-SEP-1999 16:00:00.00" -
       /SINCE = "1-SEP-1999 14:30:00.00"

    Example 4

    The following example simultaneously unloads the SALES and
    STOCK tables from all .aij backup files that match the wildcard
    specification MFS.AIJBCK_1999-07-*. The input .aij backup files
    are processed sequentially in the order returned from the file
    system.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
       MFS.AIJBCK_1999-07-* -
       /TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
       /TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT)

    Example 5

    The following example unloads the TICKER table from the .aij
    backup files listed in the file called AIJ_BACKUP_FILES.DAT
    (note the double quotation marks surrounding the at sign (@)
    and the file specification). The input .aij backup files are
    processed sequentially. The output records are written to the
    mailbox device called MBA127:. A separate program is already
    running on the system, and it reads and processes the data
    written to the mailbox.

    $ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
       "@AIJ_BACKUP_FILES.DAT" -
       /TABLE = (NAME = TICKER, OUTPUT = MBA127:)

    Example 6

    You can use the RMU Unload After_Journal command followed by RMU
    Load commands to move transaction data from one database into
    a change table in another database. You must create a record
    definition (.rrd) file for each table being loaded into the
    target database. The record definition files can be created by
    specifying the Record_Definition option on the Table qualifier.

    $ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
      /TABLE = ( NAME = MYTBL, -
                 OUTPUT = MYTBL.DAT, -
                 RECORD_DEFINITION=MYLOGTBL) -
      /TABLE = ( NAME = SALE, -
                 OUTPUT=SALE.DAT, -
                 RECORD_DEFINITION=SALELOGTBL)

    $ RMU /LOAD WAREHOUSE.RDB MYLOGTBL MYTBL.DAT -
       /RECORD_DEFINITION = FILE = MYLOGTBL.RRD

    $ RMU /LOAD WAREHOUSE.RDB SALELOGTBL SALE.DAT -
       /RECORD_DEFINITION = FILE = SALELOGTBL.RRD

    Example 7

    You can use an RMS file containing the record structure
    definition for the output file as an input file to the RMU Load
    command. The record description uses the CDO record and field
    definition format. This is the same format used by the RMU Load
    and RMU Unload commands when the Record_Definition qualifier is
    used. The default file extension is .rrd.

    The record definitions for the fields that the LogMiner process
    writes to the output .rrd file are shown in the following table.
    These fields can be manually appended to a record definition file
    for the actual user data fields being unloaded. The file can then
    be used to load a transaction table within a database. A
    transaction table is a table, loaded from LogMiner output, that
    records the sequence of transactions performed in a database.

    DEFINE FIELD RDB$LM_ACTION          DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD RDB$LM_RELATION_NAME   DATATYPE IS TEXT SIZE IS 31.
    DEFINE FIELD RDB$LM_RECORD_TYPE     DATATYPE IS SIGNED LONGWORD.
    DEFINE FIELD RDB$LM_DATA_LEN        DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_NBV_LEN         DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_DBK             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_START_TAD       DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_COMMIT_TAD      DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_TSN             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_RECORD_VERSION  DATATYPE IS SIGNED WORD.
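    Read back into a program, each extracted record begins with this
    fixed prefix, followed by the user data fields. The following
    Python sketch unpacks the prefix using the sizes above; it is an
    illustration only — the packed, little-endian layout with no
    alignment padding is an assumption, and DATE values are left as
    raw bytes rather than decoded.

    ```python
    import struct

    # Sizes follow the field definitions above. A packed, little-endian
    # layout with no alignment padding is an assumption for illustration;
    # DATE values are kept as raw 8-byte strings rather than decoded.
    PREFIX_FORMAT = "<c31slhhq8s8sqh"          # 74 bytes in total
    PREFIX_SIZE = struct.calcsize(PREFIX_FORMAT)

    def parse_prefix(record):
        """Split the fixed LogMiner prefix from the user data behind it."""
        (action, relation_name, record_type, data_len, nbv_len, dbk,
         start_tad, commit_tad, tsn, record_version) = struct.unpack(
            PREFIX_FORMAT, record[:PREFIX_SIZE])
        return {
            "action": action.decode(),
            "relation_name": relation_name.decode().rstrip(),
            "record_type": record_type,
            "data_len": data_len,
            "nbv_len": nbv_len,
            "dbk": dbk,
            "start_tad": start_tad,
            "commit_tad": commit_tad,
            "tsn": tsn,
            "record_version": record_version,
            "user_data": record[PREFIX_SIZE:],
        }
    ```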

    Example 8

    Instead of using the Table qualifier, you can use an Options file
    to specify the table or tables to be extracted, as shown in the
    following example.

    $ TYPE TABLES.OPTIONS
    TABLE=MYTBL, OUTPUT=MYTBL.DAT
    TABLE=SALES, OUTPUT=SALES.DAT
    $ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
       /OPTIONS = FILE = TABLES.OPTIONS

    Example 9

    The following example unloads the EMPLOYEES table from the live
    database and writes all change records to the MBA145 device. A
    separate program is presumed to be reading the mailbox at all
    times and processing the records.

    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = MBA145:)

    Example 10

    This example demonstrates unloading three tables (EMPLOYEES,
    SALES, and CUSTOMERS) to a single mailbox. Even though the
    mailbox is not a file-oriented device, the same file name is
    specified for each. This is required because the LogMiner process
    defaults the file name to the table name. If the same file name
    is not explicitly specified for each output stream destination,
    the LogMiner process assigns one mailbox channel for each table.
    When the file name is the same for all tables, the LogMiner
    process detects this and assigns only a single channel for all
    input tables.

    $ DEFINE MBX$ LOADER_MBX:X
    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = MBX$:) -
     /TABLE = (NAME = SALES, OUTPUT = MBX$:) -
     /TABLE = (NAME = CUSTOMERS, OUTPUT = MBX$:)
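    Because every extracted record carries the RDB$LM_RELATION_NAME
    field, a single reader on the shared mailbox can separate the
    interleaved tables itself. A minimal sketch, assuming the records
    have already been parsed into dictionaries (a hypothetical
    in-memory representation, not part of the LogMiner interface):

    ```python
    from collections import defaultdict

    def demultiplex(stream):
        """Route records arriving on one shared stream into per-table
        buckets, keyed by the relation name each record carries."""
        by_table = defaultdict(list)
        for rec in stream:
            by_table[rec["relation_name"]].append(rec)
        return by_table
    ```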

    Example 11

    To include transaction commit information, this example specifies
    the /INCLUDE=ACTION=COMMIT qualifier. Additionally, the EMPLOYEES
    and SALES tables are extracted to two different mailbox devices
    (read by separate processes). A commit record is written to each
    mailbox after all changed records for each transaction have been
    extracted.

    $ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /INCLUDE = ACTION = COMMIT -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = LOADER_EMP_MBX:X) -
     /TABLE = (NAME = SALES, OUTPUT = LOADER_SAL_MBX:X)
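    A reader process can use the commit records to rebuild transaction
    boundaries from the stream. The sketch below assumes commit
    records carry the action code "C" and that "E" marks the end of
    the stream (the "E" code is described under the callback example
    later in this topic; "C" is an assumption here):

    ```python
    def transactions(records):
        """Group change records into per-transaction batches, emitting
        each batch when its commit record arrives. Assumes commit
        records carry action "C" (written when /INCLUDE=ACTION=COMMIT
        is specified) and that "E" ends the stream."""
        pending = []
        for rec in records:
            if rec["action"] == "C":      # commit: flush the batch
                yield pending
                pending = []
            elif rec["action"] == "E":    # end of output stream
                break
            else:                         # "M" or "D" change record
                pending.append(rec)
    ```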

    Example 12

    In this example, multiple input backup after-image journal
    files are supplied. The Order_AIJ_Files qualifier specifies
    that the .aij files are to be processed in ascending order of
    .aij sequence number (regardless of file name). Prior to the
    extraction operation, each input file is opened and the .aij Open
    record is read. The .aij files are then opened and extracted, one
    at a time, by ascending .aij sequence number.

    $ RMU /UNLOAD /AFTER_JOURNAL /LOG /ORDER_AIJ_FILES -
     MFP.RDB *.AIJBCK -
     /TABLE = (NAME = C1, OUTPUT=C1.DAT)
    %RMU-I-UNLAIJFL, Unloading table C1 to DGA0:[DB]C1.DAT;1
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "5"
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "4"
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "6"
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "4"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 5
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "5"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 6
    %RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "6"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7
    %RMU-I-LOGSUMMARY, total 7 transactions committed
    %RMU-I-LOGSUMMARY, total 0 transactions rolled back
    ---------------------------------------------------------------------
    ELAPSED: 0 00:00:00.15 CPU: 0:00:00.08 BUFIO: 62 DIRIO: 19 FAULTS: 73
    Table "C1" : 3 records written (3 modify, 0 delete)
    Total : 3 records written (3 modify, 0 delete)

    Example 13

    The SQL record definitions for the fields that the LogMiner
    process writes to the output are shown in the following
    example. These fields can be manually appended to the table
    creation command for the actual user data fields being unloaded.
    Alternatively, the Table_Definition qualifier can be used with the
    Table qualifier or within an Options file to automatically create
    the SQL definition file. This file can be used to create a
    transaction table of changed data.

    SQL> CREATE TABLE MYLOGTABLE (
    cont> RDB$LM_ACTION          CHAR,
    cont> RDB$LM_RELATION_NAME   CHAR (31),
    cont> RDB$LM_RECORD_TYPE     INTEGER,
    cont> RDB$LM_DATA_LEN        SMALLINT,
    cont> RDB$LM_NBV_LEN         SMALLINT,
    cont> RDB$LM_DBK             BIGINT,
    cont> RDB$LM_START_TAD       DATE VMS,
    cont> RDB$LM_COMMIT_TAD      DATE VMS,
    cont> RDB$LM_TSN             BIGINT,
    cont> RDB$LM_RECORD_VERSION  SMALLINT ...);

    Example 14

    The following example is the transaction table record definition
    (.rrd) file for the EMPLOYEES table from the PERSONNEL database:

    DEFINE FIELD RDB$LM_ACTION          DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD RDB$LM_RELATION_NAME   DATATYPE IS TEXT SIZE IS 31.
    DEFINE FIELD RDB$LM_RECORD_TYPE     DATATYPE IS SIGNED LONGWORD.
    DEFINE FIELD RDB$LM_DATA_LEN        DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_NBV_LEN         DATATYPE IS SIGNED WORD.
    DEFINE FIELD RDB$LM_DBK             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_START_TAD       DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_COMMIT_TAD      DATATYPE IS DATE.
    DEFINE FIELD RDB$LM_TSN             DATATYPE IS SIGNED QUADWORD.
    DEFINE FIELD RDB$LM_RECORD_VERSION  DATATYPE IS SIGNED WORD.

    DEFINE FIELD EMPLOYEE_ID            DATATYPE IS TEXT SIZE IS 5.
    DEFINE FIELD LAST_NAME              DATATYPE IS TEXT SIZE IS 14.
    DEFINE FIELD FIRST_NAME             DATATYPE IS TEXT SIZE IS 10.
    DEFINE FIELD MIDDLE_INITIAL         DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD ADDRESS_DATA_1         DATATYPE IS TEXT SIZE IS 25.
    DEFINE FIELD ADDRESS_DATA_2         DATATYPE IS TEXT SIZE IS 20.
    DEFINE FIELD CITY                   DATATYPE IS TEXT SIZE IS 20.
    DEFINE FIELD STATE                  DATATYPE IS TEXT SIZE IS 2.
    DEFINE FIELD POSTAL_CODE            DATATYPE IS TEXT SIZE IS 5.
    DEFINE FIELD SEX                    DATATYPE IS TEXT SIZE IS 1.
    DEFINE FIELD BIRTHDAY               DATATYPE IS DATE.
    DEFINE FIELD STATUS_CODE            DATATYPE IS TEXT SIZE IS 1.

    DEFINE RECORD EMPLOYEES.
       RDB$LM_ACTION .
       RDB$LM_RELATION_NAME .
       RDB$LM_RECORD_TYPE .
       RDB$LM_DATA_LEN .
       RDB$LM_NBV_LEN .
       RDB$LM_DBK .
       RDB$LM_START_TAD .
       RDB$LM_COMMIT_TAD .
       RDB$LM_TSN .
       RDB$LM_RECORD_VERSION .
       EMPLOYEE_ID .
       LAST_NAME .
       FIRST_NAME .
       MIDDLE_INITIAL .
       ADDRESS_DATA_1 .
       ADDRESS_DATA_2 .
       CITY .
       STATE .
       POSTAL_CODE .
       SEX .
       BIRTHDAY .
       STATUS_CODE .
    END EMPLOYEES RECORD.

    Example 15

    The following C source code segment demonstrates the structure
    of a module that can be used as a callback module and routine
    to process employee transaction information from the LogMiner
    process. The routine, Employees_Callback, would be called by the
    LogMiner process for each extracted record. The final time the
    callback routine is called, the RDB$LM_ACTION field will be set
    to "E" to indicate the end of the output stream.

    #include <stdio.h>
    typedef unsigned char date_type[8];
    typedef unsigned char dbkey_type[8];
    typedef unsigned char tsn_type[8];

    typedef struct {
        unsigned char       rdb$lm_action;
        char                rdb$lm_relation_name[31];
        unsigned int        rdb$lm_record_type;
        unsigned short int  rdb$lm_data_len;
        unsigned short int  rdb$lm_nbv_len;
        dbkey_type          rdb$lm_dbk;
        date_type           rdb$lm_start_tad;
        date_type           rdb$lm_commit_tad;
        tsn_type            rdb$lm_tsn;
        unsigned short int  rdb$lm_record_version;
        char                employee_id[5];
        char                last_name[14];
        char                first_name[10];
        char                middle_initial[1];
        char                address_data_1[25];
        char                address_data_2[20];
        char                city[20];
        char                state[2];
        char                postal_code[5];
        char                sex[1];
        date_type           birthday;
        char                status_code[1];
    } transaction_data;

    void employees_callback (unsigned int data_len,
                             transaction_data data_buf)
    {
        .
        .
        .
        return;
    }

    Use the C compiler (either VAX C or DEC C) to compile this
    module. When linking this module, the symbol EMPLOYEES_CALLBACK
    needs to be externalized in the shareable image. Refer to the
    OpenVMS manual discussing the Linker utility for more information
    about creating shareable images.

    On OpenVMS Alpha systems, you can use a LINK command similar to
    the following:

    $ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
    SYMBOL_VECTOR = (EMPLOYEES_CALLBACK = PROCEDURE)
    <Ctrl/Z>

    On OpenVMS VAX systems, you can use a LINK command similar to the
    following:

    $ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
    UNIVERSAL = EMPLOYEES_CALLBACK
    <Ctrl/Z>

    Example 16

    You can use triggers and a transaction table to construct a
    method to replicate table data from one database to another
    using RMU Unload After_Journal and RMU Load commands. This
    data replication method is based on transactional changes
    to the source table and requires no programming. Instead,
    existing features of Oracle Rdb can be combined to provide this
    functionality.

    For this example, consider a simple customer information table
    called CUST with a unique customer ID value, customer name,
    address, and postal code. Changes to this table are to be
    moved from an OLTP database to a reporting database system on
    a periodic (perhaps nightly) basis.

    First, in the reporting database, a customer table of the same
    structure as the OLTP customer table is created. In this example,
    this table is called RPT_CUST. It contains the same fields as the
    OLTP customer table called CUST.

    SQL> CREATE TABLE RPT_CUST (
    cont> CUST_ID               INTEGER,
    cont> CUST_NAME             CHAR (50),
    cont> CUST_ADDRESS          CHAR (50),
    cont> CUST_POSTAL_CODE      INTEGER);

    Next, a temporary table is created in the reporting database for
    the LogMiner-extracted transaction data from the CUST table. This
    temporary table definition specifies ON COMMIT DELETE ROWS so
    that data in the temporary table is deleted from memory at each
    transaction commit. A temporary table is used because there is no
    need to journal changes to the table.

    SQL> CREATE GLOBAL TEMPORARY TABLE RDB_LM_RPT_CUST (
    cont> RDB$LM_ACTION         CHAR,
    cont> RDB$LM_RELATION_NAME  CHAR (31),
    cont> RDB$LM_RECORD_TYPE    INTEGER,
    cont> RDB$LM_DATA_LEN       SMALLINT,
    cont> RDB$LM_NBV_LEN        SMALLINT,
    cont> RDB$LM_DBK            BIGINT,
    cont> RDB$LM_START_TAD      DATE VMS,
    cont> RDB$LM_COMMIT_TAD     DATE VMS,
    cont> RDB$LM_TSN            BIGINT,
    cont> RDB$LM_RECORD_VERSION SMALLINT,
    cont> CUST_ID               INTEGER,
    cont> CUST_NAME             CHAR (50),
    cont> CUST_ADDRESS          CHAR (50),
    cont> CUST_POSTAL_CODE      INTEGER) ON COMMIT DELETE ROWS;

    For data to be populated in the RPT_CUST table in the reporting
    database, a trigger is created for the RDB_LM_RPT_CUST
    transaction table. This trigger is used to insert, update,
    or delete rows in the RPT_CUST table based on the transaction
    information from the OLTP database for the CUST table. The unique
    CUST_ID field is used to determine if customer records are to be
    modified or added.

    SQL> CREATE TRIGGER RDB_LM_RPT_CUST_TRIG
    cont>  AFTER INSERT ON RDB_LM_RPT_CUST
    cont>
    cont> -- Modify an existing customer record
    cont>
    cont>  WHEN (RDB$LM_ACTION = 'M' AND
    cont>        EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
    cont>                WHERE RPT_CUST.CUST_ID =
    cont>                RDB_LM_RPT_CUST.CUST_ID))
    cont>      (UPDATE RPT_CUST SET
    cont>              RPT_CUST.CUST_NAME = RDB_LM_RPT_CUST.CUST_NAME,
    cont>              RPT_CUST.CUST_ADDRESS =
    cont>              RDB_LM_RPT_CUST.CUST_ADDRESS,
    cont>              RPT_CUST.CUST_POSTAL_CODE =
    cont>              RDB_LM_RPT_CUST.CUST_POSTAL_CODE
    cont>       WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
    cont>  FOR EACH ROW
    cont>
    cont> -- Add a new customer record
    cont>
    cont>  WHEN (RDB$LM_ACTION = 'M' AND NOT
    cont>        EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
    cont>                WHERE RPT_CUST.CUST_ID =
    cont>                RDB_LM_RPT_CUST.CUST_ID))
    cont>      (INSERT INTO RPT_CUST VALUES
    cont>              (RDB_LM_RPT_CUST.CUST_ID,
    cont>               RDB_LM_RPT_CUST.CUST_NAME,
    cont>               RDB_LM_RPT_CUST.CUST_ADDRESS,
    cont>               RDB_LM_RPT_CUST.CUST_POSTAL_CODE))
    cont>  FOR EACH ROW
    cont>
    cont> -- Delete an existing customer record
    cont>
    cont>  WHEN (RDB$LM_ACTION = 'D')
    cont>      (DELETE FROM RPT_CUST
    cont>       WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
    cont>  FOR EACH ROW;

    Within the trigger, the action to take (for example, to add,
    update, or delete a customer record) is based on the
    RDB$LM_ACTION field (defined as D or M) and the existence of the
    customer record in the reporting database. For modifications,
    if the customer record does not exist, it is added; if it does
    exist, it is updated. For a deletion on the OLTP database, the
    customer record is deleted from the reporting database.
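    The trigger's upsert/delete behavior can be sketched in a few
    lines of Python, with a dictionary keyed by CUST_ID standing in
    for the RPT_CUST table (an illustration of the logic only, not
    part of the replication method itself):

    ```python
    def apply_change(rpt_cust, change):
        """Mirror the trigger: an "M" record inserts or updates the row
        keyed by CUST_ID; a "D" record deletes it. rpt_cust is a dict
        standing in for the RPT_CUST table."""
        cust_id = change["CUST_ID"]
        if change["RDB$LM_ACTION"] == "M":
            rpt_cust[cust_id] = change["CUST_NAME"]     # add or update
        elif change["RDB$LM_ACTION"] == "D":
            rpt_cust.pop(cust_id, None)                 # delete if present
    ```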

    The RMU Load command is used to read the output from the LogMiner
    process and load the data into the temporary table where each
    insert causes the trigger to execute. The Commit_Every qualifier
    is used to avoid filling memory with the customer records in
    the temporary table because as soon as the trigger executes, the
    record in the temporary table is no longer needed.

    $ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB OLTP.AIJBCK -
     /TABLE = (NAME = CUST, -
               OUTPUT = CUST.DAT, -
               RECORD_DEFINITION = RDB_LM_RPT_CUST.RRD)

    $ RMU /LOAD REPORT_DATABASE.RDB RDB_LM_RPT_CUST CUST.DAT -
          /RECORD_DEFINITION = FILE = RDB_LM_RPT_CUST.RRD -
          /COMMIT_EVERY = 1000

    Example 17

    The following example shows how to produce a control file that
    can be used by SQL*Loader to load the extracted data into an
    Oracle database.

    $ RMU/UNLOAD/AFTER TEST_DB TEST_DB_AIJ1_BCK -
         /FORMAT=TEXT -
         /TABLE=(NAME=TEST_TBL, -
                 OUTPUT=LOGMINER_TEXT.TXT, -
                 CONTROL=LOGMINER_CONTROL.CTL, -
                 TABLE_DEFINITION=TEST_TBL.SQL)

    This example produces the following control file. The control
    file is specific to a fixed length record text file. NULLs are
    handled by using the NULLIF clause for the column definition that
    references a corresponding null-byte filler column. There is a
    null-byte filler column for each column in the underlying table,
    but not for the LogMiner-specific columns at the beginning of
    the record. If a column is NULL, the corresponding RDB$LM_NBn
    but the blanks are ignored by default when the file is loaded by
    SQL*Loader. If you wish to preserve the blanks, you can update
    the control file and add the "PRESERVE BLANKS" clause.

    -- Control file for LogMiner transaction data 25-AUG-2000 12:15:50.47
    -- From database table "TEST_DB"
    LOAD DATA
    INFILE 'DISK:[DIRECTORY]LOGMINER_TEXT.TXT;'
    APPEND INTO TABLE 'RDB_LM_TEST_TBL'
    (
    RDB$LM_ACTION                   POSITION(1:1) CHAR,
    RDB$LM_RELATION_NAME            POSITION(2:32) CHAR,
    RDB$LM_RECORD_TYPE              POSITION(33:44) INTEGER EXTERNAL,
    RDB$LM_DATA_LEN                 POSITION(45:50) INTEGER EXTERNAL,
    RDB$LM_NBV_LEN                  POSITION(51:56) INTEGER EXTERNAL,
    RDB$LM_DBK                      POSITION(57:76) INTEGER EXTERNAL,
    RDB$LM_START_TAD                POSITION(77:90) DATE "YYYYMMDDHHMISS",
    RDB$LM_COMMIT_TAD               POSITION(91:104) DATE "YYYYMMDDHHMISS",
    RDB$LM_TSN                      POSITION(105:124) INTEGER EXTERNAL,
    RDB$LM_RECORD_VERSION           POSITION(125:130) INTEGER EXTERNAL,
    TEST_COL                        POSITION(131:150) CHAR NULLIF RDB$LM_NB1 = 1,
    RDB$LM_NB1               FILLER POSITION(151:151) INTEGER EXTERNAL
    )
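    The fixed positions in the control file also make the text records
    easy to read directly from other languages. A minimal Python
    sketch for the single TEST_COL/RDB$LM_NB1 pair shown above,
    mirroring the NULLIF handling (an illustration, not part of
    SQL*Loader):

    ```python
    def parse_line(line):
        """Pick fields out of one fixed-length text record using the
        1-based, inclusive positions from the control file above.
        TEST_COL is treated as NULL when RDB$LM_NB1 is 1 (NULLIF)."""
        def field(start, end):              # 1-based inclusive slice
            return line[start - 1:end]
        is_null = field(151, 151) == "1"
        return {
            "action": field(1, 1),
            "relation_name": field(2, 32).rstrip(),
            "test_col": None if is_null else field(131, 150).rstrip(),
        }
    ```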

    Example 18

    The following example creates a metadata file for the database
    MFP. This metadata file can be used as input to a later RMU
    Unload After_Journal command.

    $ RMU /UNLOAD /AFTER_JOURNAL MFP /SAVE_METADATA=MF_MFP.METADATA /LOG
    %RMU-I-LMMFWRTCNT, Wrote 107 objects to metadata file
     "DUA0:[DB]MF_MFP.METADATA;1"

    Example 19

    This example uses a previously created metadata information file
    for the database MFP. The database is not accessed during the
    unload operation; the database metadata information is read from
    the file. Because the extract operation no longer directly relies
    on the source database, the .aij backup and metadata files can be
    moved to another system and extracted there.

    $ RMU /UNLOAD /AFTER_JOURNAL /RESTORE_METADATA=MF_MFP.METADATA -
        MFP AIJ_BACKUP1 /TABLE=(NAME=TAB1, OUTPUT=TAB1) /LOG
    %RMU-I-LMMFRDCNT, Read 107 objects from metadata file
     "DUA0:[DB]MF_MFP.METADATA;1"
    %RMU-I-UNLAIJFL, Unloading table TAB1 to DUA0:[DB]TAB1.DAT;1
    %RMU-I-LOGOPNAIJ, opened journal file DUA0:[DB]AIJ_BACKUP1.AIJ;1
    %RMU-I-AIJRSTSEQ, journal sequence number is "7216321"
    %RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7216322
    %RMU-I-LOGSUMMARY, total 2 transactions committed
    %RMU-I-LOGSUMMARY, total 0 transactions rolled back
    ----------------------------------------------------------------------
     ELAPSED:  0 00:00:00.15 CPU: 0:00:00.01 BUFIO: 11 DIRIO: 5 FAULTS: 28
    Table "TAB1" : 1 record written (1 modify, 0 delete)
    Total : 1 record written (1 modify, 0 delete)