o To use the RMU Unload After_Journal command for a database,
you must have the RMU$DUMP privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o Oracle Rdb after-image journaling protects the integrity
of your data by recording all changes made by committed
transactions to a database in a sequential log or journal
file. Oracle Corporation recommends that you enable after-
image journaling to record your database transaction activity
between full backup operations as part of your database
restore and recovery strategy. In addition to LogMiner for
Rdb, the after-image journal file is used to enable several
database performance enhancements such as the fast commit, row
cache, and hot standby features.
o When the Continuous qualifier is not specified, you can
extract changed records only from backup copies of the
after-image journal files. You create these backup files
using the RMU Backup After_Journal command.
You cannot extract from an .aij file that has been optimized
with the RMU Optimize After_Journal command.
o As part of the extraction process, Oracle RMU sorts extracted
journal records to remove duplicate record updates. Because
.aij file extraction uses the OpenVMS Sort/Merge Utility
(SORT/MERGE) to sort journal records for large transactions,
you can improve the efficiency of the sort operation by
changing the number and location of the work files used by
SORT/MERGE. The number of work files is controlled by the
Sort_Workfiles qualifier of the RMU Unload After_Journal
command. The allowed values are 1 through 10 inclusive, with
a default value of 2. The location of these work files can be
specified with device specifications, using the SORTWORKn
logical name (where n is a number from 0 to 9). See the
OpenVMS documentation set for more information on using
SORT/MERGE. See the Oracle Rdb7 Guide to Database Performance
and Tuning for more information on using these Oracle Rdb
logical names.
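For example, the following sketch directs three sort work
files to separate disks (the device, database, table, and
file names are placeholders; adjust them for your system):

     $ DEFINE/PROCESS SORTWORK0 DISK1:[SORTWORK]
     $ DEFINE/PROCESS SORTWORK1 DISK2:[SORTWORK]
     $ DEFINE/PROCESS SORTWORK2 DISK3:[SORTWORK]
     $ RMU/UNLOAD/AFTER_JOURNAL/SORT_WORKFILES=3 MF_PERSONNEL -
       AIJ_BACKUP.AIJ /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)

Placing each work file on a different spindle spreads the
sort I/O across devices.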
o When extracting large transactions, the RMU Unload After_
Journal command may create temporary work files. You can
redirect the .aij rollforward temporary work files to a
different disk and directory location than the current default
directory by assigning a different directory to the RDM$BIND_
AIJ_WORK_FILE logical name in the LNM$FILE_DEV name table.
This can help to alleviate I/O bottlenecks that might occur on
the default disk.
o You can specify a search list by defining logicals
RDM$BIND_AIJ_WORK_FILEn, with each logical pointing to
a different device or directory. The numbers must start
with 1 and increase sequentially without any gaps. When a
work file cannot be created due to a "device full" error,
Oracle Rdb looks for the next device in the search list by
translating the next sequential work file logical name. If
RDM$BIND_AIJ_WORK_FILE is defined, it is used first.
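For example, the following definitions (device and directory
names are placeholders) establish a primary work file
location and a two-entry overflow search list:

     $ DEFINE/PROCESS RDM$BIND_AIJ_WORK_FILE  DISK1:[LOGMINER_WORK]
     $ DEFINE/PROCESS RDM$BIND_AIJ_WORK_FILE1 DISK2:[LOGMINER_WORK]
     $ DEFINE/PROCESS RDM$BIND_AIJ_WORK_FILE2 DISK3:[LOGMINER_WORK]

With these definitions, work files are created on DISK1
first; if that device fills, DISK2 is tried next, then DISK3.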
o The RMU Unload After_Journal command can read either a backed
up .aij file on disk or a backed up .aij file on tape that is
in the Old_File format.
o You can select one or more tables to be extracted from an
after-image journal file. All tables specified by the Table
qualifier and all those specified in the Options file are
combined to produce a single list of output streams. A
particular table can be specified only once. Multiple tables
can be written to the same output destination by specifying
the exact same output stream specification (that is, by using
an identical file specification).
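For example, the following command (database, table, and file
names are illustrative) writes the changes for two tables to
a single output stream and a third table to its own file:

     $ RMU/UNLOAD/AFTER_JOURNAL MF_PERSONNEL AIJ_BACKUP.AIJ -
       /TABLE=(NAME=EMPLOYEES, OUTPUT=CHANGES.DAT) -
       /TABLE=(NAME=JOB_HISTORY, OUTPUT=CHANGES.DAT) -
       /TABLE=(NAME=DEPARTMENTS, OUTPUT=DEPARTMENTS.DAT)

Because the first two tables name the identical output file
specification, their records share one output destination.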
o At the completion of the unload operation, RMU creates a
number of DCL symbols that contain information about the
extraction statistics. For each table extracted, RMU creates
the following symbols:
- RMU$UNLOAD_DELETE_COUNT_tablename
- RMU$UNLOAD_MODIFY_COUNT_tablename
- RMU$UNLOAD_OUTPUT_tablename
The tablename component of the symbol is the name of the
table. When multiple tables are extracted in one operation,
multiple sets of symbols are created. The value for the
symbols RMU$UNLOAD_MODIFY_COUNT_tablename and RMU$UNLOAD_
DELETE_COUNT_tablename is a character string containing
the number of records returned for modified and deleted
rows. The RMU$UNLOAD_OUTPUT_tablename symbol is a character
string indicating the full file specification for the output
destination, or the shareable image name and routine name when
the output destination is an application callback routine.
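For example, a command procedure might test these symbols
after the unload completes (the EMPLOYEES table and file
names are illustrative):

     $ RMU/UNLOAD/AFTER_JOURNAL MF_PERSONNEL AIJ_BACKUP.AIJ -
       /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)
     $ SHOW SYMBOL RMU$UNLOAD*
     $ IF RMU$UNLOAD_DELETE_COUNT_EMPLOYEES .NES. "0" THEN -
       WRITE SYS$OUTPUT "Deleted rows were extracted"

Because the count symbols are character strings, string
comparison operators such as .NES. are used to test them.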
o When you use the Callback_Module and Callback_Routine options,
you must supply a shareable image with a universal symbol or
entry point for the LogMiner process to be able to call your
routine. See the OpenVMS documentation discussing the Linker
utility for more information about creating shareable images.
o Your Callback_Routine is called once for each output record.
The Callback_Routine is passed two parameters:
- The length of the output record, by longword value
- A pointer to the record buffer
The record buffer is a data structure with the same fields
and lengths as the record written to an output destination.
o Because the Oracle RMU image is installed as a known image,
your shareable image must also be a known image. Use the
OpenVMS Install Utility to make your shareable image known.
You may wish to establish an exit handler to perform any
required cleanup processing at the end of the extraction.
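For example, the following sketch links a shareable image
that exports a universal symbol and then makes the image
known (the image, object file, and routine names are
hypothetical; the SYMBOL_VECTOR option shown is the OpenVMS
Alpha form, and the image is assumed to have been placed in
SYS$SHARE before the INSTALL command):

     $ LINK/SHAREABLE=MYCALLBACK.EXE MYCALLBACK.OBJ, SYS$INPUT/OPTIONS
     SYMBOL_VECTOR = (MY_CALLBACK = PROCEDURE)

     $ INSTALL ADD SYS$SHARE:MYCALLBACK.EXE

After the image is installed, its image name and routine name
can be supplied to the Callback_Module and Callback_Routine
options.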
o Segmented string data (BLOB) cannot be extracted using the
LogMiner process. Because the segmented string data is
related to the base table row by means of a database key,
there is no convenient way to determine what data to extract.
Additionally, the data type of an extracted column is changed
from LIST OF BYTE VARYING to BIGINT; this quadword integer
column contains the DBKEY of the original segmented string
data, and its contents should be considered unreliable. In
generated table definition or record definition files, a
comment is added indicating that the segmented string data
type is not supported by the LogMiner for Rdb feature.
o Records removed from tables using the SQL TRUNCATE TABLE
statement are not extracted. The SQL TRUNCATE TABLE statement
does not journal each individual data record being removed
from the database.
o Records removed from tables using the SQL ALTER DATABASE
command with the DROP STORAGE AREA clause and CASCADE keyword
are not extracted. Any data deleted by this process is not
journalled.
o Records removed by dropping tables using the SQL DROP TABLE
statement are not extracted. The SQL DROP TABLE statement does
not journal each individual data record being removed from the
database.
o When the RDMS$CREATE_LAREA_NOLOGGING logical is defined, DML
operations are not available for extraction between the time
the table is created and when the transaction is committed.
o Tables that use the vertical record partitioning (VRP) feature
cannot be extracted using the LogMiner feature. LogMiner
software currently does not detect these tables. A future
release of Oracle Rdb will detect and reject access to
vertically partitioned tables.
o In binary format output, VARCHAR fields are not padded with
spaces in the output file. The VARCHAR data type is extracted
as a 2-byte count field and a fixed-length data field. The 2-
byte count field indicates the number of valid characters in
the fixed-length data field. Any additional contents in the
data field are unpredictable.
o You cannot extract changes to a table when the table
definition is changed within an after-image journal file.
Data definition language (DDL) changes to a table are not
allowed within an .aij file being extracted. All records in an
.aij file must be the current record version. If you are going
to perform DDL operations on tables that you wish to extract
using the LogMiner for Rdb, you should:
1. Back up your after-image journal files.
2. Extract the .aij files using the RMU Unload After_Journal
command.
3. Make the DDL changes.
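The steps above can be sketched as follows (the database,
journal, table, and column names are illustrative, and the
command used to invoke interactive SQL may differ on your
system):

     $ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL AIJ_BACKUP.AIJ
     $ RMU/UNLOAD/AFTER_JOURNAL MF_PERSONNEL AIJ_BACKUP.AIJ -
       /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)
     $ SQL
     SQL> ATTACH 'FILENAME MF_PERSONNEL';
     SQL> ALTER TABLE EMPLOYEES ADD COLUMN BADGE_ID INTEGER;
     SQL> COMMIT;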
o Do not use the OpenVMS Alpha High Performance Sort/Merge
utility (selected by defining the logical name SORTSHR
to SYS$SHARE:HYPERSORT) when using the LogMiner feature.
HYPERSORT supports only a subset of the library sort routines
that LogMiner requires. Make sure that the SORTSHR logical
name is not defined to HYPERSORT.
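To verify that HYPERSORT is not selected before running the
extraction, translate the logical name and, if necessary,
remove the definition (use the table qualifier, such as
/SYSTEM, that matches where the logical was defined):

     $ SHOW LOGICAL SORTSHR
     $ DEASSIGN SORTSHR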
o The metadata information file used by the RMU Unload After_
Journal command is in an internal binary format. The contents
and format are not documented and are not directly accessible
by other utilities. The content and format of the metadata
information file are specific to a version of the RMU Unload
After_Journal utility. As new versions and updates of Oracle
Rdb are released, you will probably have to re-create the
metadata information file. The same version of Oracle Rdb must
be used to both write and read a metadata information file.
The RMU Unload After_Journal command verifies the format and
version of the metadata information file and issues an error
message in the case of a version mismatch.
o For debugging purposes, you can format and display the
contents of a metadata information file by using the
Options=Dump qualifier with the Restore_Metadata qualifier.
This dump may be helpful to Oracle Support engineers during
problem analysis. The contents and format of the metadata
information file are subject to change.
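For example, a command of the following general form displays
the metadata dump (the database and metadata file names are
illustrative; see the qualifier descriptions for the exact
syntax on your version):

     $ RMU/UNLOAD/AFTER_JOURNAL MF_PERSONNEL -
       /RESTORE_METADATA=LOGMINER_META.DAT /OPTIONS=DUMP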
o If you use both the Output and Statistics_Interval qualifiers,
the output stream used for the log, trace, and statistics
information is flushed to disk (via the RMS $FLUSH service) at
each statistics interval. This ensures that the output file
of trace and log information is written to disk periodically.
o You can specify input backup after-image journal files along
with the Continuous qualifier from the command line. The
specified after-image journal backup files are processed in
an offline mode. Once they have been processed, the RMU Unload
After_Journal command switches to "online" mode and the active
online journals are processed.
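For example, the following command (database, journal, table,
and file names are illustrative) processes two backed up
journal files and then continues with the live online
journals:

     $ RMU/UNLOAD/AFTER_JOURNAL/CONTINUOUS MF_PERSONNEL -
       AIJ_BACKUP1.AIJ, AIJ_BACKUP2.AIJ -
       /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)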
o When no input after-image journal files are specified on the
command line, the Continuous LogMiner starts extracting at the
beginning of the earliest modified online after-image journal
file. The Restart= qualifier can be used to control the first
transaction to be extracted.
o The Continuous LogMiner requires fixed-size circular after-
image journals.
o An after-image journal file cannot be backed up if there
are any Continuous LogMiner checkpoints in the aij file.
The Continuous LogMiner moves its checkpoint to the physical
end-of-file for the online .aij file that it is extracting.
o To ensure that all records have been written by all database
users, a Continuous LogMiner process does not switch to the
next live journal file until that journal has been written to
by another process. Live journals should not be backed up
while the Continuous LogMiner process is processing a list of
.aij backup files. This is an unsupported activity and could
lead to the LogMiner losing data.
o If backed up after-image journal files are specified on the
command line and the Continuous qualifier is specified, the
journal sequence numbers must ascend directly from the backed
up journal files to the online journal files.
In order to preserve the after-image journal file sequencing
as processed by the RMU Unload After_Journal /Continuous
command, it is important that no after-image journal backup
operations are attempted between the start of the command and
when the Continuous LogMiner process reaches the live online
after-image journals.
o You can run multiple Continuous LogMiner processes at one
time on a database. Each Continuous LogMiner process acts
independently.
o The Continuous LogMiner reads the live after-image journal
file just behind writers to the journal. This will likely
increase the I/O load on the disk devices where the journals
are located. The Continuous LogMiner attempts to minimize
unneeded journal I/O by checking a "High Water Mark" lock to
determine whether the journal has been written to and where
the highest written block is located.