Allows you to extract added, modified, and deleted record
contents, as well as transaction commit information, from
committed transactions for specified tables in one or more
after-image journal files.
1 – Description
The RMU Unload After_Journal command translates the binary data
record contents of an after-image journal (.aij) file into an
output file. Data records for the specified tables for committed
transactions are extracted to an output stream (file, device,
or application callback) in the order that the transactions were
committed.
Before you use the RMU Unload After_Journal command, you must
enable the database for LogMiner extraction. Use the RMU Set
Logminer command to enable the LogMiner for Rdb feature for the
database. Before you use the RMU Unload After_Journal command
with the Continuous qualifier, you must enable the database for
Continuous LogMiner extraction. See the Set Logminer help topic
for more information.
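For example (an illustrative sketch; MFP.RDB is a hypothetical
database name), commands similar to the following enable a
database for LogMiner extraction and for Continuous LogMiner
extraction:
$ RMU /SET LOGMINER /ENABLE MFP.RDB
$ RMU /SET LOGMINER /ENABLE /CONTINUOUS MFP.RDB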
Data records extracted from the .aij file are those records that
transactions added, modified, or deleted in base database tables.
Index nodes, database metadata, segmented strings (BLOB), views,
COMPUTED BY columns, system relations, and temporary tables
cannot be unloaded from after-image journal files.
For each transaction, only the final content of a record
is extracted. Multiple changes to a single record within a
transaction are condensed so that only the last revision of the
record appears in the output stream. You cannot determine
directly from the after-image journal file which columns of
a data record were changed; to do so, you must compare the
extracted record with a previous version of the record.
The database used to create the after-image journal files being
extracted must be available during the RMU Unload After_Journal
command execution. The database is used to obtain metadata
information (such as table names, column counts, record version,
and record compression) needed to extract data records from the
.aij file. The database is read solely to load the metadata
and is then detached. Database metadata information can also
be saved and used in a later session. See the Save_MetaData and
Restore_MetaData qualifiers for more information.
If you use the Continuous qualifier, the database must be opened
on the node where the Continuous LogMiner process is running. The
database is always used and must be available for both metadata
information and for access to the online after-image journal
files. The Save_MetaData and Restore_MetaData qualifiers are not
permitted with the Continuous qualifier.
When one or more .aij files and the Continuous qualifier are
both specified on the RMU Unload After_Journal command line,
it is important that no .aij backup operations occur until the
Continuous LogMiner process has transitioned to online mode
(where the active online .aij files are being extracted). If you
are using automatic .aij backups and wish to use the Continuous
LogMiner feature, Oracle recommends that you consider disabling
the automatic backup feature (ABS) and use manual .aij backups
so that you can explicitly control when .aij backup operations
occur.
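For example, a manual after-image journal backup might be
performed with a command similar to the following (the file names
are hypothetical):
$ RMU /BACKUP /AFTER_JOURNAL MFP.RDB MFP.AIJBCK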
The after-image journal file or files are processed sequentially.
All specified tables are extracted in one pass through the
after-image journal file.
As each transaction commit record is processed, all modified and
deleted records for the specified tables are sorted to remove
duplicates. The modified and deleted records are then written
to the output streams. Transactions that were rolled back are
ignored. Data records for tables that are not being extracted are
ignored. The actual order of output records within a transaction
is not predictable.
In the extracted output, records that were modified or added are
returned as being modified. It is not possible to distinguish
between inserted and updated records in the output stream.
Deleted (erased) records are returned as being deleted. A
transaction that modifies and deletes a record generates only
a deleted record. A transaction that adds a new record to
the database and then deletes it within the same transaction
generates only a deleted record.
The LogMiner process signals that a row has been deleted by
placing a D in the RDB$LM_ACTION field. The contents of the
row at the instant before the delete operation are recorded
in the user fields of the output record. If a row was modified
several times within a transaction before being deleted, the
output record contains only the delete indicator and the results
of the last modify operation. If a row is inserted and deleted
in the same transaction, only the delete record appears in the
output.
Records from multiple tables can be output to the same or to
different destination streams. Possible output destination
streams include the following:
o File
o OpenVMS Mailbox
o OpenVMS Pipe
o Direct callback to an application through a run-time activated
shareable image
Refer to the Using_LogMiner_for_Rdb help topic for more
information about using the LogMiner for Rdb feature.
2 – Format
RMU/Unload/After_Journal root-file-spec aij-file-name

Command Qualifiers                        Defaults

/Before=date-time                         None
/Continuous                               /NoContinuous
/Extend_Size=integer                      /Extend_Size=1000
/Format=options                           See description
/Ignore=Old_Version[=table-list]          /Ignore=Old_Version=all
/Include=Action=(include-type)            /Include=Action=
                                            (NoCommit,Modify,Delete)
/IO_Buffers=integer                       /IO_Buffers=2
/[No]Log                                  Current DCL verify value
/Options=options-list                     See description
/Order_AIJ_Files                          /NoOrder_AIJ_Files
/Output=file-spec                         /Output=SYS$OUTPUT
/Parameter=character-strings              None
/Quick_Sort_Limit=integer                 /Quick_Sort_Limit=5000
/Restart=(restart-point)                  None
/Restore_Metadata=file-spec               None
/Save_Metadata=file-spec                  None
/Select=selection-type                    /Select=Commit_Transaction
/Since=date-time                          None
/Sort_Workfiles=integer                   /Sort_Workfiles=2
/Statistics_Interval=integer              See description
/[No]Symbols                              /Symbols
/Table=(Name=table-name,                  See description
  [table-options ...])
/[No]Trace                                /NoTrace
3 – Parameters
3.1 – root-file-spec
The root file specification of the database for the after-image
journal file from which tables will be unloaded. The default file
extension is .rdb.
The database must be the same actual database that was used to
create the after-image journal files. The database is required
so that the table metadata (information about data) is available
to the RMU Unload After_Journal command. In particular, the names
and relation identification of valid tables within the database
are required along with the number of columns in the table and
the compression information for the table in various storage
areas.
The RMU Unload After_Journal process attaches to the database
briefly at the beginning of the extraction operation in order to
read the metadata. Once the metadata has been read, the process
disconnects from the database for the remainder of the operation
unless the Continuous qualifier is specified. The Continuous
qualifier indicates that the extraction operation is to run non-
stop, and the process remains attached to the database.
3.2 – aij-file-name
One or more input after-image journal backup files to be used
as the source of the extraction operation. Multiple journal
files can be extracted by specifying a comma-separated list
of file specifications. Oracle RMU supports OpenVMS wildcard
specifications (using the * and % characters) to extract a
group of files. A file specification beginning with the at
(@) character refers to an options file containing a list of
after-image journal files (rather than the file specification
of an after-image journal itself). If you use the at character
syntax, you must enclose the at character and the file name in
double quotation marks (for example, specify aij-file-name as
"@files.opt"). The default file extension is .aij.
4 – Command Qualifiers
4.1 – Before
Before=date-time
Specifies the ending time and date for transactions to be
extracted. Based on the Select qualifier, transactions that
committed or started prior to the specified Before date are
selected. Information changed due to transactions that committed
or started after the Before date is not included in the output.
4.2 – Continuous
Continuous
Nocontinuous
Causes the LogMiner process to attach to the database and begin
extracting records in "near-real" time. When the Continuous
qualifier is specified, the RMU Unload After_Journal command
extracts records from the online after-image journal files of the
database until it is stopped via an external source (for example,
Ctrl/y, STOP/ID, $FORCEX, or database shutdown).
A database must be explicitly enabled for the Continuous LogMiner
feature. To enable the Continuous LogMiner feature, use the RMU
Set Logminer command with the Enable and Continuous qualifiers;
to disable use of the Continuous LogMiner feature, use the RMU
Set Logminer command with the Enable and Nocontinuous qualifiers.
The output from the Continuous LogMiner process is a continuous
stream of information. The intended use of the Continuous
LogMiner feature is to write the changes into an OpenVMS
mailbox or pipe, or to call a user-supplied callback routine.
Writing output to a disk file is completely functional with the
Continuous LogMiner feature; however, no built-in functionality
exists to prevent the files from growing indefinitely.
It is important that the callback routine or processing of
the mailbox be very responsive. If the user-supplied callback
routine blocks, or if the mailbox is not being read fast enough
and fills, the RMU Unload After_Journal command will stall. The
Continuous LogMiner process prevents backing up the after-image
journal that it is currently extracting along with all subsequent
journals. If the Continuous LogMiner process is blocked from
executing for long enough, it is possible that all available
journals will fill and will not be backed up.
When a database is enabled for the Continuous LogMiner feature,
an AIJ "High Water" lock (AIJHWM) is utilized to help coordinate
and maintain the current .aij end-of-file location. The lock
value block for the AIJHWM lock contains the location of the
highest written .aij block. The RMU Unload After_Journal command
with the Continuous qualifier polls the AIJHWM lock to determine
if data has been written to the .aij file and to find the highest
written block. If a database is not enabled for the Continuous
LogMiner feature, there is no change in locking behavior; the
AIJHWM lock is not maintained and thus the Continuous qualifier
of the RMU Unload After_Journal command is not allowed.
In order to maintain the .aij end-of-file location lock,
processes that write to the after-image journal file must use
the lock to serialize writing to the journal. When the Continuous
LogMiner feature is not enabled, processes instead coordinate
allocating space in the after-image journal file and can write
to the file without holding a lock. The Continuous LogMiner
process requires that the AIJHWM lock be held during the .aij
I/O operation. In some cases, this can reduce overall throughput
to the .aij file because it limits the overlapping of I/O write
operations by multiple processes.
The Save_Metadata and Restore_Metadata qualifiers are
incompatible with the Continuous qualifier.
4.3 – Extend Size
Extend_size=integer
Specifies the file allocation and extension quantity for output
data files. The default extension size is 1000 blocks. Using a
larger value can help reduce output file fragmentation and can
improve performance when large amounts of data are extracted.
4.4 – Format
Format=options
If the Format qualifier is not specified, Oracle RMU outputs data
to a fixed-length binary flat file.
The format options are:
o Format=Binary
If you specify the Format=Binary option, Oracle RMU does not
perform any data conversion; data is output in a flat file
format with all data in the original binary state.
Output Fields describes the output fields and data types of an
output record in Binary format.
Table 19 Output Fields
Field Name    Data Type    Length (bytes)    Description
ACTION CHAR (1) 1 Indicates record state.
"M" indicates an insert or
modify action. "D" indicates a
delete action. "E" indicates
stream end-of-file (EOF)
when a callback routine is
being used. "P" indicates
a value from the command
line Parameter qualifier
when a callback routine is
being used (see Parameter
qualifier). "C" indicates
transaction commit information
when the Include=Action=Commit
qualifier is specified.
RELATION_ CHAR (31) 31 Table name. Space padded to 31
NAME characters.
RECORD_TYPE INTEGER 4 The Oracle Rdb internal
(Longword) relation identifier.
DATA_LEN SMALLINT 2 Length, in bytes, of the data
(Word) record content.
NBV_LEN SMALLINT 2 Length, in bits, of the null
(Word) bit vector content.
DBK BIGINT 8 Record's logical database key.
(Quadword) The database key is a 3-field
structure containing a 16-
bit line number, a 32-bit
page number and a 16-bit area
number.
START_TAD DATE VMS 8 Date/time of the start of the
(Quadword) transaction.
COMMIT_TAD DATE VMS 8 Date/time of the commitment of
(Quadword) the transaction.
TSN BIGINT 8 Transaction sequence number of
(Quadword) the transaction that performed
the record operation.
RECORD_ SMALLINT 2 Record version.
VERSION (Word)
Record Data Varies Actual data record field
contents.
Record NBV BIT VECTOR Null bit vector. There is
(array of one bit for each field in the
bits) data record. If a bit value
is 1, the corresponding field
is NULL; if a bit value is
0, the corresponding field
is not NULL and contains an
actual data value. The null
bit vector begins on a byte
boundary. Any extra bits in
the final byte of the vector
after the final null bit are
unused.
o Format=Dump
If you specify the Format=Dump option, Oracle RMU produces an
output format suitable for viewing. Each line of Dump format
output contains the column name (including LogMiner prefix
columns) and up to 200 bytes of the column data. Unprintable
characters are replaced with periods (.), and numbers and
dates are converted to text. NULL columns are indicated
with the string "NULL". This format is intended to assist
in debugging; the actual output contents and formatting will
change in the future.
o Format=Text
If you specify the Format=Text option, Oracle RMU converts
all data to printable text in fixed-length columns before
unloading it. VARCHAR(n) strings are padded with blanks when
the specified string has fewer characters than n so that the
resulting string is n characters long.
o Format=(Delimited_Text [,delimiter-options])
If you specify the Format=Delimited_Text option, Oracle RMU
applies delimiters to all data before unloading it.
DATE VMS dates are output in the collatable time format, which
is yyyymmddhhmmsscc. For example, March 20, 1993 is output as:
1993032000000000.
Delimiter options are:
- Prefix=string
Specifies a prefix string that begins any column value in
the ASCII output file. If you omit this option, the column
prefix is a quotation mark (").
- Separator=string
Specifies a string that separates column values of a row.
If you omit this option, the column separator is a single
comma (,).
- Suffix=string
Specifies a suffix string that ends any column value in
the ASCII output file. If you omit this option, the column
suffix is a quotation mark (").
- Terminator=string
Specifies the row terminator that completes all the column
values corresponding to a row. If you omit this option, the
row terminator is the end of the line.
- Null=string
Specifies the string that is written to the output file when
the database column value is NULL.
The Null option can be specified on the command line as any
one of the following:
* A quoted string
* An empty set of double quotes ("")
* No string
The string that represents the null character must be
quoted on the Oracle RMU command line. You cannot specify a
blank space or spaces as the null character. You cannot use
the same character for the Null value and other Delimited_
Text options.
NOTE
The values for each of the strings specified in the
delimiter options must be enclosed within quotation
marks. Oracle RMU strips these quotation marks while
interpreting the values. If you want to specify a
quotation mark (") as a delimiter, specify a string
of four quotation marks. Oracle RMU interprets four
quotation marks as your request to use one quotation
mark as a delimiter. For example, Suffix = """".
Oracle RMU reads these quotation marks as follows:
o The first quotation mark is stripped from the string.
o The second and third quotation mark are interpreted
as your request for one quotation mark (") as a
delimiter.
o The fourth quotation mark is stripped.
This results in one quotation mark being used as a
delimiter.
Furthermore, if you want to specify a quotation mark as
part of the delimited string, you must use two quotation
marks for each quotation mark that you want to appear in
the string. For example, Suffix = "**""**" causes Oracle
RMU to use a delimiter of **"**.
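For example, the following hypothetical command requests
delimited text output with a vertical bar separator and a single
quotation mark as the suffix (written as four quotation marks):
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /FORMAT = (DELIMITED_TEXT, SEPARATOR = "|", SUFFIX = """") -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.TXT)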
4.5 – Ignore
Ignore=Old_Version[=table-list]
Specifies optional conditions or items to ignore.
The RMU Unload After_Journal command treats non-current record
versions in the AIJ file as a fatal error condition. That is,
attempting to extract a record whose record version does not
match the table's current maximum version results in a fatal
error.
There are, however, some very rare cases where a verb rollback
of a modification of a record may result in an old version of a
record being written to the after-image journal even though the
transaction did not actually complete a successful modification
to the record. The RMU Unload After_Journal command detects the
old record version and aborts with a fatal error in this unlikely
case.
When the Ignore=Old_Version qualifier is present, the RMU Unload
After_Journal command displays a warning message for each
record that has a non-current record version and the record
is not written to the output stream. The Old_Version qualifier
accepts an optional list of table names to indicate that only the
specified tables are permitted to have non-current record version
errors ignored.
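For example, the following hypothetical command ignores
non-current record versions for the EMPLOYEES table only; an old
record version in any other table still causes a fatal error:
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /IGNORE = OLD_VERSION = (EMPLOYEES) -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)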
4.6 – Include
Include=Action=include-type
Specifies if deleted or modified records or transaction commit
information is to be extracted from the after-image journal. The
following keywords can be specified:
o Commit
NoCommit
If you specify Commit, a transaction commit record is
written to each output stream as the final record for each
transaction. The commit information record is written to
output streams after all other records for the transaction
have been written. The default is NoCommit.
Because output streams are created with a default file name
of the table being extracted, it is important to specify an
explicit file name on each occurrence of the output stream.
In particular, when you write to a non-file-oriented output
device (such as a pipe or mailbox), you must specify an
explicit file name on each output destination. That is, rather
than specifying Output=MBA1234: for each output stream, use
Output=MBA1234:MBX, or any file name that is the same on all
occurrences of MBA1234:. Without an explicit file name, each
stream defaults to a different name, and additional, unexpected
commit records can be returned. However, this is generally a
restriction only when using a stream-oriented output device (as
opposed to a disk file).
The binary record format is based on the standard LogMiner
output format. However, some fields are not used in the commit
action record. The binary format and contents of this record
are shown in Commit Record Contents. This record type is
written for all output data formats.
Table 20 Commit Record Contents
Field          Length (bytes)    Contents
ACTION 1 "C"
RELATION 31 Zero
RECORD_TYPE 4 Zero
DATA_LEN 2 Length of RM_TID_LEN, AERCP_LEN, RM_
TID, AERCP
NBV_LEN 2 Zero
TID 4 Transaction (Attach) ID
PID 4 Process ID
START_TAD 8 Transaction Start Time/Date
COMMIT_TAD 8 Transaction Commit Time/Date
TSN 8 Transaction ID
RM_TID_LEN 4 Length of the Global TID
AERCP_LEN 4 Length of the AERCP information
RM_TID RM_TID_LEN Global TID
AERCP AERCP_LEN Restart Control Information
RDB$LM_        12                USERNAME
USERNAME
When the original transaction took part in a distributed,
two-phase transaction, the RM_TID component is the Global
transaction manager (XA or DDTM) unique transaction ID.
Otherwise, this field contains binary zeroes.
The AIJ Extract Recovery Control Point (AERCP) information is
used to uniquely identify this transaction within the scope
of the database and after-image journal files. It contains
the .aij sequence number, VBN, and TSN of the last "Micro Quiet
Point", and is used by the Continuous LogMiner process to restart
at a particular point in the journal sequence.
o Delete
NoDelete
If you specify Delete, pre-deletion record contents are
extracted from the .aij file. If you specify NoDelete, no
pre-deletion record contents are extracted. The default is
Delete.
o Modify
NoModify
If you specify Modify, modified or added record contents are
extracted from the .aij file. If you specify NoModify, then no
modified or added record contents are extracted. The default
is Modify.
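For example, the following hypothetical command extracts only
pre-deletion record contents and a commit record for each
transaction, suppressing modified and added records:
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /INCLUDE = ACTION = (COMMIT, NOMODIFY, DELETE) -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)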
4.7 – IO Buffers
IO_Buffers=integer
Specifies the number of I/O buffers used for output data files.
The default number of buffers is two, which is generally
adequate. With sufficiently fast I/O subsystem hardware,
additional buffers may improve performance. However, using a
larger number of buffers will also consume additional virtual
memory and process working set.
4.8 – Log
Log
Nolog
Specifies that the extraction of the .aij file is to be reported
to SYS$OUTPUT or the destination specified with the Output
qualifier. When activity is logged, the output from the Log
qualifier provides the number of transactions committed or rolled
back. The default is the setting of the DCL VERIFY flag, which is
controlled by the DCL SET VERIFY command.
4.9 – Options
Options=options-list
The following options can be specified:
o File=file-spec
An options file contains a list of tables and output
destinations. The options file can be used instead of, or
along with, the Table qualifier to specify the tables to be
extracted. Each line of the options file must specify a table
name prefixed with "Table=". After the table name, the output
destination is specified as either "Output=", or "Callback_
Module=" and "Callback_Routine=", for example:
TABLE=tblname,OUTPUT=outfile
TABLE=tblname,CALLBACK_MODULE=image,CALLBACK_ROUTINE=routine
You can use the Record_Definition=file-spec option from the
Table qualifier to create a record definition file for the
output data. The default file type is .rrd; the default file
name is the name of the table.
You can use the Table_Definition=file-spec option from
the Table qualifier to create a file that contains an SQL
statement that creates a table to hold transaction data. The
default file type is .sql; the default file name is the name
of the table.
Each option in the Options=File qualifier must be fully
specified (no abbreviations are allowed) and followed with
an equal sign (=) and a value string. The value string must
be followed by a comma or the end of a line. Continuation
lines can be specified by using a trailing dash. Comments are
indicated by using the exclamation point (!) character.
You can use the asterisk (*) and the percent sign (%)
wildcard characters in the table name specification to select
all tables that satisfy the components you specify. The
asterisk matches zero or more characters; the percent sign
matches a single character.
For table name specifications that contain wildcard
characters, if the first character of the string is a pound
sign (#), the wildcard specification is changed to a "not
matching" comparison. This allows exclusion of tables based
on a wildcard specification. The pound sign designation is
only evaluated when the table name specification contains an
asterisk or percent sign.
For example, a table name specification of "#FOO%" selects
all tables except those whose names match "FOO%" (that is,
four-character names beginning with "FOO"). See also the
example following this options list.
o Shared_Read
Specifies that the input after-image journal backup files are
to be opened with an RMS shared locking specification.
o Dump
Specifies that the contents of an input metadata file are to
be formatted and displayed. Typically, this information is
used as a debugging tool.
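The following sketch shows a hypothetical options file that uses
the pound sign exclusion syntax to extract every table whose name
does not match TMP*, writing all of them to a single output file:
$ TYPE TABLES.OPT
! Extract all tables except those matching TMP*
TABLE=#TMP*, -
      OUTPUT=CHANGES.DAT
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /OPTIONS = FILE = TABLES.OPT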
4.10 – Order AIJ Files
Order_AIJ_Files
NoOrder_AIJ_Files
By default, after-image journal files are processed in the order
that they are presented to the RMU Unload After_Journal command.
The Order_AIJ_Files qualifier specifies that the input after-
image journal files are to be processed in increasing order by
sequence number. This can be of benefit when you use wildcard (*
or %) processing of a number of input files. The .aij files are
each opened, the first block is read (to determine the sequence
number), and the files are closed prior to the sorting operation.
4.11 – Output
Output=file-spec
Redirects the log and trace output (selected with the Log and
Trace qualifiers) to the named file. If this qualifier is not
specified, the output generated by the Log and Trace qualifiers,
which can be voluminous, is displayed to SYS$OUTPUT.
4.12 – Parameter
Parameter=character-strings
Specifies one or more character strings that are concatenated
together and passed to the callback routine upon startup.
For each table that is associated with a user-supplied callback
routine, the callback routine is called with two parameters: the
length of the Parameter record and a pointer to the Parameter
record. The binary format and contents of this record are shown
in Parameter Record Contents.
Table 21 Parameter Record Contents
Field          Length (bytes)    Contents
ACTION 1 "P"
RELATION 31 Relation name
RECORD_TYPE 4 Zero
DATA_LEN 2 Length of parameter string
NBV_LEN 2 Zero
LDBK 8 Zero
START_TAD 8 Zero
COMMIT_TAD 8 Zero
TSN 8 Zero
DATA ? Variable length parameter string
content
4.13 – Quick Sort Limit
Quick_Sort_Limit=integer
Specifies the maximum number of records that will be sorted with
the in-memory "quick sort" algorithm.
The default value is 5000 records. The minimum value that can be
specified is 10 and the maximum value is 100,000.
Larger values specified for the Quick_Sort_Limit qualifier may
reduce sort work file I/O at the expense of additional CPU time
and/or memory consumption. A value that is too small may result
in additional disk file I/O. In general, the default value should
be accepted.
4.14 – Restart
Restart=restart-point
Specifies an AIJ Extract Restart Control Point (AERCP) that
indicates the location to begin the extraction. The AERCP
indicates the transaction sequence number (TSN) of the last
extracted transaction along with a location in the .aij file
where a known "Micro-quiet point" exists.
When the Restart qualifier is not specified and no input after-
image journal files are specified on the command line, the
Continuous LogMiner process starts extracting at the beginning
of the earliest modified online after-image journal file.
When formatted for text display, the AERCP structure consists of
the six fields (the MBZ field is excluded) displayed as unsigned
integers separated by dashes; for example, "1-28-12-7-3202-3202".
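For example, a hypothetical restart of a Continuous LogMiner
session from a previously recorded AERCP value might look like
this:
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
      /RESTART = "1-28-12-7-3202-3202" -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = LOADER_MBX:MBX)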
4.15 – Restore Metadata
Restore_Metadata=file-spec
Specifies that the RMU Unload After_Journal command is to read
database metadata information from the specified file. The
Database parameter is required but the database itself is not
accessed when the Restore_Metadata qualifier is specified. The
default file type is .metadata. The Continuous qualifier is not
allowed when the Restore_Metadata qualifier is present.
Because the database is not available when the Restore_Metadata
qualifier is specified, certain database-specific actions cannot
be taken. For example, checks for after-image journaling are
disabled. Because the static copy of the metadata information is
not updated as database structure and table changes are made, it
is important to make sure that the metadata file is saved again
after database DDL operations.
When the Restore_Metadata qualifier is specified, additional
checks are made to ensure that the after-image journal files
were created using the same database that was used to create the
metadata file. These checks provide additional security and help
prevent accidental mismatching of files.
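As a sketch of the overall workflow (file names are
hypothetical), metadata can be saved in one session and reused in
a later one:
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB /SAVE_METADATA = MFP.METADATA
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /RESTORE_METADATA = MFP.METADATA -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)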
4.16 – Save Metadata
Save_Metadata=file-spec
Specifies that the RMU Unload After_Journal command is to
write metadata information to the named file. The Continuous,
Restore_Metadata, Table, and Options=file qualifiers and the
aij-file-name parameter are not allowed when the Save_Metadata
qualifier is present. The default file type is .metadata.
4.17 – Select
Select=selection-type
Specifies if the date and time of the Before and Since qualifiers
refer to transaction start time or transaction commit time.
The following options can be specified as the selection-type of
the Select qualifier:
o Commit_Transaction
Specifies that the Before and Since qualifiers select
transactions based on the time of the transaction commit.
o Start_Transaction
Specifies that the Before and Since qualifiers select
transactions based on the time of the transaction start.
The default for date selection is Commit_Transaction.
4.18 – Since
Since=date-time
Specifies the starting time for transactions to be extracted.
Depending on the value specified in the Select qualifier,
transactions that committed or started on or after the specified
Since date are selected. Information from transactions that
committed or started prior to the specified Since date is not
included in the output.
4.19 – Sort Workfiles
Sort_Workfiles=integer
Specifies the number of sort work files. The default number
of sort work files is two. When large transactions are being
extracted, using additional sort work files may improve
performance by distributing I/O loads over multiple disk devices.
Use the SORTWORKn (where n is a number from 0 to 9) logical names
to specify the location of the sort work files.
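For example, the following hypothetical commands distribute three
sort work files across three disks:
$ DEFINE SORTWORK0 DISK1:[SCRATCH]
$ DEFINE SORTWORK1 DISK2:[SCRATCH]
$ DEFINE SORTWORK2 DISK3:[SCRATCH]
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /SORT_WORKFILES = 3 -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)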
4.20 – Statistics Interval
Statistics_Interval=integer
Specifies that statistics are to be displayed at regular
intervals so that you can evaluate the progress of the unload
operation.
The displayed statistics include:
o Elapsed time
o CPU time
o Buffered I/O
o Direct I/O
o Page faults
o Number of records unloaded for a table
o Total number of records extracted for all tables
If the Statistics_Interval qualifier is specified, the default
interval is 60 seconds. The minimum value is one second. If the
unload operation completes successfully before the first time
interval has passed, you will receive an informational message
on the number of records unloaded. If the unload operation is
unsuccessful before the first time interval has passed, you will
receive error messages and statistics on the number of records
unloaded.
At any time during the unload operation, you can press Ctrl/T to
display the current statistics.
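For example, the following hypothetical command displays
statistics every 30 seconds and directs the log output to a file:
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /LOG /STATISTICS_INTERVAL = 30 /OUTPUT = UNLOAD.LOG -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)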
4.21 – Symbols
Symbols
Nosymbols
Specifies whether DCL symbols are to be created, indicating
information about records extracted for each table.
If a large number of tables is being unloaded, the many
associated symbols that are created can exhaust the CLI
symbol table space. The error message "LIB-F-INSCLIMEM,
insufficient CLI memory" is returned in this case. Specify the
Nosymbols qualifier to prevent creation of the symbols.
The default is Symbols, which causes the symbols to be created.
4.22 – Table
Table=(Name=table-name, table-options)
Specifies the name of a table to be unloaded and an output
destination. The table-name must be a table within the database.
Views, indexes, and system relations may not be unloaded from the
after-image journal file.
The asterisk (*) and the percent sign (%) wildcard characters
can be used in the table name specification to select all tables
that satisfy the components you specify. The asterisk matches
zero or more characters and the percent sign matches a single
character.
For table name specifications that contain wildcard characters,
if the first character of the string is a pound sign (#),
the wildcard specification is changed to a "not matching"
comparison. This allows exclusion of tables based on a wildcard
specification. The pound sign designation is only evaluated when
the table name specification contains an asterisk or percent
sign.
For example, a table name specification of "#FOO%" selects all
tables except those whose names match "FOO%" (that is,
four-character names beginning with "FOO").
The following table-options can be specified with the Table
qualifier:
o Callback_Module=image-name, Callback_Routine=routine-name
The LogMiner process uses the OpenVMS library routine
LIB$FIND_IMAGE_SYMBOL to activate the specified shareable
image and locate the specified entry point routine name. This
routine is called with each extracted record. A final call is
made with the Action field set to "E" to indicate the end of
the output stream. These options must be specified together.
o Control
Use the Control table option to produce output files that
can be used by SQL*Loader to load the extracted data into an
Oracle database. This option must be used in conjunction with
fixed text format for the data file. The Control table option
can be specified on either the command line or in an options
file.
o Output=file-spec
If an Output file specification is present, unloaded records
are written to the specified location.
o Record_Definition=file-spec
The Record_Definition=file-spec option can be used to create a
record definition file for the output data. The default file
type is .rrd; the default file name is the name of the table.
o Table_Definition=file-spec
You can use the Table_Definition=file-spec option to create
a file that contains an SQL statement that creates a table
to hold transaction data. The default file type is .sql; the
default file name is the name of the table.
Unlike most qualifiers, for which only the final occurrence on
the command line takes effect, the Table qualifier can be
specified multiple times for the RMU Unload After_Journal
command. Each occurrence of the Table qualifier must specify a
different table.
4.23 – Trace
Trace
Notrace
Specifies that the unloading of the .aij file be traced. The
default is Notrace. When the unload operation is traced, the
output from the Trace qualifier identifies transactions in the
.aij file by TSNs and describes what Oracle RMU did with each
transaction during the unload process. You can specify the Log
qualifier with the Trace qualifier.
5 – Usage Notes
o To use the RMU Unload After_Journal command for a database,
you must have the RMU$DUMP privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o Oracle Rdb after-image journaling protects the integrity
of your data by recording all changes made by committed
transactions to a database in a sequential log or journal
file. Oracle Corporation recommends that you enable after-
image journaling to record your database transaction activity
between full backup operations as part of your database
restore and recovery strategy. In addition to LogMiner for
Rdb, the after-image journal file is used to enable several
database performance enhancements such as the fast commit, row
cache, and hot standby features.
o When the Continuous qualifier is not specified, you can only
extract changed records from a backup copy of the after-image
journal files. You create this file using the RMU Backup
After_Journal command.
You cannot extract from an .aij file that has been optimized
with the RMU Optimize After_Journal command.
o As part of the extraction process, Oracle RMU sorts extracted
journal records to remove duplicate record updates. Because
.aij file extraction uses the OpenVMS Sort/Merge Utility
(SORT/MERGE) to sort journal records for large transactions,
you can improve the efficiency of the sort operation by
changing the number and location of the work files used by
SORT/MERGE. The number of work files is controlled by the
Sort_Workfiles qualifier of the RMU Unload After_Journal
command. The allowed values are 1 through 10 inclusive, with
a default value of 2. The location of these work files can be
specified with device specifications, using the SORTWORKn
logical name (where n is a number from 0 to 9). See the
OpenVMS documentation set for more information on using
SORT/MERGE. See the Oracle Rdb7 Guide to Database Performance
and Tuning for more information on using these Oracle Rdb
logical names.
o When extracting large transactions, the RMU Unload After_
Journal command may create temporary work files. You can
redirect the .aij rollforward temporary work files to a
different disk and directory location than the current default
directory by assigning a different directory to the RDM$BIND_
AIJ_WORK_FILE logical name in the LNM$FILE_DEV name table.
This can help to alleviate I/O bottlenecks that might occur on
the default disk.
o You can specify a search list by defining logicals
RDM$BIND_AIJ_WORK_FILEn, with each logical pointing to
a different device or directory. The numbers must start
with 1 and increase sequentially without any gaps. When an
AIJ work file cannot be created due to a "device full" error,
Oracle Rdb looks for the next device in the search list
by translating the next sequential work file logical. If
RDM$BIND_AIJ_WORK_FILE is defined, it is used first.
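For example, the following hypothetical definitions direct the
work files to a primary scratch disk with two overflow devices:
$ DEFINE RDM$BIND_AIJ_WORK_FILE DISK1:[SCRATCH]
$ DEFINE RDM$BIND_AIJ_WORK_FILE1 DISK2:[SCRATCH]
$ DEFINE RDM$BIND_AIJ_WORK_FILE2 DISK3:[SCRATCH]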
o The RMU Unload After_Journal command can read either a backed
up .aij file on disk or a backed up .aij file on tape that is
in the Old_File format.
o You can select one or more tables to be extracted from an
after-image journal file. All tables specified by the Table
qualifier and all those specified in the Options file are
combined to produce a single list of output streams. A
particular table can be specified only once. Multiple tables
can be written to the same output destination by specifying
the exact same output stream specification (that is, by using
an identical file specification).
o At the completion of the unload operation, RMU creates a
number of DCL symbols that contain information about the
extraction statistics. For each table extracted, RMU creates
the following symbols:
- RMU$UNLOAD_DELETE_COUNT_tablename
- RMU$UNLOAD_MODIFY_COUNT_tablename
- RMU$UNLOAD_OUTPUT_tablename
The tablename component of the symbol is the name of the
table. When multiple tables are extracted in one operation,
multiple sets of symbols are created. The value for the
symbols RMU$UNLOAD_MODIFY_COUNT_tablename and RMU$UNLOAD_
DELETE_COUNT_tablename is a character string containing
the number of records returned for modified and deleted
rows. The RMU$UNLOAD_OUTPUT_tablename symbol is a character
string indicating the full file specification for the output
destination, or the shareable image name and routine name when
the output destination is an application callback routine.
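For example, a hypothetical command procedure fragment might
test these symbols after the unload completes (the symbol values
are character strings):
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
      /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)
$ SHOW SYMBOL RMU$UNLOAD_MODIFY_COUNT_EMPLOYEES
$ IF RMU$UNLOAD_DELETE_COUNT_EMPLOYEES .NES. "0" THEN -
      WRITE SYS$OUTPUT "Deletions were extracted"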
o When you use the Callback_Module and Callback_Routine option,
you must supply a shareable image with a universal symbol or
entry point for the LogMiner process to be able to call your
routine. See the OpenVMS documentation discussing the Linker
utility for more information about creating shareable images.
o Your Callback_Routine is called once for each output record.
The Callback_Routine is passed two parameters:
- The length of the output record, by longword value
- A pointer to the record buffer
The record buffer is a data structure of the same fields and
lengths written to an output destination.
o Because the Oracle RMU image is installed as a known image,
your shareable image must also be a known image. Use the
OpenVMS Install Utility to make your shareable image known.
You may wish to establish an exit handler to perform any
required cleanup processing at the end of the extraction.
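For example, a hypothetical Install utility command to make the
shareable image known (the qualifiers needed can vary by
configuration):
$ INSTALL ADD DISK1:[IMAGES]EXAMPLE.EXE /OPEN /SHARED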
o Segmented string data (BLOB) cannot be extracted using the
LogMiner process. Because the segmented string data is
related to the base table row by means of a database key,
there is no convenient way to determine what data to extract.
Additionally, the data type of an extracted column is changed
from LIST OF BYTE VARYING to BIGINT. This column contains
the DBKEY of the original BLOB data. Therefore, the contents
of this column should be considered unreliable. However, the
field definition itself is extracted as a quadword integer
representing the database key of the original segmented string
data. In generated table definition or record definition
files, a comment is added indicating that the segmented string
data type is not supported by the LogMiner for Rdb feature.
o Records removed from tables using the SQL TRUNCATE TABLE
statement are not extracted. The SQL TRUNCATE TABLE statement
does not journal each individual data record being removed
from the database.
o Records removed from tables using the SQL ALTER DATABASE
command with the DROP STORAGE AREA clause and CASCADE keyword
are not extracted. Any data deleted by this process is not
journalled.
o Records removed by dropping tables using the SQL DROP TABLE
statement are not extracted. The SQL DROP TABLE statement does
not journal each individual data record being removed from the
database.
o When the RDMS$CREATE_LAREA_NOLOGGING logical is defined, DML
operations are not available for extraction between the time
the table is created and when the transaction is committed.
o Tables that use the vertical record partitioning (VRP) feature
cannot be extracted using the LogMiner feature. LogMiner
software currently does not detect these tables. A future
release of Oracle Rdb will detect and reject access to
vertically partitioned tables.
o In binary format output, VARCHAR fields are not padded with
spaces in the output file. The VARCHAR data type is extracted
as a 2-byte count field and a fixed-length data field. The 2-
byte count field indicates the number of valid characters in
the fixed-length data field. Any additional contents in the
data field are unpredictable.
o You cannot extract changes to a table when the table
definition is changed within an after-image journal file.
Data definition language (DDL) changes to a table are not
allowed within an .aij file being extracted. All records in an
.aij file must be the current record version. If you are going
to perform DDL operations on tables that you wish to extract
using the LogMiner for Rdb, you should:
1. Back up your after-image journal files.
2. Extract the .aij files using the RMU Unload After_Journal
command.
3. Make the DDL changes.
o Do not use the OpenVMS Alpha High Performance Sort/Merge
utility (selected by defining the logical name SORTSHR
to SYS$SHARE:HYPERSORT) when using the LogMiner feature.
HYPERSORT supports only a subset of the library sort routines
that LogMiner requires. Make sure that the SORTSHR logical
name is not defined to HYPERSORT.
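For example, the following commands check for and remove a
process-level SORTSHR definition (a definition in a higher
logical name table requires the corresponding DEASSIGN
qualifier):
$ SHOW LOGICAL SORTSHR
$ DEASSIGN SORTSHR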
o The metadata information file used by the RMU Unload After_
Journal command is in an internal binary format. The contents
and format are not documented and are not directly accessible
by other utilities. The content and format of the metadata
information file is specific to a version of the RMU Unload
After_Journal utility. As new versions and updates of Oracle
Rdb are released, you will probably have to re-create the
metadata information file. The same version of Oracle Rdb must
be used to both write and read a metadata information file.
The RMU Unload After_Journal command verifies the format and
version of the metadata information file and issues an error
message in the case of a version mismatch.
o For debugging purposes, you can format and display the
contents of a metadata information file by using the
Options=Dump qualifier with the Restore_Metadata qualifier.
This dump may be helpful to Oracle Support engineers during
problem analysis. The contents and format of the metadata
information file are subject to change.
o If you use both the Output and Statistics_Interval qualifiers,
the output stream used for the log, trace, and statistics
information is flushed to disk (via the RMS $FLUSH service) at
each statistics interval. This makes sure that an output file
of trace and log information is written to disk periodically.
o You can specify input backup after-image journal files along
with the Continuous qualifier from the command line. The
specified after-image journal backup files are processed in
an offline mode. Once they have been processed, the RMU Unload
After_Journal command switches to "online" mode and the active
online journals are processed.
o When no input after-image journal files are specified on the
command line, the Continuous LogMiner starts extracting at the
beginning of the earliest modified online after-image journal
file. The Restart= qualifier can be used to control the first
transaction to be extracted.
o The Continuous LogMiner requires fixed-size circular after-
image journals.
o An after-image journal file cannot be backed up if there
are any Continuous LogMiner checkpoints in the .aij file.
The Continuous LogMiner moves its checkpoint to the physical
end-of-file for the online .aij file that it is extracting.
o In order to ensure that all records have been written by all
database users, a Continuous LogMiner process does not switch
to the next live journal file until that file has been written
to by another process. Live journals SHOULD NOT be backed up
while the Continuous LogMiner process is processing a list of
.aij backup files. This is an unsupported activity and could
lead to the LogMiner losing data.
o If backed up after-image journal files are specified on the
command line and the Continuous qualifier is specified, the
journal sequence numbers must ascend directly from the backed
up journal files to the online journal files.
In order to preserve the after-image journal file sequencing
as processed by the RMU Unload After_Journal /Continuous
command, it is important that no after-image journal backup
operations are attempted between the start of the command and
when the Continuous LogMiner process reaches the live online
after-image journals.
o You can run multiple Continuous LogMiner processes at one
time on a database. Each Continuous LogMiner process acts
independently.
o The Continuous LogMiner reads the live after-image journal
file just behind writers to the journal. This will likely
increase the I/O load on the disk devices where the journals
are located. The Continuous LogMiner attempts to minimize
unneeded journal I/O by checking a "High Water Mark" lock to
determine if the journal has been written to and where the
highest written block location is located.
6 – Examples
Example 1
The following example unloads the EMPLOYEES table from the .aij
backup file MFP.AIJBCK.
$ RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
/TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)
Example 2
The following example simultaneously unloads the SALES,
STOCK, SHIPPING, and ORDERS tables from the .aij backup files
MFS.AIJBCK_1-JUL-1999 through MFS.AIJBCK_3-JUL-1999. Note that
the input .aij backup files are processed sequentially in the
order specified.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
MFS.AIJBCK_1-JUL-1999, -
MFS.AIJBCK_2-JUL-1999, -
MFS.AIJBCK_3-JUL-1999 -
/TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
/TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT) -
/TABLE = (NAME = SHIPPING, OUTPUT = SHIPPING.DAT) -
/TABLE = (NAME = ORDERS, OUTPUT = ORDERS.DAT)
Example 3
Use the Before and Since qualifiers to unload data based on a
time range. The following example extracts changes made to the
PLANETS table by transactions that committed between 1-SEP-1999
at 14:30 and 3-SEP-1999 at 16:00.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB MFS.AIJBCK -
/TABLE = (NAME = PLANETS, OUTPUT = PLANETS.DAT) -
/BEFORE = "3-SEP-1999 16:00:00.00" -
/SINCE = "1-SEP-1999 14:30:00.00"
Example 4
The following example simultaneously unloads the SALES and
STOCK tables from all .aij backup files that match the wildcard
specification MFS.AIJBCK_1999-07-*. The input .aij backup files
are processed sequentially in the order returned from the file
system.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
MFS.AIJBCK_1999-07-* -
/TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
/TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT)
Example 5
The following example unloads the TICKER table from the .aij
backup files listed in the file called AIJ_BACKUP_FILES.DAT
(note the double quotation marks surrounding the at (@) character
and the file specification). The input .aij backup files are
processed sequentially. The output records are written to the
mailbox device called MBA127:. A separate program is already
running on the system, and it reads and processes the data
written to the mailbox.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
"@AIJ_BACKUP_FILES.DAT" -
/TABLE = (NAME = TICKER, OUTPUT = MBA127:)
Example 6
You can use the RMU Unload After_Journal command followed by RMU
Load commands to move transaction data from one database into
a change table in another database. You must create a record
definition (.rrd) file for each table being loaded into the
target database. The record definition files can be created by
specifying the Record_Definition option on the Table qualifier.
$ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
/TABLE = ( NAME = MYTBL, -
OUTPUT = MYTBL.DAT, -
RECORD_DEFINITION=MYLOGTBL) -
/TABLE = ( NAME = SALE, -
OUTPUT=SALE.DAT, -
RECORD_DEFINITION=SALELOGTBL)
$ RMU /LOAD WAREHOUSE.RDB MYLOGTBL MYTBL.DAT -
/RECORD_DEFINITION = FILE = MYLOGTBL.RRD
$ RMU /LOAD WAREHOUSE.RDB SALELOGTBL SALE.DAT -
/RECORD_DEFINITION = FILE = SALELOGTBL.RRD
Example 7
You can use an RMS file containing the record structure
definition for the output file as an input file to the RMU Load
command. The record description uses the CDO record and field
definition format. This is the same format used by the RMU Load
and RMU Unload commands when the Record_Definition qualifier is
used. The default file extension is .rrd.
The record definitions for the fields that the LogMiner process
writes to the output .rrd file are shown below. These fields can
be manually appended to a record definition file for the actual
user data fields being unloaded. The file can be used to load a
transaction table within a database. A transaction table is a
table that holds the sequence of transactional changes extracted
from a database by the LogMiner process.
DEFINE FIELD RDB$LM_ACTION DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD RDB$LM_RELATION_NAME DATATYPE IS TEXT SIZE IS 31.
DEFINE FIELD RDB$LM_RECORD_TYPE DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD RDB$LM_DATA_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_NBV_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_DBK DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_START_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_COMMIT_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_TSN DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_RECORD_VERSION DATATYPE IS SIGNED WORD.
Example 8
Instead of using the Table qualifier, you can use an Options file
to specify the table or tables to be extracted, as shown in the
following example.
$ TYPE TABLES.OPTIONS
TABLE=MYTBL, OUTPUT=MYTBL.DAT
TABLE=SALES, OUTPUT=SALES.DAT
$ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
/OPTIONS = FILE = TABLES.OPTIONS
Example 9
The following example unloads the EMPLOYEES table from the live
database and writes all change records to the MBA145 device. A
separate program is presumed to be reading the mailbox at all
times and processing the records.
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
/TABLE = (NAME = EMPLOYEES, OUTPUT = MBA145:)
Example 10
This example demonstrates unloading three tables (EMPLOYEES,
SALES, and CUSTOMERS) to a single mailbox. Even though the
mailbox is not a file-oriented device, the same file name is
specified for each. This is required because the LogMiner process
defaults the file name to the table name. If the same file name
is not explicitly specified for each output stream destination,
the LogMiner process assigns one mailbox channel for each table.
When the file name is the same for all tables, the LogMiner
process detects this and assigns only a single channel for all
input tables.
$ DEFINE MBX$ LOADER_MBX:X
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
/TABLE = (NAME = EMPLOYEES, OUTPUT = MBX$:) -
/TABLE = (NAME = SALES, OUTPUT = MBX$:) -
/TABLE = (NAME = CUSTOMERS, OUTPUT = MBX$:)
Example 11
In order to include transaction commit information, the
/Include=Action=Commit qualifier is specified in this example.
Additionally, the EMPLOYEES and SALES tables are extracted to two
different mailbox devices (read by separate processes). A commit
record is written to each mailbox after all changed records for
each transaction have been extracted.
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
/INCLUDE = ACTION = COMMIT -
/TABLE = (NAME = EMPLOYEES, OUTPUT = LOADER_EMP_MBX:X) -
/TABLE = (NAME = SALES, OUTPUT = LOADER_SAL_MBX:X)
Example 12
In this example, multiple input backup after-image journal
files are supplied. The Order_AIJ_Files qualifier specifies
that the .aij files are to be processed in ascending order of
.aij sequence number (regardless of file name). Prior to the
extraction operation, each input file is opened and the .aij Open
record is read. The .aij files are then opened and extracted, one
at a time, by ascending .aij sequence number.
$ RMU /UNLOAD /AFTER_JOURNAL /LOG /ORDER_AIJ_FILES -
MFP.RDB *.AIJBCK -
/TABLE = (NAME = C1, OUTPUT=C1.DAT)
%RMU-I-UNLAIJFL, Unloading table C1 to DGA0:[DB]C1.DAT;1
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "5"
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "4"
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "6"
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "4"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 5
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "5"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 6
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "6"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7
%RMU-I-LOGSUMMARY, total 7 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
---------------------------------------------------------------------
ELAPSED: 0 00:00:00.15 CPU: 0:00:00.08 BUFIO: 62 DIRIO: 19 FAULTS: 73
Table "C1" : 3 records written (3 modify, 0 delete)
Total : 3 records written (3 modify, 0 delete)
Example 13
The SQL record definitions for the fields that the LogMiner
process writes to the output are shown in the following
example. These fields can be manually appended to the table
creation command for the actual user data fields being unloaded.
Alternately, the Table_Definition qualifier can be used with the
Table qualifier or within an Options file to automatically create
the SQL definition file. This can be used to create a transaction
table of changed data.
SQL> CREATE TABLE MYLOGTABLE (
cont> RDB$LM_ACTION CHAR,
cont> RDB$LM_RELATION_NAME CHAR (31),
cont> RDB$LM_RECORD_TYPE INTEGER,
cont> RDB$LM_DATA_LEN SMALLINT,
cont> RDB$LM_NBV_LEN SMALLINT,
cont> RDB$LM_DBK BIGINT,
cont> RDB$LM_START_TAD DATE VMS,
cont> RDB$LM_COMMIT_TAD DATE VMS,
cont> RDB$LM_TSN BIGINT,
cont> RDB$LM_RECORD_VERSION SMALLINT ...);
Example 14
The following example is the transaction table record definition
(.rrd) file for the EMPLOYEES table from the PERSONNEL database:
DEFINE FIELD RDB$LM_ACTION DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD RDB$LM_RELATION_NAME DATATYPE IS TEXT SIZE IS 31.
DEFINE FIELD RDB$LM_RECORD_TYPE DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD RDB$LM_DATA_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_NBV_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_DBK DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_START_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_COMMIT_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_TSN DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_RECORD_VERSION DATATYPE IS SIGNED WORD.
DEFINE FIELD EMPLOYEE_ID DATATYPE IS TEXT SIZE IS 5.
DEFINE FIELD LAST_NAME DATATYPE IS TEXT SIZE IS 14.
DEFINE FIELD FIRST_NAME DATATYPE IS TEXT SIZE IS 10.
DEFINE FIELD MIDDLE_INITIAL DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD ADDRESS_DATA_1 DATATYPE IS TEXT SIZE IS 25.
DEFINE FIELD ADDRESS_DATA_2 DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD CITY DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD STATE DATATYPE IS TEXT SIZE IS 2.
DEFINE FIELD POSTAL_CODE DATATYPE IS TEXT SIZE IS 5.
DEFINE FIELD SEX DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD BIRTHDAY DATATYPE IS DATE.
DEFINE FIELD STATUS_CODE DATATYPE IS TEXT SIZE IS 1.
DEFINE RECORD EMPLOYEES.
RDB$LM_ACTION .
RDB$LM_RELATION_NAME .
RDB$LM_RECORD_TYPE .
RDB$LM_DATA_LEN .
RDB$LM_NBV_LEN .
RDB$LM_DBK .
RDB$LM_START_TAD .
RDB$LM_COMMIT_TAD .
RDB$LM_TSN .
RDB$LM_RECORD_VERSION .
EMPLOYEE_ID .
LAST_NAME .
FIRST_NAME .
MIDDLE_INITIAL .
ADDRESS_DATA_1 .
ADDRESS_DATA_2 .
CITY .
STATE .
POSTAL_CODE .
SEX .
BIRTHDAY .
STATUS_CODE .
END EMPLOYEES RECORD.
Example 15
The following C source code segment demonstrates the structure
of a module that can be used as a callback module and routine
to process employee transaction information from the LogMiner
process. The routine, Employees_Callback, would be called by the
LogMiner process for each extracted record. The final time the
callback routine is called, the RDB$LM_ACTION field will be set
to "E" to indicate the end of the output stream.
#include <stdio.h>
/* 8-byte OpenVMS binary types passed through by the LogMiner process */
typedef unsigned char date_type[8];
typedef unsigned char dbkey_type[8];
typedef unsigned char tsn_type[8];
typedef struct {
    unsigned char rdb$lm_action;          /* "M", "D", or "E" */
    char rdb$lm_relation_name[31];
    unsigned int rdb$lm_record_type;
    unsigned short int rdb$lm_data_len;
    unsigned short int rdb$lm_nbv_len;
    dbkey_type rdb$lm_dbk;
    date_type rdb$lm_start_tad;
    date_type rdb$lm_commit_tad;
    tsn_type rdb$lm_tsn;
    unsigned short int rdb$lm_record_version;
    /* User data fields from the EMPLOYEES table */
    char employee_id[5];
    char last_name[14];
    char first_name[10];
    char middle_initial[1];
    char address_data_1[25];
    char address_data_2[20];
    char city[20];
    char state[2];
    char postal_code[5];
    char sex[1];
    date_type birthday;
    char status_code[1];
} transaction_data;
void employees_callback (unsigned int data_len,
                         transaction_data data_buf)
{
    .
    .
    .
    return;
}
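The elided body is where the application logic goes. The
following minimal sketch, which assumes the record layout above,
dispatches on the RDB$LM_ACTION value; the printf calls are
purely illustrative:
void employees_callback (unsigned int data_len,
                         transaction_data data_buf)
{
    switch (data_buf.rdb$lm_action)
    {
    case 'M':  /* record was added or modified */
        printf ("Modify: %.5s %.14s\n",
                data_buf.employee_id, data_buf.last_name);
        break;
    case 'D':  /* record was deleted */
        printf ("Delete: %.5s\n", data_buf.employee_id);
        break;
    case 'E':  /* end of the output stream */
        printf ("End of output stream\n");
        break;
    }
    return;
}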
Use the C compiler (either VAX C or DEC C) to compile the
callback module. When linking the module, the symbol EMPLOYEES_
CALLBACK must be externalized in the shareable image. Refer to
the OpenVMS Linker utility manual for more information about
creating shareable images.
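For example, the module might be compiled with a command such as
the following (the file name is illustrative):
$ CC EXAMPLE.C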
On OpenVMS Alpha systems, you can use a LINK command similar to
the following:
$ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
SYMBOL_VECTOR = (EMPLOYEES_CALLBACK = PROCEDURE)
<Ctrl/Z>
On OpenVMS VAX systems, you can use a LINK command similar to the
following:
$ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
UNIVERSAL = EMPLOYEES_CALLBACK
<Ctrl/Z>
Example 16
You can use triggers and a transaction table to construct a
method to replicate table data from one database to another
using RMU Unload After_Journal and RMU Load commands. This
data replication method is based on transactional changes
to the source table and requires no programming. Instead,
existing features of Oracle Rdb can be combined to provide this
functionality.
For this example, consider a simple customer information table
called CUST with a unique customer ID value, customer name,
address, and postal code. Changes to this table are to be
moved from an OLTP database to a reporting database system on
a periodic (perhaps nightly) basis.
First, in the reporting database, a customer table of the same
structure as the OLTP customer table is created. In this example,
this table is called RPT_CUST. It contains the same fields as the
OLTP customer table called CUST.
SQL> CREATE TABLE RPT_CUST (
cont> CUST_ID INTEGER,
cont> CUST_NAME CHAR (50),
cont> CUST_ADDRESS CHAR (50),
cont> CUST_POSTAL_CODE INTEGER);
Next, a temporary table is created in the reporting database for
the LogMiner-extracted transaction data from the CUST table. This
temporary table definition specifies ON COMMIT DELETE ROWS so
that data in the temporary table is deleted from memory at each
transaction commit. A temporary table is used because there is no
need to journal changes to the table.
SQL> CREATE GLOBAL TEMPORARY TABLE RDB_LM_RPT_CUST (
cont> RDB$LM_ACTION CHAR,
cont> RDB$LM_RELATION_NAME CHAR (31),
cont> RDB$LM_RECORD_TYPE INTEGER,
cont> RDB$LM_DATA_LEN SMALLINT,
cont> RDB$LM_NBV_LEN SMALLINT,
cont> RDB$LM_DBK BIGINT,
cont> RDB$LM_START_TAD DATE VMS,
cont> RDB$LM_COMMIT_TAD DATE VMS,
cont> RDB$LM_TSN BIGINT,
cont> RDB$LM_RECORD_VERSION SMALLINT,
cont> CUST_ID INTEGER,
cont> CUST_NAME CHAR (50),
cont> CUST_ADDRESS CHAR (50),
cont> CUST_POSTAL_CODE INTEGER) ON COMMIT DELETE ROWS;
For data to be populated in the RPT_CUST table in the reporting
database, a trigger is created for the RDB_LM_RPT_CUST
transaction table. This trigger is used to insert, update,
or delete rows in the RPT_CUST table based on the transaction
information from the OLTP database for the CUST table. The unique
CUST_ID field is used to determine if customer records are to be
modified or added.
SQL> CREATE TRIGGER RDB_LM_RPT_CUST_TRIG
cont> AFTER INSERT ON RDB_LM_RPT_CUST
cont>
cont> -- Modify an existing customer record
cont>
cont> WHEN (RDB$LM_ACTION = 'M' AND
cont> EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
cont> WHERE RPT_CUST.CUST_ID =
cont> RDB_LM_RPT_CUST.CUST_ID))
cont> (UPDATE RPT_CUST SET
cont> RPT_CUST.CUST_NAME = RDB_LM_RPT_CUST.CUST_NAME,
cont> RPT_CUST.CUST_ADDRESS =
cont> RDB_LM_RPT_CUST.CUST_ADDRESS,
cont> RPT_CUST.CUST_POSTAL_CODE =
cont> RDB_LM_RPT_CUST.CUST_POSTAL_CODE
cont> WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
cont> FOR EACH ROW
cont>
cont> -- Add a new customer record
cont>
cont> WHEN (RDB$LM_ACTION = 'M' AND NOT
cont> EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
cont> WHERE RPT_CUST.CUST_ID =
cont> RDB_LM_RPT_CUST.CUST_ID))
cont> (INSERT INTO RPT_CUST VALUES
cont> (RDB_LM_RPT_CUST.CUST_ID,
cont> RDB_LM_RPT_CUST.CUST_NAME,
cont> RDB_LM_RPT_CUST.CUST_ADDRESS,
cont> RDB_LM_RPT_CUST.CUST_POSTAL_CODE))
cont> FOR EACH ROW
cont>
cont> -- Delete an existing customer record
cont>
cont> WHEN (RDB$LM_ACTION = 'D')
cont> (DELETE FROM RPT_CUST
cont> WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
cont> FOR EACH ROW;
Within the trigger, the action to take (add, update, or delete a
customer record) is based on the RDB$LM_ACTION field (M for a
modified or added record, D for a deleted one) and on whether the
customer record already exists in the reporting database. For a
modification, if the customer record does not exist, it is added;
if it does exist, it is updated. For a deletion on the OLTP
database, the customer record is deleted from the reporting
database.
The RMU Load command is used to read the output from the LogMiner
process and load the data into the temporary table, where each
insert causes the trigger to execute. The Commit_Every qualifier
commits periodically so that the temporary table does not
accumulate customer records in memory; once the trigger has
executed for a row, that row is no longer needed.
$ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB OLTP.AIJBCK -
/TABLE = (NAME = CUST, -
OUTPUT = CUST.DAT, -
RECORD_DEFINITION = RDB_LM_RPT_CUST.RRD)
$ RMU /LOAD REPORT_DATABASE.RDB RDB_LM_RPT_CUST CUST.DAT -
/RECORD_DEFINITION = FILE = RDB_LM_RPT_CUST.RRD -
/COMMIT_EVERY = 1000
Example 17
The following example shows how to produce a control file that
can be used by SQL*Loader to load the extracted data into an
Oracle database.
$ RMU/UNLOAD/AFTER TEST_DB TEST_DB_AIJ1_BCK -
/FORMAT=TEXT -
/TABLE=(NAME=TEST_TBL, -
OUTPUT=LOGMINER_TEXT.TXT, -
CONTROL=LOGMINER_CONTROL.CTL, -
TABLE_DEFINITION=TEST_TBL.SQL)
This example produces the following control file. The control
file is specific to a fixed-length record text file. NULLs are
handled by a NULLIF clause on each column definition that
references a corresponding null-byte filler column. There is a
null-byte filler column for each column in the underlying table,
but not for the LogMiner-specific columns at the beginning of
the record. If a column is NULL, the corresponding RDB$LM_NBn
filler column is set to 1. VARCHAR columns are padded with
blanks, but the blanks are ignored by default when the file is
loaded by SQL*Loader. If you wish to preserve the blanks, update
the control file to add the "PRESERVE BLANKS" clause.
-- Control file for LogMiner transaction data 25-AUG-2000 12:15:50.47
-- From database "TEST_DB", table "TEST_TBL"
LOAD DATA
INFILE 'DISK:[DIRECTORY]LOGMINER_TEXT.TXT;'
APPEND INTO TABLE 'RDB_LM_TEST_TBL'
(
RDB$LM_ACTION POSITION(1:1) CHAR,
RDB$LM_RELATION_NAME POSITION(2:32) CHAR,
RDB$LM_RECORD_TYPE POSITION(33:44) INTEGER EXTERNAL,
RDB$LM_DATA_LEN POSITION(45:50) INTEGER EXTERNAL,
RDB$LM_NBV_LEN POSITION(51:56) INTEGER EXTERNAL,
RDB$LM_DBK POSITION(57:76) INTEGER EXTERNAL,
RDB$LM_START_TAD POSITION(77:90) DATE "YYYYMMDDHHMISS",
RDB$LM_COMMIT_TAD POSITION(91:104) DATE "YYYYMMDDHHMISS",
RDB$LM_TSN POSITION(105:124) INTEGER EXTERNAL,
RDB$LM_RECORD_VERSION POSITION(125:130) INTEGER EXTERNAL,
TEST_COL POSITION(131:150) CHAR NULLIF RDB$LM_NB1 = 1,
RDB$LM_NB1 FILLER POSITION(151:151) INTEGER EXTERNAL
)
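Once the text file and control file have been transferred to the
target system, SQL*Loader can be invoked against them. The
following command line is a sketch; the user name, password, and
log file name are placeholders:
$ SQLLDR USERID=SCOTT/TIGER CONTROL=LOGMINER_CONTROL.CTL -
LOG=LOGMINER.LOG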
Example 18
The following example creates a metadata file for the database
MFP. This metadata file can be used as input to a later RMU
Unload After_Journal command.
$ RMU /UNLOAD /AFTER_JOURNAL MFP /SAVE_METADATA=MF_MFP.METADATA /LOG
%RMU-I-LMMFWRTCNT, Wrote 107 objects to metadata file
"DUA0:[DB]MFMFP.METADATA;1"
Example 19
This example uses a previously created metadata information file
for the database MFP. The database is not accessed during the
unload operation; the database metadata information is read from
the file instead. Because the extract operation no longer relies
directly on the source database, the .aij backup and metadata
files can be moved to another system and extracted there.
$ RMU /UNLOAD /AFTER_JOURNAL /RESTORE_METADATA=MF_MFP.METADATA -
MFP AIJ_BACKUP1 /TABLE=(NAME=TAB1, OUTPUT=TAB1) /LOG
%RMU-I-LMMFRDCNT, Read 107 objects from metadata file
"DUA0:[DB]MF_MFP.METADATA;1"
%RMU-I-UNLAIJFL, Unloading table TAB1 to DUA0:[DB]TAB1.DAT;1
%RMU-I-LOGOPNAIJ, opened journal file DUA0:[DB]AIJ_BACKUP1.AIJ;1
%RMU-I-AIJRSTSEQ, journal sequence number is "7216321"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7216322
%RMU-I-LOGSUMMARY, total 2 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
----------------------------------------------------------------------
ELAPSED: 0 00:00:00.15 CPU: 0:00:00.01 BUFIO: 11 DIRIO: 5 FAULTS: 28
Table "TAB1" : 1 record written (1 modify, 0 delete)
Total : 1 record written (1 modify, 0 delete)